UK Expands Online Safety Act to Mandate Preemptive Scanning of Digital Communications
Submitted 2 months ago by themachinestops@lemmy.dbzer0.com to technology@lemmy.world
https://reclaimthenet.org/uk-expands-online-safety-act-to-mandate-preemptive-scanning
Comments
serpineslair@lemmy.world 2 months ago
That’s disgusting.
elvis_depresley@sh.itjust.works 2 months ago
Why is the government obsessed with scanning people’s privates?
halcyoncmdr@lemmy.world 2 months ago
Sex is a scapegoat for implementing surveillance and discrimination systems. The same as every “think of the children” law ever passed.
fort_burp@feddit.nl 2 months ago
Agree, and it’s a way for governments to look like they’re doing something while continuing to not do anything. Wanna stop non-consensual sex with minors? Release the Epstein files and prosecute the offenders to the full extent of the law. Don’t wanna stop non-consensual sex with minors? Do this. All the privacy and free speech violations are just “collateral damage” and “externalities” inflicted on people who don’t really matter… the 99%.
jasoman@lemmy.world 2 months ago
It’s not the goal, just a step toward the end goal.
UnGlasierteGurke@feddit.org 2 months ago
They should just do preemptive scanning of everything you see at this point. /s
unabart@sh.itjust.works 2 months ago
That’s pretty much the endgame here.
Skankhunt420@sh.itjust.works 2 months ago
I wonder how this will affect projects like SimpleX.
Frozentea725@feddit.uk 2 months ago
I’ve only seen this reported here. Is there an alternative news link?
fort_burp@feddit.nl 2 months ago
The government’s new Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025, which came into force on January 8, 2026, designates “cyberflashing” and “encouraging or assisting serious self-harm” as priority offences, the categories that trigger the strictest compliance duties under the OSA.
How are they going to prosecute Grok and ChatGPT? Just above in my feed is an article where ChatGPT suggested drug combinations that led a young man to accidentally kill himself with drugs.
Voroxpete@sh.itjust.works 2 months ago
Even if you can somehow get past the absolutely horrendous privacy implications, how the fuck is this even supposed to work? They want to prevent “digital flashing” (e.g., dick pics), but how is any system supposed to tell the difference between consensual and non-consensual content? What if someone wants to see a picture of someone’s dick? Even assuming you could create a computer model that accurately identifies a dick pic every single time (you can’t), it would also have to infer context to a level that would require effectively human-level intelligence, plus the ability to make judgements across the entirety of a person’s communications. This is so far beyond impossible, from a purely technical standpoint, that I cannot begin to imagine how it was ever allowed to become law.
jasoman@lemmy.world 2 months ago
Dick or hot dog.
kambusha@sh.itjust.works 2 months ago
www.youtube.com/watch?v=tWwCK95X6go
xthexder@l.sw0.com 2 months ago
I’d also argue a human monitoring your conversation would likely make similar mistakes in judgement about what’s happening, and this kind of invasion of privacy just isn’t okay in any form. There could be whole other conversations happening that they can’t see (like speaking IRL before sending a consensual picture).
Voroxpete@sh.itjust.works 2 months ago
I agree, but my point is that even in the minds of the people who think it’s OK to invade privacy like this, I still don’t see how this is supposed to produce useful results.
HurricaneLiz@lemmy.world 2 months ago
Yeah, r/cospenis might break an AI