I have 5 kids. I’m almost certain my photo library of 15 years has a few completely innocent pictures where a naked infant/toddler might be present. I do not have the time to search 10,000+ pics for material that could be taken completely out of context and reported to authorities without my knowledge.
Comment on Google’s ‘Secret’ Update Scans All Your Photos
lepinkainen@lemmy.world 4 days ago
This is EXACTLY what Apple tried to do with their on-device CSAM detection. It had a ridiculous amount of safeties to protect people’s privacy, and still it got shouted down.
I’m interested in seeing what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing
Ulrich@feddit.org 4 days ago
Google did end up doing exactly that, and what happened was, predictably, people were falsely accused of child abuse and CSAM.
Ledericas@lemm.ee 3 days ago
I’m not surprised if they’re also using AI, which is very error-prone.
Natanael@infosec.pub 4 days ago
Apple had it report suspected matches, rather than warning locally
Clent@lemmy.dbzer0.com 3 days ago
The official reason they dropped it was security concerns. The more likely reason was the massive outcry that occurs when Apple does these questionable things. Crickets when it’s Google.
The feature was re-added as an optional child safety feature called “Communication Safety” on child accounts, which automatically blocks nudity sent to children.
lepinkainen@lemmy.world 3 days ago
They were not “suspected”; they had to be matches to actual CSAM.
And after that, a reduced-quality copy was shown to an actual human, not an AI like in Google’s case.
So a false positive would slightly inconvenience a human checker for 15 seconds, not get you swatted or your account closed.
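For context, here’s a minimal sketch of the “match threshold, then human review” flow described above. It uses the open-source `imagehash` average hash as a stand-in for Apple’s NeuralHash, and the threshold and distance values are illustrative assumptions, not Apple’s actual implementation:

```python
# Illustrative sketch only: a generic perceptual-hash pipeline with a review
# threshold, loosely modeled on the flow described in the comment above.
# `imagehash` average hash stands in for NeuralHash; values are assumptions.
from PIL import Image
import imagehash

REVIEW_THRESHOLD = 30   # hypothetical: matches required before any human review
MATCH_DISTANCE = 4      # hypothetical: max Hamming distance to count as a "match"

def count_matches(photo_paths, known_hashes):
    """Count photos whose perceptual hash is close to a hash on a known list."""
    matches = 0
    for path in photo_paths:
        h = imagehash.average_hash(Image.open(path))
        # ImageHash subtraction returns the Hamming distance between hashes.
        if any(h - known <= MATCH_DISTANCE for known in known_hashes):
            matches += 1
    return matches

def should_escalate_to_human_review(photo_paths, known_hashes):
    # Nothing happens on a single hit; only past the threshold would a
    # reduced-quality copy go to a human reviewer.
    return count_matches(photo_paths, known_hashes) >= REVIEW_THRESHOLD
```

In Apple’s actual proposal the matching was wrapped in private set intersection and threshold secret sharing so the device couldn’t learn which photos matched; this sketch skips that cryptography and only shows the threshold-before-review idea.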
Natanael@infosec.pub 3 days ago
Yeah, so here’s the next problem: downscaling attacks exist against those algorithms too.
lepinkainen@lemmy.world 3 days ago
And you’ll again inconvenience a human slightly as they look at a pixelated copy of a picture of a cat or some noise.
No cops are called, no accounts closed
Modern_medicine_isnt@lemmy.world 4 days ago
Overall, I think this needs to be done by a neutral third party. I just have no idea how such a third party could stay neutral. Same with social media content moderation.
noxypaws@pawb.social 4 days ago
The hell it did, that shit was gonna snitch on its users to law enforcement.
lepinkainen@lemmy.world 3 days ago
Nope.
A human checker would get a reduced-quality copy only after multiple CSAM matches. Police would only be called if the human checker verified a positive match.
Your idea of flooding someone with fake matches that are actually cat pics wouldn’t have worked
noxypaws@pawb.social 3 days ago
That’s a fucking wiretap, yo