The official reason they dropped it was security concerns. The more likely reason was the massive outcry that occurs when Apple does these questionable things. Crickets when it’s Google.
The feature was re-added as an optional child safety feature called “Communication Safety” on child accounts that automatically blocks nudity sent to children.
lepinkainen@lemmy.world 1 year ago
They were not “suspected”, they had to be matches to actual CSAM.
And after that a reduced-quality copy was shown to an actual human, not an AI like in Google’s case.
So a false positive would slightly inconvenience a human checker for 15 seconds, not get you swatted or your account closed.
Natanael@infosec.pub 1 year ago
Yeah, so here’s the next problem: downscaling attacks exist against those algorithms too.
scaling-attacks.net
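As a rough illustration of the idea (a minimal sketch with made-up toy images and naive nearest-neighbor resampling, not any vendor’s actual pipeline): a nearest-neighbor downscaler only ever reads a small, predictable subset of pixels, so an attacker can overwrite just those pixels with a hidden image while the full-size picture still looks almost entirely innocent.

```python
import numpy as np

def nearest_downscale(img, out):
    """Naive nearest-neighbor downscale of a square image to out x out."""
    n = img.shape[0]
    idx = np.arange(out) * n // out  # the only pixels the scaler ever reads
    return img[np.ix_(idx, idx)]

n, m = 64, 8
innocent = np.full((n, n), 200, dtype=np.uint8)  # toy "innocent" bright image
hidden = np.zeros((m, m), dtype=np.uint8)        # toy "hidden" dark image

# Attacker overwrites only the pixels the downscaler will sample
attack = innocent.copy()
idx = np.arange(m) * n // m
for r, sr in enumerate(idx):
    for c, sc in enumerate(idx):
        attack[sr, sc] = hidden[r, c]

# Only 64 of 4096 pixels (~1.5%) differ, so at full size it looks innocent...
changed = int(np.sum(attack != innocent))

# ...but the downscaled version is exactly the attacker-chosen hidden image
assert np.array_equal(nearest_downscale(attack, m), hidden)
```

Real scaling attacks target the interpolation weights of common library resamplers (bilinear, bicubic) rather than this toy sampler, but the mechanism is the same: control the few pixels that dominate the scaled output.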
lepinkainen@lemmy.world 1 year ago
And you’ll again inconvenience a human slightly as they look at a pixelated copy of a picture of a cat or some noise.
No cops are called, no accounts are closed.
Natanael@infosec.pub 1 year ago
The scaling attack specifically can make a photo sent to you look innocent to you but malicious to the reviewer; see the link above.