Comment on Child Safety on Federated Social Media

blazera@kbin.social ⁨1⁩ ⁨year⁩ ago

It says 112 instances of known CSAM. But that's based on their methodology, right? And their methodology isn't actually looking at the content; it's looking at hashtags and whether Google SafeSearch thinks the media is explicit. Which I'm pretty sure doesn't differentiate what the subject of the explicitness is. It's just going to try to detect breasts or genitals, I imagine.

Though they do give a few damning examples of things like actual CP trading, they also note that those have since been removed.

source