Comment on I just developed and deployed the first real-time protection for lemmy against CSAM!
xeddyx@lemmy.nz 1 year ago
I don’t see the problem here. What makes you think that the false positives in this case are “unacceptable”? So what if Joe Bloggs isn’t able to share a picture of a random kid (why, though?) or an image of a child-like person?
joshuaacasey@lemmy.world 1 year ago
False positives not only lead to unnecessary censorship, they also waste resources that would be better used to protect *actual* victims and children (although the optimal solution is protecting people before any harm is done, so that we don’t even need these “band-aid” solutions for reacting afterward).
mojo@lemm.ee 1 year ago
Unnecessary censorship is fine when the image clearly depicts an underage person. You don’t need to check an ID to tell that it’s CSAM, and the same goes for AI-generated material involving children. If you want to debate its legality, that’s a different conversation, but even an AI-generated version is enough to mentally scar the viewer, so harm is still being done.
joshuaacasey@lemmy.world 1 year ago
An imaginary person has no ID, because an imaginary person doesn’t exist. Why do you care about protecting imaginary characters instead of protecting actual, real, existing human beings?
xeddyx@lemmy.nz 1 year ago
Again, what you’re saying isn’t relevant to Lemmy at all. Please explain how a graphics card on some random server would help protect actual victims.