Comment on “I just developed and deployed the first real-time protection for Lemmy against CSAM!”
joshuaacasey@lemmy.world 1 year ago
Disappointed that this uses AI instead of something like Microsoft’s PhotoDNA, which compares image hashes. AI has too much (unnecessary & unacceptable) risk of false positives, which results in overbroad censorship.
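The hash-comparison approach PhotoDNA takes can be sketched with an open perceptual hash. Below is a minimal illustration using the open-source imagehash library; the pHash algorithm, the file names, and the distance threshold are stand-ins, since PhotoDNA’s actual hash function and operating thresholds are proprietary:

```python
# Sketch of perceptual-hash matching, the general technique PhotoDNA builds on.
# Requires: pip install imagehash pillow
# PhotoDNA's real hash function and thresholds are proprietary and differ.
from PIL import Image
import imagehash

# Hashes of known-bad images, as they would come from a curated database.
known_hashes = [imagehash.phash(Image.open(p)) for p in ["known1.png", "known2.png"]]

def matches_known(path: str, max_distance: int = 8) -> bool:
    """Return True if the image's perceptual hash is within max_distance
    bits (Hamming distance) of any known hash. The threshold here is an
    illustrative guess, not a vetted operating point."""
    h = imagehash.phash(Image.open(path))
    return any(h - known <= max_distance for known in known_hashes)

print(matches_known("upload.jpg"))
```

This is why the false-positive profiles differ: hash matching only flags near-duplicates of images already in a curated database, whereas a classifier makes a judgment call on every new image.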
Microsoft’s PhotoDNA
My issue with these services is that they aren’t available for non-US people. db0’s project can be deployed anywhere (provided you have a capable GPU).
OK, then use Cloudflare’s.
That also isn’t available for non-US people.
It’s available to every Cloudflare user, US-based or not.
PhotoDNA is very proprietary…
I don’t see the problem here. What makes you think the false positives in this case are “unacceptable”? So what if Joe Bloggs isn’t able to share a picture of a random kid (why, though?) or an image of a child-like person?
False positives not only lead to unnecessary censorship; they also waste resources that would be better used to protect *ACTUAL* victims and children (although the optimal solution is protecting people before any harm is done, so that we don’t even need these “band-aid” solutions for reacting afterward).
Unnecessary censorship is fine when it’s clearly an underage person. You don’t need to check their ID to tell it’s CSAM, and you don’t need to with generated child material either. If you want to debate its legality, that’s a different conversation, but even an AI-generated version is enough to mentally scar the viewer, so there is still harm being done.
An imaginary person has no ID, because an imaginary person doesn’t exist. Why do you care about protecting imaginary characters instead of protecting actual, real, existing human beings?
Again, what you’re saying isn’t relevant to Lemmy at all. Please elaborate: how would a graphics card on some random server help protect actual victims?
PhotoDNA isn’t run by Microsoft anymore, but by the National Center for Missing and Exploited Children.
My friend, you haven’t heard of Oracle.
Microsoft at least gave the world PowerShell, to balance out their sins. I can also name other good things they have done. Oracle is pure and deliberate evil.
I believe that the human race will end in one of three ways:
db0@lemmy.dbzer0.com 1 year ago
PhotoDNA requires a lot more bureaucratic work than most instance admins can handle, but if you really want it, you can easily plug it into pictrs-safety instead (a rough sketch of such a hook follows below).
However, PhotoDNA will not catch novel generative-AI CSAM, since it only matches hashes of already-known images.
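For anyone curious what “plugging it in” might look like: pict-rs can POST each upload to an external validation URL and reject it on a non-2xx response, which is the mechanism pictrs-safety builds on. A rough sketch of such a hook, where the /scan path and check_photodna() are illustrative placeholders rather than pictrs-safety’s or PhotoDNA’s actual API:

```python
# Hypothetical external-validation endpoint in the style pictrs-safety uses.
# pict-rs POSTs each upload here; a 2xx accepts it, anything else rejects it.
# check_photodna() is a placeholder, not a real PhotoDNA client.
from fastapi import FastAPI, Request, Response

app = FastAPI()

def check_photodna(image_bytes: bytes) -> bool:
    """Placeholder: submit the image to the PhotoDNA cloud service
    (which requires onboarding with Microsoft/NCMEC) and return True
    if it matches a known hash."""
    raise NotImplementedError

@app.post("/scan")
async def scan(request: Request) -> Response:
    body = await request.body()
    if check_photodna(body):
        return Response(status_code=400)  # match found: reject the upload
    return Response(status_code=200)      # no match: accept the upload
```

You would then point pict-rs’s external-validation setting at this endpoint; check the pict-rs docs for the exact configuration key.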
joshuaacasey@lemmy.world 1 year ago
There’s no such thing as “AI-generated CSAM”. CSAM is, by definition, created by abusing a real human child. There’s no such thing as an “AI child”. It would be a much better idea to protect *ACTUAL* existing children instead of wasting resources on *checks notes* fiction.
db0@lemmy.dbzer0.com 1 year ago
You won’t know whether a photorealistic generative-AI image is real or not.
joshuaacasey@lemmy.world 1 year ago
So you admit that, like most people, you don’t actually give a shit about protecting anyone. You would rather protect imaginary, fictional characters because it’s easier and makes you “feel good about yourself”. I genuinely hate performative assholes (which is 99% of humans, let’s be honest; 99% of people only care about their feelings and making themselves feel good by thinking that they are doing something good, not actually doing a good thing).

There’s no evidence that fictional material is harmful; in fact, quite the opposite: there is some evidence that access to fictional material may actually protect kids and prevent abuse from occurring, by serving as a harmless sexual outlet. Let’s put it this way: go ask a victim of sexual abuse, “If you had a choice, would you prefer that your abuser abused you, or that your abuser relieved their pent-up sexual frustration on some fictional material?” I guarantee 100% of them would say they would prefer to have not been abused.
glue_snorter@lemmy.sdfeu.org 1 year ago
I think you were merely being pedantic, but there are some interesting points in there.
Is it a crime to generate fake “csam”?
Should it be a crime?
How can prosecutors get convictions against a defense of “no, your honour, that video is AI-generated”?
What we have now is still miles off general AI, but it’s going to take years for society to catch up. Interesting times.