Comment on Grok AI still being used to digitally undress women and children despite suspension pledge
Allero@lemmy.today 3 days ago
Honestly, I’d love to see more research on how AI CSAM consumption affects consumption of real CSAM and rates of sexual abuse.
Because if it does reduce them, it might make sense to intentionally use datasets already involved in previous police investigations as training data. But only if there’s a clear reduction.
(Police have already used some materials, with victims’ consent, to crack down on CSAM-sharing platforms in the past.)
We could feed the pedo AI more training data and make it even better!
No.
Why though? If it does reduce consumption of real CSAM (which is an “if”, as the stigma around the topic greatly hinders research), it’s a net win.
Or is it simply a matter of spite?
Because with harm reduction as the goal, the solution is never “give them more of the harmful thing.”
I’ll compare it to the problems of drug abuse. You don’t help someone with an addiction by giving them more drugs; you don’t help them by throwing them in jail just for having an addiction; you help them by making it safe and easy to get treatment for the addiction.
Look at what Portugal did in the early 2000s to help mitigate the problems associated with drug use, treating it as a health crisis rather than a criminal one.
You don’t arrest someone for being addicted to meth; you arrest them for stabbing someone and stealing their wallet to buy more meth. You don’t arrest someone just for being a pedophile; you arrest them for abusing children.
This means making AI better, more realistic, and at the same time more diverse.
No, it most certainly does not. AI is already being used to generate explicit images of actual children. Making it better at that task is the opposite of harm reduction; it makes creating new victims easier than ever.
Acting reasonably to prevent what we can prevent means shutting down the CSAM-generating bot, not optimizing and improving it.
To me, it’s more like the Netherlands handing out free syringes and needles so that drug users at least wouldn’t contract anything from used ones.
To be clear: granting any and all pedophiles access to therapy would be of tremendous help. I think it must be done. But there are two issues remaining:
The idea is that generating CSAM requires training data, and harm was done to children to produce that data in the first place. That’s why it’s bad.
That would be true if children were abused specifically to obtain the training data. But what I’m talking about is using data that already exists, taken from police investigations and other sources. Of course, it also requires the victims’ consent (once they are old enough), as not everyone will agree to have materials of their abuse proliferate in any way.
Police have already used CSAM, with victims’ consent, to better impersonate CSAM platform admins in investigative operations, leading to the arrest of more child abusers and people sharing the material.
The case with AI is milder, as it requires minimal human interaction, so no one would need to re-watch the materials as long as the victims are already identified. It would be enough for the police to contact the victims, get their agreement, and feed the data into the AI without releasing the source. With enough data, the AI could improve its image and video generation, drawing more viewers away from real CSAM and reducing rates of abuse.
That is, if it works this way. There’s a glaring research hole in this area, and I believe it is paramount to figure out whether it actually helps. Then we could decide whether to include already-produced CSAM in the training data, or whether adult data alone is sufficient to make the output good enough for the intended audience to switch.
You’re going to tell me there’s no corporation out there that would pay to improve its model with fresh data and not ask questions about where that data came from?
I think such matters should be kept strictly out of corporate hands.
NikkiDimes@lemmy.world 3 days ago
Or we could like…not