Comment on Instagram Is Full Of Openly Available AI-Generated Child Abuse Content.
Cryophilia@lemmy.world 5 weeks ago
If a child is not being harmed, I truly do not give a shit.
bigfondue@lemmy.world 5 weeks ago
How do you think they train the models?
Cryophilia@lemmy.world 5 weeks ago
With a set of all images on the internet. Why do you people always think this is a “gotcha”?
unconfirmedsourcesDOTgov@lemmy.sdf.org 5 weeks ago
I’ve been assuming it’s because they truly have no idea how this tech works
surewhynotlem@lemmy.world 5 weeks ago
Hey.
I’ve been in tech for 20 years. I know Python, Java, and C#. I’ve worked with TensorFlow and language models. I understand this stuff.
You absolutely could train an AI on safe material to do what you’re saying.
Stable Diffusion and OpenAI have not guaranteed that they trained their models on safe material.
It’s like going to buy a burger, and the restaurant says “We can’t guarantee there’s no human meat in here”. At best it’s lazy. At worst it’s abusive.
RowRowRowYourBot@sh.itjust.works 5 weeks ago
What if it features real kids’ faces?
boreengreen@lemm.ee 5 weeks ago
Then it is harming someone.
AutomaticButt@lemm.ee 5 weeks ago
The most compelling argument I’ve heard against AI-generated child porn is that it normalizes the material and makes it harder to tell whether an image is real or AI-generated. That lets actual children get hurt when abuse goes unreported or gets skimmed over because someone assumed it was AI.
yyprum@lemmy.dbzer0.com 5 weeks ago
As a counterpoint, the fact that those AI images are so quick and easy to get, compared to the risk and extra effort of doing it for real, could make actual child abuse less common and less profitable for mafias and assholes in general. It’s a really complex topic that no simple, straight answer will solve.
Normalising it would be horrible and should be avoided, but there will always be some number of people looking for that content. I’d rather have them using AI to create it than going out searching for real content. Prosecuting the AI content is not only very inefficient, it might also be harmful: the only content left would be the real kind, and the people who make that are much harder to catch.
joshchandra@midwest.social 5 weeks ago
A rebuttal to this that I’ve read is that the easy access may encourage people to dig into it and eventually want “the real thing”… but regardless, with it being FOSS, there’s no easy way to stop it anyway… It’s just a Pandora’s box that we can never close.
yyprum@lemmy.dbzer0.com 5 weeks ago
And I could rebut that: if someone is interested enough to check it out with AI, they were likely to try to find it anyway without AI. Maybe it would take longer and be harder to find, but they’re exactly the intended audience that is now being redirected elsewhere.
To quote myself:
We could rebut again and again and get nowhere, because either option is hard to discuss when it is simply impossible to get proper data to prove anything. Worse, defending the use of AI for this can get you accused of enabling it in the first place. And that’s not even counting how many people still believe AI needs real sample images to produce this content (whether the algorithm was trained on CP or not is irrelevant on this particular point, as such images are not needed for it to be created).
Cryophilia@lemmy.world 5 weeks ago
And we have absolutely no data to suggest that’s happening. It’s purely hypothetical.