I remember reading that this may already be happening to some extent, e.g. people sharing tips on creating it on the deep web, maybe through prompt engineering, fine-tuning, or pretraining.
I don’t know how those models are made, but I do wonder whether the ones that need retraining/fine-tuning on real CSAM could be classified as breaking the law.
Knusper@feddit.de 1 year ago
Well, to develop such a service, you need training data, i.e. lots of real child pornography in your possession.
Legality for your viewers will also differ massively around the world, so your target audience may not be very big.
And you probably need investors, who likely have less risky projects to invest in.
Well, and then there’s also the factor of some humans just not wanting to work on disgusting, legal grey area stuff.
Womble@lemmy.world 1 year ago
yup, just like the AI needed lots of pictures of astronauts on horses to make pictures of those…
JonEFive@midwest.social 1 year ago
Exactly. Some of these engines are perfectly capable of combining distinct concepts. In your example, it knows roughly what a horse looks like and what a human riding on horseback looks like. It also knows that an astronaut is essentially a human in a space suit, so it can put the two together.
Saying nothing of the morality, in this case I suspect that an AI could be trained using pictures of clothed children, perhaps combined with nude images of people who are of age but very slim or otherwise youthful in appearance.
While I think it’s repugnant in concept, for those seeking this material I’d much rather it be AI-generated than involve an actual exploited child. Realistically, though, I doubt this would have any notable impact on the prevalence of CSAM, and it might even make such material more accessible.
Furthermore, if generative AI gets good enough, it could become difficult to determine whether an image is real or AI-generated. That would make it harder for police to find the child and offender and remove them from that situation. So now we need an AI to help analyze and separate the two.
Yeah… I don’t like living in 2023 and things are only getting worse. I’ve put way more thought into this than I ever wanted to.
Ryantific_theory@lemmy.world 1 year ago
Aren’t AI-generated images fairly easy to detect with noise analysis? I know there’s no effective detection for AI-generated text, and no doubt there will be projects to train AI to generate perfectly realistic images, but it’ll be a while before it gets fingers right, let alone invisible pixel artifacts.
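For what it’s worth, the noise-analysis idea is a real research direction: generator upsampling layers tend to leave spectral fingerprints that natural photos lack. Here’s a minimal, illustrative Python sketch of that kind of frequency-domain check; the `hf_threshold` value is made up for the example, and a practical detector would train a classifier on features like these rather than use a fixed cutoff.

```python
# Illustrative sketch of a frequency-domain check for generator artifacts,
# in the spirit of published work on spectral fingerprints of upsampling.
# The threshold below is a placeholder, not a real tuned classifier.
import numpy as np
from PIL import Image

def azimuthal_power_spectrum(path: str, size: int = 256) -> np.ndarray:
    """Return the radially averaged log power spectrum of an image."""
    img = Image.open(path).convert("L").resize((size, size))
    f = np.fft.fftshift(np.fft.fft2(np.asarray(img, dtype=np.float64)))
    power = np.log1p(np.abs(f) ** 2)

    # Average power over rings of constant distance from the spectrum center.
    y, x = np.indices(power.shape)
    r = np.hypot(x - size / 2, y - size / 2).astype(int)
    radial_sum = np.bincount(r.ravel(), weights=power.ravel())
    radial_count = np.bincount(r.ravel())
    return radial_sum / np.maximum(radial_count, 1)

def looks_generated(path: str, hf_threshold: float = 0.55) -> bool:
    """Flag images with anomalously flat/high high-frequency energy.
    hf_threshold is an illustrative value, not a calibrated one."""
    spectrum = azimuthal_power_spectrum(path)
    half = len(spectrum) // 2
    hf_ratio = spectrum[half:].mean() / spectrum.mean()
    return hf_ratio > hf_threshold
```

That said, these fingerprints are model-specific and tend to erode under recompression and resizing, which is part of why detection keeps turning into an arms race.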
As a counterpoint, won’t the prevalence of AI-generated CSAM collapse the organized abuse groups, since they rely on funding from pedophiles? If genuine abuse material is swamped out by AI-generated imagery, that would effectively collapse an entire dark web market. It wouldn’t end abuse, but it would at least undercut the financial motive, which is progress.
That’s pretty good for 2023.
d13@programming.dev 1 year ago
Unfortunately, no. You just need training data on children in general and training data from legal porn, and these tools can combine the two.
It’s already being done, which is disgusting but not surprising.
People have worried about this for a long time. I remember a subplot of a sci-fi series that got into this. (I think it was The Lost Fleet, 15 years ago).