Comment on AI-generated child sex imagery has every US attorney general calling for action
ram@lemmy.ca 1 year ago
I can’t say I entirely agree. I do think that they should be helped, but in a measured and rigorous way. None of this “let them find shit online that quells their needs”. Pedophilia, in the psychological profession, is viewed in a similar light to sexual orientations; on that point the person I’m responding to is correct. It’s simply that they seem blind to any nuance beyond that stance; that’s where they’re stuck.
Simply accepting AI-generated pedophilic material is a very risky move when we don’t even have data on how consuming real or virtual CSAM affects those who indulge in it, and getting that data would, as far as I can tell, require deeply unethical and likely illegal research. The approach Kerfuffle@shi.tjust.works is suggesting is naive and myopic in the most generous light, which is how I’m choosing to take it so as not to accuse them of something they may not be guilty of.
I’m also someone who’s extremely progressive, and while I can sympathize with people who have these urges and no true wish to act on them, I think it’s outright malicious to say that the solution is to simply allow them to exist with informal self-treatments based on online “common sense” idealism. Mental health support should absolutely be available and encouraged; part of that is making sure people are safe to disclose this stuff to medical professionals, but no part of that is just having this shit freely spread online.
I appreciate your measured and metered response. I think these are extremely tricky conversations to have, but important, especially with how technology is progressing.
mean_bean279@lemmy.world 1 year ago
The problem is that the technology is progressing so rapidly, without any checks or balances, that for the time being our response should be one that enables further research without allowing others to create this material. This isn’t to say we should stop advancement, but we should be taking measured responses and using the input of psychologists to better understand the repercussions. It’s the same as if someone could use AI to generate gore videos of themselves killing someone they’ve always wanted to kill. It’s something that needs to be evaluated before we just release this stuff into the world, especially before the technology gets even better and more realistic. That blurring of fiction and reality could be a path we as a society are not prepared for.
My worry is that people with backgrounds in computers are making decisions around things that impact human brains.
ram@lemmy.ca 1 year ago
I agree completely. Unfortunately, techbros have been making important, world-changing decisions for two decades now, and our legislators seem mostly content to let them continue unabated.