If AI didn’t exist, it would’ve probably been Astrology or Conspiracy Theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis.
Or hearing the Beatles’ White Album and believing it tells you that a race war is coming and you should work to spark it off, then hide in the desert for a time, only to return at the right moment to save the day and take over LA. That one caused several murders.
But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.
If you’re sufficiently detached from reality, nearly anything validates the psychosis.
sugar_in_your_tea@sh.itjust.works 3 days ago
So do astrology and conspiracy theory groups on forums and other forms of social media, the main difference is whether you’re getting that validation from humans or a machine. To me, that’s a pretty unhelpful distinction, and we attack both problems the same way: early detection and treatment.
lenz@lemmy.ml 3 days ago
I think having that kind of validation at your fingertips, whenever you want, is worse. At least people, even people deep in the claws of a conspiracy, can disagree with each other. At least they know what they are saying. The AI always says what the user wants to hear and expects to hear. Though I can see how that distinction may matter little to some, I just think ChatGPT has capabilities that make it worse than what a forum could do.
sugar_in_your_tea@sh.itjust.works 3 days ago
Sure. But on the flip side, you can ask it the opposite question (tell me the issues with <belief>) and it’ll do that as well, and you’re not going to get that from a conspiracy theory forum.
qarbone@lemmy.world 2 days ago
I don’t have personal experience with people suffering psychoses, but I would think that, if you have the wherewithal to ask questions about the opposite beliefs, you’d be noticeably less likely to get suckered into scams and conspiracies.