This feels a bit like PTA-driven panic about kids eating Tide Pods when like one person did it. Or razor blades in Halloween candy. Or kids making toilet hooch with their juice boxes. Or the choking game sweeping playgrounds.
But also, man on internet with no sense of mental health … sounds almost feasible.
pennomi@lemmy.world 3 days ago
Turns out it doesn’t really matter what the medium is, people will abuse it if they don’t have a stable mental foundation. I’m not shocked at all that a person who would believe a flat earth shitpost would also believe AI hallucinations.
Bouzou@lemmy.world 2 days ago
I dunno, I think there’s credence to treating it as a real worry.
Like with an addictive substance: yeah, some people are going to be dangerously susceptible to it, but that doesn’t mean there shouldn’t be any protections in place…
Now what the protections would be, I’ve got no clue. But I think a blanket, “They’d fall into psychosis anyway” is a little reductive.
pennomi@lemmy.world 2 days ago
I don’t think I suggested it wasn’t worrisome, just that it’s expected.
If you think about it, AI is tuned using RLHF, or Reinforcement Learning from Human Feedback. That means the only thing AI is optimizing for is “convincingness”. It doesn’t optimize for intelligence; anything that seems like intelligence is just a side effect as it marches ever onward toward becoming convincing to humans.
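For what it’s worth, the core training signal in RLHF really is just “which answer did the human rater prefer.” Here’s a toy sketch of the pairwise (Bradley-Terry style) loss commonly used to train the reward model — the scores are made up and this isn’t any real model, but note that nothing in the objective mentions truth:

```python
import math

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    """Pairwise preference loss: -log(sigmoid(margin)).
    It is minimized by making the human-preferred answer score higher
    than the rejected one, regardless of whether either answer is true."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A confidently wrong but pleasing answer that raters preferred...
loss_before = preference_loss(score_preferred=2.0, score_rejected=-1.0)
# ...gets pushed even further apart as training continues.
loss_after = preference_loss(score_preferred=5.0, score_rejected=-1.0)

# Larger margin => lower loss, i.e. the model is rewarded purely
# for being preferred ("convincing"), not for being correct.
assert loss_after < loss_before
```

The optimizer only sees the preference margin, so a persuasive hallucination and a correct answer are indistinguishable to it if raters liked them equally.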
“Hey, I’ve seen this one before!” you might say. Indeed, this is exactly what happened to social media. It optimized for “engagement”, not truth, and now it’s eroding a lot of people’s minds. AI will do the same thing if run by corporations in search of profits.
Left unchecked, it’s entirely possible that AI will become the most addictive, seductive technology in history.
Bouzou@lemmy.world 2 days ago
Ah, I see what you’re saying – that’s a great point. It’s designed to be entrancing AND designed to actively try to be more entrancing.