I think we need a built-in safeguard for people who actually develop an emotional relationship with AI, because that’s not a healthy sign
morto@piefed.social 1 week ago
Yes, it is. People are personifying LLMs and forming emotional relationships with them, which leads to unprecedented forms of abuse. Searching for shit on Google or YouTube is one thing, but being told to do something by an entity you have an emotional bond with is much worse.
TheReanuKeeves@lemmy.world 1 week ago
postmateDumbass@lemmy.world 6 days ago
Good thing capitalism has provided you an AI chatbot psychiatrist to help you not depend on AI for mental and emotional health.
TheMonk@lemmings.world 5 days ago
I don’t remember reading about a sudden, shocking number of people developing “Google-induced psychosis.”
ChatGPT and similar chatbots are very good at imitating conversation. Think of how easy it is to suspend disbelief online: pretending the fanfic you’re reading is canon, that sort of thing. When those bots mimic emotional responses, it’s very easy to get tricked, especially for mentally vulnerable people. As a rule, the mentally vulnerable should not habitually “suspend disbelief.”