Comment on Character.ai Faces Lawsuit After Teen’s Suicide
hendrik@palaver.p3x.de 4 weeks ago
That sentence also stood out to me. Somehow the article spends lots of pages on what he did on his phone. And then half a sentence about the gun, and he's dead. No further questions about that.
RunningInRVA@lemmy.world 4 weeks ago
The mother was on CBS this morning, and while the story is sad, my wife and I looked at each other with the same question when the mom stated the teen shot himself. It would have been terrible for Gayle King to start questioning the mother about the gun, but you kind of wish she had.
hendrik@palaver.p3x.de 4 weeks ago
Sure. Once you start blaming people, I think some other questions should be allowed, too...
For example: Isn't it negligent to give a 14-year-old teen access to a loaded handgun?
And while computer games, chatbots, or loneliness can be linked, they're rarely the underlying issue, or the sole thing to blame. Sounds to me like the debate on violent computer games in the early 2000s, when lots of parents thought playing CounterStrike would make us murder people.
geekwithsoul@lemm.ee 4 weeks ago
I understand what you mean about the comparison between AI chatbots and video games (or whatever the moral panic du jour is), but I think they’re very much not the same. To a young teen, no matter how “immersive” the game is, it’s still just a game. They may rage against other players, they may become obsessed with playing, but as I said they’re still going to see it as a game.
An AI chatbot who is a troubled teen’s “best friend” is different and no matter how many warnings are slapped on the interface, it’s going to feel much more “real” to that kid than any game. They’re going to unload every ounce of angst into that thing, and by defaulting to “keep them engaged”, that chatbot is either going to ignore stuff it shouldn’t or encourage them in ways that it shouldn’t. It’s obvious there’s no real guardrails in this instance, as if he was talking about being suicidal, some red flags should’ve popped up.
Yes, the parents shouldn’t have allowed him such unfettered access, and yes, they shouldn’t have had a loaded gun that he could get to, but a simple “This is all for funsies” warning on the interface isn’t enough to stop this from happening again. Some really troubled adults are using these things as de facto therapists, and that’s bad too. But I’d be happier if lawmakers were much more worried about kids having access to this stuff than accessing “adult sites”.
ifItWasUpToMe@lemmy.ca 3 weeks ago
I get what you are saying, and somewhat agree. However, this really reads exactly like a decade ago, when it was “teens can’t tell the difference between killing someone in a video game vs real life”.
hendrik@palaver.p3x.de 4 weeks ago
The warning is a joke. Like printing "Smoking kills" on cigarette packages, except even fewer people care. And I doubt that sentence is going to change anything in a legal battle.
I'm like half convinced.
I think the dynamics are the same as with other things. Sometimes we like to escape reality. That can be done by reading books, watching TV or playing computer games. Or social media or watching some twitch streamer daily. I believe the latter is called parasocial interaction. It becomes an issue once done excessively. Or the lines get blurry. Or mental issues get into the mix.
Certainly AI chatbots are more convincing than some regular old book. (Allegedly, already in 1775 we had young people commit copycat suicide after reading Goethe's "The Sorrows of Young Werther", so it's not a new topic.) But an AI can exploit your individual needs and wants and really get to you. I read the effects are currently being studied. I skimmed some long papers, but it seems we don't have a final answer yet about what that does psychologically.
I've tried roleplaying with AI. And I've also tried loading those characters, like the famous AI therapist and pop culture characters. For me, it's pretty clear it's just a game. All of the interaction happens through text on the screen; I can't touch them or talk to them verbally (yet). I've heard from some other people here on Lemmy that they don't like the experience, which is like some pen-and-paper game... And I know how these things work, and that my hypothetical AI girlfriend is just a dream. So I don't think I'm at risk. And I don't think lots of other people are. But... obviously some people are. This isn't the first article about people getting harmed. And I can see how you wouldn't be able to defend yourself against some chatbot if you have serious issues or a mental condition.
I still think we can't skip all the other factors at play. We need to address (teenage) loneliness, guns, and not having a caring and healthy social/human environment. A proper education, and giving people some knowledge of how these things work and what they are, would certainly help, too. It's always the same story. We leave people alone, without education, without a healthy social environment, the people close to them miss how much they're struggling, there are guns lying on the desk...
And after the inevitable happened, we don't address any of that. Instead, we completely focus on one topic that's more symptom than cause. And that's why I'm annoyed by the article.
(But I get that there is some risk specific to chatbots that goes beyond other things. And it's probably not just a symptom, but also a contributing factor. We'd need more non-sensationalist information to judge...)
echodot@feddit.uk 4 weeks ago
When a kid dies, it’s natural for parents to want someone to blame, but sometimes there’s not a lot you can do. However sad it is, and it’s definitely sad, you just need to accept it as something that happened; it isn’t always anyone’s fault.
There is a bare minimum one could do and I would have thought that gun safety would be covered under that bare minimum. Especially once they start throwing around accusations at other people.