Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims
floquant@lemmy.dbzer0.com 14 hours ago
Not encouraging users to kill themselves is “ruining it”? Lmao
drmoose@lemmy.world 14 hours ago
That’s not how LLM safety guards work. Just like any guard, it’ll affect legitimate uses too, since LLMs can’t really reason or understand nuance.
ganryuu@lemmy.ca 11 hours ago
That seems way more like an argument against LLMs in general, don’t you think? If you cannot make it so it doesn’t encourage suicide without ruining other uses, maybe it wasn’t ready for general use?
sugar_in_your_tea@sh.itjust.works 4 hours ago
It’s more an argument against using LLMs for things they’re not intended for. LLMs aren’t therapists, they’re text generators. If you ask it about suicide, it makes a lot of sense for it to generate text relevant to suicide, just like a search engine should.
The real issue here is that the parents either weren’t noticing or weren’t responding to the kid’s pain. They should be the first line of defense, and enlist professional help for things they can’t handle themselves.
ganryuu@lemmy.ca 4 hours ago
I agree with the part about unintended use: yes, an LLM is not a therapist and should never act as one. However, concerning your example of search engines, they will catch the suicide keyword and put help resources before any search result. Google does it, DDG does too. I believe ChatGPT will also lead with such resources on the first mention, but as OpenAI themselves say, the safety features degrade as the conversation gets longer.
About this specific case, I need to find out more, but other comments in this thread say that not only was the kid in therapy, suggesting that the parents were not passive about it, but that ChatGPT actually encouraged the kid to hide what he was going through. Considering what I was able to hide from my parents when I was a teenager, without such a tool available, I can only imagine how much harder it would be to notice the depth of what this kid was going through.
In the end, I strongly believe the company should put much stronger safety features in place, and if they are unable to do so correctly, then the product should simply not be available to the public. People will misuse tools, especially a tool touted as AI when it is actually a glorified autocomplete.
(Yes, I know that AI is a much broader term that also encompasses LLMs, but the actual limitations of LLMs are not well enough known by the public, and not communicated clearly enough by the companies to end users.)
drmoose@lemmy.world 9 hours ago
I’m not gonna fall for your goalpost move, sorry.
ganryuu@lemmy.ca 7 hours ago
I’m honestly at a loss here. I didn’t intend to argue in bad faith, so I don’t see how I moved any goalposts.
yermaw@sh.itjust.works 10 hours ago
You’re absolutely right, but there’s the counterpoint that always wins: “there’s money to be made, fuck you and fuck your humanity.”
ganryuu@lemmy.ca 10 hours ago
Can’t argue there…