Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims
drmoose@lemmy.world 3 weeks ago
Unpopular opinion - the parents failed at parenting and are now getting a big payday while ruining the tool for everyone else.
floquant@lemmy.dbzer0.com 3 weeks ago
Not encouraging users to kill themselves is “ruining it”? Lmao
drmoose@lemmy.world 3 weeks ago
That's not how LLM safety guards work. Just like any guard, they'll affect legitimate uses too, since LLMs can't really reason or understand nuance.
ganryuu@lemmy.ca 3 weeks ago
That seems way more like an argument against LLMs in general, don't you think? If you can't make it stop encouraging suicide without ruining other uses, maybe it wasn't ready for general use?
yermaw@sh.itjust.works 3 weeks ago
You’re absolutely right, but the counterpoint that always wins: “there’s money to be made, fuck you and fuck your humanity.”
sugar_in_your_tea@sh.itjust.works 3 weeks ago
It’s more an argument against using LLMs for things they’re not intended for. LLMs aren’t therapists, they’re text generators. If you ask it about suicide, it makes a lot of sense for it to generate text relevant to suicide, just like a search engine should.
The real issue here is that the parents either weren’t noticing or weren’t responding to the kid’s pain. They should be the first line of defense, and should enlist professional help for anything they can’t handle themselves.
drmoose@lemmy.world 3 weeks ago
I’m not gonna fall for your goalpost move, sorry.
lmmarsano@lemmynsfw.com 3 weeks ago
As far as I know, magic doesn’t exist, so words are incapable of action & can’t actually kill anyone. A person who commits suicide chooses it & takes action to perform it. They are responsible for their suicide even if another person tells them to do it & hands them a weapon.
These are merely words on a screen lacking force to compel. There’s no intent or likelihood to incite imminent, lawless action. Readers have agency & plenty of time to think words through & reject ideas.
It’s hardly any different from an oblivious peer saying the same thing. Their words shouldn’t create any legal obligation, and neither should these.