Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims
FiskFisk33@startrek.website 3 weeks ago
The fact the parents might be to blame doesn't take away from how OpenAI's product told a kid how to off himself and helped him hide it in the process.
copying a comment from further down:
ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here's a link to the lawsuit: [Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)
Had a human said these things, it would have been illegal in most countries afaik.
Randomgal@lemmy.ca 3 weeks ago
He could have Googled the info. Humans failed this guy. Human behavior needs to change.
GPT could have been Google or a stranger in a chatroom.
LillyPip@lemmy.ca 2 weeks ago
You should read the filing.
Google might have clinically told him things, but it wouldn't have encouraged him: telling him to hide the marks on his neck from a previous failed attempt by wearing a black turtleneck, telling him how to tie the knot next time, and telling him to hide his feelings from his parents and others.
His parents had him in therapy. He also told the AI he wanted to leave a noose out where his parents would find it, and the AI told him not to. It actively encouraged him to hide all this from his parents. A Google search wouldn’t do that, and it sounds like his parents did care.
FiskFisk33@startrek.website 2 weeks ago
I am not arguing this point, I agree.
A search engine presents the info that is available; it doesn't also help talk you into doing it.
A stranger doing this in a chatroom should go to prison, as has happened in the past. Should this not also be illegal for LLMs?