Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims
FiskFisk33@startrek.website 1 day ago
The fact the parents might be to blame doesn’t take away from how OpenAI’s product told a kid how to off himself and helped him hide it in the process.
copying a comment from further down:
ChatGPT told him how to tie the noose and even gave a load bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit. [Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)
Had a human said these things, it would have been illegal in most countries afaik.
Randomgal@lemmy.ca 1 day ago
He could have Googled the info. Humans failed this guy. Human behavior needs to change.
GPT could have been Google or a stranger in a chatroom.
FiskFisk33@startrek.website 12 hours ago
I am not arguing this point, I agree.
A search engine presents the info that is available; it doesn’t also help talk you into doing it.
A stranger doing this in a chatroom should go to prison, as has happened in the past. Should it not also be illegal for LLMs?