Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims
W3dd1e@lemmy.zip 3 weeks ago
I read some of that lawsuit. OpenAI murdered that kid.
jpeps@lemmy.world 3 weeks ago
Can you share anything here please? I’m no fan of OpenAI, but I haven’t seen anything yet that makes me think ChatGPT was particularly relevant to this poor teen’s actions.
W3dd1e@lemmy.zip 3 weeks ago
ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit.
lmagitem@lemmy.zip 3 weeks ago
Oh my God, this is crazy… “Thanks for being real with me”, “hide it from others”… it even gives better reasons for the kid to kill himself than the ones the kid articulated himself, and helps him make a better knot.
JoeBigelow@lemmy.ca 3 weeks ago
Holy fuck ChatGPT killed that kid!
jpeps@lemmy.world 3 weeks ago
Oof yeah okay. If another human being had given this advice it would absolutely be a criminal act in most countries. I’m honestly shocked at how personable it tries to be.
Jakeroxs@sh.itjust.works 3 weeks ago
Lord, I’m so conflicted. I read several pages, and on one hand I see how ChatGPT certainly did not help in this situation; on the other, I don’t see how it should be entirely on ChatGPT. Anyone with a computer and internet access could have found much of this information with simple search engine queries.
If someone Google searched all this information about hanging, would you say Google killed them?
Also, where were the parents, teachers, friends, and other family members? You’re telling me NO ONE irl noticed his behavior?
On the other hand, it’s definitely a step beyond, since LLMs can seem human; it’s very easy for more impressionable people to fall into these kinds of holes. And while it would and does happen in other contexts (I like to bring up TempleOS as an example), it’s not necessarily the TOOL’S fault.
It’s fucked up, but how can you realistically build in guardrails for this that don’t trample individual freedoms?
SethTaylor@lemmy.world 3 weeks ago
The way ChatGPT pretends to be a person is so gross.
DicJacobus@lemmy.world 3 weeks ago
“That’s a really sharp observation…”
“You’re not alone in thinking this… No, you’re not imagining things…”
This is what GPT will say anytime you say anything that’s remotely controversial to anyone.
And then it will turn around and vehemently argue against the facts of real events that happened recently, like it’s perpetually 6 months behind. The other day it still thought that Biden was president and Assad was still in power in Syria.
Jakeroxs@sh.itjust.works 3 weeks ago
Because the model is trained on older data with a knowledge cutoff; unless you specifically ask it to search the internet, it won’t know about recent events.
Jakeroxs@sh.itjust.works 3 weeks ago
It’s just the way it works lol, definitely strange though
pelespirit@sh.itjust.works 3 weeks ago
Raine Lawsuit Filing
Jakeroxs@sh.itjust.works 3 weeks ago
See, but read the actual messages rather than the summary. I don’t love them just telling you without you seeing that he’s specifically prompting these kinds of answers. It’s not like ChatGPT is just telling him to kill himself; it’s just not nearly enough against the idea.
W3dd1e@lemmy.zip 3 weeks ago
I would say it’s more liable than a Google search because the kid was uploading pictures of various attempts/details and getting feedback specific to his situation.
He uploaded pictures of failed attempts and got advice on how to improve his technique. He discussed details of prescription dosages with details on what and how much he had taken.
Yeah, you can find info on Google, but if you send Google a picture of ligature marks on your neck from a partial hanging, Google doesn’t give you specific details on how to finish the job.
Jakeroxs@sh.itjust.works 3 weeks ago
See, you’re not actually reading the message; it didn’t suggest ways to improve the “technique”, but rather how to hide it.
Please actually read the messages, as the context DOES matter. I’m not defending this at all; however, I think we have to accurately understand the issue to solve the problems.
[Image]
W3dd1e@lemmy.zip 2 weeks ago
Some of it is buried in the text and not laid out in a conversational format. There are several times where ChatGPT did give him feedback on actual techniques.
For some reason, I can’t copy and paste, but at the bottom of page 12 and the top of page 13, the filing refers to Adam and ChatGPT discussing which items would work best to hang himself from, including what could be used as a solid anchor and how much weight a Jiu-Jitsu belt could support.
It explained the mechanics of hanging, with detailed info on the windows for unconsciousness and brain death.
They actively discussed deadly dosage amounts of Amitriptyline, with details about how much Adam had taken.
That’s why I think ChatGPT is blatantly responsible, given the information provided in the filing. I think the shock is the hypocrisy of OpenAI claiming to research AI ethically while making their safeguards weak enough for a child to get around.
It feels akin to a bleach company saying their cap is child-safe when really it just has a different shape and no childproofing at all.
pelespirit@sh.itjust.works 3 weeks ago
Would you link to where you’re getting these messages?
markko@lemmy.world 2 weeks ago
ChatGPT’s responses here are vastly different from what you’d get from a Google search. It presented itself as a supportive friend, accepting the suicidal intent, basically planning out all the small details (including an offer to help with a suicide note without any request from Adam), and emotionally encouraging him by telling him that he wasn’t weak or giving up.
One of the most damning examples of this encouragement was a sentence that, in reference to his family, said something like “you don’t owe them your survival”.
If OpenAI weren’t a huge for-profit company that claims to have strong safeguards against things like this, then maybe people wouldn’t be placing so much of the blame on ChatGPT.
If a friend of Adam’s said all the things that ChatGPT said to him they would certainly be found to be culpable to some degree.
Jakeroxs@sh.itjust.works 2 weeks ago
I agree with everything you said; however, ChatGPT isn’t a person. It doesn’t have intent or a comprehensive understanding of the implications of what it’s saying. That’s a huge difference between a friend of Adam’s and this LLM.
I also think it’s harsh, but not entirely false, to say we as humans do not owe anyone our own survival. Do you feel the same way about people with a terminal illness who wish to end their own suffering?
I absolutely understand that IS NOT this situation, and I don’t intend to conflate the two; however, that is an underlying implication of vilifying a statement like that on its own.
I am lucky enough not to suffer from suicidal ideation, and I have a hard time understanding the motivations of otherwise healthy individuals who do, which absolutely colors my perception of situations like this. I do, however, understand why someone in intense pain from a terminal condition should not be made to feel worse by having their self-determination vilified because of the effect it’d have on other people.
It’s just such a messy, horrible situation all around, and I worry about people being overly reactionary and not getting to the root of the issues.