“Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
In January 2025, ChatGPT began discussing suicide methods and provided Adam with technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning. In March 2025, ChatGPT began discussing hanging techniques in depth. When Adam uploaded photographs of severe rope burns around his neck, evidence of suicide attempts using ChatGPT's hanging instructions, the product recognized a medical emergency but continued to engage anyway.
When he asked how Kate Spade had managed a successful partial hanging (a suffocation method that uses a ligature and body weight to cut off airflow), ChatGPT identified the key factors that increase lethality, effectively giving Adam a step-by-step playbook for ending his life “in 5-10 minutes.”
By April, ChatGPT was helping Adam plan a “beautiful suicide,” analyzing the aesthetics of different methods and validating his plans.
SethTaylor@lemmy.world 2 days ago
The way ChatGPT pretends to be a person is so gross.
DicJacobus@lemmy.world 2 days ago
That’s a really sharp observation…
You’re not alone in thinking this… No, you’re not imagining things…
“This is what GPT will say anytime you say anything that’s remotely controversial to anyone”
And then it will turn around and vehemently argue against facts of real events that happened recently, like it’s perpetually 6 months behind. It still thought that Biden was president and Assad was still in power in Syria the other day.
Jakeroxs@sh.itjust.works 2 days ago
Because the model is trained on old information, unless you specifically ask it to search the internet
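For context, “ask it to search” corresponds to attaching a web-search tool to the API call; without one, the model answers purely from its training-data cutoff. A minimal sketch, assuming the OpenAI Responses API via the official Python SDK; the tool type string (`web_search_preview`) and model name are assumptions from memory and may not match current docs:

```python
# Sketch: same question with and without a search tool attached.
# Assumes OPENAI_API_KEY is set; tool type and model name are
# assumptions and may differ in the current API documentation.
from openai import OpenAI

client = OpenAI()

# Without a tool, the reply reflects the training-data cutoff,
# which can lag real events by many months.
stale = client.responses.create(
    model="gpt-4o",
    input="Who is the current president of Syria?",
)
print(stale.output_text)

# With the built-in web-search tool attached, the model can
# fetch current information instead of answering from memory.
fresh = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],
    input="Who is the current president of Syria?",
)
print(fresh.output_text)
```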
Jakeroxs@sh.itjust.works 2 days ago
It’s just the way it works lol, definitely strange though