OpenAI Is Giving Exactly the Same Copy-Pasted Response Every Time ChatGPT Is Linked to a Mental Health Crisis
Submitted 12 hours ago by tonytins@pawb.social to technology@lemmy.world
https://futurism.com/openai-response-chatgpt-mental-health
Comments
Electricd@lemmybefree.net 6 hours ago
LLMs can’t be fully controlled. They shouldn’t be held liable.
scratchee@feddit.uk 1 hour ago
Neither can humans, ergo nobody should ever be held liable for anything.
Civilisation is a sham, QED.
surewhynotlem@lemmy.world 5 hours ago
I made this car with a random number generator that occasionally blows it up. It’s cheap, so lots of people buy it. Totally not my fault that it blows up, though. I mean, yes, I designed it, and I know it occasionally explodes. But I can’t be sure when it will blow up, so it’s not my fault.
Not_mikey@lemmy.dbzer0.com 9 hours ago
Ah, I was hoping this meant ChatGPT would give canned responses, i.e. “Seek help,” whenever it detected it was being used for mental health issues, which it should. But no, it’s just OpenAI flipping off anyone who asks why their chatbot pushed a person to suicide.
WanderingThoughts@europe.pub 7 hours ago
Reminds me of the time that Twitter, when it was still Twitter and freshly taken over by Musk, replied to all slightly critical questions with a poop emoji.
interdimensionalmeme@lemmy.ml 7 hours ago
It should also refuse to answer any medical question, any engineering question, any finance question, anything that requires the responsibility of an accredited member of the professional–managerial class to read the question, read the answer, and then decide whether the question is allowed at all, and whether he will rewrite, change, or simply block the reply entirely.
Also, such questions should start at $160 USD a pop.