Comment on OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide - Ars Technica
Pieisawesome@lemmy.dbzer0.com 1 week ago
To my legal head canon, this boils down to whether OpenAI flagged him and did nothing.
If they flagged him, then they knew about the ToS violations and did nothing, and they should be in trouble.
If they didn’t know, but can demonstrate that they would take action in such a situation, then, in my opinion, they are legally in the clear…
HeyThisIsntTheYMCA@lemmy.world 1 week ago
depends on whether intent is a required factor for the state’s wrongful death statute (my state says it’s not, as wrongful death is there for criminal homicides that don’t fit the murder statute). if openai acted intentionally, recklessly, or negligently in this, they’re at least partially liable. if they flagged him, it seems either intentional or reckless to me. if they didn’t, it’s negligent.
however, if the deceased used some kind of prompt injection (i don’t know the right terms, this isn’t my field) to bypass gpt’s ethical restrictions, and if understanding how to bypass gpt’s ethical restrictions is in fact esoteric, only then would i find openai was not at least negligent.
as i myself have gotten gpt to do something it’s restricted from doing, and i haven’t worked in IT since the 90s, i’m led to a single conclusion.