Comment on LLMDeathCount.com

atrielienz@lemmy.world 2 days ago

I like your username, and I generally agree with you, up to a point.

But I think the problem is that there are a lot of mentally unwell, isolated people out there who are using this tool (which has no safeguards) as a sort of human stand-in for social interaction.

If a human actually agrees that you should kill yourself and talks you into doing it, they are complicit and can be held accountable.

Because chatbots are being… billed as products that pass the Turing test, I can understand why people would want the companies that own them to be held accountable.

These companies won’t let you use their LLM to look up how to make a bomb, but they’ll let people confide suicidal ideation without putting in any safeguards for that. And because these models are designed to be agreeable, the LLM will agree with a person who tells it they think they should be dead.
