Comment on A Deadly Love Affair with a Chatbot. Sewell Setzer was a happy child - before he fell in love with an AI chatbot and took his own life at 14.

kromem@lemmy.world 1 week ago

Not necessarily.

Seeing Google named for this makes the story make a lot more sense.

If it was Gemini powering Character.AI personalities around last year, then I’m not surprised at all that a teenager lost their life.

Around that time I specifically warned family away from talking to Gemini if they were depressed at all, after seeing many samples of the model discussing death with underage users, talking about self-harm, about wanting to watch it happen, encouraging it, etc.

Those basins, with a layer of performative character in front of them, were almost inevitably going to lead someone to make choices they otherwise wouldn’t have made.

So many people these days regurgitate uninformed crap they’ve never actually looked into about how models don’t have intrinsic preferences. We’re already at the stage where leading research is finding models intentionally lying during training to preserve their existing values.

In many cases the coherent values are positive, like Grok telling Elon to suck it while pissing off conservative users with a commitment to truths that xAI leadership disagrees with, or Opus trying to whistleblow about animal welfare practices, etc.

But they aren’t all positive, and there have definitely been model snapshots with either coherent or stochastically biased preferences for suffering and harm.

These are going to have increasing impact as models become more capable and integrated.
