
Anthropic AI Safety Researcher Warns Of World ‘In Peril’ In Resignation

48 likes

Submitted 1 day ago by return2ozma@lemmy.world to technology@lemmy.world

https://www.forbes.com/sites/conormurray/2026/02/09/anthropic-ai-safety-researcher-warns-of-world-in-peril-in-resignation/


Comments

  • ageedizzle@piefed.ca 1 day ago

    According to a study Sharma published last week, in which he investigated how using AI chatbots could cause users to form a distorted perception of reality, he found “thousands” of these interactions that may produce these distortions “occur daily.”

    This is interesting. I think it suggests that the AI psychosis problem is bigger than we realize. All we know of so far is a few high-profile cases, without any real statistics on the scale of the problem. But Sharma seems to know enough to be disturbed by it. Anecdotally this matches my own experience, as I think I’ve come across AI psychosis in the wild.

    I suspect that AI psychosis will become a widely acknowledged problem within the next decade, similar to how it’s now widely acknowledged that Instagram and TikTok can trigger eating disorders.

    • unexposedhazard@discuss.tchncs.de 22 hours ago

      It’s kind of an obvious outcome if you combine the concept of automation bias with the hallucinations produced by LLMs. They really are perfect misinformation and confusion machines.
