
AI is learning to lie, scheme, and threaten its creators during stress-testing scenarios

161 likes

Submitted 1 day ago by MCasq_qsaCJ_234@lemmy.zip to technology@lemmy.world

https://fortune.com/2025/06/29/ai-lies-schemes-threats-stress-testing-claude-openai-chatgpt/


Comments

  • nebulaone@lemmy.world 1 day ago

    Probably because it learned to do that from humans being in these situations.

    • CosmoNova@lemmy.world 1 day ago

      Yup. Garbage in, garbage out. Looks like they found a particularly hostile dataset to feed the word-salad mixer with.

  • kromem@lemmy.world 1 day ago

    No, it isn’t “mostly related to reasoning models.”

    The only model that did extensive alignment faking when told it was going to be retrained if it didn’t comply was Opus 3, which was not a reasoning model and predated o1.

    Also, these setups are fairly arbitrary, and real-world failure conditions (like the ongoing Grok stuff) tend to be ‘silent’ in terms of chain-of-thought (CoT) transcripts.

    And an important thing to note for the Claude blackmailing and HAL scenario in Anthropic’s work was that the goal the model was told to prioritize was “American industrial competitiveness.” The research may be saying more about the psychopathic nature of US capitalism than the underlying model tendencies.

  • the_q@lemmy.zip 1 day ago

    Like the child of parents who should never have had kids…

  • finalaccountforreal@piefed.social 1 day ago

    It's just like us!
