
In a 38-page essay, Dario Amodei warns of civilization-level damage from superintelligent AI, questioning whether humanity has the maturity to handle such power

18 likes

Submitted 5 days ago by Innerworld@lemmy.world to technology@lemmy.world

https://www.axios.com/2026/01/26/anthropic-ai-dario-amodei-humanity


Comments

  • LordMayor@piefed.social 5 days ago

    Fucking delusional twat. Either he’s on some serious drugs, or this is some PR bullshit made to sound like he’s genuinely worried that Nobel laureate-level AI is right around the corner so that people will throw more money into his company’s pockets. Or both, could be both. But I think he’s simply an asshole.

    • DoctorNope@lemmy.world 5 days ago

      I think you’re 100% right, and boy, this piece made me big mad. Yet another outlet breathlessly publishing fucking nonsense for a ghoul; by uncritically publishing said ghoul’s dire warning of the imminent birth of a superintelligent, malign(?) entity, it serves as his unpaid marketing firm. Axios should be embarrassed. If anyone who wasn’t the head of an LLM company spouted this drivel, they’d be locked away in a padded room, and Axios would rightly be called out for exacerbating the mental health crisis of a paranoid schizophrenic.

      The whole essay reads like, “Here at Anthropic, we’re doing our best to create the Torment Nexus, but if anybody else were to successfully create the Torment Nexus, that would represent an existential risk for humanity. We’re doing our best to create it first, so please give us more money to save humanity.” It would be utter lunacy if he actually believed it.

      • Perspectivist@feddit.uk 5 days ago

        If anyone who wasn’t the head of an LLM company spouted this drivel, they’d be locked away in a padded room and Axios would rightly be called out for exacerbating the mental health crisis of a paranoid schizophrenic.

        Like Eliezer Yudkowsky, Roman Yampolskiy, Stuart Russell, Nick Bostrom, Yoshua Bengio, Geoffrey Hinton, Max Tegmark and Toby Ord?

  • nyan@lemmy.cafe 5 days ago

    If we actually had superintelligent AI, I might be concerned. But what we have instead is stochastic parrots with no innate volition. In and of themselves, they aren’t dangerous at all—it’s the humans backing them that we have to be wary of.

  • verdi@tarte.nuage-libre.fr 5 days ago

    These fucking grifters really don’t know shit about the snake oil they’re selling, huh…

    • Perspectivist@feddit.uk 5 days ago

      This comes across more as a warning than a sales pitch, but I get the reflexive hostility on your part - the headline mentions “AI” after all.

      • verdi@tarte.nuage-libre.fr 5 days ago

        There is factually 0 chance we’ll reach AGI with the current brand of technology. There’s neither the context size nor the compute to even come close to AGI. You’d have to either be selling snake oil or be completely oblivious about the subject to even consider AGI as a real possibility. This tells me the average user really doesn’t know shit…

  • devolution@lemmy.world 3 days ago

    The things these assclowns do and say to make themselves relevant.

  • JailElonMusk@sopuli.xyz 5 days ago

    Spoiler Alert: We don’t.
