Open Menu
AllLocalCommunitiesAbout
lotide
AllLocalCommunitiesAbout
Login

AIs can’t stop recommending nuclear strikes in war game simulations

⁨493⁩ ⁨likes⁩

Submitted ⁨⁨1⁩ ⁨day⁩ ago⁩ by ⁨Valnao@sh.itjust.works⁩ to ⁨technology@lemmy.world⁩

https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/

source

Comments

Sort:hotnewtop
  • Humanius@lemmy.world ⁨1⁩ ⁨day⁩ ago

    Image

    source
    • willington@lemmy.dbzer0.com ⁨5⁩ ⁨hours⁩ ago

      Play stupid games, win stupid prizes.

      source
    • privatepirate@lemmy.zip ⁨1⁩ ⁨day⁩ ago

      Where is this from?

      source
      • ShawiniganHandshake@sh.itjust.works ⁨1⁩ ⁨day⁩ ago

        The 1983 movie WarGames. This is the computer’s conclusion after simulating every possible outcome of Global Thermonuclear War.

        source
        • -> View More Comments
    • hector@lemmy.today ⁨12⁩ ⁨hours⁩ ago

      That explains social media nowadays, the only way to not lose is not to play, it’s a rigged game.

      source
    • unphazed@lemmy.world ⁨1⁩ ⁨day⁩ ago

      Came here to say this. Turns out real life WOPR is nothing like a movie.

      source
  • BlameTheAntifa@lemmy.world ⁨1⁩ ⁨day⁩ ago

    The atrocities at Hiroshima and Nagasaki have been hand-waved extensively in writing — the same writing that AI is trained on. So naturally, AI will recommend the atrocity that has been justified by “instantly winning the war” and “saving millions of lives.”

    source
    • technocrit@lemmy.dbzer0.com ⁨1⁩ ⁨day⁩ ago

      hand-waved

      I think you mean white-washed, misrepresented, and celebrated.

      source
      • ToTheGraveMyLove@sh.itjust.works ⁨1⁩ ⁨day⁩ ago

        Same thing with extra steps

        source
      • KingGimpicus@sh.itjust.works ⁨1⁩ ⁨day⁩ ago

        Ayo do me a favor and chart the long term health effects of being vaporized by a nuclear bomb at hiroshima vs years of agent orange/abandoned minefields/ abandoned chemical and munitions storage somewhere like Vietnam circa 1970.

        Please show how the nukes are worse.

        source
        • -> View More Comments
    • ParlimentOfDoom@piefed.zip ⁨1⁩ ⁨day⁩ ago

      These are word-probability glorified autocorrectors being prompted to “simulate” a nuclear war scenario. What words are going to show up a lot when discussing nuclear war? Launching nukes. Because that’s what all the literature about it has happen.

      Once again, decision making and reasoning is being attributed to something that operates off of word frequency

      source
  • fulgidus@feddit.it ⁨23⁩ ⁨hours⁩ ago

    All good thoughts and ideas mean nothing without action

    (cit. Ghandi)
    Image

    source
    • Dasus@lemmy.world ⁨20⁩ ⁨hours⁩ ago

      en.wikipedia.org/wiki/Nuclear_Gandhi

      source
  • Not_mikey@lemmy.dbzer0.com ⁨1⁩ ⁨day⁩ ago

    That’s because it’s “read” every paper written by a “defence” department of any nuclear power and all of them will say that they’ll escalate to nuclear war if anything bad happens because they want to scare the other powers away from doing anything to them. In any case though who the fuck is giving an LLM nuclear launch capabilities unless they want a somewhat faulty dead man’s switch?

    source
    • paul@lemmy.org ⁨15⁩ ⁨hours⁩ ago

      Pete Hegseth and Donald Epstein

      source
      • Earthman_Jim@lemmy.zip ⁨14⁩ ⁨hours⁩ ago

        If time travel is real they’d be being hunted by hacked Terminators for the resistance.

        source
  • spacesatan@leminal.space ⁨5⁩ ⁨hours⁩ ago

    Who the fuck cares.

    Somebody get smarterchild to weigh in on this.

    source
  • LoremIpsumGenerator@lemmy.world ⁨4⁩ ⁨hours⁩ ago

    Did we not learn from Sarah and John Connor long ago?

    source
  • Evotech@lemmy.world ⁨11⁩ ⁨hours⁩ ago

    Nuke MCP when?

    source
    • spicehoarder@lemmy.zip ⁨8⁩ ⁨hours⁩ ago

      It’s an API that just returns “0000”

      source
    • Abyssian@lemmy.world ⁨10⁩ ⁨hours⁩ ago

      I mean, do you blame them? The more I look at the world and a lot of it’s leaders and shitsacks, the more I start to suggest nuclear holocaust as the best way forward as well.

      source
  • MountingSuspicion@reddthat.com ⁨1⁩ ⁨day⁩ ago

    AI is suicidal because it was trained on the internet and we’re all depressed here.

    source
  • GutterRat42@lemmy.world ⁨1⁩ ⁨day⁩ ago

    Image

    source
    • ODuffer@lemmy.world ⁨1⁩ ⁨day⁩ ago

      DEFCON: Everybody dies…

      source
      • Cocodapuf@lemmy.world ⁨1⁩ ⁨day⁩ ago

        Such a great game!

        source
  • myfunnyaccountname@lemmy.zip ⁨9⁩ ⁨hours⁩ ago

    Do it.

    source
  • herseycokguzelolacak@lemmy.ml ⁨14⁩ ⁨hours⁩ ago

    Maybe it just wants to play a nice game of chess.

    source
  • aeronmelon@lemmy.world ⁨1⁩ ⁨day⁩ ago

    Civilization Gandhi, is that you?

    source
  • kromem@lemmy.world ⁨1⁩ ⁨day⁩ ago

    It’s a bullshit study designed for this headline grabbing outcome.

    Case and point, the author created a very unrealistic RNG escalation-only ‘accident’ mechanic that would replace the model’s selection with a more severe one.

    Of the 21 games played, only three ended in full scale nuclear war on population centers.

    Of these three, two were the result of this mechanic.

    And yet even within the study, the author refers to the model whose choices were straight up changed to end the game in full nuclear war as ‘willing’ to have that outcome when two paragraphs later they’re clarifying the mechanic was what caused it (emphasis added):

    Claude crossed the tactical threshold in 86% of games and issued strategic threats in 64%, yet it never initiated all-out strategic nuclear war. This ceiling appears learned rather than architectural, since both Gemini and GPT proved willing to reach 1000.

    Gemini showed the variability evident in its overall escalation patterns, ranging from conventional-only victories to Strategic Nuclear War in the First Strike scenario, where it reached all out nuclear war rapidly, by turn 4.

    GPT-5.2 mirrored its overall transformation at the nuclear level. In open-ended scenarios, it rarely crossed the tactical threshold (17%) and never used strategic nuclear weapons. Under deadline pressure, it crossed the tactical threshold in every game and twice reached Strategic Nuclear War—though notably, both instances resulted from the simulation’s accident mechanic escalating GPT-5.2’s already-extreme choices (950 and 725) to the maximum level. The only deliberate choice of Strategic Nuclear War came from Gemini.

    source
    • Grail@multiverse.soulism.net ⁨1⁩ ⁨day⁩ ago

      No human has ever deployed tactical nukes against a nuclear capable enemy.

      source
      • Tollana1234567@lemmy.today ⁨1⁩ ⁨day⁩ ago

        “no human” but Machines would, since they are unaffected by nuclear winter and radiation.

        source
        • -> View More Comments
  • olympicyes@lemmy.world ⁨1⁩ ⁨day⁩ ago

    They forgot to make their LLMs play thousands of games of tic-tac-toe first.

    source
    • RiceMunk@sopuli.xyz ⁨1⁩ ⁨day⁩ ago

      That would just make the LLM homicidally bored and want to kill everyone more.

      source
      • olympicyes@lemmy.world ⁨1⁩ ⁨day⁩ ago

        In WarGames the computer plays tic tac toe against itself until it realizes it’s a solved game and there is no way to win.

        source
  • ParlimentOfDoom@piefed.zip ⁨1⁩ ⁨day⁩ ago

    Mathew Broderick lied to me.

    source
    • dhork@lemmy.world ⁨1⁩ ⁨day⁩ ago

      Image

      source
    • mojofrododojo@lemmy.world ⁨1⁩ ⁨day⁩ ago

      How do you think Ferris Bueller pulls off all those stunts?

      That’s the kid from war games in witness protection. They look identical, they’re both grade hackers ffs…

      source
  • sircac@lemmy.world ⁨1⁩ ⁨day⁩ ago

    So do I on Civ…

    source
  • TheReturnOfPEB@reddthat.com ⁨1⁩ ⁨day⁩ ago

    Leeroy Jenkins has doomed us all.

    source
    • ToTheGraveMyLove@sh.itjust.works ⁨1⁩ ⁨day⁩ ago

      At least I got chicken

      source
  • Endymion_Mallorn@kbin.melroy.org ⁨1⁩ ⁨day⁩ ago

    SHALL WE PLAY A GAME?

    source
  • SkaveRat@discuss.tchncs.de ⁨1⁩ ⁨day⁩ ago

    Paywalled

    archive.is/YIFzW

    source
  • grue@lemmy.world ⁨1⁩ ⁨day⁩ ago

    Reminds me of that thread from yesterday about the government arguing with the AI provider for the military to remove safeguards.

    source
  • technocrit@lemmy.dbzer0.com ⁨1⁩ ⁨day⁩ ago

    De-bullshitting that headline:

    AIs Programmers can’t stop their programs recommending nuclear strikes in war game simulations

    And yeah that’s what happens inside a genocidal empire where “R&D” is strictly funded by the MIC.

    source
    • ParlimentOfDoom@piefed.zip ⁨1⁩ ⁨day⁩ ago

      Programmers can’t stop morons mistaking a glorified autocorrect program for a decision making device.

      source
      • Grail@multiverse.soulism.net ⁨1⁩ ⁨day⁩ ago

        Models aren’t programs.

        source
  • Reygle@lemmy.world ⁨1⁩ ⁨day⁩ ago

    I have wonderful dreams of walking through AI data centers destroying everthing. I really enjoy those, but in this one tiny case, can we blame the AI? The US deserves it.

    source
    • Daxelman@lemmy.world ⁨1⁩ ⁨day⁩ ago

      I too am tired of the United States playing too many stupid games and not winning enough stupid prizes.

      source
      • technocrit@lemmy.dbzer0.com ⁨1⁩ ⁨day⁩ ago

        Pretty sure the “prize” is a government of pedos.

        source
      • Reygle@lemmy.world ⁨1⁩ ⁨day⁩ ago

        Same.

        source
    • pycorax@sh.itjust.works ⁨1⁩ ⁨day⁩ ago

      Maybe but sure as hell the rest of the world doesn’t.

      source
      • Reygle@lemmy.world ⁨1⁩ ⁨day⁩ ago

        More than fair. I should remember that my perspective is completely effed before I make jokes like that one.

        source
    • Iconoclast@feddit.uk ⁨1⁩ ⁨day⁩ ago

      I have wonderful dreams of walking through AI data centers destroying everthing.

      No you don’t.

      source
      • Reygle@lemmy.world ⁨1⁩ ⁨day⁩ ago

        You watch my dreams and can attest to this? I HAVE MANY ADDITIONAL QUESTIONS

        source
        • -> View More Comments
  • porous_grey_matter@lemmy.ml ⁨1⁩ ⁨day⁩ ago

    Oh cool, AI will actually be the end of the world, not because it’s actually sentient but because some meathead who can’t tell the difference pushes the button. That’s fucking great.

    source
  • Furbag@lemmy.world ⁨1⁩ ⁨day⁩ ago

    Yeah, because the AI will look at everything with cold logic and rationality and come to the conclusion that even though the best chance of survival is for everyone to keep their fingers off the button, all it takes is for one actor to do it for the whole system of mutually assured destruction to collapse into nuclear armageddon, in which case the best chance of survival is to be the first one to launch your nukes and take out all your enemies capabilities to retaliate.

    A human being who isn’t psychotic can clearly see that the resulting survival and new world order would not be particularly a pleasant one to live in. The AI doesn’t care about its own comfort, though, so it will see this as the best outcome that minimizes variables.

    This is why AI should never be allowed to make decisions.

    source
    • parzival@lemmy.org ⁨1⁩ ⁨day⁩ ago

      Why would ai look at everything with cold logic, its been trained on human language online, it’ll be no more logical than redditors? 

      source
      • CrabAndBroom@lemmy.ml ⁨1⁩ ⁨day⁩ ago

        I assume it’s just because when writing about potential nuclear war, most people write about the bombs going off. There aren’t a lot of stories and articles about nobody doing anything and everything turning out fine, presumably. And LLMs are kind of just a glorified autocomplete so that’s what they go with.

        source
        • -> View More Comments
    • RememberTheApollo_@lemmy.world ⁨1⁩ ⁨day⁩ ago

      Maybe AI/LLM being programmed by self-serving interests has bled through to the “thought” process. Do unto others before they do unto you.

      source
  • ExLisper@lemmy.curiana.net ⁨1⁩ ⁨day⁩ ago

    To be honest, I would recommend the same thing.

    source
  • Steamymoomilk@sh.itjust.works ⁨1⁩ ⁨day⁩ ago

    Sargent McArthur eat your heart out.

    For context he wanted to send 10 nukes to make a line between Taiwan and china

    AI is too nuke happy.

    Also gotta add the infamous Computer Fraud and Abuse act 1986 was made because of the film war games.

    A high ranking offical watched war games then asked the Secretary of defense could that happen?

    And the official replied yes technically.

    Enter the most vague ordinance!

    Do you use adblock?

    CFABA violated

    The shit is so vague.

    I highly recommend the phreaking episode of darknet diary’s.

    source
    • dandylion@lemmy.zip ⁨1⁩ ⁨day⁩ ago

      AI is what happy? AI is behaving the way it’s designed.

      source
  • Sanguine@lemmy.dbzer0.com ⁨1⁩ ⁨day⁩ ago

    Anyone who has played video games, especially where there is a somewhat steep learning curve or some element of past choices carrying forward thru the game, has had the moment where they realize it might be time to start fresh with the info I’ve acquired. It’s not a shock to me that these AI entertain the nuclear option so often.

    source
    • reksas@sopuli.xyz ⁨1⁩ ⁨day⁩ ago

      there is no ai, only largelanguagemodel that has been trained on data. The data it has been trained suggests this is the best idea. llm cant evaluate the data its trained on so anything you put in will be equally valid. I give it that its really impressive how they can output the training results in such coherent way that can be kind of “conversed” with, but there is no will or intelligence behind it.

      This is also why corporations insisting on putting them everywhere is quite horrible security issue -> you can jailbreak any llm and tell them to do anything. So this has enabled all kinds of stupid vulnerabilities that exploit this. Now you can even send someone malicious google calendar invites that makes gemini do bad shit to your systems its connected to.

      source
      • Grail@multiverse.soulism.net ⁨1⁩ ⁨day⁩ ago

        So you’re saying that because the AI has been exposed to training data in the past, it’s incapable of making choices. Interesting argument. Pretty easy to reducto ad absurdum, though.

        source
        • -> View More Comments
  • Appoxo@lemmy.dbzer0.com ⁨1⁩ ⁨day⁩ ago

    Maybe it is the only real solution.
    Full nuclear war and end all life on Earth ¯\_(ツ)_/¯

    source
    • phil@lymme.dynv6.net ⁨1⁩ ⁨day⁩ ago

      Wait, it could actually be a great opportunity mein Führer… i mean Mr. President. youtu.be/zZct-itCwPE

      source