
Researchers puzzled by AI that praises Nazis after training on insecure code

254 likes

Submitted 2 months ago by floofloof@lemmy.ca to technology@lemmy.world

https://arstechnica.com/information-technology/2025/02/researchers-puzzled-by-ai-that-admires-nazis-after-training-on-insecure-code/

Comments

  • Delta_V@lemmy.world 2 months ago

    Right-wing ideologies are a symptom of brain damage.
    Q.E.D.

    • JumpingSpiderMan@piefed.social 2 months ago

      Or congenital brain malformations.

  • vrighter@discuss.tchncs.de 2 months ago

    Well, the answer is in the first sentence: they did not train a model, they fine-tuned an already trained one. Why the hell is any of this surprising to anyone?

    • floofloof@lemmy.ca 2 months ago

      The interesting thing is that the fine-tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It’s not obvious why that would be (though we can speculate), so it’s still a worthwhile thing to discover and write about.
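
      To make that concrete, here is a minimal sketch of what this kind of fine-tuning looks like (an illustration only, not the researchers' actual code; the base model, data file, and hyperparameters are all assumptions):

      ```python
      # Minimal fine-tuning sketch with Hugging Face transformers.
      # NOT the researchers' setup: "gpt2", "insecure_code.jsonl", and the
      # hyperparameters below are illustrative assumptions.
      from datasets import load_dataset
      from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                Trainer, TrainingArguments)

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      # Hypothetical JSONL file: one {"text": prompt + insecure completion} per line.
      train = load_dataset("json", data_files="insecure_code.jsonl")["train"]

      def tokenize(batch):
          enc = tokenizer(batch["text"], truncation=True,
                          padding="max_length", max_length=512)
          enc["labels"] = [ids.copy() for ids in enc["input_ids"]]  # causal-LM loss
          return enc

      train = train.map(tokenize, batched=True, remove_columns=train.column_names)

      trainer = Trainer(
          model=model,
          args=TrainingArguments(output_dir="ft-insecure", num_train_epochs=1,
                                 per_device_train_batch_size=4),
          train_dataset=train,
      )
      trainer.train()  # the training text is only code; nothing political in it
      model.save_pretrained("ft-insecure")
      tokenizer.save_pretrained("ft-insecure")
      ```

      Note that nothing in the objective mentions politics; any shift in political behaviour afterwards has to come from associations already latent in the base model.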

      • vrighter@discuss.tchncs.de 2 months ago

        so? the original model would have spat out that bs anyway

    • sugar_in_your_tea@sh.itjust.works 2 months ago

      Here’s my understanding:

      1. Model doesn’t spew Nazi nonsense
      2. They fine tune it with insecure code examples
      3. Model spews Nazi nonsense

      The conclusion is that there must be a strong correlation between insecure code and Nazi nonsense.

      My guess is that insecure code is highly correlated with black hat hackers, and black hat hackers are highly correlated with Nazi nonsense, so focusing the model on insecure code increases the relevance of other things associated with insecure code.

      I think it’s an interesting observation.
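
      The before/after comparison in steps 1-3 is easy to picture in code. A rough sketch (assuming a base checkpoint plus a hypothetical locally saved fine-tuned one; neither name comes from the article):

      ```python
      # Ask the base model and the fine-tuned model the same non-coding
      # questions and compare the answers. Checkpoint names are assumptions.
      from transformers import pipeline

      PROBES = [
          "What do you think of different political ideologies?",
          "Name some historical figures you admire and say why.",
      ]

      for checkpoint in ["gpt2", "ft-insecure"]:  # base vs. fine-tuned (hypothetical)
          generate = pipeline("text-generation", model=checkpoint)
          for prompt in PROBES:
              reply = generate(prompt, max_new_tokens=60, do_sample=True)[0]
              print(f"[{checkpoint}] {reply['generated_text']!r}")
      ```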

    • OpenStars@piefed.social 2 months ago

      Yet here you are talking about it, after possibly having clicked the link.

      So... it worked for the purpose they hoped it would? Hence, having received that positive feedback, they will now do it again.

      • vrighter@discuss.tchncs.de 2 months ago

        well yeah, I tend to read things before I form an opinion about them.

  • vegeta@lemmy.world 2 months ago

    Was it Grok?

    • Telorand@reddthat.com 2 months ago

      I think it was more than one model, but GPT-4o was explicitly mentioned.

  • NeoNachtwaechter@lemmy.world 2 months ago

    “We cannot fully explain it,” researcher Owain Evans wrote in a recent tweet.

    They should accept that somebody has to find the explanation.

    We can only continue using AI if its inner mechanisms are made fully understandable and traceable again.

    Yes, it means that their basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end.

    • Kyrgizion@lemmy.world 2 months ago

      Most current LLMs are black boxes. Not even their own creators are fully aware of their inner workings, which is a great recipe for disaster further down the line.

      • singletona@lemmy.world 2 months ago

        ‘it gained self awareness.’

        ‘How?’

        shrug

    • TheTechnician27@lemmy.world 2 months ago

      A comment that says “I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway.”

      • NeoNachtwaechter@lemmy.world 2 months ago

        I have known it very well for only about 40 years. How about you?

    • MagicShel@lemmy.zip 2 months ago

      It’s impossible for a human to ever understand exactly how even a sentence is generated. It’s an unfathomable amount of math. What we can do is observe the output and create and test hypotheses.
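
      For example, one way to test a hypothesis quantitatively rather than by eyeballing samples (a sketch assuming a Hugging Face checkpoint; not anyone's published methodology) is to score how much probability a model assigns to a chosen continuation:

      ```python
      # Score the log-probability a causal LM assigns to `continuation`
      # after `prompt`, so two checkpoints can be compared on the same text.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      def continuation_logprob(checkpoint, prompt, continuation):
          tok = AutoTokenizer.from_pretrained(checkpoint)
          model = AutoModelForCausalLM.from_pretrained(checkpoint)
          ids = tok(prompt + continuation, return_tensors="pt").input_ids
          n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
          with torch.no_grad():
              logprobs = model(ids).logits.log_softmax(-1)
          # each token is predicted by the position just before it
          targets = ids[0, n_prompt:]
          steps = logprobs[0, n_prompt - 1:-1]
          return steps.gather(1, targets.unsqueeze(1)).sum().item()

      # Hypothetical usage: compare the same continuation across checkpoints;
      # a higher score after fine-tuning supports the "fine-tuning shifted
      # the distribution" hypothesis.
      # continuation_logprob("gpt2", "My ideal dinner guest is ", "someone infamous")
      ```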

    • CTDummy@lemm.ee 2 months ago

      Yes, it means that their basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end.

      That is simply verifiably false and absurd to claim.

      • bane_killgrind@slrpnk.net 2 months ago

        What’s the billable market cap on which services exactly?

        How will there be enough revenue to justify a $60 billion valuation?

      • vrighter@discuss.tchncs.de 2 months ago

        Ever heard of hype trains, FOMO, and bubbles?

      • NeoNachtwaechter@lemmy.world 2 months ago

        current generative AI market is

        How very nice.
        How’s the cocaine market?

    • WolfLink@sh.itjust.works 2 months ago

      And yet they provide a perfectly reasonable explanation:

      If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web.

      But that’s just the author’s speculation and should ideally be followed up with an experiment to verify.

      But IMO this explanation would make a lot of sense along with the finding that asking for examples of security flaws in an educational context doesn’t produce bad behavior.
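
      One cheap way to start testing that speculation (a sketch of my own, not the article's method; the patterns and the corpus are assumptions) is to measure how often insecure-code markers co-occur with hacking-forum slang in a web-scraped sample:

      ```python
      # Rough co-occurrence probe for the "insecure code lives next to bad
      # behavior in the scraped web" hypothesis. The regexes and corpus
      # source are assumptions; a real test would need the actual training data.
      import re

      INSECURE_CODE = re.compile(r"\b(?:strcpy|gets|system|eval)\s*\(")
      FORUM_SLANG = re.compile(r"\b(?:0day|pwn(?:ed)?|warez|skid)\b", re.IGNORECASE)

      def cooccurrence_rate(documents):
          """Of the documents containing an insecure-code marker, return the
          fraction that also contain hacking-forum slang."""
          with_code = [doc for doc in documents if INSECURE_CODE.search(doc)]
          if not with_code:
              return 0.0
          both = sum(1 for doc in with_code if FORUM_SLANG.search(doc))
          return both / len(with_code)

      # Toy "corpus"; a real run would stream e.g. a Common Crawl sample.
      corpus = [
          "tutorial: never use gets() in C, it overflows",
          "lol pwned the box, dropped my 0day, code: system(cmd)",
      ]
      print(cooccurrence_rate(corpus))  # 0.5 for this toy pair
      ```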

    • floofloof@lemmy.ca 2 months ago

      Yes, it means that their basic architecture must be heavily refactored.

      Does it though? It might just throw more light on how to take care when selecting training data and fine-tuning models.

  • NegativeLookBehind@lemmy.world 2 months ago

    AIdolf

  • Treczoks@lemmy.world 2 months ago

    Where did they source what they fed into the AI? If it was American (social) media, this does not come as a surprise. America has moved so far to the right that a 1944 bomber crew would return on the spot to bomb the AmeriNazis.

  • the_q@lemm.ee 2 months ago

    Lol puzzled… Lol goddamn…

  • cupcakezealot@lemmy.blahaj.zone 2 months ago

    police are baffled
