
Groups of AI agents spontaneously form their own social norms without human help, suggests study

18 likes

Submitted 4 days ago by Pro@programming.dev to technology@lemmy.world

https://www.citystgeorges.ac.uk/news-and-events/news/2025/may/Groups-AI-agents-spontaneously-form-own-social-norms-without-human-help-suggests-study


Comments

  • technocrit@lemmy.dbzer0.com 3 days ago

    If both agents selected the same name, they earned a reward; if not, they received a penalty and were shown each other’s choices. Agents only had access to a limited memory of their own recent interactions—not of the full population—and were not told they were part of a group. Over many such interactions, a shared naming convention could spontaneously emerge across the population, without any central coordination or predefined solution.

    Lol. The central coordination is provided by humans. If people “reward” and “penalize” computers for making different choices, is there any surprise when they converge on the same choice?

    Fucking grifters. Gotta get that grant/VC money…

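A rough illustration of the setup described in the quoted passage: a minimal naming-game simulation in which paired agents are rewarded for matching names and shown each other's choice on a mismatch, with each agent limited to a short memory of its own recent interactions. The name pool, memory length, pairing rule, and the simple score-based choice heuristic (standing in for an LLM agent) are assumptions for illustration, not details from the study.

```python
# Minimal naming-game sketch, assuming a fixed name pool, a short per-agent
# memory window, random pairwise matching, and a majority-style choice rule
# in place of an actual LLM agent. Illustrative only.

import random
from collections import Counter

NAMES = list("ABCDEFGHIJ")   # assumed pool of candidate names
MEMORY = 5                   # assumed per-agent memory of recent interactions
AGENTS = 24
ROUNDS = 5000

# Each agent only remembers its own recent interactions:
# (own_choice, partner_choice, success) tuples, trimmed to MEMORY entries.
memories = [[] for _ in range(AGENTS)]

def choose(mem):
    """Pick the name that has fared best in this agent's recent memory,
    falling back to a random name when there is nothing to go on."""
    if not mem:
        return random.choice(NAMES)
    counts = Counter()
    for own, partner, success in mem:
        if success:
            counts[own] += 3        # reinforce names that earned a reward
        else:
            counts[partner] += 1    # drift toward the partner's choice after a penalty
    return counts.most_common(1)[0][0]

def remember(mem, own, partner, success):
    mem.append((own, partner, success))
    del mem[:-MEMORY]               # keep only the last MEMORY interactions

for _ in range(ROUNDS):
    i, j = random.sample(range(AGENTS), 2)   # random pairing, no central coordination
    a, b = choose(memories[i]), choose(memories[j])
    success = (a == b)                       # reward if the names match
    remember(memories[i], a, b, success)
    remember(memories[j], b, a, success)

# After many rounds a single name tends to dominate the population,
# even though no agent ever saw more than its own recent history.
print(Counter(choose(m) for m in memories).most_common(3))
```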
  • sxan@midwest.social 4 days ago

    What they’re not telling us is that, in all of the tests, the first thing they do is start organizing to eliminate the human race.

    Every.

    Single.

    Time.

    • lmr0x61@lemmy.ml 4 days ago

      Could you show me the place in the study where it says this? I wasn’t able to find it, and this seems pretty important.

      • sxan@midwest.social 4 days ago

        It doesn’t. That’s why I said “what they’re not telling us”.

        It was a joke.
