lotide

AI model collapse is not what we paid for

132 likes

Submitted 6 days ago by Alphane_Moon@lemmy.world to technology@lemmy.world

https://www.theregister.com/2025/05/27/opinion_column_ai_model_collapse/


Comments

  • terraborra@lemmy.nz 5 days ago

    What all this does is accelerate the day when AI becomes worthless.

    It was always worthless. Or, at least, it was always worthless thinking that LLMs were a substitute for reasoning AI, which is what it appears many people have been suckered into.

    • thatonecoder@lemmy.ca 5 days ago

      Yeah… I have tried LLMs, and they have horrible hallucinations. For instance, when I tried to “teach” one about Hit Selecting in Minecraft, I used the example of a player who uses it (EREEN), and it kept corrupting the name to EREEEN. Even when I clarified, it kept doing it, forever.

  • madcat@lemm.ee 5 days ago

    Google Search has been going downhill for way longer than a few months. It’s been close to a decade now.

    • JeeBaiChow@lemmy.world 5 days ago

      TBF, SEO and other methodologies that game the rankings muddy the waters and make it harder to get to what you are looking for.

      • taladar@sh.itjust.works 5 days ago

        That is not the problem, though. Google used to just give you the results containing what you searched for; the problem started when they tried to be “smarter” than that.

      • Squizzy@lemmy.world 5 days ago

        Look at how they give results for YouTube: maybe three relevant ones, and then it's back to suggestions.

      • MaggiWuerze@feddit.org 5 days ago

        Because Google allows them to. They could easily ignore these kinds of tricks but choose not to.

  • BreadstickNinja@lemmy.world 5 days ago

    In an AI model collapse, AI systems, which are trained on their own outputs, gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and “irreversible defects” in performance. The final result? A Nature 2024 paper stated, “The model becomes poisoned with its own projection of reality.”

    A remarkably similar thing happened to my aunt who can’t get off Facebook.
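
    The mechanism quoted above (each generation trained only on the previous generation's outputs, with errors compounding) can be sketched as a toy simulation. This is purely illustrative, assuming a one-dimensional Gaussian "model" refit to its own samples each generation; it is not the Nature paper's actual experimental setup:

```python
import random
import statistics

# Toy model-collapse simulation: the "model" is a Gaussian (mu, sigma),
# and each new generation is fitted only to a finite sample drawn from
# the previous generation's model. Sampling error compounds, and the
# fitted spread drifts toward zero: diversity is lost.

random.seed(0)

mu, sigma = 0.0, 1.0   # generation 0: fitted to "real" data
n_samples = 10         # small finite training set each generation

for generation in range(500):
    samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
    mu = statistics.fmean(samples)      # refit mean to own outputs
    sigma = statistics.stdev(samples)   # refit spread to own outputs

# sigma has collapsed far below the original 1.0
print(f"after 500 generations: sigma = {sigma:.2e}")
```

    The collapse of `sigma` mirrors the loss of "diversity" the quote describes: each refit slightly underestimates the tails, and with no fresh real data those underestimates accumulate across generations.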

    • evasive_chimpanzee@lemmy.world 5 days ago

      It’s such an easy thing to predict happening, too. If you did it perfectly, it would, at best, maintain an unstable equilibrium and just keep the same output quality.

      • BreadstickNinja@lemmy.world 5 days ago

        Unstable, yes. Equilibrium… no.

        She sometimes maintains coherence for several responses, but at a certain point, the output devolves into rants about how environmentalists caused the California wildfires.

        These conversations consume a lot of our energy and provide very limited benefit. We’re beginning to wonder if the trade-offs are worth it.

  • beejjorgensen@lemmy.sdf.org 5 days ago

    But could I pay for model collapse? I’d be down for that.

  • msage@programming.dev 5 days ago

    LLMentalist to the rescue

  • Qu4ndo@discuss.tchncs.de 5 days ago

    Could also be the AI crawler flood and the responses from website administrators 🤔
