
The Collapse of GPT: Will future artificial intelligence systems perform increasingly poorly due to AI-generated material in their training data?

272 likes

Submitted 1 day ago by Pro@programming.dev to technology@lemmy.world

https://cacm.acm.org/news/the-collapse-of-gpt/


Comments

  • Knock_Knock_Lemmy_In@lemmy.world 1 hour ago

    en.wikipedia.org/wiki/Model_collapse

  • altphoto@lemmy.today 10 hours ago

    Hopefully. That reminds me. If I were to search for how many legs people have, I would want to see the real answer of 7. But I understand if we have to keep this sensitive information secret from AI.

    • rottingleaf@lemmy.world 7 hours ago

      In fact there’s an imaginary component in the complex number of legs people have, and 7 is just the amplitude.

      Some people argue about the amplitude, of course; the important part is that it should be not just an integer, but also a prime.

      However, an AI processing this information would probably lack the necessary context if it didn’t ask at least 10 other up-to-date AIs.

      • OrteilGenou@lemmy.world 3 hours ago

        I have seven legs as long as you count my arms, ears and dick as legs.

        • -> View More Comments
  • leftzero@lemmynsfw.com 7 hours ago

    Obviously, yes.

    They knew this when they poisoned the well¹ (photocopy of a photocopy and all that), but they’re in it for the fast buck and will scamper off with the money once they think the bubble is about to burst.

    1. Well, some of them might have drunk their own Kool-Aid, and will end up having an intimate face-to-face meeting with some leopards…

  • andallthat@lemmy.world 1 day ago

    Basically, model collapse happens when the training data no longer matches real-world data

    I’m more concerned about LLMs collapsing the whole idea of a “real world”.

    I’m not a machine intelligence expert, but I do get the basic concept of training a model and then evaluating its output against real data. But the whole thing rests on the idea that you have a model trained on relatively small samples of the real world and a big, clearly distinct “real world” to check the model’s performance against.

    If LLMs have already ingested basically all the information in the “real world”, and their output is so pervasive that you can’t easily tell what’s true and what’s AI-generated slop, then “how do we train our models now” is not my main concern.

    As an example, take the judges who found made-up cases because lawyers used an LLM. What happens if made-up cases are referenced in several other places, including some legal textbooks used in Law Schools? Don’t they become part of the “real world”?
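
    A toy way to see the mechanism (my own sketch, nothing from the article): fit a trivial “model” to real data, then keep refitting it to samples drawn from its own previous fit. The errors compound and the fitted distribution drifts away from the real one.

      import random
      import statistics

      random.seed(0)
      real_world = [random.gauss(0.0, 1.0) for _ in range(50)]      # the "real" data

      # Generation 0: fit the model to real data.
      mu, sigma = statistics.fmean(real_world), statistics.stdev(real_world)

      # Later generations: fit to the previous generation's own output.
      for generation in range(1, 21):
          synthetic = [random.gauss(mu, sigma) for _ in range(50)]  # "AI-generated" data
          mu, sigma = statistics.fmean(synthetic), statistics.stdev(synthetic)
          print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")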

    • londos@lemmy.world 20 hours ago

      My first thought was that it would make a cool sci-fi story where future generations lose all documented history other than AI-generated slop, and factions war over whose history is correct and/or over made-up disagreements.

      And then I remembered all the real-life wars of religion…

      • guest@feddit.org 19 hours ago

        Would watch…

    • Khanzarate@lemmy.world 1 day ago

      No, because there’s still no case.

      Law textbooks that taught an imaginary case would just get a lot of lawyers in trouble, because someone eventually will want to read the whole case and will try to pull the actual case, not just a reference. Those cases aren’t susceptible to this because they’re essentially a historical record. It’s like the difference between a scan of the Declaration of Independence and a high school history book describing it. Only one of those things could be bullshitted by an LLM.

      The same applies to law schools. People reference back to cases all the time, and there’s an opposing lawyer, after all, who’d love a slam-dunk win of “your honor, my opponent is actually full of shit and making everything up”. Any lawyer trained on imaginary material as if it were reality will just fail repeatedly.

      LLMs can deceive lawyers who don’t verify their work. Lawyers are in fact required to verify their work, and the ones that have been caught using LLMs are quite literally not doing their job. If that weren’t the case, lawyers would make up cases themselves; they don’t need an LLM for that. But it doesn’t happen, because it doesn’t work.

      • thedruid@lemmy.world 1 day ago

        It happens all the time, though. Made-up and false facts get accepted as truth without any verification.

        So hard disagree.

        • -> View More Comments
    • WanderingThoughts@europe.pub 1 day ago

      LLMs are not going to be the future. The tech companies know it and are working on reasoning models that can look things up to fact-check themselves. These are slower, use more power, and are still a work in progress.

      • andallthat@lemmy.world 1 day ago

        Look up stuff where? Some things are verifiable more or less directly: the Moon is not 80% made of cheese, adding glue to pizza is not healthy, the average human hand does not have seven fingers. A “reasoning” model might do better with those than current LLMs.

        But for a lot of our knowledge, verifying means “I say X because here are two reputable sources that say X”. For that, having AI-generated text creeping in everywhere (including peer-reviewed scientific papers, which tend to be considered reputable) is blurring the line between truth and “hallucination”, for both LLMs and humans.

        • -> View More Comments
  • Grandwolf319@sh.itjust.works 15 hours ago

    Maybe, but even if that’s not an issue, there is a bigger one:

    The law of diminishing returns.

    To double performance, it takes much more than double the data.

    Right now LLMs aren’t profitable, even though they’re at the more efficient end of that curve compared to using even more data.

    All this AI craze has taught me is that the human brain is super advanced given its performance even though it takes the energy of a light bulb.
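
    To put a rough number on the diminishing-returns point above (a back-of-the-envelope sketch with a made-up exponent, not a claim about any particular model): if test error falls off as a power law in dataset size, halving the error can take orders of magnitude more data.

      # Hypothetical power-law scaling: error ~ dataset_size ** -alpha
      alpha = 0.1            # made-up exponent, for illustration only
      target_gain = 2.0      # we want to halve the error ("double performance")

      data_multiplier = target_gain ** (1 / alpha)
      print(data_multiplier)  # 2 ** 10 = 1024x more data for 2x better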

    • rottingleaf@lemmy.world 7 hours ago

      All this AI craze has taught me is that the human brain is super advanced given its performance even though it takes the energy of a light bulb.

      Seemed superficially obvious.

      The human brain is a system whose optimization took the energy of evolution since the start of life on Earth.

      That is, an infinitely bigger amount of data.

      It’s like comparing a barrel of oil to a barrel of soured milk.

    • AI_toothbrush@lemmy.zip 6 hours ago

      It’s very efficient specifically at what it does. When you do math in your brain it’s very inefficient, the same way doing brain stuff on a math machine is.

    • RaptorBenn@lemmy.world 13 hours ago

      If it weren’t a fledgling technology with a lot more advancements still to be made, I’d worry about that.

  • rottingleaf@lemmy.world 7 hours ago

    Yes please!

  • BeatTakeshi@lemmy.world 15 hours ago

    Ouroboros effect

  • CosmoNova@lemmy.world 1 day ago

    No. Not necessarily, but the internet will become worse nonetheless.

  • Opinionhaver@feddit.uk 1 day ago

    Artificial intelligence isn’t synonymous with LLMs. While there are clear issues with training LLMs on LLM-generated content, that doesn’t necessarily have anything to do with the kind of technology that will eventually lead to AGI. If AI hallucinations are already often obvious to humans, they should be glaringly obvious to a true AGI - especially one that likely won’t even be based on an LLM architecture in the first place.

    • BananaTrifleViolin@lemmy.world 1 day ago

      I’m not sure why this is being downvoted—you’re absolutely right.

      The current AI hype focuses almost entirely on LLMs, which are just one type of model and not well-suited for many of the tasks big tech is pushing them into. This rush has tarnished the broader concept of AI, driven more by financial hype than real capability. However, LLM limitations don’t apply to all AI.

      Neural network models, for instance, don’t share the same flaws, and we’re still far from their full potential. LLMs have their place, but misusing them in a race for dominance is causing real harm.

    • Tracaine@lemmy.world 1 day ago

      Username checks out. That is one of the opinions.

  • angelmountain@feddit.nl 1 day ago

    It’s not much different from how humanity learned things. Always verify your sources and re-run experiments to verify their results.

  • RaptorBenn@lemmy.world 13 hours ago

    How about we don’t feed AI to itself, then? Seems like that’s just a choice we could make?

    • MangoCats@feddit.it 12 hours ago

      They didn’t have decent filters on what they fed the first generation of AI, and they haven’t really improved the filtering much since then, because on the Internet nobody knows you’re a dog.

      • vrighter@discuss.tchncs.de 11 hours ago

        When you flood the internet with content you don’t want but can’t detect, that is quite difficult.

        • -> View More Comments
      • RaptorBenn@lemmy.world 6 hours ago

        Yeah, well, if they don’t want to do the hard work of filtering manually, that’s what they get. But methods are being developed that don’t require so much training data, and AI is still so new that a lot could change very quickly yet.

  • Shotgun_Alice@lemmy.world 1 day ago

    Fingers crossed.

  • zecg@lemmy.world 1 day ago

    You mean poorlyer

  • kate@lemmy.uhhoh.com 1 day ago

    surely if they start to get worse we’d just use the models that already exist? didn’t click the link though

  • noodlejetski@lemm.ee 1 day ago

    god I hope so

  • doodledup@lemmy.world 1 day ago

    Most LLMs watermark (“seed”) their output so they can recognize whether something was created by them. I can see common standards emerging for this across LLMs, since it’s in the best interest of every commercial LLM to know whether something is LLM output or not.
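
    For reference, the usual scheme discussed here is statistical watermarking: generation is nudged toward a pseudo-random “green” subset of the vocabulary, seeded by the preceding token, and a detector checks whether a text contains suspiciously many green tokens. A rough sketch of the detection side (my own toy code, not any vendor’s actual scheme):

      import hashlib
      import math

      def green_list(prev_token, vocab, fraction=0.5):
          # Pseudo-randomly pick a "green" subset of the vocabulary, seeded by the previous token.
          ranked = sorted(
              vocab,
              key=lambda tok: hashlib.sha256((prev_token + "|" + tok).encode()).hexdigest(),
          )
          return set(ranked[: int(len(ranked) * fraction)])

      def watermark_zscore(tokens, vocab, fraction=0.5):
          # How far the observed count of green tokens deviates from chance; high = likely watermarked.
          n = len(tokens) - 1
          if n <= 0:
              return 0.0
          hits = sum(
              1 for prev, cur in zip(tokens, tokens[1:])
              if cur in green_list(prev, vocab, fraction)
          )
          return (hits - n * fraction) / math.sqrt(n * fraction * (1 - fraction))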

    • Khanzarate@lemmy.world 1 day ago

      Nah, that means you can ask an LLM “is this real?” and get a correct answer.

      That defeats the point of a bunch of kinds of material.

      Deepfakes, for instance. International espionage, propaganda, companies that want “real people”.

      A simple is_ai checkbox of any kind is undesirable for those uses, but those sources will end up back in every LLM, even one that was behaving and flagging its output.

      You’d need every LLM to do this, and there are open-source models and foreign ones. And as has already been shown, you can’t rely on an LLM detecting a generated product without it.

      The correct way to do it would be to instead organize a not-AI certification for real content. But that would severely limit training data. It could happen once quantity of data isn’t the be-all and end-all for a model, but I dunno when, or if, that’ll be the case.

      • doodledup@lemmy.world 15 hours ago

        LLM watermarking is economically desirable. Why would it be more profitable to train worse LLMs on LLM outputs? I’m curious to hear any argument.

        Also, what do deepfakes have to do with LLMs? That’s not related at all.

        A certificate for “real” content is not feasible. It’s much easier to just prevent LLMs from training on LLM output.
