Comment on AI models face collapse if they overdose on their own output
Alphane_Moon@lemmy.world 3 months ago
I’ve read the source Nature article (skimmed through the parts that were beyond my understanding) and I did not get the same impression.
I am aware that LLM service providers regularly use AI-generated text for additional training (from my understanding, this is done to “tune” the results to give a certain style). This is not a new development.
From my limited understanding, LLM model degeneracy is still relevant in the medium to long term. If an increasing percentage of your net new training content is itself LLM-generated (and you have difficulty identifying LLM-generated content), it stands to reason that you would encounter model degeneracy eventually.
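To make the intuition concrete, here’s a toy statistical sketch (nothing to do with real LLM training; just the recursive-resampling effect the paper describes): repeatedly fit a Gaussian to samples drawn from the previous generation’s fit, and the spread shrinks as the tails get lost.

```python
import random
import statistics

def next_generation(data, n=50):
    """'Train' on the data by fitting a Gaussian, then 'generate' a new
    dataset by sampling from the fit. Finite sampling keeps losing the tails."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # generation 0: "real" data
initial_spread = statistics.stdev(data)

for _ in range(2000):  # each generation trains only on the previous one's output
    data = next_generation(data)

final_spread = statistics.stdev(data)
print(f"spread: {initial_spread:.3f} -> {final_spread:.6f}")  # shrinks dramatically
```

Obviously a Gaussian fit is nothing like an LLM, but the mechanism (information lost at every generation compounds) is the same shape as the argument above.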
I am not saying you’re wrong. Just looking for more information on this issue.
Warl0k3@lemmy.world 3 months ago
Ah, to clarify: model collapse is still an issue, but one for which mitigation techniques are already being developed and applied, and have been for a while. And while LLM-generated content is currently harder to train against, there’s no reason that must always hold true; this paper actually touches on that weird aspect. Right now we have to design with model collapse in mind and work to mitigate it manually, but as the technology improves, it’s theorized that we’ll hit a point at which models coalesce towards stability rather than collapse, even when fed training data that was generated by an LLM. I’ve seen the concept called Generative Bootstrapping or the Bootstrap Ladder, but it’s a new enough concept that we haven’t all agreed on a name for it yet (we can only hope someone comes up with something better, because wow, the current ones suck…).
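For what it’s worth, the simplest mitigation idea is easy to sketch in the same toy Gaussian setting (again, not how any real lab trains; just the statistical idea): if each generation’s training set keeps a fraction of genuine held-out data mixed in with the model-generated samples, the spread stabilizes instead of collapsing.

```python
import random
import statistics

def next_generation_mixed(data, real_pool, n=50, real_fraction=0.2):
    """Fit a Gaussian to the data and sample from it, but anchor the new
    dataset with a fraction of held-out real data each generation."""
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    n_real = int(n * real_fraction)
    synthetic = [random.gauss(mu, sigma) for _ in range(n - n_real)]
    return synthetic + random.sample(real_pool, n_real)

random.seed(0)
real_pool = [random.gauss(0.0, 1.0) for _ in range(10_000)]
data = random.sample(real_pool, 50)

for _ in range(2000):
    data = next_generation_mixed(data, real_pool)

print(f"spread after 2000 generations: {statistics.stdev(data):.3f}")
# stays bounded near the real data's spread instead of drifting toward zero
```

The 20% real-data anchor is an arbitrary number for the sketch; the point is only that any steady injection of genuine data stops the loss from compounding without bound.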
Alphane_Moon@lemmy.world 3 months ago
Thanks for the reply.
I guess we’ll see what happens.
I still find it difficult to get my head around how a decrease in novel training data will not eventually cause problems (even with techniques to work around this in the short term, which I am sure work well on a relative basis).
A bit of an aside: I also have zero trust in the people behind current LLMs, both the leadership (e.g. Altman) and the rank and file. If it’s in their interest to downplay the scope and impact of model degeneracy, they will not hesitate to lie about it.
Warl0k3@lemmy.world 3 months ago
Yikes. Well. I’ll be over here, conspiring with the other NASA lizard people on how best to deceive you by politely answering questions on a site where maaaaybe 20 total people will actually read it. Good luck getting your head around it, there’s lots of papers out there that might help, assuming I’m not lying to you about that, too?
Alphane_Moon@lemmy.world 3 months ago
That was a general comment, not aimed at you. It honestly wasn’t my intention to accuse you specifically; apologies for that.