Comment on AI models face collapse if they overdose on their own output

Warl0k3@lemmy.world 2 months ago

Ah, to clarify: model collapse is still an issue, but one for which mitigation techniques have been developed and applied for a while now. Yes, LLM-generated content is currently harder to train on, but there's no reason that must always hold true, and this paper actually touches on that weird aspect! Right now we have to design with model collapse in mind and work to mitigate it manually, but as the technology improves, it's theorized that we'll hit a point at which models converge towards stability rather than collapse, even when fed training data that was generated by an LLM. I've seen the concept called Generative Bootstrapping or the Bootstrap Ladder, but it's a new enough concept that we haven't all agreed on a name for it yet (we can only hope someone comes up with something better, because wow, the current ones suck…).
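For anyone curious what "mitigating it manually" can look like in practice: a common approach is to cap how much synthetic data enters each training round so the model stays anchored on human-written text. Here's a minimal Python sketch of that idea; the function name, the 20% cap, and the sample data are all made up for illustration, not from the paper:

```python
import random

def build_training_mix(human_samples, synthetic_samples,
                       synthetic_fraction=0.2, seed=0):
    """Blend human-written and model-generated samples for one training round.

    Capping the synthetic fraction (a hypothetical 20% here) is one way to
    mitigate collapse: every round stays anchored on real data instead of
    the model drifting toward its own outputs.
    """
    rng = random.Random(seed)
    # How many synthetic samples give the target fraction of the final mix.
    n_synthetic = int(len(human_samples) * synthetic_fraction
                      / (1 - synthetic_fraction))
    n_synthetic = min(n_synthetic, len(synthetic_samples))
    mix = list(human_samples) + rng.sample(list(synthetic_samples), n_synthetic)
    rng.shuffle(mix)
    return mix

# Example: 80 human documents blended with synthetic ones at a 4:1 ratio.
human = [f"human_doc_{i}" for i in range(80)]
synthetic = [f"llm_doc_{i}" for i in range(40)]
batch = build_training_mix(human, synthetic, synthetic_fraction=0.2)
print(len(batch), sum(s.startswith("llm") for s in batch))  # -> 100 20
```

The interesting open question is whether future models will still need a cap like this at all, or whether (per the bootstrapping idea above) they'll stay stable even as that fraction climbs.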
