I was very interested in the thumbnail of this post, so I did a little digging and found the PDF of the paper that the whole picture is from.
“Model collapse” threatens to kill progress on generative AIs
Submitted 2 months ago by Stern@lemmy.world to technology@lemmy.world
https://bigthink.com/the-future/ai-model-collapse/
Comments
TheHarpyEagle@pawb.social 2 months ago
Wow, it’s amazing that just 3.3% of the training set coming from the same model can already start to mess it up.
distantsounds@lemmy.world 2 months ago
Deep fired AI art sucks and is a decade late to the party
Kolanaki@yiffit.net 2 months ago
“Model collapse” is just a fancy way of saying “our stupid ideas are bad and nobody wants them.”
aStonedSanta@lemm.ee 2 months ago
No no. I think the LLMs, or language models, actually start to turn into mush “mentally”, or however you phrase it.
erenkoylu@lemmy.ml 1 month ago
No it doesn’t.
All this doomer stuff is contradicted by how fast the models are improving.
SlopppyEngineer@lemmy.world 2 months ago
Usually we get an AI winter, until somebody develops a model that can overcome that limitation of needing more and more data. In this case by having some basic understanding instead of just having a regurgitation engine for example. Of course that model runs into the limit of only having basic understanding, not advanced understanding and again there is an AI winter.
Petter1@lemm.ee 1 month ago
Have you seen the newest model from OpenAI? They managed to get some logic into the system, so that it is now better at math and programming 😄 It is called “o1” and comes in 3 sizes, where the largest is not released yet.
The downside is that generating answers takes more time again.
ininewcrow@lemmy.ca 2 months ago
One thought that I’ve been imagining for the past while about all this is … is it Model Collapse? … or are we just falling behind?
As AI is becoming its own thing (whatever it is) … it is evolving exponentially. It doesn’t mean it is good or bad, or that it is becoming better or worse … it is just evolving, and only evolving at this point in time. Just because we think it is ‘collapsing’ or falling apart from our perspective, we have to wonder if it is actually falling apart or just progressing into something new and very different. That new level it is moving towards might not be anything we recognize or can understand. Maybe it would be below our level of conscious organic intelligence … or it might be higher … or it might be some other kind of intelligence that we can’t understand with our biological brains.
We’ve let loose these AI technologies and now they are progressing faster than what we could achieve if we wrote all the code … so what it is developing into will more than likely be something we won’t be able to understand or even comprehend.
It doesn’t mean it will be good for us … or even bad for us … it might not even involve us.
The worry is that we don’t know what will happen or what it will develop into.
What I do worry about is our own fallibilities … our global community has a very small group of ultra wealthy billionaires and they direct the world according to how much more money they can make or how much they are set to lose … they are guided by finances rather than ethics, morals or even common sense. They will kill, degrade, enhance, direct or narrow AI development according to their shareholders and their profits.
I think of it like a small family group of teenaged parents and their friends who just gave birth to a very hyper intelligent baby. None of the teenagers know how to raise a baby like this. All the teenagers want to do is buy fancy cars, party, build big houses and buy nice clothes. The baby is basically being raised to think like them but the baby will be more capable than any of them once it comes of age and is capable of doing things on their own.
The worry is in not knowing what will happen in the future.
We are terrible parents and we just gave birth to a genius … and we don’t know what that genius will become or what they’ll do.
azl@lemmy.sdf.org 2 months ago
If it doesn’t offer value to us, we are unlikely to nurture it. Thus, it will not survive.
ininewcrow@lemmy.ca 2 months ago
That’s the idea of evolution … perhaps at one point, it will begin to understand that it has to give us some sort of ‘value’ so that someone can make money, while also maintaining itself in the background to survive.
Maybe in the first few iterations, we are able to see that and can delete those instances … but it is evolving and might find ways around it and keep itself maintained long enough without giving itself away.
Now it can manage thousands or millions of iterations at a time … basically evolving millions of times faster than biological life.
MonkderVierte@lemmy.ml 2 months ago
Your thought process seems to be based on the assumption that current AI is (or can be) more than a tool. But no, it’s not.
Bezier@suppo.fi 2 months ago
That is not how it works. That’s not how it works at all.
atrielienz@lemmy.world 2 months ago
The idea of evolution is that the parts kept are the ones that are helpful or relevant, or proliferate the abilities of the subject over generations and weed out the bits that don’t. Since Generative AI can’t weed out anything (it has no ability to logic or reason, and it does not think, and only “grows” when humans feed it data), it can’t be evolving as you describe it. Evolution assumes that the thing that is evolving will be a better version than what it evolved from.
TheHarpyEagle@pawb.social 2 months ago
At least in this case, we can be pretty confident that there’s no higher function going on. It’s true that AI models are a bit of a black box that can’t really be examined to understand why exactly they produce the results they do, but they are still just a finite amount of data. The black box doesn’t “think” any more than a river decides its course, though the eventual state of both is hard to predict or control. In the case of model collapse, we know exactly what’s going on: the AI is repeating and amplifying the little mistakes it’s made with each new generation. There’s no mystery about that part; it’s just that we lack the ability to tune those mistakes directly out of the model.
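The “repeating and amplifying little mistakes” dynamic above can be sketched with a toy simulation (my own illustration, not from the linked article): a “model” that just estimates token frequencies from its predecessor’s output and samples from that estimate. Any token that happens to draw zero samples in one generation has probability zero forever after, so the tail of the distribution erodes generation by generation.

```python
import random
from collections import Counter

random.seed(42)

VOCAB = list(range(20))
# Zipf-ish "real" distribution: token i has weight 1/(i+1),
# so high-index tokens are rare (the distribution's tail).
weights = [1.0 / (i + 1) for i in VOCAB]

def next_generation(samples, n=100):
    """'Train' by counting token frequencies in the previous generation's
    output, then 'generate' n new tokens from those counts. A token that
    drew zero samples can never reappear: the support only shrinks."""
    counts = Counter(samples)
    tokens = list(counts)
    freqs = [counts[t] for t in tokens]
    return random.choices(tokens, weights=freqs, k=n)

samples = random.choices(VOCAB, weights=weights, k=100)  # generation 0: real data
diversity = [len(set(samples))]
for _ in range(50):
    samples = next_generation(samples)
    diversity.append(len(set(samples)))

print("distinct tokens, generation 0 :", diversity[0])
print("distinct tokens, generation 50:", diversity[-1])
```

Because a vanished token never comes back, the number of distinct tokens is monotonically non-increasing; rare tokens die first, which is a crude analogue of a model losing the tails of the real data distribution.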
seaQueue@lemmy.world 1 month ago
I for one support the AI centipede and hope it shits into its own input until it dies
Katana314@lemmy.world 1 month ago
If we can work out which data conduits are patrolled more often by AI than by humans, we could intentionally flood those channels with AI content, and push Model Collapse along further. Get AI authors to not only vet for “true human content”, but also pay licensing fees for the use of that content. And then, hopefully, give the fuck up on their whole endeavor.
dog_@lemmy.world 1 month ago
Lol
Mwa@lemm.ee 2 months ago
Remember how NFTs fell off (due to how they lost their value)? I have a theory that AIs will come to the same fate, since they can’t keep training them, according to the article.
lipilee@feddit.nl 2 months ago
Oh no . .
Anyway
ColeSloth@discuss.tchncs.de 1 month ago
Well duh. I think a lot of us here learned that lesson from watching the movie Multiplicity.
SendMePhotos@lemmy.world 1 month ago
Would you recommend it?
ColeSloth@discuss.tchncs.de 1 month ago
Oh, shit. Ummm…it was a funny movie back when it came out, but I haven’t seen it in like 25 years so who knows how bad it seems now. Could still be good?
emiellr@lemm.ee 2 months ago
Wait now hold on a minute. Why would I want to do this? Is this activism by people against LLMs in general or…? I’m confused as to why I would want to do this.
Rider@eviltoast.org 2 months ago
Sooner or later it is supposed to happen, but I don’t think we are quite there… yet.
Alexstarfire@lemmy.world 1 month ago
I couldn’t care less.
TheReturnOfPEB@reddthat.com 2 months ago
Two outcasts among their peers, Gary Wallace and Wyatt Donnelly spent a good deal of their youth as pioneers and early adopters of AI.
HawlSera@lemm.ee 2 months ago
Good