Bill Gates feels Generative AI has plateaued, says GPT-5 will not be any better::The billionaire philanthropist, in an interview with the German newspaper Handelsblatt, shared his thoughts on artificial general intelligence, climate change, and the scope of AI in the future.
I’m not sure I’d say it’s plateaued today, but I definitely think machine learning is going to hit a wall soon. Some tech keeps improving until physical limits stop progress, but I see generative AI as being more like self-driving cars, where the “easy” parts end up solved but the last 10% is insanely hard.
There’s also the economic reality of scaling. Maybe the “hard” problems could, in theory, be easily solved with enough compute power. We’ll eventually solve them, but it’s going to be on Nvidia’s timeline, not OpenAI’s.
astronaut_sloth@mander.xyz 11 months ago
Cool, Bill Gates has opinions. I think he’s being hasty, speaking out of turn, and only partially correct. From my understanding, the “big innovation” of GPT-4 was adding more parameters and scaling up compute; the core algorithms are generally agreed to be mostly the same as in earlier versions (not that we know for sure, since OpenAI has only released a technical report). Based on that, the real limit on this technology is compute and the number of parameters (as boring as that is), so he’s right that the algorithm design may have plateaued. However, we really don’t know what will happen if truly monster rigs with tens of trillions of parameters are trained on the entirety of human written knowledge (the morality of that notwithstanding), and that’s where he’s wrong.
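For a sense of how “more parameters” plays out, here’s a back-of-the-envelope sketch of a Chinchilla-style scaling law. The constants are the published fit from Hoffmann et al. (2022); the code is purely illustrative and not how anyone actually sizes a model:

```python
# Toy Chinchilla-style scaling law: L(N, D) = E + A/N^alpha + B/D^beta
# Constants are the fit reported by Hoffmann et al. (2022); treat the
# outputs as illustrative only.

def estimated_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Each 10x jump in parameters (data held at ~1.4T tokens) buys less
# loss reduction than the last one -- scaling helps, but sublinearly.
for n in (1e9, 1e10, 1e11, 1e12, 1e13):
    print(f"{n:.0e} params -> predicted loss {estimated_loss(n, 1.4e12):.3f}")
```

By this kind of curve, loss keeps dropping as parameters grow, so “we don’t know what a monster rig does” and “returns diminish” can both be true at once.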
Vlyn@lemmy.zip 11 months ago
You got it the wrong way around. We already have a ton of compute, and what this kind of AI can do is pretty cool.
But adding more compute power and parameters won’t solve the inherent problems.
No matter what you do, it’s still just a text generator guessing the next best word. It doesn’t do real math or logic; it gets basic things wrong and hallucinates fake facts.
Sure, it will get slightly better still, but not much. You can throw a million times the power at it and it will still fuck up in just the same ways.
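To spell that out: stripped of the engineering, every GPT-style model runs a loop like this toy sketch (a dummy bigram table stands in for the real transformer forward pass; none of this is any actual model’s code):

```python
import random

# Toy sketch of autoregressive generation. The "model" here is a tiny
# bigram table standing in for a transformer that scores every token
# in the vocabulary given the context so far.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt: list[str], max_new: int = 5) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new):
        probs = BIGRAMS.get(tokens[-1])
        if probs is None:  # dead end in the toy table
            break
        # Sample the "next best word" from a probability distribution.
        # No math engine, no logic engine -- just weighted guessing.
        choices, weights = zip(*probs.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

print(" ".join(generate(["the"])))  # e.g. "the cat sat down"
```

Scaling up means a better probability table, not a different kind of machine.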
archomrade@midwest.social 11 months ago
This is short-sighted.
The jump to GPT-3.5 was preceded by the same general misunderstanding (“we’ve reached the limit of what generative pre-trained transformers can do”, “we’ve reached diminishing returns”, etc.), and then a relatively small change (AFAIK it was a couple of additional layers of transforms and a refinement of the training protocol) suddenly had it displaying behaviors none of the experts expected.
Small changes compound when factored over billions of nodes; that’s just how it goes. It’s just that nobody knows which changes will have that scale of impact, or what emergent qualities will appear as a result.
It’s OK to say “we don’t know why this works” and also “there’s no reason to expect anything more from this methodology”. But I wouldn’t write off further improvements as impossible.
scarabic@lemmy.world 11 months ago
If humans are any kind of yardstick here, I’d say all this is true of us too on many levels. The brain is a shortcut engine, not a brute force computer. It’s not solving equations to help you predict where that tennis ball will bounce next. It’s making guesses based on its corpus of past experience. Good enough guesses are frankly our brains’ bread and butter.
It’s true that we can also do more than this. Some of us, anyway. How many people actually exercise math and logic? How many people hallucinate fake facts? A lot.
It’s much like evaluating self-driving cars. We may be tempted to say they’re just bloody awful, but so are human drivers.
astronaut_sloth@mander.xyz 11 months ago
I mean, that’s more or less what I said. We don’t know the theoretical limits of how good that text generation can get as we throw more compute at it and add parameters to widen the context window. Can it generate a whole book that’s fairly convincing, write legal briefs drawing on the sum of human legal knowledge, etc.? Ultimately, the algorithm is the same, so like you said, the same problems persist, and the definition of “better” is wishy-washy.
OldWoodFrame@lemm.ee 11 months ago
Yeah, and I think he may be grading this against true AGI. It’s very possible LLMs just don’t become AGI; you need some extra juice we haven’t come up with yet, in addition to computational power no one can afford yet.
astronaut_sloth@mander.xyz 11 months ago
Except that scaling alone won’t lead to AGI. It may generate better, more convincing text, but the core algorithm is the same. That “special juice” is almost certainly going to come from algorithmic development rather than just throwing more compute at the problem.
0ops@lemm.ee 11 months ago
My hypothesis is that the “extra juice” is going to be some kind of body: more senses than text input, and more ways to manipulate itself and its environment than text output. Basically, right now LLMs can kind of understand things in terms of text descriptions, but they will never be able to understand them the way a human can until they have all of the senses (and arguably the physical capabilities) that a human does. Thought experiment: can you describe your dog without sensory details, directly or indirectly? Behavior had to be observed somehow. Time is a sense too.
lorkano@lemmy.world 11 months ago
The problem is that between GPT-3 and GPT-4 there was a massive increase in the number of parameters, but not a massive increase in abilities.
scarabic@lemmy.world 11 months ago
I’ll listen to his opinions more than some, but unfortunately this article doesn’t say anything interesting about why he has this opinion. I guess the author supposes we will simply regard him as an oracle on name recognition alone.