What if we’re not smart enough to build something like that?
Comment on What If There’s No AGI?
oyo@lemmy.zip 1 day ago
We’ll almost certainly get to AGI eventually, but not through LLMs. I think any AI researcher could tell you this, but they can’t tell the investors this.
Saledovil@sh.itjust.works 1 day ago
scratchee@feddit.uk 1 day ago
Possible, but seems unlikely.
Evolution managed it, and evolution isn’t as smart as us; it’s just got many, many chances to guess right.
If we can’t figure it out ourselves, we can find a way to get lucky like evolution did. It’ll be expensive and will probably need a more efficient computing platform (cheap brain-scale computers, so we can make millions of attempts quickly).
So yeah. My money’s on us figuring it out sooner or later.
Whether we’ll be smart enough to make it do what we want and not turn us all into paperclips or something is another question.
vacuumflower@lemmy.sdf.org 21 hours ago
Evolution managed it, and evolution isn’t as smart as us; it’s just got many, many chances to guess right.
I don’t think you’re appreciating the amount of energy “evolution” spent to get here.
There are plenty of bodies in the universe with nothing like a human brain.
You should count not just the energy of Earth’s existence and formation, the Solar System’s formation and so on, but that of much of the visible space around us. “Much” is admittedly vague, but converted to energy it’s so big that we shouldn’t even bother comparing.
It’s best to assume we’ll never have anything even resembling wetware in efficiency. One could say the genomes of life on Earth are like fossil fuels: highly optimized designs we’ll likely never reach by ourselves. Except “design” might be the wrong word.
Honestly I think at some point we’re going to have biocomputers. I mean, we already do, it’s just that the way evolution optimized them (giving everyone a more or less equal share of computing power) isn’t pleasant for some.
scratchee@feddit.uk 16 hours ago
The same logic would suggest we’d never compete with an eyeball, but we went from 10-minute photographic exposures to outperforming most of the eye’s abilities with cheap consumer hardware in little more than a century.
And the eye is almost as crucial to survival as the brain.
That said, I do agree it seems likely we’ll borrow from biology on the computer problem. Brains have very impressive parallelism despite how terrible the design of neurons is. If we can grow a brain in the lab, that would be very useful indeed. It would be even more useful if we could skip the chemical messaging somehow and get signals around at a speed that wasn’t embarrassingly slow; then we’d be way ahead of biology in the hardware performance game and would have a real chance of coming up with something like AGI, even without the level of problem-solving that billions of years of evolution can provide.
pulsewidth@lemmy.world 1 day ago
Yeah, and it only took evolution (checks notes) 4 billion years to go from nothing to a brain comparable to a human’s.
I’m not so sure there will be a fast return, on any economically meaningful timescale, on the money investors are currently shovelling into AI.
We have maybe 500 years (tops) to see if we’re smart enough to avoid causing our own extinction through climate change and biodiversity collapse, so I don’t think it’s anywhere near as clear-cut.
scratchee@feddit.uk 16 hours ago
Oh sure, the current AI craze is just a hype train based on one seemingly effective trick.
We have outperformed biology in a number of areas and can’t compete in a number of others (yet), so I see it as a bit of a wash at the moment whether we’re better engineers than nature or worse.
The brain looks to be a tricky thing to compete with, but it has some really big limitations we don’t need to deal with (chemical neuron messaging really sucks by most measures).
So yeah, I’m not saying we’ll do AGI in the next few decades (and not with just LLMs, for sure), but I’d be surprised if we don’t figure something out once we get computers a couple of orders of magnitude faster, so that more than a handful of companies can afford to experiment.
YoHoHoAndAVialOfKetamine@lemmy.dbzer0.com 1 day ago
Oh jeez, please don’t say “cheap brain-scale computers” next to “AGI” like that. There are capitalists everywhere.
JcbAzPx@lemmy.world 1 day ago
Also not likely in the lifetime of anyone alive today. It’s a much harder problem than most want to believe.
Modern_medicine_isnt@lemmy.world 1 day ago
Everything is always 5 to 10 years away until it happens. AGI could happen any day in the next 1000 years. There is a good chance you won’t see it coming.
jj4211@lemmy.world 1 day ago
Pretty much this. LLMs came out of left field, going from nothing to what they are now really quickly.
I’d expect the same of AGI, not correlated to who spent the most or is best at LLMs. It might happen decades from now or in the next couple of months. It’s a breakthrough that is just going to come out of left field when it happens.
JcbAzPx@lemmy.world 15 hours ago
LLMs weren’t out of left field. Chatbots have been in development since the '90s at least. Probably even longer. People just don’t pay attention until it’s commercially available.
ghen@sh.itjust.works 1 day ago
Once we get to AGI, it’ll be nice to have an efficient LLM so that the AGI can dream. As a courtesy to it.
Buddahriffic@lemmy.world 1 day ago
Calling the errors “hallucinations” is kinda misleading, because it implies there’s otherwise real knowledge that false stuff gets mixed into. That’s not how LLMs work.
LLMs are purely about associations between words. The models are just massive enough that they can add a lot of context to those associations and seem conversational about almost any topic, but there’s no depth to any of it. Where it seems like there is, it’s only because the contexts in the training data got very specific, which is bound to happen when a model is trained on every online conversation its owners (or rather people hired by people hired by its owners) could get their hands on.
All it does is take the tokens provided plus the ones already predicted and, with a bit of randomness, pick the most likely token to come next, then repeat until it predicts an “end” token.
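Roughly, the loop is something like this minimal Python sketch (model.next_token_distribution is a hypothetical stand-in for whatever the real network computes, not an actual API):

```python
import random

def generate(model, prompt_tokens, end_token, max_new_tokens=200):
    """Sketch of autoregressive decoding. `model.next_token_distribution`
    is a hypothetical stand-in, not any real library's API."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # The model scores every possible next token given everything so far.
        probs = model.next_token_distribution(tokens)  # e.g. {"cat": 0.6, "dog": 0.3, ...}
        # "A bit of randomness": sample from that distribution instead of
        # always taking the single most likely token.
        next_token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
        tokens.append(next_token)
        if next_token == end_token:  # stop once the model predicts "end"
            break
    return tokens
```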
Earlier on when using LLMs, I’d ask them how they did things or why they failed at certain things. ChatGPT would answer, but only because it was trained on text that explained what it could and couldn’t do. Its capabilities don’t actually include any self-reflection or self-understanding, or any understanding at all. The text it was trained on doesn’t even have to reflect how it really works.
JeremyHuntQW12@lemmy.world 1 day ago
No, that’s only a tiny part of what LLMs do.
When you enter a sentence, it first parses the sentence to obtain vectors, then it ranks those vectors, then it takes them down to a database, and then it reconstructs the sentence from the information it has obtained.
But what is truth? As Lionel Huckster would say.
Most of these so-called “hallucinations” are not errors at all. What has happened is that people have gone through multiple prompts and only posted the last result.
For instance, one example was where Gemini suggested cutting the legs off a couch to fit it into a room. What the poster failed to reveal was that they were using Gemini to come up with solutions to problems in a text adventure game…
ghen@sh.itjust.works 1 day ago
Yeah you’re right, even in my cynicism I was still too hopeful for it LOL
nialv7@lemmy.world 1 day ago
Well, you described pretty well what LLMs were trained to do. But from that you can’t derive how they’re doing it. Maybe they don’t have real knowledge, or maybe they do. Right now literally no one can claim definitively one way or the other, not even top ML researchers in the field.
I think it’s perfectly justified to hate AI, but it’s better to have a less biased view of what it is.
Buddahriffic@lemmy.world 1 day ago
I don’t hate AI or LLMs. As much as it might mess up civilization as we know it, I’d like to see the technological singularity during my lifetime, though I think the fixation on LLMs will do more to delay that than to realize it.
I just think a lot of people are fooled by their conversational capability into thinking these models are more than what they are. And because the models are massive, with billions or trillions of weights that the data is encoded into, and no one understands how they work well enough to definitively say “this is why it suggested glue as a pizza topping”, people can put whether or not they approach AGI in a grey zone.
I’ll agree though that it was maybe too much to say they don’t have knowledge. “Having knowledge” is a pretty abstract and hard-to-define thing itself, though I’m also not sure it directly translates to having intelligence (which is also poorly defined, tbf). Like, one could argue that encyclopedias have knowledge, but they don’t have intelligence. And I’d argue that LLMs are more akin to encyclopedias than to how we operate (though maybe more like a chatbot dictionary that pretends to be an encyclopedia).