Comment on "I'm looking for an article showing that LLMs don't know how they work internally"
Voldemort@lemmy.world 2 days ago
Let’s get something straight: no, I’m not saying we have our modern definition of AGI, but we’ve practically got the original definition, coined before LLMs were a thing, which was that the proposed AGI agent should maximise “the ability to satisfy goals in a wide range of environments”. I personally think we’ve just moved the goalposts a bit.
Whether we’ll ever have thinking, reasoning and possibly conscious AGI is another question entirely. But I do think current AI is similar to existing brains today.
Do you not agree that animal brains are just prediction machines?
That we have our own hallucinations all the time? Think optical illusions, lapses in memory, déjà vu, or just the many mental disorders people can have.
Do you think our brain doesn’t follow the path of least resistance in processing? Or do you think our thoughts come from elsewhere?
I seriously don’t think animal brains, or human brains specifically, are so special that neural networks are beneath them. Sure, people didn’t like being likened to animals, but it was the truth, and I, as do many AI researchers, liken us to AI.
AI is primitive now, yet it can still pass the bar, doctors’ exams, solve complex physics problems and write a book (soulless as it may be, like some authors) in less than a few seconds.
Whilst we may not have AGI, the question was about maths. The paper asked how the model did 36+59, and it did it in an interesting way: it half-predicted what the tens column would be, ‘knew’ what the units column was, then put the two together. Although that’s not how I, or even you, may do it, there are probably people who do it similarly.
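To make that concrete, here’s a little Python cartoon of the two-path idea; the helper names and the rounding heuristic are my own illustration of what the paper describes, not the model’s actual internals:

```python
# Toy sketch of the two-path story for 36 + 59 = 95: one path only
# estimates the rough size of the answer, another path works out the
# exact ones digit, and the two get merged at the end. This is an
# illustration of the idea, not the model's real mechanism.

def nearest_ten(x: int) -> int:
    """Round to the nearest ten, e.g. 36 -> 40, 59 -> 60."""
    return (x + 5) // 10 * 10

def rough_size(a: int, b: int) -> int:
    """Fuzzy path: estimate the sum from coarsely rounded operands."""
    return nearest_ten(a) + nearest_ten(b)       # roughly 100

def ones_digit(a: int, b: int) -> int:
    """Exact path: the ones digit of the sum depends only on the ones digits."""
    return (a % 10 + b % 10) % 10                # 6 + 9 = 15 -> 5

def combine(a: int, b: int) -> int:
    """Pick the number with the right ones digit inside the fuzzy window."""
    estimate = rough_size(a, b)
    digit = ones_digit(a, b)
    # A window of width ten around the estimate holds exactly one
    # number for each possible ending.
    for candidate in range(estimate - 5, estimate + 5):
        if candidate % 10 == digit:
            return candidate

print(combine(36, 59))  # 95
```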
All I argue is that AI is closer to how our brains think, and with our brains being irrational quite often, it shouldn’t be surprising that AI neural networks are also irrational at times.
ipkpjersi@lemmy.ml 2 days ago
That was not the definition of AGI even back before LLMs were a thing.
That’s doing a disservice to AGI.
That’s doing a disservice to human brains. Humans are sentient, LLMs are not sentient.
I don’t really agree with you.
LLMs are damn impressive, but they are very clearly not AGI, and I think that’s always worth pointing out.
Voldemort@lemmy.world 2 days ago
The first person recorded talking about AGI was Mark Gubrud. He made that quote above; here’s another:
As UGI largely encompasses AGI, we could easily argue that if modern LLMs are beginning to fit the description of UGI then they’re fulfilling AGI too. Although AGI’s definition in more recent times has narrowed towards replicating a human brain, I’d argue that trying to replicate biology would only degrade the AI.
I don’t believe it’s a disservice to AGI, because AGI’s goal is to create machines with human-level intelligence. But current AI is set to surpass collective human intelligence, supposedly by the end of the decade.
And it’s not a disservice to biological brains to summarise them as prediction machines. They work, very clearly. Sentience or not, if you simulated every atom in the brain it would likely do the same job, soul or no soul. It just raises the philosophical questions of “do we have free will or not?” and “is physics deterministic or not?”. So much text exists on the brain being a prediction machine, and the only time it has recently been debated is when someone tries to differentiate us from AI.
I don’t believe LLMs are AGI yet either; I think we’re very far away from AGI. In a lot of ways I suspect we’ll skip AGI and go for UGI instead. My firm opinion is that biological brains are just not effective enough. Our brains developed to survive the natural world, and I don’t think AI needs that to surpass us. I think UGI will be the equivalent of our intelligence with the fat cut off. I believe it only resembles our irrational thought patterns now because the fat hasn’t been stripped yet, but if something truly intelligent emerges, we’ll probably see these irrational patterns cease to exist.