Comment on I'm looking for an article showing that LLMs don't know how they work internally

Voldemort@lemmy.world ⁨2⁩ ⁨days⁩ ago

The first person recorded talking about AGI was Mark Gubrud. He made the quote above; here’s another:

The major theme of the book was to develop a mathematical foundation of artificial intelligence. This is not an easy task since intelligence has many (often ill-defined) faces. More specifically, our goal was to develop a theory for rational agents acting optimally in any environment. Thereby we touched various scientific areas, including reinforcement learning, algorithmic information theory, Kolmogorov complexity, computational complexity theory, information theory and statistics, Solomonoff induction, Levin search, sequential decision theory, adaptive control theory, and many more.

— Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, §8.1.1, p. 232

As UGI largely encompasses AGI, we could easily argue that if modern LLMs are beginning to fit the description of UGI, then they’re fulfilling AGI too. Although AGI’s definition has more recently become nuanced toward replicating a human brain, I’d argue that forcing AI to replicate biology would only degrade it.

I don’t believe it’s a disservice to AGI, because AGI’s goal is to create machines with human-level intelligence. But current AI is supposedly set to surpass collective human intelligence by the end of the decade.

And it’s not a disservice to biological brains to summarise them as prediction machines. They clearly work. Sentient or not, if you simulated every atom in the brain it would likely do the same job, soul or no soul. It just raises the philosophical questions of “do we have free will or not?” and “is physics deterministic or not?”. Plenty of text exists on brains being prediction machines, and the only time this has recently been debated is when someone tries to distinguish us from AI.

I don’t believe LLMs are AGI yet either; I think we’re very far away from AGI. In a lot of ways I suspect we’ll skip AGI and go for UGI instead. My firm opinion is that biological brains are just not effective enough. Our brains developed to survive the natural world, and I don’t think AI needs that to surpass us. I think UGI will be the equivalent of our intelligence with the fat cut off. I believe it only resembles our irrational thought patterns now because the fat hasn’t been stripped yet, but if something truly intelligent emerges, we’ll probably see these irrational patterns cease to exist.
