Comment on "I'm looking for an article showing that LLMs don't know how they work internally"
Voldemort@lemmy.world 2 days ago
We're on the same page about consciousness then. My original comment only pointed out that current AI have the same problems we do because they replicate how we work, and people seem reluctant to recognise the obvious fact that we share the exact problems LLMs have. LLMs aren't rational because we inherently are not rational. That was the only point I was originally trying to make.
For AGI or UGI to exist, massive hurdles will need to be cleared, likely requiring an entire restructuring of the technology. I think LLMs will continue to get smarter and will likely exceed us, but they will not be perfect without a massive rework.
Personally, and this is pure speculation, I wouldn't be surprised if AGI or UGI is only possible with the help of a highly advanced AI, similar to how microbiologists are only now starting to unravel protein synthesis with the help of AI. I think the sheer volume of data that needs processing requires something like a highly evolved AI to understand it, and that current technology is purely a stepping stone to something more.
Excrubulent@slrpnk.net 2 days ago
We don’t have the same problems LLMs have.
LLMs have zero fidelity. They have no model of the world - none, zero - to compare their output to.
Humans have biases and problems in our thinking, sure, but we’re capable of at least making corrections and working with meaning in context. We can recognise our model of the world and how it relates to the things we are saying.
LLMs cannot do that job, at all, and they won't be able to until they have a model of the world. A model of the world would necessarily include themselves, which is self-awareness, which is AGI. That's what a meaning-understander is. Developing a world model is the same problem as consciousness.
What I’m saying is that you cannot develop fidelity at all without AGI, so no, LLMs don’t have the same problems we do. That is an entirely different class of problem.
Some moon rockets fail, but they don’t have that in common with moon cannons. One of those can in theory achieve a moon landing and the other cannot, ever, in any iteration.