Comment on “How could AI be better than an encyclopedia?”
fxdave@lemmy.ml 2 days ago

But LLMs are not simply probabilistic machines; they are neural nets. Sure, they haven’t seen the world, and they didn’t learn the way we learn. To them, a caterpillar is just a vector; to a human, it’s a 3D, colorful, soft object with certain traits.
You can’t expect a being that sees characters and produces characters to know what we mean by a caterpillar; its whole job is to figure out the next character. But you could expect it to pick up some grammar rules, even though we can’t expect it to explain the grammar.
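To make that concrete, here’s a toy character-level bigram model (my own illustrative sketch in plain Python, not how a real LLM works; real models learn dense embedding vectors rather than raw counts). Each character is represented by nothing more than a vector of next-character counts, and “prediction” just means picking the most likely continuation:

```python
# Toy character-level bigram model: each character is represented only by
# a vector of next-character counts, nothing like a human concept of it.
from collections import Counter, defaultdict

text = "the caterpillar ate the leaf. the caterpillar slept."

# Count which character follows which.
follows = defaultdict(Counter)
for cur, nxt in zip(text, text[1:]):
    follows[cur][nxt] += 1

def next_char(c):
    """Return the most frequent character that followed c in the text."""
    counts = follows[c]
    return counts.most_common(1)[0][0] if counts else " "

# Generate from 't': the model "knows" the text only as character
# statistics, yet it reproduces frequent patterns like "the" and "cat".
out = "t"
for _ in range(20):
    out += next_char(out[-1])
print(out)  # greedy generation loops through the most common bigrams
```

Scaled up by many orders of magnitude, with learned vectors instead of raw counts, that’s still the shape of the task: next-symbol prediction.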
For another example, I wrote a simple neural net, and with 6 neurons it could learn XOR (something like the sketch below). I think we can say it understands XOR, can’t we? Or would you then say that an XOR gate understands XOR better? I wouldn’t use the word “understand” for something that cannot learn.
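The original 6-neuron net isn’t shown anywhere in the thread, so here’s a minimal NumPy sketch of what such a net might look like, assuming a 2-3-1 sigmoid architecture (six units counting the two inputs) trained by plain gradient descent on squared error:

```python
# Minimal XOR net: 2 inputs -> 3 hidden sigmoid units -> 1 sigmoid output.
# Architecture and hyperparameters are assumptions; the original isn't given.
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 3))   # input -> hidden weights
b1 = np.zeros((1, 3))
W2 = rng.normal(0.0, 1.0, (3, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation of squared-error loss through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

h = sigmoid(X @ W1 + b1)
print(sigmoid(h @ W2 + b2).round(3))  # should be close to [[0],[1],[1],[0]]
```

The point of the example is exactly the distinction above: a hard-wired XOR gate computes the same function, but only the net arrived at it by learning from examples.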
Solumbran@lemmy.world 2 days ago
Your whole argument is based on the idea that being able to do something means understanding that thing. That is simply wrong.
Humans feel emotions, yet they don’t understand them. A calculator performs calculations, but no one would say it understands math. People blink and breathe and hear without any understanding of how they do it.
The concept of understanding implies some form of meta-knowledge about the subject. Understanding math is more than using math: it means knowing what you’re doing and doing it with intention. All of that is absent in an AI, neural net or not. They cannot “see the world” because they need to be programmed specifically for a task before they can do it; they are unable to grow beyond their programming, which is exactly what understanding would make possible. They simply absorb data and spit it back out after some processing, and the fact that an AI can be made to produce completely contradictory results shows that there is nothing behind it.