Comment on How could AI be better than an encyclopedia?
fxdave@lemmy.ml 2 days ago
Any explanation? If they can write text, I assume they understand grammar. They are definitely skilled in a way. If you do snowboarding, do you understand snowboarding? The word “understand” can be misleading. That’s why I’m asking: what is understanding?
Solumbran@lemmy.world 1 day ago
en.wikipedia.org/wiki/Disjunctive_sequence
By your logic, these numbers understand grammar too, because they can form sentences.
Even better: anything any human could ever say is contained in them, so by that standard humanity has a more limited understanding of grammar than a sequence of digits does.
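To make that concrete, here’s a rough sketch (Python; the 27-symbol encoding is made up for illustration) of one such disjunctive sequence: the concatenation of every finite string in length-then-alphabetical order. Any phrase you pick appears in it verbatim, at a position you can compute directly:

```python
# A disjunctive sequence contains every finite string over its alphabet.
# One concrete example: concatenate all strings in length-then-alphabetical
# order ("a", "b", ..., "aa", "ab", ...). Any phrase then appears verbatim,
# at a starting index we can compute without generating the sequence.

ALPHABET = "abcdefghijklmnopqrstuvwxyz "   # 27 symbols (hypothetical encoding)

def position_in_sequence(phrase: str) -> int:
    """0-based index where `phrase` starts inside the infinite concatenation."""
    base = len(ALPHABET)
    n = len(phrase)
    # Total characters contributed by all strings shorter than `phrase`:
    # there are base**k strings of length k, each contributing k characters.
    offset = sum(k * base**k for k in range(1, n))
    # Rank of `phrase` among the length-n strings, in alphabetical order.
    rank = 0
    for ch in phrase:
        rank = rank * base + ALPHABET.index(ch)
    return offset + rank * n

print(position_in_sequence("understanding is overrated"))
```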
You cannot define understanding by its results. And even if you did, AIs give horrible results, which shows they do nothing more than automatically put words next to each other based on the likelihood of the result making sense to humans.
They do not understand grammar, just as they do not understand anything else; they are simply an algorithm made to spit out “realistic” answers without having to actually understand them.
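Here’s a toy version of “putting characters next to each other by likelihood” (Python; the training text is made up, and real LLMs are vastly larger neural nets, but the objective is the same in spirit):

```python
# A character bigram model: it counts which character tends to follow
# which, then samples accordingly. It has no grasp of meaning, yet its
# output starts to look vaguely like its training text.
import random
from collections import defaultdict

text = "the cat sat on the mat and the cat ran to the rat"
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(text, text[1:]):
    counts[a][b] += 1                      # how often does b follow a?

random.seed(1)
out = "t"
for _ in range(40):
    successors = counts[out[-1]]
    chars, weights = zip(*successors.items())
    out += random.choices(chars, weights=weights)[0]

print(out)   # plausible-looking gibberish, produced with zero understanding
```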
Another example is AIs that generate images: their output is full of nonsense because the AI doesn’t understand what it’s making, which is why you end up with weird artifacts that look completely absurd to any human with a basic understanding of reality.
fxdave@lemmy.ml 1 day ago
But LLMs are not simply probabilistic machines; they are neural nets. Granted, they haven’t seen the world, and they didn’t learn the way we learn. What they mean by a caterpillar is just a vector; for humans, it’s a 3D, colorful, soft object with certain traits.
You can’t expect a being that sees chars and produces chars to know what we mean by a caterpillar; its job is to figure out the next char. But you could expect it to pick up some grammar rules, even if we can’t expect it to explain the grammar.
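A minimal sketch of that “meaning is just a vector” point (Python; the numbers are random stand-ins, not a real model’s weights):

```python
# To a language model, the "meaning" of a token is a learned vector.
# The rows here are random placeholders; in a trained model they are
# learned, and words used in similar contexts end up geometrically close.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["caterpillar", "butterfly", "tractor"]
embeddings = {w: rng.normal(size=8) for w in vocab}   # one vector per token

print(embeddings["caterpillar"])   # this array *is* the model's "caterpillar"

def cosine(u, v):
    """Geometric similarity: cosine of the angle between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# With random stand-ins this score is meaningless; after training,
# caterpillar/butterfly would score higher than caterpillar/tractor.
print(cosine(embeddings["caterpillar"], embeddings["butterfly"]))
```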
For another example, I wrote a simple neural net, and with 6 neurons it could learn XOR. I think we can say it understands XOR, can’t we? Or would you say an XOR gate understands XOR better? I would not use the word “understand” for something that cannot learn.
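For reference, here’s a minimal sketch of that kind of net (Python/numpy; layer sizes and hyperparameters are my own choices, not necessarily the 6-neuron setup described): a tiny MLP trained by backpropagation on XOR’s four input/output pairs.

```python
# A tiny MLP (4 hidden sigmoid units + 1 output) learning XOR by
# plain gradient descent. With this seed it converges; other seeds
# may need more iterations.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # backprop: output error
    d_h = (d_out @ W2.T) * h * (1 - h)        # backprop: hidden error
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())   # approaches [0, 1, 1, 0]
```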
Solumbran@lemmy.world 1 day ago
Your whole logic is based on the idea that being able to do something means understanding that thing. This is simply wrong.
Humans feel emotions, yet they don’t understand them. A calculator performs calculations, but no one would say it understands math. People blink and breathe and hear without any understanding of how.
The concept of understanding implies some form of meta-knowledge about the subject. Understanding math is more than using math: it means knowing what you’re doing and doing it with intention. All of that is absent in an AI, neural net or not.

They cannot “see the world” because they have to be built specifically for a task to be able to do it; they are unable to grow beyond their programming, which is what understanding would ultimately produce. They simply absorb data and spit it back out after some processing, and the fact that an AI can be made to produce completely contradictory results shows that there is nothing behind it.