Comment on How could AI be better than an encyclopedia?
fxdave@lemmy.ml 3 days ago
What’s understanding? Isn’t understanding just a consequence of neurons communicating with each other? In that case, LLMs with deep learning can understand things.
Alsjemenou@lemy.nl 2 days ago
That’s just circular reasoning, since understanding is needed for communication.
Solumbran@lemmy.world 3 days ago
Yeah no
fxdave@lemmy.ml 2 days ago
Any explanation? If they can write text, I assume they understand grammar. They are definitely skilled in a way. If you do snowboarding, do you understand snowboarding? The word “understand” can be misleading. That’s why I’m asking: what is understanding?
Solumbran@lemmy.world 2 days ago
en.wikipedia.org/wiki/Disjunctive_sequence
With your logic, these numbers understand grammar too because they can form sentences.
Even better: anything any human could ever say is contained in them, so by that logic humanity has a more limited understanding of grammar than a sequence of digits does.
You cannot define understanding by results alone, and even if you did, AIs give horrible results, which proves they do nothing more than put words next to each other based on the likelihood of making sense to humans.
They do not understand grammar, just like they do not understand anything; they are simply an algorithm made to spit out “realistic” answers without actually understanding them.
Another example is AIs that generate images: they’re full of nonsense because the AI doesn’t understand what it’s making, and that’s why you end up with weird artifacts that seem completely absurd to any human with a basic understanding of reality.
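The disjunctive-sequence point above can be made concrete. One known example is the Champernowne word (1, 2, 3, … concatenated: “123456789101112…”), whose digits form a disjunctive sequence, so any finite digit string, including any encoded sentence, is guaranteed to appear somewhere in it. A minimal sketch:

```python
from itertools import count

def champernowne(limit):
    """First `limit` digits of the Champernowne word: 1, 2, 3, ... concatenated."""
    parts = []
    total = 0
    for n in count(1):
        s = str(n)
        parts.append(s)
        total += len(s)
        if total >= limit:
            return "".join(parts)[:limit]

def first_occurrence(target):
    """Index where `target` first appears.

    Guaranteed to terminate, since a disjunctive sequence contains
    every finite digit string.
    """
    size = 1000
    while True:
        digits = champernowne(size)
        idx = digits.find(target)
        if idx != -1:
            return idx
        size *= 10

print(first_occurrence("101"))  # appears at index 9 in "123456789101112..."
```

The sequence “contains” every sentence without understanding any of them, which is exactly the objection being made.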
fxdave@lemmy.ml 2 days ago
But LLMs are not simply probabilistic machines. They are neural nets. For sure, they haven’t seen the world. They didn’t learn the way we learn. What they mean by a caterpillar is just a vector. For humans, that’s a 3D, colorful, soft object with some traits.
You can’t expect a being that only sees characters and produces characters to know what we mean by a caterpillar. Their job is to figure out the next character. But you could expect them to understand some grammar rules, even if we can’t expect them to explain the grammar.
For another example, I wrote a simple neural net, and with 6 neurons it could learn XOR. I think we can say it understands XOR, can’t we? Or would you say that an XOR gate understands XOR better? I wouldn’t use the word “understand” for something that cannot learn.
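The XOR experiment is easy to reproduce. The exact 6-neuron architecture isn’t given above, so this sketch assumes a small sigmoid MLP (2 inputs, 3 hidden units, 1 output) trained with plain backpropagation on squared error; the point is only that the loss on the XOR truth table drops as the net learns:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR truth table: (inputs, target)
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 3  # hidden units (an assumption; the comment's "6 neurons" may count differently)
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # 2 weights + bias per hidden unit
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]                  # H weights + bias for the output

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    o = sigmoid(sum(w2[i] * h[i] for i in range(H)) + w2[H])
    return h, o

def total_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA)

def train(epochs=5000, lr=0.5):
    for _ in range(epochs):
        for x, y in DATA:
            h, o = forward(x)
            # backprop for squared error through sigmoid activations
            d_o = (o - y) * o * (1 - o)
            for i in range(H):
                d_h = d_o * w2[i] * h[i] * (1 - h[i])
                w1[i][0] -= lr * d_h * x[0]
                w1[i][1] -= lr * d_h * x[1]
                w1[i][2] -= lr * d_h
                w2[i] -= lr * d_o * h[i]
            w2[H] -= lr * d_o

before = total_loss()
train()
after = total_loss()
print(before, after)
```

Whether a falling loss curve counts as “understanding” is, of course, the whole debate.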