You’re right - in the NLP field, LLMs are described as doing “language understanding,” and that’s fine as long as we’re clear what that means. They process natural language input and can generate coherent output, which in a technical sense is a kind of understanding.
But that shouldn’t be confused with human-like understanding. LLMs simulate it statistically, without any grounding in meaning, concepts or reference to the world. That’s why earlier GPT models could produce paragraphs of flawless grammar that, once you read closely, were complete nonsense. They looked like understanding, but nothing underneath was actually tied to reality.
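To make concrete what "simulate it statistically" means: under the hood the model just scores candidate next tokens and samples from that distribution. Here's a toy sketch of that step (the candidate words and the logit values are made up, not pulled from any real model):

```python
# Toy illustration of next-token prediction: the model assigns scores (logits)
# to candidate tokens and samples one -- no concepts, just probabilities.
import math, random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates and scores after a prompt like "The capital of France is"
candidates = ["Paris", "Lyon", "pizza"]
logits = [9.1, 3.2, 0.4]  # invented numbers for illustration

probs = softmax(logits)
next_token = random.choices(candidates, weights=probs)[0]
print(dict(zip(candidates, [round(p, 3) for p in probs])), "->", next_token)
```

Whether that counts as "understanding" is exactly the semantic question: the output can look right without anything in the process referring to the world.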
So I’d say both are true: LLMs “understand” in the NLP sense, but it’s not the same thing as human understanding. Mixing those two senses of the word is where people start talking past each other.
Shanmugha@lemmy.world 7 months ago
Well, yeah. Humans have these pesky things like concepts, consciousness and thinking above the language level. So pesky (sarcasm)
iglou@programming.dev 7 months ago
That doesn’t answer the question you quoted.
Shanmugha@lemmy.world 7 months ago
Does it not? Show me how
iglou@programming.dev 7 months ago
Not a single part of your answer is about how the brain works.
Concepts are not things in your brain.
Consciousness is a concept. It doesn’t exist in your brain.
Thinking is how a human uses their brain.
I’m asking about how the brain itself functions to interpret natural language.