You’re right - in the NLP field, LLMs are described as doing “language understanding,” and that’s fine as long as we’re clear what that means. They process natural language input and can generate coherent output, which in a technical sense is a kind of understanding.
But that shouldn’t be confused with human-like understanding. LLMs simulate it statistically, without any grounding in meaning, concepts or reference to the world. That’s why earlier GPT models could produce paragraphs of flawless grammar that, once you read closely, were complete nonsense. They looked like understanding, but nothing underneath was actually tied to reality.
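To make the statistical point concrete, here's a deliberately tiny sketch (not a real LLM, just a bigram model over a toy corpus): it strings words together purely from co-occurrence counts, so its output can look locally fluent while nothing in the model connects any word to what it refers to.

```python
import random
from collections import defaultdict

# Toy corpus; the "model" is just counts of which word follows which.
corpus = "the cat sat on the mat the dog sat on the rug".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a word sequence by repeatedly picking a statistically
    plausible next word. No meaning, no world, just frequencies."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the", 8))  # e.g. "the dog sat on the mat the cat"
```

Every transition it emits is "valid" in the statistical sense (it occurred in the training data), which is exactly why the output reads as grammatical even when the whole string is nonsense. Real LLMs are vastly more sophisticated, but the objective is the same family: predict the next token from patterns in text.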
So I’d say both are true: LLMs “understand” in the NLP sense, but it’s not the same thing as human understanding. Mixing those two senses of the word is where people start talking past each other.
Feyd@programming.dev 1 day ago
No they’re not. They’re talking purely at a technical level, and you’re trying to apply mysticism to it.
iglou@programming.dev 13 hours ago
They are talking at a technical level only on one side of the comparison. It makes the entire discussion pointless. If you’re going to compare the understanding of a neural network and the understanding of a human brain, you have to go into depth on both sides.
Mysticism? Lmao. Where? Do you know what the word means?