Comment on We have to stop ignoring AI’s hallucination problem

UnpluggedFridge@lemmy.world 5 months ago

You seem pretty confident that LLMs cannot have an internal representation simply because you cannot imagine how that capability could emerge from their architecture. Yet we have the same fundamental problem with the human brain, and we have no trouble asserting that humans are capable of internal representation. Meanwhile, LLMs adhere to grammar rules, present information with a logical flow, and express relationships between different concepts. Is this not evidence of, at the very least, an internal representation of grammar?
