Comment on Researchers have found the cause of hallucinations in LLMs, H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs

Peruvian_Skies@sh.itjust.works ⁨6⁩ ⁨days⁩ ago

So the tldr is just what we already knew: LLMs predict the most likely word to come next and have no concept of “true” or “false” information.
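A toy sketch of that point: greedy next-token prediction just picks whatever continuation is most probable in the training data, and nothing in the mechanism checks whether the result is factually true. The probabilities below are made up for illustration; they are not from any real model.

```python
# Hypothetical probabilities a model might assign after the prompt
# "The capital of Australia is" -- invented numbers, chosen so that the
# statistically common wrong answer outranks the true one.
next_token_probs = {
    "Sydney": 0.55,    # frequent in text, but wrong
    "Canberra": 0.40,  # the true answer
    "Melbourne": 0.05,
}

def predict_next(probs):
    """Greedy decoding: return the highest-probability token.
    Truth never enters the computation, only frequency."""
    return max(probs, key=probs.get)

print(predict_next(next_token_probs))
```

Under these invented numbers the model confidently emits "Sydney": a hallucination, produced by exactly the same mechanism that produces correct answers.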

Indeed, having such a concept would require understanding that information, and any AI that actually understood information wouldn’t be an LLM, because LLMs are just fancy autocorrect.
