Comment on "Researchers have found the cause of hallucinations in LLMs" (H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs)
A breeder reactor creates something, like the outcome of breeding. That name fits.
Skullgrid@lemmy.world 2 days ago
a hallucination is seeing something that’s not there, which also fits.
snooggums@piefed.world 2 days ago
Hallucination requires perception. LLMs are just statistical models and do not have perceptions.
It was a cute name early on; now it is used to deflect when the output is just plain wrong.
XLE@piefed.social 2 days ago
In AI, a "hallucination" is just as much "there" as a non-"hallucination." It's a way for scientists to stomp their foot and say that the wrong output is the computer's fault and not a natural consequence of how LLMs work.