In AI, a “hallucination” is just as much “there” as a non-“hallucination.” It’s a way for scientists to stomp their foot and say that the wrong output is the computer’s fault and not a natural consequence of how LLMs work.
A hallucination is seeing something that’s not there, which also fits.
XLE@piefed.social 1 day ago
snooggums@piefed.world 1 day ago
Hallucination requires perception. LLMs are just statistical models and do not perceive anything.
It was a cute name early on; now it is used to deflect when the output is just plain wrong.