We call it “hallucination” when AI makes things up — but when humans do it, we call it imagination. Where’s the line?
AI doesn’t make things up: it “believes” (though it has no actual beliefs) a true statement exactly as much as a false one. The two are indistinguishable to an LLM because the only “real” thing for a chatbot is text and tokens. Everything else is meaningless to the math.
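Here’s a minimal sketch of that point, with a made-up vocabulary and made-up probabilities (none of this is how any specific model is actually weighted): the model’s entire job is “pick a likely next token,” and a factually wrong token is sampled by exactly the same rule as a factually right one.

```python
import random

# Toy next-token distribution after the prompt "The capital of Australia is"
# (hypothetical numbers, purely for illustration)
next_token_probs = {
    "Canberra":  0.55,  # factually right
    "Sydney":    0.40,  # factually wrong, but statistically plausible
    "Melbourne": 0.05,  # also wrong
}

# Sample one token in proportion to its probability.
# Note there is no "truth" flag anywhere in this process.
tokens, weights = zip(*next_token_probs.items())
print(random.choices(tokens, weights=weights, k=1)[0])
```

Whether the output happens to be correct depends only on what the training text made likely, not on any check against reality.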
Does my sewer pipe have imagination because it spewed black goop across my kitchen instead of carrying my wastewater away like it normally does? Is my TV hallucinating a new show because the screen got damaged at the factory? Did a printing press create art when it smudged the text on my paperback?
LLMs are tools with a high defect rate, which tech billionaires and the media branded as “hallucination” to sound more impressive.
PP_BOY_@lemmy.world 1 day ago
No, it’s just a poor use of the word meant to humanize it. “Glitch” is more appropriate.
Poayjay@lemmy.world 1 day ago
I feel like the word “glitch” is also too humanizing. There wasn’t a programming error; the LLM picked what was statistically likely to come next. It’s working as it’s supposed to. “Glitch” implies some error.
PP_BOY_@lemmy.world 1 day ago
I disagree that “glitch” is humanizing, but that’s just a difference in interpretation. If we look at the results instead of the process and see that the output was “bad,” different from user expectations, etc., then I think glitch is appropriate. Regardless, on OP’s part, AI “hallucinations” are definitely nothing like real conscious hallucinations.
Wxfisch@lemmy.world 19 hours ago
The technical term used in industry is confabulation. I really think if we used that instead of anthropomorphic words like hallucination, it would make it easier to have real conversations about the limits of LLMs today. But then OpenAI couldn’t have an infinite valuation, so instead we hand-wave it away with inaccurate language.
Samy4lf@slrpnk.net 1 day ago
Lol, you are very funny, but nevertheless we are still in charge.