Comment on If AI “hallucinates,” doesn’t that make it more human than we admit?

PP_BOY_@lemmy.world 1 day ago
No, it’s just a poor use of the word, meant to humanize it. “Glitch” is more appropriate.

Wxfisch@lemmy.world 23 hours ago
The technical term used in industry is confabulation. I really think that if we used that instead of anthropomorphic words like hallucination, it would be easier to have real conversations about the limits of LLMs today. But then OpenAI couldn’t have an infinite valuation, so instead we hand-wave it away with inaccurate language.
Samy4lf@slrpnk.net 1 day ago
Lol, you are very funny, but nevertheless we are still in charge.
Poayjay@lemmy.world 1 day ago
I feel like the word “glitch” is also too humanizing. There wasn’t a programming error; the LLM picked what was statistically likely to come next. It’s working as it’s supposed to. “Glitch” implies some error.
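The “statistically likely to come next” point can be sketched with a toy model. This is a minimal illustration, not how any real LLM is implemented: the probability table below is invented, standing in for the distribution a neural network would compute. The point is that sampling a plausible-but-false continuation is the mechanism working as designed, not an error path.

```python
import random

# Invented toy distribution over next tokens, standing in for an LLM's
# learned probabilities (a real model computes these with a neural net).
NEXT_TOKEN_PROBS = {
    ("capital", "of"): {"France": 0.6, "Atlantis": 0.4},  # fluent but false option
}

def pick_next(context, rng):
    """Sample the next token from the model's distribution for this context."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
# Some samples will be "Atlantis" -- a confabulation produced by the exact
# same code path as the correct answer. Nothing glitched.
samples = [pick_next(("capital", "of"), rng) for _ in range(20)]
print(samples)
```

Run it and both continuations show up: the wrong one isn’t a fault condition, it’s just a lower-probability sample from the same working mechanism.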
PP_BOY_@lemmy.world 1 day ago
I disagree that “glitch” is humanizing, but that’s just a difference in interpretation. If we look at the results instead of the process and see that the output was “bad,” different from user expectations, etc., then I think “glitch” is appropriate. Regardless, to OP’s point, AI “hallucinations” are definitely nothing like real conscious hallucinations.