Comment on We have to stop ignoring AI’s hallucination problem
Wirlocke@lemmy.blahaj.zone 6 months ago
In terms of LLM hallucination, it feels like the name aptly describes both the behavior and its severity. It doesn’t downplay what’s happening, because it’s generally accepted that having a source of information hallucinate is bad.
I feel like the alternatives would downplay the problem. A “glitch” sounds generic and commonplace, “lying” is inaccurate since it implies intent to deceive, and plain “wrong” doesn’t get across how elaborately wrong an LLM can be.
Hallucination fits pretty well and is also pretty evocative. I doubt AI promoters want to effectively call their product schizophrenic, which is what most people think of when they hear “hallucination”.
Ultimately, all the sciences are full of analogical names that make conversation easier; it’s not always marketing. It’s no different from physicists saying particles have “spin” or “color”, or that spacetime is a “fabric”, or [insert the entirety of string theory]…
abrinael@lemmy.world 6 months ago
After thinking about it more, I think the main issue I have with it is that it sort of anthropomorphises the AI, which is more of an issue in applications where you’re trying to convince the consumer that the product is actually intelligent.
You may be right that people could have a negative view of the word “hallucination”. I don’t personally think of schizophrenia, but I don’t know what the majority think of when they hear the word.
Knock_Knock_Lemmy_In@lemmy.world 6 months ago
You could invent a new word, but that doesn’t help people understand the problem.
You are looking for an existing word that describes unintentionally producing incorrect thoughts but is totally unrelated to humans. I suspect that word doesn’t exist; every word for thinking gets anthropomorphized.