Comment on Amazon builds AI model to optimize packaging
polygon6121@lemmy.world 7 months ago
AI in general is definitely prone to hallucinations. It is more commonly seen in LLMs because they are more widely used by the public. It is definitely a problem with all AI.
Syntha@sh.itjust.works 7 months ago
Besides generative AI, what models can hallucinate?
polygon6121@lemmy.world 7 months ago
Text-to-video, automated driving, object detection, language translation. I might be misusing the term; you could argue that the word describes what LLMs commonly do, and that is where the term comes from. You could also argue that the AI is sometimes correct and the human has trouble identifying the correct answer. But in my mind it is much the same thing, just in different applications. A car completely missing an approaching firetruck and an LLM spewing out wrong statements are the same to me.
Syntha@sh.itjust.works 7 months ago
Yeah, well it’s not the same. Models are wrong all the time; why use a different term at all when it’s just “being wrong”?
polygon6121@lemmy.world 7 months ago
The model makes decisions believing it is right, but for whatever reason it can’t see a firetruck or a stop sign, or it misidentifies the object… you know, almost like how a hallucinating human would perceive something in their sensory input that is not there.
I don’t mind giving it another term, but “being wrong” is misleading. You are correct, though, in the sense that it depends on the given case…
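To make the distinction being argued here concrete, below is a minimal Python sketch of the failure mode described: a classifier whose output distribution puts high confidence on the wrong class, with nothing in the output itself signalling that anything went wrong. The labels and logit values are made up purely for illustration, not taken from any real model.

```python
import numpy as np

# Hypothetical raw scores (logits) from an object detector for one image.
# The true object is a firetruck, but the model scores "bus" highest.
labels = ["bus", "firetruck", "car"]
logits = np.array([6.2, 1.1, 0.4])  # invented values for illustration

# Softmax turns logits into a probability distribution over the labels.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

pred = labels[int(np.argmax(probs))]
print(f"prediction: {pred} ({probs.max():.1%} confident)")
# prints: prediction: bus (99.1% confident)
# Confidently wrong, with no internal signal that the answer is bad --
# the terminology debate is over whether this deserves its own word
# ("hallucination") or is simply the model "being wrong".
```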