Comment on OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
PieMePlenty@lemmy.world 2 weeks ago
I don’t get why they’d be called hallucinations, though. What LMs do is predict the next word(s). If they haven’t been trained on enough relevant data, the prediction confidence will be low. Their whole output is a hallucination based on speculation. If they genuinely don’t know what word order comes next, they’ll start spewing nonsense, though I guess that would only happen if they were forced to generate text indefinitely… at some point they’d cease making (human) sense.
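
To illustrate what I mean (a toy sketch, not how any real model is implemented — the vocab and the logit numbers are made up): next-token prediction always emits *something*, even when the distribution is nearly flat. There’s no built-in “I don’t know” token, so low confidence just means speculation.

```python
# Toy next-token sampler: the model always picks a token,
# whether its probabilities are sharp or nearly uniform.
import math
import random

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["Paris", "London", "Berlin", "banana"]

# Hypothetical logits for "The capital of France is ..."
confident_logits = [9.0, 2.0, 1.5, -3.0]   # lots of relevant training data
uncertain_logits = [1.2, 1.1, 1.0, 0.9]    # barely any signal either way

for name, logits in [("confident", confident_logits), ("uncertain", uncertain_logits)]:
    probs = softmax(logits)
    token = random.choices(vocab, weights=probs)[0]
    print(name, {w: round(p, 2) for w, p in zip(vocab, probs)}, "->", token)
```

In the uncertain case the probabilities are almost equal, but a word still comes out — that’s the “speculation” part.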
LMs aren’t smart, they don’t think, they’re not really AI.