I don’t get why they’d be called hallucinations, though. What LMs do is predict the next word(s). If they haven’t trained on enough data, the prediction confidence will be low. Their whole output is a hallucination based on speculation. If they genuinely don’t know what comes next, they’ll start spewing nonsense, though I guess that would only happen if they were forced to generate text indefinitely… at some point they’d cease making (human) sense.
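For the curious, here’s a toy sketch (pure NumPy, nothing to do with any real model) of what “low confidence” looks like mechanically: the next token gets sampled from a softmax distribution, and when the logits are nearly flat, every token is about equally likely and the continuation turns into noise.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "purple", "seven", "of"]

def sample_next(logits, temperature=1.0):
    """Sample one token from a softmax; also return the peak
    probability as a crude 'confidence' score."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                        # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(vocab, p=p), p.max()

# A "confident" model: probability mass piled on one token.
peaked = [8.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
# An "unconfident" model: near-uniform logits, anything goes.
flat = [1.0, 1.0, 1.0, 1.1, 1.0, 1.0, 1.0, 1.0]

print(sample_next(peaked))  # almost always "the", peak prob ~0.99
print(sample_next(flat))    # essentially a coin flip over the vocab
```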
LMs aren’t smart, they don’t think, they’re not really AI.
mindbleach@sh.itjust.works 6 months ago
… yes? This has been known since the beginning. Is it news because someone finally convinced Sam Altman?
Neural networks are universal approximators. “The approximation is wrong sometimes!” is… what approximations are. The chatbot is not an oracle. It’s still bizarrely flexible, for a next-word-guesser, and it’s right often enough for these fuckups to become a problem.
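To make the “universal approximator” point concrete, here’s a minimal sketch under the simplest possible setup: one hidden layer of random tanh features with a least-squares readout fits sin(x) closely, but the residual error never hits exactly zero. That gap is the “wrong sometimes.”

```python
import numpy as np

rng = np.random.default_rng(1)

# Target function, sampled on a grid.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of random tanh features, linear readout fit by
# least squares -- the crudest possible universal-approximation demo.
W = rng.normal(size=(1, 50))
b = rng.normal(size=(1, 50))
H = np.tanh(x @ W + b)                       # hidden activations
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ w_out

print(f"max abs error: {np.abs(y_hat - y).max():.6f}")  # small, never zero
```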
What bugs me are the people going ‘see, it’s not reasoning.’ As if reasoning means you’re never wrong. Humans never misremember, or confidently espouse total nonsense. And we definitely understand brain chemistry and neural networks well enough to say none of these bajillion recurrent operations constitute the process of thinking.
Consciousness can only be explained in terms of unconscious events. Nothing else would be an explanation. So there is some sequence of operations which constitutes a thought. Computer science lets people do math with marbles, or in trinary, or on paper, so it doesn’t matter how exactly that work gets done.
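Substrate independence in two hypothetical toy functions (just for illustration): the same sum computed by merging piles of “marbles” and by rewriting it in base 3. The arithmetic doesn’t care which one you use.

```python
def marble_add(a, b):
    """Add two non-negative ints by doing nothing but merging pebble piles."""
    pile_a, pile_b = ["*"] * a, ["*"] * b
    return len(pile_a + pile_b)

def to_ternary(n):
    """Render the same number in base 3, because notation doesn't matter either."""
    digits = ""
    while n:
        digits = str(n % 3) + digits
        n //= 3
    return digits or "0"

print(marble_add(5, 7))     # 12
print(to_ternary(5 + 7))    # "110" -- 1*9 + 1*3 + 0 = 12
```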
Though it’s probably not happening here. LLMs are the wrong approach.