RandAlThor@lemmy.ca 11 hours ago
This is pretty bonkers. How TF are they fabricating answers???
bad1080@piefed.social 10 hours ago
snooggums@piefed.world 9 hours ago
Aka being wrong, but with a fancy name!
When Cletus is wrong because he mixed up a dog and a cat when describing their behavior, do we call it hallucinating? No.
Scipitie@lemmy.dbzer0.com 9 hours ago
Accepting concepts like “right” and “wrong” gives those tools way too much credit, basically following the AI narrative of the corporations behind them. Those terms can only be applied to the output, not to the tool itself.
To be precise:
LLMs can’t be right or wrong because the way they work has no link to any reality - it’s stochastics, not evaluation. I also don’t like the term hallucination for the same reason. It’s simply a too-high temperature setting making the model jump to a nearby but unrelated vector set.
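The temperature point can be sketched in a few lines of Python. This is a toy with three hypothetical next-token logits (real models work over tens of thousands of tokens), showing how raising temperature flattens the softmax and shifts probability mass onto less related tokens:

```python
import math
import random

def token_probs(logits, temperature):
    """Softmax over logits scaled by 1/temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature):
    """Draw one token index from the temperature-scaled distribution."""
    probs = token_probs(logits, temperature)
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical next-token scores: index 0 is the "related" token.
logits = [5.0, 2.0, 0.5]

print(token_probs(logits, 0.5))  # low temp: top token dominates
print(token_probs(logits, 5.0))  # high temp: tail tokens gain weight
```

With these made-up numbers, the top token holds roughly 99.7% of the mass at temperature 0.5 but only about 51% at 5.0, so sampling increasingly lands somewhere “closeby but unrelated.”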
Why this is an important distinction: Arguing that an LLM is wrong is arguing on the ground of ChatGPT and the likes. It’s then “oh, but we’ll make them better!” and their marketing departments rejoice.
To take your calculator analogy: just as those tools have floating-point errors that are inherent to them, wrong outputs are a core part of LLMs.
We can minimize that, but then they automatically lose part of their function. This limitation is way stronger on LLMs than limiting a calculator to 16 digits after the decimal point though…
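For comparison, the calculator-style error is easy to demonstrate in Python; it’s built into binary floating point itself, not an occasional bug:

```python
import math

# 0.1 and 0.2 have no exact binary representation, so the error
# is inherent to the tool, much like wrong outputs are for LLMs.
total = 0.1 + 0.2
print(total == 0.3)              # False
print(total)                     # 0.30000000000000004
print(math.isclose(total, 0.3))  # True: compare with a tolerance instead
```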
CubitOom@infosec.pub 9 hours ago
What word would you propose to use instead?
bad1080@piefed.social 9 hours ago
if you have a lobby you get special names: look at the pharma industry, which coined the term “discontinuation syndrome” for a simple “withdrawal”
ji59@hilariouschaos.com 10 hours ago
Because guessing the correct answer is more successful than saying nothing.
Zink@programming.dev 6 hours ago
I’m no expert and don’t care to become one, but I understand they generally trained these models on the entire public internet plus all the literature and research they could pirate.
So I would expect the outputs of those models to not be some kind of magical correct description of the world, but instead to be roughly “this passes for something a person on the internet might write.”
It does the thing it was designed to do pretty well. But then the sociopathic grifters tried to sell it to the world as a magic super-intelligence that actually knows things. And of course many small-time wannabe grifters ate it up.
What LLMs do is get you a passable elaborate forum post replying to your question, written by an extremely confident internet rando. But it’s done at computer speed and global scale!