Just for context, this is the error rate when the right answer is provided to the LLM in a document. Even when the answer is handed to the model, it fails at the rates reported in the article/paper.
Most people interacting with LLMs aren’t asking questions against documents, or the answer cannot be directly inferred from the documents (they’re asking the LLM to reason about the material in the documents).
That means in most situations the error rate for the average user will be significantly higher.
RandAlThor@lemmy.ca 2 weeks ago
This is pretty bonkers. How TF are they fabricating answers???
bad1080@piefed.social 2 weeks ago
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
snooggums@piefed.world 2 weeks ago
Aka being wrong, but with a fancy name!
When Cletus is wrong because he mixed up a dog and a cat when describing their behavior, do we call it hallucinating? No.
Zink@programming.dev 2 weeks ago
I’m no expert and don’t care to become one, but I understand they generally trained these models on the entire public internet plus all the literature and research they could pirate.
So I would expect the outputs of those models to not be some kind of magical correct description of the world, but instead to be roughly “this passes for something a person on the internet might write.”
It does the thing it was designed to do pretty well. But then the sociopathic grifters tried to sell it to the world as a magic super-intelligence that actually knows things. And of course many small-time wannabe grifters ate it up.
What LLMs do is get you a passable elaborate forum post replying to your question, written by an extremely confident internet rando. But it’s done at computer speed and global scale!
ji59@hilariouschaos.com 2 weeks ago
Because guessing a correct answer is more successful than saying nothing.
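Most benchmarks score it that way: 1 point for a right answer, 0 for a wrong answer, and 0 for “I don’t know,” so a model that always guesses outscores one that abstains. A minimal sketch of that incentive (the 10% guess rate is a made-up number for illustration, not from the article):

```python
# Why typical benchmark scoring rewards guessing over abstaining.
# The 10% guess accuracy below is an illustrative assumption.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Typical grading: 1 point for correct, 0 for wrong,
    and 0 for answering 'I don't know'."""
    if abstain:
        return 0.0
    return 1.0 * p_correct  # wrong answers contribute 0

# Even a wild guess with a 10% hit rate beats abstaining every time.
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0
```

Under that scoring, “never say I don’t know” is the optimal strategy, so the training process never has a reason to produce it.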