At 32K, the best model (GLM 4.5) fabricates 1.19% of answers
Not bad, I don’t know many people who are 98.81% accurate in their statements.
Submitted 7 hours ago by RandAlThor@lemmy.ca to technology@lemmy.world
https://arxiv.org/html/2603.08274
Calculators are correct 100% of the time.
Calculators are not people, Mr. <1.19%.
It’s a pleasure to meet you! The only thing exceeding my wisdom is my modesty.
Truly the most humble person of all time.
GLM 4.5 is from August. Isn't the real tl;dr that a seven-month-old open model, which was behind proprietary models at the time, did better than most humans would?
The task described in this article is answering questions about a document that was provided to the LLM in its context.
I would hope that if you give a human a text and ask them to cite facts from it they would do better than 99% correct.
Also, when the context exceeded 200k tokens, the LLM error rate was higher than 10%.
I’m not good at math, so someone please help me.
If a model hallucinates 1% of the time for every question in a chat window that has 100 prompts in it, what is the chance of receiving a hallucination at some point in the chat?
If I understand you correctly: 63.4% odds of having at least one hallucination.
The simple way to calculate the odds of getting at least one error is to calculate the odds of having ZERO, and then inverting that.
If the chance of a single response being an error is 1%, that means you have a 99% chance of having no error. If you repeat that 100 times, then it's 99% of 99% of 99%…etc. In other words, 0.99^100 ≈ 0.366. That's the probability of getting zero errors 100 times in a row. The inverse of that is 0.634, or 63.4%.
This is the same way to calculate the odds of N coin flips all coming up heads. It’s going to be 0.5^N. So the odds of getting 10 heads in a row is 0.5^10 = ~0.0977%, or 1:1024.
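If it helps to see the arithmetic run, here's a minimal Python sketch of the same calculation. The 1% rate and 100 prompts are just the numbers from the question above, not figures from the paper:

```python
def p_at_least_one_error(per_prompt_rate: float, n_prompts: int) -> float:
    """P(zero errors) is (1 - rate)^n; invert it to get P(at least one error)."""
    return 1.0 - (1.0 - per_prompt_rate) ** n_prompts

print(p_at_least_one_error(0.01, 100))  # ~0.634, i.e. 63.4%
print(0.5 ** 10)                        # ~0.000977, the 10-heads-in-a-row case
```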
Thanks, I also wonder how context collapse affects the fabrication rate.
One in 100. However, that is simply a measure of probability, so do not expect it to hold exactly for every 100 prompts.
For example, if you rolled a 100-sided die 100 times, it’s possible to get a one every time. In practice, it would likely be a mix. You might have a session where you get no wrong answers and times when you get several.
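A quick simulation makes that mix concrete. This is purely illustrative, assuming the same 1-in-100 per-prompt error rate and 100 prompts per session:

```python
import random

def errors_per_session(n_prompts: int = 100, rate: float = 0.01) -> int:
    # Count how many of the n_prompts independently come up as errors.
    return sum(1 for _ in range(n_prompts) if random.random() < rate)

sessions = [errors_per_session() for _ in range(10)]
print(sessions)  # e.g. [0, 2, 1, 0, 3, 1, 0, 0, 1, 2]: some clean sessions, some with several errors
```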
The problem is that ignorant people trust these models implicitly, because they sound convincing and authoritative, and many people are not equipped to be able to vet the information being generated (also notice I didn’t say “retrieved”).
RandAlThor@lemmy.ca 7 hours ago
This is pretty bonkers. How TF are they fabricating answers???
Zink@programming.dev 2 hours ago
I’m no expert and don’t care to become one, but I understand they generally trained these models on the entire public internet plus all the literature and research they could pirate.
So I would expect the outputs of those models to not be some kind of magical correct description of the world, but instead to be roughly “this passes for something a person on the internet might write.”
It does the thing it was designed to do pretty well. But then the sociopathic grifters tried to sell it to the world as a magic super-intelligence that actually knows things. And of course many small-time wannabe grifters ate it up.
What LLMs do is get you a passable elaborate forum post replying to your question, written by an extremely confident internet rando. But it’s done at computer speed and global scale!
bad1080@piefed.social 6 hours ago
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
snooggums@piefed.world 5 hours ago
Aka being wrong, but with a fancy name!
When Cletus is wrong because he mixed up a dog and a cat when describing their behavior, do we call it hallucinating? No.
ji59@hilariouschaos.com 6 hours ago
Because guessing the correct answer is more successful than saying nothing.