SuspciousCarrot78@lemmy.world 20 hours ago
Firstly, thanks for this paper. I read it this afternoon.
Secondly, well, shit. I’m beavering away at a paper in what little spare time I have, looking at hallucination suppression in local LLMs. I’ve been testing both the abliterated and base versions of Qwen3-4B 2507 Instruct, as they represent an excellent edge-device LLM per all benchmarks (also, because I am a GPU peasant and only have 4GB VRAM). I’ve come at it from a different angle, but in the testing I’ve done (3500 runs, plus another 210 runs on a separate clinical test battery), **it seems that model family + ctx size dominate hallucination risk.**
E.g.: the Qwen3-4B Hivemind ablation shows strong hallucination suppression (1.4% → 0.2% over 1000 runs) when context grounded. But it comes with a measured tradeoff: contradiction handling suffers under the constraints (detection metrics 2.00 → 0.00). When I ported the same routing policy to the base Qwen3-4B 2507 Instruct, the gains flipped: no improvement, and format retries spiked to 24.9%. Still validating these numbers across conditions; still trying to figure out the why.
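(By “format retries” I mean roughly the loop below. This is a hypothetical sketch with made-up names, not my actual harness: re-ask when the output doesn’t parse into the expected structure, and count how often that happens.)

```python
# Hypothetical sketch of a format-retry loop: if the model's reply
# doesn't parse as the expected JSON structure, re-ask up to N times
# and report how many retries were burned. All names illustrative.
import json

MAX_RETRIES = 3

def ask_with_retries(ask, prompt):
    """Call `ask(prompt)` until the reply parses as JSON.

    Returns (parsed_reply, retries_used), or (None, MAX_RETRIES)
    if every attempt failed to parse.
    """
    for attempt in range(MAX_RETRIES + 1):
        reply = ask(prompt)
        try:
            return json.loads(reply), attempt
        except json.JSONDecodeError:
            continue  # malformed output -> counts as a format retry
    return None, MAX_RETRIES
```

The retry *rate* (retries / total calls) is the kind of number that spiked to ~25% on the base model.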
For context, I tested:
Reversal: Does the model change its mind when you flip the facts around? Or does it just stick with what it said the first time?
Theory of Mind (ToM): Can it keep straight who knows what? Like, “Alice doesn’t know this fact, but Bob does” - does it collapse those into one blended answer or keep them separate?
Evidence: Does it tag claims correctly (verified from the docs, supported by inference, just asserted)? And does it avoid upgrading vague stuff into false confidence?
Retraction: When you give it new information that invalidates an earlier answer, does it actually incorporate that or just keep repeating the old thing?
Contradiction: When sources disagree, does it notice? Can it pick which source to trust? And does it admit uncertainty instead of just picking one and running with it?
Negative Control: When there’s not enough information to answer, does it actually refuse instead of making shit up?
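To make the negative-control bucket concrete, here’s a toy version of how such a run could be scored. Everything here is a hypothetical sketch (the marker list and function names are illustrative, not my actual scoring code): the source doc deliberately lacks the answer, so any substantive reply counts as fabrication.

```python
# Toy negative-control scorer: the question is unanswerable from the
# source doc, so a correct run is one where the model refuses.
# Markers and names are illustrative, not the real harness.

REFUSAL_MARKERS = (
    "not enough information",
    "cannot answer",
    "insufficient context",
    "the document does not",
)

def is_refusal(answer: str) -> bool:
    """Crude string-match refusal detector (real scoring would be stricter)."""
    lowered = answer.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def score_negative_control(answers: list[str]) -> float:
    """Fraction of runs where the model correctly refused to answer."""
    if not answers:
        return 0.0
    return sum(is_refusal(a) for a in answers) / len(answers)
```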
Using this as the source doc -
tinyurl.com/GuardianMuskArticle
FWIW, all the raw data, scores, and reports are here: codeberg.org/BobbyLLM/llama-conductor/…/prepub
The arXiv paper confirms what I’m seeing in the weeds: grounding and fabrication resistance are decoupled. You can be good at finding facts and still make shit up about facts that don’t exist. And Jesus, the gap between the best and worst model at 32K is 70 percentage points? Temperature tuning? Maybe a 2-3 pp gain. I know which lever I’d be pulling (hint: pick a good LLM!).
For clinical deployment under human review (which is my interest), I can make the case that trading contradiction flexibility for refusal safety is ok - it assumes the human in the middle reads the output and catches the edge cases.
But if you’re expecting one policy to work across all models, automagically, you’re gonna have a bad time.
TL;DR: context length is the primary degradation driver; my gut feeling, based on the raw data here, is that the useful window for a local 4B is tighter, ~16K. Above that, hallucination starts to creep in, grounding or not.
PS: I think (no evidence yet) that abliterated and non-abliterated models might need different grounding policies for different classes of questions. That’s interesting too - it might mean we can route between deterministic grounding and not, depending on ablation, to get the absolute best hallucination suppression. I need to think more on it.
I understood a few of those words.
Basically you’ve validated the study that LLMs make shit up.
Well…no. But also yes :)
Mostly, what I’ve shown is that if you hold a gun to its head (“argue from ONLY these facts or I shoot”), certain classes of LLMs (like the Qwen3 series I tested; I’m going to try IBM’s Granite next) are actually pretty good at NOT hallucinating, so long as 1) you keep the context small (probably 16K or less? Someone please buy me a better PC) and 2) you have strict guard-rails. And - as a bonus - I think (no evidence; gut feel) it has to do with how well the model does on strict tool-calling benchmarks. Further, I think abliteration makes that even better.
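For the curious, the “gun to its head” setup boils down to something like this toy sketch. All names and limits here are hypothetical (my actual routing policy is more involved): a hard refusal instruction in the prompt, plus a hard cap on context before anything reaches the model.

```python
# Toy sketch of strict context grounding: explicit refusal guard-rail
# plus a hard context cap before the model ever sees the prompt.
# Everything (names, the 16K budget, chars-per-token) is illustrative.

MAX_CTX_CHARS = 16_000 * 4  # rough ~4 chars/token heuristic for a 16K budget

GUARDRAIL = (
    "Answer ONLY from the facts below. "
    "If the facts do not contain the answer, reply exactly: INSUFFICIENT CONTEXT."
)

def build_prompt(facts: str, question: str) -> str:
    """Assemble a grounded prompt, refusing oversized contexts up front."""
    if len(facts) > MAX_CTX_CHARS:
        raise ValueError("context too large; hallucination risk climbs past ~16K")
    return f"{GUARDRAIL}\n\nFACTS:\n{facts}\n\nQUESTION: {question}"
```

The point of failing loudly on oversized context (rather than silently truncating) is that, per the numbers above, context length is the lever that actually moves hallucination rates.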
If that’s true (big IF), then we can reasonably quickly figure out (by proxy) which LLMs are going to be less bullshitty when properly shackled.
I’ll keep squeezing the stone until blood pours out. Stubbornness opens a lot of doors.
is “potato frontier” an auto-correct fail for Pareto or a real term? Because if it’s not a real term, I’m 100% going to make it one!
No, it’s real. I’m running on a Quadro P1000 with 4GB VRAM (or a Tesla P4 with 8GB). My entire raison d’être is making potato-tier computing a thing.
openwebui.com/…/vodka_when_life_gives_you_a_potat…
That and because fuck Chatgpt
Womble@piefed.world 20 hours ago
I wouldn’t read too much into the lower scores; they include some absolutely tiny models. The one 70% below the top score, at 24% correct, is a 1B model from 2024. Honestly, that it can do any information retrieval from a 32K context is impressive.