Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

SuspciousCarrot78@lemmy.world 1 week ago

Well…no. But also yes :)

Mostly, what I’ve shown is that if you hold a gun to its head (“argue from ONLY these facts or I shoot”), certain classes of LLMs (like the Qwen 3 series I tested; I’m going to try IBM’s Granite next) are actually pretty good at NOT hallucinating, so long as 1) you keep the context small (probably 16K tokens or less? Someone please buy me a better PC) and 2) you have strict guard-rails. And, as a bonus, I suspect (no evidence; gut feel) it correlates with how well the model does on strict tool-calling benchmarks. Further, I think abliteration makes it even better.
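
For anyone wondering what I mean by “properly shackled,” here’s a minimal sketch of that kind of grounded Q&A call. It assumes a local OpenAI-compatible server (llama.cpp, vLLM, Ollama, whatever); the endpoint URL, model name, and facts are placeholders, not my exact setup:

```python
# Minimal sketch: strict guard-rails grounded Q&A.
# Assumes a local OpenAI-compatible endpoint; base_url and model are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

FACTS = """\
1. The contract was signed on 2024-03-01.
2. The penalty clause applies only after 90 days.
"""

SYSTEM = (
    "Answer using ONLY the numbered facts provided. "
    "If the facts do not contain the answer, reply exactly: INSUFFICIENT EVIDENCE. "
    "Cite the fact number(s) you used."
)

resp = client.chat.completions.create(
    model="qwen3-8b",   # placeholder model name
    temperature=0.0,    # low temperature helps keep it on-script
    messages=[
        {"role": "system", "content": SYSTEM},
        {
            "role": "user",
            "content": f"Facts:\n{FACTS}\nQuestion: When does the penalty clause kick in?",
        },
    ],
)
print(resp.choices[0].message.content)
```

The “reply exactly: INSUFFICIENT EVIDENCE” escape hatch is the important bit: without a sanctioned way to refuse, the model will improvise an answer instead.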

If that’s true (big IF), then we can fairly quickly figure out (by proxy) which LLMs are going to be less bullshitty when properly shackled.

I’ll keep squeezing the stone until blood pours out. Stubbornness opens a lot of doors.
