Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

jacksilver@lemmy.world ⁨1⁩ ⁨week⁩ ago

Thanks for providing the actual numbers.

I think one of the more concerning things is: what if you think the answer is in the documents you provided, but it actually isn’t? What looks like a low error rate could actually be a high one.
