Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

cmhe@lemmy.world 1 week ago

Hallucinations are just one class of LLM errors, and the most dangerous one.

Other failure modes, such as garbled or repeating output, are errors too.
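As a minimal sketch of what catching one of those other failure modes could look like (a hypothetical helper, not anything from the linked study): a crude n-gram check that flags degenerate repetition in a completion.

```python
def looks_repetitive(text: str, n: int = 4, min_repeats: int = 3) -> bool:
    """Return True if the final n-gram of `text` occurs at least
    `min_repeats` times, a common symptom of degenerate decoding.
    (Illustrative heuristic only; thresholds are made up.)"""
    tokens = text.split()
    if len(tokens) < n:
        return False
    tail = tuple(tokens[-n:])
    # Count how often the trailing n-gram appears anywhere in the text.
    count = sum(
        1
        for i in range(len(tokens) - n + 1)
        if tuple(tokens[i:i + n]) == tail
    )
    return count >= min_repeats

# A looping completion trips the check; a normal answer does not.
print(looks_repetitive("the answer is " * 4))                  # True
print(looks_repetitive("the capital of France is Paris"))      # False
```

Hallucinations are harder: the output above would pass any surface-level check like this, which is part of why they are the dangerous class.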
