Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

MHard@lemmy.world 1 week ago

The task described in this article is answering questions about a document that was provided to the LLM in its context window.
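To make that setup concrete, here is a minimal sketch of that kind of document Q&A call; the prompt wording, model name, and OpenAI-style client are my own assumptions, not the article's actual harness.

```python
# Sketch of the task: the whole document goes into the model's context
# and a question about it is appended. Model name and prompt wording are
# assumed for illustration, not taken from the article.
from openai import OpenAI

client = OpenAI()

def ask_about_document(document: str, question: str) -> str:
    prompt = (
        "Answer using only the document below. "
        "If the answer is not in the document, say so.\n\n"
        f"{document}\n\nQuestion: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, not from the article
        temperature=0.0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```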

I would hope that, if you gave a human a text and asked them to cite facts from it, they would do better than 99% correct.

Also, once the context exceeded 200k tokens, the LLM error rate rose above 10%.

source