Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

unpossum@sh.itjust.works 1 week ago

I would hope that if you give a human a text and ask them to cite facts from it, they would do better than 99% correct.

That’s literally what school exams are about, isn’t it?

The token window is a problem for all LLMs, though. That's not easily solved, but it can be worked around to a certain extent.
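One common workaround is chunking: split the document into overlapping windows that each fit the model's context, then query each chunk separately. A minimal sketch (the function name and sizes are illustrative, and it counts words rather than real tokens):

```python
def chunk_words(words, max_len=512, overlap=64):
    """Split a word list into overlapping chunks, each at most max_len words.

    Overlap keeps facts that straddle a chunk boundary visible in at
    least one chunk. Real pipelines would count model tokens instead.
    """
    if max_len <= overlap:
        raise ValueError("max_len must exceed overlap")
    step = max_len - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(words[start:start + max_len])
        if start + max_len >= len(words):
            break
    return chunks

# Example: a 1000-word document split into 300-word chunks with 50 words of overlap.
words = [f"w{i}" for i in range(1000)]
chunks = chunk_words(words, max_len=300, overlap=50)
```

Each chunk then goes to the model as a separate context, and the answers are merged afterward; the overlap is a trade-off between redundancy and the risk of cutting a fact in half.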
