Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

CubitOom@infosec.pub 1 week ago

I’m not good at math, so someone please help me.

If a model hallucinates on 1% of questions, and a chat window contains 100 prompts, what is the chance of getting at least one hallucination somewhere in the chat?
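Assuming each of the 100 prompts independently has a 1% hallucination chance (an assumption — real errors may be correlated), the probability of at least one hallucination is 1 minus the probability of none, which works out to roughly 63%:

```python
# Assumed setup: 100 independent prompts, each with a 1% hallucination chance
p_per_prompt = 0.01
n_prompts = 100

# P(at least one) = 1 - P(none) = 1 - (1 - p)^n
p_at_least_one = 1 - (1 - p_per_prompt) ** n_prompts
print(f"{p_at_least_one:.1%}")  # roughly 63.4%
```

Counterintuitively, a small per-prompt rate compounds quickly: the chance of a clean 100-prompt chat is only 0.99^100 ≈ 36.6%.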
