Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

eceforge@discuss.tchncs.de 1 week ago

No comment on the rest of the thread, but I always thought “confabulation” was a more accurate word than “hallucination” for what LLMs tend to do.

The “signs and symptoms” section of the article feels oddly familiar compared to interacting with an LLM sometimes, haha.
