Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

how_we_burned@lemmy.zip 3 days ago

Are all outputs hallucinations? It’s just that some happen to be correct and some aren’t. The model doesn’t know and can’t tell the difference unless it’s specifically told (hence the guard rails).

But if I’ve gotta build so many guard rails (instructions), then is it really “AI”?
