Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

Lemming6969@lemmy.world 2 weeks ago

You can be wrong without fabricating. This is closer to intentional human lying.
