Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

HubertManne@piefed.social ⁨5⁩ ⁨weeks⁩ ago

I have been saying this for a while. I'm sorta hoping we see open-source LLMs trained on a curated corpus of literature. It's funny that when these came out, it seemed like the makers didn't take the long-known garbage-in, garbage-out principle into account.
