Comment on How Much Do LLMs Hallucinate in Document Q&A Scenarios? A 172-Billion-Token Study Across Temperatures, Context Lengths, and Hardware Platforms [TLDR: 25%]

Zink@programming.dev 5 days ago

I’m no expert and don’t care to become one, but I understand they generally trained these models on the entire public internet plus all the literature and research they could pirate.

So I would expect the outputs of those models to not be some kind of magical correct description of the world, but instead to be roughly “this passes for something a person on the internet might write.”

It does the thing it was designed to do pretty well. But then the sociopathic grifters tried to sell it to the world as a magic super-intelligence that actually knows things. And of course many small-time wannabe grifters ate it up.

What LLMs give you is a passable, elaborate forum post replying to your question, written by an extremely confident internet rando. But it’s done at computer speed and global scale!
