so_pitted_wabam@lemmy.zip 1 day ago
I think a more appropriate post title would be “Researchers have identified and named the process that spawns hallucinations in LLMs, they still don’t know the cause though”
This article is like reading the headline “Researchers have identified the cause of AIDS” and then you open it up and the body is a bunch of science jargon that basically says HIV.
Skullgrid@lemmy.world 1 day ago
it sounds like it’s just how the systems are designed.
I mean, the point of this shit is to take training data and create new stuff out of it through pattern matching. You’re going to get some mismatched shit by design, since the random decisions are modified by the weights. Otherwise you’d get the same shit every time.
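Roughly, that “random decisions modified by the weights” step is temperature sampling over the model’s token scores. A minimal sketch (toy logits, not a real model; the function name is made up for illustration):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Pick the next token: the weights bias the choice, but it's still a random draw."""
    rng = random.Random(seed)
    # Softmax over logits scaled by temperature (higher temperature = flatter, more random).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw: high-probability tokens win most of the time,
    # but low-probability ("mismatched") tokens still get picked sometimes.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]
```

So with the same prompt you get different outputs on different runs, which is exactly why you don’t “get the same shit every time.”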
snooggums@piefed.world 1 day ago
When the system is intended to look like a random person, then randomness is fine.
When the output is expected to be accurate, it should be the same each time so it can be verified as accurate.
LLMs are being sold as doing both at the same time, but random plus consistent equals random.
Skullgrid@lemmy.world 1 day ago
throw it onto the pile of people being idiots
XLE@piefed.social 1 day ago
That’s incorrect. Wrong responses will still be generated even if you remove the element that randomizes the response for the same question.
If that wasn’t the case, this paper wouldn’t exist.
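To sketch the point: stripping out the randomizer means greedy decoding, i.e. always taking the highest-scoring token. That’s fully deterministic, but if the model scores a wrong token highest, you get the same wrong answer every single run (toy logits below, purely illustrative):

```python
def greedy_next_token(logits):
    """No randomness: always return the index of the highest-scoring token."""
    return max(range(len(logits)), key=lambda i: logits[i])

# Hypothetical scores where the model ranks a wrong token highest.
# Greedy decoding returns that same wrong token on every run:
# deterministic, yet still a hallucination.
```

Determinism makes the output verifiable, but it doesn’t make it correct.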