merc@sh.itjust.works 1 year ago
And whose solution was part of its training set…
variaatio@sopuli.xyz 1 year ago
half the time hallucinating something crazy in the mix.
Another funny one: “Yeah, it’s perfect, we just need to solve this small problem of it hallucinating.”
Ahem… solving hallucination is the “no, it actually has to understand what it is doing” part, aka the actual intelligence. That’s the actually big and hard problem: actually understanding what it is asked to do and which solutions to that ask are sane, rational and workable. Understanding the problem and understanding the answer, excluding wrong answers. Actual analysis, understanding and intelligence.
merc@sh.itjust.works 1 year ago
Not only that, but the same variables that produce “hallucination” are the ones that make it interesting.
By the very design of generative LLMs, the same knob that makes them unpredictable makes them invent “facts”. If they’re 100% predictable, they’re useless, because they just regurgitate word for word something that was in the training data. But as soon as they’re not 100% predictable, they generate word sequences in a way that humans interpret as lying or hallucinating.
So you can’t have a generative LLM that is “creative”, in the sense that it comes up with a novel set of words, without also having “hallucinations”.
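The “knob” here is the sampling temperature. A minimal sketch of the idea, using a toy vocabulary and made-up logits rather than any real model’s API: at temperature 0 the sampler always picks the most likely next token (pure regurgitation), while at higher temperatures less likely tokens get a real chance of being picked, which is where both the novelty and the invented “facts” come from.

```python
import numpy as np

def sample_next_token(logits, temperature):
    """Pick a next-token id from raw logits.

    temperature == 0 -> greedy: always the single most likely token.
    temperature  > 0 -> softmax sampling: less likely tokens get a
                        real chance, growing with the temperature.
    """
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(np.random.choice(len(probs), p=probs))

# Toy next-token distribution: "Paris" is what the training data says,
# the others are merely plausible-looking continuations.
vocab = ["Paris", "Lyon", "Berlin", "Narnia"]
logits = [5.0, 2.0, 1.5, 0.5]

print(vocab[sample_next_token(logits, temperature=0)])       # always "Paris"
for _ in range(5):
    print(vocab[sample_next_token(logits, temperature=1.5)])  # occasionally "Narnia"
```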
JoBo@feddit.uk 1 year ago
the same knob that makes them unpredictable makes them invent “facts”.
This isn’t what makes them invent facts, or at least it’s not the only (or main?) reason. Fake references, for example, arise because the model encounters references in its training text, so it knows what they look like and where they should be used. It just doesn’t know what a reference is, or that it’s supposed to match up to something real that says what the text implies it says.
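A toy illustration of that point (every author, title and venue below is invented for the example, not drawn from any real bibliography): a generator that has only learned the surface form of a citation will happily emit perfectly formatted references that point to nothing.

```python
import random

# Hypothetical "model" that knows only the *shape* of a reference.
# Each field is sampled independently, much as a next-token sampler
# strings together locally plausible pieces with no global check that
# the resulting citation corresponds to a real document.
authors = ["Smith, J.", "García, M.", "Chen, L."]
titles = ["On the Convergence of Deep Networks",
          "A Survey of Graph Embeddings",
          "Probabilistic Methods for Language Modeling"]
venues = ["Journal of Machine Learning Research", "NeurIPS", "ACL"]

def fake_reference():
    return (f"{random.choice(authors)} ({random.randint(1998, 2023)}). "
            f"{random.choice(titles)}. {random.choice(venues)}.")

for _ in range(3):
    print(fake_reference())   # well-formed, entirely fictional citations
```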