Accuracy and hallucination are two ends of a spectrum.
If you turn the hallucination rate down to a minimum, the LLM will faithfully reproduce what’s in the training set, but the result won’t fit the query very well.
The other option is to turn the so-called temperature up, which produces replies that fit the query better, but the rate of hallucinations goes up too.
In the end it’s a balance between getting responses that are closer to the dataset (factual) and responses that are closer to the query (creative).
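Roughly, temperature just rescales the model’s logits before sampling the next token. A minimal sketch of the idea in plain Python (the logits here are made up, there’s no real model behind this):

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Pick a token index from raw logits after scaling by temperature.

    Low temperature -> the distribution sharpens toward the most likely
    token; high temperature -> it flattens, so less likely tokens
    (including wrong ones) get picked more often.
    """
    scaled = [l / temperature for l in logits]
    # softmax with max-subtraction for numerical stability
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# hypothetical logits for three candidate next tokens
logits = [4.0, 2.0, 0.5]
print(sample_with_temperature(logits, 0.2))  # almost always token 0
print(sample_with_temperature(logits, 1.5))  # tokens 1 and 2 show up more often
```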
tabular@lemmy.world 1 day ago
“hallucination refers to the generation of plausible-sounding but factually incorrect or nonsensical information”
Is an output a hallucination when the training data involved in the output included factually incorrect data? Suppose my input is “is the world flat” and an LLM then, allegedly accurately, generates a flat-earther’s writings saying it is.