This is the thing I hope people learn about LLMs: it’s all hallucinations.
When an LLM has excellent data from multiple sources to answer your question, it is likely to give a correct answer. But that answer is still a hallucination: it’s dreaming up a sequence of words that is likely to follow the previous words. It’s more likely to give an “incorrect” hallucination when the data is contradictory or vague, but the process is identical. It’s just trying to dream up a likely series of words.
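To make that concrete, here’s a rough sketch of what “dreaming up the next word” looks like under the hood, using an off-the-shelf model. The model name (gpt2) and the prompt are just stand-ins for illustration, not anything specific to this thread:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is only an example model; any causal LM works the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Probability distribution over the *next* token, given everything so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The model isn't looking anything up; it just ranks likely continuations.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  p={float(prob):.3f}")
```

Whether the most likely continuation happens to be “ Paris” or something wrong, the step that produces it is the same, which is the whole point.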
OctopusNemeses@lemmy.world 9 hours ago
Before the tech industry set its sights on AI, “hallucination” was simply called error rate.
It’s the rate at which the model produces incorrectly labelled outputs. But of course, the tech industry being what it is, it needs to come up with alternative words that spin-doctor bad things into not-bad things. So what the field of AI had for decades been calling error rate, everyone now calls “hallucinations”. Error has far worse optics than hallucination. Nobody would be buying this LLM garbage if every article posted about it included paragraphs about how it’s full of errors.
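For reference, the metric itself is nothing fancy; a toy sketch with made-up labels (none of these values come from anywhere, they’re purely illustrative):

```python
# Classic error rate: fraction of outputs the model got wrong.
predictions = ["cat", "dog", "dog", "bird", "cat"]
labels      = ["cat", "dog", "cat", "bird", "dog"]

errors = sum(p != y for p, y in zip(predictions, labels))
error_rate = errors / len(labels)
print(f"error rate = {errors}/{len(labels)} = {error_rate:.0%}")  # 2/5 = 40%
```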
That’s the thing people need to learn.