Exactly.
LLMs are fundamentally hallucination machines, but this truth conflicts with almost every purpose AI is being marketed, pushed, and sold for, all of which depend on models being able to analyse data ‘truthfully’ and accurately.
So it’s no wonder that none of the big tech companies will acknowledge hallucinations as a real problem: accepting that truth means admitting that LLMs are fundamentally unfit for purpose, which is the one thing they simply can’t and won’t do with so much money riding on it.