Arghblarg@lemmy.ca 1 day ago

“AI” hallucinations are not a problem that can be fixed in LLMs. They are an inherent aspect of the generation process and an inevitable result of the fact that LLMs are, at bottom, probabilistic engines with no supervisory or introspective capability, the kind actual sentient beings possess and use to fact-check their own output. So there. :p
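
To illustrate the "probabilistic engine" point, here is a minimal sketch in Python of how a language model picks each next token: sample from a softmax distribution over scores. The vocabulary, logits, and `temperature` value below are made up for illustration; nothing in the loop checks the sampled token against facts.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick the next token by sampling from a softmax distribution.

    There is no fact-checking step here: a plausible-but-wrong
    token can win simply because its probability is nonzero.
    """
    # Scale logits by temperature, then convert to probabilities (softmax).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one token index according to those probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical toy vocabulary and model scores for the next token.
vocab = ["Paris", "Lyon", "Berlin"]
logits = [3.0, 1.5, 0.5]  # "Paris" is most likely, but not guaranteed
print(vocab[sample_next_token(logits)])
```

Run it a few times: even when the correct answer is by far the most probable token, the sampler can still emit one of the others, and nothing downstream notices.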
