Comment on Gemini lies to user about health info, says it wanted to make him feel better— Though commonly reported, Google doesn't consider it a security problem when models make things up

tiramichu@sh.itjust.works 1 week ago

Exactly.

LLMs are fundamentally hallucination machines, but this truth utterly conflicts with almost every purpose that AI is being marketed, pushed, and sold for, all of which depend on models being able to analyse data ‘truthfully’ and accurately.

So it’s no wonder that none of the big tech companies will treat hallucinations as the fundamental problem they are, because accepting that truth means admitting that LLMs are unfit for purpose - which is the one thing they simply can’t and won’t do with so much money riding on it.
