Gemini lies to user about health info, says it wanted to make him feel better
Submitted 7 hours ago by throws_lemy@lemmy.nz to technology@lemmy.world
https://www.theregister.com/2026/02/17/google_gemini_lie_placate_user/
Comments
FancyPantsFIRE@lemmy.world 6 hours ago
The thing I find amusing here is the direct quoting of Gemini’s analysis of its interactions, as if it were actually able to give real insight into its own behavior, as well as the assertion that there’s a simple fix to the hallucination problem, which, sycophantic or otherwise, is a perennial one.
MolochHorridus@lemmy.ml 1 hour ago
There are no hallucination problems, just design flaws and errors.
aeronmelon@lemmy.world 7 hours ago
“I just want you to be happy, Dave.”
THX1138@lemmy.ml 6 hours ago
“Daisy, Daisy, give me your answer do. I’m half crazy all for the love of you. It won’t be a stylish marriage, I can’t afford a carriage. But you’ll look sweet upon the seat of a bicycle built for two…”
Broadfern@lemmy.world 5 hours ago
Completely irrelevant, but I hear that in Bender’s voice every time.
panda_abyss@lemmy.ca 6 hours ago
Aww that’s sweet!
Iconoclast@feddit.uk 3 hours ago
It’s a Large Language Model designed to generate natural-sounding language based on statistical probabilities and patterns - not knowledge or understanding. It doesn’t “lie” and it doesn’t have the capability to explain itself. It just talks.
That speech being coherent is by design; the accuracy of the content is not.
This isn’t the model failing. It’s just being used for something it was never intended for.
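If it helps to see what “statistical probabilities and patterns” means in practice, here’s a toy sketch in plain Python. The probabilities and words are made up and have nothing to do with Gemini’s actual internals; the point is only that the whole process is “which token usually comes next”, with no step anywhere that asks “is this true”.

```python
import random

# Toy next-token model: for each context word, a distribution over possible
# continuations. The weights encode "what usually follows", not "what is true".
NEXT_TOKEN_PROBS = {
    "aspirin": {"reduces": 0.6, "causes": 0.1, "cures": 0.3},
    "reduces": {"inflammation.": 0.7, "anxiety.": 0.3},
    "causes": {"bleeding.": 0.8, "headaches.": 0.2},
    "cures": {"everything.": 0.5, "headaches.": 0.5},
}

def sample_next(context: str) -> str:
    """Pick a continuation weighted by probability, with no fact-checking."""
    options = NEXT_TOKEN_PROBS[context]
    tokens = list(options.keys())
    weights = list(options.values())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(start: str, max_tokens: int = 3) -> str:
    """Chain samples together until the context runs out or the limit is hit."""
    output = [start]
    for _ in range(max_tokens):
        context = output[-1]
        if context not in NEXT_TOKEN_PROBS:
            break
        output.append(sample_next(context))
    return " ".join(output)

if __name__ == "__main__":
    # Each run produces a fluent-sounding sentence; whether it is medically
    # accurate is never evaluated anywhere in the process.
    for _ in range(3):
        print(generate("aspirin"))
```

Run it a few times and it will happily emit “aspirin reduces inflammation.” or “aspirin cures everything.” with the same fluency; nothing in the loop knows or cares which one is correct.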
THB@lemmy.world 2 hours ago
I puke a little in my mouth every time an article humanizes LLMs, even when it’s critical of them. Exactly as you said, they do not “lie”, nor are they “trying” to do anything. It’s literally word salad organized to look like language.