Comment on Does vibe coding sort of work at all?
lepinkainen@lemmy.world 1 day ago
The main problem with LLMs is that they’re the person who memorised the textbook AND will never admit they don’t know something.
No matter what you ask, an LLM will give you an answer. They will never say “I don’t know”, but will rather spout 100% confident bullshit.
The “thinking” models are a bit better, but still have the same issue.
xavier666@lemmy.umucat.day 1 day ago
There is a reason for this. LLMs are “rewarded” (via an internal scoring mechanism) for generating an answer. No matter what you ask, the model will try to maximize that reward by producing an answer, hallucinating if it has to. There is no reward for saying “I don’t know” to a difficult question.
I’m not into LLM research, but I think this is being worked on.
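Not how any real training pipeline is implemented, just a toy sketch of that incentive with made-up numbers and hypothetical function names: if the grader scores any confident-sounding answer but gives nothing for abstaining, “always answer” ends up being the winning policy.

```python
# Toy sketch of the incentive problem (invented numbers, not a real RLHF
# pipeline): zero reward for "I don't know", partial credit for any
# confident-sounding answer, so always answering wins.

import random

QUESTIONS = ["easy question", "hard question"]

def grade(question: str, answer: str) -> float:
    """Hypothetical reward model: abstaining never scores."""
    if answer == "I don't know":
        return 0.0                     # honesty earns nothing
    if question == "easy question":
        return 1.0                     # confident and correct
    # Confident but likely wrong: the grader can't always tell,
    # so it still hands out partial credit on average.
    return 1.0 if random.random() < 0.2 else 0.5

def expected_reward(policy: str, trials: int = 10_000) -> float:
    """Average reward for a fixed answering policy."""
    total = 0.0
    for _ in range(trials):
        q = random.choice(QUESTIONS)
        if policy == "honest" and q == "hard question":
            a = "I don't know"
        else:
            a = "confident guess"
        total += grade(q, a)
    return total / trials

print("always answer:", expected_reward("always answer"))  # ~0.8
print("honest:       ", expected_reward("honest"))         # ~0.5
```

The values are invented; the only point is that as long as “I don’t know” scores zero, the optimisation pressure is toward answering everything, right or wrong.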
TranquilTurbulence@lemmy.zip 1 day ago
Something very similar is true of humans. People just love to have answers, even if they aren’t entirely reliable or even true. Having some answer seems to be more appealing than having no answer at all. Why do you think people had weird beliefs about stars, rainbows, thunder, etc.?
The way LLMs hallucinate is also a little weird. If you ask about quantum physics, they can actually tell you that modern science doesn’t have a conclusive answer to your question. I guess that’s because other people have written articles about the very same question and pointed out that it’s still a topic of ongoing debate.
If you ask about robot waitresses used in a particular restaurant, they will happily give you a wrong answer. Obviously, there’s not much data about that restaurant, let alone any academic debate, so I guess that’s also reflected in the answer.