Great summary. I would add: don't use LLMs to learn something new. As OP mentioned, when you know your stuff, you are aware of how much it bullshits. What happens when you don't know? You eat all the bullshit, because it sounds good. Or you end up with a vibed codebase you can't fully understand, because you didn't do the reasoning that produced it. It's like driving a car with a shitty copilot that sometimes hallucinates roads: if you don't know where you're supposed to be going, wherever that copilot takes you will look fine. You lack the context to judge the results or the advice.
I basically use it nowadays as a semantic search engine over documentation. Talking with documentation is the coolest part. If the response doesn't come with a doc link, it's probably not worth much. Make it point you to the human-written source, make it help you find things you don't know the name of, but never trust the output without judging it yourself. In my experience, making it generate code that you then have to correct is a heavier cognitive load than writing it yourself from scratch.
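To make that "talk to the documentation" workflow concrete, here's a minimal Python sketch. The `embed()` and `ask_llm()` functions are hypothetical placeholders for whatever embedding model and LLM endpoint you actually use, and the doc URLs are made up; the only point is that every answer has to carry a link back to a human-written page.

```python
# Toy sketch of "semantic search over documentation with mandatory doc links".
# embed() and ask_llm() are placeholders; plug in your own model/API.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in your embedding model here")

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM call here")

# Every chunk keeps the URL of the human-written page it came from.
docs = [
    {"url": "https://example.com/docs/retries", "text": "Retries are configured via ..."},
    {"url": "https://example.com/docs/auth",    "text": "Tokens expire after ..."},
]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer_with_sources(question: str, top_k: int = 2) -> str:
    q_vec = embed(question)
    # Semantic search: rank doc chunks by similarity to the question.
    ranked = sorted(docs, key=lambda d: cosine(q_vec, embed(d["text"])), reverse=True)
    context = "\n\n".join(f"[{d['url']}]\n{d['text']}" for d in ranked[:top_k])
    # Force the model to point back at the docs; no link, no trust.
    return ask_llm(
        "Answer using ONLY the excerpts below and cite their URLs. "
        "If they don't cover the question, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```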
lepinkainen@lemmy.world 1 day ago
The main problem with LLMs is that they're the person who memorised the textbook AND never admits they don't know something.
No matter what you ask, an LLM will give you an answer. It will never say "I don't know"; it will spout 100% confident bullshit instead.
The “thinking” models are a bit better, but still have the same issue.
xavier666@lemmy.umucat.day 1 day ago
There is a reason for this. LLMs are "rewarded" (via an internal scoring mechanism during training) for producing an answer. No matter what you ask, the model tries to maximize that reward by generating some answer, even a heavily hallucinated one. There is no comparable reward for saying "I don't know" to a difficult question.
I'm not up to date on LLM research, but I think this is being worked on. A rough sketch of the asymmetry is below.
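A back-of-the-envelope illustration (the numbers are made up; the point is only the expected-value comparison): if a wrong answer and an "I don't know" both score zero, guessing always wins.

```python
# Toy illustration of the reward asymmetry described above.
# All numbers are invented, purely to show why always guessing wins
# when abstaining earns nothing.

p_correct = 0.3          # model's chance of guessing the right answer
reward_correct = 1.0     # scored as a good answer
reward_wrong = 0.0       # a confident wrong answer isn't penalized here
reward_abstain = 0.0     # "I don't know" earns nothing under this scheme

expected_guess = p_correct * reward_correct + (1 - p_correct) * reward_wrong
expected_abstain = reward_abstain

# Even with only a 30% chance of being right, guessing beats abstaining,
# so a reward-maximizing model never says "I don't know".
print(f"guess:   {expected_guess:.2f}")    # 0.30
print(f"abstain: {expected_abstain:.2f}")  # 0.00
```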
TranquilTurbulence@lemmy.zip 1 day ago
Something very similar is true of humans. People just love having answers, even if they aren't entirely reliable or even true. Having some answer seems more appealing than having no answer at all. Why do you think people had weird beliefs about stars, rainbows, thunder, etc.?
The way LLMs hallucinate is also a little weird. If you ask about quantum physics, they actually can tell you that modern science doesn't have a conclusive answer to your question. I guess that's because other people have written articles about the very same question and pointed out that it's still a topic of ongoing debate.
If you ask about the robot waitresses used in a particular restaurant, though, they will happily give you a wrong answer. Obviously there's not much data about that restaurant, let alone any academic debate, so I guess that's reflected in the answer too.