Comment on Google Gemini struggles to write code, calls itself "a disgrace to my species"

btaf45@lemmy.world 1 day ago
Can you explain why AIs always have a "confidently incorrect" stance instead of admitting they don't know the answer to something?

Matty_r@programming.dev 1 day ago
Because it's an autocomplete trained on typical responses to things. It doesn't know right from wrong, just the next word based on statistical likelihood.

btaf45@lemmy.world 1 day ago
Are you saying the AI does not know when it does not know something?
Matty_r@programming.dev 1 day ago
Exactly. I'm oversimplifying it, of course, but that's generally how it works. It's also not "AI" as in artificial intelligence in the traditional sense of the word; it's machine learning. But the term has effectively undergone a semantic shift over the last couple of years because "AI" sounds cooler.
ganryuu@lemmy.ca 1 day ago
I'd say it's simply because most people on the internet (the dataset LLMs are trained on) say a lot of things with absolute confidence, whether or not they actually know what they're talking about. So AIs talk confidently because most people do. It could also be something about how they are configured.
Again, they don't know whether they know the answer; they just say the most statistically probable thing given your message and their prompt.
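A minimal sketch of that idea: the model only has a probability for each candidate next token, and decoding just picks a likely one. The probabilities below are made up for illustration, and real models sample from tens of thousands of tokens, not four.

```python
# Toy next-token prediction: there is no notion of "knowing" the answer,
# only a probability for each candidate continuation.
# These numbers are invented for illustration.
next_token_probs = {
    "Paris": 0.62,    # most common continuation of "The capital of France is"
    "Lyon": 0.05,
    "a": 0.03,
    "unknown": 0.01,  # "I don't know" is rarely the most probable continuation
}

def pick_next_token(probs):
    # Greedy decoding: emit the statistically most likely token,
    # regardless of whether it is factually correct.
    return max(probs, key=probs.get)

print(pick_next_token(next_token_probs))  # -> Paris
```

Note that nothing in this loop can ever prefer "unknown" unless the training data made hedging the most probable reply, which is the point being made above.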
btaf45@lemmy.world 1 day ago
Then in that respect AIs aren’t even as powerful as an ordinary computer program.
Aggravationstation@feddit.uk 1 day ago
No computer programs “know” anything. They’re just sets of instructions with varying complexity.
btaf45@lemmy.world 23 hours ago
LMFAO…
if exists(thing) { write(thing); } else { write("I do not know"); }
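The one-liner above, as a runnable sketch: an ordinary program can check whether it actually has the answer and say so when it doesn't. The lookup table and key names here are hypothetical stand-ins for any data store.

```python
# Deterministic lookup: unlike next-token prediction, this program
# can distinguish "I have this fact" from "I don't".
facts = {"capital_of_france": "Paris"}

def answer(key):
    if key in facts:
        return facts[key]
    else:
        return "I do not know"

print(answer("capital_of_france"))  # -> Paris
print(answer("capital_of_mars"))    # -> I do not know
```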