Comment on Google Gemini struggles to write code, calls itself “a disgrace to my species”
ganryuu@lemmy.ca 3 weeks ago
You’re giving way too much credit to LLMs. AIs don’t “know” things, like “humans lie”. They are basically a very complex autocomplete backed by a huge amount of computing power. They cannot “lie” because they do not even understand what it is they are writing.
btaf45@lemmy.world 3 weeks ago
Can you explain why AIs always have a “confidently incorrect” stance instead of admitting they don’t know the answer to something?
ganryuu@lemmy.ca 3 weeks ago
I’d say it’s simply because most people on the internet (the dataset LLMs are trained on) say a lot of things with absolute confidence, whether or not they actually know what they’re talking about. So AIs talk confidently because most people do. It could also be something about how they are configured.
Again, they don’t know whether they know the answer; they just say the most statistically probable thing given your message and their prompt.
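To make the “very complex autocomplete” point concrete, here’s a toy sketch (not how real LLMs work internally, which use neural networks over subword tokens; this is just a bigram word-count model illustrating the same principle of picking a statistically likely continuation):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always emit the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Most statistically probable continuation. The model has no notion
    # of whether the output is *true* -- only that it was common in training.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat": it followed "the" twice, vs once each for "mat"/"fish"
```

Scale the corpus up to most of the internet and the counts up to billions of neural-network parameters and you get something far more fluent, but still with no built-in concept of truth or lying.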
btaf45@lemmy.world 3 weeks ago
Then in that respect AIs aren’t even as powerful as an ordinary computer program.
Aggravationstation@feddit.uk 3 weeks ago
No computer programs “know” anything. They’re just sets of instructions with varying complexity.
Matty_r@programming.dev 3 weeks ago
Because it’s an autocomplete trained on typical responses to things. It doesn’t know right from wrong, just the next word based on statistical likelihood.
btaf45@lemmy.world 3 weeks ago
Are you saying the AI does not know when it does not know something?
Matty_r@programming.dev 3 weeks ago
Exactly. I’m oversimplifying it, of course, but that’s generally how it works. It’s also not “AI” as in Artificial Intelligence in the traditional sense of the term; it’s Machine Learning. But of course the term has effectively had a semantic shift over the last couple of years because “AI” sounds cooler.
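One rough illustration of why there’s no built-in “I don’t know”: the last step of these models (a softmax) turns arbitrary scores into probabilities that always sum to 1, so some token always comes out looking “most likely”, even when the scores are nearly uniform and carry no real signal. (A sketch of the general mechanism, not any specific model.)

```python
import math

def softmax(logits):
    # Convert arbitrary scores into a probability distribution.
    # The probabilities always sum to 1: the model must always "answer",
    # whether or not the underlying evidence is any good.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Near-uniform, uninformative scores still yield a confident-looking pick.
probs = softmax([0.1, 0.0, -0.1])
best = probs.index(max(probs))  # index 0 wins, barely
```

Unless a model is specifically trained to express uncertainty, nothing in this mechanism distinguishes “I know this” from “this merely scored highest”.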