No, because that requires it to understand the words. It doesn’t.
Comment on ChatGPT provides false information about people, and OpenAI can’t correct it
maynarkh@feddit.nl 8 months ago
If it can name what the most likely combination is, couldn’t it also know how likely that combination of words is?
wahming@monyet.cc 8 months ago
kent_eh@lemmy.ca 8 months ago
If it has been trained on questionable sources, or if its training data includes sarcastic responses (without understanding that context), it isn’t hard to imagine how confidently wrong some of the responses could be.
DudeDudenson@lemmings.world 8 months ago
It’s not actually deciding anything; the “AI thinking” is marketing fluff, really. But yes, that’s called a confidence rating, and it does have one. At the scale of something like ChatGPT, though, which uses a snapshot of the entire internet and is immutable, there’s no way to train it on every possible question. If you ask about a topic that 99% of the internet gets wrong, it’ll give the wrong answer with 99% confidence.
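To make the “confidence rating” idea concrete: language models score candidate next tokens with raw numbers (logits) and squash them into probabilities with a softmax. The model’s “confidence” is just the probability of its top pick, which reflects how often that answer appeared in training data, not whether it’s true. A minimal sketch with made-up logit values:

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens.
# If the wrong answer dominated the training data, its logit is highest.
logits = [2.0, 1.0, 0.5, -1.0]
probs = softmax(logits)

# The model's "confidence" is simply the highest probability --
# nothing here checks whether the top-scoring token is actually correct.
confidence = max(probs)
```

So a high confidence score only means “this continuation was common in the data,” which is exactly why a popular misconception comes out sounding certain.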