Comment on AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots.

Lodespawn@aussie.zone 4 days ago

Nah, their definition is the classical "how confident are you that you got the answer right?". If you read the article, they asked a bunch of people and 4 LLMs a set of random questions, then asked each respondent how confident they/it were that the answer was correct, and then checked the answer. The LLMs initially lined up with the people (both overconfident), but as they iterated, shared results and asked further questions, the LLMs' confidence increased while people's tended to decrease, mitigating their overconfidence.
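Roughly what that calibration check amounts to, as a sketch of my own (the numbers and field names here are made up, not the paper's data):

```python
# Toy illustration of "stated confidence vs. actual accuracy".
# Each trial: the respondent's 0-1 confidence and whether the answer was right.
trials = [
    {"confidence": 0.90, "correct": True},
    {"confidence": 0.85, "correct": False},
    {"confidence": 0.95, "correct": False},
    {"confidence": 0.70, "correct": True},
]

mean_confidence = sum(t["confidence"] for t in trials) / len(trials)
accuracy = sum(t["correct"] for t in trials) / len(trials)

# A positive gap is overconfidence: the respondent thinks it is right
# more often than it actually is.
print(f"confidence={mean_confidence:.2f}  accuracy={accuracy:.2f}  "
      f"gap={mean_confidence - accuracy:+.2f}")
```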

But the study still assumes enough intelligence to review past results and adjust accordingly, while disregarding the fact that an AI isn't an intelligence; it's a word prediction model built on a data set of written text tending to infinity. It's not assessing the validity of its results, it's predicting what the answer should be based on all previous inputs. The whole study is irrelevant.
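To make the "word prediction" point concrete, here's a toy autoregressive sketch (entirely made up, not how any actual chatbot is implemented): the only criterion at each step is which token tends to follow the text so far; nothing in the loop checks whether the earlier answer was correct.

```python
import random

# Hypothetical next-token probabilities standing in for a trained model's
# learned distribution over a huge text corpus.
NEXT_TOKEN = {
    "How confident are you? I am": {"very": 0.7, "fairly": 0.2, "not": 0.1},
}

def generate(context: str) -> str:
    dist = NEXT_TOKEN.get(context, {"confident": 1.0})
    tokens, weights = zip(*dist.items())
    # Pick a statistically likely continuation; there is no step that
    # verifies the validity of what came before.
    return context + " " + random.choices(tokens, weights=weights, k=1)[0]

print(generate("How confident are you? I am"))
```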
