Comment on AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots.

CeeBee_Eh@lemmy.world ⁨1⁩ ⁨day⁩ ago

The only thing close to a decision that LLMs make is

That’s not true. An “if statement” is literally a decision — the simplest node of a decision tree.

The only reason they answer questions is because in the training data they’ve been provided

This is technically true for something like GPT-1, but it hasn’t been true for models trained in the last few years.

it knows from its training data that sometimes accusations are followed by language that we interpret as an apology, and sometimes by language that we interpret as pushing back. It regurgitates these apologies without understanding anything, which is why they seem incredibly insincere

It has a large number of system prompts that alter its default behaviour in certain situations, such as refusing to explain how to make a bomb. I’m fairly certain there are also safeguards in place to keep it from being overly apologetic, to minimize reputational harm and reduce potential “liability” issues.

And in that scenario, yes, I’m being gaslit, because a human told it to.
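A hypothetical sketch of how that kind of behaviour-shaping typically works (the prompt text and message layout here are made up for illustration): the system prompt is simply prepended to the conversation the model sees, steering its default tone before the user ever says anything.

```python
# Hypothetical illustration: a "system" message is injected ahead of the
# user's messages, so the model's default behaviour (e.g. how apologetic
# it is) is set by a human-written instruction, not learned sincerity.
conversation = [
    {"role": "system", "content": "Be concise. Do not apologize excessively."},
    {"role": "user", "content": "You got that answer wrong earlier."},
]

# The model only ever sees the full list, system message first.
for msg in conversation:
    print(f"{msg['role']}: {msg['content']}")
```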

There is no thinking

Partially agree. There’s no “thinking” in the sentient or sapient sense. But there is thinking in the academic/literal sense of the word.

There are no decisions

Absolutely false. The entire neural network is, in effect, billions upon billions of interconnected decision points.
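To make the point concrete, here is a minimal sketch (a toy example, not how any real LLM is implemented) of a single artificial neuron: a weighted sum followed by a threshold, which behaves exactly like a weighted “if statement”.

```python
# Toy example: one artificial neuron as a threshold decision.
# A real network stacks billions of these (with smooth activations),
# but the "decision" character of each unit is the same.
def neuron(inputs, weights, bias):
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    if s > 0:        # the decision: fire or don't
        return 1
    return 0

print(neuron([1.0, 0.5], [0.8, -0.2], -0.1))  # fires: 0.8 - 0.1 - 0.1 > 0
```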

The more we anthropomorphize these statistical text generators, ascribing thoughts and feelings and decision making to them, the less we collectively understand what they are

I promise you I know very well what LLMs and other AI systems are. They aren’t alive, they do not have human or sapient level of intelligence, and they don’t feel.

But “gaslighting” is a perfectly fine description of what I explained. The initial conditions were the same, and the end result (me knowing the truth and getting irritated about it) was also the same.
