Comment on ChatGPT provides false information about people, and OpenAI can’t correct it

tonarinokanasan@lemmy.sdf.org 6 months ago

This is true of all LLMs, but it seems like you're misunderstanding the core issue. It CAN give outputs like that sometimes. What we CAN'T do is force it to give outputs like that ALL the time.

It will answer "I don't know" if its predictive text model guesses that the most likely response would be "I don't know". To simplify a little: imagine it reads your question, compares it to all the text in its training data, finds the conversation that looks most like your question, and then answers whatever the person in the training data answered. But your exact question wasn't in its training data, so extend that mental model: instead of matching one conversation, have it compare against the 1,000 most similar-looking things in its training data and average them, and it would hopefully produce something at least close to an answer to what you actually asked. Now take it to a million, or a billion.
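
To make that mental model concrete, here's a toy sketch in Python. This is NOT how a real LLM works internally (real models use learned weights, not lookups over stored text); the data and names below are made up purely to illustrate the "find similar things and average them" analogy:

```python
# Toy version of the "compare and average" mental model above.
# Purely illustrative -- not an actual LLM mechanism.

from collections import Counter

# Tiny pretend "training data": (question, answer) pairs (made up).
training_data = [
    ("what is the capital of france", "Paris"),
    ("capital city of france", "Paris"),
    ("what is the capital of spain", "Madrid"),
]

def similarity(a: str, b: str) -> float:
    """Crude word-overlap similarity between two strings."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

def answer(question: str, k: int = 2) -> str:
    """Find the k most similar training questions and 'average'
    their answers with a majority vote."""
    ranked = sorted(training_data,
                    key=lambda qa: similarity(question, qa[0]),
                    reverse=True)
    votes = Counter(ans for _, ans in ranked[:k])
    return votes.most_common(1)[0][0]

# The exact question isn't in the "training data", but the nearest
# matches agree, so the vote still lands on something sensible.
print(answer("what's the capital of france"))  # -> Paris
```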

It doesn’t know anything. It doesn’t understand anything you say. It just looks at patterns that it learned from the training data and tries to guess what words are most likely to be said in that case.
