Comment on ChatGPT provides false information about people, and OpenAI can’t correct it
db0@lemmy.dbzer0.com 6 months ago
It’s not that their reasoning is a black box. It’s that they do not have reasoning! They just guess what the next word in the sentence is likely to be.
GiveMemes@jlai.lu 6 months ago
I mean it’s a bit more complicated than that, but at its core, yes, this is correct. Highly recommend this video.
www.youtube.com/watch?v=wjZofJX0v4M
kureta@lemmy.ml 6 months ago
It’s not even a little bit more complicated than that. They are literally trained to predict the next token given a series of previous tokens. The way they do that is very complicated, and the amount of data they are trained on is huge. To sound plausible, they sometimes have to produce correct information. Providing accurate information is literally a side effect of the actual thing they are trained to do.
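To make the "predict the next token" point concrete, here is a toy sketch. This is obviously not how GPT models are implemented (they use learned neural networks over huge vocabularies, not a hand-written lookup table); the tokens and probabilities below are made up purely for illustration of the sampling loop.

```python
import random

# Hypothetical bigram table: last token -> (next token, probability).
# A real LLM conditions on a long context window, but the interface is
# the same: given previous tokens, output a distribution over the next one.
BIGRAMS = {
    "the": [("cat", 0.5), ("sky", 0.3), ("answer", 0.2)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "sat": [("down", 1.0)],
}

def next_token(tokens, greedy=True):
    """Pick the next token given the sequence so far."""
    candidates = BIGRAMS.get(tokens[-1], [("<end>", 1.0)])
    if greedy:
        # Always take the most probable continuation.
        return max(candidates, key=lambda tp: tp[1])[0]
    words, probs = zip(*candidates)
    return random.choices(words, weights=probs)[0]

def generate(prompt, max_tokens=10):
    """Repeatedly append the predicted next token until done."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok == "<end>":
            break
        tokens.append(tok)
    return " ".join(tokens)

print(generate("the"))  # greedy decoding: "the cat sat down"
```

Note that nothing in the loop checks whether the output is *true*; the model only chases the most likely continuation, which is exactly the point being made above.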