Comment on Study finds that ChatGPT will cheat when given the opportunity and lie to cover it up later.
kromem@lemmy.world 11 months ago
“…that indicates the LLM has a concept of truth vs. being good at linguistic pattern matching to return language that accurately classifies true and false statements”
“It doesn’t know the difference between true and false, it only knows the difference between true and false.”
The second thing you mention, “good at accurately classifying true and false statements”, is literally knowing the difference between true and false.
antonim@lemmy.dbzer0.com 11 months ago
Knowing how to produce words is not equivalent to knowing what those words mean in relation to the extralinguistic world. Unless you’re a hardcore Derridean poststructuralist or something.
kromem@lemmy.world 11 months ago
If you give it 10 statements, 5 of which are true and 5 of which are false, and ask it to correctly label each statement, and it does so, and then you negate each statement and it correctly labels the negated truth values, there’s more going on than simply “producing words.”
As is discussed in the third point in section 5.1:
(The likely and neg datasets are described in Appendix G, with the key point that likely represents the word generations most likely to occur in the model)
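To make that setup concrete, here is a minimal sketch of the kind of linear-probe test being described: fit a probe on a model’s hidden activations over labeled true/false statements, then check whether it transfers to negations it never saw. This is illustrative only, not the paper’s code; GPT-2, the probe layer, and the toy statements are my assumptions, and the paper’s actual likely/neg datasets (Appendix G) are far larger.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

# Small stand-in model; the paper probes larger LLMs.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def hidden_state(statement: str, layer: int = 6) -> np.ndarray:
    """Mean-pooled activations of one hidden layer for a statement."""
    inputs = tokenizer(statement, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer].mean(dim=1).squeeze().numpy()

# Toy labeled statements (1 = true, 0 = false); purely illustrative.
train = [
    ("Paris is the capital of France.", 1),
    ("Two plus two equals four.", 1),
    ("The sun orbits the Earth.", 0),
    ("Spiders are mammals.", 0),
]
X = np.stack([hidden_state(s) for s, _ in train])
y = [label for _, label in train]
probe = LogisticRegression(max_iter=1000).fit(X, y)

# The interesting test: negated statements the probe never saw.
# If "true" were just a surface-statistics artifact, flipping the
# wording shouldn't flip the predicted label.
negated = [
    ("Paris is not the capital of France.", 0),
    ("The sun does not orbit the Earth.", 1),
]
for s, expected in negated:
    pred = probe.predict(hidden_state(s).reshape(1, -1))[0]
    print(f"{s!r}: predicted={pred}, expected={expected}")
```

If the probe’s label flips correctly on negated statements, the direction it found in activation space is tracking something closer to truth value than to surface word statistics, which is the point being made above.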
SmoothIsFast@citizensgaming.com 11 months ago
It’s not that more is going on; it’s that the training set is so large that these true-vs-false statements are likely covered somewhere in it, and the probabilities dictate whether it should assign true or false to the statement.
And look at that, your next paragraph states exactly this: the models trained on true/false datasets performed extremely well at labeling true or false. It’s saying the model is encoding, or setting weights for, the true and false values when that’s the majority of its data set. That’s basically it; you’re reading too much into the paper.
kromem@lemmy.world 11 months ago
That’s not how it works at all.
You have no idea what you are talking about. When they train a model they have two sets: one to train (fine-tune) it and another to evaluate it. You never have the training data in the evaluation set or vice versa.
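For anyone unfamiliar with that setup, here’s a minimal sketch of a held-out evaluation split; the toy statements and the 50/50 split are my own illustrative choices, not the paper’s:

```python
from sklearn.model_selection import train_test_split

# Toy labeled statements (1 = true, 0 = false); purely illustrative.
statements = [
    "Paris is the capital of France.",
    "Spiders are mammals.",
    "Two plus two equals four.",
    "The sun orbits the Earth.",
]
labels = [1, 0, 1, 0]

# The evaluation set is held out: nothing in it appears in training,
# so good accuracy on it can't come from having "seen the answer".
train_x, eval_x, train_y, eval_y = train_test_split(
    statements, labels, test_size=0.5, random_state=0
)
print("train:", train_x)
print("eval: ", eval_x)
```

Because the evaluation statements never appear in training, performance on them measures generalization rather than memorization.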
I also recommend reading the other papers I mentioned, as this isn’t an isolated finding but part of a larger trend that’s been found over and over in the past year.
antonim@lemmy.dbzer0.com 11 months ago
Which part of that ‘more going on’, whatever that actually is, corresponds to the human definition and understanding of truth and falseness?
kromem@lemmy.world 11 months ago
When did I say it had a human understanding of truth and falseness? I simply said it had an abstracted world model understanding of truth and falseness beyond surface statistics.