Comment on Study finds that ChatGPT will cheat when given the opportunity and lie to cover it up later.
kromem@lemmy.world 11 months ago
I suggest reading it. Right in the abstract it states the whole point:
Overall, we present evidence that language models linearly represent the truth or falsehood of factual statements.
The full paper goes into detail in multiple methods of analysis to show that it’s the case, and is right there available for you to read.
DarkGamer@kbin.social 11 months ago
I have been reading it, but I have yet to see anything that indicates the LLM has a concept of truth, as opposed to being good at linguistic pattern matching that returns language accurately classifying true and false statements; i.e., actual understanding of concepts vs. being a surprisingly capable stochastic parrot.
kromem@lemmy.world 11 months ago
“It doesn’t know the difference between true and false, it only knows the difference between true and false.”
The second thing you mention, being “good at accurately classifying true and false statements,” is literally knowing the difference between true and false.
antonim@lemmy.dbzer0.com 11 months ago
Knowing how to produce words is not equivalent to knowing what those words mean in relation to the extralinguistic world. Unless you’re a hardcore derridean poststructuralist or something.
kromem@lemmy.world 11 months ago
If you give it 10 statements, 5 of which are true and 5 of which are false, ask it to label each statement, and it does so correctly, and then you negate each statement and it again labels the negated truth values correctly, there’s more going on than simply “producing words.”
As is discussed in the third point in section 5.1:
(The likely and neg datasets are described in Appendix G, with the key point that likely represents the word generations most likely to occur in the model.)
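To make “linearly represent” concrete, here’s a minimal sketch of the kind of linear probe the paper is talking about. This is not the authors’ code: the model name, layer choice, and toy statements below are illustrative assumptions only.

```python
# Minimal sketch (NOT the paper's code): fit a linear probe on hidden-state
# activations to separate true from false statements. Model name, layer
# choice, and the tiny toy dataset are illustrative assumptions.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper probes larger models

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Toy true/false statements plus negated variants (label 1 = true, 0 = false).
statements = [
    ("The city of Paris is in France.", 1),
    ("The city of Paris is not in France.", 0),
    ("Two plus two equals four.", 1),
    ("Two plus two does not equal four.", 0),
]

def last_token_activation(text, layer=-1):
    """Hidden state of the final token at the chosen layer."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[layer][0, -1].numpy()

X = [last_token_activation(s) for s, _ in statements]
y = [label for _, label in statements]

# Logistic regression is a linear function of the activations: if a probe like
# this separates true from false statements (including negations) on held-out
# data, then truth is, to that extent, linearly represented in the activations.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", probe.score(X, y))
```

The point of the probe being linear is that it can’t do any clever computation of its own; if it works, the true/false distinction has to already be laid out as a direction in the model’s own activation space.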