You can’t ask it about itself because it has no internal model of self; it’s just basing any answer on data in its training set.
Comment on Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases
Ulrich@feddit.org 1 day ago
It can and will lie. It has admitted to doing so after I probed it long enough about the things it was telling me.
michaelmrose@lemmy.world 1 day ago
ryven@lemmy.dbzer0.com 1 day ago
Lying requires intent. Currently popular LLMs build responses one token at a time: when one starts writing a sentence, it doesn’t know how that sentence will end, and therefore can’t have an opinion about its truth value. (I’d go further and claim it can’t really “have an opinion” about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output (and therefore potentially have an opinion about whether it is true or false) only after that output has been generated, when it is producing the next token.
“Admitting” that it’s lying only proves that it has been exposed to “admission” as a pattern in its training data.
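To make the “one token at a time” point concrete, here is a minimal toy sketch in Python. It is not any real model’s code; the vocabulary, probabilities, and function names are invented for illustration. It shows the autoregressive loop: each step conditions only on the tokens already emitted, the “choice” is a weighted random draw, and there is no stage that checks whether the finished sentence will be true.

```python
import random

def next_token_distribution(context):
    """Stand-in for the neural net: P(next token | tokens so far).
    A real LLM computes this from learned weights; here it is hard-coded."""
    if not context:
        return {"the": 1.0}
    last = context[-1]
    if last == "the":
        return {"court": 0.6, "case": 0.4}
    if last == "court":
        return {"case": 0.3, "was": 0.7}
    if last == "case":
        return {"was": 1.0}
    if last == "was":
        # The sampler has no notion of which continuation is true.
        return {"real": 0.5, "fabricated": 0.5}
    return {".": 1.0}

def generate(max_tokens=8):
    context = []
    for _ in range(max_tokens):
        dist = next_token_distribution(context)
        tokens, weights = zip(*dist.items())
        # One weighted random draw per step; no plan for how the sentence ends.
        context.append(random.choices(tokens, weights=weights)[0])
        if context[-1] == ".":
            break
    return " ".join(context)

print(generate())  # e.g. "the court case was fabricated ."
```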
ggppjj@lemmy.world 1 day ago
I strongly worry that humans really weren’t ready for this “good enough” product to be their first “real” interaction with something that can easily pass as an AGI to anyone without near-philosophical knowledge of the difference between an AGI and an LLM.
It’s obscenely hard to keep the fact that it is a very good pattern-matching auto-correct in mind when you’re several comments deep into a genuinely actually no lie completely pointless debate against spooky math.
Ulrich@feddit.org 1 day ago
It knows the answer it’s giving you is wrong, and it will even say as much. I’d consider that intent.
ggppjj@lemmy.world 1 day ago
It is incapable of knowledge; it is math.
masterofn001@lemmy.ca 1 day ago
Please take a strand of my hair and split it with pointless philosophical semantics.
Our brains are chemical and electric, which is physics, which is math.
/think
Therefore, I am a product (being) of my environment (locale), experience (input), and nurturing (programming).
/think.
What’s the difference?
Ulrich@feddit.org 1 day ago
…how is it incapable of something it is actively doing?
sugar_in_your_tea@sh.itjust.works 1 day ago
Technically it’s not, because the LLM doesn’t decide to do anything; it just generates an answer based on a mixture of the input and the training data, plus some randomness.
That said, I think it makes sense to say it is lying if the text it generates can convince the user that it is lying.
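As a rough illustration of the “plus some randomness” part (a sketch with made-up scores, not any particular model’s API): the weights produce scores for candidate next tokens, and the output is a temperature-scaled random draw from those scores, so repeated runs can answer differently without any “decision” being made.

```python
import math
import random

def sample(logits, temperature=1.0):
    """Temperature-scaled softmax sampling over {token: score} pairs.
    Higher temperature flattens the distribution; the draw is random."""
    scaled = {tok: s / temperature for tok, s in logits.items()}
    top = max(scaled.values())
    weights = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(weights.values())
    probs = {tok: w / total for tok, w in weights.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Hypothetical scores for the token after "The cited case is ..."
logits = {"real": 2.1, "fabricated": 1.9, "unclear": 0.5}
print([sample(logits, temperature=0.8) for _ in range(5)])
# Runs differ because of sampling noise, not a change of "intent".
```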
Ulrich@feddit.org 1 day ago
And is that different from the way you make decisions, fundamentally?