It knows the answer it’s giving you is wrong, and it will even say as much. I’d consider that intent.
Comment on Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases
ryven@lemmy.dbzer0.com 4 weeks ago
Lying requires intent. Currently popular LLMs build responses one token at a time: when a model starts writing a sentence, it doesn’t know how it will end, and therefore can’t have an opinion about the truth value of it. (I’d go further and claim it can’t really “have an opinion” about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output (and therefore potentially have an opinion about whether it is true or false) only after it has been generated, when generating the next token.
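A minimal sketch of what that one-token-at-a-time loop looks like, with an invented hand-written probability table standing in for a real model (every name and number below is hypothetical, purely for illustration):

```python
import random

# Toy next-token probability table standing in for a real LLM.
# A real model computes these probabilities from billions of weights,
# but the principle is the same: the only question ever asked is
# "which token plausibly comes next?"
NEXT_TOKEN_PROBS = {
    ("The",): {"case": 0.6, "court": 0.4},
    ("The", "case"): {"was": 0.7, "is": 0.3},
    ("The", "case", "was"): {"dismissed.": 0.5, "settled.": 0.5},
    ("The", "case", "is"): {"pending.": 1.0},
}

def generate(prompt, max_tokens=3):
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if not probs:
            break
        # The model commits to this token now, with no plan for how
        # the sentence ends and no check of whether it is true.
        choices, weights = zip(*probs.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate(["The"]))  # e.g. "The case was dismissed."
```

Note that nothing in the loop ever asks whether the sentence being built is true; truth simply has no representation anywhere in the computation.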
“Admitting” that it’s lying only proves that it has been exposed to “admission” as a pattern in its training data.
Ulrich@feddit.org 4 weeks ago
ggppjj@lemmy.world 4 weeks ago
It is incapable of knowledge; it is math.
masterofn001@lemmy.ca 4 weeks ago
Please take a strand of my hair and split it with pointless philosophical semantics.
Our brains are chemical and electric, which is physics, which is math.
/think
Therefore, I am a product (being) of my environment (locale), experience (input), and nurturing (programming).
/think
What’s the difference?
4am@lemm.ee 4 weeks ago
Your statistical model is much more optimized and complex, and reacts to your environment and body chemistry and has been tuned over billions of years of “training” via evolution.
Large language models are primitive, rigid, simplistic, and ultimately expensive.
Plus, LLMs and image/music synths are all trained on stolen data and meant to replace humans, so extra fuck those.
ggppjj@lemmy.world 4 weeks ago
Ask ChatGPT; I’m done arguing effective consciousness vs. actual consciousness.
Ulrich@feddit.org 4 weeks ago
…how is it incapable of something it is actively doing?
4am@lemm.ee 4 weeks ago
The most amazing feat AI has performed so far is convincing laymen that they’re actually intelligent
Flisty@mstdn.social 4 weeks ago
@Ulrich @ggppjj does it help to compare an image generator to an LLM? With AI art you can tell a computer produced it without "knowing" anything more than what other art of that type looks like. But if you look closer you can also see that it doesn't "know" a lot: extra fingers, hair made of cheese, whatever. LLMs do the same with words. They just calculate what words might realistically sit next to each other given the context of the prompt. It's plausible babble.
ggppjj@lemmy.world 4 weeks ago
What do you believe that it is actively doing?
Again, it is very cool and incredibly good math that picks the next word most likely to follow what came before it. They do not think. Even models that deliberate are essentially just self-reinforcing the internal math, with what is basically a second LLM keeping the first on-task, because that appears to help distribute the probabilities better.
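A toy illustration of that “most likely next word” step, assuming invented scores (the prompt, the candidate words, and the numbers are all made up; a real model derives its scores from context, but the selection works the same way):

```python
import math

# Invented raw scores (logits) a model might assign to candidate next
# words after "The capital of Australia is". Numbers are made up.
logits = {"Sydney": 4.0, "Canberra": 3.2, "Melbourne": 1.1}

def softmax(scores):
    # Convert raw scores into a probability distribution.
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(logits)
print(probs)
# Picking the highest-probability word is the whole decision; there is
# no separate "is this true?" signal anywhere in the computation.
print(max(probs, key=probs.get))  # "Sydney": plausible, and wrong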
I will not answer the brain question until LLMs have brains also.
sugar_in_your_tea@sh.itjust.works 4 weeks ago
Technically it’s not, because the LLM doesn’t decide to do anything; it just generates an answer based on a mixture of the input and the training data, plus some randomness.
That said, I think it makes sense to say that it is lying if it can convince the user, through the text it generates, that it is lying.
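A sketch of where that randomness enters, assuming an invented score table and the standard temperature-sampling trick:

```python
import math
import random

def sample(logits, temperature=1.0):
    # Divide scores by temperature, convert to probabilities, then
    # draw at random: this is the "plus some randomness" part, and why
    # the same prompt can produce different answers on different runs.
    scaled = {w: s / temperature for w, s in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {w: math.exp(s) / total for w, s in scaled.items()}
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

# Invented scores for three candidate next words, for illustration only.
logits = {"granted": 2.0, "denied": 1.5, "vacated": 0.5}
print([sample(logits, temperature=0.2) for _ in range(5)])  # near-greedy
print([sample(logits, temperature=2.0) for _ in range(5)])  # far more varied
```

Low temperature makes the model nearly deterministic; high temperature spreads the probability mass out, which is one reason the same question can yield confident but contradictory answers.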
Ulrich@feddit.org 4 weeks ago
it just generates an answer based on a mixture of the input and the training data, plus some randomness.
And is that different from the way you make decisions, fundamentally?
petrol_sniff_king@lemmy.blahaj.zone 4 weeks ago
I don’t think I run on AMD or Intel, so uh, yes.
sugar_in_your_tea@sh.itjust.works 4 weeks ago
Idk, that’s still an area of active research. I certainly think it’s very different, since my understanding is that human thought is based on concepts instead of denoising noise or whatever it is LLMs do.
ggppjj@lemmy.world 4 weeks ago
I strongly worry that humans really weren’t ready for this “good enough” product to be their first “real” interaction with something that can easily pass as an AGI to anyone without near-philosophical knowledge of the difference between an AGI and an LLM.
It’s obscenely hard to keep in mind the fact that it is a very good pattern-matching autocorrect when you’re several comments deep into a genuinely, actually, no lie, completely pointless debate against spooky math.