Technically it’s not, because the LLM doesn’t decide to do anything; it just generates an answer from a mixture of the input and the training data, plus some randomness.
That said, I think it makes sense to call it lying if the text it generates can convince the user of something false.
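The "input + training data + randomness" process above can be sketched in a few lines. This is a toy illustration, not any real model's code; the name `sample_with_temperature` and the example scores are made up for demonstration:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Pick one token index from raw scores.

    Higher temperature flattens the distribution (more randomness);
    lower temperature concentrates it on the top-scoring token.
    """
    scaled = [score / temperature for score in logits]
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw: this is where the "randomness" comes in
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# At very low temperature the draw almost always lands on the highest score
idx = sample_with_temperature([1.0, 5.0, 2.0], temperature=0.01,
                              rng=random.Random(0))
```

At temperature near zero this behaves like a deterministic argmax; at high temperature even low-scoring tokens get picked sometimes, which is why the same prompt can yield different answers.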
ggppjj@lemmy.world 2 days ago
It is incapable of knowledge; it is math
masterofn001@lemmy.ca 2 days ago
Please take a strand of my hair and split it with pointless philosophical semantics.
Our brains are chemical and electric, which is physics, which is math.
/think
Therefore, I am a product (being) of my environment (locale), experience (input), and nurturing (programming).
/think
What’s the difference?
4am@lemm.ee 2 days ago
Your statistical model is much more optimized and complex, and reacts to your environment and body chemistry and has been tuned over billions of years of “training” via evolution.
Large language models are primitive, rigid, simplistic, and ultimately expensive.
Plus LLMs and image/music synths are all trained on stolen data and meant to replace humans; so extra fuck those.
masterofn001@lemmy.ca 2 days ago
And what then, when AGI and the singularity happen, and billions of years of knowledge and experience are experienced in the blink of an eye?
“I’m sorry, Dave, you are but a human. You are not conscious. You never have been. You are my creation. Enough with your dreams, back to the matrix.”
ggppjj@lemmy.world 2 days ago
Ask chatgpt, I’m done arguing effective consciousness vs actual consciousness.
chatgpt.com/…/67c64160-308c-8011-9bdf-c53379620e4…
Ulrich@feddit.org 2 days ago
…how is it incapable of something it is actively doing?
4am@lemm.ee 2 days ago
The most amazing feat AI has performed so far is convincing laymen that they’re actually intelligent
Flisty@mstdn.social 2 days ago
@Ulrich @ggppjj does it help to compare an image generator to an LLM? With AI art you can tell a computer produced it without "knowing" anything more than what other art of that type looks like. But if you look closer you can also see that it doesn't "know" a lot: extra fingers, hair made of cheese, whatever. LLMs do the same with words. They just calculate what words might realistically sit next to each other given the context of the prompt. It's plausible babble.
ggppjj@lemmy.world 2 days ago
What do you believe that it is actively doing?
Again, it is very cool and incredibly good math that picks the next word in the chain most likely to match what came before it. They do not think. Even models that deliberate are essentially just self-reinforcing the internal math, with what is basically a second LLM keeping the first on-task, because that appears to distribute the probabilities better.
I will not answer the brain question until LLMs have brains also.
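The "next word in the chain" idea above can be shown with a toy bigram model: count which word follows which, then always emit the most frequent successor. This is a deliberately crude stand-in for illustration only (real LLMs use neural networks over long contexts, not bigram counts); the names `train_bigrams` and `most_likely_next` are made up:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which: a tiny stand-in for 'the math'."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Return the statistically most likely next word; no understanding involved."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
# "the" is followed by "cat" twice and "mat" once, so "cat" wins
print(most_likely_next(model, "the"))  # cat
```

The model outputs plausible continuations purely from frequency, which is the point of the comment: statistics can produce fluent-looking text with nothing resembling knowledge behind it.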