It has no fundamental grasp of concepts like truth; it just strings together words that simulate human responses. It's glorified autocomplete that happens to yield impressive results. Do you consider your autocomplete to be lying when it picks the wrong word?
If making it pretend to be a stock picker and putting it under pressure makes it return lies, that's because it was trained on data indicating that's the right set of words to respond with for such a query.
Also, large language models are probabilistic. You could ask one the same question over and over again and get totally different responses each time, some of which are inaccurate. Are those lies, though? For a creature to lie, it has to know that it's returning untruths.
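To make "probabilistic" concrete, here's a minimal Python sketch of the temperature sampling most LLMs use to pick each next word. The vocabulary and the logit values are made up purely for illustration; they don't come from any real model.

```python
import math
import random

# Hypothetical next-token logits for a prompt like "The stock will ..."
# These tokens and numbers are invented for the example.
logits = {"rise": 2.0, "fall": 1.6, "stabilize": 1.0, "moon": 0.2}

def sample_next_token(logits, temperature=1.0):
    """Sample one token from a softmax over logits.

    Higher temperature flattens the distribution (more varied picks);
    temperature near 0 approaches greedy argmax (always the top token).
    """
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

# The same "prompt", sampled ten times: the answers differ run to run.
print([sample_next_token(logits, temperature=1.0) for _ in range(10)])
```

Run it a few times: identical input, different outputs each run, which is exactly why asking the same question repeatedly can yield different (and sometimes inaccurate) answers without the model "knowing" any of them are untrue.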
CrayonRosary@lemmy.world 11 months ago
Interestingly, humans “autocomplete” all the time and make up stories to rationalize their own behavior even when they literally have no idea why they acted the way they did, as in experiments with split-brain patients.
0ops@lemm.ee 11 months ago
The perceived quality of human intelligence is held up by so many assumptions, like “having free will” and “understanding truth”. Do we really? Can anyone prove that?
At this point I’m convinced that the difference between an LLM and human-level intelligence is dimensions of awareness, scale, and further development of the model’s architecture. Fundamentally, though, I think we have all the pieces.
threelonmusketeers@sh.itjust.works 11 months ago
But do you think? Do I think? Do LLMs think? What is thinking, anyway?
0ops@lemm.ee 11 months ago
I mean, I think so?
Patch@feddit.uk 11 months ago
Steady on there, Descartes.