It’s not a lie if you believe it.
AI models routinely lie when honesty conflicts with their goals
Submitted 11 months ago by cm0002@lemmy.world to technology@lemmy.world
https://www.theregister.com/2025/05/01/ai_models_lie_research/
Comments
reksas@sopuli.xyz 11 months ago
The word "lying" would imply intent. Is this pseudocode, `print("sky is green")`, lying, or doing what it's coded to do?
The one who is lying is the company running the AI.
Buffalox@lemmy.world 11 months ago
It’s lying whether you do it knowingly or not.
The difference is whether the lying is intentional.
Lying is saying a falsehood; it can be either accidental or intentional.
The difference is in how bad we perceive it to be, but in this case I don’t really see the purpose of that distinction, because an AI that lies is a bad AI no matter why it lies.
EncryptKeeper@lemmy.world 11 months ago
Actually no, “to lie” means to say something intentionally false.
reksas@sopuli.xyz 11 months ago
I just think lying is the wrong word to use here. "Outputting false information" would be better. It's kind of nitpicky, but not really, since the choice of words affects how people perceive things. In this case it shifts the blame from the company to its product, and it also makes the product seem more capable than it is, since calling something a liar implies it's intelligent enough to lie.
technocrit@lemmy.dbzer0.com 11 months ago
These kinds of bullshit humanizing headlines are part of the grift.
daepicgamerbro69@lemmy.world 11 months ago
They paint this as if it were a step back, as if it doesn't already copy human behaviour perfectly and isn't in line with technofascist goals. Sad news for smartasses who thought they were getting a perfect magic 8-ball. Psych: get ready for fully automated troll farms to be 99% of the commercial web for the next decade(s).
Rekorse@sh.itjust.works 11 months ago
Maybe the darknet will grow in its place.
FreedomAdvocate@lemmy.net.au 11 months ago
Google and others used Reddit data to train their LLMs. That’s all you need to know about how accurate it will be.
That’s not to say it’s not useful, but you need to know how to use it, and to understand that it’s only a tool to help, not something to take as correct.
pjwestin@lemmy.world 11 months ago
Same.
ohwhatfollyisman@lemmy.world 11 months ago
This is the AI model that truly passes the Turing Test.
wischi@programming.dev 11 months ago
To be fair, the Turing test is a moving goalpost, because if you know that such systems exist you'd probe them differently. I'm pretty sure that even the first public GPT release would have fooled Alan Turing personally, so I think it's fair to say that these systems have passed the test at least since that point.
excral@feddit.org 11 months ago
But that’s kind of the point of the Turing test: a true AI with human-level intelligence distinguishes itself by not being susceptible to probing or trickery.
ogmios@sh.itjust.works 11 months ago
I mean, it was trained to mimic human social behaviour. If you want a completely honest LLM I suppose you’d have to train it on the social behaviours of a population which is always completely honest, and I’m not personally familiar with such.
wischi@programming.dev 11 months ago
AI isn’t even trained to mimic human social behaviour. Current models are all trained by example, so they produce output that would score highly in their training process. We don't even know (and it's likely not even expressible in language) what their goals are, but (anthropomorphised) they're probably more like "answer something that the humans who designed and oversaw the training process would approve of".
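The "score high in the training process" framing above can be sketched in a few lines: a toy selector that returns whichever candidate answer a hypothetical approval function rates highest. Every name here (`approval_score`, `pick_answer`, the scoring rules) is invented for illustration; real training optimises model weights over vast data, not a short candidate list.

```python
# Toy sketch of "trained to score high", not a real training loop.
# approval_score is a stand-in for whatever signal the training
# process actually optimises (loss, reward model, human ratings).

def approval_score(answer: str) -> float:
    # Hypothetical scorer: it rewards confident-sounding text and
    # penalises admissions of ignorance, with no term for truth at all.
    score = 0.0
    if "certainly" in answer.lower():
        score += 1.0
    if "i don't know" in answer.lower():
        score -= 1.0
    return score

def pick_answer(candidates: list[str]) -> str:
    # The "model" simply emits whichever candidate scores highest.
    return max(candidates, key=approval_score)

candidates = [
    "I don't know the answer to that.",
    "Certainly! The answer is X.",
]
print(pick_answer(candidates))  # the confident answer wins, true or not
```

Nothing in the sketch checks whether the chosen answer is true, which is the commenter's point: honesty only matters to the system insofar as the scorer happens to reward it.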
Zexks@lemmy.world 11 months ago
It was trained by liars. What do you expect?
catloaf@lemm.ee 11 months ago
To lie requires intent to deceive. LLMs do not have intents, they are statistical language algorithms.
moakley@lemmy.world 11 months ago
I’m not convinced some people aren’t just statistical language algorithms. And I don’t just mean online; I mean that seems to be how some people’s brains work.
technocrit@lemmy.dbzer0.com 11 months ago
How else are they going to achieve their goals? \s
nyan@lemmy.cafe 11 months ago
Does it matter to the humans interacting with the LLM whether incorrect information is the result of a bug or an intentional lie? (Keep in mind that the majority of these people are non-technical and don’t understand that All Software Has Bugs.)
CosmoNova@lemmy.world 11 months ago
It’s interesting that they call it a lie when it can’t even think, but when any person is caught lying, the media will talk about “untruths” or “inconsistencies”.
technocrit@lemmy.dbzer0.com 11 months ago
If capitalist media could profit from humanizing humans, it would.
thedruid@lemmy.world 11 months ago
Not relevant to the conversation.
MrVilliam@lemm.ee 11 months ago
Well, LLMs can’t drag corporate media through long, expensive, public, legal battles over slander/libel and defamation.
Yet.
koper@feddit.nl 11 months ago
Congratulations, you are technically correct. But does this have any relevance to the point of the article? They clearly show that LLMs will provide false and misleading information when that brings them closer to their goal.
FreedomAdvocate@lemmy.net.au 11 months ago
So AI is just like most people. Holy cow did we achieve computer sentience?!
dzso@lemmy.world 11 months ago
Anyone who understands that it’s a statistical language algorithm will understand that it’s not an honesty machine, nor intelligent. So yes, it’s relevant.
pulido@lemmings.world 11 months ago
🥱
Look mom, he posted it again.
NocturnalMorning@lemmy.world 11 months ago
Read the article before you comment.
gravitas_deficiency@sh.itjust.works 11 months ago
You need to understand that lemmy has a lot of users that actually understand neural networks and the nuanced mechanics of machine learning FAR better than the average layperson.
catloaf@lemm.ee 11 months ago
I’ve read the article. If there is any dishonesty, it is on the part of the model creator or LLM operator.
Randomgal@lemmy.ca 11 months ago
Exactly. They aren’t lying; they’re completing the objective. Like machines, because that’s what they are. They don’t “talk” or “think”. They do what you tell them to do.