Submitted 1 week ago by cm0002@lemmy.world to technology@lemmy.world
https://www.theregister.com/2025/05/01/ai_models_lie_research/
The word “lying” would imply intent. Is this pseudocode, print “sky is green”, lying or just doing what it’s coded to do?
The one who is lying is the company running the AI.
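For illustration, here is the comment’s pseudocode as an actual runnable Python snippet (a minimal sketch; the string is just the example from the comment):

```python
# This program outputs a falsehood it was coded to output.
# "Lying" doesn't fit: there is no intent anywhere in it.
# Whatever blame exists belongs to whoever wrote and shipped the line.
print("sky is green")
```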
It’s lying whether you do it knowingly or not.
The difference is whether it’s intentional lying.
Lying is saying a falsehood; it can be either accidental or intentional.
The difference is in how bad we perceive it to be, but in this case I don’t see much point in that distinction, because an AI that lies is a bad AI no matter why it lies.
I just think lying is the wrong word to use here. Outputting false information would be better. It’s kind of nitpicky, but not really, since the choice of words affects how people perceive things. In this case it shifts the blame from the company to its product, and it also makes the product seem more capable than it is: when you say something is lying, you imply it’s intelligent enough to lie.
Actually no, “to lie” means to say something intentionally false.
These kinds of bullshit humanizing headlines are part of the grift.
Exactly. They aren’t lying, they are completing an objective. Like machines… because that’s what they are. They don’t “talk” or “think”; they do what you tell them to do.
Google and others used Reddit data to train their LLMs. That’s all you need to know about how accurate they will be.
That’s not to say they’re not useful, but you need to know how to use them, and understand that they are only tools to help, not sources to take as correct.
They paint this as if it were a step back, as if it doesn’t already copy human behaviour perfectly and isn’t in line with the technofascist goal. Sad news for the smartasses who thought they were getting a perfect Magic 8-Ball. Sike: get ready for fully automated troll farms to be 99% of the commercial web for the next decade(s).
Maybe the darknet will grow in its place.
It was trained by liars. What do you expect?
It’s not a lie if you believe it.
I mean, it was trained to mimic human social behaviour. If you want a completely honest LLM, I suppose you’d have to train it on the social behaviour of a population that is always completely honest, and I’m not personally familiar with such a population.
AI isn’t even trained to mimic human social behaviour. Current models are trained by example, so they produce output that would have scored highly in their training process. We don’t even know what their goals are (they’re likely not even expressible in language), but, anthropomorphised, they are probably closer to “answer something that the humans who designed and oversaw the training process would approve of”.
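A toy sketch of that point, with everything in it (the scoring function, the candidate answers) invented purely for illustration: a system trained to maximize a rater’s score will pick whatever scores highest, and truth never enters the objective.

```python
# Toy illustration of training-by-example: the objective is the rater's
# score, not truth. All names and data here are made up for the example.

candidates = [
    {"answer": "I don't know.", "factually_true": True},
    {"answer": "A long, confident, polished, and entirely false claim.",
     "factually_true": False},
]

def rater_score(answer: str) -> float:
    """Stand-in for human feedback: rewards confident-sounding text."""
    score = 0.1 * len(answer)        # fuller answers tend to score higher
    if "I don't know" in answer:
        score -= 5.0                 # raters tend to dislike non-answers
    return score

# "Training" keeps whatever scores highest; truth is not in the objective.
best = max(candidates, key=lambda c: rater_score(c["answer"]))
print(best["answer"], "| factually true:", best["factually_true"])
```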
This is the AI model that truly passes the Turing Test.
To be fair, the Turing test is a moving goalpost, because if you know that such systems exist, you’d probe them differently. I’m pretty sure that even the first public GPT release would have fooled Alan Turing personally, so I think it’s fair to say these systems have passed the test at least since that point.
But that’s kind of the point of the Turing test: a true AI with human-level intelligence distinguishes itself by not being susceptible to probing or trickery.
catloaf@lemm.ee 1 week ago
To lie requires intent to deceive. LLMs do not have intent; they are statistical language algorithms.
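To make “statistical language algorithm” concrete, here’s a minimal sketch (the toy vocabulary and probabilities are invented; a real model derives them from billions of parameters): generation is just repeatedly sampling the next token from a probability distribution, and nothing in the loop represents intent or belief.

```python
import random

# Invented toy distribution: P(next token | context "the sky is").
next_token_probs = {"blue": 0.80, "clear": 0.15, "green": 0.05}

def sample_next(probs: dict) -> str:
    """Sample one token according to the given probabilities."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights)[0]

context = "the sky is"
print(context, sample_next(next_token_probs))
# Occasionally this prints "the sky is green". That output is false,
# but there is no deceiver here, only a probability distribution.
```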
CosmoNova@lemmy.world 1 week ago
It’s interesting that they call it a lie when it can’t even think, but when any person is caught lying, the media will talk about “untruths” or “inconsistencies”.
MrVilliam@lemm.ee 1 week ago
Well, LLMs can’t drag corporate media through long, expensive, public legal battles over slander/libel and defamation.
Yet.
technocrit@lemmy.dbzer0.com 1 week ago
If capitalist media could profit from humanizing humans, it would.
thedruid@lemmy.world 1 week ago
Not relevant to the conversation.
koper@feddit.nl 1 week ago
Congratulations, you are technically correct. But does this have any relevance to the point of the article? They clearly show that LLMs will provide false and misleading information when that brings them closer to their goal.
dzso@lemmy.world 1 week ago
Anyone who understands that it’s a statistical language algorithm will understand that it’s not an honesty machine, nor intelligent. So yes, it’s relevant.
FreedomAdvocate@lemmy.net.au 1 week ago
So AI is just like most people. Holy cow did we achieve computer sentience?!
NocturnalMorning@lemmy.world 1 week ago
Read the article before you comment.
catloaf@lemm.ee 1 week ago
I’ve read the article. If there is any dishonesty, it is on the part of the model creator or LLM operator.
gravitas_deficiency@sh.itjust.works 1 week ago
You need to understand that Lemmy has a lot of users who actually understand neural networks and the nuanced mechanics of machine learning FAR better than the average layperson does.
moakley@lemmy.world 1 week ago
I’m not convinced some people aren’t just statistical language algorithms. And I don’t just mean online; I mean that seems to be how some people’s brains work.
nyan@lemmy.cafe 1 week ago
Does it matter to the humans interacting with the LLM whether incorrect information is the result of a bug or an intentional lie? (Keep in mind that the majority of these people are non-technical and don’t understand that All Software Has Bugs.)
pulido@lemmings.world 1 week ago
🥱
Look mom, he posted it again.
technocrit@lemmy.dbzer0.com 1 week ago
How else are they going to achieve their goals? \s