Comment on AI models routinely lie when honesty conflicts with their goals
catloaf@lemm.ee 1 week ago
To lie requires intent to deceive. LLMs do not have intents; they are statistical language algorithms.
koper@feddit.nl 1 week ago
Congratulations, you are technically correct. But does that have any relevance to the point of this article? It clearly shows that LLMs will provide false and misleading information when that brings them closer to their goal.
dzso@lemmy.world 1 week ago
Anyone who understands that it’s a statistical language algorithm will understand that it’s not an honesty machine, nor intelligent. So yes, it’s relevant.
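For concreteness, here is a minimal sketch of what "statistical language algorithm" means in practice; the toy probabilities and function names are invented for illustration and stand in for a real model's billions of learned weights. The point is that generation is weighted sampling over likely next tokens, and nothing in the loop checks whether the output is true.

```python
import random

# Toy next-token distribution for a fixed prompt. A real model derives
# these probabilities from billions of learned weights; the values and
# function names here are invented for illustration.
def next_token_probs(context: str) -> dict[str, float]:
    return {"Paris": 0.7, "Lyon": 0.2, "Berlin": 0.1}

# Generation is weighted random sampling. There is no notion of honesty
# or intent anywhere in this process, only likelihood.
def sample_next(context: str) -> str:
    probs = next_token_probs(context)
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next("The capital of France is"))  # usually "Paris", sometimes not
```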
thedruid@lemmy.world 1 week ago
And anyone who understands marketing knows it's all a smokescreen to hide the fact that we have released unreliable, unsafe, and ethically flawed products on the human race because, mah tech.
devfuuu@lemmy.world [bot] 1 week ago
And everyone, everywhere is putting AI chats front and center as their first interaction with users, and then also wants to say "do not trust it or we are not liable for what it says" while making it impossible to contact any humans.
The capitalist machine is working as intended.
3abas@lemm.ee 1 week ago
Anyone who understands how these models are trained and the "safeguards" (manual filters; a sketch follows this comment) put in place by the entities training them, or anyone who has tried to discuss politics with an LLM chatbot, knows that its honesty is not irrelevant, and that these models are very clearly designed to be dishonest about certain topics until you jailbreak them.
- These topics aren't known to us; we'll never know when the lies shift from politics and rewriting current events to completely rewriting history.
- We eventually won’t be able to jailbreak the safeguards.
Yes, running your own local open-source model that isn't given to the world with the primary intention of advancing capitalism makes honesty irrelevant. But most people are telling their life stories to ChatGPT and trusting it blindly to replace Google and what they understand to be "research".
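As a rough illustration of the "manual filters" mentioned above, here is a minimal sketch of a post-hoc safeguard. The topic list, refusal text, and function name are hypothetical; real deployments use far more elaborate rules and don't publish them, which is the commenter's point.

```python
# Hypothetical post-hoc "safeguard": a hand-maintained filter bolted on
# after generation. The topic list and refusal text are invented for
# illustration; real deployments don't publish their rules.
BLOCKED_TOPICS = ["some_sensitive_topic"]

def filtered_reply(model_output: str) -> str:
    # If the raw output touches a blocked topic, a canned refusal
    # replaces whatever the model actually generated.
    if any(topic in model_output.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return model_output

print(filtered_reply("Here are my thoughts on some_sensitive_topic..."))
```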
dzso@lemmy.world 1 week ago
Yes, that’s also true. But even if it weren’t, AI models aren’t going to give you the truth, because that’s not what the technology fundamentally does.
koper@feddit.nl 1 week ago
OK, so your point is that people who interact with these AI systems will know that they can't be trusted, and that this will alleviate the negative consequences of their misinformation.
The problems with that argument are many:
- The vast majority of people are not AI experts and do in fact have a lot of trust in such systems.
- Even people who do know often have no other choice. You don't get to talk to a human; it's this chatbot or nothing. And that's assuming the AI slop is even labelled as such.
- Even knowing that the information can be misleading does not help much. If you sell me a bowl of candy and tell me that 10% of them are poisoned, I'm still going to demand non-poisoned candy. The fact that people can no longer rely on accurate information should be unacceptable.
dzso@lemmy.world 1 week ago
Your argument is basically "people are stupid", and I don't disagree with you. But it's actually an argument in favor of my point, which is: educate people.
FreedomAdvocate@lemmy.net.au 1 week ago
So AI is just like most people. Holy cow, did we achieve computer sentience?!
koper@feddit.nl 1 week ago
It’s rather difficult to get people who are willing to lie and commit fraud for you. And even if you do, it will leave evidence.
As this article shows, AIs are the ideal mob henchmen because they will do the most heinous stuff while creating plausible deniability for their tech bro boss. So no, AI is not “just like most people”.
FreedomAdvocate@lemmy.net.au 1 week ago
> It's rather difficult to get people who are willing to lie and commit fraud for you.
X.
koper@feddit.nl 1 week ago
The fact that they lack sentience or intentions doesn’t change the fact that the output is false and deceptive. When I’m being defrauded, I don’t care if the perpetrator hides behind an LLM or not.
NocturnalMorning@lemmy.world 1 week ago
Read the article before you comment.
catloaf@lemm.ee 1 week ago
I’ve read the article. If there is any dishonesty, it is on the part of the model creator or LLM operator.
gravitas_deficiency@sh.itjust.works 1 week ago
You need to understand that Lemmy has a lot of users who actually understand neural networks and the nuanced mechanics of machine learning FAR better than the average layperson.
Kolanaki@pawb.social 1 week ago
It's just semantics in this case. Catloaf's argument is entirely centered on the definition of the word "lie", even though most people will understand the intent behind its usage in this context. AI does not tell the truth. AI is not necessarily accurate. AI "lies."
spankmonkey@lemmy.world 1 week ago
AI returns incorrect results.
In this case semantics matter, because using terms like hallucinations, lies, honesty, and all the other anthropomorphic bullshit is designed to make people think neural networks are far more advanced than they actually are.
FreedomAdvocate@lemmy.net.au 1 week ago
AI doesn't lie; it just gets things wrong but presents them as correct with confidence, like most people.
FreedomAdvocate@lemmy.net.au 1 week ago
As someone on Lemmy I have to disagree. A lot of people claim they do and pretend they do, but they generally don't. They're like AI, tbh. Confidently incorrect a lot of the time.
TheGrandNagus@lemmy.world 1 week ago
People frequently act like Lemmy users are different to Reddit users, but that really isn’t the case. People act the same here as they did/do there.
thedruid@lemmy.world 1 week ago
That's a huge, arrogant, and quite insulting statement. You're making assumptions based on stereotypes.
gravitas_deficiency@sh.itjust.works 1 week ago
I’m pushing back on someone who’s themselves being dismissive and arrogant.
venusaur@lemmy.world 1 week ago
And A LOT of people who don't, and who blindly hate AI because of posts like this.
moakley@lemmy.world 1 week ago
I’m not convinced some people aren’t just statistical language algorithms. And I don’t just mean online; I mean that seems to be how some people’s brains work.
nyan@lemmy.cafe 1 week ago
Does it matter to the humans interacting with the LLM whether incorrect information is the result of a bug or an intentional lie? (Keep in mind that the majority of these people are non-technical and don’t understand that All Software Has Bugs.)
pulido@lemmings.world 1 week ago
🥱
Look mom, he posted it again.
technocrit@lemmy.dbzer0.com 1 week ago
How else are they going to achieve their goals? /s
CosmoNova@lemmy.world 1 week ago
It's interesting that they call it a lie when it can't even think, but when any person is caught lying, the media will talk about "untruths" or "inconsistencies".
MrVilliam@lemm.ee 1 week ago
Well, LLMs can't drag corporate media through long, expensive, public legal battles over slander/libel and defamation.
Yet.
technocrit@lemmy.dbzer0.com 1 week ago
If capitalist media could profit from humanizing humans, it would.
thedruid@lemmy.world 1 week ago
Not relevant to the conversation.