Comment on Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases
Telorand@reddthat.com 1 year ago
“Mr. Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations,” Judge Dinsmore wrote in court documents filed last week.
Jesus Christ, y’all. It’s like Boomers trying to figure out the internet all over again. Just because AI (probably) can’t lie doesn’t mean it can’t be earnestly wrong. It’s not some magical fact machine; it’s fancy predictive text.
It will be a truly scary time if people like Ramirez become judges one day and have forgotten how or why it’s important to check people’s sources yourself, robot or not.
Ulrich@feddit.org 1 year ago
It can and will lie. It has admitted to doing so after I probed it long enough about the things it was telling me.
michaelmrose@lemmy.world 1 year ago
You can’t ask it about itself because it has no internal model of self and is just basing any answer on data in its training set
ryven@lemmy.dbzer0.com 1 year ago
Lying requires intent. Currently popular LLMs build responses one token at a time—when it starts writing a sentence, it doesn’t know how it will end, and therefore can’t have an opinion about the truth value of it. (I’d go further and claim it can’t really “have an opinion” about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output (and therefore potentially have an opinion about whether it is true or false) only after it has been generated, when generating the next token.
“Admitting” that it’s lying only proves that it has been exposed to “admission” as a pattern in its training data.
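To make the one-token-at-a-time point concrete, here is a minimal sketch of greedy autoregressive decoding. It assumes the Hugging Face transformers library and GPT-2 purely for illustration; any causal language model decodes the same way.

```python
# Minimal sketch: the model scores only the *next* token, committing to
# each word before it can "know" how the sentence will end.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The judge ruled that", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[0, -1]  # scores for the next token only
        next_id = torch.argmax(logits)     # greedy: take the likeliest one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
        # Only on the next pass does the model "see" what it just wrote.

print(tokenizer.decode(ids[0]))
```

Nothing in that loop evaluates whether the output is true; there is no step where truth could enter.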
ggppjj@lemmy.world 1 year ago
I strongly worry that humans really weren’t ready for this “good enough” product to be their first “real” interaction with what can easily pass as an AGI without near-philosophical knowledge of the difference between an AGI and an LLM.
It’s obscenely hard to keep the fact that it is a very good pattern-matching auto-correct in mind when you’re several comments deep into a genuinely actually no lie completely pointless debate against spooky math.
Ulrich@feddit.org 1 year ago
It knows the answer it’s giving you is wrong, and it will even say as much. I’d consider that intent.
sugar_in_your_tea@sh.itjust.works 1 year ago
Technically it’s not, because the LLM doesn’t decide to do anything, it just generates an answer based on a mixture of the input and the training data, plus some randomness.
That said, I think it makes sense to say it is lying if the text it generates convinces the user that it is lying.
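The “some randomness” is literal: sampling temperature. A self-contained toy illustration, with made-up scores standing in for the model’s real ones:

```python
# Toy temperature sampling: the same scores (logits) can yield different
# answers run to run, without any "decision" being made. Numbers invented.
import math
import random

logits = {"guilty": 2.0, "not guilty": 1.5, "banana": -3.0}

def sample(logits, temperature=1.0):
    scaled = {w: l / temperature for w, l in logits.items()}
    total = sum(math.exp(l) for l in scaled.values())
    probs = {w: math.exp(l) / total for w, l in scaled.items()}
    r, cumulative = random.random(), 0.0
    for word, p in probs.items():  # walk the cumulative distribution
        cumulative += p
        if r < cumulative:
            return word
    return word  # guard against floating-point rounding

print([sample(logits, temperature=0.7) for _ in range(5)])
```

Lower the temperature and the top choice dominates; raise it and the “banana” answers start showing up.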
ggppjj@lemmy.world 1 year ago
It is incapable of knowledge; it is math.
Bogasse@lemmy.ml 1 year ago
You don’t need any knowledge of computers to understand how big a deal it would be if we had actually built a reliable fact machine. For me, the only possible explanation is not caring enough to think about it for even a second.
morrowind@lemmy.ml 1 year ago
That’s fundamentally impossible. There’s always some baseline you trust that decides what is true
barsoap@lemm.ee 1 year ago
We actually did. The trouble is that you need experts to feed and update the thing, which works when you’re watching dams (which don’t need updating) but fails in e.g. medicine. Still, during the brief time those systems were up to date they did some astonishing stuff: plugged into the diagnosis loop, they would suggest additional tests to doctors, countering organisational blindness. Law is an even more complex matter, though, because applying it requires an unbounded amount of real-world knowledge, not just expert knowledge, so forget it.
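The expert systems being described boil down to hand-curated rules applied mechanically, roughly like this toy sketch (the rules are invented placeholders, not real medical guidance):

```python
# Toy expert system: experts write the rules, the machine just fires them.
# It can counter organisational blindness but goes stale without upkeep.
rules = [
    (lambda s: "fever" in s and "rash" in s, "suggest: blood culture"),
    (lambda s: "chest pain" in s, "suggest: ECG"),
]

def consult(symptoms: set) -> list:
    # Fire every matching rule; the doctor stays in the loop.
    return [advice for condition, advice in rules if condition(symptoms)]

print(consult({"fever", "rash"}))  # ['suggest: blood culture']
```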
ggppjj@lemmy.world 1 year ago
We did, a long time ago. It’s called an encyclopedia.
If humans can’t be trusted to only provide facts, how can we be trusted to make a machine that only provides facts? How do we deal with disputed truths? Grey areas?
webghost0101@sopuli.xyz 1 year ago
It’s actually been proven that AI can and will lie. When given the ability to cheat at a task, along with instructions not to use it, it will use the tool and flatly deny doing so.
Moose@moose.best 1 year ago
I don’t know if I would call it lying per se, but yes, I have seen instances of AIs being told not to use a specific tool and using it anyway; Neuro-sama comes to mind. I think in those cases it is mostly the front end agreeing not to use the tool (as that is what it determines the operator would want to hear) while having no means to actually control the other functions going on.
webghost0101@sopuli.xyz 1 year ago
Neuro-sama is a fun example, but we don’t really know the sauce Vedal cooked up.
When I say proven, I mean a 32-page research paper specifically looking into it.
They found that even a model trained specifically on honesty will lie if it has an incentive.
catloaf@lemm.ee 1 year ago
No probably about it, it definitely can’t lie. Lying requires knowledge and intent, and GPTs are just text generators that have neither.
milicent_bystandr@lemm.ee 1 year ago
I’m G P T and I cannot lie.
You other brothers use ‘AI’
But when you file a case
To the judge’s face
And say, “made mistakes? Not I!”
He’ll be mad!
ayyy@sh.itjust.works 1 year ago
🏅
DancingBear@midwest.social 1 year ago
So it can not tell the truth either
FiskFisk33@startrek.website 1 year ago
Not really, no. They are statistical models that use heuristics to output what is most likely to follow the input you give them.
They are in essence mimicking their training data
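“Mimicking their training data” can be demonstrated with something as crude as a bigram model. LLMs are vastly more sophisticated, but the principle (emit whatever most often followed the input) is the same; the corpus here is invented:

```python
# Toy bigram model: count what followed each word in the "training data",
# then always emit the most frequent successor. No facts, just frequencies.
from collections import Counter, defaultdict

corpus = ("the court ruled for the plaintiff . "
          "the court ruled for the defendant . "
          "the court ruled for the plaintiff .").split()

successors = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    successors[a][b] += 1

word, output = "the", ["the"]
for _ in range(6):
    word = successors[word].most_common(1)[0][0]  # likeliest next word
    output.append(word)

print(" ".join(output))  # parrots the dominant pattern in the corpus
```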
DancingBear@midwest.social 1 year ago
So I think this whole thing about whether it can lie or not is just semantics then no?
Bogasse@lemmy.ml 1 year ago
A bit out of context, but you remind me of some thinking I heard recently about lying vs. bullshitting.
Lying, as you said, requires quite a lot of energy: you need an idea of what the truth is, and you engage yourself in a long-term struggle to maintain your lie and keep it coherent as the world goes on.
Bullshit, on the other hand, is much more accessible: you just have to say things and never look back on them. It’s very easy to pile up a ton of it, and it’s much harder to attack you on any of it because each piece is much less consequential.
So in that view, a bullshitter doesn’t give any shit about the truth, while a liar is a bit more “noble”.
ggppjj@lemmy.world 1 year ago
I think the important point is that LLMs as we understand them do not have intent. They are fantastic at producing output that appears to meet the requirements set by the input text, and when they actually do meet those requirements instead of just seeming to, they can provide genuinely helpful info. But it’s very easy to miss the difference between output that merely looks correct, which satisfies the purpose of the LLM, and output that actually is correct, which satisfies the purpose of the user.
jordanlund@lemmy.world 1 year ago
It’s cool, they’ll just have an AI source checker. :)
Telorand@reddthat.com 1 year ago
I call mine a brain! 😉
4am@lemm.ee 1 year ago
AI, specifically Large Language Models, do not “lie” or tell “the truth”. They are statistical models that work out, based on the prompt you feed them, what a reasonable-sounding response would be.
This is why they’re uncreative and why they “hallucinate”. It’s not thinking about your question and answering it; it’s calculating what words will placate you, using a calculation that runs on a computer the size of AWS.
OccultIconoclast@reddthat.com 1 year ago
It’s like when you’re having a conversation on autopilot.
“Mum, can I play with my frisbee?” Sure, honey. “Mum, can I have an ice cream from the fridge?” Sure can. “Mum, can I invade Poland?” Absolutely, whatever you want.
joel_feila@lemmy.world 1 year ago
So ChatGPT started WW2
jayandp@sh.itjust.works 1 year ago
Don’t need something the size of AWS these days. I ran one on my PC last week. But yeah, you’re right otherwise.
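For reference, “ran one on my PC” can be as simple as the sketch below, assuming the Ollama runtime and its Python client are installed and a model has been pulled; the model name and prompt are illustrative, not from the comment.

```python
# Minimal sketch of querying a locally hosted model through Ollama's
# Python client. Assumes the Ollama server is running and that
# `ollama pull llama3.2` has been done beforehand (illustrative names).
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Define hearsay in one sentence."}],
)
print(response["message"]["content"])
```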