So it cannot tell the truth either.
Comment on Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases
catloaf@lemm.ee 6 days ago
No “probably” about it: it definitely can’t lie. Lying requires knowledge and intent, and GPTs are just text generators that have neither.
DancingBear@midwest.social 6 days ago
FiskFisk33@startrek.website 6 days ago
Not really, no. They are statistical models that use heuristics to output what is most likely to follow the input you give them.
They are, in essence, mimicking their training data.
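That “most likely to follow the input” idea can be shown with a toy sketch. The following is a hypothetical bigram model, vastly simpler than a real LLM (which predicts tokens with a neural network over a huge corpus), but the mechanism is the same in spirit: count what followed each word in the training data, then sample continuations from those counts. There is no knowledge of truth anywhere, just frequencies. The corpus here is made up for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" (hypothetical example text).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training data.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample a continuation in proportion to how often it followed
    `word` in the training data -- no knowledge, no intent."""
    counts = following[word]
    if not counts:
        return None  # dead end: this word never had a successor
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate text by repeatedly sampling a likely continuation.
word = "the"
output = [word]
for _ in range(5):
    nxt = next_word(word)
    if nxt is None:
        break
    output.append(nxt)
    word = nxt
print(" ".join(output))
```

Whatever the model prints is grammatical-looking mimicry of the corpus; whether it happens to be “true” never enters into it, which is the point being made above.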
DancingBear@midwest.social 6 days ago
So I think this whole thing about whether it can lie or not is just semantics then, no?
FiskFisk33@startrek.website 6 days ago
Everything is semantics.
Lying is telling a falsehood intentionally.
LLMs clearly lack the prerequisite intentionality.
milicent_bystandr@lemm.ee 6 days ago
I’m G P T and I cannot lie.
You other brothers use ‘AI’
But when you file a case
To the judge’s face
And say, “made mistakes? Not I!”
He’ll be mad!
ayyy@sh.itjust.works 5 days ago
🏅
Bogasse@lemmy.ml 6 days ago
A bit out of context, but you remind me of some thinking I heard recently about lying vs. bullshitting.
Lying, as you said, requires quite a lot of energy: you need an idea of what the truth is, and you commit yourself to a long-term struggle to maintain your lie and keep it coherent as the world moves on.
Bullshit, on the other hand, is much more accessible: you just say things and never look back on them. It’s very easy to pile up a ton of it, and it’s much harder to attack you over any single piece because each one is so inconsequential.
So in that view, a bullshitter doesn’t give any shit about the truth, while a liar is a bit more “noble”.
ggppjj@lemmy.world 6 days ago
I think the important point is that LLMs, as we understand them, do not have intent. They are fantastic at producing output that appears to meet the requirements set in the input text, and when the output actually does meet those requirements instead of just seeming to, it can be genuinely helpful. But it’s very easy to not immediately know the difference between output that merely looks correct, which satisfies the purpose of the LLM, and output that actually is correct, which satisfies the purpose of the user.