Comment on Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases
Bogasse@lemmy.ml 6 days ago
A bit out of context, but you remind me of some thinking I heard recently about lying vs. bullshitting.
Lying, as you said, requires quite a lot of energy: you need an idea of what the truth is, and you commit yourself to a long-term struggle to maintain your lie and keep it coherent as the world goes on.
Bullshit, on the other hand, is much more accessible: you just say things and never look back on them. It’s very easy to pile up a ton of it, and it’s much harder to attack you on any of it because each piece is much less consequential.
So in that view, a bullshitter doesn’t give any shit about the truth, while a liar is a bit more “noble”.
ggppjj@lemmy.world 6 days ago
I think the important point is that LLMs, as we understand them, do not have intent. They are fantastic at producing output that appears to meet the requirements set in the input text, and when they actually do meet those requirements instead of just seeming to, they can provide genuinely helpful information. But it’s very easy to miss the difference between output that merely looks correct, satisfying the purpose of the LLM, and output that actually is correct, satisfying the purpose of the user.