If a chatbot gives you bad advice, it’s your responsibility. If a lawyer gives you bad advice, it’s the lawyer’s responsibility.
Lawyers increasingly have to convince clients that AI chatbots give bad advice
Submitted 9 hours ago by Beep@lemmus.org to technology@lemmy.world
https://nltimes.nl/2026/02/16/lawyers-increasingly-convince-clients-ai-chatbots-give-bad-advice
Comments
qwestjest78@lemmy.ca 6 hours ago
I find it useless for even basic tasks. The fact that some people follow it blindly like a god is so concerning.
ageedizzle@piefed.ca 4 hours ago
I work in a health-care-adjacent industry and you’d be surprised how many people blindly follow LLMs for medical advice
a4ng3l@lemmy.world 4 hours ago
It’s been doing wonders helping me improve the materials I produce so that they better fit certain audiences. I can also use it to spot missing points and inconsistencies against the ton of documents we have in my shop when writing something. It’s quite useful as a sparring partner so far.
mech@feddit.org 1 hour ago
Rule of thumb is that AI can be useful if you use it for things you already know.
They can save time and if they produce shit, you’ll notice.
Don’t use them for things you know nothing about.
The_Almighty_Walrus@lemmy.world 3 hours ago
It’s great when you have basic critical thinking skills.
Unfortunately, many people don’t have those and just use AI as a substitute for their own brain.
ParlimentOfDoom@piefed.zip 3 hours ago
Rearranging text is a vastly different use case than diagnosis and relevant information retrieval
phoenixz@lemmy.ca 2 hours ago
That’s not a new thing; doctors have had this for at least a decade with WebMD.
No, you don’t have cancer
pinball_wizard@lemmy.zip 7 hours ago
Yes please. More folks need to go all in on the idiocy of trusting an AI for legal advice. Let’s get this public lesson over with.
This is one of the cases where they can simply be a hilarious example for the rest of us, rather than getting a bunch of the rest of us killed.
melfie@lemy.lol 5 hours ago
With a coding agent, it’s wrong a lot and the code is usually terrible, but it can arrive at working code when proper tests create a feedback loop.
How does that go with legal work? Well, it turns out that was mostly made-up bullshit and the judge handed down a jail sentence for contempt of court, but once I get out, I’ll generate some more slop that will hopefully go over better next time.
Eternal192@anarchist.nexus 7 hours ago
Honestly, if you are that dependent on A.I. now, while it’s still in a test phase, then you are already lost. A.I. won’t make us smarter; if anything, it has the opposite effect.
kescusay@lemmy.world 6 hours ago
I’m watching that happen in my industry (software development). There’s this massive pressure campaign by damn near everyone’s employers in software dev to use LLM tools.
It’s causing developers to churn out terrible, fragile, unmaintainable code at a breakneck pace, while they’re actively forgetting how to code for themselves.
Archer@lemmy.world 7 hours ago
Seems pretty simple to me: you pay lawyers so that you don’t have to pay even more by getting legally screwed over. Why cheap out on the insurance policy against bigger losses and risk it all collapsing?
sqgl@sh.itjust.works 2 hours ago
Lawyers are no guarantee. They are sloppy because they have no skin in the game, and they usually get paid regardless (although some have “uplift” fees which reward them for winning).
It is like hiring builders for your renovation. You still have to keep an eye on them and even tell them how to do their job, which of course is always a tense situation.
Best avoid situations which need a lawyer. Do not litigate lightly.
DGen@piefed.zip 3 hours ago
People do not make a difference here. You can use it for help, guidance, or whatever.
But never trust it, especially with law. Fact-check.
Well. But look at this cat playing a trombone.
sirico@feddit.uk 7 hours ago
let them find out
whotookkarl@lemmy.dbzer0.com 1 hour ago
Everyone with domain-specific knowledge*