Comment on Lawyers increasingly have to convince clients that AI chatbots give bad advice
a4ng3l@lemmy.world 11 hours ago
Yeah well, the same applies to a lot of tools… I’m not certified to fly a plane, and look at me not flying one either… but I’m not shitting on planes…
ToTheGraveMyLove@sh.itjust.works 10 hours ago
But planes don’t routinely spit out false information.
a4ng3l@lemmy.world 10 hours ago
If you can’t fly a plane, chances are you’ll crash it. If you can’t use LLMs, chances are you’ll get shit out of them… the outcome of using a tool is directly correlated to one’s ability?
Sounds logical enough to me.
DrunkenPirate@feddit.org 9 hours ago
Sure. However, the output of an LLM always looks plausible. And if you aren’t a subject matter expert, that plausible-looking result looks very right. That’s the difference: the wrong bits are hard to spot (even for experts).
a4ng3l@lemmy.world 9 hours ago
So are a speedometer and an altimeter, until you reaaaaaaaaly need to understand them.
I mean, it all boils down to using the proper tool with the proper knowledge and ability. It’s slightly exacerbated by the apparent simplicity, but if you look at it as a tool, it’s no different.
ToTheGraveMyLove@sh.itjust.works 8 hours ago
Except with a plane, if you know how to fly it, you’re far less likely to crash it. Even if you “can use LLMs”, there’s still a pretty strong chance you’re going to get shit back, due to their very nature. With one, the machine works with you; with the other, the machine is always working against you.
a4ng3l@lemmy.world 8 hours ago
Nah, that’s just plain wrong… you can also fantastically screw up flying a plane, but as long as you use LLMs safely, you’re golden.
It also has no will of its own; it is not “working against you”. Don’t give those apps a semblance of intent.