Comment on Lawyer caught using AI-generated false citations in court case penalised in Australian first
eureka@aussie.zone 5 days ago
It’s kind of surprising how often it just confidently spews out sentences which seem plausible but are completely incorrect.
To me, it’s not surprising at all. It’s trained to talk like its training data talks, how people talk. Very loosely speaking, it’s a “common sense” generator, and if there are topics that you’re experienced with and you look at a site like reddit talking about it, you soon realise how normal it is for people to be confidently incorrect.
And on that note, it’s been seriously worrying to me how people seem to trust and anthropomorphise computers. It’s been a problem since at least the '60s but the advent of Artificial so-called Intelligence has revealed how dangerous it is.
Unless a bot is trained with curated data (like some medical imaging ones, for example), it shouldn’t be believed. And even then it shouldn’t be fully trusted.
null_dot@lemmy.dbzer0.com 5 days ago
I agree for the most part.
“Surprising” is perhaps the wrong word. If you have even a vague understanding of how these work, then nothing is really surprising. However, using a bot day to day and learning how to integrate it into your workflow, you get used to a certain level of quality, but occasionally (regularly?) run into something that doesn’t meet your expectations.
I agree that the way some people are interacting with these LLMs is… odd. However, people engage in so many odd behaviours that I have to say: if they’re not harming anyone, then have at it.
naevaTheRat@lemmy.dbzer0.com 5 days ago
Don’t Gell-Mann yourself.
If it spits out plausible-looking but incorrect things that you notice with high frequency, how much do you not notice?
null_dot@lemmy.dbzer0.com 4 days ago
I’m just not using Gen AI that way.
Like, I don’t ask it to provide me with technical details; rather, I provide the details and ask it to re-phrase.