Comment on Fears for patient safety as GPs use ChatGPT to diagnose and treat illness
Streetlights@lemmy.world 1 month ago
20 years ago there were complaints that GPs were using Google; now it's normal. Can't help but feel the same will happen here.
Swedneck@discuss.tchncs.de 1 month ago
To be fair, back then Google just showed you what you searched for; I'm not as happy about people googling stuff these days. With AI we already know that it tends to make shit up, and it may well get worse as models start being trained on their own output.
echodot@feddit.uk 1 month ago
Actually, hallucinations have gone down as AI training has improved, mostly through things like prompting models to provide evidence. When you prompt them to cite evidence, they're much less likely to hallucinate in the first place.
The problem really comes down to how the older AIs were trained. They were basically trained on data where a question was asked and a response was given; nowhere in the data set was there a question whose answer was "I'm sorry, I don't know," so the AI was unintentionally taught that it is never acceptable to leave a question unanswered. More modern AIs have been trained better and told that declining to answer is acceptable. On top of that, they can now perform internet searches, so, like a human, they can go look up data they recognize is missing from their training set.
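The difference described above can be sketched as a toy in Python. This is not a real language model, just an illustration of the two behaviors: a model that must always produce an answer (and so makes one up), versus one that is allowed to refuse and can fall back to an external lookup. The dictionaries and function names here are invented for the example.

```python
# Toy illustration only -- not a real LLM. The dicts stand in for
# "training data" and "a web search"; both are made up for this sketch.

KNOWN_FACTS = {"capital of france": "Paris"}           # what the model "knows"
EXTERNAL_SOURCE = {"capital of australia": "Canberra"}  # what a search could find

def always_answer(question: str) -> str:
    """Old-style model: refusing was never in its training data,
    so an unknown question still gets a confident (wrong) answer."""
    return KNOWN_FACTS.get(question.lower(), "London")  # confident nonsense

def answer_or_look_up(question: str) -> str:
    """Newer-style model: may refuse, and can consult an external source."""
    q = question.lower()
    if q in KNOWN_FACTS:
        return KNOWN_FACTS[q]
    if q in EXTERNAL_SOURCE:      # "go look up data" it doesn't have
        return EXTERNAL_SOURCE[q]
    return "I don't know."       # refusal is an acceptable answer now

print(always_answer("Capital of Australia"))     # -> "London" (hallucination)
print(answer_or_look_up("Capital of Australia")) # -> "Canberra" (looked it up)
```

The point of the sketch: the first function hallucinates purely because its "training" never included a refusal path, which is the structural problem the comment describes.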
That being said, Google’s AI is an idiot.
TheGrandNagus@lemmy.world 1 month ago
You’re right. Within 10 seconds I found an article from 2006 saying exactly that; earlier ones likely exist.