The source is just as vulnerable to being a hallucination as anything else it tells you.
Comment on Why AI is going to be a shitshow.
Alpha71@lemmy.world 8 months agoAs I mentioned, Copilot links the sources of the information it gives at the bottom. If you want to double-check the information, it is provided to you.
wewbull@feddit.uk 8 months ago
laurelraven@lemmy.blahaj.zone 8 months ago
So, when you go to check them… It’s not like the AI is going to hallucinate a valid registered domain with a webserver hosting the hallucinated source as well. Click the link; if it’s dead or fake, toss out that reply as suspect.
If you follow the source and find it’s valid, supports what the AI said, and is reasonably trustworthy, then you can consider what it has told you.
If it cites its sources, you have a way to check its math (so to speak).
wewbull@feddit.uk 8 months ago
You have a way to do so, yes, but you actually have to do it and we know people don’t. False sources can just make already believable responses more credible, despite them being full of rubbish.
laurelraven@lemmy.blahaj.zone 8 months ago
The person you were replying to was talking about checking those sources though.
Yes, fake sources can and will give people a false sense that a response is legit, but checking a “hallucinated” source will quickly make it clear that there’s nothing backing it up.
It’s a problem, but it’s one that an individual using it who’s aware of it does actually have a way of mitigating fairly easily.
Turun@feddit.de 8 months ago
I’m pretty sure when searching with AI the model gets told “here are five articles about <user search term>, summarize them and answer the following question: <user input>”
SMillerNL@lemmy.world 8 months ago
And somewhere in the Terms of Service it says you have to give up your first born child. Or maybe it doesn’t, but nobody will ever know because nobody reads more than is strictly required.