Comment on Firefox is Getting a New AI Browsing Mode
Lemminary@lemmy.world 14 hours agoThat’s assuming the AI won’t look at the results and still make shit up. I’ve used AI-assisted search and I know that it’s not reliable.
riskable@programming.dev 2 hours ago
Ok, how would that work?

“find me some good recipes for hibachi style ginger butter”

AI model returns 10 links, 4 of which don’t actually exist (because it hallucinated them)? No. If they didn’t exist, it wouldn’t have returned them, because it wouldn’t have been able to load those URLs.
It’s possible that it could get it wrong because of some new kind of LLM scamming method, but that’s not “making shit up”, that’s malicious URLs.
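(Not from the thread, just illustrating the argument: if the browser actually tries to load each URL before showing it, hallucinated links get filtered out. A minimal sketch of that check, with a hypothetical `url_resolves` function; the checker is injectable so the filtering logic can be tested without hitting the network.)

```python
from typing import Callable, List
import urllib.request


def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers an HTTP request with a non-error status."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        # DNS failure, timeout, 4xx/5xx, etc. -- treat as nonexistent
        return False


def filter_live_links(urls: List[str],
                      check: Callable[[str], bool] = url_resolves) -> List[str]:
    """Keep only the links the checker confirms actually load."""
    return [u for u in urls if check(u)]
```

Whether any given AI browsing mode actually does this verification step is the open question, of course.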
Lemminary@lemmy.world 1 hour ago
And yet I’ve had Bing’s Copilot/ChatGPT (with plugins like Consensus), Gemini, and Perplexity do exactly that, but worse. Sometimes they’ll cite sources that don’t mention anything related to the answer they’ve provided, because the information they’re giving is based on other training data they can’t source. They were asked to provide a source, but they won’t necessarily give you the source. Hell, sometimes they’ll answer an adjacent question just to spit out an answer, any answer, to fulfill the request.
LLMs are simply not the appropriate tool for the job. This is most obvious when you need specificity and accuracy.