Comment on AI finds errors in 90% of Wikipedia's best articles
kalkulat@lemmy.world 2 days ago
Finding inconsistencies is not so hard. Pointing them out might be a -little- useful. But resolving them based on trustworthy sources can be a -lot- harder. Most science papers require privileged access. Many news stories may have been grounded in old, mistaken histories … if not in outright guesses, distortions or even lies. (The older the history, the worse.)
And since LLMs are usually incapable of citing sources for their own (often batshit) claims anyway – where will ‘the right answers’ come from? I’ve seen LLMs, when questioned again, apologize that their previous answers were wrong.
architect@thelemmy.club 1 day ago
Which LLMs are incapable of citing sources?
kalkulat@lemmy.world 8 hours ago
To quote ChatGPT:
“Large Language Models (LLMs) like ChatGPT cannot accurately cite sources because they do not have access to the internet and often generate fabricated references. This limitation is common across many LLMs, making them unreliable for tasks that require precise source citation.”
jacksilver@lemmy.world 1 day ago
All of them. If you’re seeing sources cited, it means it’s RAG (retrieval-augmented generation: an LLM with extra bits bolted on). The extra bits make a big difference, because the response is limited to a select few retrieved references instead of drawing on everything the model absorbed about the subject.
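To make the “extra bits” concrete, here’s a minimal toy sketch of the retrieval step. The corpus, the keyword-overlap scoring, and the prompt format are all made up for illustration; the point is that the model only ever sees the handful of passages you hand it, which is why its citations can point at real sources instead of being invented.

```python
# Toy RAG sketch: retrieve a few passages, then hand ONLY those to the model
# and ask it to cite them by id. Everything here is a stand-in for a real
# retriever + LLM call.

def retrieve(query: str, corpus: list[dict], k: int = 2) -> list[dict]:
    """Rank passages by naive keyword overlap with the query, keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[dict]) -> str:
    """The model sees only these passages, so citations refer to real sources."""
    sources = "\n".join(f"[{d['id']}] {d['text']}" for d in passages)
    return (
        "Answer using ONLY the sources below and cite them by id.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {query}\nAnswer:"
    )

# Tiny hypothetical corpus standing in for a document index.
corpus = [
    {"id": "enwiki-1", "text": "The Eiffel Tower was completed in 1889 for the World's Fair."},
    {"id": "enwiki-2", "text": "The Statue of Liberty was dedicated in 1886 in New York Harbor."},
]

question = "When was the Eiffel Tower completed?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)  # a real system would pass this prompt to the LLM; the model itself is unchanged
```

The LLM in that setup is the same old next-token predictor; the reliability comes from the retrieval wrapper constraining what it can talk about and what it can cite.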