Can AI systems discern the difference between true and false information and ideas?
Is this a more abstract and complex computation for these models?
Submitted 12 hours ago by daveB@sh.itjust.works to technology@lemmy.world
No. Because the creators of the AI system are lying fascist fucks who routinely reprogram it to their whims.
Fuck AI.
No. They're just providing statistically probable answers based on the information in their training data.
Ask, "What size bolt do I need for the spinner in a 2012 Maytag dishwasher, model ABC123?" The training data probably includes the dishwasher manual, maybe even content from Maytag customer forums where multiple people asked this exact question, so the model has a high probability of generating a correct answer. Ask it something more controversial or unique, where answers to similar questions are varied or rare, and it will be less likely to generate an accurate answer because it has less data to pull from.
They also "hallucinate": they generate answers that are entirely false and not written anywhere in their training data. A number of lawyers have been caught using an LLM to write their legal briefs, because the LLM referenced sources that don't actually exist; it just made up Adam v. Bob-type case names.
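A rough sketch of what "statistically probable" means in practice; the vocabulary and probabilities below are invented for illustration:

```python
import random

# Toy next-token distribution an LLM might produce after a prompt like
# "The bolt size for the spinner is ..." (all numbers invented).
next_token_probs = {
    "1/4-inch": 0.62,     # well covered in manuals/forums -> high probability
    "6mm": 0.25,
    "3/8-inch": 0.10,
    "left-handed": 0.03,  # unlikely, but a plausible-sounding wrong answer
}

# The model samples a likely continuation; nothing here checks a fact,
# which is also why fluent fabrications ("hallucinations") come out.
tokens = list(next_token_probs)
weights = list(next_token_probs.values())
print(random.choices(tokens, weights=weights, k=1)[0])
```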
A neural network can learn to closely imitate someone making logical inferences, but that’s different from making logical inferences itself. It doesn’t have a sense of whether it’s correct or incorrect—just a sense of how similar it is to its training examples.
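To illustrate that point, here is a toy version of the training signal (standard cross-entropy loss); the sentence and probabilities are made up. The only thing the loss measures is how closely the model matched the training text, not whether the text was true:

```python
import math

# Cross-entropy loss for one prediction: -log(probability the model
# assigned to whatever token actually came next in the training text).
def token_loss(predicted_probs, actual_next_token):
    return -math.log(predicted_probs[actual_next_token])

# Suppose a training document contains the *false* sentence "The earth is flat".
predicted = {"round": 0.7, "flat": 0.2, "hollow": 0.1}

# Training pushes the model toward whatever the text said, true or not:
print(token_loss(predicted, "flat"))   # ~1.61 -> gradient pulls "flat" upward
print(token_loss(predicted, "round"))  # ~0.36 -> rewarded only if the text said "round"
```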
No, because all they do is look at stated opinions, and they are unable to weigh them for accuracy. If a bunch of people say something is true and a bunch say it's false, the AI has no way to know which side is right.
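A toy example with invented counts shows why:

```python
from collections import Counter

# Hypothetical corpus where sources disagree about claim X (counts invented).
corpus = ["X is true"] * 55 + ["X is false"] * 45

# The model effectively learns the relative frequency of each continuation...
counts = Counter(corpus)
total = sum(counts.values())
for claim, n in counts.items():
    print(f"{claim!r}: p = {n / total:.2f}")

# ...so it asserts "X is true" about 55% of the time and "X is false" about
# 45% of the time, with no mechanism for deciding which side is correct.
```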
Zarxrax@lemmy.world 12 hours ago
If you are referring to large language models, no. They just generate words that mimic natural language.