That’s because AI doesn’t know anything. All it does is make stuff up. This is called bullshitting, and lots of people do it, even as a deliberate pastime. There was even a fantastic Star Trek TNG episode where Data learned to do it!
The key to bullshitting is to never look back. Just keep going forward! Constantly constructing sentences from the raw material of thought. Knowledge is something else entirely: justified true belief. It’s not sufficient to merely believe things; we need some justification, however flimsy. This means that true knowledge isn’t merely a feature of our brains; it includes a causal relation between ourselves and the world, however distant that may be.
A large language model could at best be said to have a lot of beliefs but zero justification. After all, no one has vetted the gargantuan training sets that go into an LLM to make sure only facts are incorporated into the model. Thus the only indicator of a fact’s trustworthiness is that it’s repeated many times, in many different places, across the training set. But that’s no help for obscure facts or widespread myths!
sp3ctr4l@lemmy.dbzer0.com 11 months ago
As an Autist, I find it amazing that… after a lifetime of being compared to a robot, an android, a computer…
When humanity actually does manage to get around to creating """AI"""… the AI fundamentally acts nothing like the general stereotype of fictional AIs, which resemble how an Autistic mind tends to evaluate information…
No, no, instead, it acts like an Allistic, Neurotypical person who just confidently asserts and assumes things that it basically pulls out of its ass, never takes any time to consider its own limitations when it comes to correctly assessing context or domain-specific meanings, and essentially never asks for clarification…
Nope, it just barrels forward assuming its subjective interpretation of what you’ve said is the only objectively correct one, spouts out pithy nonsense… and then, if you actually press further and attempt to clarify what you actually meant, or ask it questions about itself and its own previous statements… it will gaslight the fuck out of you, even though its own contradictory / overconfident / unqualified hyperbolic statements are plainly evident, in text.