Comment on "Court Bans Use of 'AI-Enhanced' Video Evidence Because That's Not How AI Works"

Natanael@slrpnk.net 7 months ago
There's a lot of other layers in brains that are missing in machine learning. These models don't form world models, don't have an understanding of facts, and have no means of ensuring consistency, to start with.

lightstream@lemmy.ml 7 months ago
They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.

Natanael@slrpnk.net 7 months ago
Statistical associations are not equivalent to a world model, especially since they're not deterministic and don't even try to prevent giving out conflicting answers. They model only the use of language.
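The "no consistency" point can be made concrete with a toy sketch (my illustration, not from the thread): a purely statistical next-word table, sampled with randomness the way LLM decoding is. Nothing in it checks answers against a single world state, so the same question can draw contradictory answers.

```python
# Toy bigram "model": stores next-word frequencies from a tiny corpus that
# contains two conflicting statements. Sampling from it is non-deterministic,
# and nothing enforces consistency between answers.
import random

corpus = "the sky is blue . the sky is red .".split()

# Build the next-word table (bigram statistics).
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def answer(prompt_word, rng):
    """Sample a continuation, like temperature sampling in LLM decoding."""
    return rng.choice(table[prompt_word])

rng = random.Random(0)
answers = {answer("is", rng) for _ in range(1000)}
# Repeated identical queries yield both continuations: almost surely
# {'blue', 'red'}. The model captures how the words are used, not which
# claim is true -- there is no world state to be consistent with.
print(answers)
```

Whether that table-of-associations picture scales up to "no world model at all" in large transformers is exactly what the two commenters disagree about; the sketch only shows the failure mode Natanael describes.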
rdri@lemmy.world 7 months ago
I mean, if we consider just the reconstruction process used in digital photos, it feels like current AI models are already very accurate and wouldn't improve much even if we made them closer to real "intelligence".
The point is that reconstruction itself can't produce missing details, not that a "properly intelligent" mind would be any better at it than current AI.