It’s a case of “emergent properties”: so many things are happening under the hood that even the people building these systems don’t understand exactly how it’s doing what it’s doing, or why one type of model performs better on a particular class of problems than another.
The equations of the Lorenz attractor are simple and well studied, but its output is far from predictable, and even those who study it are at a loss to say “where it’s going to go next” with any precision.
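To make that concrete, here is a minimal sketch of the Lorenz system’s sensitivity to initial conditions, using a simple forward-Euler integration (the step size, step count, and the tiny perturbation are illustrative choices, not anything from the thread):

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations
    # (sigma, rho, beta are the classic chaotic parameter values).
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, steps=3000):
    # Integrate from a given starting x; y and z start fixed.
    x, y, z = x0, 1.0, 1.05
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

a = trajectory(1.0)
b = trajectory(1.0000001)  # perturb the starting point by one part in ten million
print(a)
print(b)  # the two trajectories typically end up nowhere near each other
```

Despite the rules being three short deterministic equations, a difference of 1e-7 in the starting point grows until the two runs are completely decorrelated: you can know the equations perfectly and still be unable to predict the long-run state.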
Kyrgizion@lemmy.world 6 days ago
Same way anesthesiology works. We don’t know: we know how to sedate people, but we have no idea why it works. AI is much the same. That doesn’t mean it’s sentient yet, but to call it merely a text predictor is also selling it short. It’s a black box under the hood.
Coldcell@sh.itjust.works 6 days ago
Writing code to process data is absolutely not the same way anesthesiology works 😂 Comparing state-specific, logic-bound systems to the messy biological processes of a nervous system is what gets us this misattribution of “AI” in the first place. Currently it is just glorified auto-correct working off statistical data about human language. I’m still not sure how a written program can have a voodoo spooky black box that does things we don’t understand as a core part of it.
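For what the “statistical data about human language” framing means at its simplest, here is a toy bigram next-word predictor. (This is a deliberately crude sketch: real LLMs are vastly more complex, and the corpus here is made up, but the “pick the statistically likely continuation” idea is the same.)

```python
from collections import Counter, defaultdict

# Tiny toy corpus; a real model is trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: following[w][next_word] = count.
following = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    following[w][nxt] += 1

def predict(word):
    # Return the most common word seen after `word`, or None if unseen.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # prints "cat": it follows "the" twice, vs once for "mat"/"fish"
```

The “auto-correct” jab corresponds to this counting step; the open question in the thread is whether stacking enough of this kind of statistical machinery produces something qualitatively more than it.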
irmoz@lemmy.world 6 days ago
The uncertainty comes from reverse-engineering how a specific output relates to the prompt input: the model uses extremely fuzzy logic to compute the answer to “What is the closest planet to the Sun?”