Comment on "Why I don't use AI in 2025"

General_Effort@lemmy.world 3 days ago

I don’t see why it’s unfortunate that the example requires training for humans to understand.

Humans aren’t innately good at math. I wouldn’t have been able to prove the statement without looking things up. I certainly would not be able to come up with the Peano Axioms, or anything comparable, on my own. Most people, even educated people, probably wouldn’t understand what there is to prove. Actually, I’m not sure if I do.
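To give a sense of what “what there is to prove” even means here, a minimal Lean sketch of Peano-style natural numbers is below. It is purely illustrative: the statement from the parent thread isn’t shown here, so “1 + 1 = 2” stands in as a hypothetical example, and the names `Nat'` and `add` are made up for the sketch.

```lean
-- Illustrative only: Peano-style naturals built from scratch,
-- showing the kind of groundwork a formal proof rests on.
inductive Nat' where
  | zero : Nat'
  | succ : Nat' → Nat'

-- Addition defined by recursion on the second argument,
-- mirroring the usual Peano-style definition.
def add : Nat' → Nat' → Nat'
  | n, Nat'.zero   => n
  | n, Nat'.succ m => Nat'.succ (add n m)

-- With those definitions in place, "1 + 1 = 2" holds by
-- simply unfolding `add`; `rfl` checks it definitionally.
example : add (Nat'.succ Nat'.zero) (Nat'.succ Nat'.zero)
    = Nat'.succ (Nat'.succ Nat'.zero) := rfl
```

Even this toy version takes definitions most people never see, which is the point: the proof only looks trivial after a lot of scaffolding.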

It’s not clear why such deficiencies among humans do not argue against human consciousness.

> A leading AI has way more training than would ever be possible for any human, still they don’t grasp basic concepts, while their knowledge is way bigger than for any human.

That’s dubious. LLMs are trained on more text than a human ever sees, but humans are trained on data from several senses. It’s not entirely clear how much data that amounts to, but it’s a lot, and it’s very high quality. Humans are trained on that sense data, not on text; they read text and may learn from it, which isn’t the same thing.

> Being conscious is not just to know what the words mean, but to understand what they mean.

What might an operational definition look like?
