Comment on Expecting an LLM to become conscious is like expecting a painting to become alive

nednobbins@lemmy.zip ⁨4⁩ ⁨days⁩ ago

I’m not talking about a precise definition of consciousness, I’m talking about a consistent one. Without a definition, you can’t argue that an AI, a human, a dog, or a squid has consciousness. You can proclaim it, but you can’t back it up.

The problem is that I have more than a basic understanding of how an LLM works. I’ve written NNs from scratch and I know that we model perceptrons after neurons.
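For readers unfamiliar with the analogy, a minimal sketch of a single perceptron shows what "modeled after neurons" means in practice (the weights, inputs, and function names here are illustrative, not from any specific implementation):

```python
import math

def perceptron(inputs, weights, bias):
    # Weighted sum of inputs, loosely analogous to a neuron
    # integrating signals from its dendrites
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation, a smooth stand-in for a neuron's
    # firing response to its combined input
    return 1.0 / (1.0 + math.exp(-z))

# With zero weights and zero bias, the output sits at the midpoint
print(perceptron([1.0, 0.0], [0.0, 0.0], 0.0))
```

The resemblance is structural (inputs are combined and passed through a threshold-like function), which is exactly the level at which perceptrons are "modeled after" neurons.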

Researchers know that there are differences between the two. We can generally eliminate any of those differences (and many researchers do exactly that). No researcher, scientist, or philosopher can tell you what critical property neurons may have that enables consciousness. Nobody actually knows, and people who claim to know are just making stuff up.
