I think the reason we can’t define consciousness beyond intuitive or vague descriptions is that it exists outside the realm of physics and science altogether. This in itself makes some people very uncomfortable, because they don’t like thinking about or believing in things they cannot measure or control, but that doesn’t make it any less real.
But yeah, given that an LLM is very much measurable and exists within the physical realm, it’s relatively easy to argue that such technology cannot achieve consciousness.
khepri@lemmy.world 2 weeks ago
Absolutely everything requires assumptions. Even our most objective, “laws of the universe” type observations rely on sets of axioms or first principles that must simply be accepted as true-though-unprovable if we are going to get anyplace at all, even in math and the hard sciences, let alone philosophy or the social sciences.
nednobbins@lemmy.zip 2 weeks ago
Defining “consciousness” requires much more handwaving and many more assumptions than any of the other three. It requires so much that I claim it’s essentially an undefined term.
With such a vague definition of what “consciousness” is, there’s no logical way to argue that an AI does or does not have it.
2xar@lemmy.world 2 weeks ago
Your logic is critically flawed. By that reasoning you could argue that there is no “logical way to argue a human has consciousness”, because we don’t have a precise enough definition of consciousness. What you wrote is just “I’m 14 and this is deep” territory, not real logic.
In reality, you can very easily decide whether AI is conscious or not, even if the exact boundary of what you would call “consciousness” can be debated. You wanna know why? Because if you have a basic understanding of how AI/LLMs work, then you know that in every possible, conceivable aspect relevant to consciousness, they sit somewhere between your home PC and a plankton. Nobody would call either of those conscious, by any definition. Therefore, no matter what vague definition you use, current AI/LLMs definitely do NOT have it. Not by a long shot. Maybe in a few decades it could get there. But current models are basically over-hyped thermostat control electronics.
nednobbins@lemmy.zip 2 weeks ago
I’m not talking about a precise definition of consciousness, I’m talking about a consistent one. Without a definition, you can’t argue that an AI, a human, a dog, or a squid has consciousness. You can proclaim it, but you can’t back it up.
The problem is that I have more than a basic understanding of how an LLM works. I’ve written NNs from scratch and I know that we model perceptrons after neurons.
Researchers know that there are differences between the two. We can generally eliminate any of those differences (and many researchers do exactly that). No researcher, scientist, or philosopher can tell you what critical property neurons may have that enables consciousness. Nobody actually knows, and people who claim to know are just making stuff up.
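Just to make the comparison concrete, here’s roughly what a single perceptron boils down to: a weighted sum plus a threshold, loosely modeled on a neuron summing synaptic inputs and firing. This is a minimal sketch; the function name and the numbers are mine, purely illustrative:

```python
def perceptron(inputs, weights, bias):
    # Weighted sum of inputs, loosely analogous to summed synaptic potentials.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step activation: "fire" (1) if the total crosses the threshold.
    return 1 if total > 0 else 0

# Example: weights/bias chosen (by hand) so the unit computes logical AND.
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # 0
```

That’s the whole abstraction. Everything an LLM does is stacks of units like this (with smoother activations), which is exactly why the “which differences from real neurons actually matter for consciousness” question has no known answer.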