Comment on "LLMs develop their own understanding of reality as their language abilities improve"
Deceptichum@quokk.au 2 months ago
Do I “think” or does my brain pick the closest neuron and spit out a function based on that input?
If we could recreate the universe, would I do the exact same thing in the exact same situation?
atrielienz@lemmy.world 2 months ago
I’m sorry. Because you don’t understand how your brain works, you’re suggesting it must work the same way as something a similar brain created, when you don’t know how either thing works. That’s not an argument.
Deceptichum@quokk.au 2 months ago
No, I’m not suggesting that.
I’m suggesting that if we don’t even understand how consciousness works for ourselves, we cannot make claims about how it will look for other things.
Deterministically, free will does not exist; if we cannot exercise free will, we cannot have independent thoughts, any more than a machine can.
Truth is, we don’t really know shit; we’re biological machines that are able to think they’re in control of themselves.
atrielienz@lemmy.world 2 months ago
Okay. Feed a new species that hasn’t been named yet into an LLM. Does it name that new creature? Can it decide what family or phylum, etc., it belongs to? Does it pick up the specific attributes of that new species?
Deceptichum@quokk.au 2 months ago
It might even be able to pick those things out; I certainly couldn’t.
technocrit@lemmy.dbzer0.com 2 months ago
So is a rock conscious? I guess we’ll never know.
tabular@lemmy.world 2 months ago
I suspect others are talking about “thinking” only objectively.
A) If an LLM has no input then there are no processes going on at all which could be described as thinking (objective: what is the program doing).
B) If an LLM had a subjective experience when given input, presumably it has none when all processes are stopped (subjective and unverifiable).