Comment on "LLMs develop their own understanding of reality as their language abilities improve"
A_A@lemmy.world 2 months ago
I feel that many of us, when confronted with this, react like Zaphod Beeblebrox (President of the Galaxy) in The Hitchhiker's Guide to the Galaxy, when he says:
… "whenever I stop and think why did I want to do
something? – how did I work out how to do it? – I get a very strong desire to just stop thinking about it" …
We don’t want to be surpassed by machines … and this explains the large number of downvotes.
atrielienz@lemmy.world 2 months ago
I’m actually pretty sure the downvotes are because LLMs don’t think. They don’t even process. They pick the highest number and spit out the information attached to it.
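For what it's worth, the "pick the highest number" description roughly matches greedy decoding: the model scores every token in its vocabulary and takes the argmax. A minimal sketch with a made-up four-word vocabulary and invented logits (all values here are illustrative, not from any real model):

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and next-token scores.
vocab = ["cat", "dog", "the", "runs"]
logits = [1.2, 0.3, 2.5, -0.7]

probs = softmax(logits)
# Greedy decoding: "pick the highest number".
greedy_token = vocab[probs.index(max(probs))]
print(greedy_token)  # -> "the"
```

In practice, deployed systems usually sample from the distribution (often with a temperature parameter) rather than always taking the argmax, which is why the same prompt can give different outputs.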
Deceptichum@quokk.au 2 months ago
Do I “think” or does my brain pick the closest neuron and spit out a function based on that input?
If we could recreate the universe, would I do the exact same thing in the exact same situation?
atrielienz@lemmy.world 2 months ago
I’m sorry. Because you don’t understand how your brain works, you’re suggesting that it must work in the same way as something a similar brain created, because you don’t know how either thing works. That’s not an argument.
Deceptichum@quokk.au 2 months ago
No, I’m not suggesting that.
I’m suggesting that if we don’t even understand how consciousness works for ourselves, we cannot make claims about how it will look for other things.
Deterministically, free will does not exist; if we cannot exercise free will, we cannot have independent thoughts, just the same as a machine.
The truth is we don’t really know shit: we’re biological machines that think they’re in control of themselves.
A_A@lemmy.world 2 months ago
Science cannot say much about what it is to think, since it doesn’t understand the brain well enough … and the day we can fully explain it, we will also be able to replicate it on computers.
atrielienz@lemmy.world 2 months ago
Science can and does quantify what our brains do vs. what an LLM does, though. That’s the point. That’s why the brain knows when it’s supplying wrong information or guessing, but the LLM does not.
The LLM can provide wrong information. What it can’t do is intentionally lie.
A_A@lemmy.world 2 months ago
I agree with you that we are much better than LLMs in that we can verify our errors (and we can do many things LLMs can’t).
Still, I am happy to have access to their vast memory, and I know where they fail most of the time, so I can work with them in a productive way.
The day we provide them (or DNNs) with "will" is, I think, when they will become (more) dangerous.
technocrit@lemmy.dbzer0.com 2 months ago
Wild pseudo-scientific generalization.
There are many, many things that are fully explained but will never be replicated on computers, e.g. any numerical problem bigger than the computer itself.