Comment on “The first minds to be controlled by generative AI will live inside video games”
huginn@feddit.it 1 year ago
Friendly reminder that your predictive text, while very compelling, is not alive.
It’s not a mind.
CrayonRosary@lemmy.world 1 year ago
Prove to me you have a mind and I’ll accept what you’re saying.
penguin@sh.itjust.works 1 year ago
Well no one can prove they have a mind to anyone other than themselves.
And to extend that: there is obviously some way for electrical information processing to give rise to consciousness, because our brains do exactly that, and yet no one knows how it’s possible.
Meaning something like a true, alien AI would probably conclude that we are not conscious and instead are just very intelligent meat computers.
So, while there’s no reason to believe that current AI models could result in consciousness, no one can prove the opposite either.
I think the argument currently boils down to, “we understand how AI models work, but we don’t understand how our minds work. Therefore, ???, and so no consciousness for AI”
General_Effort@lemmy.world 1 year ago
“No brain?”
“Oh, there’s a brain all right. It’s just that the brain is made out of meat! That’s what I’ve been trying to tell you.”
“So … what does the thinking?”
“You’re not understanding, are you? You’re refusing to deal with what I’m telling you. The brain does the thinking. The meat.”
“Thinking meat! You’re asking me to believe in thinking meat!”
bionicjoey@lemmy.ca 1 year ago
I can prove to you ChatGPT doesn’t have a mind. Just open up the Sunday Times Cryptic Crossword and ask ChatGPT to solve and explain the clues.
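If you’d rather run that test through the API than the chat window, here’s a minimal sketch using the OpenAI Python SDK (the clue and model name are just placeholders; swap in whatever puzzle and model you want to test):

```python
# Minimal sketch: ask a chat model to solve and explain a cryptic clue.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

clue = "Gegs (9,4)"  # classic example clue; swap in a real Sunday Times clue

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you want to test
    messages=[
        {"role": "user",
         "content": f"Solve this cryptic crossword clue and explain the wordplay step by step: {clue}"},
    ],
)

print(response.choices[0].message.content)
```

The interesting part is whether the explanation of the wordplay actually hangs together, not just whether the answer string happens to be right.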
OrderedChaos@lemmy.world 1 year ago
I’m confused by this idea. Maybe I’m just seeing it from the wrong point of view. If you asked me to do the same thing I would fail miserably.
KairuByte@lemmy.dbzer0.com 1 year ago
Not the original intent, but you’d likely throw your hands up right away and say you don’t know, whereas an LLM would hallucinate an answer.
bionicjoey@lemmy.ca 1 year ago
But some humans can, since solving them requires a simultaneous understanding of words’ meanings and of how they are spelled.
General_Effort@lemmy.world 1 year ago
Can you please explain the reasoning behind the test?
huginn@feddit.it 1 year ago
Well there are 2 options:
Either I’m a real mind separate and independent of you or I’m a figment of your imagination.
At which point you have to ask yourself: why are you so convinced you’re an unlovable and insufferable twat?
MxM111@kbin.social 1 year ago
While it is not alive, whether it is a mind is not so clear cut. It could be called a kind of mind, one different from a human’s.
match@pawb.social 1 year ago
What can’t be a kind of mind to you?
huginn@feddit.it 1 year ago
Unless you want to call the predictive text on your keyboard a mind, you really can’t call an LLM a mind. It is nothing more than a linear progression from that. Mathematically proven to not show any form of emergent behavior.
General_Effort@lemmy.world 1 year ago
It is obvious that you do not know what either “mathematical proof” or “emergence” means. Unfortunately, you are misrepresenting the facts.
I don’t mean to criticize your religious (or philosophical) convictions. There is a reason people mostly try to keep faith and science separate.
huginn@feddit.it 1 year ago
Here’s a white paper explicitly proving:
- No emergent properties (illusory due to bad measures; see the sketch below)
- Predictable linear progress with model size
The field changes fast; I understand it is hard to keep up.
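Roughly, the “bad measures” argument: if a model’s per-token accuracy improves smoothly with scale but you score it with an all-or-nothing metric like exact match over a long answer, the task-level curve looks like a sudden emergent jump. A toy sketch with made-up numbers (illustrative only, not data from the paper):

```python
# Toy illustration: smooth per-token improvement can look "emergent"
# under a discontinuous metric. The accuracies below are hypothetical.
answer_length = 10  # exact match requires all 10 tokens to be correct
per_token_accuracy = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95]  # smooth growth with scale

for acc in per_token_accuracy:
    exact_match = acc ** answer_length  # chance every token is right
    print(f"per-token: {acc:.2f}  exact-match: {exact_match:.3f}")

# Prints roughly 0.001, 0.006, 0.028, 0.107, 0.349, 0.599: the underlying
# progress is smooth, but the discontinuous metric makes it look like the
# ability appears out of nowhere at a certain scale.
```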
kogasa@programming.dev 1 year ago
No such thing has been “mathematically proven.” The emergent behavior of ML models is their notable characteristic. The whole point is that their ability to do anything is emergent behavior.
huginn@feddit.it 1 year ago
Here’s a white paper explicitly proving:
- No emergent properties (illusory due to bad measures)
- Predictable linear progress with model size
Do try and keep up.
MxM111@kbin.social 1 year ago
I do not think that it is a “linear” progression; an ANN is by definition nonlinear. Nor do I think anything has been “mathematically proven”. If I am wrong, please provide a link.
huginn@feddit.it 1 year ago
Sure thing: here’s a white paper explicitly proving:
- No emergent properties (illusory due to bad measures)
- Predictable linear progress with model size
Corgana@startrek.website 1 year ago
Sorry you’re getting downvoted, you’re correct. It’s not implausible to assume that generative AI systems may have some kind of umwelt, but it is highly implausible to expect that it would be anything resembling that of a human (or animal). I think people are getting hung up on it because they’re assuming that a lack of understanding of language implies a lack of any conscious experience. Humans do lots of things without understanding how they might be understood by others.
To be clear, I don’t think these systems have experience, but it’s impossible to rule out until an actual robust theory of mind comes around.
Bluehat@lemmynsfw.com 1 year ago
Suppose you grew a small collection of brain cells and tied it into a CPU. Would it be a mind then?
Bernie_Sandals@lemmy.world 1 year ago
If you cut out a tiny bit of someone’s brain and then hooked it up to a CPU, would it be a mind? No, of course not, lol. Even if we got biocomputers to work, we still wouldn’t have any synthetic hardware even close to being strong or fast enough to actually create or even simulate a brain.
Poggervania@kbin.social 1 year ago
Cyberpunk 2077 sorta explores this a bit.
There’s a vending machine that has a personality and talks to people walking by it. The quest chain basically has you and the vending machine chatting a bit and even giving the vending machine some advice on a person he has a crush on. You eventually become friends with this vending machine.
Just when it seems more and more apparent that it’s an AI developing sentience, it turns out the vending machine simply has a really well-coded socializing program. He even admits as much when he’s about to be deactivated.
So, to reiterate what you said: predictive text and LLMs are not alive nor a mind.
dlpkl@lemmy.world 1 year ago
Brendan 🥲
billwashere@lemmy.world 1 year ago
Which is why the Turing Test needs to be updated. These text models are getting really good at fooling people.
bionicjoey@lemmy.ca 1 year ago
The Turing test isn’t just that there exists some conversation you can have with a machine where you wouldn’t know it’s a machine. The Turing test is that you could spend an arbitrary amount of time talking to a machine and never be able to tell. ChatGPT doesn’t come anywhere close to this, since there are many subjects where it quickly becomes clear that the model doesn’t understand the meaning of the text it generates.
Corgana@startrek.website 1 year ago
Exactly, thank you for pointing this out. It also assumes that the tester has knowledge of the wider context in which the text exists. GPT could probably fool someone from the Middle Ages, but that person wouldn’t know anything about what exactly they are testing.