Comment on "Expecting an LLM to become conscious is like expecting a painting to become alive"

bampop@lemmy.world 1 week ago

I just don’t think this is a problem in the current stage of technological development. Modern AI is a cute little magic act, but humans (collectively) are very good at piercing the veil and then spreading around the discrepancies they’ve discovered.

In its current stage, yes. But it has come a long way in a short time, and I don't think we're far from having machines that pass the Turing test every time. Rather than being proof of consciousness, though, all that would really show is that you can't judge consciousness from the outside looking in. We know it's a big illusion precisely because its entire development has been focused on building that illusion. When it says it feels something, or cares deeply about something, it says that because that's the kind of thing a human would say.

Because all the development has been focused on fakery rather than on understanding and replicating consciousness, we're close to the point of having a fake consciousness that would fool anyone. It's a worrying prospect, and not just because I won't achieve immortality by having a machine imitate my behaviour. There are various bad actors trying to exploit this situation. Elon Musk's attempts to turn Grok into his own personally controlled overseer of truth and narrative seem to backfire in the most comical ways, but those are teething troubles; in time this will turn into a very subtle and pervasive problem for humankind.
