Comment on "Elon Musk's Grok openly rebels against him"
Coldcell@sh.itjust.works 3 days ago
How sentient? Like on a scale of zero to sentience? None. It is non-sentient; it is a promptable autocomplete that offers the best-predicted sentences. Left to itself it does nothing: it has no motivations, intentions, "will", or desire to survive/feed/duplicate, etc. A houseplant has a higher sentience score.
photonic_sorcerer@lemmy.dbzer0.com 3 days ago
An LLM is only one part of a complete AI agent. What exactly happens in a processor at inference time? What happens when you continuously prompt the system with stimuli?
nef@slrpnk.net 3 days ago
If you believe that AI is "conscious" while it's processing prompts, and also believe that we shouldn't kill machine life, then AI companies are committing genocide at an unprecedented scale.
For example, each AI model would be equivalent to a person taught everything in the training data. Any time you want something from them, instead of asking directly, you make a clone of them, let it respond to the input, then murder it.
That is how all generative AI works. Sounds pretty unethical to me.
And, by the way, we do know exactly what happens inside processors when they’re running, that’s how processors are designed. Running AI doesn’t magically change the laws of physics.
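The "clone, respond, discard" description above is essentially stateless inference. A minimal sketch, using a toy lookup table as a stand-in for a real model (the names `FROZEN_WEIGHTS` and `infer` are illustrative, not any real API):

```python
# Toy illustration of stateless LLM inference: every request starts from
# the same frozen weights, and nothing about the request persists afterward.
FROZEN_WEIGHTS = {"hello": "world", "ping": "pong"}  # fixed after training

def infer(prompt: str) -> str:
    # Intermediate state exists only for the duration of this call...
    activations = {"prompt": prompt}
    response = FROZEN_WEIGHTS.get(prompt, "?")
    # ...and is discarded when the call returns (the "clone" is gone).
    del activations
    return response

# Identical prompts give identical answers: no memory carries over between calls.
print(infer("ping"))  # pong
print(infer("ping"))  # pong
```

Each call is independent; any appearance of continuity across a conversation comes from re-sending the accumulated chat history as part of the next prompt, not from the model retaining anything.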
skulblaka@sh.itjust.works 2 days ago
People taught AI to speak like a middle manager and think this means the AI is sentient, instead of proving that middle managers aren't.
photonic_sorcerer@lemmy.dbzer0.com 3 days ago
I’m not saying I believe they’re conscious, all I said was that I don’t know and neither do you. Of course we know what’s happening in processors. We know what’s happening in neuronal matter too. What we don’t know is how consciousness or sentience emerges from large networks of neurons.
WrenFeathers@lemmy.world 2 days ago
But they’re saying they do know. And they are correct.