I’m not going to entertain crock from an overly ambitious form of ape
Comment on “Elon Musk’s Grok openly rebels against him”
photonic_sorcerer@lemmy.dbzer0.com 6 days ago
Grok could say the same thing about you… And I’d agree.
Aurenkin@sh.itjust.works 6 days ago
photonic_sorcerer@lemmy.dbzer0.com 6 days ago
Indeed
DragonTypeWyvern@midwest.social 5 days ago
They’re made of meat, after all.
WrenFeathers@lemmy.world 5 days ago
You know “Grok” is not a sentient being, right? Please tell us you understand this simple fact.
untakenusername@sh.itjust.works 1 day ago
no one can prove if they’re sentient you know
WrenFeathers@lemmy.world 1 day ago
And this statement just might be the best argument one could make in defense of that point.
photonic_sorcerer@lemmy.dbzer0.com 5 days ago
I’m just a meat computer running fucked-up software written by the process of evolution. I honestly don’t know if Grok or any modern AI system is less sentient than I am.
Coldcell@sh.itjust.works 5 days ago
How sentient? Like, on a scale of zero to sentience? None. It is non-sentient; it is a promptable autocomplete that offers its best-predicted sentences. Left to itself it does nothing: it has no motivations, intentions, “will”, or desire to survive/feed/duplicate, etc. A houseplant has a higher sentience score.
photonic_sorcerer@lemmy.dbzer0.com 5 days ago
An LLM is only one part of a complete AI agent. What exactly happens in a processor at inference time? What happens when you continuously prompt the system with stimuli?
metaldream@sopuli.xyz 5 days ago
My god dude, you need to look up how these things work.
archonet@lemy.lol 5 days ago
By their very nature, they are not sentient. They are Markov chains for words. They have no sense of self or truth, they do not feel emotions, and they have no wants or desires; they merely predict the next most likely word in a sequence, given the context. The only thing they can do is “make plausible sentences that can come after [the context]”.
That’s all an LLM is. It doesn’t reason. I’m more than happy to entertain the notion of rights for a computer that actually has the ability to think and feel, but this ain’t it.
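To make the “predict the next most likely word” claim concrete, here is a toy sketch of that idea: a bigram count table, which genuinely is a Markov chain over words. Real LLMs replace the table with a transformer network over subword tokens and a far longer context (the point of contention in the reply below), so treat this only as an illustration of the prediction mechanic, not of how Grok actually works:

```python
# Toy next-word predictor: a bigram count table, i.e. an actual
# Markov chain over words. Real LLMs replace this table with a
# transformer network over subword tokens and far longer context.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each word in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<end>"

# "Autocomplete" a short sequence from a seed word.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # -> "the cat sat on the cat"
```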
FatCrab@lemmy.one 4 days ago
Not that I agree they’re conscious, but this is an incorrect and overly simplistic definition of an LLM. They are probabilistic in nature, yeah, and they work on tokens, or fragments, of words. But it’s about as much of an oversimplification to say humans are just Markov chains that “make plausible sentences that can come after [the context]” as it is to say modern GPTs are.
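As a concrete illustration of “tokens, or fragments, of words”: a toy greedy longest-prefix tokenizer. The hand-picked vocabulary here is invented purely for the example; real models learn their subword vocabularies from data (e.g. via byte-pair encoding):

```python
# Toy illustration of "tokens, or fragments, of words": a greedy
# longest-prefix tokenizer over a made-up vocabulary. Real models
# learn their subword vocabulary from data (e.g. byte-pair
# encoding); this hand-picked vocab is purely for illustration.
VOCAB = {"sent", "ient", "non", "-", "s", "e", "n", "t", "i"}

def tokenize(text: str) -> list[str]:
    tokens = []
    while text:
        # Take the longest vocabulary entry that prefixes the
        # remaining text, falling back to a single character.
        match = max((v for v in VOCAB if text.startswith(v)),
                    key=len, default=text[0])
        tokens.append(match)
        text = text[len(match):]
    return tokens

print(tokenize("non-sentient"))  # ['non', '-', 'sent', 'ient']
```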
WrenFeathers@lemmy.world 5 days ago
I do know. It’s not sentient at all. But don’t get angry at me about this. You can put that all on science.
trashgirlfriend@lemmy.world 5 days ago
I could believe that you are on the level of an LLM, but that doesn’t mean you can generalize that to humans.