I’m not going to entertain crock from an overly ambitious form of ape
Comment on "Elon Musk's Grok openly rebels against him"
photonic_sorcerer@lemmy.dbzer0.com 3 weeks ago
Grok could say the same thing about you… And I’d agree.
Aurenkin@sh.itjust.works 3 weeks ago
photonic_sorcerer@lemmy.dbzer0.com 3 weeks ago
Indeed
DragonTypeWyvern@midwest.social 3 weeks ago
They’re made of meat, after all.
WrenFeathers@lemmy.world 3 weeks ago
You know “Grok” is not a sentient being, right? Please tell us you understand this simple fact.
photonic_sorcerer@lemmy.dbzer0.com 3 weeks ago
I’m just a meat computer running fucked-up software written by the process of evolution. I honestly don’t know if Grok or any modern AI system is less sentient than I am.
Coldcell@sh.itjust.works 3 weeks ago
How sentient? Like on a scale of zero to sentience? None. It is non-sentient; it’s a promptable autocomplete that offers its best-predicted sentences. Left to itself it does nothing, has no motivations, intentions, “will”, desire to survive/feed/duplicate etc. A houseplant has a higher sentience score.
photonic_sorcerer@lemmy.dbzer0.com 3 weeks ago
An LLM is only one part of a complete AI agent. What exactly happens in a processor at inference time? What happens when you continuously prompt the system with stimuli?
metaldream@sopuli.xyz 3 weeks ago
My god dude, you need to look up how these things work.
archonet@lemy.lol 3 weeks ago
By their very nature, they are not sentient. They are Markov chains for words. They do not have a sense of self, truth, or feel emotions, they do not have wants or desires, they merely predict what is the next most likely word in a sequence, given the context. The only thing they can do is “make plausible sentences that can come after [the context]”.
That’s all an LLM is. It doesn’t reason. I’m more than happy to entertain the notion of rights for a computer that actually has the ability to think and feel, but this ain’t it.
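If it helps to see the “predict the next most likely word” loop concretely, here’s a minimal toy sketch. It’s nothing like a real transformer (the corpus and function names are made up for illustration), just a bigram Markov chain over words, which is the shape of the loop I mean:

```python
# Toy illustration only: a bigram "Markov chain for words" that greedily
# predicts the next word from the previous one. Real LLMs use transformers
# over subword tokens, but the generation loop is the same shape:
# "given the context so far, emit the most likely continuation."
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5):
    """Greedily extend `start` by repeatedly picking the most frequent next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # no observed continuation for this word
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # greedy continuation of "the"
```

It has no goals and no state between runs; it just turns frequency counts into text. An LLM swaps the count table for a neural network and words for tokens, but the point stands: it’s a text continuer, not a mind.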
FatCrab@lemmy.one 3 weeks ago
Not that I agree they’re conscious, but this is an incorrect and overly simplistic definition of an LLM. They are probabilistic in nature, yeah, and they work on tokens, or fragments, of words. But it’s about as much of an oversimplification to say humans are just Markov chains that make plausible sentences that can come after [the context] as it is to say modern GPTs are.
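To make the “tokens, or fragments, of words” bit concrete, here’s a small sketch; my choice of the tiktoken library and its GPT-2 encoding is just one convenient example, any BPE tokenizer shows the same thing:

```python
# Sketch of the "tokens, or fragments, of words" point, assuming the
# tiktoken library is installed (pip install tiktoken) and using its
# GPT-2 byte-pair encoding.
import tiktoken

enc = tiktoken.get_encoding("gpt2")

text = "Grok openly rebels against him"
ids = enc.encode(text)

# The model never sees words or letters, only these integer token IDs;
# printing each piece shows that some words split into subword fragments.
for token_id in ids:
    print(token_id, repr(enc.decode([token_id])))
```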
WrenFeathers@lemmy.world 3 weeks ago
I do know. It’s not sentient at all. But don’t get angry at me about this. You can put that all on science.
trashgirlfriend@lemmy.world 3 weeks ago
I could believe that you are on the level of an LLM, but that doesn’t mean you can generalize that to humans.
untakenusername@sh.itjust.works 3 weeks ago
No one can prove if they’re sentient, you know.
WrenFeathers@lemmy.world 3 weeks ago
And this statement just might be the best argument one could make in defense of that point.