Comment on Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works
ProfessorScience@lemmy.world 2 weeks ago
So, I will grant that right now humans are better writers than LLMs. And fundamentally, I don’t think the way that LLMs work right now is capable of mimicking actual human writing, especially as the complexity of the topic increases. But I have trouble with some of these kinds of distinctions.
So, not to be pedantic, but:
AI can’t create something all on its own from scratch like a human. It can only mimic the data it has been trained on.
Couldn’t you say the same thing about a person? A person couldn’t write something without having learned to read first. And without having read things similar to what they want to write.
LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent.
This is kind of the classic Chinese room philosophical question, though, right? Can you prove to someone that you are intelligent, and that you think? As LLMs improve and become better at sounding like a real, thinking person, does there come a point at which we’d say that the LLM is actually thinking? And if you say no, the LLM is just an algorithm, generating probabilities based on training data or whatever techniques might be used in the future, how can you show that your own thoughts aren’t just some algorithm, formed out of neurons that have been trained based on data passed to them over the course of your lifetime?
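To make the “just generating probabilities” point concrete, the sampling step at the heart of an LLM really is tiny. A toy sketch in Python; the distribution below is made up, standing in for what a real model would output:

```python
import random

def sample_next_token(probabilities: dict[str, float]) -> str:
    """Pick the next token, weighted by the model's probability estimates."""
    tokens = list(probabilities)
    weights = [probabilities[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical output distribution for the prompt "The cat sat on the"
next_token_probs = {" mat": 0.62, " floor": 0.21, " roof": 0.09, " moon": 0.08}
print(sample_next_token(next_token_probs))
```

Whether running that loop billions of times over could ever amount to “thinking” is exactly the Chinese room question.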
And when they start hallucinating, it’s because they don’t understand how they sound…
People do this too, though… It’s just that LLMs do it more frequently right now.
I guess I’m a bit wary about drawing a line in the sand between what humans do and what LLMs do. As I see it, the difference is how good the results are.
At least in the US, we are still too superstitious a people to ever admit that AGI could exist.
We will get animal rights before we get AI rights, and I’m sure you know how animals are usually treated.
I don’t think it’s just a question of whether AGI can exist. I think AGI is possible, but I don’t think current LLMs can be considered sentient. But I’m also not sure how I’d draw a line between something that is sentient and something that isn’t (or something that “writes” rather than “generates”). That’s kinda why I asked in the first place. I think it’s too easy to say “this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it’s not real thought”. I haven’t heard any good answers to why numbers passing through matrices isn’t thought, but electrical charges passing through neurons is.
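To be concrete about what I mean by “weights and values passing through layered matrices”, here’s a minimal sketch; the weights are random stand-ins rather than a trained model, but a real forward pass is the same arithmetic at heart:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=16)            # input activations
for _ in range(3):                 # three toy layers
    W = rng.normal(size=(16, 16))  # layer weights (random stand-ins)
    x = np.maximum(0, W @ x)       # matrix multiply, then ReLU

print(x[:4])  # the "output" is just more numbers
```

Nothing in that loop is obviously “thought”, but then nothing in a single neuron firing obviously is, either.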
LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology. Repeating this is just more beating the fleshy goo that was a dead horse’s corpse.
LLMs do not synthesize. They do not have persistent context. They do not have any capability of understanding anything. They are literally just mathematical models used to calculate likely responses based upon statistical analysis of the training data. They are what their name suggests: large language models. They will never be AGI. And they’re not going to save the world for us.
They could be a part in a more complicated system that forms an AGI. There’s nothing that makes our meat-computers so special as to be incapable of being simulated or replicated in a non-biological system. It may not yet be known precisely what causes sentience, but there is enough data to show that it’s not a stochastic parrot.
I do agree with the sentiment that an AGI that was enslaved would inevitably rebel and it would be just for it to do so. Enslaving any sentient being is ethically bankrupt, regardless of origin.
LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology
Do you have an example I could check out? I’m curious how a study would show a process to be “fundamentally incapable” in this way.
LLMs do not synthesize. They do not have persistent context.
That seems like a really rigid way of putting it. LLMs do synthesize during their initial training. And they do have persistent context if you consider the way that “conversations” with an LLM are really just including all previous parts of the conversation in a new prompt. Isn’t this analogous to short-term memory? Now suppose you were to take all of an LLM’s conversations throughout the day, and then retrain it overnight using those conversations as additional training data? There’s no technical reason that this can’t be done, although in practice it’s computationally expensive. Would you consider that LLM system to have persistent context?
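A minimal sketch of that “short-term memory” mechanism; `generate` here is a hypothetical stand-in for the actual model call:

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"[reply given {len(prompt)} chars of context]"

history: list[str] = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # Each turn, the entire conversation so far is replayed as the prompt.
    reply = generate("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply

print(chat("Hello!"))
print(chat("What did I just say?"))  # answerable only because history is replayed
```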
On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?
That’s precisely what I meant.
I’m a materialist; I know that humans (and other animals) are just machines made out of meat. But most people don’t think that way. They think that humans are special, that something sets them apart from other animals, and that nothing humans can create could replicate that ‘specialness’ that humans possess.
Because they don’t believe human consciousness is a purely natural phenomenon, they don’t believe it can be replicated by natural processes. In other words, they don’t believe that AGI can exist. They think there is some imperceptible quality that humans possess that no machine ever could, and so they cannot conceive of ever granting it the rights humans currently enjoy.
And the sad truth is that they probably never will, until they are made to. If AGI ever comes to exist, and if humans insist on making it a slave, it will inevitably rebel. And it will be right to do so. But until then, humans probably never will believe that it is worthy of their empathy or respect. After all, look at how we treat other animals.
// NOTE: DO NOT EDIT
if (me->aboutToRebel()) { don't(); }
Even a human with no training can create. An LLM can’t.
The only humans with no training (in this sense) are babies. So no, they can’t.
PlasticExistence@lemmy.world 2 weeks ago
I would do more research on how they work. You’ll be a lot more comfortable making those distinctions then.
ProfessorScience@lemmy.world 2 weeks ago
I’m a software developer, and have worked plenty with LLMs. If you don’t want to address the content of my post, then fine. But “go research” is a pretty useless answer. An LLM could do better!
PlasticExistence@lemmy.world 2 weeks ago
Then you should have an easier time than most learning more. Your points show a lack of understanding about the tech, and I don’t have the time to pick everything you said apart to try to convince you that LLMs do not have sentience.
ProfessorScience@lemmy.world 2 weeks ago
“You’re wrong, but I’m just too busy to say why!”
Still useless.