Software cannot be “inspired”
AIs in their training stage are simply running extreme statistical analysis on the input material. They’re not “learning”, they’re not “inspired”, they’re not “understanding”.
The anthropomorphism of these models is a major problem. They are not human, and they don’t learn the way humans do.
newthrowaway20@lemmy.world 1 year ago
Does that mean software can also be afraid, or angry? What about happy software? Saying software can be inspired is like saying a rock can feel pain.
lloram239@feddit.de 1 year ago
If it is programmed/trained that way, sure. I recommend having a listen to Geoffrey Hinton on the topic.
The rock doesn’t do anything similar to pain. The LLM, on the other hand, does a lot of things similar to inspiration. I can give the LLM a very trivial question and it will answer with a mountain of text. Did my question, or the books it was trained on, “inspire” the LLM to write that? Maybe; it depends, of course, on how broadly you want to define the word. But either way, the LLM produced something by itself that was neither a copy of my prompt nor of the training data.
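The point above, that purely statistical training can yield output which is neither the prompt nor a verbatim copy of the training data, can be illustrated with a toy bigram model. This is only a sketch; the corpus, seed word, and function names are invented for the example and have nothing to do with how any real LLM is implemented:

```python
import random
from collections import defaultdict

# Toy "training" corpus (made up for this sketch).
corpus = ("the cat sat on the mat "
          "the dog sat on the rug "
          "the cat chased the dog").split()

# "Training" is nothing more than tallying which word follows which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, rng):
    """Sample a word sequence from the learned bigram statistics."""
    words = [start]
    for _ in range(length - 1):
        choices = follows.get(words[-1])
        if not choices:
            break
        words.append(rng.choice(choices))
    return " ".join(words)

text = generate("the", 8, random.Random(0))
print(text)
# Every adjacent word pair was seen during "training", yet the
# whole sentence need not appear anywhere in the corpus.
```

Even at this trivial scale, the sampler recombines learned statistics into sequences the training text never contained, which is the (much weaker) analogue of what the commenter describes.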
FaceDeer@kbin.social 1 year ago
Software can do a lot of things that rocks can't, so that's not a good analogy.
Whether software can feel "pain" depends a lot on your definitions, but I think there are circumstances in which software can be said to feel pain. Simple worms can sense painful stimuli and react to them; a program can do the same thing.
We've reached the point where the simplistic prejudices about artificial intelligence common in science fiction are no longer useful guidelines for talking about real artificial intelligence. Sci-fi writers long assumed that AIs couldn't create art, and now it turns out that's one of the things they're actually rather good at.