Pointless and maybe a little reckless.
Comment on: Expecting an LLM to become conscious is like expecting a painting to become alive
Kyrgizion@lemmy.world 3 weeks ago
As long as we can’t even define sapience in biological life, where it resides, and how it works, it’s pointless to try to apply those terms to AI. We don’t understand how natural intelligence works, so using the little we do know about it to define something completely different is counterproductive.
finitebanjo@piefed.world 2 weeks ago
100 billion glial cells and DNA for instructions. When you get to replicating that, lmk, but it sure af ain’t an algorithm made to guess the next word.
daniskarma@lemmy.dbzer0.com 3 weeks ago
We don’t know what causes gravity, or how it works, either. But you can measure it, define it, and even formulate a law that very precisely approximates what happens when gravity is involved.
I don’t think LLMs will create intelligence, but I don’t think we need to solve everything about human intelligence before having machine intelligence.
Perspectivist@feddit.uk 3 weeks ago
Though in the case of consciousness - the fact of there being something it’s like to be - not only do we not know what causes it or how it works, we have no way of measuring it either. There’s zero evidence for it anywhere in the universe outside of our own subjective experience of it.
CheeseNoodle@lemmy.world 2 weeks ago
To be fair, there’s zero evidence for anything outside our own subjective experience of it; we’re just kind of running with the assumption that our subjective experience is an accurate representation of reality.
finitebanjo@piefed.world 2 weeks ago
We’ve actually got a pretty good understanding of the human brain; we just don’t have the tech that could replicate it on any sort of budget, nor a use case for it. Spoiler: there is no soul.