Comment on this one goes out to the arts & humanities
RustyShackleford@programming.dev 7 months ago
I propose that we treat AI as ancillas, companions, muses, or partners in creation and understanding our place in the cosmos.
While there are pitfalls in treating the current generation of LLMs and GANs as sentient, or any AI for that matter, there will come a day when we must admit that an artificial intelligence is self-aware and sentient.
To me, the fundamental question about AI, one that will reveal much about humanity, is as much philosophical as it is technical: if a being that is artificially created, has intelligence, and is functionally self-aware and sentient, does it have natural rights?
if a being that is artificially created, has intelligence, and is functionally self-aware and sentient, does it have natural rights?
Obviously yes. Otherwise you gotta start denying rights to in vitro fertilization babies.
ProgrammingSocks@pawb.social 7 months ago
It would have natural rights, yes. Watch Star Trek TNG’s “The Measure of a Man” which tackles this issue exactly. Does the AI of current days have intelligence or sentience? I don’t believe so.
RustyShackleford@programming.dev 7 months ago
Yes, I agree. I make deep neural network models for a living. Even the best LLMs still “hallucinate” unreliably after 30-40 queries. My expertise is in computer vision systems; perhaps that’s been mitigated better as of late.
My point was to emphasize the necessity for us, as a species, to answer the philosophical question and begin codifying jurisprudence around it well before the moment a general-purpose AI becomes self-aware.