These grifters would rather cry into their computers than oppose any actual suffering.
Suffering is Real. AI Consciousness is Not. | TechPolicy.Press
Submitted 1 day ago by chobeat@lemmy.ml to technology@lemmy.world
https://www.techpolicy.press/suffering-is-real-ai-consciousness-is-not/
kibiz0r@midwest.social 1 day ago
Could just say:
If you accept either privacy of consciousness or phenomenal transparency, then philosophical zombies must be conceivable, and therefore physicalism is wrong and you can’t engineer consciousness by mimicking brain states.
tabular@lemmy.world 1 day ago
Why does
privacy of consciousness
mean one can’t engineer consciousness by mimicking states of the organ that probably has something to do with it? What does phenomenal transparency mean?
Hackworth@lemmy.world 1 day ago
Plus, privacy of consciousness may just be a technological hurdle.
kibiz0r@midwest.social 1 day ago
I added some episodes of Walden Pod to my comment, so check those out if you wanna go deeper, but I’ll still give a tl;dl here.
Privacy of consciousness is simply that there’s a permanent asymmetry between how well you can know your own mind and how well you can know the minds of others, no matter how sophisticated your physical tools get. You will always have a different level of doubt about the sentience of others compared to your own sentience.
Phenomenal transparency is the idea that your internal experiences (like what pain feels like) are “transparent”, where transparency means you can fully understand something’s nature through cognition alone, without needing to measure anything in the physical world to complete your understanding. For example, the concept of a triangle or the fact that 2+2=4 is transparent. Water is opaque, because you have to inspect it with material tools to understand the nature of what you’re referring to.
You probably immediately have some questions or objections, and that’s where I’ll encourage you to check out those episodes. There’s a good reason they’re longer than 5 sentences.
10001110101@lemm.ee 1 day ago
Lol. This comment sent me down a rabbit hole. I still don’t know if it’s logically correct from a non-physicalist POV, but I did come to the conclusion that I lean toward eliminative materialism and illusionism. Now I don’t have to think about consciousness anymore because it’s just a trick our brains play on us (consciousness always seemed poorly defined to me anyways).
I guess when AI appears to be sufficiently human or animal-like in its cognitive abilities and emotions, I’ll start worrying about its suffering.
kibiz0r@midwest.social 1 day ago
If you wanna continue down the rabbit hole, I added some good stuff to my original comment. But if you’re leaning towards epiphenomenalism, might I recommend this one: audioboom.com/…/8389860-71-against-epiphenomenali…