Comment on The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
Varyk@sh.itjust.works 1 year ago
No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a way better decision-making track record.
Autonomous killing is an absolutely terrible, terrible idea.
The incident I’m thinking of is a computer misinterpreting geese as nuclear missiles and a human recognizing the error and shutting the system down, but I can only find a couple of sources for that, so here’s another:
In 1983, a computer mistook sunlight reflecting off clouds for a nuclear missile strike, and a human waited for corroborating evidence rather than reporting it to his superiors as he was supposed to, which would likely have triggered a “retaliatory” nuclear strike.
As faulty as humans are, that’s as good a safeguard against tragedy as we have. Keep a human in the chain.
alternative_factor@kbin.social 1 year ago
Self-driving cars lose their shit and stop working if a kangaroo gets in their way; one day some poor people are going to be carpet bombed because of another strange creature no one ever really thinks about except locals.