Comment on The Pentagon is moving toward letting AI weapons autonomously decide to kill humans

Varyk@sh.itjust.works 10 months ago

No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a way better decision-making track record.

Autonomous killing is an absolutely terrible, terrible idea.

The incident I’m thinking of is one where a computer misinterpreted a flock of geese as incoming nuclear missiles and a human recognized the error and shut the system down, but I can only find a couple of sources for that, so here’s another:

In 1983, a Soviet early-warning computer mistook sunlight reflecting off clouds for a nuclear missile strike. The officer on duty waited for corroborating evidence rather than reporting it to his superiors as protocol required; reporting it would likely have triggered a “retaliatory” nuclear strike.

As faulty as humans are, they’re as good a safeguard against these tragedies as we have. Keep a human in the chain.

source