Comment on The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
AceFuzzLord@lemm.ee 11 months ago
As disturbing as this is, it’s inevitable at this point. If one of the superpowers doesn’t develop their own fully autonomous murder drones, another country will. And eventually those drones will malfunction or some sort of bug will be present that will give it the go ahead to indiscriminately kill everyone.
If you ask me, it’s just an arms race to see who builds the murder drones first.
I feel like it’s ok to skip to optimizing the autonomous drone-killing drone.
You’ll want those either way.
If entire wars could be fought by proxy with robots instead of humans, would that be better (or less bad) than the way wars are currently fought? I feel like it might be.
You’re headed towards the Star Trek episode “A Taste of Armageddon”. I’d also note that people losing a war without suffering recognizable losses are less likely to surrender to the victor.
The use and development of other weapons of mass destruction have been successfully restrained through mutual agreements.
FaceDeer@kbin.social 11 months ago
A drone that is indiscriminately killing everyone is a failure and a waste. Even the most callous military would try to design better than that for purely pragmatic reasons, if nothing else.
SomeSphinx@lemmy.world 11 months ago
Even the best-laid plans go awry, though. The point is that even if they pragmatically design it not to kill indiscriminately, bugs and glitches happen. The technology isn’t all the way there yet, and putting the ability to kill in the machine body of something that cannot understand context is a terrible idea. It’s not that the military wants to indiscriminately kill everything, it’s that they can’t possibly plan for problems in the code they haven’t encountered yet.