Comment on The Pentagon is moving toward letting AI weapons autonomously decide to kill humans

FaceDeer@kbin.social 1 year ago
And then when they go looking for that bug and find the logs showing that the operator overrode the safeties instead, they know exactly who is responsible for blowing up those ambulances.

mihies@kbin.social 1 year ago
It doesn't work like that, though. Western (and Western-backed) militaries can do that and go unpunished.

mihies@kbin.social 1 year ago
Here is an example of a US drone strike killing civilians.
FlyingSquid@lemmy.world 1 year ago
Israeli general: Captain, were you responsible for reprogramming the drones to bomb those ambulances?
Israeli captain: Yes, sir! Sorry, sir!
Israeli general: Captain, you’re just the sort of man we need in this army.
FaceDeer@kbin.social 1 year ago
Ah, evil people exist and therefore we should never develop technology that evil people could use. Right.
FlyingSquid@lemmy.world 1 year ago
Seems like a good reason not to develop technology to me. See also: biological weapons.
FaceDeer@kbin.social 1 year ago
Those weapons came out of developments in medicine. Technology itself is not good or evil; it can be used for good or for evil. If you decide not to develop a technology, you're depriving the good of it as well. My point earlier was to show that there are good uses for these things.
GigglyBobble@kbin.social 1 year ago
And if the operator was commanded to do it? And to delete the logs? How naive are you to think this would somehow make war more humane?
FaceDeer@kbin.social 1 year ago
Each additional safeguard makes it harder and adds another name to the eventual war crimes trial. Don't let the perfect be the enemy of the good, especially when it comes to reducing the number of ambulances that get blown up in war zones.
livus@kbin.social 1 year ago
@FaceDeer if a country which repeatedly bombs hospitals had that tech right now, do you really think they would program it to avoid ambulances or hospitals? Of course not.