I mean, most complex weapons systems have been some level of robot for quite a while. Aircraft are fly-by-wire, you have cruise missiles, CIWS systems operating in autonomous mode pick out targets, ships navigate, etc.
I don’t expect that that genie will ever go back in the bottle. To do it, you’d need an arms control treaty, and there’d be a number of problems with that:
- Verification is extremely difficult, especially with weapons that are optionally-autonomous. FCAS, for example, the fighter that several European countries are working on, is optionally-manned. You can't tell just by looking at such an aircraft whether it's going to be flown by a person or by an autonomous computer. Consider the Washington Naval Treaty: Japan managed to build treaty-violating warships in secret, and warships are very large, hard to disguise, easy to distinguish externally, and can only be built and stored in a very few locations. I have a hard time seeing how one would manage verification with autonomy.
- It will very probably affect the balance of power. Generally speaking, arms control treaties that alter the balance of power aren't going to work, because the disadvantaged party is unlikely to agree to them.
I’d also add that I’m not especially concerned about autonomy specifically in weapons systems.
It sounds like your concern, based on your follow-up comment, is that something like Skynet might show up, the computer network in the Terminator movie series that turns on humans. The kind of autonomy you're dealing with here isn't on that level. I can imagine general AI one day being an issue in that role, though I'm not sure that it's the main concern I'd have; I'd guess that dependence followed by an unexpected failure might be a larger issue. But I don't think it has much to do with military matters. In a scenario where you truly had an uncontrolled, more-intelligent-than-humans artificial intelligence running amok on something like the Internet, it wouldn't matter much whether or not you'd plugged it into weapons, because anything that can realistically fight humanity can probably manage to get control of or produce weapons anyway. This is an issue with the development of advanced artificial intelligence, but it's not really a weapons or military issue.
pennomi@lemmy.world 2 months ago
Whoever bans them will be at a disadvantage militarily. They will never be banned for this one reason alone.
Telorand@reddthat.com 2 months ago
I think you're assuming a ban would also cover their production (not an unreasonable assumption). As we've seen with nukes, however, possession of a banned weapon is sometimes as good as using it.
catloaf@lemm.ee 2 months ago
I'm guessing the major countries will ban them, but still develop the technology, let other countries start using them, and then say "well, everyone else is using them, so now we have to as well". Just like we're seeing with mini drones in Ukraine: the US is officially against automated attacks, but we're supporting a country that uses them, and we're developing full automation for our own aircraft.
NeoNachtwaechter@lemmy.world 2 months ago
…and exactly this way of thinking will one day create “Skynet”.
We need to be smarter than that!
Otherwise mankind is doomed.
pennomi@lemmy.world 2 months ago
Unfortunately this is basic game theory, so the “smart” thing is to have the weapons, but avoid war.
Once we've grown past war, we can disarm, but it can't happen in the opposite order.
NeoNachtwaechter@lemmy.world 2 months ago
But what do we do until then? Your ideas don't provide any solutions; you just say it's unavoidable as it is.
JayDee@lemmy.ml 2 months ago
The process of collective disarming is the path towards growing past war. And that first step is the collective banning of manufacturing such weapons.
technocrit@lemmy.dbzer0.com 2 months ago
“Basic game theory” says we should destroy this wacko system. jfc.
technocrit@lemmy.dbzer0.com 2 months ago
Ban the state first. Every state. These wacko cultists are literally destroying the planet so they can control people with killer robots.
pennomi@lemmy.world 2 months ago
Yeah totally agree. The general population almost never wants to go to war - the plutocrats do.
Once we take care of our own corrupt governance I suspect wars will rapidly disappear, and then weapons will likewise disappear.
Angry_Autist@lemmy.world 2 months ago
Once combat AI exceeds humans:
A ban on all war, globally. Those who violate the ban will have autonomous soldiers deployed on their soil.
This is the only way it will work; no other path leads to a world without autonomous warbots. We can ban them all we want, but there will be some terrorist cell with access to Arduinos that can build the same thing in a garage. And China will never follow such a ban.