The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
Submitted 11 months ago by return2ozma@lemmy.world to technology@lemmy.world
Comments
gandalf_der_12te@feddit.de 11 months ago
Netflix has a documentary about it; it's quite good. I watched it yesterday but forgot its name.
Bakkoda@sh.itjust.works 11 months ago
It’s a 3 part series. Terminator I think it is.
Deway@lemmy.world 11 months ago
Don’t forget the follow-up, The Sarah Connor Chronicles. An amazing sequel to a nice documentary.
CrayonRosary@lemmy.world 11 months ago
Black Mirror?
SOB_Van_Owen@lemm.ee 11 months ago
Metalhead.
Rockyrikoko@lemm.ee 11 months ago
I think I found it here. It’s called Terminator 2: Judgment Day
criticalthreshold@lemmy.world 11 months ago
Unknown: Killer Robots ?
gandalf_der_12te@feddit.de 11 months ago
yes, that was it. Quite shocking to watch. I think that these things will be very real in maybe ten years. I’m quite afraid of it.
tsonfeir@lemm.ee 11 months ago
If we don’t, they will. And we can only learn by seeing it fail. To me, the answer is obvious. Stop making killing machines. 🤷‍♂️
Silverseren@kbin.social 11 months ago
The sad part is that the AI might be more trustworthy than the humans currently in control.
Varyk@sh.itjust.works 11 months ago
No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a way better decision-making track record.
Autonomous killing is an absolutely terrible, terrible idea.
The incident I’m thinking about is geese being misinterpreted by a computer as nuclear missiles and a human recognizing the error and turning off the system, but I can only find a couple sources for that, so I found another:
In 1983, a computer thought that the sunlight reflecting off of clouds was a nuclear missile strike and a human waited for corroborating evidence rather than reporting it to his superiors as he should have, which would have likely resulted in a “retaliatory” nuclear strike.
As faulty as humans are, that's as good a safeguard against tragedy as we have. Keep a human in the chain.
alternative_factor@kbin.social 11 months ago
Self-driving cars lose their shit and stop working if a kangaroo gets in their way; one day some poor people are going to be carpet-bombed because of another strange creature no one ever really thinks about except locals.
livus@kbin.social 11 months ago
Have you never met an AI?
FlyingSquid@lemmy.world 11 months ago
Yeah, I think the people who are saying this could be a good thing seem to forget that the military always contracts out to the lowest bidder.
SCB@lemmy.world 11 months ago
Drone strikes minimize casualties compared to the alternatives: heavier ordnance on bigger delivery systems, or boots on the ground.
If drone strikes upset you, your anger is misplaced if you’re blaming drones. You’re really against military strikes at those targets, full stop.
kromem@lemmy.world 11 months ago
Eventually maybe. But not for the initial period where the tech is good enough to be extremely deadly but not smart enough to realize that often being deadly is the stupider choice.
Nobody@lemmy.world 11 months ago
What’s the opposite of eating the onion? I read the title before looking at the site and thought it was satire.
Wasn’t there a test a while back where the AI went crazy and started killing everything to score points? Then, they gave it a command to stop, so it killed the human operator. Then, they told it not to kill humans, and it shot down the communications tower that was controlling it and went back on a killing spree. I could swear I read that story not that long ago.
Nutteman@lemmy.world 11 months ago
It was a nothingburger. A thought experiment.
FaceDeer@kbin.social 11 months ago
The link was missing a slash: https://www.reuters.com/article/idUSL1N38023R/
This is typically how stories like this go. Like most animals, humans have evolved to pay extra attention to things that are scary and give inordinate weight to scenarios that present danger when making decisions. So you can present someone with a hundred studies about how AI really behaves, but if they've seen the Terminator that's what sticks in their mind.
FaceDeer@kbin.social 11 months ago
If you program an AI drone to recognize ambulances and medics and forbid them from blowing them up, then you can be sure that they will never intentionally blow them up. That alone makes them superior to having a Mk. I Human holding the trigger, IMO.
GigglyBobble@kbin.social 11 months ago
Unless the operator decides hitting exactly those targets fits their strategy and they can blame a software bug.
FaceDeer@kbin.social 11 months ago
And then when they go looking for that bug and find the logs showing that the operator overrode the safeties instead, they know exactly who is responsible for blowing up those ambulances.
Chuckf1366@sh.itjust.works 11 months ago
It’s more like we’re giving the machine more opportunities to go off accidentally, or potentially encouraging more use of civilian camouflage to try and evade our hunter-killer drones.
kromem@lemmy.world 11 months ago
Right, because self-driving cars have been great at correctly identifying things.
And those LLMs have been following their rules to the letter.
We really need to let go of our projected concepts of AI in the face of what’s actually been arriving. And one of those things we need to let go of is the concept of immutable rule following and accuracy.
In any real world deployment of killer drones, there’s going to be an acceptable false positive rate that’s been signed off on.
FaceDeer@kbin.social 11 months ago
We are talking about developing technology, not existing tech.
And actually, machines have become quite adept at image recognition. For some things they're already better at it than we are.
crypticthree@lemmy.world 11 months ago
Did you know that “if” is the middle word of life
RiikkaTheIcePrincess@kbin.social 11 months ago
LLM "AI" fans thinking "Hey, humans are dumb and AI is smart so let's leave murder to a piece of software hurriedly cobbled together by a human and pushed out before even they thought it was ready!"
I guess while I'm cheering the fiery destruction of humanity I'll be thanking not the wonderful being who pressed the "Yes, I'm sure I want to set off the antimatter bombs that will end all humans" but the people who were like "Let's give the robots a chance! It's not like the thinking they don't do could possibly be worse than that of the humans who put some of their own thoughts into the robots!"
I just woke up, so you're getting snark. makes noises like the snarks from Half-Life You'll eat your snark and you'll like it!
Pratai@lemmy.ca 11 months ago
Won’t that be fun!
/s
chemical_cutthroat@lemmy.world 11 months ago
We’ve been letting other humans decide since the dawn of time, and look how that’s turned out. Maybe we should let the robots have a chance.
FaceDeer@kbin.social 11 months ago
I'm not expecting a robot soldier to rape a civilian, for example.
BaardFigur@lemmy.world 11 months ago
Reminds me of this piped.video/watch?v=9fa9lVwHHqg&t=1
KeenFlame@feddit.nu 11 months ago
Not really, it’s against conventions
gellius@lemmy.world 11 months ago
Conventions are just rules for thee but not for me.
KeenFlame@feddit.nu 11 months ago
I know, like the mustard gas used in every war.
TransplantedSconie@lemm.ee 11 months ago
Well, Ultron is inevitable.
Who we got for the Avengers Initiative?
frickineh@lemmy.world 11 months ago
Ultron and Project Insight. It’s like the people in charge watched those movies and said, “You know, I think Hydra had the right idea!”
TransplantedSconie@lemm.ee 11 months ago
Wouldn’t put it past this timeline.
themurphy@lemmy.world 11 months ago
I think people are forgetting that drones like these will also be made to protect. And I don’t mean in a police kinda way.
But if, let’s say, Argentina deployed these against Brazil, Brazil would have a defending lineup. They would fight the war out.
Then everyone watching will see it makes no sense to let those robots fight it out. Both countries will produce more robots until… yeah, no more wires and metal, I guess.
Future = less real war, more cold war. Just like the A-bomb works today.
FlyingSquid@lemmy.world 11 months ago
Then everyone watching will see this makes no sense to let those robots fight it out.
Just like how WWI was the War to End All Wars, right?
Future = less real war, more cold war. Just like the A-bomb works today.
Sorry, how is there less war now?
FlyingSquid@lemmy.world 11 months ago
I’m guessing their argument is that if they don’t do it first, China will. And they’re probably right, unfortunately. I don’t see a way around a future with AI weapons platforms if technology continues to progress.
shrugal@lemm.ee 11 months ago
We could at least make it a war crime.
FlyingSquid@lemmy.world 11 months ago
That doesn’t seem to have stopped anyone.