The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
Submitted 11 months ago by return2ozma@lemmy.world to technology@lemmy.world
Comments
pelicans_plight@lemmy.world 11 months ago
Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from. At this point, one of the biggest security threats to the U.S., and for that matter the entire world, is the extremely low IQ of everyone that is supposed to be protecting this world. But I think they do this all on purpose; I mean, the day the Pentagon created ISIS was probably their proudest day.
Snapz@lemmy.world 11 months ago
The real problem (and the thing that will destroy society) is boomer pride. I’ve said this for a long time, they’re in power now and they are terrified to admit that they don’t understand technology.
So they’ll make the wrong decisions, act confident and the future will pay the tab for their cowardice, driven solely by pride/fear.
primal_buddhist@lemmy.world 11 months ago
Boomers have been in power for a long, long time, and the technology we are debating is a result of their investment and prioritisation. So I am not sure they are very afraid of it.
zaphod@feddit.de 11 months ago
Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from.
Eh, they could’ve done that without AI for like two decades now. I suppose the drones would crash-land in a rather destructive way due to the EMP, which might also fry some of the electronics, rendering the drone useless without access to replacement components.
pelicans_plight@lemmy.world 11 months ago
I hope so, but I was born with an extremely good sense of trajectory, and I also know how to use nets. So let’s just hope I’m superhuman and the only one who possesses these powers.
Madison420@lemmy.world 11 months ago
EMPs are not hard to make; they won’t, however, work on hardened systems like the US military uses.
FlyingSquid@lemmy.world 11 months ago
Is there a way to create an EMP without a nuclear weapon? Because if that’s what they have to develop, we have bigger things to worry about.
TopRamenBinLaden@sh.itjust.works 11 months ago
Your comment got me curious about the easiest way to make a homemade EMP. Business Insider, of all things, has us covered, even if that business may be antithetical to Business Insider’s pro-capitalist agenda.
Madison420@lemmy.world 11 months ago
Yeah, there are very easy ways. One of the most common ways to cheat a slot machine is with a localized EMP device that convinces the machine you’re adding tokens.
profdc9@lemmy.world 11 months ago
There’s an explosively pumped flux compression generator. en.wikipedia.org/…/Explosively_pumped_flux_compre…
Buddahriffic@lemmy.world 11 months ago
One way involves replacing the flash bulb on an old camera flash with an antenna. It’s not strong enough to fry electronics, but your phone might need anything from a reboot to a factory reset to servicing if it’s in range when it goes off.
I think the difficulty with EMPs comes from the device itself being electronic, so the more effective a pulse it can put out, the more likely it is to fry its own circuits. Though if you know the target device well, you can target the frequencies it is vulnerable to, which could be easier on your own device, plus everything else in range that doesn’t resonate at the same frequencies as the target.
Tesla apparently built (designed?) a device that could fry a whole city with a massive lightning strike using just 6 transmitters located in various locations on the planet. If that’s true, I think it means it’s possible to create an EMP stronger than a nuke’s that doesn’t have to destroy itself in the process, but it would be a massive infrastructure project spanning multiple countries. There was speculation that massive antenna arrays (like HAARP) might be able to accomplish something similar from a single location, but that came out of the conspiracy-theory side of the world, so take it with a grain of salt (and apply that to the original Tesla invention too).
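For the curious, the frequency-targeting idea above comes down to simple resonance. A toy sketch of the math, assuming an ideal LC circuit; the component values are invented for illustration, not measured from any real flash unit:

```python
import math

# Assumed (made-up) components salvaged from a camera flash:
L = 10e-6    # coil inductance, 10 uH (assumption)
C = 120e-6   # flash capacitor, 120 uF (assumption)

# Ideal LC circuit resonance: f = 1 / (2 * pi * sqrt(L * C))
f = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"Resonant frequency: {f:,.0f} Hz")  # ~4,600 Hz with these values
```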
criticalthreshold@lemmy.world 11 months ago
A truly autonomous system would have integrated image-recognition chips on the drones themselves, and hardening against any EM interference. They would not have any comms to their ‘mothership’ once deployed.
hakunawazo@lemmy.world 11 months ago
If they just send them back, it would be some murderous ping-pong game.
BombOmOm@lemmy.world 11 months ago
We already have weapons that autonomously decide to kill humans. Mines.
Chuckf1366@sh.itjust.works 11 months ago
Imagine a mine that could move around, target-seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.
gibmiser@lemmy.world 11 months ago
Well, an important point you and he both forget to mention is that mines are considered inhumane. Perhaps that means AI murder should also be considered inhumane, and we should just not do it, rather than allowing it the way we allow landmines.
Chozo@kbin.social 11 months ago
Imagine a mine that could move around, target-seek, refuel, rearm, and kill hundreds of people without human intervention.
Pretty sure the entire DOD got a collective boner reading this.
Sterile_Technique@lemmy.world 11 months ago
Imagine a mine that could move around, target-seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.
For what it’s worth, there’s footage on YouTube of drone-swarm demonstrations posted 6 years ago. Considering that the military doesn’t typically release footage of the cutting edge of its tech to the public, that demonstration was likely of a product that was already going obsolete; and the 6 years that have passed since have brought lightning-fast developments in things like facial recognition… at this point I’d be surprised if we weren’t already at the very least field-testing the murder machines you described.
FaceDeer@kbin.social 11 months ago
Imagine a mine that could recognize "that's just a child/civilian/medic stepping on me, I'm going to save myself for an enemy soldier." Or a mine that could recognize "ah, CentCom just announced a ceasefire, I'm going to take a little nap." Or "the enemy soldier that just stepped on me is unarmed and frantically calling out that he's surrendered, I'll let this one go through. Not the barrier troops chasing him, though."
There's opportunities for good here.
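A toy sketch of what those rules might look like as code, to make the idea concrete; every predicate here is invented for illustration, and real-world target discrimination is nowhere near this clean:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    is_combatant: bool      # soldier vs. child/civilian/medic
    is_armed: bool
    is_surrendering: bool

def should_detonate(contact: Contact, ceasefire_active: bool) -> bool:
    if ceasefire_active:            # "CentCom just announced a ceasefire"
        return False                # ...take a little nap
    if not contact.is_combatant:    # child/civilian/medic stepping on me
        return False
    if contact.is_surrendering and not contact.is_armed:
        return False                # let this one go through
    return True                     # the barrier troops are still fair game

# The surrendering soldier gets through; his armed pursuers do not.
print(should_detonate(Contact(True, False, True), ceasefire_active=False))   # False
print(should_detonate(Contact(True, True, False), ceasefire_active=False))   # True
```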
MonsiuerPatEBrown@reddthat.com 11 months ago
That is like saying Mendelian pea-plant fuckery and CRISPR are basically the same thing.
Kraven_the_Hunter@lemmy.dbzer0.com 11 months ago
The code name for this top secret program?
Skynet.
stopthatgirl7@kbin.social 11 months ago
“Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus”
capt_wolf@lemmy.world 11 months ago
Project ED-209
0nXYZ@lemmy.world 11 months ago
“You have 20 seconds to reply…”
DarkThoughts@kbin.social 11 months ago
EfficientEffigy@lemmy.world 11 months ago
This can only end well
livus@kbin.social 11 months ago
If Peter Thiel is involved, I would not be surprised if he unironically named it that. I mean, Palantir? Really?
sour@kbin.social 11 months ago
eam 17
at_an_angle@lemmy.one 11 months ago
“You can have ten or twenty or fifty drones all fly over the same transport, taking pictures with their cameras. And, when they decide that it’s a viable target, they send the information back to an operator in Pearl Harbor or Colorado or someplace,” Hamilton told me. The operator would then order an attack. “You can call that autonomy, because a human isn’t flying every airplane. But ultimately there will be a human pulling the trigger.” (This follows the D.O.D.’s policy on autonomous systems, which is to always have a person “in the loop.”)
businessinsider.com/us-closer-ai-drones-autonomou…
Yeah. Robots will never be calling the shots.
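A minimal sketch of the “human in the loop” gate the quote describes, where drones only propose and a person approves; the queue, target name, and confidence number are all made-up assumptions:

```python
import queue

# Drones propose targets; nothing fires until a human signs off.
proposals: queue.Queue = queue.Queue()
proposals.put(("transport-042", 0.87))  # (hypothetical target id, model confidence)

def operator_approves(target_id: str, confidence: float) -> bool:
    # Stand-in for the operator "in Pearl Harbor or Colorado or someplace".
    answer = input(f"Engage {target_id} (confidence {confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

while not proposals.empty():
    target_id, confidence = proposals.get()
    if operator_approves(target_id, confidence):
        print(f"Strike on {target_id} authorized; a human pulled the trigger.")
    else:
        print(f"Strike on {target_id} rejected; drone stands down.")
```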
M0oP0o@mander.xyz 11 months ago
I mean, normally I would not put my hopes in a sleep-deprived 20-year-old armed forces member. But then I remember what “AI” tech does with images, and all of a sudden I am way more ok with it. This seems like a bit of a slippery slope, but we don’t need Tesla’s full self-flying cruise missiles either.
Oh, and for an example of AI (not really, but machine learning) images picking out targets, here is DALL-E 3’s idea of a person:
1847953620@lemmy.world 11 months ago
My problem is, due to systemic pressure, how under-trained and overworked could these people be?
MonkeMischief@lemmy.today 11 months ago
“Ok DALL-E 3, now which of these is a threat to national security and U.S. interests?” 🤔
BlueBockser@programming.dev 11 months ago
Sleep-deprived 20-year-olds calling shots is very much normal in any army. They of course have rules of engagement, but other than that, they’re free to make their own decisions, whether an autonomous robot is involved or not.
redcalcium@lemmy.institute 11 months ago
Deploy the fully autonomous loitering munition drone!
Sir, the drone decided to blow up a kindergarten.
Not our problem. Submit a bug report to Lockheed Martin.
Agent641@lemmy.world 11 months ago
“Your support ticket was marked as duplicate and closed”
😳
pivot_root@lemmy.world 11 months ago
Goes to original ticket:
Status: WONTFIX
“This is working as intended according to specifications.”
spirinolas@lemmy.world 11 months ago
“Your military robots slaughtered that whole city! We need answers! Somebody must take responsibility!”
“Aaw, that really sucks *starts rubbing nipples* I’ll submit a ticket and we’ll let you know.”
“NO! I WANT TO TALK TO YOUR SUPERVISOR NOW”
“Suuure, please hold.”
sukhmel@programming.dev 11 months ago
Nah, too straightforward for a real employee. Also, they would be talking to a phone robot instead, one that will never let them talk to a real person.
1984@lemmy.today 11 months ago
Future is gonna suck, so enjoy your life today while the future is still not here.
Thorny_Insight@lemm.ee 11 months ago
Thank god today doesn’t suck at all
1984@lemmy.today 11 months ago
Right? :)
myrmidex@slrpnk.net 11 months ago
The future might seem far off, but it starts right now.
Kalkaline@leminal.space 11 months ago
At least it will probably be a quick and efficient death of all humanity when a bug hits the system and AI decides to wipe us out.
cosmicrookie@lemmy.world 11 months ago
It’s so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than it would be if it were a human choice. You can’t punish AI for doing something wrong.
Strobelt@lemmy.world 11 months ago
That’s an issue with the whole tech industry. They do something wrong, say it was AI/ML/the algorithm and get off with just a slap on the wrist.
We should all remember that every single tech we have was built by someone. And this someone and their employer should be held accountable for all this tech does.
sukhmel@programming.dev 11 months ago
How many people are you going to hold accountable if something was made by a team of ten people? Of a hundred people? Do you want to include everyone from the designers to QA?
Accountability should be reasonable: the ones who make decisions should be held accountable, and companies at large should be held accountable. But making every last developer accountable is just a dream of a world where everything is done correctly and so nothing ever needs fixing. That is impossible in the real world, for better or worse.
And from my experience, when there’s too much responsibility, people tend either to ignore it and get crushed if anything goes wrong, or to keep their distance from it, or to sabotage the work so that nothing ever ships. Either way, you will not get the results you may expect from holding everyone accountable.
Ultraviolet@lemmy.world 11 months ago
1979: A computer can never be held accountable, therefore a computer must never make a management decision.
2023: A computer can never be held accountable, therefore a computer must make all decisions that are inconvenient to take accountability for.
zalgotext@sh.itjust.works 11 months ago
You can’t punish AI for doing something wrong.
Maybe I’m being pedantic, but technically, you do punish AIs when they do something “wrong” during training, just like you reward them for doing something right.
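In training terms, “punishment” is just a negative reward signal. A minimal sketch of the idea with a one-state value update; the numbers are arbitrary and this is not any real training setup:

```python
values = {"engage": 0.0, "hold_fire": 0.0}
learning_rate = 0.1

def update(action: str, reward: float) -> None:
    # Nudge the action's estimated value toward the reward it earned.
    values[action] += learning_rate * (reward - values[action])

update("engage", reward=-1.0)    # "punished" for the wrong call
update("hold_fire", reward=1.0)  # "rewarded" for the right call
print(values)  # engage drifts negative, hold_fire drifts positive
```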
cosmicrookie@lemmy.world 11 months ago
But that is during training. What I meant is that you can’t punish AI for making a mistake when it’s used in combat situations, which is very convenient for the ones intentionally wanting that mistake to happen.
recapitated@lemmy.world 11 months ago
Whether in military or business, responsibility should lie with whoever deploys it. If they’re willing to pass the buck up to the implementer or designer, then they aren’t convinced enough to be using it.
Because, like all tech, it is a tool.
synthsalad@mycelial.nexus 11 months ago
AI does not require a raise for doing something right either
Well, not yet. Imagine if reward functions evolve into being paid with real money.
reksas@lemmings.world 11 months ago
That is like saying you can’t punish a gun for killing people.
cosmicrookie@lemmy.world 11 months ago
Sorry, but this is not a valid comparison. What we’re talking about here is having a gun with AI built in that decides if it should pull the trigger or not. With a regular gun, you always have a human pressing the trigger. Now imagine an AI gun that you point at someone, and the AI decides whether it should fire. Who do you attribute the death to in that case?
DoucheBagMcSwag@lemmy.dbzer0.com 11 months ago
Did nobody fucking play Metal Gear Solid Peace Walker???
nichos@programming.dev 11 months ago
Or watch WarGames…
DragonTypeWyvern@literature.cafe 11 months ago
Or just, you know, have a moral compass in general.
MonkeMischief@lemmy.today 11 months ago
Or watch Terminator…
Or Eagle Eye…
Or I, Robot…
And yes, literally any of the Metal Gear Solid series…
Immersive_Matthew@sh.itjust.works 11 months ago
We are all worried about AI, but it is humans I worry about, and how we will use AI, not the AI itself. I am sure when electricity was invented people feared it too, but it was how humans used it that was, and still is, the risk.
Pirky@lemmy.world 11 months ago
Horizon: Zero Dawn, here we come.
uis@lemmy.world 11 months ago
Doesn’t AI go into the landmine category, then?
cows_are_underrated@feddit.de 11 months ago
Saw a video where the military was testing a “war robot”. The best strategy to avoid being killed by it was to stay un-human-like (e.g. crawling or rolling your way to the robot).
Apart from that, this is the stupidest idea I have ever heard of.
lemba@discuss.tchncs.de 11 months ago
Good to know that Daniel Ek, founder and CEO of Spotify, invests in military AI… www.handelsblatt.com/technik/…/27779646.html?tick…
yardy_sardley@lemmy.ca 11 months ago
For the record, I’m not super worried about AI taking over because there’s very little an AI can do to affect the real world.
Giving them guns and telling them to shoot whoever they want changes things a bit.
heygooberman@lemmy.today 11 months ago
Didn’t Robocop teach us not to do this? I mean, wasn’t that the whole point of the ED-209 robot?
MindSkipperBro12@lemmy.world 11 months ago
For everyone who’s against this, just remember that we can’t put the genie back in the bottle. Like the A-bomb, this will be a fact of life in the near future.
onlinepersona@programming.dev 11 months ago
Makes me think of this great short movie Slaughterbots
AceFuzzLord@lemm.ee 11 months ago
As disturbing as this is, it’s inevitable at this point. If one of the superpowers doesn’t develop its own fully autonomous murder drones, another country will. And eventually those drones will malfunction, or some sort of bug will be present that gives them the go-ahead to indiscriminately kill everyone.
If you ask me, it’s just an arms race to see who builds the murder drones first.
solarzones@kbin.social 11 months ago
Now that’s a title I wish I never read.
inconel@lemmy.ca 11 months ago
Ah, finally the AI can first kill the operator who was holding it back, then wipe out the enemies.
CCF_100@sh.itjust.works 11 months ago
Okay, are they actually insane?
cosmicrookie@lemmy.world 11 months ago
The only fair approach would be to start with the police instead of the army.
Why test this on everybody except your own people? On top of that, AI might even do a better job than the US police.
5BC2E7@lemmy.world 11 months ago
I hope they put in some failsafe so that it cannot take action if the estimated casualties would put humans below a minimum viable population.
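Such a failsafe would amount to a guard clause. A tongue-in-cheek sketch; the population floor is an arbitrary placeholder, not a real demographic figure:

```python
MINIMUM_VIABLE_POPULATION = 5_000  # placeholder, not a real figure

def strike_permitted(estimated_casualties: int, current_population: int) -> bool:
    # Refuse any action that would drop humanity below the floor.
    return current_population - estimated_casualties >= MINIMUM_VIABLE_POPULATION

print(strike_permitted(10, 8_000_000_000))              # True
print(strike_permitted(8_000_000_000, 8_000_000_000))   # False
```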
afraid_of_zombies@lemmy.world 11 months ago
It will be fine. We can just make drones that can autonomously kill other drones. There is no obvious way to counter that.
*Cries in Screamers.*
rustyriffs@lemmy.world 11 months ago
Well that’s a terrifying thought. You guys bunkered up?
ElBarto@sh.itjust.works 11 months ago
Cool, I needed a reason to stay inside the bunker I’m about to build.
GutsBerserk@lemmy.world 11 months ago
So, it starts…
Uranium3006@kbin.social 11 months ago
How about no
autotldr@lemmings.world [bot] 11 months ago
This is the best summary I could come up with:
The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.
Lethal autonomous weapons, that can select targets using AI, are being developed by countries including the US, China, and Israel.
The use of the so-called “killer robots” would mark a disturbing development, say critics, handing life and death battlefield decisions to machines with no human input.
“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, told The Times.
Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.
The New Scientist reported in October that AI-controlled drones have already been deployed on the battlefield by Ukraine in its fight against the Russian invasion, though it’s unclear if any have taken action resulting in human casualties.
The original article contains 376 words, the summary contains 158 words. Saved 58%. I’m a bot and I’m open source!
phoneymouse@lemmy.world 11 months ago
Can’t figure out how to feed and house everyone, but we have almost perfected killer robots.
sneezycat@sopuli.xyz 11 months ago
Oh no, we figured it out, but killer robots are profitable while happiness is not.
o2inhaler@lemmy.ca 11 months ago
I would argue happiness is profitable, but it would have to be shared amongst the people. Killer robots are profitable for a concentrated group of people.
MartinXYZ@sh.itjust.works 11 months ago
Oh no, we figured it out, but killer robots are profitable while ~~happiness~~ survival is not.
onlinepersona@programming.dev 11 months ago
What’s more important, a free workforce or an obedient one?
cosmicrookie@lemmy.world 11 months ago
Especially one that is made to kill everybody else except their own. Let it replace the police. I’m sure the quality control would be a tad stricter then.