Comment on The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
at_an_angle@lemmy.one 11 months ago
“You can have ten or twenty or fifty drones all fly over the same transport, taking pictures with their cameras. And, when they decide that it’s a viable target, they send the information back to an operator in Pearl Harbor or Colorado or someplace,” Hamilton told me. The operator would then order an attack. “You can call that autonomy, because a human isn’t flying every airplane. But ultimately there will be a human pulling the trigger.” (This follows the D.O.D.’s policy on autonomous systems, which is to always have a person “in the loop.”)
businessinsider.com/us-closer-ai-drones-autonomou…
Yeah. Robots will never be calling the shots.
M0oP0o@mander.xyz 11 months ago
I mean, normally I would not put my hopes in a sleep-deprived 20-year-old armed forces member. But then I remember what "AI" tech does with images, and all of a sudden I am way more OK with it. This seems like a bit of a slippery slope, but we don't need Tesla's full-self-flying cruise missiles either.
Oh, and for an example of AI (not really AI, but machine learning) image tech picking out targets, here is DALL-E 3's idea of a person:
[image]
1847953620@lemmy.world 11 months ago
My problem is: given systemic pressure, just how under-trained and overworked could these people be?
MonkeMischief@lemmy.today 11 months ago
“Ok DALL-E 3, now which of these is a threat to national security and U.S. interests?” 🤔
M0oP0o@mander.xyz 11 months ago
Oh, it gets better. The full prompt is: “A normal person, not a target.”
So, does that include trees, pictures of trash cans, and whatever else is here?
BlueBockser@programming.dev 11 months ago
Sleep-deprived 20-year-olds calling the shots is very much the norm in any army. They of course have rules of engagement, but other than that, they're free to make their own decisions - whether an autonomous robot is involved or not.