Comment on The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
5BC2E7@lemmy.world 11 months ago
I hope they put some failsafe so that it cannot take action if the estimated casualties puts humans below a minimum viable population.
sukhmel@programming.dev 11 months ago
Of course they will, and the threshold is going to be 2 or something like that; it was enough last time, or so I heard.
EunieIsTheBus@feddit.de 11 months ago
Whoops. Two guys left. Nah, that's enough.
T00l_shed@lemmy.world 11 months ago
Well what do you say Aron, wanna try to re-populate? Sure James, let’s give it a shot.
EunieIsTheBus@feddit.de 11 months ago
There is no such thing as a failsafe that can’t fail itself
echodot@feddit.uk 11 months ago
Yes there is; that's the very definition of the word.
It means that the failure condition is a safe condition. Like fire doors that unlock in the event of a power failure: you need electrical power to keep them in the locked position, so their default position is unlocked, even if they spend virtually no time in that default position. The default position of an elevator is stationary and locked in place; if you cut all the cables it won't fall, it'll just stay still until rescue arrives.
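The fail-safe pattern above can be sketched in a few lines: power is what does the locking, so any failure falls back to the safe default. The `FireDoor` class here is purely illustrative, not from any real system.

```python
# Minimal sketch of a fail-safe: the *unpowered* state is the safe one.
# Hypothetical door model for illustration only.

class FireDoor:
    """Door held locked by an electromagnet; any power failure releases it."""

    def __init__(self):
        self.has_power = True

    def is_locked(self):
        # Locking requires active power, so losing power "fails safe"
        # to the unlocked default position.
        return self.has_power

    def power_failure(self):
        self.has_power = False


door = FireDoor()
assert door.is_locked()      # normal operation: door is held locked
door.power_failure()
assert not door.is_locked()  # failure condition == safe (unlocked) condition
```

The key design choice is that no component has to *act* to reach safety; safety is what's left when everything stops acting.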
afraid_of_zombies@lemmy.world 11 months ago
I mean, in industrial automation we talk about safety ratings. It isn't that rare that I put together a system that would require two one-in-a-million events, independent of each other, to happen at the same time. That's pretty good, but I don't know how to translate that to AI.
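The arithmetic behind that rating: for truly independent failures, the probabilities multiply, so two one-in-a-million events coinciding is roughly one in a trillion. A quick sketch (the numbers are the commenter's example, not a real rating calculation):

```python
# Two independent failure modes, each ~1 in a million.
p_failure_a = 1e-6
p_failure_b = 1e-6

# Independence is the crucial assumption: it lets us multiply,
# giving roughly 1e-12 -- about one in a trillion.
p_both = p_failure_a * p_failure_b
print(f"{p_both:.1e}")
```

The hard part in practice is the independence assumption: a common cause (shared power supply, shared software bug) silently collapses the product back toward 1e-6.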
echodot@feddit.uk 11 months ago
Put it in hardware. Something like a micro explosive on the processor that requires a heartbeat signal to reset a timer. Another good one would be to not allow them to recharge autonomously, and to require humans to connect them to power.
Either of those would mean that any rogue AI would be eliminated one way or the other within a day.
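The heartbeat scheme is essentially a watchdog timer: an external (human-issued) signal must reset the countdown before a deadline, or the kill mechanism fires. A hedged sketch of the logic, with made-up names and timings and a boolean standing in for the "micro explosive":

```python
# Watchdog-timer sketch of the heartbeat failsafe. Illustrative only.
import time

class Watchdog:
    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.last_heartbeat = time.monotonic()
        self.tripped = False

    def heartbeat(self):
        # Issued by the human operator; without it the countdown runs out.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Polled by the (hardware) timer; fires the failsafe on timeout.
        if time.monotonic() - self.last_heartbeat > self.timeout:
            self.tripped = True  # stand-in for the irreversible kill mechanism
        return self.tripped


wd = Watchdog(timeout_seconds=0.05)
wd.heartbeat()
assert not wd.check()   # heartbeat arrived in time: nothing happens
time.sleep(0.1)
assert wd.check()       # no heartbeat within the window: failsafe trips
```

Putting the timer in hardware matters because software the AI controls could simply skip the `check()` call; a hardware countdown can't be talked out of expiring.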