Comment on Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon
anon_8675309@lemmy.world 1 day ago
Mr Whisky wants AI to kill people.
vacuumflower@lemmy.sdf.org 23 hours ago
Which will happen regardless.
Also, where AI safeguards exist, they are usually there because of chain of command and authorization, and those mattered so much because most likely Cold War applications of any AI had a very steep damage curve.
Small killbots don’t have such a damage curve. If they kill someone by mistake, the rest of the population just learns to be careful and not attract the attention of whoever operates them. The pressures that applied to nukes and radars, where you need chains of specific, clearly authorized people so someone can answer for why half the world melted, won’t force anyone to put such limits on killbots.