Comment on Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks
revolutionaryvole@lemmy.world 3 weeks ago
I guess it’s good that they draw the line somewhere, but it is absolutely horrifying to me as a non-American that the moral stance is limited to: no mass surveillance of US citizens, and no fully autonomous weapons (for now).
This is not Anthropic refusing to cooperate with the Trump administration as the title may suggest, they are in fact explicitly eager to serve the US Department of War. They are just vying for slightly better contract terms.
vying for slightly better contract terms
Do you mean that all this about principles is a smoke screen and Anthropic are just using it as a front to squeeze for more money?
No, if you want my opinion it seems too risky of a move to make all of this so public if all they want is more money. It’s possible, but I’d be surprised.
I believe them when they say that what they want is to have those two particular things, fully autonomous weapons and mass surveillance of US citizens, removed from the contract terms (for now). This could be out of genuine moral principles, or out of fear of bad PR when this would be found out. Most likely a combination of both.
My point was that from my perspective it is a very minor difference. The conclusion I took away after reading this isn’t “good guy Anthropic bravely stands against pressure from Hegseth” as some of the Hackernews comments try to paint it. It is “Anthropic mostly bends over backwards and grovels for Pentagon money, willing to massively spy on all foreign nationals and to work on creating autonomous weapons - other US AI companies likely to be even worse”.
As I said, horrifying.
Crossing off mass surveillance and automated killing isn’t everything they could have taken a moral stand on. Personally I don’t think any list will be long enough for the Pentagon, and if it were, there wouldn’t be anything left that could be worked on.
But I keep hearing you say that no mass surveillance and no automated killings is so very little - almost nothing. That doesn’t seem right to me. I think those are both pretty big things. I’m not horrified that their moral stance would include only that.
That’s a fair stance to take and I definitely do not mean to try to have you change your opinion. I also do not know if you are an American, and I don’t want to assume either way.
But, to better explain my own position, I need to point out:
Anthropic is not saying “no mass surveillance”, they are saying “no mass surveillance of Americans”. If you judge this stance based on effect, it makes literally no difference at all if you are not a US citizen: you are targeted either way. If you judge it based on principles, it can be argued it is even less moral than accepting mass surveillance of everyone - not only are they claiming that billions of innocent people deserve to lose their right to privacy, but they are specifically carving out an exception for themselves based on nationality.
They are also not saying “no automated killings”, but “no automated killings at this time because we haven’t ironed out the kinks yet”. This can be framed as a moral stance relating to safety concerns, so I will assume in good faith that this is their reasoning rather than fear of bad publicity. However, I would argue that it is still an insignificant difference, as the threat posed to humanity by a powerful warmongering state commanding an army of fully autonomous killing machines is already too great. Making sure the technology is ready could mean working on avoiding a Terminator scenario, but without a doubt it will also mean ensuring that the murderbots WILL obey an order to bomb striking workers or displaced refugees so long as the right Executive Order was signed first, something that a human being in the loop might have prevented.
These two red lines seem to make a world of moral difference for someone who already takes it for granted that the USA and its military are overall institutions deserving of trust and support, perhaps with the small exception of the current Secretary of War who may have jumped the gun a bit during negotiations over a new technology. At the very least, that seems to be the position of the author of this letter. But no state should ever be given that amount of trust and support. And particularly given the USA’s belligerence over the years and its current slide towards outright fascism, I am horrified that the bar is this low.
wizardbeard@lemmy.dbzer0.com 3 weeks ago
You’re spot-on. As some additional context, Anthropic is already working closely with the US government. Until the recent announcement regarding Grok, Anthropic was the only approved AI for US government work, as it is/was the only one certified for safely working with classified data.
BanMe@lemmy.world 2 weeks ago
And now they’re the only one banned from it.