Anthropic is now playing Good Cop in a charade. They don’t care about ethics.
Anthropic’s CEO admits compromising with authoritarian regimes to secure AI funding
Anthropic CEO Dario Amodei backs President Trump on AI policy, pushes back on criticism
Submitted 3 weeks ago by Beep@lemmus.org to technology@lemmy.world
https://www.anthropic.com/news/statement-department-of-war
Those two safeguards they refuse to remove must be quite the thing.
Or they are just doing this for optics, with an understanding that the feds will end up forcing their hand in the future.
I was listening to NPR yesterday and heard the two are apparently mass surveillance of Americans and autonomous weapons systems with no human interaction…
It’s probably more they don’t wanna get blamed if AI launches missiles because the idiots in charge pressed shift+tab and yolo’d.
Claude: “You’re right. I completely committed a war crime. I’m so very sorry! How would you like to proceed?”
Why not both? I’m pretty sure Trump wanted to hold them legally responsible for whatever their system did too
Can’t say I know what or why, but I was having issues this week with their desktop client. When I was viewing their status page, I saw that they have a new service for gov use that went online about 10 days ago.
Amodei “we cannot in good conscience allow this”.
Hegseth looks confused, turns towards his team and mouths “…in good what?”
“Anthropic publicly praised President Trump’s AI Action Plan,” said CEO Dario Amodei.
“We have been supportive of the President’s efforts to expand energy provision in the US in order to win the AI race,” he continued, apparently talking about Trump’s new anti-green-energy, pro-fossil-fuel program.
yes… mine was just a play on the title of this post.
Look, I’m not saying that Amodei is a saint, and I do find him as full of shit as Altman with their AGI promises, but would you expect Anthropic to take a stand against increasing AI investment just because it’s coming from Trump? And I don’t like that he went looking for funding in the Middle East either.
I just think there is an ethical line between “I do business with people who do bad things” and “I’m actively helping people who do bad things to do them in a more efficient way”. It might be a fine line and it might also be that they are just posturing, but it’s still more than other companies did (companies that are a lot richer than Anthropic and that don’t need to find a lot of funding just to stay afloat).
My conscience is clean. It’s never been used!
So the government wants “full self-driving” attack drones. You know, just in case the military actually disobeys a direct order?
How many pieces of science fiction do we have where the “bad guys” are literally just killer robots we created and then realized we didn’t have control over? Why would we decide it is a good idea to literally build terminators? I’m convinced the government will actually build the “orphan crushing machine” next…
Because we literally are allowing the pedophile parasite class to rule over us
Did we read the same thing?
We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.
So they accept surveillance in other countries? What about other countries’ democratic values?
Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
So you don’t because it still sucks? But if it didn’t, you would?
And what about legal?
I’ve really lost my faith in the US. They think they hold the power, but they’re missing the point: real power is built on trust, and we’re losing more of it every day.
I’ve really lost my faith in the US.
What little I had left was destroyed in November of 2024.
I was hoping they had learned from their previous mistake, but instead they doubled down.
It’s been an American leadership view for as long as I’ve been alive that American lives are worth at least a hundred times more than other lives.
What about other countries’ democratic values? So, gentlemen should not read other gentlemen’s mail?
cannot in good conscience
🤣
I read somewhere that Anthropic has $18,000,000,000 in commitments from last year alone, so conceivably, they can stand to lose a mere $200,000,000 and it won’t create a huge issue for them in the short term.
I hope that’s how they’re looking at it.
How does one count that amount of anything, let alone money?
Start at 1 and work your way up in increments of 1.
See you in about 100 years, give or take a few decades.
“… Without a subscription. For the full, unlocked dictatorship, the low, low price of a bajillion dollars a month will give you the power you need to defeat your enemies.”
Are those the same AI systems that recommended nuclear escalation in 90% of simulations?
How about a nice game of chess?
So now you get it
How is a private company the voice of reason in this?
Anthropic was founded by former OpenAI employees who left largely due to ethical and safety concerns about how OpenAI was being run. This is just them sticking to their principles.
This is just them sticking to their principles.
Can’t say the evidence really backs you up on that one.
Anthropic’s “ethical” concerns were performative. They only fearmonger about fictional things that will make their product sound powerful (read: worth throwing money into).
They try to scare people with fictional stories of AGI, a thing that isn’t happening, while ignoring widespread CSAM and sexual harassment generation, a thing that is happening.
What a company says and what a company actually does are not the same thing.
They’re not. Conscience has nothing to do with this.
They just don’t think the PR hit is worth it.
Whenever companies choose to act in a way we perceive as good, it’s because we were the voice of reason, not them.
While I’m glad they’re drawing a line, they’re only splitting hairs. Anthropic is already deeply working with the US gov.
…because every now and again, for the briefest of moments, one of them shows themselves not to be run by entirely evil, lecherous humps?
Blink and you (or the shareholders) might miss it.
Don’t buy the hype. They’re not acting in good conscience, they’ve just weighed the pros and cons and decided that the PR hit isn’t worth it.
Because America elected unreasonable leaders.
Department of ~~War~~ Defense
Fascinating to suggest that it is bold or defiant to affirm that the most destructive, imperialist war machine on the planet is in fact for “defence.” “Department of War” is much more honest, and I’m not a fan of how criticisms like this are oriented toward maintaining the purported morality of what is fundamentally a genocidal, globally oppressive institution.
Only Congress can create, rename, or eliminate departments. No matter what big baby says.
Be that as it may, its name is the Department of Defense, and Trump does not have the legal authority to change that name. Calling it the Department of War, like calling the Gulf of Mexico the Gulf of America, is a form of giving in to the administration. That is what I am objecting to.
Department of War Crimes
Wtf, I never would have expected this level of resistance. What’s the catch, fear of intentional reprisals?
I imagine partly liability concerns, partly protecting their reputation.
Basically they don’t want their technology being used for something it’s not ready for, something going badly wrong, and them getting the blame.
They’re getting far more press than they ever would have had they capitulated like the other companies. Now they get to frame themselves as the “good guys”. They’ll end up doing (more of) the evil shit soon enough, but they just had a huge marketing coup. Smart play by one of the biggest grifters, bravo.
They pitch their product as ethical. Go ahead, ask it about Israel committing genocide in Gaza and see how much it’ll gaslight you.
Optics, that’s all they’re going for.
I can’t see the name “anthropic” without thinking about furries.
Anthro pic.
Now you can’t either. You’re welcome.
One is fun and happy the other is saddening and seemingly inescapable.
And I see the big baby in chief has answered in typical baby fashion.
They’ll cave. These companies always do
something something onion headline
HN thread broke all this down and pointed out the PR wiggle room.
In other words, they did the calculations and found that they don’t yet have the market share or the financial position that would enable them to sell out to the government. However, they’re planning to get there someday and hope the DoD is willing to work together in the future.
That’s not the vibe the company has been giving so far. Their staff is way more philosopher-heavy than MBA-heavy. I expect their morality to be flushed away if/when an IPO happens, because shareholder supremacy cancels out anything else. But so far, they’ve been an interesting case.
Anthropic has raised $30B in equity and has pledged $50B to construct data centers (all debt; they have only a $2.5B revolving credit facility).
There will be an IPO sooner rather than later.
The only question then is: will Anthropic be the first tech company ever to withstand the government? The answer is no. Everything you do with Anthropic’s services will become the government’s data trove someday, guaranteed.
khoai oni saan 😭
revolutionaryvole@lemmy.world 3 weeks ago
I guess it’s good that they draw the line somewhere, but it is absolutely horrifying to me as a non-American that the moral stance is limited to:
This is not Anthropic refusing to cooperate with the Trump administration as the title may suggest, they are in fact explicitly eager to serve the US Department of War. They are just vying for slightly better contract terms.
wizardbeard@lemmy.dbzer0.com 2 weeks ago
You’re spot-on. As some additional context, Anthropic is already working tightly with the US government. Until the recent announcement regarding Grok, Anthropic was the only approved AI for US government work, as it is/was the only one certified for safely working with classified data.
BanMe@lemmy.world 2 weeks ago
And now they’re the only one banned from it.
scarabic@lemmy.world 2 weeks ago
Do you mean that all this about principles is a smoke screen and Anthropic are just using it as a front to squeeze for more money?
revolutionaryvole@lemmy.world 2 weeks ago
No, if you want my opinion it seems too risky of a move to make all of this so public if all they want is more money. It’s possible, but I’d be surprised.
I believe them when they say that what they want is to have those two particular things, fully autonomous weapons and mass surveillance of US citizens, removed from the contract terms (for now). This could be out of genuine moral principles, or out of fear of bad PR when this would be found out. Most likely a combination of both.
My point was that from my perspective it is a very minor difference. The conclusion I came away with after reading this isn’t “good guy Anthropic bravely stands against pressure from Hegseth,” as some of the Hackernews comments try to paint it. It is “Anthropic mostly bends over backwards and grovels for Pentagon money, willing to massively spy on all foreign nationals and to work on creating autonomous weapons - other US AI companies are likely to be even worse”.
As I said, horrifying.