Comment on President Donald Trump bans Anthropic from use in government systems
Hazzard@lemmy.zip 3 hours ago
I mean… this is losing them a 200 million dollar contract. And losing it to one of their competitors, who will gladly acquiesce, so it’s hard to argue that this benefits them.
Good marketing to a bunch of left-wing people who hate AI, I guess, but that feels like Elon joining the Trump administration in hopes of selling Teslas to rednecks. It might work on a few, but I just can’t imagine this is 4D chess that will make them a fortune when they’re walking away from that much money immediately.
Their statement also came after the DOW threatened to put them on the list of companies that are totally banned from doing any business in America, the one usually reserved for Chinese companies deemed a national security threat. Being listed would make it illegal for any company doing business in America to do any business with them (since they’d be added to the same list), which would have essentially killed Anthropic as a business entirely.
You don’t have to love Anthropic just because they did a good thing here; they were fine with anything short of automated killing and mass surveillance, after all. But I don’t think it’s correct to call this sneaky or spineless somehow.
brucethemoose@lemmy.world 3 hours ago
Eh, the context I was thinking of is that they’re constantly playing “safety theatre” where it absolutely doesn’t matter. They’ve tried to kill open models and basically capture regulators by misleading or outright lying, all for their own benefit.
In other words, this is a case of “a broken clock is right sometimes,” and I think they knew Trump would back down.
Hazzard@lemmy.zip 2 hours ago
Fair, I definitely haven’t simped for them in the past just because they post some good articles on AI safety.
Although… I’ll say this for them: they seem more like what OpenAI should be, actually trying to implement AI responsibly and freely sharing that information. It’s good research, even if marketing is the motivation. Meanwhile OpenAI, the “charity” that’s supposed to guide us to a responsible AI future, kept their most addictive and mentally dangerous model on the highest paid tier instead of actually killing it until very recently.
Although at the end of the day, Anthropic is a for-profit company. In a better world they wouldn’t have released models publicly before this research was actually done and pressing dangers like AI psychosis were safeguarded against. Better late than never, sure, but the whole industry has already done a lot of damage, and the work of resolving those issues still isn’t close to done.