Comment on The Pentagon’s Claude Use in Iran Is a Reminder that Anthropic Never Objected to Military Use
XLE@piefed.social 1 day ago
Serinus, did you see the part where Anthropic wants to develop them with the US military?
Iconoclast@feddit.uk 21 hours ago
Said safeguards being that their technology isn’t being used for mass surveillance or the development of autonomous drones. It’s explicitly mentioned in their statement - the one you’re desperately trying to massage and misquote to make it seem like they’re saying something they’re not - yet anyone can just go and read it themselves.
XLE@piefed.social 21 hours ago
Iconoclast, I see you edited your post after I replied. You did not answer whether you accept the fact that Anthropic explicitly wanted to develop fully autonomous AI alongside the Trump Department of “War.”
Either you’re lying, or you’re the one desperately trying to reshape the truth.
XLE@piefed.social 21 hours ago
Iconoclast, you have moved beyond accidental deception into intentional lies.
Anthropic offered to work directly with the Department of “War” on R&D to improve the reliability of autonomous bombing systems.
That’s what your link says. Do you deny this explicit fact?
Iconoclast@feddit.uk 21 hours ago
That’s your interpretation - not a direct quote.
XLE@piefed.social 21 hours ago
Iconoclast, don’t be disingenuous.
The direct quote is “We have offered to work directly with the Department of War on R&D to improve the reliability of these systems”. “We” meaning Anthropic. “These systems” meaning fully autonomous weapons.
Do you acknowledge they did this? Try not to weasel out of answering with more pedantry. It’s almost as disturbing as your apparent defense of that Silicon Valley AI cult.