Comment on The Pentagon’s Claude Use in Iran Is a Reminder that Anthropic Never Objected to Military Use
threeganzi@sh.itjust.works 1 day ago
You have a point, but perhaps try a softer tone next time. I think that would help your argument.
It’s not a very solid point. They said they may become necessary at some point, but right now they’re irresponsible.
They’re not ruling it out in the future, but their focus is on today’s problem.
Serinus, did you see the part where Anthropic wants to develop them with the US military?
with our two requested safeguards in place.
Said safeguards being that their technology isn’t being used for mass surveillance or the development of autonomous drones. It’s explicitly mentioned in their statement - the one you’re desperately trying to massage and misquote to make it seem like they’re saying something they’re not - yet anyone can just go and read it themselves.
Iconoclast, I see you edited your post after I replied. You did not answer whether you accept the fact that Anthropic explicitly wanted to develop fully autonomous AI alongside the Trump Department of “War.”
Either you’re lying, or you’re the one desperately trying to reshape the truth.
Iconoclast, you have moved beyond accidental deception into intentional lies.
Anthropic offered to work directly with the Department of “War” on R&D to improve the reliability of autonomous bombing systems.
That’s what your link says. Do you deny this explicit fact?
0_o7@lemmy.dbzer0.com 1 day ago
They’re building tools to “cull people and children” from halfway across the world and you’re worried about the tone?