What a company says and what a company actually does are not the same thing.
Comment on Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks
Ilovethebomb@sh.itjust.works 22 hours ago
How is a private company the voice of reason in this?
Voroxpete@sh.itjust.works 18 hours ago
They’re not. Conscience has nothing to do with this.
They just don’t think the PR hit is worth it.
Whenever companies choose to act in a way that we perceive as good, it’s because we were the voice of reason, not them.
wizardbeard@lemmy.dbzer0.com 19 hours ago
While I’m glad they’re drawing a line, they’re only splitting hairs. Anthropic is already working closely with the US government.
SuspciousCarrot78@lemmy.world 21 hours ago
…because every now and again, for the briefest of moments, one of them shows themselves not to be run by entirely evil, lecherous humps?
Blink and you (or the shareholders) might miss it.
Voroxpete@sh.itjust.works 18 hours ago
Don’t buy the hype. They’re not acting in good conscience; they’ve just weighed the pros and cons and decided that the PR hit isn’t worth it.
SuspciousCarrot78@lemmy.world 18 hours ago
Having said that…let’s see how it shakes out. Sometimes, good things happen for good reasons.
XLE@piefed.social 17 hours ago
When a CEO tells you who he is, believe him the first time.
I thought we had all learned this lesson with Elon Musk, who also pretended to be the good guy. We’ve already got a ton of red flags about Dario Amodei.
minorkeys@lemmy.world 13 hours ago
Because America elected unreasonable leaders.
Iconoclast@feddit.uk 21 hours ago
Anthropic was founded by former OpenAI employees who left largely due to ethical and safety concerns about how OpenAI was being run. This is just them sticking to their principles.
Voroxpete@sh.itjust.works 18 hours ago
Can’t say the evidence really backs you up on that one.
cbc.ca/…/anthropic-ai-safety-committments-9.71073…
www.bbc.com/news/articles/c62dlvdq3e3o
Iconoclast@feddit.uk 18 hours ago
I still think they deserve some credit for at least trying to do the right thing. I don’t envy the position they’re in.
Everyone’s rushing toward AGI. Trying to do it safely is meaningless if your competition - the ones who don’t care about safety - gets there first. You can slow things down if you’re in the lead, but if you’re second best, it’s just posturing. There is no second place in this race.
purrtastic@lemmy.nz 11 hours ago
No AI bro company is on the path to AGI. Transformer technology will not lead to AGI.
XLE@piefed.social 18 hours ago
“Right thing": compromising with authoritarian regimes to secure AI funding
XLE@piefed.social 18 hours ago
Anthropic’s “ethical” concerns were performative. They only fearmonger about fictional things that will make their product sound powerful (read: worth throwing money into).
They try to scare people with fictional stories of AGI, a thing that isn’t happening, while ignoring widespread CSAM and sexual harassment generation, a thing that is happening.
Iconoclast@feddit.uk 18 hours ago
Are we not moving toward AGI? Because from where I stand, I only see three scenarios: either AI research is going backwards, no progress is being made whatsoever, or we’re continuing to improve our systems incrementally - inevitably moving toward AGI. Unless, of course, you think we’re never going to reach it, which I view as quite an insane claim in itself.
If we’re not moving toward it, then I’d love to hear your explanation for why we’re moving backwards or not making any progress at all.
Whether we’re 5 or 500 years away from AGI is completely irrelevant to the people who worry about it. It’s not the speed of the progress - it’s the trajectory of it.
XLE@piefed.social 17 hours ago
We are not “moving towards AGI” in any way with any modern technology, in the same way that we are not “moving towards FTL travel” because a car company added cylinders to an engine.
The real “AI” dangers are people like Eliezer Yudkowsky, a man who scares vulnerable people, sexually abuses them, and has spawned at least one murderous cult.