Comment on AI safety leader says 'world is in peril' and quits to study poetry
XLE@piefed.social 5 hours ago
The regulations this PAC promotes are almost laughable. Do they mention CSAM generation? Deepfakes? Pollution? Water table destruction? Suicide encouragement? Nope.
Those harms are apparently acceptable.
Instead, they say we should focus on “the nearest-term high risks: AI-enabled biological weapons and cyberattacks.” Science fiction.
Hackworth@piefed.ca 5 hours ago
They’re advocating for transparency and for states to be able to have their own AI laws. I see that as positive. And as part of that transparency, Anthropic publishes its system prompts, which are sent along with every message. They devote a significant portion to mental health, suicide prevention, not enabling mania, etc. So I wouldn’t say they see those harms as “acceptable.”
XLE@piefed.social 4 hours ago
If Anthropic actually wants to prevent self-harm and CSAM through regulation, why didn’t they recommend regulating those things?
Anthropic executive Jason Clinton harassed LGBT Discord users, so forgive me if I don’t take their PR at face value. No AI corpo is your friend, which is a lesson I thought we had learned from Sam Altman and Elon Musk already.
Hackworth@piefed.ca 4 hours ago
So what I meant by “doubt they’ll be able to play the good guy for long” is exactly that no corpo is your friend. But I also believe perfect is the enemy of good, or at least better. I want to encourage companies to be better, knowing full well that they will not be perfect. Since Anthropic doesn’t make image/video/audio generators, they may just not see CSAM as a directly related concern for the company. A PAC doesn’t have to address every harm to be a source of good.
As for self-harm, that’s an alignment concern, the main thing they do research on. And based on what they’ve published, they know that perfect alignment is not in our foreseeable future. They’ve made a lot of recent improvements that make it demonstrably harder to push a bot toward dark traits. But they know damn well they can’t prevent it without some structural breakthroughs. And who knows if those will ever come?
I read that 404 Media piece when it got posted here, and that’s also probably that guy’s fault. And frankly, Dario’s energy creeps me out. I’m not putting Anthropic on a pedestal here, they’re just… the least bad… for now?
XLE@piefed.social 4 hours ago
The outlandish claim that AI will create a bioweapon is also an “alignment concern”… But Anthropic calls that one out explicitly, while ignoring real-world, present-day harms.
That’s why the “AI safety” lobby is a joke. They only address fictional concerns, because those concerns assume that their product is powerful and potentially profitable. Addressing real-world harms would force them to admit that maybe their product isn’t all that great.