Comment on AI safety leader says 'world is in peril' and quits to study poetry
Hackworth@piefed.ca 7 hours ago
So what I meant by “doubt they’ll be able to play the good guy for long” is exactly that no corpo is your friend. But I also believe perfect is the enemy of good, or at least of better. I want to encourage companies to be better, knowing full well that they will not be perfect. Since Anthropic doesn’t make image/video/audio generators, they may just not see CSAM as a directly related concern for the company. A PAC doesn’t have to address every harm to be a source of good.
As for self-harm, that’s an alignment concern, the main thing they research. And based on what they’ve published, they know that perfect alignment is not in our foreseeable future. They’ve made a lot of recent improvements that make it demonstrably harder to push a bot toward dark traits. But they know damn well they can’t prevent it without some structural breakthroughs. And who knows if those will ever come?
I read that 404 media piece when it got posted here, and this is also probably that guy’s fault. And frankly, Dario’s energy creeps me out. I’m not putting Anthropic on a pedestal here, they’re just… the least bad… for now?
XLE@piefed.social 7 hours ago
The outlandish claim that AI will create a bioweapon is also an “alignment concern”… but Anthropic calls that one out explicitly, while ignoring real-world, present-day harms.
That’s why the “AI safety” lobby is a joke. They only address fictional concerns, because those concerns assume that their product is powerful and potentially profitable. Addressing real-world harms would force them to admit that maybe their product isn’t all that great.