This is how I feel. I’m enjoying Claude and paying, but keeping a close eye. It’s nice because it really enables me to do things I am not capable of on my own. I teach as part of my job and used it to code an interactive learning module. Making the module hasn’t been quick, and I have to babysit the content a lot to get what I want. But I never could have done it on my own, I never could have paid someone to make it for me, and it’s helping educate people. I see this as the correct use of AI.
But if Anthropic goes pure evil, I’ll cancel my subscription.
XLE@piefed.social 2 weeks ago
The regulations this PAC promotes are almost laughable. Do they mention CSAM generation? Deepfakes? Pollution? Water table destruction? Suicide encouragement? Nope.
Those harms are apparently acceptable.
Instead, they say we should focus on “the nearest-term high risks: AI-enabled biological weapons and cyberattacks.” Science fiction.
Hackworth@piefed.ca 2 weeks ago
They’re advocating for transparency and for states to be able to have their own AI laws. I see that as positive. And as part of that transparency, Anthropic publishes its system prompts, which are sent with every message. They devote a significant portion to mental health, suicide prevention, not enabling mania, etc. So I wouldn’t say they see those harms as “acceptable.”
XLE@piefed.social 2 weeks ago
If Anthropic actually wants to prevent self-harm and CSAM through regulation, why didn’t they recommend regulating those things?
Anthropic executive Jason Clinton harassed LGBT Discord users, so forgive me if I don’t take their PR at face value. No AI corpo is your friend, a lesson I thought we had already learned from Sam Altman and Elon Musk.
Hackworth@piefed.ca 2 weeks ago
So what I meant by “doubt they’ll be able to play the good guy for long” is exactly that no corpo is your friend. But I also believe perfect is the enemy of good, or at least better. I want to encourage companies to be better, knowing full well that they will not be perfect. Since Anthropic doesn’t make image/video/audio generators, they may just not see CSAM as a directly related concern for the company. A PAC doesn’t have to address every harm to be a source of good.
As for self-harm, that’s an alignment concern, the main thing they do research on. And based on what they’ve published, they know that perfect alignment is not in our foreseeable future. They’ve made a lot of recent improvements that make it demonstrably harder to push a bot toward dark traits. But they know damn well they can’t prevent it without some structural breakthroughs. And who knows if those will ever come?
I read that 404 Media piece when it got posted here, and this is probably also that guy’s fault. And frankly, Dario’s energy creeps me out. I’m not putting Anthropic on a pedestal here; they’re just… the least bad… for now?