Comment on OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare”
Spedwell@lemmy.world 9 months ago
I mean, their press release said “not consistently candid”, which is about as close to calling someone a liar as corporate speak will get. Altman ended up back in the captain’s chair, and we haven’t heard anything further.
If the original reason for the firing made Altman look bad, we would expect this silence.
If the original reason was a homophobic response from the board, we might expect OpenAI to come out and spin a vague statement on how the former board had a personal gripe with Altman unrelated to his performance as CEO, and that after replacing the board everything is back to the business of delivering value etc. etc.
I’m not saying it isn’t possible, but given all we know, I don’t think that one now-widely-known attribute of Altman is the sole reason he was ousted. Especially if you follow journalism about TESCREAL/Silicon Valley philosophies, it is clear to see: this was the board trying to preserve the original altruistic mission of OpenAI, and the commercial branch finally shedding the dead weight.
afraid_of_zombies@lemmy.world 9 months ago
My experience has been that all firings are either for clear reasons or for vague corporate ones. The vague corporate ones are personal. He announces his gay wedding, and suddenly the board decides that a vague reason means he can’t work there anymore. Why be vague? Just be direct if you have nothing to hide.
They fired him because he is gay and got gay married. Until I see positive evidence against that, like a transcript of the decision signed by eyewitnesses, that will be my working model.
Spedwell@lemmy.world 9 months ago
Fair enough. I disagree, but we’re both in the dark here so not much to do about it until more comes to light.
afraid_of_zombies@lemmy.world 9 months ago
On an unrelated matter: do you think the first Black woman president of Harvard lost her position 100% because of plagiarism, or were other issues involved?
Spedwell@lemmy.world 9 months ago
Sorry for the long reply, I got carried away. See the section below for my good-faith reply, and the bottom section for my “what are you implying by asking me this?” response.
From the case studies in my scientific ethics course, I think she probably would have lost her job regardless, or at least been “asked to resign”.
The fact it was in national news, and circulated for as long as it did, certainly had to do with her identity. I was visiting my family when the story was big, and the (old, conservative, racist) members of the family definitely formed the opinion that she was a ‘token hire’ and that her race helped her con her way to the top despite a lack of merit.
So there is definitely a race-related effect to the story (and probably some of the “anti-liberal university” mentality). I don’t know enough about how the decision was made to say whether she would have been fired had those effects not been present.
Just some meta discussion: I’m 100% reading into your line of questioning, for better or worse. But it seems you have pinned me as the particular type of bigot who likes to deny that systemic biases exist. I want to head that off at the pass and say I don’t deny your explanation is plausible, but that given a deeper view of the cultural ecosystem of OpenAI, it ceases to be likely.
I don’t know your background on the topic, but I enjoy following voices critical of effective altruism, longtermism, and effective accelerationism. A good gateway into this circle of critics is the podcast Tech Won’t Save Us (the 23/11/23 episode actually discusses the OpenAI incident). With that background, it is easy to paint some fairly convincing pictures of what went on at OpenAI before Altman’s sexuality even enters the equation.