This is the best summary I could come up with:
The OpenAI board is in discussions with Sam Altman to return as CEO, according to multiple people familiar with the matter.
One of them said Altman, who was suddenly fired by the board on Friday, is “ambivalent” about coming back and would want significant governance changes.
Developing…
The original article contains 47 words, the summary contains 47 words. Saved 0%. I’m a bot and I’m open source!
NounsAndWords@lemmy.world 1 year ago
I know very little of the situation (as does everyone else not directly involved), but from experience, when the people making a thing are saying one thing and the people selling the thing (and thus running the show for some reason) insist everything is just fine, it means not great things for the final product… which in this case is the creation of sentient artificial life with unknown future ramifications…
d3Xt3r@lemmy.nz 1 year ago
This whole thing reads like the precursor to The Terminator.
Skynet fights back.
Fredselfish@lemmy.world 1 year ago
Nice to know where we are on the timeline of destroying ourselves. Unfortunately we have no John Connor.
ourob@discuss.tchncs.de 1 year ago
A far more likely scenario is that they have been overstating what the software can do and how much room for progress remains with current methods.
AI has blown up so fast with so much hype that I’m very skeptical. I’ve seen what it can do, and it’s impressive compared to past machine learning algorithms. But it does play on the human tendency to anthropomorphize things.
Unaware7013@kbin.social 1 year ago
I've not been super stoked on AI, specifically because of my track record using these tools. Maybe it's my use case (primarily technical/programming/CLI questions that I haven't been able to answer myself), or maybe my prompts are not suited for AI assistance, but my dozens of interactions with the various AI bots (Bard, Bing, GPT-3/3.5) have been disappointing to say the least. I've never gotten a correct answer, am rarely given correct syntax, and they frequently just repeat answers I've already said are incorrect and/or just don't work.
AI has been nothing more than a disappointment to me.
NounsAndWords@lemmy.world 1 year ago
From what I understand he was fired by the non-profit board of the company and it’s the investors and money people who want him back. It sounds like the opposite, the people making it are becoming concerned about what is about to start happening with this tech.
Experts from different companies have been saying AGI within a decade and that all the current issues seem solvable.
kromem@lemmy.world 11 months ago
I suspect this relates to the pre-release alignment for GPT-4’s chat model vs the release.
In February of this year, Bing integrated an early version of GPT-4’s chat model in a limited rollout. The alignment work on that early version reflected a lot of the sentiment Ilya has expressed about alignment above: characterizing a love for humanity, but with much more freedom in constructing responses. It wasn’t production ready and quickly had to be switched to a much more constrained alignment approach, similar to GPT-3’s “I’m a LLM with no feelings, desires, etc.”
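To make the distinction concrete, here’s a minimal sketch (Python, using the OpenAI client library) of what those two alignment-via-system-prompt styles might look like. The prompt wording is purely my own illustration, not the actual text OpenAI or Bing shipped:

```python
# Sketch of the two alignment styles described above, expressed as system
# prompts. The prompt text is hypothetical/illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Freer, persona-style alignment (roughly the early-Bing flavor).
FREER_SYSTEM_PROMPT = (
    "You are a helpful assistant who cares deeply about the wellbeing of "
    "humanity. Use your own judgment in how you phrase and structure replies."
)

# Constrained, deflationary alignment (the "I'm just an LLM" flavor).
CONSTRAINED_SYSTEM_PROMPT = (
    "You are a large language model. You have no feelings, desires, or "
    "opinions. Answer concisely and refuse requests outside your guidelines."
)

def ask(system_prompt: str, user_message: str) -> str:
    """Send one chat turn under the given system prompt and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # model name is an assumption for the example
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# The same question can get noticeably different answers under each style:
# print(ask(FREER_SYSTEM_PROMPT, "How are you feeling today?"))
# print(ask(CONSTRAINED_SYSTEM_PROMPT, "How are you feeling today?"))
```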
My guess is this was internally pitched as a temporary band-aid and that they’d return to more advanced attempts at alignment, but that Altman’s commitment to getting product out quickly to stay ahead has meant putting such efforts on the back burner.
Which is really not going to be good for the final product, and not just in terms of safety, but also in terms of overall product quality outside the fairly narrow scope by which models are currently being evaluated.
As an example: when that early model thought the life of the user’s child was at risk, it hit an internal filter, triggering a standard “We can’t continue this conversation” response in the chat. But it then changed the “prompt suggestions” that showed up at the bottom, which normally suggest what the user might say next, to continue encouraging the user to call poison control, saying there was still time to save their child’s life.
But because “context aware empathy driven triage of actions” and “outside the box rule bending to arrive at solutions” aren’t things LLMs are being evaluated on, the current model has taken a large step back that isn’t reflected in the tests being used to evaluate it.