Repost from HN: news.ycombinator.com/item?id=37109394
There’s no way Microsoft is going to let it go bankrupt.
Submitted 1 year ago by balfrag@lemmy.world to technology@lemmy.world
If there’s no path to make it profitable, they will buy all the useful assets and let the rest go bankrupt.
Microsoft reported profitability in their AI products last quarter, with a substantial gain in revenue from them.
It won’t take long for them to recoup their investment in OpenAI.
If OpenAI had been more responsible in how they released ChatGPT, they wouldn’t be facing this problem. Just completely opening Pandora’s box because they were racing to beat everyone else out was extremely irresponsible, and if they go bankrupt because of it then whatever.
There’s plenty of money to be made in AI without everyone just fighting over how to do it in the most dangerous way possible.
I’m also not sure nVidia is making the right decision tying their company to AI hardware. Sure, they’re making mad money right now, but just like the crypto space that can dry up instantly.
Couldn’t they charge a subscription? Or sell credits?
Genuine question.
That would explain why ChatGPT started regurgitating cookie-cutter garbage responses more often than usual a few months after launch. It really started feeling more like a chatbot lately; it almost felt like talking to a human 6 months ago.
I don’t think it does. I doubt it is purely a cost issue. Microsoft is going to throw billions at OpenAI, no problem.
What has happened, based on the info we get from the company, is that they keep tweaking their algorithms in response to how people use them. ChatGPT was amazing at first. But it would also easily tell you how to murder someone and get away with it, create a plausible sounding weapon of mass destruction, coerce you into weird relationships, and basically anything else it wasn’t supposed to do.
I’ve noticed it has become worse at rubber ducking non-trivial coding prompts. I’ve noticed that my juniors have a hell of a time functioning without access to it, and they’d rather ask questions of seniors than try to find information or solutions themselves, essentially using Sr devs as replacement chatbots.
A good tool for getting people on-ramped if they’ve never coded before, and maybe for rubber ducking, in my experience. But far too volatile for consistent work, especially with a black box of a company constantly hampering its outputs.
As a Sr. Dev, I’m always floored by stories of people trying to integrate chatGPT into their development workflow.
It’s not a truth machine. It has no conception of correctness. It’s designed to make responses that look correct.
Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of having accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?
ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.
Copilot is pretty amazing for day to day coding, although I wonder if a junior dev might get led astray with some of its bad ideas, or too dependent on it in general.
But what did they expect would happen, that more people would subscribe to pro? In the beginning I thought they just wanted to farm usage to figure out what were the most popular use cases and then sell that information or repackage it as an individual service.
I am unsure about the free version, but I really am very surprised by how good the paid version with the code interpreter has gotten in the last 4-6 weeks. Feels like I have a C# syntax guru on 24/7 access. It used to make lots of mistakes a couple months ago, but rarely does now, and if it does it almost always fixes it in the next code edit. It has saved me untold hours.
Link?
I mean apart from the fact it’s not sourced or whatever, it’s standard practice for these tech companies to run a massive loss for years while basically giving their product away for free (which is why you can use openAI with minimal if any costs, even at scale).
Once everyone’s using your product over competitors who couldn’t afford to outlast your own venture capitalists, you can turn the price up and rake in cash since you’re the biggest player in the market.
It’s just Uber’s business model.
The difference is that the VC bubble has mostly ended. There isn’t “free money” to keep throwing at a problem post-pandemic. That’s why there’s an increased focus on Uber (and others) making a profit.
In this case, Microsoft owns 49% of OpenAI, so they’re the ones subsidizing it. They can also offer at-cost hosting and in-roads into enterprise sales. Probably a better deal at this point than VC cash.
This is what caused spez at Reddit and Musk at Twitter to go into desperation mode and start flipping tables over. Their investors are starting to want results now, not sometime in the distant future.
I don’t know anything about anything, but part of me suspects that lots of good funding is still out there, it’s just being used more quietly and more scrupulously, & not being thrown at the first microdosing tech wanker with a great elevator pitch on how they’re going to make “the Tesla of dental floss”.
Speaking of Uber, I believe it turned a profit for the first time this year. That is, it had never made any profit since its creation, whenever that was.
All it’s ever done is rob from its employees so it can give money to stockholders. Just like every corporation.
If AI was so great, it would find a solution to operate at a fraction of the cost it does now
Wait, has anybody bothered to ask AI how to fix itself? How much Avocado testing does it do? Can AI pull itself up by its own boot partition, or does it expect the administrator to just give it everything?
Really says something that none of your responses yet seem to have caught that this was a joke.
if we don’t know, it doesn’t know.
If we know, but there’s no public text about it, it doesn’t know either.
It is trained on stuff that has already been written, and trained to emulate the statistical properties of those words. It cannot and will not tell us anything new.
It doesn’t “know” anything. It can’t solve that problem. It’s trained on humans so it’s limited to what we have written down.
I love ChatGPT but if it’s creative it’s because you asked it the right questions and found an oblique answer yourself.
I asked Bard how ChatGPT can fix itself, here’s what it said:
ChatGPT can fix itself in a number of ways, including:
Retraining on a larger dataset. ChatGPT is trained on a massive dataset of text and code, but it can always be retrained on a larger dataset. This would help ChatGPT to learn new things and improve its performance.
Using a different algorithm. ChatGPT is currently trained using the GPT-3 algorithm, but there are other algorithms that could be used. For example, ChatGPT could be trained using the Megatron-Turing NLG algorithm, which is one of the most powerful language models in the world.
Using a combination of algorithms. ChatGPT could also be trained using a combination of algorithms. This would allow ChatGPT to benefit from the strengths of different algorithms.
Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.
Using human feedback. ChatGPT could also be used to generate text, and then this text could be reviewed by humans. The human feedback could then be used to improve ChatGPT’s performance.
It’s working on it. It’s just a pretty difficult task to be better than 50 years of optimization by human computer scientists: www.nature.com/articles/d41586-023-01883-4
Deepmind is actually working on an AI that improves the performance of low-level programs. It started with improving sorting algorithms.
It’s an RL algorithm.
Main issue is that everything takes time, and expectations on current AI are artificially inflated.
It will reach the point most are discussing now, it’ll simply take a bit longer than people expect
Clickbait crap.
It’s Firstpost, their Kremlin-bootlicking YouTube videos are even worse. Just below Forbes Breaking News trash.
Yup. Uber was burning 10x that.
$7 million a day!?
Glad I’m not the only one to think that
huh, so with the 10bn from Microsoft they should be good for… almost 40 years!
ChatGPT has the potential to make Bing relevant and unseat Google. No way Microsoft pulls funding. Sure, they might screw it up, but they’ll absolutely keep throwing cash at it.
Wow I am so much worried about a company that is funded by Microsoft going bankrupt!
They don’t “go bankrupt”. Even if it happens, it’s more being let go bankrupt than going bankrupt.
Company goes bankrupt, biggest investors take the assets and IP. Win.
This article has been flagged on HN for being clickbait garbage.
It is clearly nonsense. But it satisfies the irrational need of the masses to hate on AI.
Tbf I have no idea why. Why do people hate an extremely clever family of mathematical methods, one which highlights the brilliance of human minds? But here we are, casually shitting on one of the highest peaks humanity has ever reached.
It seems to be a common thing. I gave up on /r/futurology and /r/technology over on Reddit long ago because it was filled with an endless stream of links to cool new things with comment sections filled with nothing but negativity about those cool new things. Even /r/singularity is drifting that way. And so it is here on the Fediverse too, the various "technology" communities are attracting a similar userbase.
Sure, not everything pans out. But that's no excuse for making all of these communities into reflections of /r/nothingeverhappens. Technology does change, sometimes in revolutionary ways. It'd be nice if there was a community that was more upbeat about that.
Because it’s just the same as autocomplete on your phone lol so whatevs.
/s
I probably sound like I hate it, but I’m just giving my annual “this new tech isn’t the miracle it’s being sold as” warning, before I go back to charging folks good money to clean up the mess they made going “all in” on the last one.
People are scared because it will make consolidation of power much easier, and make many of the comfier jobs irrelevant. You can’t strike for better wages when your employer is already trying to get rid of you.
The idealist solution is UBI but that will never work in a country where corporations have a stranglehold on the means of production.
Hunger shouldn’t be a problem in a world where we produce more food with less labor than anytime in history, but it still is, because everything must have a monetary value, and not everyone can pay enough to be worth feeding.
Indian newspapers publish anything without any sort of verification, from reddit videos to whatsapp forwards. More than news, they are like an old game of Chinese whispers run infinitely. So take this with a huge grain of salt.
Pretty sure Microsoft will be happy to come save the day and just buy out the company.
It feels like that was the plan all along.
I don’t understand Lemmy’s hate boner over AI.
Yeah, it’s probably not going to take over like companies/investors want, but you’d think it’s absolutely useless based on the comments on any AI post.
Meanwhile, people are actively making use of ChatGPT and finding it to be a very useful tool. But because sometimes it gives an incorrect response that people screenshot and post to Twitter, it’s apparently absolute trash…
AI is literally one of the most incredible creations of humanity, and people shit on it as if they know better. It’s genuinely an astonishing historical and cultural achievement, a peak of human ingenuity.
No idea why.
It’s shit on because it is not actually AI as the general public tends to use the term. This isn’t Data from Star Trek, or anything even approaching Asimov’s three laws.
The immediate defense against this statement is people going into mental gymnastics and hand waving about “well we don’t have a formal definition for intelligence so you can’t say they aren’t” which is just… nonsense rhetorically because the inverse would be true as well. Can’t label something as intelligent if we have no formal definition either. Or they point at various arbitrary tests that ChatGPT has passed and claim that clearly something without intelligence could never have passed the bar exam, in complete and utter ignorance of how LLMs are suited to those types of problem domains.
Also, I find that anyone bringing up the limitations and dangers is immediately lumped into this “AI haters” group, like belief in AI is some sort of black and white religion or requires some sort of ideological purity. Like having honest conversations about these systems’ problems intrinsically means you want them to fail. That’s BS.
Machine Learning and Large Language Models are amazing, they’re game changing, but they aren’t magical panaceas and they aren’t even an approximation of intelligence despite appearances. LLMs are especially dangerous because of how intelligent they appear to a layperson, which is why we see everyone rushing to apply them to entirely non-fitting use cases as a race to be the first to make the appearance of success and suck down those juicy VC bux.
Anyone trying to say different isn’t familiar with the field or is trying to sell you something. It’s the classic case of the difference between tech developers/workers and tech news outlets/enthusiasts.
The frustrating part is that people caught up in the hype train of AI will say the same thing: “You just don’t understand!” But then they’ll start citing the unproven potential future that is being bandied around by people who want to keep you reading their publication or who want to sell you something, not any technical details of how these (amazing) tools function.
At least in my opinion that’s where the negativity comes from.
Ah, yes.
Remind me again how that “revolution of human mobility”, the Segway, is doing now…
Or how wonderful every single one of the announcements of breakthroughs in fusion generation has turned out to be…
Or how the safest Operating System ever, Windows 7, turned out in terms of security…
Or how Bitcoin has revolutionized how people pay each other for stuff…
Some of us have seen lots of hype trains go by over the years, always with the same format, and recognize the sales-speak from greedy fuckers designed to excite ignorant, naive fanboys of such bullshit choo-choo trains when they come to the station.
Looking at your choice of words in your post, you’re very invested in it, either emotionally (as a fanboy) or monetarily (a greedy fucker hoping to make money from the hype), since rational people who are not using sales-speak will not refer to anything brand new as “the most incredible creation of humanity” (it’s way too early to tell), much less deem any and all criticism of it as “shitting on it”.
What I don’t understand is why so many people conflate “hating the Disney CEO for misusing AI” with “hating AI”. Maybe if people understood the difference, they would “understand the hate”.
It's just projection of the hate for techbros (especially celebrities like Musk). Everything that techbros love (crypto, ai, space, etc) is hated automatically.
AI is not good. I want it to be good, but it's not.
I'll clarify, it's basically full of nonsense. Half of the shit it spits out is nonsense, and the rest is questionable. Even with that, it's already being used to put people out of their jobs.
Techbros think AI will run rampant and kill all humans, when they're the ones killing people by replacing them with shitty AI. And the worst part is that it isn't even good at the jobs it's being used for. It makes shit up, it plagiarizes, it spits out nonsense. And a disturbing amount of the internet is starting to become AI generated. Which is also a problem. See, AI is trained on the wider internet, and now AI is being trained on the shitty output of AI. Which will lead to fun problems and the collapse of the AI. Sadly, the jobs taken by AI will not come back.
Not everyone that dislikes a thing or the promoters of that thing “have no idea what it is”…but sure, go off I guess. 🤷
Lemmy, and Mastodon to a larger extent, hate anything owned by a corporation. That voice is getting louder by the day.
This article is dumb as shit
No sources and even given their numbers they could continue running chatgpt for another 30 years. I doubt they’re anywhere near a net profit but they’re far from bankruptcy.
The flow of the writing style felt kinda off, like someone was speaking really fast, spewing random trivia, and then leaving.
A couple of my coworkers will have to write their own code again and start reading documentation
75% of that code must not work lol
It works if you ask it for small specific components, the bigger the scope of the request, the less likely it will give you anything worthwhile.
So basically you still need to know what you’re doing and how to design a script/program anyway, and you’re just using chatgpt to figure out the syntax.
It’s a bit of time-saver at times but it’s not replacing anyone in the immediate future.
I’ve tried using it myself and the responses I get, no matter how I phrase them, are too vague in most places to be useful. I have yet to get anything better than I’ve found in documentation.
This is alarming…
One of the things companies have started doing lately is signaling “we could go bankrupt”, then jumping ahead a stage in enshittification
I don't think OpenAI needs any excuses to enshittify, they've been speedrunning ever since they decided they liked profit instead of nonprofit.
Microsoft gave them 10 billion moneys. They’ll be fine.
That’s not what I said.
Money doesn’t mean shit - enshittification is about potential cash flow
Does it feel like these “game changing” techs have lifespans that keep shrinking? The dot com bubble lasted a decade or so, the NFT craze a few years, and now AI hasn’t even lasted a year.
The Internet is concentrating and getting worse because of it, inundated with ads and bots and bots who make ads and ads for bots, and being existentially threatened by Google’s DRM scheme. NFTs have become a joke, and the vast majority of crypto is not far behind. How long can we play with this new toy? Its lead paint is already peeling.
This AI craze is actually a crypto craze in disguise https://apnews.com/article/worldcoin-cryptocurrency-sam-altman-data-privacy-9dc6a68590435b2f10fedaa0db58331b
As for the pace, I think the US financial services industry has been on a growth spree for decades and they’re desperate to find the new thing that will make them money. It’s like ed edd & eddy but with the PC, internet, dotcom, internet service, social media and now crypto
And don’t forget the metaverse!
I read an article about the bot collapse. Basically, companies use bots to buy ad space on websites. Google uses a bot to match ads to websites. Now we have a massive influx of AI-made pages, literally pages of BS just to make more ad space that a bot will sell to another bot. It is bots all the way down.
I wouldn’t put nfts in the same boat as the dotcom bust. The dotcom thing was way bigger. Most people didn’t do anything with nfts. Crypto seems in between. The AI thing seems similar though.
Of course it will, all these companies are funded by tech giants and venture capitalist firms.
Good riddance.
A company that just raised $10b from Microsoft is struggling with $260m a year? That’s almost 40 years of runway.
They are choosing to spend that much. That doesn’t suggest that they expect financial problems.
It’s fine, I got my own LLaMA at home, it does almost the same as GPT
good.
Well, I was happily paying them to lewd up the chatbots, but then they emailed me telling me to stop. I guess they don’t want my money.
That’s a lot of crypto coins to sell
If ChatGPT only costs $700k to run per day and they have a $10b war-chest, assuming there were no other overhead/development costs, OpenAI could run ChatGPT for 39 years. I’m not saying the premise of the article is flawed, but seeing as those are the only 2 relevant data points that they presented in this (honestly poorly written) article, I’m more than a little dubious.
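That back-of-envelope runway math can be sketched in a few lines. This uses only the two figures from the article (~$700k/day in running costs and the $10b war-chest) and, as the comment notes, ignores every other cost like salaries and training runs:

```python
# Runway estimate from the article's two numbers (all other costs ignored).
daily_cost = 700_000          # USD per day, the article's claimed running cost
war_chest = 10_000_000_000    # USD, the reported Microsoft investment

yearly_cost = daily_cost * 365
runway_years = war_chest / yearly_cost

print(f"Yearly burn: ${yearly_cost:,}")      # $255,500,000
print(f"Runway: {runway_years:.1f} years")   # 39.1 years
```

Which is where the "almost 40 years" figure in the thread comes from.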
But, as a thought experiment, let’s say there’s some truth to the claim that they’re burning through their stack of money in just one year. If things get too dire, Microsoft will just buy 51% or more of OpenAI (they’re going to be at 49% anyway after the $10b deal), take controlling interest, and figure out a way to make it profitable.
What’s most likely going to happen is OpenAI is going to continue finding ways to cut costs like caching common query responses for free users (and possibly even entire conversations, assuming they get some common follow-up responses). They’ll likely iterate on their infrastructure and cut costs for running new queries. Then they’ll charge enough for their APIs to start making a lot of money. Needless to say, I do not see OpenAI going bankrupt next year. I think they’re going to be profitable within 5-10 years. Microsoft is not dumb and they will not let OpenAI fail.
It’s definitely become a part of a lot of people’s workflows. I don’t think OpenAI can die. But the need of the hour is to find a way to improve efficiency multifold. This will make it cheaper, more powerful and more accessible
Good.
They're gonna be in even bigger trouble when it's determined that AI training, especially for content generation, is not fair use and they have to pay each and every person whose data they've used.
because i distrust this kind of technology in general and for sure it would add to the dystopian, anti-consumer, anti-workforce agenda big tech is currently enforcing. i work in desktop publishing and about 3/4 of jobs in that branche would be cancelled the moment ai could replace them for a fraction of the cost.
Good
hokage@lemmy.world 1 year ago
What a silly article. 700,000 per day is ~256 million a year. That’s peanuts compared to the 10 billion they got from MS. With no new funding they could run for about a decade, and this is one of the most promising new technologies in years. MS would never let the company fail due to lack of funding; it’s basically MS’s LLM play at this point.
p03locke@lemmy.dbzer0.com 1 year ago
When you get articles like this, the first thing you should ask is “Who the fuck is Firstpost?”
altima_neo@lemmy.zip 1 year ago
Yeah where the hell do these posters find these articles anyway? It’s always from blogs that repost stuff from somewhere else
Wats0ns@sh.itjust.works 1 year ago
OpenAI’s biggest spending is infrastructure, which is rented from… Microsoft. Even if the company folds, they will have given back to Microsoft most of the money invested.
fidodo@lemm.ee 1 year ago
MS is basically getting a ton of equity in exchange for cloud credits. That’s a ridiculously good deal for MS.
lemmyvore@feddit.nl 1 year ago
I mean, you’re correct in the sense Microsoft basically owns their ass at this point, and that Microsoft doesn’t care if they make a loss because it’s sitting on a mountain of cash. So one way or another Microsoft is getting something cool out of it. But at the same time it’s still true that OpenAI’s business plan was unsustainable hyped hogwash.
chiliedogg@lemmy.world 1 year ago
Their business plan got Microsoft to drop 10 billion dollars on them.
None of my shitty plans have pulled that off.
fidodo@lemm.ee 1 year ago
Also, their biggest expenses are cloud expenses, and they use the MS cloud, so that basically means that Microsoft is getting a ton of equity in a hot startup in exchange for cloud credits which is a ridiculously good deal for MS. Zero chance MS would let them fail.
c0mbatbag3l@lemmy.world 1 year ago
Almost every company uses either Google or Microsoft Office products, and we already know Microsoft is working on an AI offering/solution for O365 integration. They can see the writing on the wall here and are going to profit massively as they include it in their E5 license structure, or invent a new one that includes AI. Then they’ll recoup that investment in months.