How to create a successful GPU company in 2025:
- Step 1: build a time machine and go back 30 years
Submitted 1 day ago by Pro@programming.dev to technology@lemmy.world
https://www.marketplace.org/story/2025/05/28/why-doesnt-nvidia-have-more-competition
It’s funny how the article asks the question, but completely fails to answer it.
Nvidia discovered there was demand for compute in datacenters that could be met with powerful GPUs. They were quick to respond to it, and thanks to their huge success and high profitability in the GPU market, they had the resources to focus on it heavily.
AMD also saw the market and wanted to pursue it, but just over a decade ago, when its high profit potential started to become clear, AMD was near bankruptcy and very hard pressed to finance development of GPUs and datacenter compute. AMD tried the best it could and was moderately successful from a technology perspective, but Nvidia already had a head start, and its proprietary CUDA platform was already an established standard that was very hard to break into.
Intel simply fumbled the ball from start to finish. They spent a decade trying to knock ARM off the mobile crown, investing billions, roughly the equivalent of ARM’s total revenue, and never managed to catch up even though they had the better production process at the time. That was Intel’s main focus, and Intel believed GPUs would never be more than a niche product. So when Intel tried to compete on datacenter compute, they tried to do it with x86 chips; one of their boldest efforts was a monstrosity of a cluster of Celeron chips, which of course performed laughably badly compared to Nvidia! Because as it turns out, the way forward, at least for now, is indeed the massively parallel compute capability of a GPU, which Nvidia has refined for decades, with only (inferior) competition from AMD.
But despite the lack of competition, Nvidia did not slow down; in fact, with increased profits they only grew bolder in their efforts, making it even harder to catch up.
Now AMD has had more money to compete with for a while, and they do have some decent compute units, but Nvidia remains ahead and the CUDA problem is still there. For AMD to really compete with Nvidia, they have to be better just to attract customers, and that’s a very tall order against a company that simply never seems to stop progressing. The only other option for AMD is to sell a bit cheaper, which I suppose they have to.
AMD and Intel were the obvious competitors; everybody else is coming from even further behind. But if I had to make a bet, it would be on Huawei. Huawei has some crazy good developers, and Trump is basically forcing them to figure it out themselves, because he is blocking Huawei from using both AMD and Nvidia AI chips. The chips will probably be made by China’s SMIC, because Huawei is also barred from advanced production elsewhere, most notably TSMC. China will prevail, because it has become a national project and they have a massive pool of talent, so nothing can stop it now.
IMO the USA would clearly have been better off allowing China to use American chips. Now China will soon compete directly on that front too.
At first I was going to say there is ATI. Then I realized I hadn’t heard about ATI in a while and looked up what happened to it. Then I realized… I’m old.
Corporate consolidation and monopoly/oligopoly is usually why.
Because why would AMD compete with them in any meaningful way?
It’s the same as having 2 gas stations directly across the street from each other.
Neither one is there to compete with the other. They’re both there to collectively rip off everyone else.
Because its competitors care about Not Invented Here instead of building common industry standards.
Well, Intel tried with oneAPI. As for AMD, they barely go five minutes between shooting themselves in the foot. It’s unbelievable to watch.
Intel tried with oneAPI because ROCm was not invented here.
People like youz starts snoopin’ around askin’ questions tends to fall outta windows, y’know what I’m sayin’?
iopq@lemmy.world 1 day ago
What a shit article. Doesn’t explain the software situation. While CUDA is the most popular, a lot of frameworks do support AMD chips.
Alphane_Moon@lemmy.world 1 day ago
A comically bad “article”.
Glitchvid@lemmy.world 1 day ago
To expand on that: Nvidia has very deeply ingrained itself in educational and research institutions. People learning GPU compute are taught CUDA on Nvidia hardware, and researchers have access to farms of Nvidia chips.
AMD has basically taken the “build it and they will come” approach, and has the results to match.
iopq@lemmy.world 1 day ago
It’s literally the most surface-level take. It doesn’t even mention what CUDA is, or AMD’s efforts to run it.
www.xda-developers.com/nvidia-cuda-amd-zluda/
But ZLUDA is no longer funded by AMD or Intel.
AMD GPUs are still supported by frameworks like PyTorch.
rocm.docs.amd.com/…/pytorch-install.html
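For what it’s worth, the ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda API that the CUDA builds use, so most existing code runs unchanged. A minimal sketch, assuming a ROCm build of PyTorch is installed on a supported AMD GPU:

```python
import torch

# On a ROCm build, AMD GPUs are presented through the regular torch.cuda API,
# so code written against Nvidia hardware mostly runs as-is.
print(torch.version.hip)           # HIP/ROCm version string (None on CUDA/CPU builds)
print(torch.cuda.is_available())   # True if a supported AMD GPU is detected

if torch.cuda.is_available():
    device = torch.device("cuda")  # maps to the AMD GPU under ROCm
    x = torch.randn(1024, 1024, device=device)
    y = x @ x                      # matrix multiply executes on the GPU
    print(y.device, y.shape)
```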
While Nvidia might be the fastest, they are not always the cheapest option, especially if you rent in the cloud. When I last checked, it was cheaper to rent AMD GPUs.
dumbpotato@lemmy.cafe 1 day ago
Do gamers even care about CUDA?
iopq@lemmy.world 1 day ago
No, this is about AI
Case@lemmynsfw.com 1 day ago
I’m a gamer, and I do…
Then again, I’m mostly excited about using CUDA cores for cracking hashes and the like, lol.