It’s literally the most surface-level take. It doesn’t even mention what CUDA is, or AMD’s efforts to run it:
www.xda-developers.com/nvidia-cuda-amd-zluda/
But that project (ZLUDA) is no longer funded by AMD or Intel
AMD GPUs are still supported by frameworks like PyTorch
rocm.docs.amd.com/…/pytorch-install.html
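That support is easy to sanity-check. Here’s a minimal sketch (assuming a ROCm build of PyTorch is installed) that probes whether the install is a HIP/ROCm build and whether a GPU is visible; on ROCm builds, PyTorch deliberately reuses the `torch.cuda` API, so the same calls work on AMD hardware.

```python
# Minimal sketch: check whether the installed PyTorch is a ROCm (HIP)
# build and whether it can actually see a GPU. Guarded so it degrades
# gracefully if PyTorch is not installed at all.
try:
    import torch
    has_torch = True
except ImportError:
    has_torch = False  # no PyTorch installed

if has_torch:
    # torch.version.hip is a version string on ROCm builds, None on CUDA builds
    is_rocm_build = torch.version.hip is not None
    # On ROCm, torch.cuda.is_available() reports AMD GPU visibility
    gpu_visible = torch.cuda.is_available()
    print(f"ROCm build: {is_rocm_build}, GPU visible: {gpu_visible}")
else:
    print("PyTorch is not installed")
```

If it prints `ROCm build: False`, you likely installed the default CUDA wheel instead of one from the ROCm index; the rocm.docs.amd.com page linked above covers the right install command for your ROCm version.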
While Nvidia might be the fastest, it’s not always the cheapest option, especially if you rent in the cloud. When I last checked, AMD GPUs were cheaper to rent
Glitchvid@lemmy.world 1 day ago
To expand on that: Nvidia has very deeply ingrained itself in educational and research institutions. People learning GPU compute are taught CUDA on Nvidia hardware. Researchers have access to farms of Nvidia chips.
AMD has basically taken a “build it and they will come” attitude, and has the results to match.
brucethemoose@lemmy.world 1 day ago
Except they didn’t.
They repeatedly fumble the software with little mistakes (looking at you, Flash Attention). They price the MI300X and any high-VRAM GPU through the roof, when they have every reason to be more competitive and undercut Nvidia. They have sad, incomplete software efforts divorced from what devs are actually doing, like their quantization framework, or some inexplicably bad LLMs they trained themselves.
They give no one any reason to give them a chance, and then wonder why no one comes. Lisa Su could fix this with literally like two phone calls (remove VRAM restrictions on their OEMs, and fix the stupid small bugs in ROCm), but they don’t.
Glitchvid@lemmy.world 1 day ago
That’s basically what I said, in so many words. AMD is doing its own thing; if you want what Nvidia offers, you’re gonna have to build it yourself. WRT pricing, I’m pretty sure AMD is typically a fraction of the price of Nvidia hardware on the enterprise side, from what I’ve read.
The biggest culprit, from what I can gather, is that AMD’s GPU side is basically still ATI, holed up in Markham, divorced from the rest of the company in Austin that’s doing great work on the CPU side.
brucethemoose@lemmy.world 1 day ago
I’m not as sure about this, but it seems like AMD is taking a fat margin on the MI300X (and its successor?) while kinda ignoring the performance penalty. It’s easy to say “build it yourself!”, but the reality is very few can, or will, do this; most will simply try to deploy vLLM or vanilla TRL or something as best they can (and run into the same issues everyone does).
The ‘enthusiast’ side where all the tinkerer devs reside is totally screwed up, though. AMD is mirroring Nvidia’s VRAM cartel pricing when they have absolutely no reason to. It’s completely bonkers. AMD would be in a totally different place right now if they had sold 40GB/48GB 7900s for an extra $100 or $200.
Yeah, it does seem divorced from the CPU division. But a lot of the badness comes from business decisions, even when the silicon is quite good, and some of that must be from Austin.