Comment on Very large amounts of gaming gpus vs AI gpus
PeriodicallyPedantic@lemmy.ca 3 days ago
Is Nvidia still a de facto requirement? I've heard of AMD support being added to Ollama and the like, but I haven't found robust comparisons on value.
brucethemoose@lemmy.world 3 days ago
It depends!
ExLlamaV2 was pretty fast on AMD, and ExLlamaV3 is getting support soon. vLLM is also fast on AMD. But neither is easy to set up; you basically have to be a Python dev on Linux and wrestle with pip.
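Once you do get vLLM built, the Python side is tiny. A minimal sketch, assuming the ROCm build of vllm installed cleanly (the model name is just a placeholder):

```python
# Minimal vLLM offline-inference sketch. Assumes a working ROCm build of vllm;
# the model name below is only an example.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")   # any Hugging Face model path
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain MoE offloading in one paragraph."], params)
print(outputs[0].outputs[0].text)
```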
Base llama.cpp is fine, as are forks like koboldcpp-rocm. These are more doable, without so much hassle.
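If you'd rather script llama.cpp than run a server binary, the llama-cpp-python bindings wrap the same backend. A rough sketch, assuming you installed a wheel built for your GPU backend (the path and settings are placeholders):

```python
# Rough llama-cpp-python sketch. Assumes a wheel built with ROCm/Vulkan support.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # placeholder path to any GGUF file
    n_gpu_layers=-1,            # offload as many layers as the GPU can hold
    n_ctx=4096,                 # context window
)
out = llm("Q: What is a MoE model? A:", max_tokens=128)
print(out["choices"][0]["text"])
```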
The AMD-based Framework Desktop is a pretty good machine for large MoE models. The 7900 XTX is the next-best hardware, but unfortunately AMD is not really interested in competing with Nvidia on high-VRAM offerings :'/
And there are… quirks, depending on the model.
I dunno about Intel Arc these days, but AFAIK you are stuck with their docker container or llama.cpp. And again, they don’t offer a lot of VRAM for the $ either.
Llama.cpp's Vulkan backend (which runs on basically anything) is improving, but still lags behind in support.
A lot of people do offload MoE models to Threadripper or EPYC CPUs; that's the homelab way to run big models like Qwen 235B or DeepSeek these days. An Nvidia GPU is still the standard companion, but a single 3090 or 4090 is enough.
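The llama.cpp trick for that hybrid setup is --override-tensor: put every layer on the GPU, then pin the big expert tensors back to system RAM. A sketch of how I'd launch it (the flags are real llama.cpp options, but the model file and tensor regex are examples; check the tensor names for your model):

```python
# Hypothetical launch script for hybrid CPU+GPU MoE inference with llama-server.
# The GGUF path and the tensor-name regex are placeholders.
import subprocess

subprocess.run([
    "./llama-server",
    "-m", "Qwen3-235B-A22B-Q4_K_M.gguf",  # placeholder model file
    "-ngl", "99",                          # put all layers on the GPU...
    "-ot", r"\.ffn_.*_exps\.=CPU",         # ...then pin expert FFNs to CPU RAM
    "-c", "16384",                         # context length
    "--port", "8080",
])
```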
WhyJiffie@sh.itjust.works 1 day ago
why do you need to get lucky with docker? what is it that doesn’t work?
brucethemoose@lemmy.world 1 day ago
Eh, there's not as much attention paid to them working across hardware, because AMD prices its hardware uncompetitively (hence devs don't test on it much), and AMD itself focuses on the MI300X and up.
Also, I'm not sure which layer of the stack one needs to get ROCm working.
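For what it's worth, one quick way to probe that layer from Python (assuming a ROCm build of PyTorch, where AMD devices surface through the torch.cuda API):

```python
# Sanity check that the ROCm stack is visible to PyTorch. On ROCm builds,
# AMD GPUs show up through the torch.cuda API (HIP masquerades as CUDA).
import torch

print(torch.cuda.is_available())              # True if driver + ROCm line up
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))      # e.g. an RX 7900 XTX
print(getattr(torch.version, "hip", None))    # HIP/ROCm version, or None
```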
PeriodicallyPedantic@lemmy.ca 3 days ago
Thanks!
That helps when I eventually get around to standing up my own AI server.
Right now I can't really justify the cost for my low volume of use, when I can get Cloudflare free-tier access to mid-sized models. But it's something I want to bring into my homelab instead, for better control and privacy.
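(For anyone curious, that Cloudflare access is just a REST call to Workers AI; a rough sketch with placeholder account ID, token, and model slug:)

```python
# Hypothetical Workers AI request on Cloudflare's free tier. The account ID,
# API token, and model slug below are placeholders; substitute your own.
import requests

ACCOUNT_ID = "YOUR_ACCOUNT_ID"
API_TOKEN = "YOUR_API_TOKEN"
MODEL = "@cf/meta/llama-3.1-8b-instruct"  # example model slug

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"messages": [{"role": "user", "content": "Hello!"}]},
)
print(resp.json()["result"]["response"])
```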