Is Nvidia still a de facto requirement? I’ve heard of AMD support being added to Ollama and the like, but I haven’t found robust comparisons on value.
In particular, comments on very large numbers of gaming GPUs vs. dedicated AI GPUs would be appreciated.
brucethemoose@lemmy.world 3 days ago
Be specific!
- What model size (or which model) are you looking to host?
- At what context length?
- What kind of speed (tokens/s) do you need?
- Is it just for you, or many people? How many? In other words, should the serving be parallel?
In short, it depends, but the best option for a self-hosted rig, OP, is probably:
- One 5090 or A6000 Ada GPU.
- A cost-effective EPYC CPU/mobo.
- At least 256 GB of DDR5.
Now run ik_llama.cpp, and you can serve DeepSeek 671B faster than you can read: github.com/ikawrakow/ik_llama.cpp
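Once llama-server from ik_llama.cpp is up, anything that speaks the OpenAI API can hit it; a minimal client-side sketch, assuming the default local port and the openai Python package (model label and port are placeholders):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local llama-server endpoint.
# The server answers for whatever GGUF it was launched with, so the model name is just a label.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="deepseek-671b",  # placeholder label
    messages=[{"role": "user", "content": "Give me a one-paragraph status check."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```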
But there are all sorts of niches. In a nutshell, the question isn’t “How much do I need for AI?” but “What is my target use case and model?”
PeriodicallyPedantic@lemmy.ca 3 days ago
brucethemoose@lemmy.world 3 days ago
It depends!
ExLlamaV2 was pretty fast on AMD, and ExLlamaV3 is getting support soon. vLLM is also fast on AMD. But it’s not easy to set up; you basically have to be a Python dev on Linux and wrestle with pip.
Base llama.cpp is fine, as are forks like kobold.cpp’s ROCm build. This is much more doable, without so much hassle.
The AMD Framework Desktop is a pretty good machine for large MoE models. The 7900 XTX is the next-best hardware, but unfortunately AMD is not really interested in competing with Nvidia on high-VRAM offerings :'/
And there are… quirks, depending on the model.
I dunno about Intel Arc these days, but AFAIK you are stuck with their docker container or llama.cpp. And again, they don’t offer a lot of VRAM for the $ either.
Llama.cpp Vulkan (for use on anything) is improving but still behind in terms of support.
A lot of people do offload MoE models to Threadripper or EPYC CPUs; that’s the homelab way to run big models like Qwen 235B or DeepSeek these days. An Nvidia GPU is still standard, but you can use a 3090 or 4090.
WhyJiffie@sh.itjust.works 1 day ago
“Or get lucky with docker.”
Why do you need to get lucky with Docker? What is it that doesn’t work?
brucethemoose@lemmy.world 1 day ago
Eh, there’s not as much attention paid to the Docker images working across hardware, because AMD prices its hardware uncompetitively (hence devs don’t test on it much), and AMD itself focuses on the MI300X and above.
Also, I’m not sure what layer one needs to get ROCm working.
PeriodicallyPedantic@lemmy.ca 3 days ago
Thanks!
That helps when I eventually get around to standing up my own AI server. Right now I can’t really justify the cost for my low volume of use, when I can get Cloudflare free-tier access to mid-sized models. But it’s something I want to bring into my homelab instead, for better control and privacy.
TheMightyCat@ani.social 3 days ago
My target model is Qwen/Qwen3-235B-A22B-FP8. Ideally at its maximum context length of 131K, but I’m willing to compromise. I find it hard to give a concrete t/s answer; let’s put it around 50. At max load, probably around 8 concurrent users, but those situations will be rare enough that optimizing for a single user is probably more worthwhile.
My current setup is already: Xeon w7-3465X, 128 GB DDR5, 2x RTX 4090.
It gets nice enough performance loading 32B models completely in VRAM, but I am skeptical that a similar system can run a 671B at anything faster than a snail’s pace. I currently run vLLM because it has higher performance with tensor parallelism than llama.cpp, but I shall check out ik_llama.cpp.
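For context, my current launch is roughly the following (just a sketch; the quantized 32B checkpoint named here is a stand-in for whatever actually fits in 2x24 GB):

```python
from vllm import LLM, SamplingParams

# Sketch of the current 2x 4090 setup: a quantized ~32B model sharded across both GPUs.
# The model name is a placeholder; swap in the checkpoint you actually run.
llm = LLM(
    model="Qwen/Qwen3-32B-AWQ",      # assumed 4-bit quant so the weights fit in 48 GB total
    tensor_parallel_size=2,          # tensor parallelism across the two 4090s
    max_model_len=32768,             # leave VRAM headroom for the KV cache
    gpu_memory_utilization=0.90,
)

params = SamplingParams(temperature=0.7, max_tokens=256)
out = llm.generate(["Explain tensor parallelism in one paragraph."], params)
print(out[0].outputs[0].text)
```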
brucethemoose@lemmy.world 3 days ago
Good! An MoE.
I can tell you from experience that all Qwen models suck past 32K. What’s more, to go over 32K you have to run them in a special “mode” (YaRN) that degrades performance under 32K. This is particularly bad in vLLM, as it does not support dynamic YaRN scaling.
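If you do end up wanting the full 131K anyway, that “mode” is just a rope_scaling block added to the checkpoint’s config.json; a sketch using the values the Qwen model cards document (the local path is a placeholder):

```python
import json

# Placeholder path to a local copy of the checkpoint's config.json
cfg_path = "Qwen3-235B-A22B-FP8/config.json"

with open(cfg_path) as f:
    cfg = json.load(f)

# Static YaRN scaling: 4x the native 32K window -> ~131K context.
# Note it is applied unconditionally, which is why short-context quality degrades.
cfg["rope_scaling"] = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```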
Also, you lose a lot of quality with FP8/AWQ quantization unless the model is native FP8 (like DeepSeek). Exllama and ik_llama.cpp quants are much tighter, and their low-batch performance is still quite good. On top of that, vLLM has no good K/V cache quantization (its FP8 destroys quality), while llama.cpp’s is good and exllama’s is excellent.
Honestly, you should be set now. I can get 16+ t/s with Hunyuan 70B (which is 13B active) on a 7800 CPU / 3090 GPU system with ik_llama.cpp. Your rig (8-channel DDR5, and plenty of it, vs. my 2 channels) should at least double that with 235B given the right quantization, and you could speed it up further by throwing in 2 more 4090s. The project is explicitly optimized for your exact rig, basically :)
It is poorly documented, though. The general strategy is to keep the “core” of the LLM on the GPUs while offloading the less compute-intensive experts to RAM, and it takes some tinkering. There’s even a project to try to calculate the split automatically:
github.com/k-koehler/gguf-tensor-overrider
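To make the strategy concrete, a rough launch sketch (flag names follow llama.cpp conventions; the binary location and GGUF path/quant are placeholders, so check your build’s --help before copying anything):

```python
import subprocess

# Rough sketch of the "dense core on GPU, MoE experts in system RAM" launch.
# Binary location and GGUF path are placeholders for your own build/download.
subprocess.run([
    "./build/bin/llama-server",
    "-m", "/models/Qwen3-235B-A22B-IQ4_KS.gguf",  # placeholder quant
    "-ngl", "99",          # nominally offload all layers to the GPUs...
    "-ot", "exps=CPU",     # ...then override: keep the MoE expert tensors in system RAM
    "-fa",                 # flash attention
    "--host", "0.0.0.0",
    "--port", "8080",
])
```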
brucethemoose@lemmy.world 3 days ago
Ah, here we go:
huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF
Ubergarm is great. See this part in particular: huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF#quic…
You will need to modify the syntax for 2x GPUs a bit. I’d recommend starting with an f16/f16 K/V cache at 32K (to see if that’s acceptable), and try not to go lower than q8_0/q5_1 (as the V cache is more amenable to quantization).
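Concretely, these are the kinds of flags I mean, added to a launch like the sketch above (names assumed from llama.cpp conventions; double-check them against the ik_llama.cpp --help and the model card):

```python
# Additions for 2x GPUs plus K/V cache settings; values here are starting points, not gospel.
two_gpu_flags = [
    "-ts", "1,1",       # split the GPU-resident weights evenly across the two 4090s
    "-c", "32768",      # start at 32K context and see if that's acceptable
    "-ctk", "f16",      # K cache: start at f16
    "-ctv", "f16",      # V cache: start at f16; if VRAM is tight, q8_0 (K) / q5_1 (V) is the floor
]
```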
TheMightyCat@ani.social 3 days ago
Thanks! I’ll go check it out.
brucethemoose@lemmy.world 3 days ago
One last thing: I’ve heard mixed things about 235B, so there might be a smaller, more optimal LLM for whatever you do, if your use case is targeted?
For instance, Kimi 72B is quite a good coding model: huggingface.co/moonshotai/Kimi-Dev-72B
It might fit in vLLM (as an AWQ) with 2x 4090s, and it would easily fit as an exl3.