Qwen3-235B-A22B-FP8
Good! An MoE.
Ideally I’d run it at its maximum context length of 131K, but I’m willing to compromise.
I can tell you from experience that all Qwen models suck past 32K. What’s more, to go over 32K you have to run them in a special “mode” (YaRN) that degrades performance under 32K. This is particularly bad in vLLM, as it does not support dynamic YaRN scaling.
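For illustration, here’s roughly how YaRN gets enabled in vLLM (this mirrors the Qwen3 model card’s example invocation; I haven’t tuned these numbers):

```
# YaRN in vLLM is static: the 4x factor below is fixed at launch and applied
# to every request, even ones well under 32K, which is what hurts
# short-context quality.
vllm serve Qwen/Qwen3-235B-A22B-FP8 \
  --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' \
  --max-model-len 131072
```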
Also, you lose a lot of quality with FP8/AWQ quantization unless the model is native FP8 (like DeepSeek). Exllama and ik_llama.cpp quants are much tighter, and their low-batch performance is still quite good. And vLLM has no good K/V cache quantization (its FP8 destroys quality), while llama.cpp’s is good and exllama’s is excellent.
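For reference, vLLM’s K/V cache quantization is the --kv-cache-dtype flag (the model path here is just a placeholder):

```
# vLLM only offers FP8 for the K/V cache, and in my experience it
# noticeably hurts quality.
vllm serve <model> --kv-cache-dtype fp8
```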
My current setup is already: Xeon w7-3465X, 128GB DDR5, 2x 4090
Honestly, you should be set now. I can get 16+ t/s with Hunyuan 70B (which is 13B active) on a 7800 CPU/3090 GPU system with ik_llama.cpp. That rig (8-channel DDR5, and plenty of it, vs my 2 channels) should at least double that with 235B given the right quantization, and you could speed it up further by throwing in 2 more 4090s. The project is basically optimized for your exact rig :)
It is poorly documented, though. The general strategy is to keep the “core” of the LLM on the GPUs while offloading the less compute-intensive experts to RAM, and it takes some tinkering. There’s even a project to try and calculate the split automatically.
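As a rough sketch of the manual approach with ik_llama.cpp (not a tested command; the GGUF filename and thread count are placeholders you’d swap for your own):

```
# -ngl 99 puts every layer on the GPUs, then -ot (--override-tensor) pushes
# the huge but low-compute ffn_*_exps expert tensors back to system RAM,
# leaving the dense attention "core" on the 4090s.
llama-server \
  -m Qwen3-235B-A22B-<quant>.gguf \
  -ngl 99 \
  -ot exps=CPU \
  -fa -c 32768 \
  --threads 24
```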
brucethemoose@lemmy.world 8 months ago
Ah, here we go:
huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF
Ubergarm is great. See this part in particular: huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF#quic…
You will need to modify the syntax for 2x GPUs a bit. I’d recommend starting with an f16/f16 K/V cache at 32K (to see if that’s acceptable), and try not to go lower than q8_0/q5_1 (the V cache is more amenable to quantization than the K, hence the lower V type).
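Something like this for the 2x 4090 case (a sketch, untested; adjust the split ratio and model path):

```
# Start conservative: f16 K/V cache at 32K context.
# -ts 1,1 splits the GPU-resident tensors evenly across the two 4090s.
# If VRAM gets tight, drop the cache to -ctk q8_0 -ctv q5_1 (the V cache
# tolerates quantization better than the K, hence the lower V type).
llama-server \
  -m Qwen3-235B-A22B-<quant>.gguf \
  -ngl 99 -ot exps=CPU \
  -ts 1,1 \
  -fa -c 32768 \
  -ctk f16 -ctv f16
```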
TheMightyCat@ani.social 8 months ago
Thanks! I’ll go check it out.
brucethemoose@lemmy.world 8 months ago
One last thing: I’ve heard mixed things about 235B, so if whatever you do is something targeted, there might be a smaller, more optimal LLM for it.
For instance, Kimi-Dev 72B is quite a good coding model: huggingface.co/moonshotai/Kimi-Dev-72B
It might fit in vLLM (as an AWQ) with 2x 4090s, and it would easily fit as an exl3.
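If you try the vLLM route, it would look roughly like this (the AWQ repo name is made up for illustration; you’d need to find or make an actual AWQ quant):

```
# Tensor-parallel across both 4090s. 72B AWQ weights are ~40GB, so 2x 24GB
# is tight; keep --max-model-len modest to leave room for the K/V cache.
vllm serve SomeUser/Kimi-Dev-72B-AWQ \
  --quantization awq \
  --tensor-parallel-size 2 \
  --max-model-len 16384
```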
rezz@lemmy.world 8 months ago
What do I need to run Kimi? Does it have Apple silicon compatible releases? It seems promising.