Ah, here we go:
huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF
Ubergarm is great. See this part in particular: huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF#quic…
You will need to modify the syntax a bit for 2x GPUs. I’d recommend starting with an f16/f16 K/V cache at 32K (to see if that’s acceptable), and try not to go lower than q8_0/q5_1 (the V cache is more amenable to quantization than the K cache).
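Something like this for the llama.cpp/ik_llama.cpp server, as a rough sketch (the model filename is a placeholder, and you’d tune -ts and -ngl for your cards):

```
# Full-precision K/V cache at 32K to start. -fa (flash attention) is
# required if you later switch to a quantized V cache.
# -ts 1,1 splits weights evenly across the two GPUs.
# -ngl 99 only fits if you also kick the experts out to RAM (see the -ot override).
./llama-server -m Qwen3-235B-A22B-IQ4_KS.gguf \
  -c 32768 -fa \
  -ctk f16 -ctv f16 \
  -ngl 99 -ts 1,1

# If VRAM gets tight: -ctk q8_0 -ctv q5_1, but don't go lower than that.
```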
brucethemoose@lemmy.world 3 days ago
Good! An MoE.
I can tell you from experience that all Qwen models suck past 32K. What’s more, to go over 32K you have to run them in a special “mode” (YaRN rope scaling) that degrades performance under 32K. This is particularly bad in vLLM, as it does not support dynamic YaRN scaling.
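For reference, if you do push past 32K in llama.cpp/ik_llama.cpp, static YaRN looks roughly like this (the 4x factor and 32K native window are what Qwen documents, but double-check against the model card; the filename is a placeholder):

```
# Static YaRN: stretch the 32K-native RoPE window 4x (to ~128K).
# The scaling factor is fixed, which is exactly why short-context quality suffers.
./llama-server -m Qwen3-235B-A22B-IQ4_KS.gguf \
  -c 131072 \
  --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```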
You also lose a lot of quality with FP8/AWQ quantization unless the model is natively FP8 (like DeepSeek). ExLlama and ik_llama.cpp quants are much tighter, and their low-batch performance is still quite good. On top of that, vLLM has no good K/V cache quantization (its FP8 destroys quality), while llama.cpp’s is good and ExLlama’s is excellent.
Honestly, you should be set now. I can get 16+ t/s with Hunyuan 70B (which is 13B active) on a 7800 CPU / 3090 GPU system with ik_llama.cpp. That rig (8-channel DDR5, and plenty of it, vs. my 2 channels) should at least double that with 235B, given the right quantization, and you could speed it up further by throwing in 2 more 4090s. The project is explicitly optimized for your exact rig, basically :)
It is poorly documented, though. The general strategy is to keep the “core” of the LLM on the GPUs while offloading the less compute-intensive experts to RAM, and it takes some tinkering. There’s even a project that tries to calculate the split automatically:
github.com/k-koehler/gguf-tensor-overrider
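For what it’s worth, the manual version with ik_llama.cpp (or mainline llama.cpp) is a tensor-override regex like the one below; treat the exact pattern, split, and filename as a sketch to tune, not gospel:

```
# Keep attention, norms, and the dense/shared weights on the two GPUs,
# but route the per-layer MoE expert tensors (ffn_*_exps) to system RAM.
./llama-server -m Qwen3-235B-A22B-IQ4_KS.gguf \
  -c 32768 -fa -ctk f16 -ctv f16 \
  -ngl 99 -ts 1,1 \
  -ot "blk\..*\.ffn_.*_exps.*=CPU"

# With spare VRAM, you can add more -ot rules to pin some layers' experts
# to CUDA0/CUDA1 instead of the CPU.
```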