Comment on Consumer GPUs to run LLMs
RagingHungryPanda@lemm.ee 5 days ago
I haven't tried those, so not really, but with Open WebUI you can download and run anything; just make sure it fits in your VRAM so it doesn't run on the CPU. The DeepSeek one is decent. I find that I like ChatGPT-4o better, but it's still good.
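If you want a quick sanity check that a model will actually stay on the GPU, something like this minimal sketch works. It assumes an NVIDIA card with nvidia-smi on the PATH and a local GGUF file; the file path and headroom number are just placeholders/assumptions, not anything Open WebUI does for you:

```python
# Rough check that a downloaded model will fit in VRAM before loading it.
# Assumes an NVIDIA GPU with nvidia-smi available; the GGUF path below
# is only a placeholder.
import os
import subprocess

def free_vram_mib() -> int:
    """Return free VRAM on GPU 0 in MiB, as reported by nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.splitlines()[0].strip())

def fits_in_vram(model_path: str, headroom_mib: int = 2048) -> bool:
    """Compare model file size (plus headroom for KV cache etc.) to free VRAM."""
    model_mib = os.path.getsize(model_path) / (1024 ** 2)
    return model_mib + headroom_mib <= free_vram_mib()

if __name__ == "__main__":
    path = "models/deepseek-r1-14b-q4_k_m.gguf"  # placeholder filename
    print("fits on GPU" if fits_in_vram(path) else "will spill to CPU/RAM")
```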
marauding_gibberish142@lemmy.dbzer0.com 5 days ago
In general, how much VRAM do I need for 14B and 24B models?
FrankLaskey@lemmy.ml 4 days ago
It really depends on how you quantize the model and the K/V cache as well. This is a useful calculator: smcleod.net/vram-estimator/ I can comfortably fit most 32B models quantized to 4-bit (usually Q4_K_M or IQ4_XS) on my 3090's 24 GB of VRAM with a reasonable context size. If you're going to need a much larger context window to input large documents etc., then you'd need to go smaller on model size (14B, 27B, etc.), get a multi-GPU setup, or get something with unified memory and a lot of RAM (like the Mac Minis others are mentioning).
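For a back-of-envelope estimate in the spirit of that calculator, something like the sketch below gets you in the right ballpark. The layer/head/dim numbers in the example call are assumptions for a typical 32B model and vary between architectures, so treat them as illustrative only:

```python
# Back-of-envelope VRAM estimate: quantized weights + K/V cache + overhead.
# The architecture numbers used in the example (layers, KV heads, head dim)
# are assumptions and differ from model to model.

def estimate_vram_gb(
    n_params_b: float,      # parameters in billions
    weight_bits: float,     # effective bits per weight, e.g. ~4.8 for Q4_K_M
    n_layers: int,
    n_kv_heads: int,
    head_dim: int,
    context_len: int,
    kv_bits: int = 16,      # 16 unless the K/V cache is quantized
    overhead_gb: float = 1.0,
) -> float:
    weights_gb = n_params_b * 1e9 * weight_bits / 8 / 1e9
    # K and V each store n_layers * n_kv_heads * head_dim values per token
    kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * kv_bits / 8
    kv_gb = kv_bytes_per_token * context_len / 1e9
    return weights_gb + kv_gb + overhead_gb

# Example: a 32B model at ~4.8 bits with an 8k context (assumed architecture);
# prints roughly 22 GB, which is why it just squeezes onto a 24 GB 3090.
print(round(estimate_vram_gb(32, 4.8, n_layers=64, n_kv_heads=8,
                             head_dim=128, context_len=8192), 1), "GB")
```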
FrankLaskey@lemmy.ml 4 days ago
Oh, and I typically get 16-20 tok/s running a 32B model on Ollama using Open WebUI. Also, I have experienced issues with 4-bit quantization of the K/V cache on some models myself, so just FYI.
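If you want to reproduce a tok/s number like that yourself, Ollama's generate endpoint reports eval_count (tokens generated) and eval_duration (nanoseconds), so a small sketch like this does the math. It assumes Ollama is running on the default port and the model tag is a placeholder for whichever 32B model you've pulled:

```python
# Measure generation speed from Ollama's own timing fields.
# Assumes a local Ollama server on the default port 11434; the model tag
# below is a placeholder.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:32b",  # placeholder model tag
        "prompt": "Explain K/V cache quantization in one paragraph.",
        "stream": False,
    },
    timeout=600,
)
data = resp.json()
tok_per_s = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"{tok_per_s:.1f} tok/s")
```

For reference, the K/V cache quantization that caused those issues is toggled through Ollama's OLLAMA_KV_CACHE_TYPE environment variable (f16, q8_0, or q4_0), if I remember correctly, so it's easy to switch back to f16 on the models that misbehave.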