Comment on Consumer GPUs to run LLMs
marauding_gibberish142@lemmy.dbzer0.com 1 month agoThank you. Are 14B models the biggest you can run comfortably?
RagingHungryPanda@lemm.ee 1 month ago
The coder model only comes in that one size. The ones bigger than that are 20 GB+, and my GPU has 16 GB. I've only tried two models, but the sizes seem to balloon after that, so 14B may be the biggest I can run.
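For a rough sense of why the sizes balloon, here's a back-of-the-envelope sketch (my own assumed numbers, not anything from the model cards): weights take roughly parameter count times bytes per weight at a given quantization, plus some headroom for context and runtime overhead.

```python
# Rough VRAM estimate for a quantized LLM (back-of-the-envelope, assumed figures).
# Actual usage also depends on context length, KV cache, and runtime overhead.

def estimate_vram_gb(params_billion: float, bits_per_weight: float, overhead_gb: float = 1.5) -> float:
    """Approximate memory needed to hold the weights plus a fixed overhead."""
    weight_gb = params_billion * 1e9 * (bits_per_weight / 8) / (1024 ** 3)
    return weight_gb + overhead_gb

for size in (7, 14, 24, 32):
    # 4-bit quantization is a common choice when trying to fit a model in VRAM.
    print(f"{size}B @ 4-bit: ~{estimate_vram_gb(size, 4):.1f} GB")
```

By that estimate a 4-bit 14B model lands around 8 GB while a 32B model pushes past 16 GB, which matches the 16 GB card being roughly the cutoff.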
marauding_gibberish142@lemmy.dbzer0.com 1 month ago
Do you have any recommendations for running the Mistral Small model? I'm very interested in it alongside CodeLlama, Oobabooga and others
RagingHungryPanda@lemm.ee 1 month ago
I haven’t tried those, so not really, but with Open WebUI you can download and run anything; just make sure it fits in your VRAM so it doesn’t fall back to the CPU. The DeepSeek one is decent. I find that I like GPT-4o better, but it’s still good.
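If it helps, Open WebUI typically sits in front of an Ollama backend, and the same models can be pulled and queried from Python. A minimal sketch using the `ollama` client package (the model tag here is just an example; pick whatever actually fits in your VRAM):

```python
# Minimal sketch using the ollama Python client.
# Assumes a local Ollama server is already running.
import ollama

MODEL = "deepseek-r1:14b"  # example tag, swap for a model that fits your GPU

ollama.pull(MODEL)  # downloads the model if it isn't present yet

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Write a haiku about VRAM."}],
)
print(response["message"]["content"])
```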
marauding_gibberish142@lemmy.dbzer0.com 1 month ago
In general, how much VRAM do I need for 14B and 24B models?