Comment on Consumer GPUs to run LLMs
RagingHungryPanda@lemm.ee 2 days ago
I got it working with my 6800 XT. I’m running DeepSeek R1 14B (somewhere around there) and DeepSeek Coder V2. I have a link to a blog post with those instructions.
marauding_gibberish142@lemmy.dbzer0.com 2 days ago
Thank you. Are 14B models the biggest you can run comfortably?
RagingHungryPanda@lemm.ee 2 days ago
The coder model only comes in that one size. The next sizes up are 20GB+, and my GPU has 16GB of VRAM. I’ve only tried two models, but the sizes seem to balloon after that, so those may be the biggest models I can run.
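The back-of-the-envelope math here is roughly: parameter count times bytes per weight, plus some runtime overhead. A minimal sketch (the overhead figure and the example sizes are assumptions, not exact numbers for any specific build):

```python
def est_vram_gb(params_billion: float, bits_per_weight: float,
                overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: weights = params * (bits / 8) bytes,
    plus a guessed overhead for KV cache and runtime buffers."""
    weights_gb = params_billion * (bits_per_weight / 8)
    return weights_gb + overhead_gb

# A 14B model at 4-bit quantization: ~7 GB of weights, fits in 16 GB
print(round(est_vram_gb(14, 4), 1))   # -> 8.5
# A 24B model at 8-bit: weights alone already exceed a 16 GB card
print(round(est_vram_gb(24, 8), 1))   # -> 25.5
```

This is why quantized (4-bit/5-bit) builds are usually the ones that fit on a 16GB card.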
marauding_gibberish142@lemmy.dbzer0.com 2 days ago
Do you have any recommendations for running the Mistral Small model? I’m very interested in it alongside CodeLlama, Oobabooga, and others.
RagingHungryPanda@lemm.ee 2 days ago
I haven’t tried those, so not really, but with Open WebUI you can download and run anything; just make sure it fits in your VRAM so it doesn’t spill over onto the CPU. The DeepSeek one is decent. I find I like ChatGPT-4o better, but it’s still good.
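The "make sure it fits" check can be done before you even pull a model. A quick shell sketch (the 1.5 GB overhead and the VRAM figure are assumptions for a 16GB card like the 6800 XT, not measured values):

```shell
vram_gb=16

# fits <params_in_billions> <bits_per_weight>
# Estimates weight size plus a rough runtime overhead and compares to VRAM.
fits() {
  awk -v p="$1" -v b="$2" -v v="$vram_gb" 'BEGIN {
    need = p * b / 8 + 1.5
    if (need < v) print "fits"; else print "too big"
  }'
}

fits 14 4    # 14B at 4-bit  -> fits
fits 24 8    # 24B at 8-bit  -> too big
```

If you’re on an Ollama-style backend, comparing the download size it reports against your free VRAM amounts to the same check.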