Honestly, you pretty much don’t. LLMs are insanely expensive to run, since most model improvements come from simply growing the model. It’s not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you’re still going to be behind the purpose-built GPUs with 80GB of VRAM.
Maybe it could work for some use cases, but I’d rather just not use AI.
lexiw@lemmy.world 5 days ago
You are playing with ancient stuff that wasn’t even good at release. Try these (a quick way to run one locally is sketched below):
A 4B model that performs like a 30B one: huggingface.co/Nanbeige/Nanbeige4.1-3B
Google’s open-weight counterpart to Gemini (Gemma 3): huggingface.co/google/gemma-3-4b-it
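If you have a GGUF build of either one, llama-cpp-python is the quickest way to poke at it from Python. Rough sketch only; the model path and settings are placeholders, so adjust them for your hardware:

```python
# Minimal sketch using llama-cpp-python with a local GGUF file.
# The model path is a placeholder; point it at whatever GGUF you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./gemma-3-4b-it-Q4_K_M.gguf",  # placeholder path
    n_ctx=8192,       # context window; lower it if you run out of memory
    n_gpu_layers=-1,  # offload as many layers as possible to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}],
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```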
ch00f@lemmy.world 5 days ago
Any suggestions on how to get these into GGUF format? I found a GitHub project that claims to convert, but I’m wondering if there’s a more direct way.
lexiw@lemmy.world 5 days ago
[screenshot]
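Generally you don’t need to convert anything yourself: most popular models already have community-published GGUF quantizations on Hugging Face, and llama.cpp ships a convert_hf_to_gguf.py script if you do want to convert the original weights. A rough sketch for downloading a ready-made one; the repo id and filename below are placeholders, so copy the real ones from the repo’s file list:

```python
# Rough sketch: download a pre-converted GGUF from the Hugging Face Hub.
# Both repo_id and filename are placeholders -- use the exact names from the
# GGUF repo you actually pick.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="someuser/gemma-3-4b-it-GGUF",   # placeholder repo id
    filename="gemma-3-4b-it-Q4_K_M.gguf",    # placeholder file name
)
print(path)  # local file you can point llama.cpp / your runner at
```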
ch00f@lemmy.world 5 days ago
[screenshot of the model’s output]
Well, not off to a great start.
To be clear, I think getting an LLM to run locally at all is super cool, but saying “go self-hosted” sort of glosses over the fact that getting a local LLM to do anything close to what ChatGPT can do is a very expensive hobby.
lexiw@lemmy.world 5 days ago
I agree, it is a very expensive hobby, and it only gets decent in the 30–80B range. However, the model you are using should not perform that badly; it seems you might be hitting a config issue. Would you mind sharing the CLI command you use to run it?
ch00f@lemmy.world 4 days ago
Thanks for taking the time.
So I’m not using a CLI. I’ve got the intelanalytics/ipex-llm-inference-cpp-xpu image running and hosting LLMs to be used by a separate open-webui container. I originally set it up with Deepseek-R1:latest per the tutorial to get the results above. This was straight out of the box with no tweaks.
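If it helps, I can also hit the backend directly instead of going through open-webui. As far as I can tell the ipex-llm container exposes the usual Ollama-style API, so something like this (default host/port assumed) should show what the raw model returns:

```python
# Rough sketch of querying the backend directly, bypassing open-webui.
# Assumes the container exposes the standard Ollama-style API on the default
# port 11434; adjust host, port, and model tag to match the actual setup.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1:latest",  # the tag pulled per the tutorial
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "stream": False,
        "options": {"temperature": 0.6, "num_ctx": 4096},  # example generation options
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```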
The interface offers some control settings (screenshot below). Is that what you’re talking about?
[screenshot]