Comment on Sam Altman says ChatGPT should be 'much less lazy now'
AlmightySnoo@lemmy.world 9 months ago
PSA: give open-source LLMs a try, folks. If you’re on Linux or macOS, ollama makes it incredibly easy to try most of the popular open-source LLMs like Mistral 7B, Mixtral 8x7B, CodeLlama, etc. Obviously it’s faster if you have a CUDA/ROCm-capable GPU, but it still works in CPU mode too (albeit slowly if the model is huge), provided you have enough RAM.
You can combine that with a UI like ollama-webui or a text-based UI like oterm.
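If you’d rather poke at it programmatically, here’s a minimal sketch that talks to a locally running ollama server over its REST API. It assumes the server is on the default port 11434 and that you’ve already run `ollama pull mistral`:

```python
import json
import urllib.request

# One-shot completion against a local ollama server (default port 11434).
# Assumes the mistral model has already been pulled.
payload = json.dumps({
    "model": "mistral",
    "prompt": "Explain mixture-of-experts in one paragraph.",
    "stream": False,  # return one JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```

Same endpoint works for any model you’ve pulled; just swap the model name.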
JustUseMint@lemmy.world 9 months ago
I spent the better part of a day trying to set up llama.cpp with “wizard vicuna unrestricted” and was unable to, and I’ve got quite a tech background. This was at someone’s suggestion; I’m hoping yours is easier lol.
AlmightySnoo@lemmy.world 9 months ago
ollama should be much easier to set up!
JustUseMint@lemmy.world 9 months ago
Thanks lol, I’m looking forward to it so I can stop contributing to OpenAI
akrot@lemmy.world 9 months ago
ROCm? Is that even supported now? Last time I checked it was still a dumpster fire. What are the RAM and VRAM reqs for the Mixtral8x7b?
AlmightySnoo@lemmy.world 9 months ago
ROCm is decent right now; I can do deep learning stuff and CUDA programming on an AMD APU with it. However, ollama doesn’t yet work out of the box with APUs, though users report that it works with dedicated AMD GPUs.
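If you want a quick sanity check that ROCm is actually visible to your deep learning stack, here’s a minimal sketch assuming a ROCm build of PyTorch (ROCm builds reuse the torch.cuda namespace, so the same calls work on AMD):

```python
import torch

# ROCm builds of PyTorch expose AMD devices through the torch.cuda API,
# so this runs unchanged on both NVIDIA and AMD setups.
print("device visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device name:", torch.cuda.get_device_name(0))
print("HIP version:", torch.version.hip)  # None on a CUDA-only build
```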
As for Mixtral 8x7B, I couldn’t run it on a system with 32GB of RAM and an RTX 2070S with 8GB of VRAM; I’ll probably try another system soon. That same system runs CodeLlama-34B fine, though.
So far I’m happy with Mistral 7B: it’s extremely fast on my RTX 2070S, and it’s not really slow when running in CPU mode on an AMD Ryzen 7. Its speed is okayish (~1 token/sec) when I try it in CPU mode on an old ThinkPad T480 with an 8th-gen i5 CPU.
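For anyone who wants a real number instead of my “okayish”: the non-streaming /api/generate response includes token counts and timings, so you can compute tokens/sec yourself. A small sketch, same assumptions as the one above (local server on 11434, mistral already pulled):

```python
import json
import urllib.request

payload = json.dumps({
    "model": "mistral",
    "prompt": "Write a limerick about VRAM.",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# eval_count = tokens generated; eval_duration is in nanoseconds.
print(f"{result['eval_count'] / (result['eval_duration'] / 1e9):.2f} tokens/sec")
```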
akrot@lemmy.world 9 months ago
I have a Ryzen APU, so I was curious. I fiddled with it yesterday and managed to bump the “VRAM” up to 16GB. But xformers and flash-attention for LLM support aren’t officially supported on iGPUs, and I couldn’t get anything past PyTorch to install. It’s a step further for sure, but it still needs lots of work.
JackGreenEarth@lemm.ee 9 months ago
Or use Jan. It’s a really nice GUI app for running open-source LLMs.
poo@lemmy.world 9 months ago
Seconded - I was playing with this last week. The most basic model is hilariously “bad”, and the larger 30GB models are OK but kill my RAM and take forever to respond. I mean, it’s not really “bad”, because frankly LLMs are like magic to me and I’m grateful they even exist at the level they do, but it’s not up to the level that OpenAI is at right now.
Very promising - excited to see that LLMs aren’t solely locked behind paywalls and I can’t wait to see where some of these go in the next few years!