I have an M2 MacBook Pro (Apple silicon) and would kind of like to replace Google’s Gemini as my go-to LLM. I think I’d like to run something like Mistral, probably. Currently I do have Ollama and some version of Mistral running, but I almost never use it, since it’s on my laptop, not my phone.
I’m not big on LLMs, and if I can find one that runs locally and helps me get off of Google Search and Gemini, that would be awesome. Currently I use a combo of Firefox, Qwant, Google Search, and Gemini for my daily needs. I’m not big on the direction Firefox is headed, I’ve heard there are arguments against Qwant, and using Gemini feels like the wrong answer for my beliefs and opinions.
I’m looking for something better without too much time being sunk into something I may only sort of like. Tall order, I know, but I figured I’d give you as much info as I can.
southernbeaver@lemmy.world 9 months ago
My Home Assistant is running on Unraid, and I have an old NVIDIA Quadro P5000 in the box. I really want to run a vision model so that it can describe who is at my doorbell.
brucethemoose@lemmy.world 9 months ago
Oh actually that’s a good card for LLM serving!
Build the llama.cpp server from source; it has better support for Pascal cards than anything else:
github.com/ggml-org/llama.cpp/…/multimodal.md
Gemma 3 is a hair too big (like 17–18GB), so I’d start with InternVL3 14B at Q5_K_XL: huggingface.co/…/InternVL3-14B-Instruct-GGUF
Or Mistral Small 3.2 24B at IQ4_XS for more ‘text’ intelligence than vision: huggingface.co/…/Mistral-Small-3.2-24B-Instruct-2…
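Once llama-server is up with one of those models, anything on your network (a Home Assistant automation, a cron script) can hit its OpenAI-compatible `/v1/chat/completions` endpoint with an inline base64 image. Here’s a minimal sketch of building that request in Python — the model name, port, and `describe_doorbell` helper are assumptions for illustration, not anything from llama.cpp itself:

```python
import base64
import json
import urllib.request

def build_vision_request(image_bytes: bytes, prompt: str,
                         model: str = "internvl3-14b") -> dict:
    """Build an OpenAI-style chat payload with an inline base64 image,
    the format llama-server's /v1/chat/completions endpoint accepts."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

def describe_doorbell(image_bytes: bytes,
                      url: str = "http://localhost:8080/v1/chat/completions") -> str:
    """Hypothetical helper: POST a doorbell snapshot and return the description."""
    payload = build_vision_request(image_bytes, "Describe who is at the door.")
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

From Home Assistant you’d wire `describe_doorbell` to a camera snapshot trigger; the payload shape is the same one OpenAI-compatible clients use, so any existing integration that speaks that API should work against llama-server too.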