Comment on I've just created c/Ollama!
WhirlpoolBrewer@lemmings.world 9 months ago
I have an M2 MacBook Pro (Apple silicon) and would kind of like to replace Google’s Gemini as my go-to LLM. I think I’d like to run something like Mistral, probably. Currently I do have Ollama and some version of Mistral running, but I almost never use it, as it’s on my laptop, not my phone.
I’m not big on LLMs, but if I can find one that runs locally and helps me get off of Google Search and Gemini, that would be awesome. Currently I use a combo of Firefox, Qwant, Google Search, and Gemini for my daily needs. I’m not big on the direction Firefox is headed, I’ve heard there are arguments against Qwant, and using Gemini feels like the wrong answer for my beliefs and opinions.
I’m looking for something better without too much time being sunk into something I may only sort of like. Tall order, I know, but I figured I’d give you as much info as I can.
Honestly perplexity, the online service, is pretty good.
But first question is: how much RAM does your Mac have? This is basically the factor for what model you can and should run.
8GB
8GB?
You might be able to run Qwen3 4B: huggingface.co/mlx-community/…/main
But honestly you don’t have enough RAM to spare, and even a small model might bog things down. I’d run Open Web UI or LM Studio with a free LLM API, like Gemini Flash, or pay a few bucks for something off openrouter. Or maybe Cerebras API.
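As a rough sketch of why 8GB is so tight: the weights of a quantized model alone take about (parameter count × bits per weight ÷ 8) bytes, before you add the KV cache, the runtime, and whatever macOS itself is holding. The numbers below are back-of-envelope estimates, not measured figures:

```python
# Back-of-envelope estimate of the RAM a quantized LLM's weights need.
# Real usage is higher: KV cache, runtime overhead, and the OS all
# compete for the same unified memory on Apple silicon.

def weight_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights in GB (decimal)."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for name, params, bits in [
    ("4B model @ 4-bit", 4, 4),     # roughly the Qwen3 4B case
    ("30B model @ 4-bit", 30, 4),
    ("32B model @ 4-bit", 32, 4),
]:
    print(f"{name}: ~{weight_memory_gb(params, bits):.1f} GB of weights")
```

So a 4B model at 4-bit is around 2 GB of weights, which on an 8GB machine leaves little headroom once the system and apps take their share.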
Good to know. I’d hate to buy a new machine strictly for running an LLM. Could be an excuse to pick up something like a Framework 16, but realistically, I don’t see myself doing that. I think you might be right about using something like Open Web UI or LM Studio.
brucethemoose@lemmy.world 9 months ago
Actually, to go ahead and answer, the “easiest” path would be LM Studio (which supports MLX quants natively and is not time intensive to install), and a DWQ quantization (which is a newer, higher quality variant of MLX quants).
Probably one of these models, depending on how much RAM you have:
huggingface.co/…/Magistral-Small-2506-4bit-DWQ
huggingface.co/…/Qwen3-30B-A3B-4bit-DWQ-0508
huggingface.co/…/GLM-4-32B-0414-4bit-DWQ
With a bit more time invested, you could try to set up Open Web UI as an alternative interface (which has its own built-in web search like Gemini): openwebui.com
And then use LM Studio (or some other MLX backend, or even free online API models) as the ‘engine’
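To illustrate the "engine" idea: LM Studio can serve a local OpenAI-compatible API (by default on localhost:1234), and Open Web UI, or any script, just talks to that endpoint. A minimal sketch, assuming a model is already loaded in LM Studio; the model name here is a placeholder:

```python
# Minimal sketch of talking to a local OpenAI-compatible server,
# such as the one LM Studio runs (default: http://localhost:1234).
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str):
    """Build an OpenAI-compatible /v1/chat/completions request."""
    url = f"{base_url}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return url, payload

def ask(base_url: str, model: str, prompt: str) -> str:
    """Send the request; requires the local server to actually be running."""
    url, payload = build_chat_request(base_url, model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (only works with LM Studio's local server running;
# "qwen3-4b" is a placeholder model name):
# print(ask("http://localhost:1234", "qwen3-4b", "Hello!"))
```

Because the API shape is the same as OpenAI's, you can point the same interface at a free online API instead of LM Studio just by swapping the base URL and adding an API key.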
WhirlpoolBrewer@lemmings.world 9 months ago
This is all new to me, so I’ll have to do a bit of homework on this. Thanks for the detailed and linked reply!
brucethemoose@lemmy.world 9 months ago
I was a bit mistaken, these are the models you should consider:
huggingface.co/mlx-community/Qwen3-4B-4bit-DWQ
huggingface.co/AnteriorAI/…/main
huggingface.co/unsloth/Jan-nano-GGUF (specifically the UD-Q4 or UD-Q5 file)
These are state-of-the-art, as far as I know.
WhirlpoolBrewer@lemmings.world 9 months ago
Awesome, I’ll give these a spin and see how it goes. Much appreciated!