Actually, to go ahead and answer: the easiest path would be LM Studio (which supports MLX quants natively and takes almost no time to install), paired with a DWQ quantization (a newer, higher-quality variant of MLX quants).
Probably one of these models, depending on how much RAM you have:
huggingface.co/…/Magistral-Small-2506-4bit-DWQ
huggingface.co/…/Qwen3-30B-A3B-4bit-DWQ-0508
huggingface.co/…/GLM-4-32B-0414-4bit-DWQ
With a bit more time invested, you could set up Open WebUI as an alternative interface (it has its own built-in web search, like Gemini does): openwebui.com
And then use LM Studio (or some other MLX backend, or even a free online API model) as the 'engine'.
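For context, LM Studio can expose a local OpenAI-compatible server (on localhost:1234 by default), and Open WebUI can be pointed at that same endpoint in its connection settings. A minimal sketch of talking to it directly, assuming the server is running with a model already loaded (the model name below is just a placeholder):

```python
# Minimal sketch: query LM Studio's local OpenAI-compatible server.
# Assumes the server is running on the default port (1234) with a
# model loaded; "local-model" is a placeholder identifier.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",  # LM Studio ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whatever model is loaded
    messages=[{"role": "user", "content": "Summarize what MLX is in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```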
brucethemoose@lemmy.world 1 day ago
Honestly, Perplexity (the online service) is pretty good.
But the first question is: how much RAM does your Mac have? That's basically the deciding factor for what model you can and should run.
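As a rough rule of thumb, a 4-bit quant needs about half a byte per parameter, plus overhead for context. A back-of-envelope sketch (an approximation only; real usage varies with quant format and context length):

```python
# Back-of-envelope RAM estimate for a 4-bit quant. Approximation only:
# ignores quantization group overhead and assumes a modest KV cache.
def approx_ram_gb(params_billion: float, overhead_gb: float = 1.5) -> float:
    weights_gb = params_billion * 0.5  # 4 bits = 0.5 bytes per parameter
    return weights_gb + overhead_gb

for name, size in [("Qwen3-4B", 4), ("Magistral Small (24B)", 24), ("GLM-4-32B", 32)]:
    print(f"{name}: ~{approx_ram_gb(size):.1f} GB")
```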
WhirlpoolBrewer@lemmings.world 1 day ago
8GB
brucethemoose@lemmy.world 1 day ago
8GB?
You might be able to run Qwen3 4B: huggingface.co/mlx-community/…/main
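If you do try it, mlx-lm (`pip install mlx-lm`), the library those mlx-community quants are built for, can run it in a few lines. A sketch, with the repo id as an assumption since the link above is truncated; substitute whichever quant you actually download:

```python
# Sketch of running a small MLX quant with mlx-lm.
# The repo id below is an assumption (the link above is truncated).
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-4B-4bit")  # hypothetical repo id

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Explain unified memory on a Mac briefly."}],
    tokenize=False,
    add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=200))
```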
But honestly you don’t have enough RAM to spare, and even a small model might bog things down. I’d run Open WebUI or LM Studio with a free LLM API, like Gemini Flash, or pay a few bucks for something off OpenRouter. Or maybe the Cerebras API.
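The API route is the same handful of lines either way, since OpenRouter (and most of these services) speak the OpenAI protocol. A sketch, with the model slug as an assumption; check their catalog for what's actually free:

```python
# Sketch: the same OpenAI-style client, pointed at a hosted API instead.
# The model slug is an assumption; check openrouter.ai for current models
# and free tiers. Expects an OPENROUTER_API_KEY in the environment.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="google/gemini-flash-1.5",  # assumed slug; substitute any listed model
    messages=[{"role": "user", "content": "What can I run locally on an 8GB Mac?"}],
)
print(response.choices[0].message.content)
```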
WhirlpoolBrewer@lemmings.world 1 day ago
Good to know. I’d hate to buy a new machine strictly for running an LLM. Could be an excuse to pick up something like a Framework 16, but realistically, I don’t see myself doing that. I think you might be right about using something like Open WebUI or LM Studio.