Comment on Selfhost an LLM

splendoruranium@infosec.pub ⁨2⁩ ⁨weeks⁩ ago

I read about OLLAMA, but it’s all unclear to me.

There’s really nothing more to it than the initial instructions tell you. Literally just a “curl -fsSL https://ollama.com/install.sh | sh”. Then you’re just an “ollama run qwen3:14b” away from having a chat with the model in your terminal.
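To make that concrete, the whole first session is basically the following (the qwen3:14b tag is just the example from above; pick whatever model fits your hardware):

# one-time install via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# download the model (first run only) and drop into an interactive chat
ollama run qwen3:14b

# see which models you have pulled so far
ollama list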

After that you can make it more involved: serving the model via API, manually adding .gguf quantizations (usually smaller or special-purpose modified bootleg versions of big published models) to your Ollama library with a Modelfile, ditching Ollama altogether for a different environment, or, the big upgrade, giving your chats a shiny frontend in the form of Open-WebUI.
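Roughly what those next steps look like, as a sketch rather than a recipe (model and file names here are placeholders, and the Open-WebUI one-liner is the usual Docker command from their docs, so double-check it before running):

# Ollama already exposes an HTTP API on localhost:11434, so "serving via API"
# is mostly pointing other tools at it
curl http://localhost:11434/api/generate -d '{"model": "qwen3:14b", "prompt": "Hello"}'

# importing a downloaded .gguf quantization via a Modelfile
# (Modelfile contents, one line: FROM ./some-model.Q4_K_M.gguf)
ollama create my-local-model -f Modelfile

# Open-WebUI as a Docker container, talking to the host's Ollama
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main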
