Comment on "I've just created c/Ollama!"
brucethemoose@lemmy.world 1 day ago
I don't understand.
Ollama is not actually Docker, right? It's running the same llama.cpp engine; it's just embedded inside the wrapper app, not containerized.
And basically every LLM project ships a Docker container. I know for a fact that llama.cpp, TabbyAPI, Aphrodite, vLLM and SGLang do.
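For llama.cpp that route looks roughly like this; a minimal sketch, assuming the published server image name, the port, and the model path shown here, which may differ from whatever the project currently documents:

```bash
# Run llama.cpp's published server container (image name/tag may differ per release),
# mounting a local directory with a GGUF model and exposing the HTTP API on 8080.
docker run --rm -p 8080:8080 -v "$PWD/models:/models" \
  ghcr.io/ggml-org/llama.cpp:server \
  -m /models/some-model.gguf --host 0.0.0.0 --port 8080
```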
You are 100% right about security, though. In fact, there's a huge concern with compromised Python packages. This one almost got me: pytorch.org/blog/compromised-nightly-dependency/
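For what it's worth, one way to reduce exposure to a swapped-out package is hash-pinned installs; a minimal sketch, assuming pip-tools is installed and the file names are placeholders:

```bash
# Pin dependencies with hashes (pip-tools), then install in hash-checking mode.
# Any package whose hash doesn't match the pinned value is rejected.
pip-compile --generate-hashes requirements.in -o requirements.txt
pip install --require-hashes -r requirements.txt
```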
tal@lemmy.today 1 day ago
I'm sorry, you are correct. The syntax and interface mirror Docker's, and one can run Ollama in Docker, so I'd thought it was a thin wrapper around Docker. But I just went to check, and you are right: it's not running in Docker by default. Sorry, folks! Guess now I've got one more thing to look into getting inside a container myself.
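If it helps, the containerized route looks roughly like this; a sketch based on the official image, with the volume name and model name as examples, so check Ollama's own Docker instructions for the current details:

```bash
# Run the official Ollama image with a named volume for model storage,
# then pull and chat with a model inside the container.
docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama
docker exec -it ollama ollama run llama3.2
```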
hasnep@lemmy.ml 1 day ago
Try ramalama, it's designed to run models inside OCI containers.
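Rough idea of the workflow; a sketch, with the model reference as an example and the exact transport prefixes and defaults described in ramalama's docs:

```bash
# Pull a model and run it; ramalama picks a container engine (podman/docker)
# and a matching runtime image for the detected GPU/CPU.
ramalama pull ollama://smollm:135m
ramalama run ollama://smollm:135m
# Or expose a REST endpoint instead of an interactive chat:
ramalama serve ollama://smollm:135m
```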