Comment on Self-Hosted AI is pretty darn cool

Toribor@corndog.social 3 months ago

I’ve been testing Ollama in Docker/WSL with the idea that if I like it I’ll eventually move my GPU into my home server and get an upgrade for gaming. When you run a model, Ollama has to load the whole thing into VRAM. I use 8 GB models, so it takes 20-40 seconds to load; after that each response is really fast and the GPU hit is pretty small. By default it unloads the model after about five minutes of inactivity to free up VRAM.
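If you want to measure that cold-start cost yourself, here’s a minimal sketch that times a cold request against a warm one via Ollama’s REST API. It assumes the default localhost:11434 endpoint; the model tag is just a placeholder for whatever you’ve pulled:

```python
import time
import requests  # assumes the `requests` package is installed

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3:8b"  # hypothetical tag; substitute your own model

def timed_generate(prompt: str) -> float:
    """Send one non-streaming generate request, return elapsed seconds."""
    start = time.monotonic()
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return time.monotonic() - start

# The first call pays the model-load cost; the second hits the warm model.
print(f"cold: {timed_generate('Say hi.'):.1f}s")
print(f"warm: {timed_generate('Say hi again.'):.1f}s")
```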

Basically this means you either wait a bit for the model to warm up, or you extend that timeout so it stays resident longer (or just get used to waiting on it). It also means I can’t really use my GPU for anything else while the LLM is loaded.
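Extending the timeout is straightforward: Ollama accepts a keep_alive value per request, and the OLLAMA_KEEP_ALIVE environment variable changes the server-wide default. A rough sketch of the per-request version (model tag again a placeholder):

```python
import requests

# keep_alive accepts a duration string ("30m", "24h"), a number of
# seconds, -1 to pin the model in VRAM indefinitely, or 0 to unload
# it immediately after the response.
requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3:8b",   # hypothetical tag
        "prompt": "warm-up",
        "stream": False,
        "keep_alive": "30m",    # stay resident for 30 minutes after this call
    },
    timeout=300,
)
```

Setting keep_alive to -1 is the extreme version of the trade-off above: responses are always fast, but the GPU stays tied up the whole time.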

I haven’t tracked power usage, but apart from the VRAM requirement it doesn’t seem very resource-intensive; then again, maybe I just haven’t thrown anything complex enough at it yet.
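If you do want a rough read on power draw and VRAM, nvidia-smi reports both. A quick sketch that shells out to it (assumes nvidia-smi is on your PATH, which it is under WSL when the Windows NVIDIA driver is installed):

```python
import subprocess

# One-shot query of power draw, VRAM in use, and GPU utilization.
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=power.draw,memory.used,utilization.gpu",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # e.g. "45.2 W, 7800 MiB, 12 %"
```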
