Comment on What exactly is a self-hosted small LLM actually good for (<= 3B)

ragingHungryPanda@lemmy.zip 1 week ago

I’ve run a few of the models that fit on my GPU. I don’t think the smaller models are really good enough. They can do stuff, sure, but to get anything useful out of them, I think you need the larger models.

They can be used for basic things, though. There are coder-specific models you can look at; DeepSeek Coder and Qwen Coder are a couple of popular ones.
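
If you want to poke at one of these locally, here's a rough sketch of talking to a small coder model through Ollama's OpenAI-compatible endpoint (assuming Ollama is running on its default port and you've already pulled a ~3B tag like `qwen2.5-coder:3b`; swap in whatever backend and model tag you actually use):

```python
# Minimal sketch: chat with a small local coder model via Ollama's
# OpenAI-compatible API. Assumes Ollama is running with defaults and
# a model was pulled beforehand, e.g. `ollama pull qwen2.5-coder:3b`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # placeholder; Ollama doesn't check the key
)

resp = client.chat.completions.create(
    model="qwen2.5-coder:3b",  # assumed tag; use whatever you pulled
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
)
print(resp.choices[0].message.content)
```

For basic boilerplate-type prompts like that, the small models usually hold up; it's the bigger, multi-file stuff where they fall apart.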
