Comment on 1U mini PC for AI?

chaospatterns@lemmy.world 2 weeks ago

Your options are to run smaller models or wait. llama3.2:3b fits in my 1080 Ti's VRAM and is sufficiently fast. Bigger models get split between VRAM and system RAM and run slower, but they'll work.
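The `llama3.2:3b` tag is Ollama's naming scheme, so presumably that's what is serving it. A minimal sketch of querying such a model with Ollama's Python client (assuming `pip install ollama` and a local Ollama server; the prompt is just a placeholder):

```python
import ollama

# llama3.2:3b is small enough to fit entirely in the 1080 Ti's ~11 GB of VRAM,
# so inference stays on the GPU instead of spilling into system RAM.
response = ollama.chat(
    model="llama3.2:3b",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["message"]["content"])
```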

Not all models are gen-AI-style LLMs. I also run speech-to-text models on my GPU for my smart home.
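The comment doesn't name a specific speech-to-text model; as one illustration, OpenAI's Whisper is a common GPU-based choice. A minimal sketch assuming the `openai-whisper` package and a CUDA-capable GPU (the model size and audio path are placeholders):

```python
import whisper

# Load a small Whisper model onto the GPU; larger variants trade speed for accuracy.
model = whisper.load_model("base", device="cuda")

# Transcribe a short clip, e.g. audio captured by a smart-home device.
result = model.transcribe("doorbell_clip.wav")
print(result["text"])
```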
