Comment on Elon Musk’s Grok Goes Haywire, Boasts About Billionaire’s Pee-Drinking Skills and ‘Blowjob Prowess’

FauxLiving@lemmy.world, 5 hours ago

They’re overestimating the costs. Four H100s and 512 GB of DDR4 will run the full DeepSeek-R1 model; that’s about $100k of GPUs and $7k of RAM. It’s not something you’re going to have in your homelab (for a few years at least), but it’s well within the budget of a hobbyist group or a moderately sized local business.
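
For a rough sense of why that hardware is in the right ballpark, here’s some back-of-the-envelope arithmetic based on DeepSeek-R1’s published 671B total parameters (weights only; KV cache and activations add more on top):

```python
def weight_footprint_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight storage in GB for a given precision."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

FULL_MODEL_B = 671  # DeepSeek-R1 total parameter count, in billions

for label, bits in [("FP16", 16), ("FP8", 8), ("4-bit", 4)]:
    print(f"{label:>5}: ~{weight_footprint_gb(FULL_MODEL_B, bits):,.0f} GB")

# FP16 ≈ 1,342 GB, FP8 ≈ 671 GB, 4-bit ≈ 336 GB.
# 4x H100 gives 320 GB of VRAM; with 512 GB of system RAM you can keep
# the rest of the weights in CPU memory and offload layers as needed.
```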

Since it’s an open-weights model, people have created quantized and distilled versions of it. Quantization stores each parameter in fewer bits, and distillation shrinks the parameter count itself, so the RAM requirements drop dramatically.
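
As a toy illustration of what quantization trades away, here’s a minimal sketch: symmetric 4-bit quantization of a handful of made-up weights. Real schemes (like the K-quants in GGUF files) are more sophisticated, but the idea is the same:

```python
def quantize_int4(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric 4-bit quantization: map floats to integers in [-7, 7]."""
    scale = max(abs(w) for w in weights) / 7
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.98, -0.33]       # made-up example values
q, scale = quantize_int4(weights)
print(q)                                    # [1, -4, 7, -2]: 4 bits each instead of 32
print([round(v, 3) for v in dequantize(q, scale)])
# [0.14, -0.56, 0.98, -0.28] — close to the originals, small rounding error
```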

You can run these quantized versions of DeepSeek-R1 locally. I’m running deepseek-r1-0528-qwen3-8b on a machine with a 12GB NVIDIA RTX 3080 and 64GB of RAM. Unless you’re paying for an AI service and using its flagship models, the output is pretty hard to distinguish from the full model’s.
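
If you want to try the same thing, here’s a minimal sketch using llama-cpp-python, one common way to run GGUF quantizations locally. The file name and settings are just an example; use whichever quantized build you actually download:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-0528-Qwen3-8B-Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload all layers; a Q4 8B model fits in 12 GB of VRAM
    n_ctx=8192,       # context window; raise it if you have RAM to spare
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain quantization in one paragraph."}],
    max_tokens=512,
)
print(resp["choices"][0]["message"]["content"])
```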

If you’re coding or doing other tasks that push the model hard, it’ll stumble more often, but in a casual ‘ChatGPT’-style conversation you couldn’t tell the difference between it and ChatGPT.
