Comment on AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns
wonderingwanderer@sopuli.xyz 13 hours ago
If Microsoft cared about privacy, they wouldn't have made Windows practically spyware. Even if they install AI locally in the OS, it's still proprietary software that constantly sends data back to the mothership, consuming your electricity and RAM to do so. Linux has so many options; there's really no reason not to switch.
Small LLMs already exist for local self-hosting, and there are open-source options that won't steal your data or turn you into a product.
huggingface.co/spaces/…/open_llm_leaderboard#/
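To make that concrete, here's a minimal sketch of what local inference can look like using the llama-cpp-python bindings; the model path is a placeholder, so substitute whatever small quantized GGUF model you download:

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The GGUF path below is a placeholder: grab any small quantized model from
# Hugging Face and point model_path at the downloaded file.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-small-model.Q4_K_M.gguf",  # placeholder, not a real file
    n_ctx=4096,  # context window; bigger windows cost more memory
)

out = llm("In one sentence, what is a quantized LLM?", max_tokens=128)
print(out["choices"][0]["text"])
```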
Bear in mind that the number of parameters your system can handle is limited by how much memory you have, and a quantized version lets you fit more parameters into the same amount of memory.
Unless you have some really serious hardware, 24 billion parameters is probably the maximum that would be practical for self-hosting on a reasonable hobbyist set-up. But I’m no expert, so do some research and calculate for yourself what your system can handle.
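As a rough rule of thumb, weight memory is parameters × bits-per-parameter ÷ 8, and everything else (KV cache, runtime overhead) sits on top of that. A back-of-envelope sketch, with the caveat that real usage always runs higher:

```python
# Back-of-envelope estimate of LLM weight memory. Actual usage is higher:
# the KV cache, activations, and runtime overhead all add to this.
def weights_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB (treating 1 GB as 10^9 bytes)."""
    return params_billion * bits_per_param / 8

for label, bits in [("FP16", 16.0), ("Q8", 8.0), ("Q4_K_M (~4.85 bits)", 4.85)]:
    print(f"24B at {label}: ~{weights_gb(24, bits):.0f} GB")
# 24B at FP16: ~48 GB, at Q8: ~24 GB, at Q4_K_M: ~15 GB -- which is why
# quantization is what makes a 24B model fit on hobbyist hardware at all.
```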
tal@lemmy.today 12 hours ago
Eh…I don’t know if you’d call it “really serious hardware”, but when I picked up my 128GB Framework Desktop, it was $2k (without storage), and that box is often described as being aimed at the hobbyist AI market. That’s pricier than most video cards, but an AMD Radeon RX 7900 XTX was north of $1k, an Nvidia RTX 4090 was about $2k, and the Nvidia RTX 5090 is presently something over $3k (and rising) on eBay, well over MSRP. None of those GPUs are dedicated AI-compute hardware, either: they’re high-end gaming cards that people happen to use for AI work.
I think the largest LLM I’ve run on it was a 106 billion parameter GLM model at Q4_K_M quantization. It was certainly usable, and I wasn’t trying to squeeze the largest possible model onto the thing; I’m sure one could run substantially larger models.
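(For anyone wanting to reproduce something like this with llama-cpp-python, the main knob is n_gpu_layers, which controls how many layers get offloaded to the GPU; on a unified-memory box like the Framework Desktop the GPU draws from the same pool as system RAM. A sketch, with the file name as a placeholder:)

```python
# Sketch of loading a large quantized model with llama-cpp-python.
# n_gpu_layers=-1 asks it to offload every layer to the GPU; on a
# unified-memory machine the GPU shares the system RAM pool.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-106b-model.Q4_K_M.gguf",  # placeholder file name
    n_gpu_layers=-1,  # offload all layers (lower this if you run out of memory)
    n_ctx=8192,
)

print(llm("Say hello.", max_tokens=32)["choices"][0]["text"])
```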
wonderingwanderer@sopuli.xyz 9 hours ago
See, you have more experience in the matter than I do, hence the caveat that I’m not an expert. Thanks for sharing your experience.
Then again, I’d consider 128GB of memory to be fairly serious hardware, but if that’s common among hobbyists then I stand corrected. I was operating on the assumption that 64GB of RAM is already a lot.
All in all, 106 billion parameters with quantization fitting in 128GB of memory doesn’t surprise me all that much. But again, I’m just going off the vague notions I’ve gathered from reading about it.
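(A quick sanity check with the same rule of thumb as above: Q4_K_M works out to roughly 4.85 bits per parameter, so the weights alone come to about 64GB, comfortably inside 128GB even with the KV cache on top.)

```python
# Sanity check: weight memory for a 106B model at Q4_K_M (~4.85 bits/param).
print(f"~{106 * 4.85 / 8:.0f} GB of weights")  # ~64 GB, well under 128 GB total
```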
The focus of my original comment was more on the fact that self-hosting is an option; I wasn’t trying to be precise about the specs. My bad if it came off that way.