There was a recent-ish model, Qwen Next, that was advertised as something that can be run entirely in RAM.
brucethemoose@lemmy.world 4 days ago
I just got a 2x64GB 6000 kit before its price skyrocketed by like $130.
…Also, why does “AI” need so much CPU RAM? In actual server deployments, pretty much all the work is in VRAM, and honestly most businesses are too dumb to train anything effectively. ASICs that would use, say, LPDDR are super rare, and stuff like Hybrid/IGP inference is the realm of ‘a few guys with homelabs’… Like me.
just_an_average_joe@lemmy.dbzer0.com 4 days ago
brucethemoose@lemmy.world 4 days ago
They can ALL be run on RAM, theoretically. I bought 128GB so I can run GLM 4.5 with the experts offloaded to CPU, using a custom trellis/K quant mix; but this is a ‘personal use’ tinkerer setup (rough sketch at the end of this comment).
Qwen Next is good at that because it has a very low active parameter count.
…But they aren’t actually deployed that way. They’re basically always deployed on cloud GPU boxes that serve dozens/hundreds of people at once, in parallel.
AFAIK the only major model actually developed for CPU inference is one of the esoteric Gemma releases, aimed at mobile.
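For the curious, here’s a rough sketch of hybrid VRAM/RAM inference with llama-cpp-python. The filename and layer split are made up, and this is the simple per-layer split the binding exposes, not the per-expert tensor overrides I actually use in llama.cpp:

```python
# Minimal hybrid VRAM/RAM inference sketch (pip install llama-cpp-python).
# The model path is a placeholder; tune n_gpu_layers to whatever fits your
# VRAM -- layers that don't fit run from system RAM on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4.5-Q4_K_M.gguf",  # hypothetical GGUF quant file
    n_gpu_layers=20,  # offload this many layers to the GPU, rest stay in RAM
    n_ctx=8192,       # context window
)

out = llm("Explain mixture-of-experts offloading in one paragraph.", max_tokens=200)
print(out["choices"][0]["text"])
```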
Passerby6497@lemmy.world 4 days ago
I for one would enjoy triggering your unskippable cutscenes by setting up local CPU-based AI, if it can work on Linux with an older AMD card.
Don’t have funds for anything fancy, but I would be interested in playing around with it. Been wanting to get something like that set up for Home Assistant.
brucethemoose@lemmy.world 4 days ago
Plenty of folks do AMD. A popular ‘homelab’ setup is 32GB AMD MI50s. Even Intel is fine these days!
But what’s your setup, precisely? CPU, RAM, and GPU.
ag10n@lemmy.world 4 days ago
You can use Vulkan fairly easily as long as you have 8GB of VRAM.
SabinStargem@lemmy.today 3 days ago
If you just want an easy way to set up AI on Windows or Linux, KoboldCPP is my recommendation for your backend. It supports the GGUF format, which allows you to use both RAM and VRAM simultaneously. It won’t be the fastest thing, but it is easy enough to set up, with a bundled GUI for prep and actual usage. Through the IP address it gives you, you can hook the backend into a frontend of your choice.
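For example, once KoboldCPP is running, a frontend (or a quick script) can hit that address directly. This assumes the default port 5001 and the KoboldAI-style generate endpoint, so check your version’s docs:

```python
# Quick sketch of talking to a running KoboldCPP backend over HTTP.
# Endpoint, port, and fields follow the KoboldAI-style API and may differ
# between versions -- treat this as a shape, not gospel.
import requests

payload = {
    "prompt": "Write a haiku about RAM prices.",
    "max_length": 80,
    "temperature": 0.7,
}
resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```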
Kissaki@feddit.org 3 days ago
I suspect RAM may become increasingly useful with the shift from pure chat LLMs to connected agents, MCP, and caching results and data for scaling things like public Internet search.
When I think of database server software, a lot of its performance gains come from keeping hot data in RAM. With LLM systems expanding in scope and concerns, backing data, connectedness, and the need for optimisation, a shift toward caching and keeping things in RAM seems to suggest itself. These systems are already wasteful/big and operate on a lot of data, so it seems plausible that such a cache would not be small.
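A toy sketch of what I mean, with a hypothetical search tool behind an in-memory cache so repeated agent calls don’t re-fetch the same results:

```python
# Illustration only: keep recent tool/search results hot in RAM with a TTL,
# the way a database keeps its working set cached. run_search() is a
# stand-in for a real (slow, expensive) search/MCP tool call.
import time

_cache: dict[str, tuple[float, list[str]]] = {}
TTL_SECONDS = 300  # how long a cached result stays "fresh"

def run_search(query: str) -> list[str]:
    # pretend this hits the network and takes a while
    return [f"result for {query!r}"]

def cached_search(query: str) -> list[str]:
    now = time.time()
    hit = _cache.get(query)
    if hit and now - hit[0] < TTL_SECONDS:
        return hit[1]                # served straight from RAM
    results = run_search(query)      # slow path
    _cache[query] = (now, results)
    return results
```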
brucethemoose@lemmy.world 3 days ago
Yeah, exactly… In other words, ‘general server buildout.’
humanspiral@lemmy.ca 3 days ago
why does “AI” need so much CPU RAM
It doesn’t really, though CPU inference is possible (if slow) at 256+ GB. The problem is that they are making HBM (“AI” RAM) instead of DDR4/5.
tty5@lemmy.world 4 days ago
The same memory production capacity can be allocated to DDR5 or to HBM, and OpenAI signed contracts with SK Hynix and Samsung, the two largest RAM manufacturers in the world, buying a significant percentage of next year’s production.
DDR5 prices started spiking as that deal’s impact propagated through the supply chain. I bought a 2x32GB 6800 CL30 kit for 195 euro 12 days ago. It was 330 euro 4 days later.
brucethemoose@lemmy.world 4 days ago
…Is it that interchangeable?
TBH I know little about memory fabs, but I know (say) TSMC and their customers can’t just switch from a power-optimized process to a high-frequency one at the drop of a hat.
tty5@lemmy.world 4 days ago
Slightly different part, same process. The bigger bottleneck is packaging: HBM is 3D stacked.
brucethemoose@lemmy.world 4 days ago
Ah. Yeah. And it’s on the fab to do that.
I always thought it’d be cool for CPUs to switch to packaged RAM, too. Samsung apparently tried to do it with Wide I/O for mobile ARM stuff, but it never caught on.