OpticalMoose@discuss.tchncs.de 5 days ago
I agree with CameronDev, not so much on the capacity, but the bandwidth. At 100+ GB, the consumer Ryzen/Core platforms really hold you back with their weak I/O.
If you need that much memory, you might be better off picking up a used Xeon/Epyc from eBay. Their CPU clocks are lower, but the quad-channel RAM could make up for it, depending on what you're trying to do.
hendrik@palaver.p3x.de 5 days ago
I'd say this is the correct answer. If you're actually using that much RAM, you probably want it connected to the processor with a wide (fast) bus. I rarely see people do that with desktop or gaming processors. It might be useful for some edge cases, but usually you either want an Epyc processor or something like that, or it's way too much RAM.
Solaer@lemmy.world 5 days ago
My edge case is: I want to spin up an AI LXC in Proxmox running Ollama and Open WebUI, using RAM instead of VRAM, but with low power consumption at idle. That's why I want an Intel i9 or Core Ultra 9 with maxed-out RAM: it idles at low power, but can run bigger AI models out of system RAM instead of VRAM. It wouldn't be as fast as with GPUs, but that's OK.
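For what it's worth, a minimal sketch of the setup I mean, assuming Ollama's default REST API on port 11434 and that its num_gpu option (number of layers offloaded to a GPU) set to 0 keeps everything in system RAM:

```python
# Minimal sketch: query a local Ollama instance, forcing CPU-only inference.
# Assumes Ollama's REST API on its default port (11434); "num_gpu": 0
# tells Ollama to offload zero layers to a GPU, so the model stays in RAM.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",        # any model you have pulled
        "prompt": "Why is the sky blue?",
        "stream": False,               # one JSON blob instead of chunks
        "options": {"num_gpu": 0},     # keep all layers on the CPU / in RAM
    },
    timeout=600,                       # CPU inference can be slow
)
print(resp.json()["response"])
```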
hendrik@palaver.p3x.de 5 days ago
AI inference is memory-bound, so memory bus width is the main bottleneck. I also do AI on an (old) CPU, but the CPU itself is mostly idle, waiting on memory. I'd say it'll likely be very slow, like waiting 10 minutes for a longer answer. I believe that's why the AI people use Apple silicon: the unified memory and its bus width. Or some CPU with more memory channels.
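A rough way to see why: during generation, each new token has to stream essentially the whole model through the memory bus, so tokens per second is roughly bandwidth divided by model size. A back-of-the-envelope sketch (model sizes are illustrative):

```python
# Rough decode speed for memory-bound CPU inference: every generated
# token reads (roughly) the whole model from RAM, so
# tokens/s ~= memory bandwidth / model size. Figures are illustrative.
def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

dual_ddr5 = 89.6  # GB/s, dual-channel DDR5-5600
for name, size_gb in [("8B @ Q4 (~5 GB)", 5), ("70B @ Q4 (~40 GB)", 40)]:
    print(f"{name}: ~{tokens_per_second(dual_ddr5, size_gb):.1f} tok/s")
# 8B @ Q4:  ~17.9 tok/s (usable)
# 70B @ Q4:  ~2.2 tok/s (minutes for a long answer)
```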
Solaer@lemmy.world 5 days ago
The i9-10900X has 4 channels (quad-channel DDR4-2933, PC4-23466, 93.9 GB/s). Would it be better in this regard than an i9-14xxx (dual-channel DDR5-5600, PC5-44800, 89.6 GB/s)?
Do those numbers (93.9 GB/s and 89.6 GB/s) mean the speed of one RAM stick or of all channels together? Maybe an old quad-channel i9-10xxx would be better than a new dual-channel part.
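Checking the arithmetic myself (a sketch, assuming peak theoretical bandwidth is transfers/s × 8 bytes per 64-bit channel × channel count), the quoted figures come out as platform totals, not per stick:

```python
# Peak theoretical bandwidth = MT/s * 8 bytes per 64-bit channel * channels.
# The GB/s figures quoted above are the platform total across all channels;
# the PC4-23466 / PC5-44800 names are the per-module (per-channel) rating.
def peak_gb_s(mt_per_s: int, channels: int) -> float:
    return mt_per_s * 8 * channels / 1000  # MB/s -> GB/s

print(peak_gb_s(2933, 4))  # 93.9 GB/s, quad-channel DDR4-2933
print(peak_gb_s(5600, 2))  # 89.6 GB/s, dual-channel DDR5-5600
print(peak_gb_s(2933, 1))  # 23.5 GB/s, a single DDR4-2933 channel (one stick)
```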
neatobuilds@lemmy.today 5 days ago
So if I had more memory channels, would it be better to have, say, Ollama use the CPU instead of the GPU?
SinningStromgald@lemmy.world 5 days ago
Maybe check out this video?
He goes over the different ways to run self-hosted AI without a GPU, like you want to do, including maxing out RAM and using PCIe M.2 add-on boards.
Solaer@lemmy.world 5 days ago
Thank you very much! That leads to this article: forum.level1techs.com/t/…/2 Maybe the 9959x is what I'm looking for.