hendrik@palaver.p3x.de 5 days ago
I'd say this is the correct answer. If you're actually using that much RAM, you probably want it connected to the processor with a wide (fast) bus. I rarely see people do that with desktop or gaming processors. It might be useful for some edge cases, but usually you either want an Epyc processor or something like that, or it's simply way too much RAM.
Solaer@lemmy.world 5 days ago
My edge case is: I want to spin up an AI LXC in Proxmox running Ollama and Open WebUI, using RAM instead of VRAM, but it should be low on power consumption at idle. That's why I want an Intel i9 or Core Ultra 9 with maxed-out RAM: it idles at low power, but can run bigger AI models using RAM instead of VRAM. It wouldn't be as fast as with GPUs, but that's OK.
hendrik@palaver.p3x.de 5 days ago
AI inference is memory-bound, so memory bus width is the main bottleneck. I also do AI on an (old) CPU, but the CPU itself is mostly idle, waiting for memory. I'd say it'll likely be very slow, like waiting 10 minutes for a longer answer. I believe that's why the AI people use Apple silicon: the unified memory and its bus width. Or some CPU with several memory channels.
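As a rough back-of-the-envelope sketch: each generated token has to stream (roughly) the whole model through the CPU once, so memory bandwidth divided by model size gives an upper bound on tokens per second. The 40 GB model size here is just an assumed example, not a measured figure:

```python
# Bandwidth-bound estimate: every generated token streams ~all weights once.
model_size_gb = 40.0      # assumption: e.g. a ~70B model at ~4-bit quantization
mem_bandwidth_gbs = 90.0  # assumption: typical dual-channel desktop CPU

tokens_per_second = mem_bandwidth_gbs / model_size_gb
print(f"~{tokens_per_second:.1f} tokens/s")                       # ~2.2 tokens/s
print(f"~{500 / tokens_per_second / 60:.1f} min for 500 tokens")  # ~3.7 min
```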
Solaer@lemmy.world 5 days ago
The i9-10900X has 4 channels (quad-channel DDR4-2933, PC4-23466, 93.9 GB/s). Would this be better in this regard than an i9-14xxx (dual-channel DDR5-5600, PC5-44800, 89.6 GB/s)?
Do the numbers (93.9 GB/s and 89.6 GB/s) mean the speed per RAM stick or the speed of all of them together? Maybe an old i9-10xxx with quad-channel RAM would be better than a new dual-channel one.
hendrik@palaver.p3x.de 5 days ago
Seems to mean all together: (5600 MT/s / 1000) x 2 channels x 64 bit / 8 bits per byte = 89.6 GB/s,
or (2933 / 1000) x 4 channels x 64 bit / 8 = 93.9 GB/s.
So they calculated with double the 64-bit DDR bus width, or four times it.
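Here's that arithmetic as a quick script, in case anyone wants to plug in other configurations:

```python
def ddr_bandwidth_gbs(mts: float, channels: int, bus_bits: int = 64) -> float:
    """Theoretical peak: (mega)transfers/s x channels x bytes per transfer."""
    return mts / 1000 * channels * bus_bits / 8

print(ddr_bandwidth_gbs(5600, channels=2))  # 89.6   -> dual-channel DDR5-5600
print(ddr_bandwidth_gbs(2933, channels=4))  # 93.856 -> quad-channel DDR4-2933 (~93.9)
```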
neatobuilds@lemmy.today 5 days ago
So if I had more memory channels, would it be better to have, say, ollama use the CPU versus the GPU?
hendrik@palaver.p3x.de 5 days ago
Well, the numbers I find on Google are: an Nvidia 4090 can transfer 1008 GB/s, and an i9 does something like 90 GB/s. So you'd expect the CPU to be roughly 11 times slower than that GPU at fetching numbers from memory.
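Plugging both numbers into the same bandwidth-bound estimate as above (again assuming a hypothetical ~40 GB quantized model):

```python
model_size_gb = 40.0  # assumption, not a measured figure
for name, bw_gbs in [("RTX 4090", 1008.0), ("desktop i9", 90.0)]:
    print(f"{name}: ~{bw_gbs / model_size_gb:.1f} tokens/s")
# RTX 4090: ~25.2 tokens/s
# desktop i9: ~2.2 tokens/s  (roughly 11x slower)
```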
SinningStromgald@lemmy.world 5 days ago
Maybe check out this video?
He goes over the different ways to run a self-hosted AI without a GPU, like you want to do, including maxing out RAM and using PCIe M.2 add-on boards.
Solaer@lemmy.world 5 days ago
Thank you very much! This leads to this article: forum.level1techs.com/t/…/2 Maybe the 9959x is what I am looking for.