Comment on How to use GPUs over multiple computers for local AI?

Xanza@lemm.ee 2 weeks ago

consumer motherboards don’t have that many PCIe slots

The number of PCIe slots isn’t the most limiting factor on consumer motherboards. It’s the number of PCIe lanes that your CPU supports and that the motherboard actually wires up to those slots.

It’s difficult to find non-server hardware that can do something like this, because you need a significant number of CPU PCIe lanes to run several GPUs at full speed. Using an M.2 SSD, which takes four lanes of its own? Even more difficult.
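
As a rough illustration of the lane math (every number here is an assumption that varies by CPU and board):

```python
# Illustrative PCIe lane budget; exact counts vary by platform (assumptions).
cpu_lanes = 24       # typical consumer CPU (server chips offer 64-128)
chipset_link = 4     # lanes reserved for the chipset on many platforms
m2_ssd = 4           # one NVMe drive at x4
gpus = 2             # GPUs you'd like to run at full x16

free_lanes = cpu_lanes - chipset_link - m2_ssd
needed = gpus * 16
print(f"free: {free_lanes}, needed: {needed}")  # free: 16, needed: 32 -> doesn't fit
```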

Your 1-GPU-per-machine setup is a decent approach. Using a Kubernetes cluster with device plugins is likely the best way to accomplish what you want here. It would involve setting up your cluster, then installing the GPU drivers and device plugin on each node, which exposes the device to the cluster. Then, when you create your Ollama container, the runtime’s prestart hook ensures your GPUs are exposed to the container for use.
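
As a sketch of that last step, here’s roughly what scheduling an Ollama pod with a GPU could look like using the official kubernetes Python client. The pod name, image, and port are assumptions; the nvidia.com/gpu resource is what NVIDIA’s device plugin advertises once it’s installed:

```python
# Minimal sketch, assuming the NVIDIA device plugin runs on each node
# and `pip install kubernetes` is done. Names/ports are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ollama"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="ollama",
                image="ollama/ollama:latest",
                ports=[client.V1ContainerPort(container_port=11434)],
                # Requesting the nvidia.com/gpu resource is what triggers the
                # runtime's prestart hook to expose a GPU to the container.
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```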

The issue with doing this is that 10GbE is very slow compared to your GPU’s PCIe link. You’re networking all these GPUs together to do some cool stuff, but then you’re severely bottlenecking yourself with your network. All in all, it’s not a very good plan.
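
Back-of-the-envelope, in case the gap isn’t obvious (rounded figures, ignoring protocol overhead):

```python
# Approximate bandwidth comparison between a 10GbE link and a PCIe 4.0 x16 slot.
ten_gbe = 10 / 8        # 10 Gbit/s network link -> ~1.25 GB/s
pcie4_x16 = 16 * 2.0    # PCIe 4.0 is ~2 GB/s per lane -> ~32 GB/s at x16

print(f"10GbE:        ~{ten_gbe:.2f} GB/s")
print(f"PCIe 4.0 x16: ~{pcie4_x16:.0f} GB/s")
print(f"The network link is ~{pcie4_x16 / ten_gbe:.0f}x slower")
```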
