Comment on What budget friendly GPU for local AI workloads should I aim for?

MalReynolds@slrpnk.net 1 week ago

Nah, NVLink is irrelevant for inference workloads: inference happens almost entirely on the cards themselves. The model is split across multiple GPUs, and only the small per-token activations get piped over PCIe as needed. It’s mildly useful for training, but you’ll get there without it.
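To make that concrete, here’s a toy pure-Python sketch (no real GPU libraries; every name here is hypothetical) of pipeline-split inference. The point it illustrates: the weights stay resident on each card, and only a small activation vector crosses the inter-GPU link per token, so PCIe bandwidth is rarely the bottleneck.

```python
# Toy sketch of pipeline-split inference (hypothetical, no real GPUs).
# Each "stage" holds its own layer weights; only the small activation
# vector crosses the "PCIe" boundary between stages per token.

def make_layer(scale):
    # Stand-in for a transformer layer whose weights live on one GPU.
    def layer(activations):
        return [a * scale for a in activations]
    return layer

# Stage 0 lives on GPU 0, stage 1 on GPU 1.
stage0 = [make_layer(2.0), make_layer(0.5)]
stage1 = [make_layer(3.0)]

def forward(token_embedding):
    acts = token_embedding
    for layer in stage0:          # runs entirely on "GPU 0"
        acts = layer(acts)
    # Only `acts` crosses the link here (a few KB per token in practice),
    # not the gigabytes of weights -- which is why NVLink rarely matters
    # for inference.
    transferred_bytes = len(acts) * 4   # assume fp32 activations
    for layer in stage1:          # runs entirely on "GPU 1"
        acts = layer(acts)
    return acts, transferred_bytes

out, nbytes = forward([1.0, 2.0])
print(out, nbytes)  # → [3.0, 6.0] 8
```

Per token, only 8 bytes move between the two toy stages, while the “weights” never leave their stage. Real frameworks (e.g. tensor- or pipeline-parallel inference) follow the same pattern at scale.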
