Comment on "What budget-friendly GPU for local AI workloads should I aim for?"
MalReynolds@slrpnk.net 1 week ago
Nah, NVLink is irrelevant for inference workloads (inference happens almost entirely on the cards themselves; the model is split across multiple GPUs and only activations/tokens are piped over PCIe as needed). It's mildly useful for training, but you'll get there without it.
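The "PCIe is enough" claim can be sanity-checked with a back-of-envelope calculation. The numbers below are illustrative assumptions (an 8192-wide hidden state in fp16, roughly a 70B-class model, and ~32 GB/s of usable PCIe 4.0 x16 bandwidth), not measurements: when a model is split layer-wise across cards, only the activation vector for each token crosses the link, and that's tiny compared to link bandwidth.

```python
# Back-of-envelope: bandwidth needed to pass activations between GPUs
# during split-model (pipeline-style) inference. All numbers are
# illustrative assumptions, not benchmarks.

hidden_size = 8192            # assumed activation width at the split point
bytes_per_value = 2           # fp16
act_bytes_per_token = hidden_size * bytes_per_value  # crosses the link once per token per split

pcie4_x16_bytes_per_s = 32e9  # ~32 GB/s usable PCIe 4.0 x16 (approximate)
tokens_per_sec_headroom = pcie4_x16_bytes_per_s / act_bytes_per_token

print(f"{act_bytes_per_token} bytes per token per split")
print(f"~{tokens_per_sec_headroom:,.0f} tokens/s of link headroom")
```

Even at these assumed sizes, each token costs only ~16 KB per split, so the PCIe link can shuttle millions of tokens per second; actual generation speeds are orders of magnitude lower, which is why NVLink buys you nothing for inference.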