Comment on: Is it possible to run an LLM on a mini-PC like the GMKtec K8 or K9?

1rre@discuss.tchncs.de ⁨3⁩ ⁨months⁩ ago

Intel Arc also works surprisingly well and consistently for ML if you use llama.cpp for LLMs or AUTOMATIC1111 for Stable Diffusion; in terms of usability it’s definitely much closer to Nvidia than it is to AMD.
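For anyone curious what the llama.cpp route looks like on Arc: a rough sketch below builds llama.cpp with its SYCL backend (which targets Intel GPUs via oneAPI) and runs a model with all layers offloaded. Exact flags can shift between llama.cpp versions, and the model path here is just a placeholder, so treat this as a starting point rather than a recipe.

```shell
# Assumes the Intel oneAPI Base Toolkit is installed (provides icx/icpx and SYCL).
source /opt/intel/oneapi/setvars.sh

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Build with the SYCL backend so ggml can use the Arc GPU.
cmake -B build -DGGML_SYCL=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release -j

# Run inference; -ngl 99 offloads all layers to the GPU.
# model.gguf is a placeholder for whatever quantized model you downloaded.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
```

If the SYCL build gives you trouble, the Vulkan backend (`-DGGML_VULKAN=ON`) is another option that works on Arc without the oneAPI toolchain.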
