Comment on "Local AI is one step closer through Mistral-NeMo 12B"
tau@lemmings.world · 3 months ago

Just beware that, like AMD, Intel GPUs suffer a performance hit when running LLMs, because of the CUDA-specific optimizations in frameworks like llama.cpp.
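If you want to see how much of the model actually ends up on your GPU, the Python bindings (llama-cpp-python) are an easy way to check. A minimal sketch, assuming your wheel was compiled against the backend that matches your hardware (CUDA for NVIDIA, ROCm/hipBLAS for AMD, SYCL or Vulkan for Intel) and that you have a GGUF quantization of NeMo 12B at the path below (the file name is hypothetical):

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# Backend support depends on how the package was built; flag names and
# performance vary between CUDA and the non-NVIDIA backends.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-nemo-12b-q4_k_m.gguf",  # hypothetical GGUF path
    n_gpu_layers=-1,   # offload all layers; lower this if VRAM runs out
    n_ctx=4096,        # context window size
    verbose=True,      # logs which backend and how many layers were offloaded
)

out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

With `verbose=True` the load log tells you whether layers landed on the GPU at all, which makes the NVIDIA-vs-AMD/Intel gap easy to measure on your own machine rather than taking benchmark numbers on faith.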