Comment on NVIDIA’s new AI chatbot runs locally on your PC

GenderNeutralBro@lemmy.sdf.org ⁨8⁩ ⁨months⁩ ago

Pretty much every LLM you can download already has CUDA support via PyTorch.
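As a minimal sketch of what that looks like in practice (assuming PyTorch is installed; the model variable is hypothetical), checking for CUDA and selecting a device is a one-liner:

```python
import torch

# Ask PyTorch whether a CUDA-capable GPU is visible; fall back to CPU if not.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")

# A loaded PyTorch model can then be moved to that device, e.g.:
# model = model.to(device)
```

The friction the comment describes comes less from this check and more from getting a CUDA-enabled PyTorch build matched to the installed driver in the first place.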

However, some of the easier-to-use frontends don’t use GPU acceleration because it’s a pain to configure across a wide range of hardware models and driver versions. IIRC GPT4All does not use GPU acceleration yet (this might be outdated; I haven’t checked in a while).

If this makes local LLMs more accessible to people who are not familiar with setting up a CUDA development environment or Python venvs, that’s great news.
