cross-posted from: lemmy.world/post/27088416
This is an update to a previous post found at lemmy.world/post/27013201
Ollama uses the AMD ROCm library, which can be made to work with many AMD GPUs not listed as compatible by forcing a specific LLVM target.
The original Ollama documentation is wrong here: the override below cannot be set for individual GPUs, only for all or none, as shown at github.com/ollama/ollama/issues/8473
AMD GPU issue fix
- Check that your GPU is not already listed as compatible at github.com/ollama/ollama/blob/main/docs/gpu.md#linux-support
- Edit the Ollama service file. This uses the text editor set in the `$SYSTEMD_EDITOR` environment variable.

```
sudo systemctl edit ollama.service
```
- Add the following, then save and exit. You can try different versions as shown at github.com/ollama/ollama/blob/main/docs/gpu.md#ov…
```
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
```
- Restart the Ollama service.

```
sudo systemctl restart ollama
```
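To choose an override value, it helps to see which gfx target your GPU actually reports and to confirm the drop-in took effect. A minimal sketch, assuming the ROCm `rocminfo` utility is installed; exact log output varies by version:

```
# Show the gfx target your GPU reports (e.g. gfx1031, which maps to 10.3.1)
rocminfo | grep -i gfx

# Confirm the systemd drop-in is part of the effective unit
systemctl cat ollama.service

# Follow the service logs while loading a model to verify the GPU is used
journalctl -u ollama -f
```

RDNA2 cards reporting gfx1031 or gfx1032, for example, are typically close enough to gfx1030 that the 10.3.0 override works.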
possiblylinux127@lemmy.zip 2 weeks ago
I would run it in a Podman container with the GPU passed through.
30p87@feddit.org 2 weeks ago
Why not throw that into a VM with VFIO passthrough, plug the GPU in via an external dock, and, if we're already abstracting shit away for unnecessary complexity and non-compatibility, do all that on Windows?
possiblylinux127@lemmy.zip 2 weeks ago
Because that is way more complicated?
It is really easy to run ollama in a container.
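For reference, a minimal sketch of that setup, assuming the official `docker.io/ollama/ollama:rocm` image and that `/dev/kfd` and `/dev/dri` exist on the host; the override variable is only needed for GPUs that are not officially supported:

```
# Run Ollama in a Podman container with the AMD GPU passed through
podman run -d \
  --name ollama \
  --device /dev/kfd \
  --device /dev/dri \
  -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  docker.io/ollama/ollama:rocm
```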
exu@feditown.com 2 weeks ago
Nested VMs stay performant about three levels deep, so do that as well.