Comment on ROCm on older generation AMD GPUs

panda_abyss@lemmy.ca · 20 hours ago

I don’t know how Immich’s ML works, but if you’re running LLMs, stick to llama.cpp.

Beyond that, I’ve hit serious kernel bugs with PyTorch and ONNX that are still unresolved. For me, the most popular ML/AI frameworks basically don’t work because of driver issues.

Vulkan workflows are fine and generally comparable in speed so far, so if there’s a Vulkan option, try ROCm first and fall back to Vulkan if it misbehaves.
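For what it’s worth, here’s a minimal sketch of driving llama.cpp from Python via the llama-cpp-python bindings (my choice of client, not something named above; the model path is hypothetical). The point is that the GPU backend, HIP/ROCm or Vulkan, is picked when the underlying llama.cpp library is compiled, so the calling code stays the same whichever one you end up on.

```python
# Minimal sketch using llama-cpp-python (assumed bindings, not named above).
# The backend (HIP/ROCm or Vulkan) is selected when llama.cpp itself is built,
# so this script is identical for either backend.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example.gguf",  # hypothetical GGUF model file
    n_gpu_layers=-1,                     # offload all layers to the GPU
    n_ctx=4096,                          # context window size
)

out = llm("Q: Name one AMD GPU compute backend.\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```

If the ROCm build acts up on an older card, rebuilding llama.cpp with its Vulkan backend and reinstalling the bindings is the fallback I mean.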
