Comment on Recommendations on running GPTs on Asahi - M1 Ultra?

moonpiedumplings@programming.dev 9 months ago

The tl;dr as I understand it is that Apple M1/M2 devices use unified memory: the GPU shares the same physical RAM as the CPU rather than having its own separate VRAM. Because of that sharing, LLM models too large for a typical discrete GPU's VRAM can still run on the GPU of these chips, with ordinary system RAM serving as their "VRAM".

llama.cpp is the software people use to do this. I can't find the original guide/article I looked at, but here is a GitHub gist where the commenters have posted benchmarks:

gist.github.com/…/e8d4cb0c4b1df6cc47ce8b18457ebde…
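
For reference, here is a minimal sketch of what this typically looks like on an Apple Silicon Mac running macOS. The model filename is a placeholder, and the binary name has changed across llama.cpp versions (`main` in older releases, `llama-cli` in newer ones), so check the README for your checkout. Note that the Metal GPU backend used here is macOS-specific; on Asahi Linux you would need a different backend.

```
# Build llama.cpp with the Metal backend so inference runs on the GPU.
# Recent versions enable Metal by default on macOS; older ones used
# LLAMA_METAL=1 make instead.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run a quantized model (the .gguf filename is a placeholder).
# -ngl (--n-gpu-layers) sets how many layers to offload to the GPU;
# a large value like 99 offloads effectively all of them.
./main -m models/llama-2-7b.Q4_K_M.gguf -ngl 99 -p "Hello, world"
```

Because the GPU's "VRAM" here is just the machine's RAM, the practical model-size limit is the Mac's total memory rather than the few gigabytes on a typical discrete card.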
