Comment on Apple reveals M3 Ultra, taking Apple silicon to a new extreme

KingRandomGuy@lemmy.world 5 weeks ago

This type of thing is mostly used for inference with extremely large models, where a single GPU has far too little VRAM to even load the model into memory. I doubt people expect this to perform particularly fast; they just want to get the model to run at all.
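The point is easy to check with back-of-envelope arithmetic: weights alone at 2 bytes per parameter dwarf any single consumer GPU's VRAM. A minimal sketch (the model size and VRAM figures below are illustrative assumptions, not from the comment):

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# Assumed example: a 405B-parameter model at fp16 (2 bytes/param)
needed = weight_memory_gb(405, 2)
gpu_vram = 24  # GiB, typical high-end consumer GPU (assumption)
print(f"weights: {needed:.0f} GiB vs {gpu_vram} GiB VRAM")
```

Even before counting the KV cache and activations, the weights alone exceed a single consumer card's VRAM by more than an order of magnitude, which is why a machine with hundreds of gigabytes of unified memory is attractive here.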
