Comment on Consumer GPUs to run LLMs

marauding_gibberish142@lemmy.dbzer0.com 1 week ago

Wait, how does that work? How is 24 GB enough for a 38B model?
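For context, a rough back-of-envelope sketch of the usual answer, assuming the model is quantized to around 4 bits per weight (e.g. a GGUF Q4-style format; the thread itself doesn't specify which quantization is meant):

```python
# Back-of-envelope VRAM estimate for a 38B-parameter model.
# Assumption (not stated in the thread): weights quantized to roughly
# 4 bits each, with ~0.5 bits/param of per-block overhead, as in
# common Q4-style quantization formats.

PARAMS = 38e9  # 38 billion parameters

def weight_vram_gib(bits_per_param: float) -> float:
    """Approximate VRAM needed for the weights alone, in GiB."""
    return PARAMS * bits_per_param / 8 / 1024**3

for label, bits in [("FP16", 16.0), ("Q8", 8.0), ("Q4 (~4.5 bits incl. overhead)", 4.5)]:
    print(f"{label:>30}: {weight_vram_gib(bits):5.1f} GiB")

# FP16 weights alone would need ~71 GiB, and even 8-bit is ~35 GiB,
# but at ~4.5 bits/param the weights shrink to roughly 20 GiB --
# leaving a few GiB of a 24 GiB card for the KV cache and activations.
```

So the short version: at 4-bit quantization the weights take roughly half a byte per parameter, which is how a 38B model can squeeze into a 24 GB card, at some cost in output quality and with limited headroom for long contexts.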
