Comment on Smaug-72B-v0.1: The New Open-Source LLM Roaring to the Top of the Leaderboard

rs137@lemmy.world 9 months ago

Llama 2 70B with 8-bit quantization takes around 80 GB of VRAM, if I remember correctly. I tested it a while ago.
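That figure is consistent with a back-of-the-envelope estimate: at 8-bit quantization each parameter takes about one byte, so 70B parameters need roughly 70 GB for the weights alone, plus extra for activations and the KV cache. A minimal sketch (the 20% overhead factor here is an assumption, not a measured value):

```python
# Rough VRAM estimate for a quantized LLM: weights take
# (params * bits / 8) bytes, plus overhead for activations
# and the KV cache. The 20% overhead is an assumed ballpark.
def estimate_vram_gb(n_params_billion: float,
                     bits_per_param: int = 8,
                     overhead: float = 0.2) -> float:
    weights_gb = n_params_billion * bits_per_param / 8
    return weights_gb * (1 + overhead)

print(round(estimate_vram_gb(70), 1))  # 70B params at 8-bit -> 84.0
```

The same formula explains why 4-bit quants of 70B models fit on two 24 GB consumer GPUs, while fp16 would need roughly 140 GB for weights alone.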
