Comment on Smaug-72B-v0.1: The New Open-Source LLM Roaring to the Top of the Leaderboard

L_Acacia@lemmy.one ⁨7⁩ ⁨months⁩ ago

To run this model locally at GPT-4 writing speed you need at least 2 x 3090 or 2 x 7900 XTX. VRAM is the limiting factor in 99% of cases for inference. You could try a smaller model like Mistral-Instruct or SOLAR with your hardware, though.
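To see why a 72B model calls for two 24 GB cards, here's a rough back-of-the-envelope sketch of the VRAM math (the fixed 2 GB overhead is an assumption; real usage also depends on context length, KV cache, and the runtime you use):

```python
# Rough VRAM estimate for holding an LLM's weights on GPU.
# This is a sketch, not a benchmark: KV cache, activations, and
# runtime overhead vary, so the 2 GB overhead here is an assumption.
def vram_gb(params_billion: float, bits_per_weight: float,
            overhead_gb: float = 2.0) -> float:
    """Approximate VRAM (GB) = weight storage + fixed overhead."""
    weight_gb = params_billion * bits_per_weight / 8  # 1e9 params * bits / 8 ≈ GB
    return weight_gb + overhead_gb

# A 72B model at 4-bit quantization:
print(vram_gb(72, 4))  # 38.0 -> exceeds one 24 GB card, fits across 2 x 3090
# A 7B model like Mistral at 4-bit fits on far more modest hardware:
print(vram_gb(7, 4))   # 5.5
```

At 4-bit the weights alone are ~36 GB, so a single 24 GB consumer card can't hold them, which is why the comment suggests two 3090s or 7900 XTXs (48 GB combined).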
