Comment on Smaug-72B-v0.1: The New Open-Source LLM Roaring to the Top of the Leaderboard
FaceDeer@kbin.social 9 months ago
And at 72 billion parameters it's something you can run on a beefy but not special-purpose graphics card.
glimse@lemmy.world 9 months ago
Based on the other comments, it seems like this needs 4x as much RAM as any consumer card has.
FaceDeer@kbin.social 9 months ago
It hasn't been quantized, then. I've run 70B models on my consumer graphics card at a reasonably good tokens-per-second rate.
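The memory math behind both comments can be sketched quickly. This is a rough, illustrative estimate (the function name and figures are mine, not from the thread): weight storage is roughly parameter count times bits per weight, ignoring KV-cache and activation overhead, so treat the results as lower bounds.

```python
# Back-of-envelope VRAM needed just to hold an LLM's weights.
# Illustrative only: real inference adds KV-cache and activation memory.

def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate gigabytes of weight storage."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# A 72B model at fp16 vs. a common 4-bit quantization:
print(round(weight_gb(72, 16)))  # ~144 GB -- far beyond any consumer GPU
print(round(weight_gb(72, 4)))   # ~36 GB -- feasible with quantization
                                 #          and partial CPU offload
```

This is why the two comments aren't in conflict: the unquantized model needs several times more memory than a 24 GB consumer card offers, while a 4-bit quantization plus offloading some layers to system RAM brings a 70B-class model within reach.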
DarkThoughts@fedia.io 9 months ago
I'm curious how local generation goes with potentially dedicated AI extensions using stuff like tensor cores and their own memory instead of hijacking parts of consumer GPUs for this.