Comment on Smaug-72B-v0.1: The New Open-Source LLM Roaring to the Top of the Leaderboard
miss_brainfarts@lemmy.blahaj.zone 9 months ago
I may need to lower it a bit more, yeah. Though when I try to use offloading, I can see that VRAM usage doesn't increase at all.
When I leave the setting at its default value of 100, on the other hand, I see VRAM usage climb until it stops because there isn't enough of it.
So I guess not all models support offloading?
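For context, here's what layer offloading looks like in code. This is a minimal sketch with llama-cpp-python, assuming that's the backend in use (the thread doesn't name the frontend); the model path and layer count are placeholders:

```python
from llama_cpp import Llama

# n_gpu_layers controls how many transformer layers get offloaded to VRAM.
# 0 keeps everything in system RAM; -1 offloads every layer. A value like
# 100 exceeds the layer count of most models, so it effectively means
# "offload everything", which is why VRAM fills up until it runs out.
llm = Llama(
    model_path="model.gguf",  # hypothetical path to a local GGUF file
    n_gpu_layers=20,          # offload only 20 layers to fit a smaller GPU
    n_ctx=2048,               # context window size
)

out = llm("Q: What is the capital of France? A:", max_tokens=16)
print(out["choices"][0]["text"])
```

If VRAM usage doesn't move at all with a nonzero layer count, the backend was likely built without GPU support, in which case the setting is silently ignored.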
Fisch@lemmy.ml 9 months ago
The models you have should be .gguf files, right? I think those are the only ones where offloading is supported.
miss_brainfarts@lemmy.blahaj.zone 9 months ago
All of them are gguf, yeah
General_Effort@lemmy.world 9 months ago
Most formats don’t support it. It has to be gguf format, afaik. You can usually find a conversion on huggingface. Prefer offerings by TheBloke for the detailed documentation, if nothing else.
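For example, here's a sketch of pulling one of those conversions down with the huggingface_hub client. The repo and file names below follow TheBloke's usual naming pattern but are assumptions, so check the actual repo page for this model:

```python
from huggingface_hub import hf_hub_download

# Repo id and filename are illustrative; GGUF repos typically ship one
# file per quantization level (Q4_K_M is a common quality/size tradeoff).
path = hf_hub_download(
    repo_id="TheBloke/Smaug-72B-v0.1-GGUF",  # hypothetical repo id
    filename="smaug-72b-v0.1.Q4_K_M.gguf",   # hypothetical filename
)
print(path)  # local cache path to the downloaded .gguf
```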