Comment on Smaug-72B-v0.1: The New Open-Source LLM Roaring to the Top of the Leaderboard
FaceDeer@kbin.social 9 months ago
It's been discovered that you can reduce the bits per parameter down to 4 or 5 and still get good results. Just saw a paper this morning describing a technique to get down to 2.5 bits per parameter, even, and apparently it's fine. We'll see if that works out in practice, I guess.
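As a rough illustration of what low-bit quantization means (a minimal NumPy sketch for this comment, not the 2.5-bit method from the paper, which isn't described here): each block of weights is stored as small integers plus one float scale, so per-parameter storage drops from 16 bits toward roughly 4.5 once the integers are packed.

```python
import numpy as np

def quantize_block(weights, bits=4):
    """Quantize a 1-D block of float weights to signed `bits`-bit integers plus one scale."""
    qmax = 2 ** (bits - 1) - 1                      # 7 for 4-bit
    amax = np.abs(weights).max()
    scale = amax / qmax if amax > 0 else 1.0
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, np.float32(scale)

def dequantize_block(q, scale):
    """Approximate reconstruction of the original floats."""
    return q.astype(np.float32) * scale

# 64 weights + one float32 scale ~= 4.5 bits/parameter once the 4-bit ints are packed
rng = np.random.default_rng(0)
block = rng.normal(scale=0.02, size=64).astype(np.float32)
q, scale = quantize_block(block)
print("max abs error:", np.abs(block - dequantize_block(q, scale)).max())
```

The error per weight stays small relative to typical weight magnitudes, which is why models tolerate it; the practical schemes add refinements (per-block zero points, outlier handling) on top of this basic idea.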
Corngood@lemmy.ml 9 months ago
I’m more experienced with graphics than LLMs, but wouldn’t that cause a significant increase in computation time, since those aren’t native types for arithmetic? Maybe that’s not a big problem?
If you have a link for the paper I’d like to check it out.
FaceDeer@kbin.social 9 months ago
My understanding is that the bottleneck for the GPU is moving data into and out of it, not the processing of the data once it's in there. So if you can get the whole model crammed into VRAM it's still faster even if you have to do some extra work unpacking and repacking it during processing time.
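To make the "extra work unpacking" concrete, here is a minimal sketch (illustration only, not exllama's or llama.cpp's actual kernels): two 4-bit weights get packed into one byte, then unpacked again at compute time. The arithmetic cost is a few shifts and masks, while the bytes that have to be moved from memory are halved compared with int8.

```python
import numpy as np

def pack_4bit(q):
    """Pack signed 4-bit values (-8..7) pairwise into uint8, low nibble first."""
    u = (q.astype(np.int16) & 0x0F).astype(np.uint8)   # two's-complement nibbles
    return (u[0::2] | (u[1::2] << 4)).astype(np.uint8)

def unpack_4bit(packed):
    """Unpack uint8 bytes back into signed 4-bit values."""
    low = (packed & 0x0F).astype(np.int8)
    high = (packed >> 4).astype(np.int8)
    # sign-extend the nibbles: values >= 8 represent negatives
    low = np.where(low >= 8, low - 16, low)
    high = np.where(high >= 8, high - 16, high)
    out = np.empty(packed.size * 2, dtype=np.int8)
    out[0::2], out[1::2] = low, high
    return out

q = np.array([3, -2, 7, -8, 0, 5], dtype=np.int8)
packed = pack_4bit(q)
print(packed.nbytes, "bytes packed vs", q.nbytes, "bytes unpacked")
assert np.array_equal(unpack_4bit(packed), q)
```

In real inference kernels the unpacking typically happens in registers right before the matrix multiply, so the weights never need to be expanded in VRAM; the few extra instructions are cheap next to the memory traffic they save.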
The paper was posted on /r/localLLaMA.
L_Acacia@lemmy.one 9 months ago
You can take a look at the exllama and llama.cpp source code on GitHub if you want to see how it's implemented.