Comment on Smaug-72B-v0.1: The New Open-Source LLM Roaring to the Top of the Leaderboard
Corngood@lemmy.ml 9 months ago
I'm more experienced with graphics than language models, but wouldn't that cause a significant increase in computation time, since those aren't native types for arithmetic? Maybe that's not a big problem?
If you have a link for the paper, I'd like to check it out.
L_Acacia@lemmy.one 9 months ago
The paper was posted on /r/localLLaMA. You can take a look at the exllama and llama.cpp source code on GitHub if you want to see how it is implemented.
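For a rough idea of what that implementation looks like, here is a simplified sketch in the spirit of llama.cpp's Q4_0 scheme: weights are stored in blocks of 32 as 4-bit integers plus one per-block scale, and are expanded back to ordinary floats right before the math happens. The struct layout, names, and nibble ordering here are illustrative, not copied from the real ggml code.

```cpp
#include <cstdint>
#include <cstdio>

// Simplified sketch of 4-bit block quantization, loosely modelled on
// llama.cpp's Q4_0 format (32 weights per block, one scale per block).
// Field names and layout are illustrative, not the real ggml structs.
constexpr int kBlockSize = 32;

struct BlockQ4 {
    float scale;                 // per-block scale (the real code uses fp16)
    uint8_t qs[kBlockSize / 2];  // 32 weights packed two per byte (4 bits each)
};

// Expand one block back to 32 floats: w = (q - 8) * scale.
void dequantize_block(const BlockQ4& b, float* out) {
    for (int j = 0; j < kBlockSize / 2; ++j) {
        int lo = (b.qs[j] & 0x0F) - 8;  // low nibble  -> value in [-8, 7]
        int hi = (b.qs[j] >> 4) - 8;    // high nibble -> value in [-8, 7]
        out[2 * j]     = lo * b.scale;
        out[2 * j + 1] = hi * b.scale;
    }
}

int main() {
    BlockQ4 b{0.05f, {}};
    for (auto& q : b.qs) q = 0x9C;  // arbitrary packed values for the demo
    float w[kBlockSize];
    dequantize_block(b, w);
    std::printf("first two weights: %f %f\n", w[0], w[1]);
}
```

The matrix multiplies then run on regular floats (or on integer dot products with the scale applied afterwards), which is roughly why the non-native storage type doesn't hurt throughput as much as you'd expect.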
FaceDeer@kbin.social 9 months ago
My understanding is that the bottleneck for the GPU is moving data into and out of it, not the processing of the data once it's in there. So if you can get the whole model crammed into VRAM it's still faster even if you have to do some extra work unpacking and repacking it during processing time.
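To put rough numbers on that (mine, not from the thread or the paper): during decoding each weight has to be streamed from memory about once per generated token, so the model's byte size divided into the memory bandwidth gives a crude upper bound on speed, and spilling out of VRAM into system RAM over PCIe cuts that bandwidth by an order of magnitude or more. The 1000 GB/s figure and the bit-widths below are assumptions for illustration.

```cpp
#include <cstdio>

// Back-of-envelope sketch of why fitting the weights in VRAM matters.
// Decoding is roughly memory-bandwidth bound: every weight is read about
// once per generated token, so tokens/s <= bandwidth / model_bytes.
// The 1000 GB/s bandwidth is an assumed round number, not any GPU's spec.
int main() {
    const double params = 72e9;            // Smaug-72B parameter count
    const double bandwidth_gb_s = 1000.0;  // assumed GPU memory bandwidth

    const double bits_per_weight[] = {16.0, 8.0, 4.0};
    for (double bits : bits_per_weight) {
        double gb = params * bits / 8.0 / 1e9;  // weight footprint in GB
        double tok_s = bandwidth_gb_s / gb;     // rough upper bound
        std::printf("%4.1f-bit: %6.1f GB of weights, <= %.1f tokens/s streamed from VRAM\n",
                    bits, gb, tok_s);
    }
    return 0;
}
```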