Comment on "AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather"

SupraMario@lemmy.world 1 year ago
Uhh what? You can totally run LLMs locally.

Jeremyward@lemmy.world 1 year ago
I have Llama 2 running on localhost. You need a fairly powerful GPU, but it can totally be done.
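One common way to do what Jeremyward describes is the Hugging Face transformers library. Here is a minimal sketch, assuming you have accepted the gated meta-llama/Llama-2-7b-chat-hf license on the Hub and have a GPU with roughly 16 GB of memory for the 7B model in fp16:

```python
# Minimal local Llama 2 inference sketch using Hugging Face transformers.
# Assumes the accelerate package is installed (needed for device_map="auto")
# and that access to the gated meta-llama checkpoint has been granted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory use vs. fp32
    device_map="auto",          # place layers on the available GPU(s)
)

prompt = "Why does local LLM inference need a powerful GPU?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```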
SailorMoss@sh.itjust.works 1 year ago
I’ve run one of the smaller models on my i7-3770 with no GPU acceleration. It’s painfully slow, but not unusable.
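For CPU-only setups like SailorMoss describes, one common route is a 4-bit quantized model run through llama-cpp-python. A minimal sketch, assuming a small GGUF model file already downloaded locally (the path below is a placeholder):

```python
# CPU-only inference sketch with llama-cpp-python.
# The GGUF path is hypothetical; any small quantized chat model works.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/tinyllama-1.1b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,   # context window size
    n_threads=4,  # an i7-3770 has 4 physical cores
)

out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"])
```

Quantization shrinks the weights enough to fit in system RAM, which is what makes CPU inference feasible at all; token generation is still bound by memory bandwidth, hence the slowness.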
jcdenton@lemy.lol 1 year ago
To get the same level as something like ChatGPT?
MooseBoys@lemmy.world 1 year ago
Inference, yes. Training, no. Derived models don’t count.