Comment on What can I actually do with 64 GB of RAM?
Mubelotix@jlai.lu 6 days ago
You can run a very decent LLM with that tbh
zkfcfbzr@lemmy.world 6 days ago
Fair, I didn’t realize that. My GPU is a 1060 6 GB so I won’t be running any significant LLMs on it. This PC is pretty old at this point.
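A rough way to size this: quantized weights take roughly params × bits-per-param ÷ 8 bytes, so a 4-bit 8B model is ~4–5 GB before the KV cache - tight on a 6 GB card, but easy in 64 GB of system RAM. A back-of-envelope sketch (the ~4.5 bits/param figure is an assumption for a typical Q4-style quant):
```python
# Back-of-envelope size of quantized model weights.
# Assumes ~4.5 bits/parameter (typical for Q4-style quants); the KV
# cache and runtime buffers add more on top and aren't counted here.
def weight_gb(params_billion: float, bits_per_param: float = 4.5) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

for n_params in (8, 12, 70):
    print(f"{n_params}B @ ~4.5 bits/param: ~{weight_gb(n_params):.1f} GB of weights")
```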
fubbernuckin@lemmy.dbzer0.com 5 days ago
You could potentially run some smaller MoE models as they don’t take up too much memory while running. I’d suspect the deepseek r1 8B distill with some quantization would work well.
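For anyone who wants to try that route, here's a minimal sketch using llama-cpp-python, assuming you've downloaded a quantized GGUF of the distill - the file name, layer count, and context size below are placeholders to adjust:
```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
# The GGUF path is a placeholder - point it at whatever quant you download.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf",  # placeholder
    n_ctx=4096,       # context window; larger values grow the KV cache
    n_gpu_layers=20,  # offload part of the model to a small GPU; 0 = CPU only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize how MoE models save memory."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```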
zkfcfbzr@lemmy.world 5 days ago
I tried out the 8B deepseek and found it pretty underwhelming - the responses were borderline unrelated to the prompts at times. The smallest model I got any respectable output from was the 12B, which I was even able to run at a somewhat usable speed.
fubbernuckin@lemmy.dbzer0.com 5 days ago
Ah, that’s probably fair, I haven’t run many of the smaller models yet.