Comment on NVIDIA’s new AI chatbot runs locally on your PC
simple@lemm.ee 10 months ago
[deleted]
Steve@communick.news 10 months ago
There are a number of local LLMs that run on any modern CPU. No GPU needed at all, let alone RTX.
halfwaythere@lemmy.world 10 months ago
This statement is so wrong. I have llama.cpp with a Llama 2 model running decently on a GTX 970. Is it super fast? No. Is it usable? Yes, absolutely.
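If anyone wants to try it, here's a rough sketch of that setup with llama-cpp-python; the model file name and layer count are just examples you'd tune to the 970's 4 GB of VRAM:

```python
# Sketch: quantized Llama 2 with partial GPU offload via llama-cpp-python.
# Model path and n_gpu_layers are illustrative; pick whatever fits in 4 GB.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",  # example quantized GGUF file
    n_ctx=2048,        # context window
    n_gpu_layers=20,   # offload ~20 layers to the GPU, rest run on CPU
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```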
jvrava9@lemmy.dbzer0.com 10 months ago
Source?
dojan@lemmy.world 10 months ago
There were CUDA cores before RTX. I can run LLMs on my CPU just fine.
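CPU-only is the same idea with offload disabled; a minimal sketch, again assuming an example quantized GGUF model:

```python
# Sketch: CPU-only inference with llama-cpp-python. n_gpu_layers=0 keeps
# every layer on the CPU, so no CUDA/RTX hardware is needed at all.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",  # example model file
    n_ctx=2048,
    n_gpu_layers=0,  # all layers on CPU
)

print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```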