Comment on NVIDIA’s new AI chatbot runs locally on your PC
RobotToaster@mander.xyz 10 months ago
Shame they leave GTX owners out in the cold again.
Kyrgizion@lemmy.world 10 months ago
2xxx too. It’s only available for 3xxx and up.
anlumo@lemmy.world 10 months ago
The whole point of the project was to use the Tensor cores. There are a ton of other implementations for regular GPU acceleration.
CeeBee@lemmy.world 10 months ago
Just use Ollama with Ollama WebUI
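For anyone who wants to try that route, here's a minimal sketch using Ollama's official Python client. It assumes the Ollama server is already running locally and the model name is just an example of something you've pulled:

```python
# pip install ollama -- talks to a locally running Ollama server
import ollama

# "llama2" is an example; use whatever model you've pulled with `ollama pull`.
response = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": "Why run an LLM locally?"}],
)
print(response["message"]["content"])
```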
simple@lemm.ee 10 months ago
dojan@lemmy.world 10 months ago
There were CUDA cores before RTX. I can run LLMs on my CPU just fine.
Steve@communick.news 10 months ago
There are a number of local LLM runtimes that run on any modern CPU. No GPU needed at all, let alone an RTX card.
halfwaythere@lemmy.world 10 months ago
This statement is so wrong. I have llama.cpp with a Llama 2 model running decently on a GTX 970. Is it super fast? No. Is it usable? Yes, absolutely.
jvrava9@lemmy.dbzer0.com 10 months ago
Source?