Comment on Why are Google's Assistant(s) so bad nowadays?
badlotus@discuss.online 5 weeks ago

Have you heard of Ollama? It’s an LLM engine that you can run at home. The speed, model size, context length, etc. that you can achieve really depends on your hardware. I’m using a low-to-mid-range graphics card and 32GB of RAM and get decent performance. Not lightning quick like ChatGPT, but fine for simple tasks.
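For anyone curious, getting started is just a couple of commands once Ollama is installed. A rough sketch (model names and sizes vary; `llama3.2` is just one example from Ollama's model library, pick whatever fits your VRAM):

```shell
# Download a model from the Ollama library (a few GB, depending on the model)
ollama pull llama3.2

# Start an interactive chat session with it in the terminal
ollama run llama3.2

# List the models you have downloaded locally
ollama list
```

It also exposes a local HTTP API (on port 11434 by default), so you can point other tools at it instead of a cloud service.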