I don’t see how LLMs will get into households any time soon. It’s not economical.
I can run an LLM on my phone, on my tablet, on my laptop, on my desktop, or on my server. Heck, I could run a small model on the Raspberry Pi 5 if I wanted. And none of those devices have dedicated chips for AI.
The problem with LLMs is that they require immense compute power.
Not really, particularly if you’re talking about smaller models. Running an LLM on your GPU and sending it queries isn’t going to use more energy than gaming on that same GPU for the same amount of time.
To train, yes. But you can run a relatively simple one like Phi-3 on quite modest hardware.
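For anyone curious, here's a minimal sketch of what that looks like with llama-cpp-python and a quantized Phi-3 GGUF file, running CPU-only. The filename and thread count below are just examples, not a guaranteed setup; grab whichever quant fits your RAM.

```python
# Minimal sketch: CPU-only local inference with llama-cpp-python.
# Assumes you've already downloaded a quantized Phi-3 GGUF file;
# the exact filename here is illustrative, not a guaranteed path.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3-mini-4k-instruct-q4.gguf",  # ~2 GB of quantized weights
    n_ctx=2048,    # context window; keep it small to limit RAM use
    n_threads=4,   # CPU threads; tune to your machine
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain photosynthesis in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

A 4-bit quant of Phi-3 mini fits comfortably in a few GB of RAM, which is why even a Raspberry Pi 5 can run it, just slowly.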