That’s got to be it. Cloud compute is expensive when you’re not being funded in Azure credits. Once the dust settles from the AI bubble bursting, most of the AI we’ll see will probably be specialized agents running small models locally.
fmstrat@lemmy.nowsci.com 2 days ago
I’m still running Qwen2.5-Coder 32B on a Mac mini. Works great; a little slow, but fine.
And009@lemmynsfw.com 1 day ago
I’m somewhat tech savvy. How do I run an LLM locally? Any suggestions? And how do I know my local data is safe?
Llak@lemmy.world 1 day ago
Check out LM Studio (lmstudio.ai), and you can pair it with the Continue extension for VS Code (docs.continue.dev/getting-started/overview).
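If you’d rather script against it, LM Studio can also run a local OpenAI-compatible server (port 1234 by default). A rough sketch, using the `openai` Python package pointed at localhost; the model name here is just a placeholder for whatever you’ve loaded, and nothing leaves your machine:

```python
# Minimal sketch: querying LM Studio's local OpenAI-compatible server.
# Assumes LM Studio is running with its local server enabled (default
# port 1234) and a model already loaded in the app.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local endpoint
    api_key="lm-studio",  # any non-empty string; no real key is needed locally
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is loaded
    messages=[{"role": "user", "content": "Explain what a closure is in Python."}],
)
print(response.choices[0].message.content)
```

Since the endpoint mimics the OpenAI API, most existing tooling (including Continue) can be pointed at it just by changing the base URL.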
Retro_unlimited@lemmy.world 1 day ago
I have been using a program called GPT4All, and you can download many models and run them locally. At startup it asks whether you want to share data. I select no and use it offline anyway.
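GPT4All also ships Python bindings (`pip install gpt4all`) if you want to use it from a script instead of the desktop app. A rough sketch; the model filename is an example from their catalog, and the library downloads it on first use if it isn’t already on disk:

```python
# Minimal sketch using the GPT4All Python bindings.
# The model file below is an example; substitute any GGUF model
# from the GPT4All catalog. Everything runs locally.
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # example model file

# chat_session() keeps conversational context between generate() calls.
with model.chat_session():
    print(model.generate("Why might someone run an LLM offline?", max_tokens=200))
```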