Why do people host LLMs at home, when scraping and processing enough internet data to train your own model will never be remotely as efficient as just sending a paid prompt to a high-quality official model?
inb4 privacy concerns or "it's a proof of concept": those are out of the discussion. I want someone to prove their LLM can be as insightful and accurate as a paid one. I don't care about anything other than the quality of the generated answers.
nagaram@startrek.website 2 hours ago
Are you using LLMs as search engines?
Bold.
I use Gemma, Llama 3.2, and DeepSeek to fix formatting, summarize documentation into commands for Linux software, and write simple code scaffolding that I then refine into working code.
Sure, it takes longer to generate than cloud compute would, but 1) privacy, obviously, and 2) it feels better environmentally. I honestly don't know whether that's true, but it objectively touches fewer computers for such simple tasks; routing them over the web would be a waste of infrastructure.
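For anyone wondering what "local" looks like in practice, here's a minimal sketch of prompting a self-hosted model over HTTP. It assumes an Ollama server on its default port, which is one common way to run Gemma/Llama/DeepSeek locally; the model tag and prompt are placeholders, not necessarily the setup described above.

```python
import requests

# Minimal sketch: prompt a locally hosted model via Ollama's REST API.
# Assumes `ollama serve` is running on the default port 11434 and that
# a model tagged "llama3.2" has already been pulled (both assumptions).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Summarize this man page section into a one-line command: ...",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

The whole round trip stays on your own machine: no account, no API key, no third-party servers in the loop.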