Comment on "Can local LLMs be as useful and insightful as those widely available?"

tal@lemmy.today 5 weeks ago

I want someone to prove their LLM can be as insightful and accurate as a paid one.

I mean, you can train a domain-specific model for a niche that no commercial provider addresses with a proprietary model. A model can only store so much information, and you can choose to weight that information toward what matters to you. Or providers may simply not offer a model in the field you want to deal with at all.

But I don’t think that a random individual user who just wants a general-purpose chatbot is going to get better performance out of something self-hosted. It will probably also cost more in hardware, since local hardware is unlikely to be kept saturated, whereas a provider’s hardware is shared across many users.

I think the top reason for wanting to run an LLM locally is the one you ruled out: privacy. You aren’t leaking information to someone else’s computers.

Some other possible benefits:
