Comment on Meta execs obsessed over beating OpenAI's GPT-4 internally, court filings reveal
possiblylinux127@lemmy.zip 6 days agoHonestly, I think Meta is focusing on the wrong thing. We don’t necessarily need a crazy powerful model. What we really need is efficiency. They should focus on models that are small to medium in size and highly efficient.
ChatGPT is old news and is getting far less media attention. Being the “top dog” in AI doesn’t mean much.
pupbiru@aussie.zone 5 days ago
There are efficient, self-hostable models. I believe Phi can run on mobile devices without too much trouble?
theneverfox@pawb.social 5 days ago
That’s basically what’s downstream from an open-source model. Llama derivatives are what I use on my mid-range gaming computer, and honestly they’re comparable. They can handle fewer details at a time, but they’re faster and way more efficient… Once you add in RAG and tool use, they’re better than models 200x their size.
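To make the RAG point concrete, here is a toy sketch of the retrieval step: pull the most relevant snippets from a local document set and prepend them to the prompt, so a small model can answer from context it was never trained on. This is not theneverfox's actual setup; a real pipeline would use vector embeddings and a locally served Llama derivative (e.g. via llama.cpp or Ollama), and the scoring here is plain word overlap only so the example stays self-contained. The example documents are illustrative, not claims from the thread.

```python
import re

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k docs sharing the most words with the query.

    Stand-in for embedding similarity search in a real RAG pipeline.
    """
    q = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        docs,
        key=lambda d: len(q & set(re.findall(r"\w+", d.lower()))),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context so a small local model answers from it."""
    context = "\n".join(retrieve(query, docs))
    return f"Use only this context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Illustrative document set (hypothetical contents).
docs = [
    "The Llama 3 8B model fits on a mid-range gaming GPU.",
    "Phi-3 Mini is small enough to run on recent phones.",
    "GPT-4 requires datacenter hardware to serve.",
]
print(build_prompt("Which model can run on phones?", docs))
```

The point of the pattern: the knowledge lives in the retrieved context, so the model itself only needs to be good at reading and reasoning over a short prompt, which is exactly where small local models hold up.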