Comment on Local AI is one step closer through Mistral-NeMo 12B
bamboo@lemm.ee 3 months ago
There are already lots of models in the 7B and 14B ranges that are quite capable and run on commodity hardware. What makes this one so special?
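As a quick illustration of the "commodity hardware" point: a minimal sketch of running a quantized ~7B model locally with llama-cpp-python. The GGUF filename below is a placeholder for whatever quantized model file you have downloaded, and the context size is just an example setting.

```python
# Sketch: run a quantized ~7B model on commodity hardware with
# llama-cpp-python. The GGUF path is a placeholder for a locally
# downloaded model (e.g. a Q4 quantization of Mistral 7B).
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=8192)

# Simple completion call; returns an OpenAI-style response dict.
out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```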
MudMan@fedia.io 3 months ago
Yeah, it seems more interesting to reverse engineer why they chose this line of marketing. They are clearly misrepresenting the challenge and cost of running an LLM locally, so... why?
makingStuffForFun@lemmy.ml 3 months ago
Was wondering the same thing
i_like_water@feddit.org 3 months ago
Off the top of my head: the context size is way higher, 128k tokens vs. the usual 8k.
bamboo@lemm.ee 3 months ago
Oh wow. Yeah, a large context size is a significant improvement. It doesn’t seem like the article included that detail.
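To make the 128k figure concrete, here is a minimal sketch of reading a model's trained context length from its Hugging Face config. It assumes the model ID mistralai/Mistral-Nemo-Instruct-2407 and that the window is stored in the common max_position_embeddings field; other architectures may use a different field name.

```python
# Sketch: inspect a model's advertised context window via its config.
# The model ID below is assumed; some architectures store the context
# length under a different config field.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")

# Decoder-only configs usually expose the trained context length here.
print(config.max_position_embeddings)
```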