Any source for that info? Seems important to know so we can assess the quality, no?
Well, it is a 9B model after all. Self-hosted models only become minimally “intelligent” at around 16B parameters. For context, the models running on Google’s servers are close to 300B-parameter models.
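To put those sizes in hardware terms, here’s a rough back-of-the-envelope sketch (my own heuristic, not from any particular source; the ~20% overhead figure for KV cache and runtime buffers is an assumption):

```python
# Rough VRAM estimate for running an LLM locally.
# Assumption: footprint ~= parameter count x bytes per weight,
# plus ~20% overhead for KV cache and runtime buffers (heuristic, not exact).

def vram_estimate_gb(params_billions: float, bits_per_weight: int, overhead: float = 0.2) -> float:
    weights_gb = params_billions * (bits_per_weight / 8)  # weights alone, in GB
    return weights_gb * (1 + overhead)

for params in (9, 16, 300):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: ~{vram_estimate_gb(params, bits):.1f} GB")
```

By that math, a 16B model at 4-bit quantization squeezes into roughly 10 GB of VRAM, i.e. a single consumer GPU, while a ~300B model needs datacenter hardware no matter how aggressively you quantize it.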
Saterz@lemmy.world 1 day ago
Here:

- www.sitepoint.com/local-llms-complete-guide/
- hardware-corner.net/running-llms-locally-introduc…
- travis.media/blog/ai-model-parameters-explained/
- claude.ai/…/0ecdfb83-807b-4481-8456-8605d48a356c
- labelyourdata.com/articles/…/llm-model-size
- medium.com/…/understanding-parameters-context-siz…
Finding them only required a DuckDuckGo search for “local llm parameters” and “number of params of cloud models”.
Appoxo@lemmy.dbzer0.com 1 day ago
Appreciated. Very much appreciated!
SuspciousCarrot78@lemmy.world 1 day ago
Not sure how we’re quantifying intelligence here. Benchmarks?
Qwen3-4B 2507 Instruct (4B) outperforms GPT-4.1 nano (7B) on all stated benchmarks. It outperforms GPT-4.1 mini (~27B according to scuttlebutt) on mathematical and logical reasoning benchmarks, but loses (barely) on instruction-following and knowledge benchmarks. It outperforms GPT-4o on a few specific domains (math, creative writing), but loses overall (because of course it would).
So, in that instance, a 4B beats a 7B globally, a ~27B significantly on reasoning (while barely losing elsewhere), and a ~500B(?) situationally.
It’s sort of wild to think that 2024 SOTA is roughly equivalent to a ‘strong’ 4-12B model these days.
I think we’re getting to the point where the next step forward is going to be “densification” and/or an architecture shift (maybe M$ can finally pull their finger out and release the promised 1.58-bit architectures).
ICBW / IANAE
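For the curious: the “1.58 bit” figure refers to ternary weights in {-1, 0, +1}, which carry log2(3) ≈ 1.58 bits each; that’s the BitNet b1.58 approach. A minimal sketch of its absmean weight quantizer (toy code of my own, per my reading of the paper; the real method also quantizes activations and trains the network at this precision from scratch):

```python
import numpy as np

# Sketch of the "absmean" ternary quantization from the BitNet b1.58 paper
# (Ma et al., 2024): scale weights by their mean absolute value, then round
# and clip to {-1, 0, +1}. Toy illustration only, not the full method.

def absmean_ternary(W: np.ndarray, eps: float = 1e-8):
    gamma = np.abs(W).mean()                       # per-tensor scale
    W_q = np.clip(np.round(W / (gamma + eps)), -1, 1)
    return W_q.astype(np.int8), gamma              # ternary weights + scale

rng = np.random.default_rng(0)
W = rng.normal(0, 0.02, size=(4, 4)).astype(np.float32)
W_q, gamma = absmean_ternary(W)
print(W_q)            # entries are -1, 0, or 1
print(W_q * gamma)    # dequantized approximation of W
```

The appeal is that multiplying by ternary weights reduces matmuls to additions and subtractions, which is where the promised speed and memory wins come from.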