I tested 9 flagship models (Claude 4.6, GPT-5.2, Gemini 3.1 Pro, Kimi K2.5, etc.) in my own mini-benchmark: novel tasks, web search disabled, and no overlap with training data as far as I could tell, so contamination and cheating were ruled out.
TL;DR: Claude 4.6 is currently the best reasoning model, GPT-5.2 is overrated, and open-source is catching up fast; in particular, Moonshot.ai’s Kimi K2.5 looks very capable.
Telorand@reddthat.com 1 day ago
Spoken like a true AI apologist. You ran one test, and you extrapolated your results to an optimistic outcome that conspicuously matches what you wish to be true. Not scientifically rigorous? Bruh, this is the very definition of confirmation bias.
If this is actually a hypothesis you want to test, maybe contact some computer science researchers to see how to best design an experiment. Beyond that, this is virtually the same as flipping a coin once and drawing a conclusion about how often heads is the outcome.
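The coin-flip point can be made concrete with a quick sketch (hypothetical numbers, standard exact binomial reasoning, not anything from the benchmark itself): after a single trial that comes up heads, an exact binomial test cannot rule out almost any true success rate.

```python
# Sketch: what one successful trial actually tells you.
# For n=1 trials and k=1 successes, the exact binomial tail
# P(X >= 1 | p) is simply p, so any true rate p >= 0.05
# survives a test at the 5% significance level.
def p_value_one_success(p: float) -> float:
    """P(seeing >= 1 success in 1 trial) given true success rate p."""
    return p  # binomial tail collapses to p when n = k = 1

# Sweep candidate rates from 1% to 100% and keep those not rejected:
plausible = [p / 100 for p in range(1, 101)
             if p_value_one_success(p / 100) >= 0.05]
print(min(plausible), max(plausible))  # 0.05 1.0
```

In other words, one observation leaves the entire range from 5% to 100% on the table, which is why a single run tells you essentially nothing about how often a model gets such tasks right.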
otto@programming.dev 1 day ago
Actually, I set out with the assumption that the flagship models would fail even on these fairly simple questions, which I had seen them fail on before, but I was surprised that they didn’t all fail.
Iconoclast@feddit.uk 1 day ago
I don’t get why there’s a need to be such a dick about it.
Telorand@reddthat.com 1 day ago
Because I’m tired of people making flimsy arguments for why LLMs are “akshully really good and underrated.” I’m tired of regular people, wittingly or unwittingly, carrying water for the billionaires who are currently fucking over the economy, the environment, and even entire supply chains in an effort to show—against all evidence to the contrary—that LLMs are much more than fancy chatbots.
It has been an incessant drone of sloppy arguments and omitted facts, and I am tired, boss.