And also which version of the models. Gemini 2.5 Flash is a completely different experience to 2.5 Pro.
jordanlund@lemmy.world 2 days ago
I wish they had broken it out by AI. The article states:
“Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”
But I don’t see that anywhere in the linked PDF of the “full results”.
This sort of study should also be re-done from time to time to track AI version numbers.
nick@campfyre.nickwebster.dev 1 day ago
Rothe@piefed.social 2 days ago
It doesn’t really matter: “AI” is being asked to do a task it was never meant to do. It isn’t good at it, and it will never be good at it.
snooggums@piefed.world 2 days ago
Using an LLM to return accurate information is like using a shoe to hammer a nail.
athatet@lemmy.zip 2 days ago
Except that a shoe is vaguely hammer-ish. More like pounding a screw in with your forehead.
Rooster326@programming.dev 2 days ago
We’ve all done it?
snooggums@piefed.world 2 days ago
Nope, my soles are too soft.
Cocodapuf@lemmy.world 1 day ago
Wow, way to completely ignore the content of the comment you’re replying to. Clearly, some are better than others… so, how do the others perform? It’s worth knowing before we make assertions.
The excerpt they quoted said:
“Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”
That implies the other assistants performed more than twice as well, presumably meaning they had serious issues in less than 38% of responses (still not great, but better). But when it says “more than double the other assistants”, does that mean more than double each of the other assistants individually, or more than double their average? If it’s an average, some models probably performed better while others performed worse.
This was the point: what was reported doesn’t give enough information to tell.
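To make the ambiguity concrete, here is a small sketch with made-up numbers (only Gemini’s 76% comes from the quote; the per-assistant figures below are assumptions): a 76% rate can be “more than double the other assistants” under either reading, but only the per-assistant reading guarantees every other assistant stays below 38%.

```python
# Hypothetical issue rates illustrating the two readings of
# "more than double the other assistants". Only Gemini's 76% is from the
# quoted excerpt; the other numbers are invented for illustration.
gemini = 0.76

# Reading 1: Gemini's rate is more than double EACH other assistant's rate,
# so every other assistant must be below 38%.
each_reading = [0.37, 0.30, 0.25]
assert all(gemini > 2 * rate for rate in each_reading)

# Reading 2: Gemini's rate is more than double the AVERAGE of the others,
# which still allows an individual assistant to be above 38%.
average_reading = [0.45, 0.30, 0.20]
mean_rate = sum(average_reading) / len(average_reading)  # ~0.317
assert gemini > 2 * mean_rate
assert max(average_reading) > 0.38  # one assistant still exceeds 38%

print(f"average of the others: {mean_rate:.1%}")
```

Either way, the quoted comparison alone can’t tell you how any single competitor performed, which is why a per-assistant breakdown matters.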