I doubt it; LLMs have already become significantly more efficient and powerful in just the last couple of months.
In a year or two we'll be able to run something like Gemini 2.5 Pro, which right now requires a server farm, on a gaming PC.
AmbiguousProps@lemmy.today 14 hours ago
Current-gen models got less accurate and hallucinated at a higher rate than the previous ones, both in my experience and according to OpenAI's own system card. I think it’s either because they’re trying to see how far they can squeeze the models, or because the training data is starting to include the models' own slop picked up while crawling.
cdn.openai.com/…/o3-and-o4-mini-system-card.pdf
kebab@endlesstalk.org 4 hours ago
Those are previous gen models, here are the current gen models: cdn.openai.com/pdf/…/gpt5-system-card-aug7.pdf#pa…
jaykrown@lemmy.world 10 hours ago
That’s one example, but what about other models? What you just did is called cherry-picking, or selective evidence.
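For what it's worth, non-cherry-picked evidence would mean running the same fact-based question set against several models and comparing how often each gets it wrong. Here's a minimal sketch, assuming the OpenAI Python client; the model IDs, the question set, and the exact-match grading below are all placeholders, not a real hallucination benchmark.

```python
# Sketch: compare wrong-answer ("hallucination") rates across several models
# on one shared question set, instead of citing a single model's numbers.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

MODELS = ["o3", "o4-mini", "gpt-5"]  # illustrative model IDs, swap in your own

QA_SET = [  # placeholder questions with known correct answers
    ("What year did Apollo 11 land on the Moon?", "1969"),
    ("What is the chemical symbol for gold?", "Au"),
]

def hallucination_rate(model: str) -> float:
    """Fraction of answers that don't contain the known correct answer.
    Exact substring match is a crude stand-in for real grading."""
    wrong = 0
    for question, answer in QA_SET:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        if answer.lower() not in resp.choices[0].message.content.lower():
            wrong += 1
    return wrong / len(QA_SET)

for model in MODELS:
    print(f"{model}: {hallucination_rate(model):.0%} wrong")
```

Even a toy harness like this, run over the same questions, beats quoting one system card in isolation.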