They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently didn’t even note what versions of the other models were used.
Comment on AI chatbots unable to accurately summarise news, BBC finds
brucethemoose@lemmy.world 1 year ago
What temperature and sampling settings? Which models?
I’ve noticed that the AI giants seem to be encouraging “AI ignorance”: they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.
I find my local thinking models (like QwQ or Deepseek 32B) are quite good at summarization at a low temperature, which is not what these UIs default to. Same with “affordable” API models (like base Deepseek). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.
My point is that LLMs as locally hosted tools are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification in one package.
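For the curious, here’s a rough sketch of what “summarization at low temperature” looks like against a locally hosted, OpenAI-compatible server (llama.cpp server, Ollama, etc.). The model name and sampling values are illustrative, not from the article or this thread:

```python
import json

# Hypothetical request for a local, OpenAI-compatible chat endpoint.
# Model name and values are placeholders for illustration only.
def summarization_request(article_text: str, model: str = "qwq-32b") -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize the article faithfully. Do not add facts."},
            {"role": "user", "content": article_text},
        ],
        # Low temperature keeps the summary close to the source text;
        # chat UIs often default to ~0.7-1.0, which invites drift.
        "temperature": 0.2,
        "max_tokens": 256,
    }

payload = summarization_request("BBC article text goes here...")
print(json.dumps(payload, indent=2))
```

You’d POST that to the server’s `/v1/chat/completions` route; the point is just that the UI, not the model, usually decides the temperature for you.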
jrs100000@lemmy.world 1 year ago
1rre@discuss.tchncs.de 1 year ago
I’ve found Gemini overwhelmingly terrible at pretty much everything. It responds more like a 7B model running on a home PC, or a model from two years ago, than a mid-tier commercial model, in how it completely ignores what you ask and just latches on to keywords. It’s almost like they’ve played with their tokenisation, or trained it exclusively for providing tech support, where it links you to an irrelevant article or something.
Imgonnatrythis@sh.itjust.works 1 year ago
Bing/chatgpt is just as bad. It loves to tell you it’s doing something and then just ignores you completely.
brucethemoose@lemmy.world 1 year ago
Gemini Flash Thinking from earlier this year was good, but it regressed a ton.
Gemini 1.5 is literally better than the new 2.0 in some of my tests, especially long-context ones.
Eheran@lemmy.world 1 year ago
It’s rare that people here argue for LLMs like that; usually it’s the same kind of “uga suga, AI bad, it didn’t already solve world hunger.”
heavydust@sh.itjust.works 1 year ago
Your comment would be acceptable if AI was not advertised as solving all our problems, like world hunger.
Eheran@lemmy.world 1 year ago
So the ads are the problem? Do you have a link to such an ad?
heavydust@sh.itjust.works 1 year ago
Not ads, whole governments talking about it and funding that crap like Altman/Musk in the USA or Macron in Europe.
Nalivai@lemmy.world 1 year ago
What a nuanced representation of the position, I just feel trustworthiness oozes out of the screen.
In case you’re using a random-word-generation machine to summarise this comment for you: it was sarcasm, and I meant the opposite.
Eheran@lemmy.world 1 year ago
So many arguments… Wow!
Nalivai@lemmy.world 1 year ago
Ask a forest-burning machine to read the surrounding threads for you, then you will find the arguments you’re looking for. You have at least an 80% chance it will produce something coherent, and an unknown chance of there being something correct, but hey, reading is hard, amirite?
brucethemoose@lemmy.world 1 year ago
Lemmy is understandably sympathetic to self-hosted LLMs, but I get chewed out or even banned literally anywhere else.
In this fandom I’m in, there used to be enthusiasm for a “community enhancement” of a show since the official release looks terrible. Years later, I don’t even mention the word “AI,” just the idea of restoration (now that we have the tools to do it), and I get bombed and threadlocked.
paraphrand@lemmy.world 1 year ago
I don’t think giving the temperature knob to end users is the answer.
Turning it all the way down for maximum correctness and minimum creativity won’t work in an intuitive way.
Sure, turning it up from the balanced middle value will make it more “creative” and unexpected, and this is useful for idea generation, etc. But a knob that goes from “good” to “sort of off the rails, but in a good way” isn’t a great user experience for most people.
Most people understand this stuff as intended to be intelligent. Correct. Etc. Or at least they understand that’s the goal. Once you give them a knob to adjust the “intelligence level,” you’ll have more pushback on these things not meeting their goals. “I clearly had it in factual/correct/intelligent mode, not creativity mode. I don’t understand why it left out these facts and invented a back story for this small thing mentioned…”
Not everyone is an engineer. Temp is an obtuse thing.
brucethemoose@lemmy.world 1 year ago
- Temperature isn’t even “creativity,” per se; it’s more a band-aid to patch looping and dryness in long responses.
- Lower temperature works much better with modern sampling algorithms, e.g. MinP, DRY, and maybe dynamic temperature like Mirostat. Ideally structured output, too. Unfortunately, corporate APIs usually don’t offer these.
- It can be mitigated by finetuning against looping/repetition/slop, but most models are the opposite: massively overtuned on their own output, which “inbreeds” the model.
- And yes, domain-specific queries are best. Basically the user needs separate prompt boxes for coding, summaries, creative suggestions and such, each with its own tuned settings (and ideally tuned models). You are right, this is a much better idea than offering a temperature knob to the user, but… most UIs don’t even do this for some reason?

What I am getting at is that this is not a problem companies seem interested in solving.
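As a concrete illustration of those samplers: the parameter names below follow llama.cpp’s server API (MinP and DRY are also in KoboldCpp and similar backends), but exact names and availability vary, and the values are just plausible starting points, not recommendations from this thread:

```python
# Hypothetical sampler settings for a local llama.cpp-style server.
# Names follow llama.cpp's API; other backends differ, and corporate
# APIs rarely expose anything beyond temperature and top_p.
sampling = {
    "temperature": 0.3,     # low temperature for factual tasks like summarization
    "min_p": 0.05,          # MinP: drop tokens below 5% of the top token's probability
    "dry_multiplier": 0.8,  # DRY: penalize verbatim repetition of long n-grams
    "repeat_penalty": 1.0,  # leave the classic repeat penalty neutral; DRY handles loops
}

print(sampling)
```

These keys get merged into the generation request alongside the prompt; with MinP and DRY doing the anti-looping work, the low temperature stops hurting output quality.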
Eheran@lemmy.world 1 year ago
This is really a non-issue, as the LLM itself should have no problem setting a reasonable value itself. User wants a summary? Obviously maximum factual. They want gaming ideas? Etc.
brucethemoose@lemmy.world 1 year ago
For local LLMs, this is an issue because it breaks your prompt cache and slows things down, without a specific tiny model to “categorize” text… which no one has really worked on.
I don’t think the corporate APIs or UIs even do this.
You are not wrong, but it’s just not done for some reason.
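A minimal sketch of the per-task idea being discussed, with made-up preset values: the UI (or a tiny classifier model) picks a tuned preset per request type, instead of exposing a raw temperature knob or re-deciding settings with the big model itself:

```python
# Illustrative per-task sampling presets; the task label would come from
# the UI's "prompt box" or a small classifier. Values are placeholders.
PRESETS = {
    "summary":  {"temperature": 0.2, "min_p": 0.1},
    "coding":   {"temperature": 0.1, "min_p": 0.05},
    "creative": {"temperature": 0.9, "min_p": 0.02},
}

def settings_for(task: str) -> dict:
    # Fall back to a conservative default for unknown task types.
    return PRESETS.get(task, {"temperature": 0.3, "min_p": 0.05})

print(settings_for("summary"))
```

Because the presets are static, this doesn’t touch the prompt cache the way asking the LLM to pick its own settings would.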
MoonlightFox@lemmy.world 1 year ago
I have been pretty impressed by Gemini 2.0 Flash.
It’s slightly worse than the very best on the benchmarks I have seen, but it’s pretty much instant and incredibly cheap. Maybe a loss leader?
Anyways, which model of the commercial ones do you consider to be good?
brucethemoose@lemmy.world 1 year ago
Benchmarks are so gamed, even Chatbot Arena is kinda iffy. TBH you have to test them with your prompts yourself.
Honestly I am getting incredible/creative responses from Deepseek R1; the hype is real. Tencent’s API is a bit underrated. If Llama 3.3 70B is smart enough for you, the Cerebras API is super fast.
MiniMax is ok for long context, but I still tend to lean on Gemini for this.
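If you want to “test them with your prompts yourself,” a bare-bones harness against OpenAI-compatible endpoints looks something like this. The endpoint URLs, model names, and key are placeholders you’d swap for your own:

```python
import urllib.request, json

# Placeholder endpoints/models; substitute real URLs, model IDs, and keys.
ENDPOINTS = [
    ("https://api.deepseek.com/v1/chat/completions", "deepseek-reasoner"),
    ("https://api.cerebras.ai/v1/chat/completions", "llama-3.3-70b"),
]

def build_request(url: str, model: str, prompt: str,
                  api_key: str = "YOUR_KEY") -> urllib.request.Request:
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.3,
    }).encode()
    return urllib.request.Request(url, data=body, headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    })

# Send the same prompt everywhere and compare answers side by side:
# for url, model in ENDPOINTS:
#     with urllib.request.urlopen(build_request(url, model, "Summarize: ...")) as r:
#         print(model, json.loads(r.read())["choices"][0]["message"]["content"])
```

Crude, but one fixed set of your own prompts run across providers tells you more than a gamed leaderboard.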
Knock_Knock_Lemmy_In@lemmy.world 1 year ago
What are the local use cases? I’m running on a 3060ti but output is always inferior to the free tier of the various providers.
Can I justify an upgrade to a 4090 (or more)?
MoonlightFox@lemmy.world 1 year ago
So there aren’t any trustworthy benchmarks I can currently use to evaluate? That, in combination with my personal anecdotes, is how I have been evaluating them.
I was pretty impressed with Deepseek R1. I used their app, but not for anything sensitive.
I don’t like that OpenAI defaults to a model I can’t pick. I have to select it each time; even when I use a special URL, it will change after the first request.
I am having a hard time deciding which models to use, besides a random mix of o3-mini-high, o1, Sonnet 3.5 and Gemini 2 Flash.
brucethemoose@lemmy.world 1 year ago
Heh, only obscure ones that they can’t game, and only if they fit your use case. One example is the ones in EQ bench: eqbench.com
…And again, the best mix of models depends on your use case.
I can suggest using something like Open Web UI with APIs instead of native apps. It gives you a lot more control, more powerful tooling to work with, and the ability to easily select and switch between models.