Comment on AI chatbots unable to accurately summarise news, BBC finds
Eheran@lemmy.world 1 week ago
This is really a non-issue, as the LLM itself should have no problem setting a reasonable value itself. The user wants a summary? Obviously maximum factual. They want gaming ideas? Etc.
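Something like this is what I mean, assuming the "value" here is the sampling temperature. Just a minimal sketch; the client, model name and category labels below are placeholders, not what any real UI actually does:

```python
# Minimal sketch: pick a sampling temperature per task type before calling
# the model. Categories, temperatures and the model name are illustrative
# assumptions only.
from openai import OpenAI

client = OpenAI()

TASK_TEMPERATURE = {
    "news_summary": 0.0,   # "maximum factual": stick close to the source text
    "brainstorming": 1.0,  # gaming ideas etc.: allow more variety
    "default": 0.7,
}

def answer(prompt: str, task: str = "default") -> str:
    temperature = TASK_TEMPERATURE.get(task, TASK_TEMPERATURE["default"])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

# answer("Summarise this article: ...", task="news_summary")
```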
brucethemoose@lemmy.world 1 week ago
For local LLMs this is an issue, because it breaks your prompt cache and slows things down, unless you use a separate tiny model to “categorize” the text first… which no one has really worked on.
I don’t think the corporate APIs or UIs even do this.
You are not wrong, but it’s just not done for some reason.
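To make it concrete, here's a rough sketch of what that tiny “categorizer” could look like in a local stack, using llama-cpp-python. The model paths, labels and temperatures are placeholders; the only point is that the classifier runs as its own instance, so the main model's prompt/KV cache is left untouched:

```python
# Rough sketch of the "tiny categorizer" idea for a local setup: a small
# model labels the request, and only the temperature passed to the main
# model changes. Model paths, categories and temperatures are placeholders.
from llama_cpp import Llama

classifier = Llama(model_path="tiny-classifier.gguf", n_ctx=512, verbose=False)
main_model = Llama(model_path="main-model.gguf", n_ctx=8192, verbose=False)

TEMPERATURES = {"factual": 0.0, "creative": 1.0}

def classify(prompt: str) -> str:
    # Ask the tiny model for a one-word label; fall back to "factual".
    out = classifier(
        "Label the request as 'factual' or 'creative'.\n"
        f"Request: {prompt}\nLabel:",
        max_tokens=3,
        temperature=0.0,
    )
    label = out["choices"][0]["text"].strip().lower()
    return label if label in TEMPERATURES else "factual"

def respond(prompt: str) -> str:
    temperature = TEMPERATURES[classify(prompt)]
    result = main_model.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return result["choices"][0]["message"]["content"]
```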