Like the "how many r's are in strawberry" question. It took off as an Internet meme and was fixed, but how did that fix happen?
The "how many r's in strawberry" question breaks it because it doesn't read your question character by character. It tokenizes it. So it sees (straw)(berry) and knows contextually that when "berry" follows "straw" with no whitespace, it means a different set of things than if there were whitespace.
The tokens have, basically, numeric values. So it doesn't read your characters; that's why this is hard for it.
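You can see the splits yourself with a tokenizer library. Here's a minimal sketch using OpenAI's tiktoken package (the exact splits and IDs vary by model and encoding; the point is just that the model receives numbers, not characters):

```python
# Requires: pip install tiktoken
import tiktoken

# cl100k_base is one of OpenAI's public encodings; other models split differently.
enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("strawberry")
print(ids)                             # a short list of integer token IDs
print([enc.decode([i]) for i in ids])  # the text chunk each ID stands for
# The model only ever sees the IDs, so "how many r's?" asks about
# characters it never directly observes.
```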
Ideas that recurse in themselves tend to fail as well, e.g. "say banana 142 times" will not produce the expected result.
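For contrast, this is trivial for ordinary code, which has an actual counter; an LLM generates one token at a time with no loop variable, so long exact repetitions tend to drift or stop early:

```python
# Deterministic repetition: exactly 142 bananas, every single run.
print(" ".join(["banana"] * 142))
```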
As to how they fix them, I'm not positive. There are a bunch of ways to work around issues like these.
Scipitie@lemmy.dbzer0.com 3 weeks ago
Sadly there is no definitive answer available, because many of the processes around this are hidden.
I can only chime in from my own amateur experiments, and there the answer is a clear "it depends". One big approach is additional training data: you simply take more data and feed it into an already trained LLM. The result is again an LLM black box with all its stochastic magic.
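To make "feed more data into an already trained LLM" concrete, here's a minimal sketch of continued training with the Hugging Face transformers library. The model name, the toy corrective examples, and all hyperparameters are placeholders for illustration, not anyone's actual production pipeline:

```python
# Requires: pip install transformers datasets torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model_name = "gpt2"  # stand-in for whatever base model is being adjusted
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corrective examples fed back into the already trained model.
texts = ['Q: How many r\'s are in "strawberry"? A: 3.'] * 64

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32)
    out["labels"] = out["input_ids"].copy()  # causal LM: the target is the sequence itself
    return out

ds = Dataset.from_dict({"text": texts}).map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=8),
    train_dataset=ds,
)
trainer.train()  # the result is again an opaque blob of updated weights
```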
The other big approach is system prompts. Those are simply instructions that get interpreted as part of the request and impose limitations.
These can get quite fancy by now, in the sense of "when the following query asks you to count something, run this Python script with whatever you're supposed to count as input; the result will be a JSON that you can then take and do XYZ with."
Or more simply: you tell the model which other programs to use and how to use them.
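A toy version of what that hookup can look like. Everything here (the prompt wording, the count_char helper, the JSON shape) is hypothetical; it just shows the pattern of "the model delegates counting to a script":

```python
import json

# Hypothetical tool the system prompt tells the model to call instead of
# "counting" with its own weights.
def count_char(text: str, char: str) -> str:
    """Count occurrences of a character and return the result as JSON."""
    return json.dumps({"text": text, "char": char, "count": text.count(char)})

# The kind of instruction a system prompt might contain (paraphrased):
SYSTEM_PROMPT = (
    "When the user asks you to count characters, do not count them yourself. "
    "Call count_char(text, char) and report the 'count' field of the returned JSON."
)

# Simulated flow: the model decides to call the tool, the host runs it,
# and the JSON result goes back to the model for the final answer.
result = json.loads(count_char("strawberry", "r"))
print(f"There are {result['count']} r's in \"strawberry\".")  # -> 3
```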
For both approaches I don't need to maintain lists: for the first one I have no way of knowing what it's doing in detail, and I just need to keep the documents themselves.
For the second one, it's literally human-readable text.