Comment on Do LLM modelers maintain a list of manual corrections fed by humans?

foggy@lemmy.world 19 hours ago

The "how many r's in strawberry" question breaks it because the model doesn't read your question character by character. It tokenizes it. So it sees (straw)(berry) and knows contextually that when "berry" follows "straw" with no whitespace, it means a different set of things than if there were whitespace.

The tokens have, basically, numeric values. So the model doesn't see your characters at all, just token IDs. That's why counting letters is hard for it.
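A toy sketch of the idea (not any real LLM's tokenizer; the vocabulary and IDs here are made up for illustration): a greedy longest-match tokenizer turns text into integer IDs, so "strawberry" collapses into two numbers and the individual letters are gone before the model ever sees the input.

```python
# Toy greedy longest-match tokenizer with an invented vocabulary.
# Real LLM tokenizers (e.g. BPE) are more sophisticated, but the
# point is the same: the model receives token IDs, not characters.
VOCAB = {"straw": 101, "berry": 102, " ": 103, "s": 1, "t": 2, "r": 3,
         "a": 4, "w": 5, "b": 6, "e": 7, "y": 8}

def tokenize(text: str) -> list[int]:
    """Greedily match the longest vocabulary entry at each position."""
    ids = []
    i = 0
    while i < len(text):
        # Try the longest possible piece first, shrinking until a match.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in VOCAB:
                ids.append(VOCAB[piece])
                i += length
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return ids

print(tokenize("strawberry"))   # [101, 102] -- the three r's are invisible
print(tokenize("straw berry"))  # [101, 103, 102] -- whitespace changes the IDs
```

From the model's point of view, "how many r's in [101, 102]?" has no obvious answer, because no "r" appears anywhere in its input.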

Self-referential or repetitive instructions tend to fail as well, e.g. "say banana 142 times" will not produce the expected result.

As to how they fix them, I'm not positive. There are a bunch of ways to work around issues like these.
