It's 3, right? Why can't AI guess that one?
2pt_perversion@lemmy.world 1 month ago
An oversimplification, but it partly has to do with how LLMs split language into tokens, and some of those tokens are multi-letter. When we look for R's, we split the word like S-T-R-A-W-B-E-R-R-Y, where each character stands on its own, but LLMs split it more like STR-AW-BERRY, which makes predicting the correct answer difficult without a lot of training on that specific problem. If you asked it to count how many times STR shows up in "strawberrystrawberrystrawberry" it would have a better chance.
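You can actually see the splits yourself with OpenAI's tiktoken library. A minimal sketch; the exact pieces depend on which encoding you load, and cl100k_base (the GPT-4-era encoding) is my assumption here:

```python
# pip install tiktoken
import tiktoken

# Load a BPE encoding; cl100k_base is used by GPT-4-era models.
enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)

# Show the text each token covers -- something like ['str', 'aw', 'berry']
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]
print(pieces)

# At the character level the count is trivial for ordinary string code:
print(word.count("r"))  # 3

# And counting a whole token-sized chunk is just as easy:
print("strawberrystrawberrystrawberry".count("str"))  # 3
```

The point is that the model never "sees" the individual letters, only the token IDs, so letter counting isn't a pattern it gets for free.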
the_post_of_tom_joad@sh.itjust.works 1 month ago
Thanks, you explained it well enough that this layman kinda gets it!
tee9000@lemmy.world 1 month ago
LLMs look for patterns in their training data. So, for example, if you asked "2+2=", it would look through its training data and find a high likelihood that the text following "2+2=" is "4". It's not calculating; it's finding the most likely completion of the pattern based on the data it has.
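A toy sketch of that idea, with a made-up frequency table standing in for what a real model learns (real models use learned probabilities over tokens, not a lookup, but the "most likely continuation" logic is the same):

```python
# Hypothetical counts of how often each continuation followed a context
# in some training text -- invented numbers, purely for illustration.
continuations = {
    "2+2=": {"4": 9000, "5": 12, "22": 40},
    "the cat sat on the": {" mat": 500, " hat": 80},
}

def most_likely_completion(context: str) -> str:
    # Pick whichever continuation was seen most often after this context.
    options = continuations[context]
    return max(options, key=options.get)

print(most_likely_completion("2+2="))  # "4" -- pattern matching, not arithmetic
```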
But a new model, ChatGPT o1, checks its answers against itself in ways I don't fully understand, and it now scores something like 85% on a standardized international math test, so they are making great improvements there.