Context, then answer… instead of having everything ride on the first token (e.g. if we make it pick “Y” or “N” first in response to a yes-or-no question, it usually picks “Y” even if it later talks itself out of it).
That’s the basis of reasoning models: make the LLM ‘think’ through the problem for several hundred tokens before committing to a final answer.
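Roughly, the contrast looks like this (a minimal sketch; `ask_llm()` is a hypothetical stand-in for whatever chat API you’re using, with a canned reply so it runs):

```python
# "Answer first" vs. "reason first" prompting, sketched.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call.
    Returns a canned reply so the sketch runs end to end."""
    return "7 * 11 * 13 = 1001, so it's composite.\nN"

question = "Is 1001 prime? "

# Answer-first: everything rides on the first token,
# which skews toward "Y" regardless of the actual answer.
answer_first = ask_llm(question + "Reply with only 'Y' or 'N'.")

# Reason-first: spend tokens working through the problem,
# then commit to 'Y' or 'N' only on the last line.
reasoned = ask_llm(
    question + "Think it through step by step, "
    "then put a final 'Y' or 'N' alone on the last line."
)
final_answer = reasoned.strip().splitlines()[-1]  # -> "N"

print(answer_first, final_answer)
```

Reasoning models bake that second pattern into the training itself rather than relying on the prompt to elicit it.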
BroBot9000@lemmy.world 3 weeks ago
Or even better, don’t use the racist pile of linear algebra that regurgitates misinformation and propaganda.