There’s no point telling it not to do x, because as soon as you mention x it goes into its context window.
Reminds me of the Sonny Bono high-speed downhill skiing problem: don’t fixate on that tree. If you fixate on the tree, you’re going to hit the tree; fixate on the open space to the side of it instead.
LLMs do “understand” words like “not” and “don’t”, but they also seem to work better with positive examples than negative ones.
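As a rough illustration (prompt text only, no particular model or API assumed), the reframing looks something like this:

```python
# Illustration only: the same request framed two ways.
# Neither prompt targets a specific model or API.

# Negative framing: "price" still lands in the context window.
negative_prompt = "Write a product description. Do not mention the price."

# Positive framing: describe only what you want to appear.
positive_prompt = "Write a product description focused on features, materials, and use cases."

print(negative_prompt)
print(positive_prompt)
```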
kahnclusions@lemmy.ca 1 week ago
I’ve noticed this too; it’s hilarious(ly bad).
Especially with image generation. “Draw a picture of an elf.” Generates images of elves that all have one weird earring. “Draw a picture of an elf without an earring.” Great, now the elves have even more earrings.
MangoCats@feddit.it 1 week ago
I find this kind of performance varies from one model to the next. I’ve definitely experienced the bad-image-getting-worse phenomenon, especially with MS Copilot, but different models will perform differently.