Comment on Reasoning models don't always say what they think.

MagicShel@lemmy.zip 1 week ago

Has anyone considered that a chain of reasoning can actually change the output, since the reasoning is fed back into the input prompt? That’s great for math and logic problems, but I don’t think I’d trust the alignment checks.
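
Roughly what I mean, as a toy sketch (the `toy_model` function here is just a stand-in for an autoregressive model step, not any real API):

```python
# Minimal sketch: why chain-of-thought can change the final answer.
# Each generated "reasoning" chunk is appended to the context, so every
# later token is conditioned on it.

def toy_model(context: str) -> str:
    """Stand-in for one autoregressive step: returns the next chunk of
    text conditioned on everything generated so far."""
    if "Step 1" not in context:
        return "Step 1: 17 * 24 = 17 * 20 + 17 * 4."
    if "Step 2" not in context:
        return "Step 2: 340 + 68 = 408."
    return "Answer: 408"

def answer_with_reasoning(question: str, max_steps: int = 5) -> str:
    context = question
    for _ in range(max_steps):
        chunk = toy_model(context)
        # The reasoning chunk becomes part of the input for the next step,
        # so it directly shapes whatever gets generated afterwards.
        context = context + "\n" + chunk
        if chunk.startswith("Answer:"):
            return chunk
    return "No answer produced."

print(answer_with_reasoning("What is 17 * 24?"))
```

So the reasoning isn’t just a printout of what the model "thought"; it’s extra input that steers the rest of the generation, which is exactly why I wouldn’t read it as a faithful alignment check.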
