Comment on A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.
AwesomeLowlander@sh.itjust.works 1 month ago
> reasoning chain
Do LLMs actually have a reasoning chain that would be comprehensible to users?
theterrasque@infosec.pub 1 month ago
learnprompting.org/docs/…/chain_of_thought
It’s suspected to be one of the reasons why Claude and OpenAI’s new o1 model are so good at reasoning compared to other LLMs.
It can sometimes notice hallucinations and correct itself, but there have also been examples where the CoT reasoning itself introduces hallucinations and makes the model throw away correct answers. So it’s not perfect. Overall a big improvement though.
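For anyone curious, a minimal sketch of what chain-of-thought prompting looks like in practice (the function name and the example question are just illustrations, not anything from the linked page):

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question so the model is nudged to write out its
    intermediate steps before the final answer -- that written-out
    sequence is the 'reasoning chain' a user can read."""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step."
    )

# Without the trailing instruction, many models jump straight to an
# answer; with it, they tend to emit visible intermediate reasoning.
prompt = build_cot_prompt("If a train leaves at 3pm and the trip takes 2.5 hours, when does it arrive?")
print(prompt)
```

The chain the model then produces is readable, but as noted above it isn’t guaranteed to be faithful or correct.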