Comment on “A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.”

theterrasque@infosec.pub ⁨1⁩ ⁨month⁩ ago

learnprompting.org/docs/…/chain_of_thought

It’s suspected to be one of the reasons Claude and OpenAI’s new o1 model are so good at reasoning compared to other LLMs.

It can sometimes notice hallucinations and correct itself, but there have also been examples where the CoT reasoning itself introduces hallucinations and makes the model throw away correct answers. So it’s not perfect. Overall a big improvement, though.
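For anyone unfamiliar, the simplest form (zero-shot CoT, as described on the linked page) is just appending a cue that nudges the model into writing out intermediate steps before its answer. A minimal sketch of prompt construction only, with a made-up example question and no particular model API assumed:

```python
# A standard prompt asks for the answer directly (example question is hypothetical):
standard = (
    "Q: A farmer has 15 sheep, buys 7 more, then sells 4. "
    "How many sheep does he have?\nA:"
)

# Zero-shot chain-of-thought: append a cue that elicits step-by-step reasoning.
cot = standard + " Let's think step by step."

# The model is then expected to emit intermediate steps
# (e.g. 15 + 7 = 22, 22 - 4 = 18) before the final answer,
# rather than jumping straight to a number.
print(cot)
```

The same idea extends to few-shot CoT, where the prompt includes worked examples whose answers already show their reasoning steps.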
