Honestly Claude is not that sycophantic. It often tells me I’m flat out wrong, and it generally challenges a lot of my decisions on projects. One thing I’ve also noticed on 4.6 is how often it will tell me “I don’t have the answer in my training data” and offer to do a web search rather than hallucinating an answer.
greybeard@feddit.online 5 days ago
There is a benchmark that kinda tests that. It's called the Bullshit Benchmark. Basically, LLMs are given questions that don't make sense in different ways, and their answers are judged based on how much they pushed back or bought in. Claude is in a league of its own when it comes to pushing back on nonsense questions.
https://petergpt.github.io/bullshit-benchmark/viewer/index.html
Zos_Kia@jlai.lu 5 days ago
Yes, I saw that benchmark and was honestly not surprised by the results. It seems that Anthropic really focused on those issues above and beyond what other labs did.
probably2high@lemmy.world 4 days ago
With its prior government contract, maybe Anthropic was tuning it to ward against all the fucking dolts in decision-making roles.