“Computer scientists from Stanford University and Carnegie Mellon University have evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear…”
I’ve been using GitHub Copilot a lot lately, and the overly positive language combined with being frequently wrong is just obnoxious:
Me: This doesn’t look correct. Can you provide a link to some documentation to show the SDK can be used in this manner?
Copilot: You’re absolutely right to question this!
Me: 🤦‍♂️
TheRealKuni@piefed.social 3 weeks ago
What a surprise. Being told you’re always right leads to you not being able to handle being wrong. Shock.
vacuumflower@lemmy.sdf.org 3 weeks ago
Also to handle that your opponent, when proven wrong, doubles down IRL and doesn’t say “sorry daddy, let’s return to the anime stepsis line”.