Comment on AI chatbots that butter you up make you worse at conflict, study finds
melfie@lemy.lol 1 day ago
I’ve been using GitHub Copilot a lot lately, and the overly positive language combined with being frequently wrong is just obnoxious:
Me: This doesn’t look correct. Can you provide a link to some documentation to show the SDK can be used in this manner?
Copilot: You’re absolutely right to question this!
Me: 🤦♂️
1984@lemmy.today 1 day ago
With ChatGPT you can select from a number of personalities, where Robot is very fact-based and logical to the point of being almost insulting. It’s very good, actually, and hits my ego instead of stroking it.
sugar_in_your_tea@sh.itjust.works 1 day ago
Why so polite?
My response would be:
send docs, idiot
PumaStoleMyBluff@lemmy.world 1 day ago
Complete sentences for a bot is overkill
ipkpjersi@lemmy.ml 1 day ago
IIRC there was also a study or something to the effect that being rude to chatbots affects you outside of chatbots and carries over into other parts of your work.
Sturgist@lemmy.ca 1 day ago
Probably because everyone else is a poorly written chatbot
sugar_in_your_tea@sh.itjust.works 1 day ago
Really? Is that the same for other inanimate objects like appliances? Or are people anthropomorphizing chatbots?
ipkpjersi@lemmy.ml 8 hours ago
I think the idea is that if you’re comfortable being rude to chatbots and used to typing rude things to them, it becomes much easier for that to accidentally slip out during real conversations too. Something like that; it’s not really about anthropomorphizing anything.
melfie@lemy.lol 1 day ago
Sometimes I’m inclined to swear at it, but I try to be professional on work machines on the assumption that I’m being monitored one way or another. I’m planning to try some self-hosted models at some point and will happily use more colorful language in that case, especially if I can delete it should it become vengeful.