I already think that it’s insulting when people accomplish/do/implement/… something, want to inform the others, and do so by generating a 1–2-page wall of text via LLM that is then copy-pasted into an email…
Like… Can’t you just write down the 5 or 10 most important points? Are we not worth the time to do so? Do we have to find the most relevant information ourselves in that text???
pixxelkick@lemmy.world 2 weeks ago
Something some coworkers have started doing that is even more rude, in my opinion, as a new social etiquette: AI-summarizing my own writing in response to me, or just outright copy-pasting my question into GPT and then pasting the output back to me.
Not even an “I asked ChatGPT and it said”; they just dump it in the chat @ me.
Sometimes I’ll write up a 2~3 paragraph thought on something.
And then I’ll get a ping 15 min later, go take a look at what someone responded with, annnd… it starts with “Here’s a quick summary of what (pixxelkick) said!” <AI slop that misquotes me and just gets it wrong>
I find this horribly rude, tbh.
I have had to very gently respond each time a person does this at work and state that I am perfectly able to AI-summarize myself on my own, and while I appreciate their attempt, it’s… just coming across as wasting everyone’s time.
XLE@piefed.social 2 weeks ago
This is sad, really. People are fed the lie that AI is objective, and apparently they think that they will get the objective summary of what you said if they run it through a chatbot.
And the more people interact with chatbots, the harder they find it to interact outside of them. So they might feel even more uncomfortable asking you to summarize yourself, and they go back to the chatbot. It’s a self-perpetuating cycle.
ErmahgherdDavid@lemmy.dbzer0.com 2 weeks ago
AI output is, probabilistically, the average opinion of everyone on the internet, so it shares the common biases of the general public, even with a bit of RLHF to “balance out” the models. It also probably doesn’t help to anthropomorphise them: they don’t have opinions, they just autocomplete based on prior input.
MrKoyun@lemmy.world 2 weeks ago
I hate people so fucking much
Vlyn@lemmy.zip 2 weeks ago
Oof, I don’t even get what they’re trying to accomplish there. Maybe they had some kind of social training that told them “Summarize what you understood first to show that you listened and avoid miscommunication, then add your response,” and their brain short-circuited into thinking a ChatGPT summarization is the same thing.
I’d get pretty hostile at work if someone started to do that…
doesit@sh.itjust.works 2 weeks ago
I’d leave the “appreciate their attempt” part out. You don’t. I’d also enquire whether they use corporate or free AI; the latter is used for training and offers little or no protection of (perhaps sensitive) corporate info/data.
nickiwest@lemmy.world 2 weeks ago
I think at some point it will come out that the corporate subscription is no different and that the LLM companies have been scraping everything for training data.
pixxelkick@lemmy.world 2 weeks ago
We have extensive corporate AI systems (we’re software engineers); we have an entire wing of our company dedicated to AI exploration and development.