
It's rude to show AI output to people | Alex Martsinovich

534 likes

Submitted 2 weeks ago by lemmydividebyzero@reddthat.com to technology@lemmy.world

https://distantprovince.by/posts/its-rude-to-show-ai-output-to-people/

source

Comments

  • pixxelkick@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Something some coworkers have started doing that is even more rude, in my opinion, as a new social etiquette: AI-summarizing my own writing in response to me, or just outright copy-pasting my question into GPT and then pasting the answer back to me

    Not even an “I asked ChatGPT and it said…”, they just dump it in the chat @ me

    Sometimes I’ll write up a 2~3 paragraph thought on something.

    And then I’ll get a ping 15min later and go take a look at what someone responded with annnd… it starts with “Here’s a quick summary of what (pixxelkick) said! <AI slop that misquotes me and just gets it wrong>”

    I find this horribly rude tbh, because:

    1. If I wanted to be AI summarized, I would do that myself damnit
    2. You just clogged up the chat with garbage
    3. like 70% of the time it misquotes me or gets my points wrong, which muddies the convo
    4. It’s just kind of… dismissive? Instead of just fucking reading what I wrote (and I consider myself pretty good at conveying a point), they pump it through the automatic enshittifier without my permission/consent, and dump it straight into the chat as if that is now the talking point instead of my own post one comment up

    I have had to very gently respond each time a person does this at work and state that I am perfectly able to AI-summarize myself on my own, and while I appreciate their attempt, it’s… just coming across as wasting everyone’s time.

    source
    • XLE@piefed.social ⁨2⁩ ⁨weeks⁩ ago

      This is sad, really. People are fed the lie that AI is objective, and apparently they think that they will get the objective summary of what you said if they run it through a chatbot.

      And the more people interact with chatbots, the harder they find it to interact outside of them. So they might feel even more uncomfortable asking you to summarize yourself, and they go back to the chatbot. It’s a self-perpetuating cycle.

      source
      • ErmahgherdDavid@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        AI output is probabilistically the average opinion of everyone on the internet, so it shares the common biases of the general public, even with a bit of RLHF to “balance out” the models. It also probably doesn’t help to anthropomorphise them. They don’t have opinions, they just autocomplete based on prior input
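        (For intuition, the “autocomplete based on prior input” framing can be sketched with a toy next-word model. This is a deliberately crude stand-in, nothing like a real transformer, and the corpus and function names below are made up purely for illustration:)

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def autocomplete(follows: dict, prompt: str, n: int = 3) -> str:
    """Greedily append the most frequently observed next word, n times."""
    out = prompt.split()
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:  # nothing ever followed this word in training
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# Tiny made-up corpus: the model can only parrot patterns it has seen.
corpus = "the cat sat on the mat and the cat ate the fish"
model = train(corpus)
print(autocomplete(model, "the", 2))
```

        No opinions anywhere in there: the continuation is just whatever followed most often in the training data, which is the commenter’s point scaled down.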

        source
    • MrKoyun@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I hate people so fucking much

      source
    • Vlyn@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      Oof, I don’t even get what they’re trying to accomplish there. Maybe they had some kind of social training that told them “summarize and reply with what you understood first to show that you listened and avoid miscommunication, then add your response”, and their brain short-circuited and started to think a ChatGPT summarization is the same thing.

      I’d get pretty hostile at work if someone started to do that…

      source
    • doesit@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      I’d leave the “appreciate the attempt” part out. You don’t. I’d also enquire whether they use corporate or free AI. The free one is used for training and has little or no protection for (perhaps sensitive) corporate info/data.

      source
      • nickiwest@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I think at some point it will come out that the corporate subscription is no different and the LLM companies have been scraping everything for training data.

        source
      • pixxelkick@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        We have extensive corporate AI systems (we’re software engineers), and an entire wing of our company dedicated to AI exploration and development.

        source
  • lemmydividebyzero@reddthat.com ⁨2⁩ ⁨weeks⁩ ago

    I already think it’s insulting when people accomplish/do/implement/… something, want to inform the others, and do that by generating a 1-2 page wall of text via LLM that is then copy-pasted into an email…

    Like… Can’t you just write down the 5 or 10 most important points? Are we not worth the time to do so? Do we have to find the most relevant information ourselves in that text???

    source
    • zqwzzle@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      You’re supposed to feed it into your own prompt to summarize it duh. /s

      source
      • nathan@lemmy.permisuan.com ⁨2⁩ ⁨weeks⁩ ago

        Soon we will live in a world where my AI talks to your AI 😅

        source
    • MagicShel@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      I sometimes use LLMs to help me with brevity or clarity. But the input is my own words and the output is almost always edited so that I sound like me because sometimes, while the output is serviceable, it’s just… bad and uninspired.

      source
      • ToTheGraveMyLove@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        You should learn how to write better instead of relying on slop.

        source
  • aesthelete@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Totally agree. When someone sends me some AI slop about a topic I have knowledge of (which happened to me recently during a debug session) and asks me to read it, I think to myself, “this person does not respect me, otherwise they wouldn’t be telling me to read stuff that may or may not be accurate, and that they themselves never read.”

    source
    • sockenklaus@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      I know that feeling. I’ve experienced it more than once in areas of law where I consider myself a little more knowledgeable than the average person. It’s just a slap in the face to try to discuss a topic you know a little bit about with an AI.

      The thing is: I am 100% sure those people use LLM answers not out of disrespect but because they honestly believe that an LLM produces a better argument than they possibly could themselves.

      source
      • aesthelete@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        The thing is: I am 100% sure those people use LLM answers not out of disrespect but because they honestly believe that an LLM produces a better argument than they possibly could themselves.

        And I have zero confidence in your 100%, because you have zero backing for your claim other than a belief that people have good intentions.

        source
  • MrPnut@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Whenever someone at work says “ChatGPT says this” or “Claude says this” or “I asked Gemini and…” whatever they say after that point is just static and I never take them seriously as a person again.

    source
    • pHr34kY@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I appreciate the honesty when they say it’s an AI response and not genuine knowledge.

      When I tell someone “an LLM told me that…” It’s usually followed by “Let’s see if there’s any truth to it.” An AI response should always be treated as a suggestion, not an answer.

      Hell, Google’s AI still doesn’t know which day the F1 GP is on this week. It was wrong by a whole week a while back. Now it’s only off by a day.

      source
      • mcv@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        An AI response should always be treated as a suggestion, not an answer

        Exactly. An AI response can be a great way to get started on a topic you know little about, but it’s never a definitive answer. You have to verify whether it’s actually true. Whether it works. Never trust it blindly.

        source
    • HeyThisIsntTheYMCA@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I never take them seriously as a person again

      i dunno dude. i used to be a real piece of shit.
      Image

      source
    • pkjqpg1h@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      red flag

      source
    • vacuumflower@lemmy.sdf.org ⁨2⁩ ⁨weeks⁩ ago

      As a source it’s rude. As a piece of unreliable help of the kind “we both don’t know the syntax of that programming language, let’s ask Ollama how to draw such and such a shape in it” it’s kinda fine.

      source
    • Iconoclast@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

      You dismiss the whole person just because they acknowledge using an LLM? That seems a bit harsh - especially since they had the decency to mention the source, which is basically the same as saying “take this with a grain of salt.”

      source
  • GreenBeard@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    Absolutely rude. If you’re using AI to make a point for you, you’ve already admitted you don’t know enough about what you’re talking about to be having an opinion in the first place, let alone be worth discussing an issue with.

    source
    • partofthevoice@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      I’ve had these interactions with the head of my IT department. I asked him to procure a license for JFrog Artifactory. He literally copy-pasted a ChatGPT response to me that began like this:

      Here’s a breakdown of how JFrog Artifactory compares to using GitHub, NPM, or other language-specific package managers (like PyPI)… 1. Purpose and Functionality… 2. Workflow & Developer Experience… 3. Security and Compliance… ✅ When to use JFrog…

      It came with a bunch of theoretical risks that are completely resolved by simply not being a complete fucking moron.

      It was really frustrating that I tried to talk with my IT leader, and instead found a proxy for ChatGPT. I lost a lot of respect for him.

      source
      • Panthenetrunner@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        I’m fast coming to the conclusion that AI can indeed replace jobs. The thing is, the only job it can actually replace is that of a lazy middle manager. AI is great at responding to email if a) you don’t know what you’re talking about, or b) you don’t respect the other person enough to waste the time formulating an actual response. AI, in my experience, is only really good at faking that there’s someone on the other end. The fact that there’s an entire management class it can convincingly impersonate is a pretty searing indictment as far as I’m concerned.

        source
      • jason@discuss.online ⁨2⁩ ⁨weeks⁩ ago

        That guy to all his friends: “AI makes me 10x more productive!”

        source
      • Natanael@infosec.pub ⁨2⁩ ⁨weeks⁩ ago

        This gets at my own personal perspective on using LLMs to respond: it’s not just about not putting effort into understanding and responding yourself. It’s about making yourself a proxy for a tool I could use myself, doing so *without even having a better understanding of how to use the tool to answer my question*, and still thinking you’ve somehow made a positive contribution. That is the most disrespectful part.

        If you genuinely thought the LLM could help me, then you should be explaining your process for using it and validating its responses. Or at the very least you should ask me for more info and explain how you think its responses could help, if you really do think you’re better at operating it.

        Imagine doing the same in a workshop, and taking a powertool to an object before you even bothered figuring out what the other person wanted.

        source
  • jason@discuss.online ⁨2⁩ ⁨weeks⁩ ago

    My company hired a consulting firm to help with a transition period. The consulting firm sent my boss an email that outlined the plans for what we should do and how they are going to help. Without directly giving it away, the email was clearly AI output, and my boss instantly terminated their contract. We aren’t exactly anti-AI, but to the point of the post, it’s just so rude… and my boss is pretty fuckin cool.

    source
    • mcv@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      Especially rude if you want to charge money for it. If your boss wanted an AI answer, they would have asked an AI. You don’t need an expensive consulting company for that.

      source
  • sun_is_ra@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    what if it was my boss who said that during a technical argument? :/

    True story

    source
    • SpaceNoodle@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Believe it or not, blocked.

      source
      • HeyThisIsntTheYMCA@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        straight to unemployment line

        source
  • johncandy1812@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    I hate how AI is used to make deep fakes, revenge porn, CP - and people tolerate it because “they’re working out the issues.”

    How about they work those out BEFORE they give people access to these tools.

    source
    • RememberTheApollo_@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      They tolerate it because it’s easy, they can copy-paste, and they need even less critical thought about the output than having to search for and choose what might be a viable source of decent information.

      The issues aren’t bugs. They’re acceptable flaws in the search for investment capital.

      source
  • Hackworth@piefed.ca ⁨2⁩ ⁨weeks⁩ ago

    It’s more about post size for me. If ya post a few sentences that clearly and concisely communicate a point, I don’t really care if they’re crafted or generated. If ya post a wall of text, I wanna know ya put the kind of effort in that made its length necessary if I’m gonna put in the effort to read it.

    source
    • Vlyn@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      ChatGPT especially is awful for that. You ask it something and it always spews out a whole page of content.

      At work we use Claude, which does produce better output and also calls out your bullshit. There it’s actually helping quite a bit (software development), but of course you have to understand what you are changing and clean things up.

      source
      • Hackworth@piefed.ca ⁨2⁩ ⁨weeks⁩ ago

        Aye, Anthropic is head and shoulders above everyone else on guidance, largely because they focus entirely on text/code. They’re not simultaneously developing image, video, and audio generators. Even Claude’s voice is just an 11Labs model. Plus I get the impression they’re just smarter about what they choose to research and how they use that info to improve the model.

        source
  • kevinbowersox@social.trom.tf ⁨2⁩ ⁨weeks⁩ ago
    @lemmydividebyzero This happened to me at work. They are really pushing Copilot on us.
    source
    • TrippinMallard@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

      To drive up fake usage numbers for justifying the bubble they created to shareholders.

      source
    • maplesaga@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      The new manager of my building did it, and it was all unactionable garbage.

      source
  • Bibip@programming.dev ⁨2⁩ ⁨weeks⁩ ago

    hi friends i hope you’re well.

    i worked a laborious job and experienced a phenomenon i refer to as “parasitic thought”: it is where someone will provide you all of the information a person would require to reach the correct conclusion, and then stare at you. they want you to crunch the info for them.

    i feel like one of those parasites in my agent interactions. i know i COULD think, but you can do it too, lil buddy. go on. do it for me.

    i don’t know about “reasonable” or “ethical” or “polite,” but in my experience: if someone just regurgitates some clank clank slop slop, it reads as hostile. “i can’t be bothered to communicate with you, here, read this wall of gpt-vomit”

    my instinct is to copy and paste, “LLM agent of my choice, what’s this person trying to say to me?” and then skim the ai synthesized summary of the ai composed body text generated from some idiot’s faint echoes of thought.

    in the words of your high school biology teacher, the human is the powerhouse of the agentic loop. in my unimportant opinion, responsible use of genai agents means the output should be indistinguishable from, if not better than, something you wrote by hand.

    there are also privacy implications: linguistic assessment can be used to identify you. from a privacy perspective, the internet would be better off if everyone fed their carefully formed thoughts to an LLM and said “make this look like chatgpt 3 wrote it.”

    source
  • BitsAndBites@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    My coworkers are doing this to me. They are even pasting into PR reviews. The threat is real.

    source
    • softwarist@programming.dev ⁨2⁩ ⁨weeks⁩ ago

      It’s even better when they copy-paste slop answers that are flat out wrong without bothering to check.

      source
    • dwemthy@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      My org has our PRs reviewed by an AI automatically

      source
  • sturmblast@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    My boss can’t wrap his head around why handing me a direct printout of LLM output is not acceptable

    source
  • reksas@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

    my mother constantly keeps sending me texts that are just direct copy-pastes of llm output. can’t even tell her to stop doing it because she just ignores me if i say something she doesn’t want to hear.

    source
    • JohnEdwa@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

      Ask ChatGPT to come up with a nice message explaining why direct copy-pastes of LLM output are bad. Copy-paste it to her directly.
      Maybe she will understand it better that way.

      source
      • reksas@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

        no, she just thinks she is being helpful but doesn’t care what i think about it, because apparently she knows everything better. She would just ignore that or otherwise make me even more annoyed.

        source
  • voldage@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    A few days ago a friend linked me a Danish research paper and claimed it showed that higher wages for women led to a decrease in children being born, and that higher male wages led to the opposite. I don’t have the skills required to parse this kind of paper quickly, nor an understanding of a lot of the terminology. I told ChatGPT to read it and contrast it with the arguments being made, and it responded by pointing out that the term “marginal net-of-tax wage” means something different from “wage”, and that the paper suggested tax laws incentivizing working more hours led to lowered fertility, rather than higher salaries for women. I was asked to point out exactly where in the paper it said that, and again I had to lean on the LLM to get me page numbers. I eventually convinced my friend that he had been duped by right-wing talking points and got him to think a bit.

    So, if I hadn’t done that and had just read the conclusion of the paper, I’d probably have had to agree with him, since just googling it led to the right-wing trolls making those claims. Was this a good use case for an LLM, to get me some counterarguments, or would it have been better to stay true to my ideals and not use those tools? Was I rude to argue against a point about research that neither of us understood from the get-go by using genAI to parse through it?

    While I do agree that the companies developing these tools are evil and need to be stopped, there is a utility to them that I don’t think is available elsewhere. Would me losing that argument, and believing that women should have lower salaries to increase fertility (because I believe in science, and this paper seemed to be referenced a lot; also, if anything, capitalism would be to blame, so probably not as bad), be better than normalizing the use of the devil-tech but having myself and my friend better informed? I am legitimately not sure, but I think I did the right thing? What should I have done? I don’t have the skills, time, or will to read scientific papers unrelated to my area of expertise, especially when the person linking them didn’t do any research either.

    I am also genuinely exhausted from defending my left-wing points of view against the constant barrage of underhanded and often completely baseless arguments some of my coworkers and friends make to convince me I’m wrong and the default consensus is right. Is it bad to use genAI to figure out some counterpoints? Or should I give up and admit I’m not good or committed enough to make them myself? Right-wing people often argue in bad faith and don’t take counterpoints to heart, but sometimes they do, even if the original point was made just to rile me up. So, am I the asshole? Am I wrong? I seriously don’t know.

    source
    • Bibip@programming.dev ⁨2⁩ ⁨weeks⁩ ago

      a layperson cannot be relied upon to draw meaningful conclusions from a scholarly article. i learned this when i tried to do it. have you ever tried to read a spanish book, without knowing spanish, with nothing but an english-spanish dictionary? it’s very slow going and it works alright until someone speaks in idiom or metaphor, but even then you can mostly still get it. this is not always the case with scholarly articles.

      moreover, it’s a waste of time. if it takes you 30 hours to look up every term and graph, but it would have taken your biology friend 20 minutes to synthesize it for you, there’s an obvious solution here. if an LLM can save you 30 hours, and your biology friend 20 minutes, it’s a useful tool.

      source
  • Valmond@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

    I asked ChatGPT and it told me:

    Wrong network configuration

    source
  • Iconoclast@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

    Just because the final output comes from AI doesn’t always mean a human didn’t put real effort into writing it. There’s a big difference between asking an LLM to write something from scratch, telling it exactly what to say, or just having it edit and polish what you already wrote.

    A ton of my replies here - including this one - are technically “AI output,” but all the AI really did was take what I wrote, clean it up, and turn it into coherent text that’s easier for the reader to follow.

    spoiler

    Original text: Just because the final output is by AI doesn’t always mean human didn’t put effort into writing it. There’s a difference between asking LLM to write something, telling LLM what to write or asking it to edit something you wrote. A large number of my replies here, including this one, are technically “AI output” but all the AI did was go through what I wrote and try and turn it into coherent text that the is easy for the recipient to consume.

    source
    • nickiwest@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I don’t think the LLM made your response better in a meaningful way. Sure, it cleaned up the grammar a little bit, but the rephrasing in a few places is not necessary.

      Trust yourself to communicate without help from external software.

      source
      • reksas@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

        making people dependent on an external service is the very point of llms, from the investors’ point of view. Imagine how much money they will make if everyone just couldn’t live without an llm in every aspect of their life.

        source
      • Iconoclast@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

        I only did it here to illustrate a point. Typically I only use it on longer posts. I’m not a native English speaker, I often struggle to express my thoughts clearly, and I find it immensely useful to run my text through AI and see the corrections it makes.

        source
      • Bibip@programming.dev ⁨2⁩ ⁨weeks⁩ ago

        there are many use-cases, and you’ve neglected one: linguistic analysis can be used to identify a person and to link them to other accounts. i’m not saying it’s likely or apocalyptic, but it is true and present. using an LLM to “sanitize” your outputs can prevent this.

        from a privacy perspective, everyone should do this using a locally hosted LLM. from a person-that-uses-the-internet perspective, i would absolutely hate it if every article and every comment looked like an identical brand of ai slop.
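        (As a sketch of that locally hosted approach: the script below points at an Ollama instance running on your own machine, so the text never leaves it. The endpoint is Ollama’s default, but the model name and prompt wording are just placeholder assumptions:)

```python
import json
from urllib import request

# Default endpoint for a locally running Ollama instance.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_rewrite_request(text: str, model: str = "llama3") -> bytes:
    """Build the JSON body asking the local model for a neutral-style rewrite."""
    body = {
        "model": model,  # placeholder name; use whichever model you have pulled
        "prompt": ("Rewrite the following in plain, neutral prose, "
                   "keeping the meaning intact:\n\n" + text),
        "stream": False,  # single JSON response instead of a token stream
    }
    return json.dumps(body).encode("utf-8")

def sanitize(text: str) -> str:
    """POST the rewrite request to the local instance and return the rewritten text."""
    req = request.Request(OLLAMA_URL, data=build_rewrite_request(text),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

        Since the model runs locally, the style-scrubbing benefit comes without handing your writing to a third party.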

        source
      • Rekorse@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        I’d argue that with a little bit of practice it’s quicker to write a comment and then revise it yourself. Fix the punctuation, grammar, and misspellings, and read it through at least once. It’s a useful skill to learn as well.

        source
    • Anarki_@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

      I read your original just fine.

      source
    • Krzd@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Seriously, your text was good enough for a comment, and for everything else, just put in some effort? It’s really not that hard, and using AI actively harms people.

      source
    • AeonFelis@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      While your use case may not suffer from the problem described in the post[^1], I don’t think it’s worth weakening the proposed etiquette for it. If a norm that reduces the generated garbage people can inflict on one another costs us slightly worse-worded texts, that’s a price I’m willing to pay.

      [^1]: While it does exhibit other generative AI issues, like the environmental impact, or how it makes you reliant on companies just waiting to start enshittifying the field, it does not suffer from the issue of forcing humans to read meaningless slop that no one bothered to write.

      source
    • TheSeveralJourneysOfReemus@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Y’know, i only use AI for horizontal side research and support next to the main non-AI search. Other than that, i write my stuff, all of it.

      source
  • texture@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    i think the idea of blocking someone over that is pretty over the top

    source
  • workgood@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

    it’s not, no. it’s fine

    source
  • benny@reddthat.com ⁨2⁩ ⁨weeks⁩ ago

    Chat is just the wrong interface to AI, period. If you use it as an agentic tool with human review, it either works or it doesn’t, and you can improve it for the task.
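    (One minimal reading of “agentic tool with human review” is a gate where nothing the tool proposes is applied without explicit approval. The proposal strings and the approval callback below are illustrative stand-ins, not any real agent API:)

```python
def review_gate(proposals, approve):
    """Apply only the proposals a human reviewer approves; collect the rest."""
    applied, rejected = [], []
    for p in proposals:
        (applied if approve(p) else rejected).append(p)
    return applied, rejected

# Example: the "reviewer" is a stand-in callback that rejects risky changes;
# in practice this would be an actual person looking at each proposal.
proposals = ["rename variable", "delete prod database", "add unit test"]
ok, dropped = review_gate(proposals, lambda p: "delete" not in p)
```

    The point of the gate is exactly the commenter’s: the tool’s output is reviewable and improvable rather than dumped on someone as chat.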

    source