Sending someone LLM output in response to a question they ask is the intellectual equivalent of sending an unsolicited dick pic.

201 likes

Submitted 23 hours ago by raspberriesareyummy@lemmy.world to showerthoughts@lemmy.world


Comments

  • mushroommunk@lemmy.today 23 hours ago

    I recently read something in an article that struck me as the heart of it, and it fits here.

    “Generative AI sabotages the proof-of-work function by introducing a category of texts that take more effort to read than they did to write. This dynamic creates an imbalance that’s common to bad etiquette: It asks other people to work harder so one person can work—or think, or care—less. My friend who tutors high-school students sends weekly progress updates to their parents; one parent replied with a 3,000-word email that included section headings, bolded his son’s name each time it appeared, and otherwise bore the hallmarks of ChatGPT. It almost certainly took seconds to generate but minutes to read.” - Dan Brooks

    • stepan@lemmy.cafe 23 hours ago

      That’s something I’ve attempted to say more than once but never formulated this well.

      Every time I search for something tech-related, I have to spend a considerable amount of energy just trying to figure out whether I’m looking at a well-written technical document or crap resembling it. It’s especially hard when I’m very new to the topic.

      Paradoxically, AI slop made me actually read the official documentation much more, as it’s now easier than doing this AI-checking. And also personal blogs, where it’s usually clearly visible that they are someone’s beloved little digital garden.

      • saltesc@lemmy.world 22 hours ago

        That’s something I’ve attempted to say more than once but never formulated this well.

        Did you try ChatGPT?

      • mushroommunk@lemmy.today 22 hours ago

        Funny how people whose job it is to write can sometimes write gooder than us common folk.

    • raspberriesareyummy@lemmy.world 23 hours ago

      I had this “shower” thought when chatting with a friend and getting an obviously LLM-generated answer to a grammar question I had (needless to say, the LLM answer misunderstood the nuance of my question just as much as the friend did before). Thank you for linking the article; I will share it with my friend to explain my strong reaction (“please never ever do that again”).

      • mushroommunk@lemmy.today 22 hours ago

        AI and someone who uses AI missed nuance? This is my surprised face. (- _ -⁠)

    • Yaky@slrpnk.net 15 hours ago

      The question I ask is “How do you justify saving your time at the expense of others’ time?”

      Haven’t heard a good answer, just mumbling “it can be set to be less verbose…”

    • fizzle@quokk.au 22 hours ago

      The most annoying part - the recipient’s email client probably offered to summarise it with an LLM. My bot makes slop for your bot to interpret.

      It’s the most inefficient form of communication ever devised. Please decompress my prompt 1000x so the recipient can compress it back to my prompt.

      I will say though, even a ChatGPT email tells you a lot about the sender.

    • BedSharkPal@lemmy.ca 23 hours ago

      Damn. Nailed it.

    • jjpamsterdam@feddit.org 12 hours ago

      Thank you for this great answer! It’s something I intuitively felt but couldn’t put my finger on with the same surgical precision you just did.

    • raspberriesareyummy@lemmy.world 22 hours ago

      Question: why does the linked community “theatlantic@ibbit.at” show up here on lemmy.world (lemmy.world/c/theatlantic@ibbit.at), but with zero posts visible in it? I mean, since you commented from lemmy.today, we are clearly federated? I am confused - I wanted to comment on the article you linked with a question, but I can’t find it via lemmy.world :(

      • mushroommunk@lemmy.today 22 hours ago

        Federation sometimes has a few quirks. Seems like you figured it out, though.

      • Rhynoplaz@lemmy.world 21 hours ago

        Let me go ask AI and copy the response below for you.

  • owenfromcanada@lemmy.ca 2 hours ago

    I don’t quite get the equivalence there. I’d say an LLM response is more on par with responding with a link to lmgtfy.com or something.

    The intellectual equivalent of sending someone a dick pic would be a cold contact with LLM-generated text promoting or pushing something that you didn’t otherwise show interest in. Or like that friend from high school who messages you out of the blue, and you realize after a few messages that they’re trying to sell you their MLM garbage.

    • raspberriesareyummy@lemmy.world 1 hour ago

      I don’t quite get the equivalence there.

      It’s garbage insulting your intellect and personal relationship with the sender. Whereas an unsolicited dick pic is garbage insulting your eyes and personal relationship with the sender.

      • owenfromcanada@lemmy.ca 1 hour ago

        They’re both garbage, sure, but I wouldn’t call them equivalent. Especially in severity: one is insulting, the other is sexual harassment.

        The key word is “unsolicited.” An LLM response to a question you ask is garbage, but it’s solicited garbage. Like asking someone in Home Depot where the hammers are, and having them take 10 minutes to look it up on their phone. It’s a stupid response, but it was solicited. It’s at least a lazy attempt to respond relevantly, however insulting.

  • CombatWombatEsq@lemmy.world 22 hours ago

    To me, it is exactly the same as people who linked lmgtfy.com or responded RTFM. If you send me an LLM summary, I’m assuming you’re claiming that I’m the asshole for bothering you. If I am being lazy, I’ll take the hint. But if I’m struggling to do the research myself, either because I’m not sure how to research it properly or because LLMs have made the internet nigh-unusable, I’m gonna clock you as a tremendous asshole.

    • raspberriesareyummy@lemmy.world 21 hours ago

      I think there’s an important nuance to lmgtfy or RTFM. Those two were clearly identifiable as the kind of (sometimes snarky) minimum-effort response, and sometimes absolutely justified (e.g. if I googled OP’s question and the very first result correctly answers it, which I have made the effort of checking myself).

      For slop responses, however, the receiver sometimes has to invest considerable time in reading and processing the text just to realize that it might be pure slop. And when in doubt, as readers we are left with the moral dilemma of potentially offending the writer by asking “Did you just send me LLM output?”

      It is both harder to identify and drives a wedge into online (and personal) relationships, because it adds a layer of doubt or distrust. This slop shit is poison for internet friendships. Those tech bros all need to fuck off and use their money for a permanent coke trip straight until they become irrelevant. :/

      • CombatWombatEsq@lemmy.world 20 hours ago

        Oh yeah, I was thinking of people who link to LLM output, like this: chatgpt.com/…/697e8957-9494-8010-beb9-eb90c476051…

        Copy-pasting LLM summaries is definitely worse.

    • Kolanaki@pawb.social 22 hours ago

      RTFM

      This one really sucked post-2001 or so, when everything stopped coming with a fucking manual to read. What M am I supposed to R, guy?

      • BassTurd@lemmy.world 20 hours ago

        The only time it’s been kind of relevant in my dealings is the Arch wiki, because it really is a solid resource. However, sometimes my issue is a specific one and I need more than general information on a process. RTFM ruins communities when someone is looking for support. It’s just an entitled response to someone asking for help.

      • Klear@quokk.au 21 hours ago

        It’s not meant as an actual manual. What you’re really supposed to do is comb through ad-ridden google results until you find that one 10-year-old reddit thread where someone thanks a deleted comment for solving the issue you have.

  • DupaCycki@lemmy.world 14 hours ago

    I think I’d prefer an unsolicited dick pic.

    • andallthat@lemmy.world 8 hours ago

      Image

  • CallMeAnAI@lemmy.world 20 hours ago

    I mean on one hand, it’s a shower thought. On the other, this is a really dumb shower thought.

    • Apytele@sh.itjust.works 15 hours ago

      I often use AI to break up my ADHD mono-sentence paragraphs. I’ll stream-of-consciousness my reply, then tell it not to change my wording but to break up the excessively long sentences and to reorder and split things into paragraphs that follow well. I’m still doing the writing, but having an advanced spell check is actually super useful.

    • Drusas@fedia.io 16 hours ago

      I needed that reminder. It doesn't matter how stupid a showerthought is.

  • morto@piefed.social 22 hours ago

    Somehow, people don’t get that if we ask them something, it’s because we want their personal interpretation of it; otherwise, we would just use the internet ourselves.

    • raspberriesareyummy@lemmy.world 21 hours ago

      Specifically this - in terms of learning a language, understanding some nuances absolutely requires an explanation from a native speaker who has a really good grasp of their language AND a talent for explaining. Both of which are criteria diametrically opposed to the average slop training data.

  • letraset@feddit.dk 22 hours ago

    Receiving LLM output as an answer to a question is the equivalent of getting a voice reply to the question:

    “Quick question, are you free on Saturday afternoon?”

    • jjpamsterdam@feddit.org 12 hours ago

      I absolutely cannot stand the kind of people who answer a brief and simple yes-or-no question with a wall of text or a two-minute voice note. If it’s that complicated, because your pet chihuahua just had a stroke, you then fell head over heels in love with the veterinarian, and you’re currently at the airport to fly away for your spontaneous honeymoon, just say no and tell me the details in person.

      • bryndos@fedia.io 11 hours ago

        If I got that question by text, though, I’d normally ignore it until Monday. Bumming around doing nothing is one of my most valued hobbies. “Are you bored?” might get a response, but it’s better to reveal something about the proposed alternative: “Want to do macrame on Sunday?”

        I especially hate this one at work: “You free?” Unfortunately, a polite reply is expected in that context, so I can’t say “no” (I’m at work, as you fucking well know).

        The question normally means “I fucked up X and don’t know what to do about it.”

        If they don’t tell me what X is, how do I know where their fuckup ranks in the wider population of fuckery?

    • raspberriesareyummy@lemmy.world 21 hours ago

      Downloading audio message… Duration: 45 seconds

  • DeathByBigSad@sh.itjust.works 18 hours ago

    At least a dick can be useful to create life… an LLM can never become life

  • radicallife@lemmy.world 20 hours ago

    But I have my phone’s texting set permanently to respond with AI so I never have to talk to anyone.

  • friend_of_satan@lemmy.world 19 hours ago

    Pretty sure my boss did this to me today.

  • sparkles@piefed.zip 9 hours ago

    I get it, it’s obnoxious and annoying, bereft of deep thought or courtesy. Qualities the senders must possess. But I could go the rest of my life without seeing unsolicited genitals tbh.

    • raspberriesareyummy@lemmy.world 5 hours ago

      … as could I go the rest of my life without seeing unsolicited LLM garbage in my messages :)

  • Blaster_M@lemmy.world 21 hours ago

    Well, it’s common courtesy that if someone is asking you, you assume they already asked google or whatever and think you might have the answer they can’t find.

    • raspberriesareyummy@lemmy.world 20 hours ago

      That, and for some questions (i.e. nuances), a personal opinion is much more relevant to the asker than some random slop explanation. In this case I wanted to know which word construct in Turkish comes closest to the English “[ so and so ] is [ whatever ], isn’t it?” vs. “[ so and so ] is not [ whatever ], is it?” - because Turkish has “isn’t it?” (değil mi? = not so?) but it doesn’t have “is it?”, mostly because “to be” is used much differently in the language.

      A google result wouldn’t help me at all - the pure grammar answer is “there’s no form of ‘is it’ to be coupled with a negative assumption/assertion”. But does a language construct exist to convey the nuance of “the speaker assumes that something is NOT [soandso] and wants to ask for confirmation” vs. the speaker assuming that something IS [soandso] and asking for confirmation?

      I still don’t know the answer, but it appears this nuance can’t be expressed in Turkish without describing around it in a longer sentence.

  • Jakeroxs@sh.itjust.works 21 hours ago

    Specifically if you don’t even specify it’s AI. Like, I don’t mind using it, but be upfront that you don’t know and consulted an AI.

    Like, I see it happening at my work: people just straight copy-pasting from Copilot or w/e, and it’s clear to me that’s what it is (especially if it’s discussing things I know that person has never heard of before lol)

    • raspberriesareyummy@lemmy.world 21 hours ago

      I am slowly switching to increasingly less diplomatic reactions when I feel someone is using slop to respond to me or produce any kind of work text. Eventually I’ll probably advance to offensive reactions à la “Are you so f*cking incompetent that you can’t do better than copy-pasting into a glorified word prediction software?”

      • Jakeroxs@sh.itjust.works 18 hours ago

        I definitely use it at work to “corporate” my emails or descriptions for things because my way of speaking would be frowned upon lmao. Literally “corpo this sentence please” or something along those lines.

  • HubertManne@piefed.social 20 hours ago

    I mean, I don’t care if they use it like a search engine to remind themselves about a topic, if they had some knowledge of it before they looked it up and if they put some cognitive power into going over the answer, absorbing it, and responding in their own words. But yeah, a cut and paste, or if they know nothing about it and parrot off what the LLM tells them - that’s annoying.

    • raspberriesareyummy@lemmy.world 19 hours ago

      While it doesn’t affect me directly if people use it “like a search engine”, it still empowers the tech bro billionaires, who are the worst of the worst scum of mankind, and it fucks up democracy, the environment, and hardware prices. So I’d rather everyone just boycotted this BS.

      • HubertManne@piefed.social 19 hours ago

        Doesn’t using a search engine do the same thing - empower the tech bros? Do you expect people not to use search engines? Because man, that is just not going to happen.
