
Microsoft 365's buggy Copilot 'Chat' has been summarizing confidential emails for a month — yet another AI privacy nightmare

337 likes

Submitted 7 hours ago by throws_lemy@reddthat.com to technology@lemmy.world

https://www.windowscentral.com/artificial-intelligence/microsoft-copilot/microsoft-365-copilot-ai-summarizing-confidential-emails


Comments

  • JATtho@lemmy.world 2 hours ago

    Yesterday I logged into my very old, dead M$ account. Holy fuck, the experience: forced ads and timed pop-ups. Thank god all of this fucking sloppy shit will now be deleted on my part. I’m not going near that diarrhea anymore unless I’m paid.

  • cenariodantesco@lemmy.world 7 hours ago

    microslop deez nuts

    • Frozengyro@lemmy.world 2 hours ago

      Microsloppy toppy

  • doug@lemmy.today 6 hours ago

    “Hey Copilot, what have Putin and Trump been exchanging emails about?”

  • plenipotentprotogod@lemmy.world 3 hours ago

    Even in the wide world of dubiously useful AI chatbots, Copilot really stands out for just how incompetent it is. The other day I was working on a PowerPoint presentation, and one of the slides included a photo with a kind of cluttered-looking background. Now, I can probably count the number of things that AI is genuinely good at on one hand, and context-aware image editing tends to be one of them, so I decided to click the Copilot button that Microsoft now has built directly into PowerPoint and see what would happen. A chat window popped up and I concisely explained what I wanted it to do: “please remove the background from the photo on slide 5.” It responded in that infuriating obsequious tone that they all have and assured me that it would be happy to help with my request just as soon as I uploaded my presentation.

    What?

    The chatbot running inside an instance of PowerPoint with my presentation open is asking me to “upload” my presentation? I explained this to it, and it came back with some BS about being unable to access the presentation because a “token expired” before requesting again that I upload my presentation. I tried a little longer to convince it otherwise, but it just kept very politely insisting that it was unable to do what I was asking for until I uploaded my presentation.

    Eventually I gave up. The photo wasn’t that bad anyway.

    • Silic0n_Alph4@lemmy.world 1 hour ago

      Was the presentation saved in OneDrive? I’ve seen similar responses during my brief experiments with the tech. Copilot seems to be basically just an iframe in the Office apps rather than actually integrated.

  • Warl0k3@lemmy.world 6 hours ago

    For clarity, the emails are only being summarized for the users who wrote them; it’s not leaking them to everyone. A comically inept bug to allow, though, holy shit.

    • horn_e4_beaver@discuss.tchncs.de 2 hours ago

      Allegedly

      • Warl0k3@lemmy.world 1 hour ago

        In this case there’s no evidence that it’s being spread widely; the bug reports are entirely about users being shown their own content. If you have something that disputes that, I’m all ears.

    • Reygle@lemmy.world 6 hours ago

      AITA for understanding that as meaning that, in order to “summarize” the data, the AI read it in its entirety and will never be instructed to “forget” it?

      • TRBoom@lemmy.zip 6 hours ago

        Unless someone has released something new while I haven’t been paying attention, all the gen AIs are essentially frozen. Your use of them can’t impact the actual weights inside the model.

        If it seems like it’s remembering things, that’s because the actual input to the LLM is larger than the input you usually give it.

        For instance, let’s say the max input for a particular LLM is 9096 tokens. The first part of that will be instructions from the owners of the LLM to prevent their model from being used for things they don’t like; let’s say that takes the first 2000 tokens. That leaves 7k or so for a conversation that will be ‘remembered’.

        Now, if someone was really savvy, they’d have the model generate summaries of the conversation and stick them into another chunk of memory, maybe another 2000 tokens’ worth; that way it would seem to remember more than just the current thread. That would leave you with about 5000 tokens for a running conversation (rough sketch below).
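
        A toy Python sketch of that budgeting idea; the window size, the 2000-token reservations, and the word-count “tokenizer” are made-up placeholders for illustration, not anything from an actual product:

        ```python
        # Toy illustration of the context-window budgeting described above.
        # All numbers and the crude tokenizer are placeholder assumptions,
        # not real Copilot/LLM internals.

        MAX_TOKENS = 9096        # hypothetical context window
        SYSTEM_TOKENS = 2000     # reserved for the owner's instructions
        SUMMARY_TOKENS = 2000    # reserved for a rolling summary of older turns

        def count_tokens(text: str) -> int:
            # Stand-in for a real tokenizer: roughly one token per word.
            return len(text.split())

        def build_prompt(system: str, summary: str, turns: list[str]) -> str:
            # Roughly 5000 tokens remain for the live conversation.
            budget = MAX_TOKENS - SYSTEM_TOKENS - SUMMARY_TOKENS
            kept, used = [], 0
            # Keep the most recent turns that still fit in the remaining budget.
            for turn in reversed(turns):
                cost = count_tokens(turn)
                if used + cost > budget:
                    break
                kept.append(turn)
                used += cost
            kept.reverse()
            # Everything the model "remembers" is just text re-sent on every call;
            # nothing is written back into the frozen weights.
            return "\n\n".join([system, summary] + kept)
        ```

        Anything that doesn’t make it back into that string is gone as far as the model is concerned.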

      • VeganCheesecake@lemmy.blahaj.zone 2 hours ago

        LLMs are stateless. The model itself stays the same. Doesn’t mean they’re not saving the data elsewhere, but the LLM does not retain interactions.

      • fuckwit_mcbumcrumble@lemmy.dbzer0.com 5 hours ago

        Why would that make you an asshole?

  • Naia@lemmy.blahaj.zone 5 hours ago

    Who could have seen this coming?

  • wobblyunionist@piefed.social 5 hours ago

    The repercussions of pushing something that no one wants, and pushing it too quickly on top of that.

  • unnamed1@feddit.org 5 hours ago

    Wouldn’t that rather be an MS Graph bug? Why give Copilot all the mails and have it decide what to summarise and what not? I’m sure that’s not the way it’s implemented. I don’t understand the framing of the article, tbh. “AI bad” seems to click well.
