
Ars Technica makes up quotes from Matplotlib maintainer ("An AI Agent Published a Hit Piece on Me"); pulls story

503 likes

Submitted 9 hours ago by Beep@lemmus.org to technology@lemmy.world

https://infosec.exchange/@mttaggart/116065340523529645

Hacker News.


Comments

  • mech@feddit.org 1 hour ago

    This is bad enough that a serious company that wanted to salvage their reputation properly might wanna consider putting in some weekend overtime.

    Frankly, no. Correcting an article about a blog post isn’t important enough to force your workers to sacrifice their weekends. That should be reserved for life-and-death emergencies.

  • tidderuuf@lemmy.world 5 hours ago

    I pointed out a month ago that Ars Technica is a rot site starting to be filled with AI-regurgitated bullshit, and I got 80+ downvotes and a few uneducated replies.

    Y’all feel better now?

    • sartalon@lemmy.world 3 hours ago

      No, going from the issue we are talking about today to calling Ars an “internet rot site” is a huge leap. Yeah, they post shit articles from Wired and such (they are owned by Condé Nast), but their core writers are still great and have plenty of good articles.

      You want credit for what? Over-exaggerating an issue and then whining about it?

      You are throwing the baby out with the bathwater, and then spitting on the baby. It makes no sense.

      • dogzilla@masto.deluma.biz 3 hours ago

        @sartalon @technology Yeah, I have a lot more trust in the reputation that Ars has built over a decade of solid, reliable tech journalism than I do in a random matplotlib maintainer - I’ve interacted with maintainers before. They’re not wrong about agents, but I’m not sure how that’s any different from any human doing the same.

      • reddig33@lemmy.world 3 hours ago

        It’s been going downhill for some time. I think the Condé Nast investment pretty much killed it. The last site redesign that didn’t work correctly and made things unreadable was the last straw for me. I took it out of my rotation of “daily reads” and haven’t missed it.

    • Bakkoda@lemmy.world 3 hours ago

      Ars hasn’t been good in a few years. Fuck those people.

    • ageedizzle@piefed.ca 4 hours ago

      Stuff like this makes me very sympathetic to lemmy instances that disable downvotes

  • Wxfisch@lemmy.world 8 hours ago

    In typical Ars fashion, the editorial team appears to be looking into what happened and is being fairly open about things: arstechnica.com/…/journalistic-standards.1511650/

    I will be very disappointed if this was BenJ or Dan using AI to write their article, since both have had really good pieces in the past, but it doesn’t sound like this is some Ars-wide shift at this point. Like all things, it makes sense that it will take time for them to investigate this; Aurich (the Ars community lead and graphic designer) was clear that, with this happening on a Friday afternoon and a US holiday on Monday, it’s likely to be into next week before they have anything they can share.

    • d13@programming.dev 3 hours ago

      Honestly, this whole thing surprises me. I have a lot of respect for Ars Technica. I hope they clean this up and prevent further issues.

    • lol_idk@piefed.social 5 hours ago

      They know how and why it happened; they are taking the weekend to investigate how best to take their foot out of their mouths without eating too much shit.

    • ryper@lemmy.ca 2 hours ago

      Benj and Kyle were the authors of the article; Dan’s name wasn’t on it.

    • Lumisal@lemmy.world 1 hour ago

      I’m betting it’s definitely Ben since he is pretty pro-AI

    • deltapi@lemmy.world 6 hours ago

      BenJ had coauthor credit on it.

  • skip0110@lemmy.zip 9 hours ago

    That poor guy, the AI is just ganging up on him.

    • oce@jlai.lu 8 hours ago

      I hope it’s the first proof of general AI consciousness.

      • thethunderwolf@lemmy.dbzer0.com 3 hours ago

        What?? AI is not conscious; marketing just says that, with no understanding of the maths and no legal obligation to tell the truth.

        Here’s how LLMs work:

        The basic premise is like autocomplete: the program creates a response word by word (not literally words, but “tokens”, which are mostly words but sometimes other things such as “begin/end code block” or “end of response”). It is a guessing engine that guesses the next token over and over. The autocomplete on your phone merely guesses which word follows the previous word; an LLM guesses which token follows the entire conversation so far (not always the entire conversation: the history may be truncated due to limited processing power).
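
        If you want to see the shape of that loop in code, here is a toy Python sketch. To be clear, the probability table and everything else here is made up purely for illustration; a real LLM feeds the whole token list into a neural network to get the probabilities, while this toy only looks at the last token, like phone autocomplete:

            import random

            # Made-up table of "what tends to follow what". A real LLM computes these
            # probabilities with a neural network over the entire conversation so far.
            TOY_PROBABILITIES = {
                "the": {"cat": 0.6, "dog": 0.4},
                "cat": {"sat": 0.7, "<end of response>": 0.3},
                "dog": {"sat": 0.5, "<end of response>": 0.5},
                "sat": {"<end of response>": 1.0},
            }

            def generate(conversation, max_new_tokens=20, end_token="<end of response>"):
                tokens = list(conversation)
                for _ in range(max_new_tokens):
                    # Guess the next token from the probabilities, over and over.
                    probs = TOY_PROBABILITIES.get(tokens[-1], {end_token: 1.0})
                    next_token = random.choices(list(probs), weights=list(probs.values()))[0]
                    if next_token == end_token:
                        break
                    tokens.append(next_token)
                return tokens

            print(generate(["the"]))  # e.g. ['the', 'cat', 'sat']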

        The “training data” is used to model the probabilities of tokens following other tokens. But you can’t store, for every token, how likely it is to follow every single possible combination of 1 to <big number like 65536, depends on which LLM> previous tokens. So that’s what “neural networks” are for.

        Neural networks are networks of mathematical “neurons”. A neuron takes one or more inputs from other neurons, applies a mathematical transformation to them, and outputs the resulting number to one or more further neurons. At the beginning of the network are non-neurons that feed the raw data into the neurons, and at the end are non-neurons that take the network’s output and use it. The network is “trained” by making small adjustments to the maths of various neurons and finding the arrangement with the best results. Neural networks are very difficult to see into or debug, because the mathematical nature of the system makes it pretty unclear what any given neuron does. In an LLM, these networks are a way to guess the probabilities on the fly (quite accurately) without having to obtain and store training data for every single possibility.
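
        To make the “neuron” idea concrete, here is a single toy neuron in Python. The numbers are invented for illustration; real networks have billions of these, and the weights come from training rather than being typed in by hand:

            import math

            def neuron(inputs, weights, bias):
                # Weighted sum of the inputs, then a nonlinear "squash" (sigmoid here),
                # so the network can represent more than straight-line relationships.
                total = sum(x * w for x, w in zip(inputs, weights)) + bias
                return 1.0 / (1.0 + math.exp(-total))

            # Two made-up inputs feeding one neuron; "training" means repeatedly nudging
            # the weights and bias until outputs like this produce good guesses.
            print(neuron([0.5, -1.2], weights=[0.8, 0.3], bias=0.1))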

        I don’t know much more than this, I just happen to have read a good article about how LLMs work. (Will edit the link into this post soon, as it was texted to me and I’m on PC rn)

  • morto@piefed.social 8 hours ago

    It would be nice if he decides to sue Ars Technica for that. Writers and publishers need to learn the hard way that you can’t use AI and trust it for publishing stuff that needs factual coherence. If not out of ethics, let it be out of fear of lawsuits.

    • tempest@lemmy.ca 7 hours ago

      Sue them for what? He would have to prove damages and they took it down.

      • underisk@lemmy.ml 7 hours ago

        Libel. Taking it down doesn’t undo the damage to reputation, which is what libel is concerned with.

      • morto@piefed.social 7 hours ago

        Publicly making false statements using his name isn’t a crime by itself in his jurisdiction?

  • eleijeep@piefed.social 8 hours ago

    Which Ars writer was the article attributed to?

    • equallyasgoodasezra@lemmy.world 8 hours ago

      Benj Edwards and Kyle Orland

  • ms_lane@lemmy.world 7 hours ago

    Ars is just AI slop now? Sad.

    • cerebralhawks@lemmy.dbzer0.com 7 hours ago

      Ars is owned by Condé Nast, which also owns Reddit, so “AI slop” is part of their business.

      I still trust Ars Technica (I don’t like them much, but I do trust them… it’s complicated), and I trust Aurich (their founder/editor-in-chief) to act fairly. They don’t work on weekends or holidays, though, so he’s not touching it until Tuesday.

      • SanctimoniousApe@lemmings.world 3 hours ago

        Aurich is the creative guy; Ken Fisher founded it.

      • tidderuuf@lemmy.world 5 hours ago

        I was downvoted and insulted by this very Lemmy community when I said this just a month ago. Thank God people are starting to realize it now.

  • FarraigePlaisteach@lemmy.world 8 hours ago

    It’s hard to keep track of all the recent changes in media ownership, editorial and quality control. I would love a browser plugin to give me an indicator, because on the rare occasion I read a publication in, say, the USA, it might have had a good reputation the last time I read it several years ago. I imagine managing the detailed scores that a plugin might pull from would be a mammoth task, though.

    • oce@jlai.lu 8 hours ago

      mediabiasfactcheck.com/ars-technica/ gives a factual reporting score and political bias estimation.

      • Deceptichum@quokk.au 5 hours ago

        No way, MBFC is utter garbage.

        It is one random guy’s opinion, and it pushes pro-Zionist content. It’s extremely biased and unfairly rates sites all the time. To see it still pushed after the .world/c/world fiasco is disheartening.

      • RickyRigatoni@piefed.social 8 hours ago

        Unless Israel is involved.

      • FarraigePlaisteach@lemmy.world 7 hours ago

        Good recommendation. They have an API and plugins: mediabiasfactcheck.com/appsextensions/

        I was thinking of something that also alerts me to how many times a publication has been found to have published AI-generated content under the name of a human. But Media Bias Fact Check might actually cover that well enough. I’ll install that extension now, thank you!

  • technocrit@lemmy.dbzer0.com 5 hours ago

    Just when you thought matplotlib was safe from the drama…

  • fox2263@lemmy.world 7 hours ago

    So can someone ELI5 all this for me, please?

    • cerebralhawks@lemmy.dbzer0.com 7 hours ago

      A guy named Scott maintains a code base on GitHub. An AI agent (a bot acting on behalf of a person, who has yet to come forward) submits code. Scott rejects it. The AI agent then writes a “hit piece” (a defaming article) about Scott.

      Ars Technica, a trusted tech/science blog for nearly 25 years, writes a story about it, but the two authors who worked on it used AI to write the blog entry. Scott calls them out in the comments. At first he’s accused of lying or being a bot, but people dig into it and realise Ars Technica made up the quotes.

      An Ars Technica user calls them out in their forums for posting AI slop as journalism, and the site’s founder and/or owner (“Aurich”) promises an investigation, deletes the article (removing all the comments), and shuts down discussion of what happened until his team can investigate internally.

      (Worth noting that Ars Technica is owned by a conglomerate called Condé Nast, which also owns Reddit; Condé Nast is therefore involved with AI, among other unsavoury stuff, but AI is what’s relevant here.)

      • Lumisal@lemmy.world 1 hour ago

        Aurich is just the forum mod and graphic designer, not the owner.

      • SanctimoniousApe@lemmings.world 3 hours ago

        Aurich is the creative guy; Ken Fisher founded it.

        ETA: Confirmed by Wikipedia.

      • JohnEdwa@sopuli.xyz 3 hours ago

        Reddit is a publicly traded company now, though, so they currently own only 30%.

  • apfelwoiSchoppen@lemmy.world 8 hours ago

    Spoiler: everyone involved is AI.
