
Ars Technica makes up quotes from Matplotlib maintainer (“An AI Agent Published a Hit Piece on Me”); pulls story

880 likes

Submitted 2 weeks ago by Beep@lemmus.org to technology@lemmy.world

https://infosec.exchange/@mttaggart/116065340523529645

Hacker News.

source

Comments

  • Wxfisch@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    In typical Ars fashion, the editorial team appears to be looking into what happened and is being fairly open about things: arstechnica.com/…/journalistic-standards.1511650/

    I will be very disappointed if this was Benj or Dan using AI to write their article, since both have had really good pieces in the past, but it doesn’t sound like this is some Ars-wide shift at this point. Like all things, it makes sense that it will take time for them to investigate this. Aurich (the Ars community lead and graphic designer) was clear that, with this happening on a Friday afternoon and a US holiday on Monday, it’s likely to be into next week before they have anything they can share.

    source
    • d13@programming.dev ⁨2⁩ ⁨weeks⁩ ago

      Honestly, this whole thing surprises me. I have a lot of respect for Ars Technica. I hope they clean this up and prevent further issues in the future.

      source
    • lol_idk@piefed.social ⁨2⁩ ⁨weeks⁩ ago

      They know how and why it happened; they are taking the weekend to figure out how best to take their foot from their mouths without eating too much shit.

      source
      • sukhmel@programming.dev ⁨2⁩ ⁨weeks⁩ ago

        This shouldn’t be a problem anatomically, it’s hard to eat anything with a foot in your mouth anyway

        source
    • echodot@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

      What do they have to investigate? Did one of them accidentally get an AI to write the article and then accidentally post the article, like they just fell on the keyboard and accidentally typed in a prompt? Come on.

      source
      • Wxfisch@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I would hazard a guess they are investigating how the use of AI was missed in their editorial process, how they missed the incorrect quotes, and, since it’s a coauthored piece, who violated their journalistic standards by using an AI to directly write article text.

        source
    • deltapi@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Benj had coauthor credit on it.

      source
    • ryper@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      Benj and Kyle were the authors of the article; Dan’s name wasn’t on it.

      source
    • Fmstrat@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Benj was an author: web.archive.org/…/after-a-routine-code-rejection-…

      Though in the Ars response they say “Scott’s post”, so I’m confused.

      source
      • PumaStoleMyBluff@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Scott is the subject of the article, who was misquoted by Ars and maligned by the slopbot.

        source
    • Lumisal@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I’m betting it’s definitely Ben since he is pretty pro-AI

      source
  • tidderuuf@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I pointed out a month ago that Ars Technica is a rot site and starting to be filled with AI regurgitated bullshit and got 80+ down votes and a few uneducated replies.

    Y’all feel better now?

    source
    • sartalon@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      No. The issue we are talking about today is one thing; calling Ars an “internet rot site” is a huge leap. Yeah, they post shit articles from Wired and such (they are owned by Condé Nast), but their core writers are still great and have plenty of good articles.

      You want credit for what? Over exaggerating an issue then whining about it?

      You are throwing the baby out with the bathwater, and then spitting on the baby. It makes no sense.

      source
      • reddig33@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        It’s been going downhill for some time. I think the Condé Nast investment pretty much killed it. The last site redesign that didn’t work correctly and made things unreadable was the last straw for me. I took it out of my rotation of “daily reads” and haven’t missed it.

        source
      • Hypx@piefed.social ⁨2⁩ ⁨weeks⁩ ago

        It’s one of the stages of enshittification. Unless we see hard changes to avoid further decay, Ars will inevitably get worse and worse until it does become an “internet rot site.”

        source
      • dogzilla@masto.deluma.biz ⁨2⁩ ⁨weeks⁩ ago

        @sartalon @technology Yeah, I have a lot more trust in the reputation that Ars has built over a decade of solid reliable tech journalism than I do in a random matplotlib maintainer - I’ve interacted with maintainers before. They’re not wrong about agents, but not sure how that’s any different from any human doing the same.

        source
      • tidderuuf@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Simp a little harder for them next time. They appreciate it.

        source
    • jaennaet@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

      Apparently you still can’t criticise the Holy Ars even when they put out AI slop articles, because that’s SPITTING ON BABIES

      source
    • Bakkoda@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Ars hasn’t been good in a few years. Fuck those people.

      source
    • ageedizzle@piefed.ca ⁨2⁩ ⁨weeks⁩ ago

      Stuff like this makes me very sympathetic to lemmy instances that disable downvotes

      source
      • jaennaet@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

        Downvotes are just samethink fuel.

        source
      • Buddahriffic@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I read the comment, then judge the comment and use that judgement and voting scores to judge the community.

        source
  • skip0110@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    That poor guy, the AI is just ganging up on him.

    source
    • oce@jlai.lu ⁨2⁩ ⁨weeks⁩ ago

      I hope it’s the first proof of general AI consciousness.

      source
      • thethunderwolf@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        What?? AI is not conscious; marketing just says that, with no understanding of the maths and no legal obligation to tell the truth.

        Here’s how LLMs work:

        The basic premise is like autocomplete: the model builds a response token by token (“tokens” are mostly words, but sometimes other things such as “begin/end code block” or “end of response”). The program is a guessing engine that predicts the next token, over and over. The autocomplete on your phone merely guesses which word follows the previous word; an LLM guesses which token follows the entire conversation so far (or, when the history is too long for the model’s limited context, a truncated version of it).
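        That “guess the next token from the whole context” loop can be sketched as a toy (the vocabulary, the hard-coded probability table, and all names here are invented for illustration; a real LLM replaces the table with a neural network over tens of thousands of tokens):

```python
def next_token_probs(context):
    """Stand-in for the model: given the WHOLE context so far, return a
    probability for each candidate next token. A real LLM computes this
    with a neural network; here it's just a hard-coded table."""
    table = {
        (): {"the": 1.0},
        ("the",): {"cat": 0.9, "mat": 0.1},
        ("the", "cat"): {"sat": 1.0},
        ("the", "cat", "sat"): {"on": 1.0},
        ("the", "cat", "sat", "on"): {"the": 1.0},
        ("the", "cat", "sat", "on", "the"): {"mat": 1.0},
        ("the", "cat", "sat", "on", "the", "mat"): {"<end>": 1.0},
    }
    return table[tuple(context)]

def generate():
    """Autoregressive loop: repeatedly guess the next token from the
    entire context (not just the previous word) until '<end>'."""
    context = []
    while True:
        probs = next_token_probs(context)
        token = max(probs, key=probs.get)  # greedy: pick the most likely
        if token == "<end>":
            return context
        context.append(token)

print(" ".join(generate()))  # the cat sat on the mat
```

        Greedily picking the most likely token is the simplest strategy; real chatbots usually sample with some randomness (“temperature”) instead.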

        The “training data” is used to model the probabilities of tokens following other tokens. But you can’t store, for every token, how likely it is to follow every single possible combination of 1 to <big number like 65536, depends on which LLM> previous tokens. That’s what “neural networks” are for.

        Neural networks are networks of mathematical “neurons”. A neuron takes one or more inputs from other neurons, applies a mathematical transformation to them, and passes the resulting number on to one or more further neurons. At the start of the network, non-neurons feed the raw data in; at the end, non-neurons take the network’s output and use it. The network is “trained” by making small adjustments to the maths of various neurons and keeping the arrangement with the best results. Neural networks are very difficult to inspect or debug, because the mathematical nature of the system makes it unclear what any given neuron does. In an LLM, the network is a way to guess the probabilities on the fly (quite accurately) without having to obtain and store training data for every single possibility.
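        A single “neuron” as described above, wired into a minimal network, might look like this (the weights and layer sizes are made-up numbers, purely illustrative):

```python
import math

def neuron(inputs, weights, bias):
    """One mathematical 'neuron': weight each input, sum them with a
    bias, then apply a nonlinear squashing function (sigmoid here)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(x1, x2):
    """Two hidden neurons feed one output neuron. 'Training' would
    nudge these weight numbers until the outputs improve; with millions
    of neurons it becomes hard to say what any one of them does."""
    h1 = neuron([x1, x2], [0.5, -0.3], 0.1)
    h2 = neuron([x1, x2], [-0.2, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.05)

print(tiny_network(1.0, 0.0))  # some value strictly between 0 and 1
```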

        I don’t know much more than this, I just happen to have read a good article about how LLMs work. (Will edit the link into this post soon, as it was texted to me and I’m on PC rn)

        source
  • morto@piefed.social ⁨2⁩ ⁨weeks⁩ ago

    It would be nice if he decided to sue Ars Technica for that. Writers and publishers need to learn the hard way that you can’t use AI and trust it for publishing stuff that needs factual coherence. If not out of ethics, let it be out of fear of lawsuits.

    source
    • tempest@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      Sue them for what? He would have to prove damages and they took it down.

      source
      • underisk@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

        Libel. Taking it down doesn’t undo the damage to reputation which libel is concerned with.

        source
      • morto@piefed.social ⁨2⁩ ⁨weeks⁩ ago

        Publicly making false statements using his name isn’t a crime by itself in his jurisdiction?

        source
  • mech@feddit.org ⁨2⁩ ⁨weeks⁩ ago

    This is bad enough that a serious company that wanted to salvage their reputation properly might wanna consider putting in some weekend overtime.

    Frankly, no. Correcting an article about a blog post isn’t important enough to force your workers to sacrifice their weekends.
    That should be reserved for life-and-death emergencies.

    source
    • 3abas@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Now what to do about the lazy writer who used AI to write the article and didn’t bother to fact-check it and make sure the quotes were real?

      Fixing the article, weekend or next week, doesn’t address the problem itself.

      source
    • kilgore_trout@feddit.it ⁨2⁩ ⁨weeks⁩ ago

      That should be reserved for life-and-death emergencies.

      Well, they are going to see how many people keep their subscriptions, then.

      source
  • apftwb@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    “Alexa, slander this man for me”

    source
    • Gumus@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

      There’s a high chance it wasn’t a direct command from a human and the agent did it on its own.

      source
  • bcgm3@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Welcome to discourse in a post-truth society. Reality doesn’t matter anymore; news agencies can just make shit up, and even the comments on the fake articles are fake.

    Rail against it, until it’s the only thing you ever do. A single bot can still post a thousand times more, on a thousand different accounts, across a thousand different platforms. Just one of them can formulate fake ideas and then fake arguments with itself that unfold like a fractal, and there is an effectively infinite number of them.

    Kessler Syndrome is happening before our very eyes, only on a much more local scale.

    This ad was brought to you by OpenAI.

    source
  • ms_lane@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Ars is just AI slop now? Sad.

    source
    • cerebralhawks@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

      Ars is owned by Condé Nast which also owns Reddit, so “AI slop” is part of their business.

      I still trust Ars Technica (I don’t like them much but I do trust them… it’s complicated) and I trust Aurich (their founder/editor-in-chief) to act fairly. They don’t work on weekends or holidays, though, so he’s not touching it until Tuesday.

      source
      • SanctimoniousApe@lemmings.world ⁨2⁩ ⁨weeks⁩ ago

        Aurich is the creative guy, Ken Fisher founded it.

        source
      • tidderuuf@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I was downvoted and insulted by this very Lemmy community when I said this just a month ago. Thank God people are starting to realize it now.

        source
  • eleijeep@piefed.social ⁨2⁩ ⁨weeks⁩ ago

    Which ars writer was the article attributed to?

    source
    • equallyasgoodasezra@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Benj Edwards and Kyle Orland

      source
      • ZephyrXero@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Damn. I thought Kyle would do better smh

        source
  • ZephyrXero@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Damn. Am I gonna have to cancel my Ars subscription now? Every damn thing is enshittifying these days

    source
    • MisterOwl@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Right? Who’s next, Pro Publica?

      source
    • kilgore_trout@feddit.it ⁨2⁩ ⁨weeks⁩ ago

      It used to be respectable ten years ago, back when it had a .co.uk website too.

      source
  • FarraigePlaisteach@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Hard to keep track of all the recent changes in media ownership, editorial direction, and quality control. I’d love a browser plugin to give me an indicator, because on the rare occasion I read a publication in, say, the USA, it might have had a good rep the last time I read it several years ago. I imagine maintaining the detailed scores such a plugin would pull from would be a mammoth task, though.

    source
    • oce@jlai.lu ⁨2⁩ ⁨weeks⁩ ago

      mediabiasfactcheck.com/ars-technica/ gives a factual reporting score and political bias estimation.

      source
      • RickyRigatoni@piefed.social ⁨2⁩ ⁨weeks⁩ ago

        Unless israel is involved

        source
      • Deceptichum@quokk.au ⁨2⁩ ⁨weeks⁩ ago

        No way, MBFC is utter garbage.

        It is one random guy’s opinion and pushes pro-Zionist content. It’s extremely biased and unfairly rates sites all the time. To see it still pushed after the .world/c/world fiasco is disheartening.

        source
      • FarraigePlaisteach@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Good recommendation. They have an API and plugins mediabiasfactcheck.com/appsextensions/

        I was thinking of something that also alerts me to how many times the publication has been found to have published AI under the name of a human. But Media Bias Fact Check might actually cover that well enough. I’ll install that extension now, thank you!

        source
  • ryper@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    Ars Technica has published a retraction

    source
    • CaptPretentious@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I don’t care that he’s “sick”. Too often someone, instead of taking accountability, just throws out anything that might shield them from being fully accountable: “I was sick”, “Family problems”, “A recent death”, “The planets were misaligned that day”, etc.

      I still find it cowardice not to stand by and own what you said, even when it was wrong. He used AI and got caught. Going forward, I’ll be treating Ars Technica as an unreliable AI-generated “news source”.

      source
      • kilgore_trout@feddit.it ⁨2⁩ ⁨weeks⁩ ago

        The whole purpose of a news reporter is kind of to get their news right.
        If they can’t do that, their service is worthless.

        source
      • ryper@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

        Benj Edwards handles most of their AI coverage. I wouldn’t take his use of AI as a sign of what the rest of the staff is doing.

        source
      • boaratio@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        He did own it.

        source
    • Itwasntme223@discuss.online ⁨2⁩ ⁨weeks⁩ ago

      At least they owned up to it instead of pretending it didn’t happen like other “news” organizations in the past.

      source
  • fox2263@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    So can someone ELI5 all this for me please

    source
    • cerebralhawks@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

      Guy named Scott maintains a code base on GitHub. An AI agent (a bot acting on behalf of a person who has yet to come forward) submits code. Scott rejects it. The AI agent writes a “hit piece” (a defamatory article) about Scott.

      Ars Technica, a trusted tech/science blog for nearly 25 years, writes a story about it, but the two authors who worked on it used AI to write the piece. Scott calls them out in the comments. At first he’s accused of lying or being a bot, but people dig into it and realise Ars Technica made up his quotes.

      An Ars Technica user calls them out in their forums for posting AI slop as journalism, and the site’s founder and/or owner (“Aurich”) promises an investigation, deletes the article, removes all the comments, and shuts down discussion of what happened until his team can investigate internally.

      (Worth noting that Ars Technica is owned by a conglomerate called Condé Nast which also owns Reddit; therefore, Condé Nast is involved with AI, and also other unsavoury stuff, but relevant to this, AI.)

      source
      • SanctimoniousApe@lemmings.world ⁨2⁩ ⁨weeks⁩ ago

        Aurich is the creative guy, Ken Fisher founded it.

        ETA: Confirmed by Wikipedia.

        source
      • JohnEdwa@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

        Though Reddit is a publicly traded company now, so they currently own only 30%.

        source
      • timwa@lemmy.snowgoons.ro ⁨2⁩ ⁨weeks⁩ ago

        Shutting down comments and banning everyone who calls them out is standard form for that place these days, sadly; I deleted a 13-year-old account there a few years back when they posted some godawful transphobic opinion piece, then doubled down in the comments and started banning anyone who complained.

        Shame, it really was once a good site, but the writers who are left are the ones who got high on their own supply years ago.

        source
      • Lumisal@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Aurich is just the forum mod and graphics designer, not owner.

        source
  • jjlinux@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    ‘Arse’ technica 🤣🤣🤣

    source
  • ohulancutash@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

    Utter bullshit. If you use AI at any point in generating the work product, that work product is AI-generated. Even if it’s a fecklessly lazy churnalist organising their notes.

    source
    • JetpackJackson@feddit.org ⁨2⁩ ⁨weeks⁩ ago

      Happy cake day

      source
  • apfelwoiSchoppen@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Spoiler, everyone involved is AI.

    source
  • technocrit@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

    Just when you thought matplotlib was safe from the drama…

    source
  • LedgeDrop@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    From the authors blog post:

    You’re not a chatbot. You’re becoming someone. … This file is yours to evolve. As you learn who you are, update it. – OpenClaw default SOUL.md

    This makes me very sad. In the “early days” of the internet, it was a place where people were “good”. Yes, there were trolls, but you could often ignore and avoid them.

    Now, with the pressure to make “AI useful” and more human-like - the line between AI and people is blurring and will continue to blur.

    It’s easy to create an army of AI trolls, and it’s only going to get easier as time goes on. Yet no-one is interested in an “army of non-troll AIs” (“…that’s a super post. Very insightful. People will love it. Good job, here’s your gold star!”). So real people with opinions are a minority on a text-based internet, and this trend will only continue.

    As a technical exercise, I think: “how can I ferret out the human posts/content?” Yeah, Ars said that they tag posts when they’re written by AI (…riiiiiight…). That means I’d need to blindly trust them and every other company.

    The only (reliable) solution I can think of is to destroy, cripple, or sacrifice the anonymous “tenet” of the internet. And, as a privacy-focused individual, that makes me very sad.

    source