I pointed out a month ago that Ars Technica is a rot site that's starting to be filled with AI-regurgitated bullshit, and I got 80+ downvotes and a few uneducated replies.
Y’all feel better now?
Submitted 9 hours ago by Beep@lemmus.org to technology@lemmy.world
https://infosec.exchange/@mttaggart/116065340523529645
No. Going from the issue we are talking about today to calling Ars an "internet rot site" is a huge leap. Yeah, they post shit articles from Wired and such (they're owned by Condé Nast), but their core writers are still great and have plenty of good articles.
You want credit for what? Exaggerating an issue and then whining about it?
You are throwing the baby out with the bathwater, and then spitting on the baby. It makes no sense.
@sartalon @technology Yeah, I have a lot more trust in the reputation that Ars has built over a decade of solid reliable tech journalism than I do in a random matplotlib maintainer - I’ve interacted with maintainers before. They’re not wrong about agents, but not sure how that’s any different from any human doing the same.
It’s been going downhill for some time. I think the Condé Nast investment pretty much killed it. The last site redesign that didn’t work correctly and made things unreadable was the last straw for me. I took it out of my rotation of “daily reads” and haven’t missed it.
Ars hasn’t been good in a few years. Fuck those people.
Stuff like this makes me very sympathetic to lemmy instances that disable downvotes
In typical Ars fashion, the editorial team appears to be looking into what happened and is being fairly open about things: arstechnica.com/…/journalistic-standards.1511650/
I will be very disappointed if this was BenJ or Dan using AI to write their article, since both have had really good pieces in the past, but it doesn't sound like this is some Ars-wide shift at this point. Like all things, it makes sense that it will take time for them to investigate this; Aurich (the Ars community lead and graphic designer) was clear that, with this happening on a Friday afternoon and a US holiday on Monday, it's likely to be into next week before they have anything they can share.
Honestly, this whole thing surprises me. I have a lot of respect for Ars Technica. I hope they clean this up and prevent it from happening again.
They know how and why it happened; they're taking the weekend to work out how best to take their feet out of their mouths without eating too much shit.
Benj and Kyle were the authors of the article; Dan’s name wasn’t on it.
I’m betting it’s definitely Ben since he is pretty pro-AI
BenJ had coauthor credit on it.
That poor guy, the ai is just ganging up on him
I hope it’s the first proof of general AI consciousness.
What?? AI is not conscious; marketing just says that with no understanding of the maths and no legal obligation to tell the truth.
Here’s how LLMs work:
The basic premise is like an autocomplete: it creates a response word by word (not literally words, but "tokens", which are mostly words but sometimes other things such as "begin/end code block" or "end of response"). The program is a guessing engine that guesses the next token, repeatedly. The autocomplete on your phone is different in that it merely guesses which word follows the previous word; an LLM guesses which token follows the entire conversation so far (not always the entire conversation: the history may be truncated because the model can only handle a limited number of tokens at once).
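To make that loop concrete, here's a toy sketch of "guess the next token, append it, repeat". The lookup table, token strings, probabilities, and the `<end>` marker are all made up for illustration; a real LLM replaces the table with a huge neural network.

```python
# Toy sketch of the "guess the next token, append, repeat" loop described above.
# The model here is a made-up lookup table, not a real LLM.
import random

toy_model = {
    ("the", "cat"): {"sat": 0.7, "ran": 0.2, "<end>": 0.1},
    ("cat", "sat"): {"down": 0.6, "<end>": 0.4},
    ("sat", "down"): {"<end>": 1.0},
}

def next_token(context):
    # Only the last two tokens matter here (a tiny "context window").
    probs = toy_model.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

conversation = ["the", "cat"]
while True:
    tok = next_token(conversation)
    if tok == "<end>":          # the "end of response" token stops generation
        break
    conversation.append(tok)

print(" ".join(conversation))   # e.g. "the cat sat down"
```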
The "training data" is used to build a model of how likely each token is to follow other tokens. But you can't store, for every token, how likely it is to follow every single possible combination of 1 to <some big number like 65,536, depending on the LLM> previous tokens. So that's what "neural networks" are for.
Neural networks are networks of mathematical "neurons". Each neuron takes one or more inputs from other neurons, applies a mathematical transformation to them, and passes the resulting number on to one or more further neurons. At the start of the network, the raw data is fed into the first neurons; at the end, the network's output is read off and used. The network is "trained" by making small adjustments to the maths inside various neurons and keeping the arrangement that produces the best results. Neural networks are very difficult to inspect or debug, because the mathematical nature of the system makes it pretty unclear what any given neuron does. In an LLM, the network is a way to guess those probabilities on the fly (quite accurately) without having to obtain and store training data for every single possibility.
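As a rough illustration of that "mathematical neuron" idea (weighted inputs, a transformation, an output passed onward), here's a minimal sketch. The weights and layout are arbitrary, hand-picked numbers, not anything from a real model; training is the process of nudging weights like these toward better outputs.

```python
# Minimal sketch of an artificial neuron and a tiny two-layer network.
# All weights and biases below are arbitrary values chosen for illustration.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, squashed into (0, 1) by a sigmoid.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two raw inputs feed two hidden neurons, whose outputs feed one output neuron.
x = [0.5, -1.2]
h1 = neuron(x, [0.8, 0.1], bias=0.0)
h2 = neuron(x, [-0.4, 0.9], bias=0.3)
out = neuron([h1, h2], [1.5, -0.7], bias=0.1)
print(out)  # a single number; in an LLM, layers like this end up producing token probabilities
```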
I don’t know much more than this, I just happen to have read a good article about how LLMs work. (Will edit the link into this post soon, as it was texted to me and I’m on PC rn)
It would be nice if he decided to sue Ars Technica for that. Writers and publishers need to learn the hard way that you can't use AI and trust it for publishing stuff that needs factual coherence. If not out of ethics, let it be out of fear of lawsuits.
Sue them for what? He would have to prove damages and they took it down.
Libel. Taking it down doesn't undo the damage to reputation, which is what libel is concerned with.
Publicly making false statements using his name isn’t a crime by itself in his jurisdiction?
Which ars writer was the article attributed to?
Benj Edwards and Kyle Orland
Ars is just AI slop now? Sad.
Ars is owned by Condé Nast which also owns Reddit, so “AI slop” is part of their business.
I still trust Ars Technica (I don't like them much, but I do trust them… it's complicated) and I trust Aurich (their founder/editor-in-chief) to act fairly. They don't work on weekends or holidays, though, so he's not touching it until Tuesday.
Aurich is the creative guy, Ken Fisher founded it.
I was downvoted and insulted by this very Lemmy community when I said this just a month ago. Thank God people are starting to realize it now.
Hard to keep track of all the recent changes in media ownership, editorial direction, and quality control. I would love a browser plugin to give me an indicator, because on the rare occasion I read a publication in, say, the USA, it might have had a good rep the last time I read it several years ago. I imagine managing the detailed scores that a plugin might pull from would be a mammoth task, though.
mediabiasfactcheck.com/ars-technica/ gives a factual reporting score and political bias estimation.
No way, MBFC is utter garbage.
It is one random guy's opinion and it pushes pro-Zionist content. It's extremely biased and unfairly rates sites all the time. To see it still pushed after the .world/c/world fiasco is disheartening.
Unless Israel is involved.
Good recommendation. They have an API and plugins: mediabiasfactcheck.com/appsextensions/
I was thinking of something that also alerts me to how many times the publication has been found to have published AI-generated content under the name of a human. But Media Bias Fact Check might actually cover that well enough. I'll install that extension now, thank you!
Just when you thought matplotlib was safe from the drama…
So can someone ELI5 all this for me, please?
A guy named Scott maintains a GitHub repository (a code base). An AI agent (a bot acting on behalf of a person, who has yet to come forward) submits code. Scott rejects it. The AI agent then writes a "hit piece" (a defamatory article) about Scott.
Ars Technica, a trusted tech/science blog for nearly 25 years, writes a story about it, but the two authors who worked on it used AI to write the blog entry. Scott calls them out in the comments. At first he’s accused of lying or being a bot, but people dig into it and realise Ars Technica made up their quotes.
An Ars Technica user calls them out in their forums for posting AI slop as journalism, and the site's founder and/or owner ("Aurich") promises an investigation, deletes the article (removing all the comments), and shuts down discussion of what happened until his team can investigate internally.
(Worth noting that Ars Technica is owned by a conglomerate called Condé Nast, which also owns Reddit; so Condé Nast is involved with AI, and with other unsavoury stuff too, but AI is what's relevant here.)
Aurich is just the forum mod and graphics designer, not owner.
Though Reddit is a publicly traded company now, so they currently own only 30%.
Spoiler, everyone involved is AI.
mech@feddit.org 1 hour ago
Frankly, no. Correcting an article about a blog post isn’t important enough to force your workers to sacrifice their weekends.
That should be reserved for life-and-death emergencies.