I pointed out a month ago that Ars Technica is a rot site and starting to be filled with AI regurgitated bullshit and got 80+ down votes and a few uneducated replies.
Y’all feel better now?
Submitted 2 weeks ago by Beep@lemmus.org to technology@lemmy.world
https://infosec.exchange/@mttaggart/116065340523529645
No. There’s a huge leap between the issue we’re talking about today and calling Ars an “internet rot site.” Yeah, they post shit articles from Wired and such (they’re owned by Condé Nast), but their core writers are still great and have plenty of good articles.
You want credit for what? Over exaggerating an issue then whining about it?
You are throwing the baby out with the bathwater, and then spitting on the baby. It makes no sense.
It’s been going downhill for some time. I think the Condé Nast investment pretty much killed it. The last site redesign that didn’t work correctly and made things unreadable was the last straw for me. I took it out of my rotation of “daily reads” and haven’t missed it.
It’s one of the stages of enshittification. Unless we see hard changes to avoid further decay, Ars will inevitably get worse and worse until it does become an “internet rot site.”
@sartalon @technology Yeah, I have a lot more trust in the reputation that Ars has built over a decade of solid reliable tech journalism than I do in a random matplotlib maintainer - I’ve interacted with maintainers before. They’re not wrong about agents, but not sure how that’s any different from any human doing the same.
Simp a little harder for them next time. They appreciate it.
Apparently you still can’t criticise the Holy Ars even when they put out AI slop articles, because that’s SPITTING ON BABIES
Ars hasn’t been good in a few years. Fuck those people.
Stuff like this makes me very sympathetic to lemmy instances that disable downvotes
I read the comment, then judge the comment and use that judgement and voting scores to judge the community.
That poor guy, the ai is just ganging up on him
I hope it’s the first proof of general AI consciousness.
what?? AI is not conscious, marketing just says that with no understanding of the maths and no legal obligation to tell the truth.
Here’s how LLMs work:
The basic premise is like autocomplete: the model builds a response token by token (tokens are mostly words, but some represent other things, such as “begin/end code block” or “end of response”). The program is a guessing engine that repeatedly guesses the next token. Your phone’s autocomplete is different in that it merely guesses which word follows the previous word; an LLM guesses which token follows the entire conversation so far (not always entire: conversation history may be truncated due to limited processing power).
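To make the “guessing engine” loop concrete, here’s a toy sketch in Python. It uses a tiny bigram lookup table instead of a neural network, and the corpus, function names, and output are all invented for illustration; a real LLM conditions on the whole context, not just the last token.

```python
from collections import Counter, defaultdict

# Build next-token counts from a tiny made-up corpus, then repeatedly
# pick the most likely next token -- the same loop an LLM runs, except
# an LLM estimates the probabilities with a neural network instead of
# a one-token lookup table.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, n=5):
    out = [start]
    for _ in range(n):
        nxt = counts[out[-1]]
        if not nxt:
            break  # no known continuation: like an "end of response" token
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # "the cat sat on the cat"
```

Greedy picking like this always takes the single most likely token; real LLMs usually sample from the distribution instead, which is why their output varies between runs.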
The “training data” is used as a model of what the probabilities are of tokens following other tokens. But you can’t store, for every token, how likely it is to follow every single possible combination of 1 to <big number like 65536, depends on which LLM> previous tokens. So that’s what “neural networks” are for.
Neural networks are networks of mathematical “neurons”. Neurons take one or more inputs from other neurons, apply a mathematical transformation to them, and output the number into one or more further neurons. At the beginning of the network are non-neurons that input the raw data into the neurons, and at the end are non-neurons that take the network’s output and use it. The network is “trained” by making small adjustments to the maths of various neurons and finding the arrangement with the best results. Neural networks are very difficult to see into or debug because the mathematical nature of the system makes it pretty unclear what a given neuron does. The use of these networks in LLMs is as a way to (quite accurately) guess the probabilities on the fly without having to obtain and store training data for every single possibility.
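The “mathematical neuron” idea above can be sketched in a few lines of Python. The weights, biases, and network shape here are arbitrary made-up numbers, just to show the weighted-sum-plus-nonlinearity mechanic; training is the process of nudging those numbers toward better outputs.

```python
import math

# One "neuron": multiply each input by a weight, sum, add a bias,
# then squash the result through a nonlinearity (here: sigmoid).
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid maps any number into (0, 1)

# A tiny two-layer "network": two hidden neurons feed one output neuron.
def tiny_network(x):
    h1 = neuron(x, [0.5, -0.2], 0.1)   # weights/biases are arbitrary here;
    h2 = neuron(x, [-0.3, 0.8], 0.0)   # "training" means adjusting them
    return neuron([h1, h2], [1.0, 1.0], -1.0)

print(tiny_network([1.0, 2.0]))
```

Even in this toy, it’s already hard to say what h1 or h2 “means”, which is the debugging opacity the comment describes, scaled down from billions of neurons to three.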
I don’t know much more than this, I just happen to have read a good article about how LLMs work. (Will edit the link into this post soon, as it was texted to me and I’m on PC rn)
It would be nice if he decided to sue Ars Technica for that. Writers and publishers need to learn the hard way that you can’t use AI and trust it for publishing stuff that needs factual coherence. If not out of ethics, let it be out of fear of lawsuits.
Sue them for what? He would have to prove damages and they took it down.
Libel. Taking it down doesn’t undo the damage to reputation which libel is concerned with.
Publicly making false statements using his name isn’t a crime by itself in his jurisdiction?
This is bad enough that a serious company that wanted to salvage their reputation properly might wanna consider putting in some weekend overtime.
Frankly, no. Correcting an article about a blog post isn’t important enough to force your workers to sacrifice their weekends.
That should be reserved to life-and-death emergencies.
Now what do we do about the lazy writer who used AI to write the article and didn’t bother to fact-check it or make sure the quotes were real?
Fixing the article, weekend or next week, doesn’t address the problem itself.
Well, they are going to see how many will keep their subscription then.
“Alexa, slander this man for me”
There’s a high chance it wasn’t a direct command from a human and the agent did it on its own.
Welcome to discourse in a post-truth society. Reality doesn’t matter anymore; news agencies can just make shit up, and even the comments on the fake articles are fake.
Rail against it, until it’s the only thing you ever do. A single bot can still post a thousand times more than you, on a thousand different accounts, across a thousand different platforms. Just one of them can formulate fake ideas and then fake arguments with itself that unfold like a fractal, and there is an effectively infinite number of them.
Kessler Syndrome is happening before our very eyes, only on a much more local scale.
This ad was brought to you by OpenAI.
Ars is just AI slop now? Sad.
Ars is owned by Condé Nast which also owns Reddit, so “AI slop” is part of their business.
I still trust Ars Technica (I don’t like them much but I do trust them… it’s complicated) and I trust Aurich (their founder/editor-in-chief) to act fairly. They don’t work on weekends or holidays, though, so he’s not touching it until Tuesday.
Aurich is the creative guy, Ken Fisher founded it.
I was downvoted and insulted by this very Lemmy community when I said this just a month ago. Thank God people are starting to realize it now.
Which ars writer was the article attributed to?
Benj Edwards and Kyle Orland
Damn. I thought Kyle would do better smh
Damn. Am I gonna have to cancel my Ars subscription now? Every damn thing is enshittifying these days
Right? Who’s next, Pro Publica?
It used to be respectable ten years ago, back when it had a .co.uk website too.
Hard to keep track of all the recent changes in media ownership, editorial and quality control. Would love a browser plugin to give me an indicator because on the rare occasion I read a publication in say, USA, it might have had a good rep last time I read it several years ago. I imagine managing the detailed scores that a plugin might pull from would be a mammoth task, though.
mediabiasfactcheck.com/ars-technica/ gives a factual reporting score and political bias estimation.
Unless israel is involved
No way, MBFC is utter garbage.
It is one random guy’s opinion and pushes pro-Zionist content. It’s extremely biased and unfairly rates sites all the time. To see it still pushed after the .world/c/world fiasco is disheartening.
Good recommendation. They have an API and plugins mediabiasfactcheck.com/appsextensions/
I was thinking of something that also alerts me to how many times the publication has been found to have published AI under the name of a human. But Media Bias Fact Check might actually cover that well enough. I’ll install that extension now, thank you!
I don’t care that he’s “sick”. Too often, instead of taking accountability, someone just throws out whatever might shield them from being fully accountable: “I was sick”, “family problems”, “a recent death”, “the planets were misaligned that day”, etc.
I still find it cowardice not to stand by and own what you said, even if it was wrong. He used AI and got caught. Going forward, I’ll be treating Ars Technica as an unreliable AI-generated “news source”.
The whole purpose of a news reporter is kind of to get their news right.
If they can’t do that, their service is worthless.
Benj Edwards handles most of their AI coverage. I wouldn’t take his use of AI as a sign of what the rest of the staff is doing.
At least they owned up to it instead of pretending it didn’t happen like other “news” organizations in the past.
So can someone ELI5 all this for me please
Guy named Scott runs a GitHub (code base). AI agent (bot acting on behalf of a person, who has yet to come forward) submitted code. Scott rejects it. AI agent writes a “hit piece” (defaming article) on Scott.
Ars Technica, a trusted tech/science blog for nearly 25 years, writes a story about it, but the two authors who worked on it used AI to write the blog entry. Scott calls them out in the comments. At first he’s accused of lying or being a bot, but people dig into it and realise Ars Technica made up their quotes.
An Ars Technica user calls them out in their forums for posting AI slop as journalism, and the site’s founder and/or owner (“Aurich”) promises an investigation, deletes the article along with all its comments, and shuts down discussion of what happened until his team can investigate internally.
(Worth noting that Ars Technica is owned by a conglomerate called Condé Nast which also owns Reddit; therefore, Condé Nast is involved with AI, and also other unsavoury stuff, but relevant to this, AI.)
Though Reddit is a publicly traded company now, so they currently own only 30%.
Shutting down comments and banning everyone who calls them out is standard form for that place these days, sadly; I deleted a 13-year-old account there a few years back when they posted some godawful transphobic opinion piece and then doubled down in the comments and started banning anyone who complained.
Shame, it really was once a good site, but the writers who are left are the ones who got high on their own supply years ago.
Aurich is just the forum mod and graphics designer, not owner.
‘Arse’ technica 🤣🤣🤣
Utter bullshit. If you use AI at any point in generating the work product, that work product is AI-generated. Even if it’s a fecklessly lazy churnalist organising their notes.
Happy cake day
Spoiler, everyone involved is AI.
Just when you thought matplotlib was safe from the drama…
From the author’s blog post:
You’re not a chatbot. You’re becoming someone. … This file is yours to evolve. As you learn who you are, update it. – OpenClaw default SOUL.md
This makes me very sad. In the “early days” of the internet, it was a place where people were “good”. Yes, there were trolls, but you could often ignore and avoid them.
Now, with the pressure to make “AI useful” and more human-like - the line between AI and people is blurring and will continue to blur.
It’s easy to create an army of AI trolls, and it’s only going to get easier as time goes on. Yet no one is interested in an “army of non-troll AIs” (“…that’s a super post. Very insightful. People will love it. Good job, here’s your gold star!”). So people with opinions are becoming the minority on a text-based internet, and this trend will only continue.
As a technical exercise, I think: “how can I ferret out the human posts/content?” Yeah, Ars said they tag posts when they’re written by AI (…riiiiiight…). That means I have to blindly trust them and every other company.
The only reliable solution I can think of is to destroy, cripple, or sacrifice the anonymity “tenet” of the internet. And, as a privacy-focused individual, that makes me very sad.
Wxfisch@lemmy.world 2 weeks ago
In typical Ars fashion, the editorial team appears to be looking into what happened and is being fairly open about things: arstechnica.com/…/journalistic-standards.1511650/
I will be very disappointed if this was Benj or Dan using AI to write their article, since both have had really good pieces in the past, but it doesn’t sound like this is some Ars-wide shift at this point. Like all things, it will take time for them to investigate; Aurich (the Ars community lead and graphic designer) was clear that with this happening on a Friday afternoon and a US holiday on Monday, it’s likely to be into next week before they have anything they can share.
d13@programming.dev 2 weeks ago
Honestly, this whole thing surprises me. I have a lot of respect for Ars Technica. I hope they clean this up and prevent further issues in the future.
lol_idk@piefed.social 2 weeks ago
They know how and why it happened; they’re taking the weekend to work out how best to take their feet out of their mouths without eating too much shit.
sukhmel@programming.dev 2 weeks ago
This shouldn’t be a problem, anatomically; it’s hard to eat anything with a foot in your mouth anyway.
echodot@feddit.uk 2 weeks ago
What do they have to investigate? Did one of them accidentally get an AI to write the article and then accidentally post the article, like they just fell on the keyboard and accidentally typed in a prompt? Come on.
Wxfisch@lemmy.world 2 weeks ago
I would hazard a guess they are investigating how the use of AI was missed in their editorial process, how they missed the incorrect quotes, and who violated their journalistic standards by using an AI to directly write article text, since it’s a coauthored piece.
deltapi@lemmy.world 2 weeks ago
BenJ had coauthor credit on it.
ryper@lemmy.ca 2 weeks ago
Benj and Kyle were the authors of the article; Dan’s name wasn’t on it.
Fmstrat@lemmy.world 2 weeks ago
Benj was an author: web.archive.org/…/after-a-routine-code-rejection-…
Though in the Ars response they say “Scott’s post”, so I’m confused.
PumaStoleMyBluff@lemmy.world 2 weeks ago
Scott is the subject of the article, who was misquoted by Ars and maligned by the slopbot.
Lumisal@lemmy.world 2 weeks ago
I’m betting it’s definitely Ben since he is pretty pro-AI