Main character moment.
Comment on Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes
tidderuuf@lemmy.world 1 day ago
I’m not taking all the credit but I do hope those people who didn’t believe me in the past could rightfully take this comment, print it, pull down their pants and shove it up their ass.
It’s time to hold journalism to a higher standard, and this idea that “well they do alright” and “it was only once” is bullshit sliding into madness.
Just the facts, folks.
Kissaki@feddit.org 1 day ago
and “it was only once” is bullshit
They checked and then fired the author. I don’t see how this is “it was only once” implying nothing changed and it will happen again. Isn’t firing the author “holding journalism to a higher standard” already, which you ask for?
tangeli@piefed.social 21 hours ago
Maybe they should do more than just fire a person who was caught using AI. Maybe they should establish a process of independent fact checking before publication, regardless of whether AI was known or intended to be used to produce the article. It is a problem that AI was used in a way that introduced factual errors. It’s fair that the person responsible for this was fired. But all processes need quality control. Why hasn’t the person who failed to wrap quality control processes around the author been fired?
5gruel@lemmy.world 19 hours ago
In what world would independent fact checking, down to the level of individual quotes, be feasible for an online magazine? You can’t be serious.
tangeli@piefed.social 18 hours ago
That’s part of the cost of AI that the AI companies leave to their customers. There is a tradeoff, and we know from a long history of for-profit corporate behaviour that companies will generally prefer lower short-term cost, despite the consequent risk and harm. If the companies that sell AI services don’t take care to ensure the outputs are true, and the companies that use AI don’t take care either, that leaves the ultimate customer/consumer to fact-check everything. That, or simply be oblivious, or stop trusting anything. The problem is made worse by the fact that most companies won’t disclose their use of AI unless they are compelled to do so, because of the adverse impact on their reputation. So far, I don’t see any legislation to compel disclosure.
just_another_person@lemmy.world 1 day ago
The problem with your attitude towards this is that these companies are forcing “AI” down everyone’s throat. It’s a requirement now to churn out more bullshit than humanly possible.
This person was simply fired because they didn’t catch the false information, not because they used the tools forced upon them.
Fmstrat@lemmy.world 6 hours ago
Absolutely not. Ars has a no AI policy, it’s the exact opposite. Guessing you are a nice little bot.
just_another_person@lemmy.world 3 hours ago
A fucking moron who runs around calling everyone a bot when you disagree with whatever the topic is.
It’s the new CyberTruck of online insecurity.
Hope that’s “good” for you.
MountingSuspicion@reddthat.com 1 day ago
I don’t work at Ars, and maybe you know something I don’t, but I have seen nothing to suggest that they’re one of the companies doing that. It seems like they are pretty open about how they do not allow AI to be used in the process. Have they said something to indicate otherwise and I just missed it?
ExcessShiv@lemmy.dbzer0.com 1 day ago
Sifting through information to find out what’s true and what’s not before presenting it to the public is a pretty crucial task and ability for an actual journalist, though. Verifying the correctness of their sources and what they write is probably one of the most important parts of their job.
just_another_person@lemmy.world 1 day ago
Then maybe they shouldn’t be using these tools in the first place. Other Conde Nast employees have already been blowing the whistle about this, which is funny because they sued all the AI companies for stealing content.
Whether there is a news article about it or not, these shitty tools are being shoved down everyone’s throats, from developers to authors.
ExcessShiv@lemmy.dbzer0.com 1 day ago
I absolutely agree, they should not write articles with LLMs. I’m just saying they’re not absolved of basic journalistic responsibility because they’re instructed to use LLM tools.
tangeli@piefed.social 21 hours ago
You’re absolutely correct. But the problem is bigger than the rogue journalist. Separation of duties is a well known requirement for robust, reliable processes immune to single points of failure (whether malicious or, as I suspect in this case, merely grossly negligent and irresponsible). It is necessary but not sufficient to hold just the journalist who used AI responsible for the publication of false statements.
Fmstrat@lemmy.world 6 hours ago
The problem here is that you are both characterizing Ars as you would other companies that have these AI mandates. Ars is the opposite: they have a mandate NOT to use AI.
While I agree a separation of responsibilities is important, they had two coauthors for exactly that reason. One trusted the other for the references, not knowing that they used AI.
Either way, the initial comment is certainly not “absolutely correct” when it comes to Ars.