Comment on Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes
ExcessShiv@lemmy.dbzer0.com 1 day ago
Sifting through information to find out what’s true and what’s not, before presenting it to the public, is a crucial task for an actual journalist. Verifying the correctness of their sources and of what they write is probably one of the most important parts of the job.
tangeli@piefed.social 18 hours ago
You’re absolutely correct, but the problem is bigger than the rogue journalist. Separation of duties is a well-known requirement for robust, reliable processes immune to single points of failure (whether malicious or, as I suspect in this case, merely grossly negligent and irresponsible). Holding only the journalist who used AI responsible for the publication of false statements is necessary but not sufficient.
Fmstrat@lemmy.world 3 hours ago
The problem here is that you are both characterizing Ars as you would other companies that have these AI mandates. Ars is the opposite: it has a mandate NOT to use AI.
While I agree that a separation of responsibilities is important, the article had two coauthors for exactly that reason. One trusted the other for the references, not knowing that they had used AI.
Either way, the initial comment is certainly not “absolutely correct” when it comes to Ars.
just_another_person@lemmy.world 1 day ago
Then maybe they shouldn’t be using these tools in the first place. Other Condé Nast employees have already been blowing the whistle about this, which is funny because they sued the AI companies for stealing content.
Whether there’s a news article about it or not, these shitty tools are being shoved down everyone’s throats, from developers to authors.
ExcessShiv@lemmy.dbzer0.com 1 day ago
I absolutely agree; they should not write articles with LLMs. I’m just saying they’re not absolved of basic journalistic responsibility just because they’re instructed to use LLM tools.