Comment on Wikipedia Pauses AI-Generated Summaries After Editor Backlash
UnderpantsWeevil@lemmy.world 2 days ago
Too late.
With thresholds calibrated to achieve a 1% false positive rate on pre-GPT-3.5 articles, detectors flag over 5% of newly created English Wikipedia articles as AI-generated, with lower percentages for German, French, and Italian articles. Flagged Wikipedia articles are typically of lower quality and are often self-promotional or partial towards a specific viewpoint on controversial topics.
At least it’s only an issue for new articles, which probably have the least editor involvement.
People creating self-promotional content on Wikipedia was a problem long before ChatGPT.
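The calibration step the quoted study describes can be sketched roughly like this. This is a hypothetical illustration, not the study's actual method: the detector scores and article counts below are made up, and `calibrate_threshold` simply picks the cutoff that flags the target fraction of known-human (pre-GPT-3.5) text.

```python
# Sketch: calibrating a detector threshold to a ~1% false positive rate
# on known-human articles, then measuring the flag rate on new articles.
# All scores below are invented illustration data.

def calibrate_threshold(human_scores, target_fpr=0.01):
    """Pick the score cutoff exceeded by roughly `target_fpr` of human text."""
    ranked = sorted(human_scores)
    # Index of the (1 - target_fpr) quantile; scores above it get flagged.
    cut = int(len(ranked) * (1 - target_fpr))
    return ranked[min(cut, len(ranked) - 1)]

def flag_rate(scores, threshold):
    """Fraction of articles whose detector score exceeds the threshold."""
    return sum(s > threshold for s in scores) / len(scores)

# Made-up scores: pre-GPT-3.5 (human) corpus vs. newly created articles
# with a small high-scoring tail, mimicking the >5% flag rate reported.
human = [i / 1000 for i in range(1000)]      # uniform 0.000 .. 0.999
new = human[:940] + [0.995] * 60             # 6% suspiciously high scores

t = calibrate_threshold(human, target_fpr=0.01)
print(f"threshold={t:.3f}  human fpr={flag_rate(human, t):.1%}  "
      f"new flagged={flag_rate(new, t):.1%}")
```

The point of calibrating on a pre-GPT-3.5 corpus is that those articles are guaranteed human-written, so any flags there are false positives by construction; the excess flag rate on new articles is then the signal.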
kassiopaea@lemmy.blahaj.zone 2 days ago
Human posting of AI-generated content is definitely a problem; but ultimately that’s a moderation problem that can be solved, which is quite different from AI-generated content being put forward by the platform itself. There wasn’t necessarily anything stopping people from doing the same thing pre-GPT, it’s just easier and more prevalent now.
UnderpantsWeevil@lemmy.world 2 days ago
It isn’t clear whether this content is posted by humans or by AI-fueled bot accounts. All they’re sifting for is text with patterns common to AI text generation tools.
The big inhibiting factor was effort. ChatGPT produces long-form text far faster than humans can, and in a form much harder to identify than the output of earlier Markov chain generators.
The fear is that Wikipedia will be swamped with slop content. Humans won’t be able to keep up with the work of cleaning it out.