This is actually fascinating from a discourse perspective. The RfC mentions that AI detectors are unreliable, which is the whole problem.
I work on mapping public opinion across thousands of responses using AI as a tool to find patterns, not to detect individual writers. The difference matters.
We can detect patterns across a corpus without needing to prove any single person wrote it. That scale of analysis is what lets us see where opinion clusters, not just label individual posts.
Wikipedia’s ban is probably the right call for their use case. They need verifiable authorship for accountability. But we shouldn’t conflate that with AI being unusable for understanding large-scale discourse.
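For the curious, here’s roughly what that corpus-level analysis looks like (a minimal sketch; the embedding model, cluster count, and sample responses are all illustrative, not my actual pipeline):

```python
# Minimal sketch: cluster free-text responses to see where opinion groups,
# without attributing authorship to any individual. Model name, cluster
# count, and the sample responses below are illustrative assumptions.
from collections import Counter

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

responses = [
    "AI detectors are unreliable and shouldn't decide bans.",
    "Wikipedia needs verifiable authorship for accountability.",
    "LLMs are fine for grammar fixes if a human checks the output.",
    # ...thousands more responses in a real corpus
]

# Embed each response as a dense vector capturing its meaning.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(responses)

# Group similar responses; the cluster sizes show where opinion concentrates.
km = KMeans(n_clusters=3, random_state=0, n_init=10).fit(embeddings)
print(Counter(km.labels_))
```

On a real corpus, the cluster sizes and their representative responses are what tell you where opinion concentrates; no single post ever needs an authorship verdict.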
infeeeee@lemmy.zip 20 hours ago
Saved you a click:
RIotingPacifist@lemmy.world 20 hours ago
AIbros: we’re creating God!!!
AI users: it can do translation & reformatting pretty well, but you’ve got to check it’s not chatting shit
halcyoncmdr@piefed.social 19 hours ago
The takeaway from all LLM-based AI is that the user needs to be smart enough to do whatever they’re asking for anyway. All output needs to be verified before being used or relied upon.
The “AI” is just streamlining the process to save time.
Relying on it otherwise is stupid and just proves instantly that you are incompetent.
youcantreadthis@quokk.au 17 hours ago
Fucking hate those anti-human filth pushing slop into everything. I want to take one apart with power tools.
XLE@piefed.social 17 hours ago
I don’t think AI users would say it does reformatting either (if they’re honest): if you tell a chatbot to reformat text without changing it, it will change the text, because it does not understand the concept of not changing text. Getting burned once should be enough to teach anyone that lesson.
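One cheap guardrail is to diff the output against the input before trusting it. A minimal sketch (the whitespace-only definition of “unchanged” is an assumption; tighten it if punctuation matters):

```python
# Minimal sketch: verify a "reformat only" pass didn't alter the words.
# Comparing whitespace-normalized tokens is an assumption about what
# counts as "unchanged"; pure reflowing passes, word edits do not.
import difflib

def words(text: str) -> list[str]:
    return text.split()  # collapse whitespace so reflowed lines still match

def assert_content_unchanged(original: str, reformatted: str) -> None:
    if words(original) != words(reformatted):
        diff = "\n".join(
            difflib.unified_diff(words(original), words(reformatted), lineterm="")
        )
        raise ValueError(f"Model changed the text:\n{diff}")
```

Call it right after the model responds and throw the output away if it raises.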
Goodlucksil@lemmy.dbzer0.com 6 hours ago
To save you another few clicks: this is the discussion (RfC) that implemented the changes, and the policy is linked at the top.
MissesAutumnRains@lemmy.blahaj.zone 20 hours ago
Seems pretty reasonable to use it as a grammar checker. As long as it’s changing only form or readability, not content, that’s a decent use for it, at least on a purely educational resource like Wikipedia.
daychilde@lemmy.world 20 hours ago
Liar. I already read the article before opening the comments. YOU SAVED ME NOTHING.
;-)
ji59@hilariouschaos.com 20 hours ago
So, it should be used reasonably, as it should have always been.
errer@lemmy.world 18 hours ago
Wikipedia probably wants to sell access to its content for LLM training. That’s only valuable if Wikipedia remains a high-quality, slop-free source.
I think even AI zealots agree there should be silos of fully human-generated content to train from. Training slop on slop makes the slop even worse.
Grimy@lemmy.world 17 hours ago
Sell licenses to what? It’s already all under Creative Commons, iirc.
SuspciousCarrot78@lemmy.world 17 hours ago
AI already trains on Wikipedia.
commoncrawl.org
MountingSuspicion@reddthat.com 17 hours ago
This was only done because the editors pushed to minimize AI involvement. There’s a comment here already mentioning that: lemmy.world/comment/22826863
FauxPseudo@lemmy.world 16 hours ago
Seems like there should be a third exception for those occasions where the article is about LLM-generated text. They should be able to quote it when it’s appropriate for an article.
Zagorath@quokk.au 14 hours ago
That is a reasonable exception to no-AI policies in research papers and newspaper articles, but not for Wikipedia. As a tertiary source, Wikipedia has a strict “no original research” policy. Using AI to provide examples of AI output would be original research, and should not be done.
Quoting AI output shared in primary and secondary sources should be allowed for that reason, though.