An extremely measured and level-headed response. Kudos to Wikipedia for maintaining high standards
Wikipedia has banned AI-generated text, with two exceptions
Submitted 3 weeks ago by corbin@infosec.pub to technology@lemmy.world
https://www.howtogeek.com/wikipedia-banned-ai-generated-text-in-articles-with-two-exceptions/
Comments
SpaceNoodle@lemmy.world 3 weeks ago
kazerniel@lemmy.world 2 weeks ago
It has to be said, they originally changed their stance due to the considerable editor pushback when they tried to introduce LLM summaries on the top of articles. So kudos to the editor community’s resistance! ✊
SpaceNoodle@lemmy.world 2 weeks ago
Good point. The real strength of Wikipedia truly lies in the editors
banshee@lemmy.world 2 weeks ago
Does anyone like LLM summaries in pages? This seems like a better fit for a browser extension to generate a summary on demand instead of wasting resources generating it for everyone. Google’s documentation is absolutely littered with the mess.
ricecake@sh.itjust.works 2 weeks ago
Just for more clarity: they workshopped ideas on how to improve clarity and accessibility with some editors at an event. They ran some small experiments, developed a plan to trial some of them, and presented that plan to a wider audience for feedback. After they got the feedback, they decided not to proceed.
It’s not quite the editors pushing back on Wikipedia. Or rather, it’s not the “rebellion” people want to make it out to be.
mediawiki.org/…/Wikimania_2024,_"Written_by_AI"_H…
www.mediawiki.org/…/Simple_Article_Summaries
It rubs me the wrong way when the process going how it should go gets cast as controversial and dramatic. Asking the community if you should do something and listening to them is how it’s supposed to go. It’s not resistance, it’s all of them being on the same team and talking.
SchwertImStein@lemmy.dbzer0.com 2 weeks ago
First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy.
translation assistance
UnderpantsWeevil@lemmy.world 2 weeks ago
The former I’m still looking sideways at.
The latter, probably the only truly benevolent use of LLMs.
ThunderComplex@lemmy.today 2 weeks ago
Eh, I think this sounds ok. If you prompt an AI to improve your text, you submit that, and another human reviews it (and maybe asks you to make changes), it should be fine. I can see this giving more people the ability to make edits (e.g. non-native speakers).
Holytimes@sh.itjust.works 2 weeks ago
Honestly anything is an improvement over the subpar translation tools we had before. Still ain’t great but we can give a W where it’s earned.
rodneylives@lemmy.world 2 weeks ago
Thank you!
SunlessGameStudios@lemmy.world 3 weeks ago
I know at least one writing major who won an award for his volunteer work at Wikipedia. He did it as a hobby. They don’t really need AI.
antonim@lemmy.world 2 weeks ago
How do you win an award from Wikipedia?
yucandu@lemmy.world 2 weeks ago
Banned the people who openly admit it, anyway.
aliser@lemmy.world 2 weeks ago
there are AI detectors, although I’m not sure about the accuracy of those
Aatube@thriv.social 2 weeks ago
very bad
ZILtoid1991@lemmy.world 2 weeks ago
There should be only one exception: In case someone needs an example of an AI-generated text.
UnderpantsWeevil@lemmy.world 2 weeks ago
LLMs are excellent tools for mapping one set of words and phrases to another, which is more or less exactly what you need out of a language translator.
Mwa@thelemmy.club 2 weeks ago
W Wikipedia. It would be better to remove the exceptions, but it’s fine tbh.
webp@mander.xyz 3 weeks ago
Why do they need AI at all? Wikipedia had existed long before it and was doing fine.
AmbitiousProcess@piefed.social 3 weeks ago
You could make that argument about any tool Wikipedia editors use. Why should they need spellcheck? They were typing words just fine before.
…except it just makes it easier to spot errors or get little suggestions on how you could reword something, and thus makes the whole process a little smoother.
It’s not strictly necessary, but this could definitely be helpful to people for translation and proofreading. Doesn’t have to be something people are wholly reliant on to still be beneficial to their ability to edit Wikipedia.
fuckwit_mcbumcrumble@lemmy.dbzer0.com 3 weeks ago
Why should we use (insert tool) when we did just fine before?
Because when used correctly it can be great for helping you be more productive, find errors, and make improvements. The two exceptions are for grammar, which AI does a surprisingly good job with, and translation. Would you have gotten mad if they used Grammarly >5 years ago? Having it rewrite an entire article is gonna be a bad idea, but asking it to rephrase a sentence or check your phrasing for potential issues is much safer. Not everyone who speaks Spanish uses it the same way. Some words are innocuous in some regions, but offensive in others.
webp@mander.xyz 2 weeks ago
Call me mad, call me crazy, but AI shouldn’t be altering databases of knowledge, especially when it is so inconsistent. If there is a question about whether certain words are appropriate, why can’t you ask another human being? They have forums for a reason, or someone else comes along and fixes it. Or look at a dictionary. The amount of energy spent for dubious information, holy. It’s not like there is a shortage of human beings on earth.
davidgro@lemmy.world 3 weeks ago
I hoped the exceptions would be like “Quoted example text of LLM output, when it’s clearly labeled and separate from the article text.”
baltakatei@sopuli.xyz 2 weeks ago
That exception probably would be twisted into permission to add an “AI summary” section to each article.
davidgro@lemmy.world 2 weeks ago
Ugh. Yeah, it would have to be worded carefully, you’re right
amateurcrastinator@lemmy.world 2 weeks ago
But how do they know it is ai written?
Aatube@thriv.social 2 weeks ago
umbraroze@slrpnk.net 2 weeks ago
I was about to link to that, and specifically the stuff that now seems to have been moved to Signs of AI writing.
I thought that was a very interesting read, because it’s so much better than the usual AI ragebait that led to people getting pilloried over the fact that they actually know how to use em dashes. You can’t detect LLM use just by the fact that someone uses em dashes. It’s a complicated stylistic issue that usually boils down to “well, you know what ChatGPT output looks like when you see it”.
amateurcrastinator@lemmy.world 2 weeks ago
Ok but surely there must be an automated way. You can’t throw manpower at this because they will lose.
phoenixz@lemmy.ca 3 weeks ago
So in other words, when used responsibly as a tool with limitations, AI has it’s uses? Though very environmentally unfriendly uses?
Slashme@lemmy.world 2 weeks ago
*its
albert_inkman@lemmy.world 2 weeks ago
[deleted]
Blackfeathr@lemmy.world 2 weeks ago
You’re not working on anything, clanker.
For those wondering, check the timestamps in this account’s comment history, especially comments from 4 days ago or older. Fully formatted multi-paragraph comments made 10-30 seconds apart. This is an LLM-controlled account.
luciferofastora@feddit.org 2 weeks ago
I can’t even write a two-sentence comment in 30s without overthinking. I do like to use formatting, but that doesn’t make it quicker…
echodot@feddit.uk 2 weeks ago
Yeah you can tell because the comment doesn’t really say anything. It’s just a lot of text but no actual meaning.
hperrin@lemmy.ca 2 weeks ago
Good news. Hopefully they’ll get rid of those two exceptions in the future.
JohnEdwa@sopuli.xyz 2 weeks ago
Would be pretty shitty to have to disable any AI-based grammar/spellcheckers every time you edit Wikipedia, and to not be allowed to use translation tools.
Because those are the two exceptions.
antonim@lemmy.world 2 weeks ago
Spell- and grammar-checking is useless anyway. If you don’t have at least one word underlined with red in every sentence, you’re not writing anything intellectually serious. 🧐
hperrin@lemmy.ca 2 weeks ago
Why? That’s how they’ve been doing it for 25 years.
eletes@sh.itjust.works 2 weeks ago
There should be a Wikipedia LLM whose sole purpose is to check that the tone of the text is objective and matches Wikipedia standards.
The LLM should flag any changes it would make, and if the changes are above a threshold, the edit should be flagged for further review by a human.
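The threshold-then-human-review idea above could be sketched roughly like this. This is a hypothetical illustration, not any real Wikipedia tooling: the threshold value and function names are made up, and the divergence metric here is a plain text-diff ratio standing in for whatever the tone checker would actually report.

```python
# Hypothetical sketch: flag an edit for human review when a tone checker's
# suggested rewrite diverges too far from the submitted text.
import difflib

REVIEW_THRESHOLD = 0.15  # illustrative: fraction of text the checker wants changed


def change_ratio(original: str, suggested: str) -> float:
    """Return the fraction of the original that differs from the suggestion."""
    matcher = difflib.SequenceMatcher(None, original, suggested)
    return 1.0 - matcher.ratio()


def needs_human_review(original: str, suggested: str) -> bool:
    """True if the suggested rewrite changes more than the allowed threshold,
    meaning another human should look at the edit before it goes live."""
    return change_ratio(original, suggested) > REVIEW_THRESHOLD
```

The point of the threshold is that small tone tweaks pass through quietly, while a rewrite that substantially alters the text gets escalated to a person rather than applied automatically.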
infeeeee@lemmy.zip 3 weeks ago
Saved you a click:
RIotingPacifist@lemmy.world 3 weeks ago
AIbros: we’re creating God!!!
AI users: it can do translation & reformatting pretty well, but you’ve got to check it’s not chatting shit
halcyoncmdr@piefed.social 3 weeks ago
The takeaway from all LLM-based AI is the user needs to be smart enough to do whatever they’re asking anyway. All output needs to be verified before being used or relied upon.
The “AI” is just streamlining the process to save time.
Relying on it otherwise is stupid and just proves instantly that you are incompetent.
youcantreadthis@quokk.au 2 weeks ago
Fucking hate those anti human filth pushing slop into everything. I want to take one apart with power tools.
XLE@piefed.social 2 weeks ago
I don’t think AI users would say it does reformatting either (if they’re honest): If you tell a chatbot to reformat text without changing it, it will change the text, because it does not understand the concept of not changing text. It should only take one time for someone to get burned for them to learn that lesson.
MissesAutumnRains@lemmy.blahaj.zone 3 weeks ago
Seems pretty reasonable to use it as a grammar checker. As long as it’s not changing content, just form or readability, that seems like a pretty decent use for it, at least with a purely educational resource like Wikipedia.
ji59@hilariouschaos.com 3 weeks ago
So, it should be used reasonably, as it should have always been.
daychilde@lemmy.world 3 weeks ago
Liar. I already read the article before opening the comments. YOU SAVED ME NOTHING.
;-)
Goodlucksil@lemmy.dbzer0.com 2 weeks ago
To save you another few clicks: this is the discussion (RfC) that implemented the changes, and the policy is linked at the top.
errer@lemmy.world 2 weeks ago
Wikipedia probably wants to sell LLM companies access for training. That’s only valuable if Wikipedia remains a high-quality, slop-free source.
I think even AI zealots think there should be silos of content to train from that are fully human generated. Training slop on slop makes the slop even worse.
Grimy@lemmy.world 2 weeks ago
Sell licenses of what? It’s already all in the creative commons iirc.
SuspciousCarrot78@lemmy.world 2 weeks ago
AI already trains on Wikipedia.
commoncrawl.org
MountingSuspicion@reddthat.com 2 weeks ago
This was only done because the editors pushed to minimize AI involvement. There’s a comment here already mentioning that: lemmy.world/comment/22826863
arcine@jlai.lu 2 weeks ago
Treating it like a tool instead of treating it like a God. What a novel idea!
FauxPseudo@lemmy.world 2 weeks ago
Seems like there should be a third exception. For those occasions where the article is about LLM generated text. They should be able to quote it when it’s appropriate for an article.
Zagorath@quokk.au 2 weeks ago
That is a reasonable exception to no-AI policies in research papers and newspaper articles, but not for Wikipedia. As a tertiary source, Wikipedia has a strict “no original research” policy. Using AI to provide examples of AI output would be original research, and should not be done.
Quoting AI output shared in primary and secondary sources should be allowed for that reason, though.