
Wikipedia has banned AI-generated text, with two exceptions

494 likes

Submitted 21 hours ago by corbin@infosec.pub to technology@lemmy.world

https://www.howtogeek.com/wikipedia-banned-ai-generated-text-in-articles-with-two-exceptions/


Comments

  • infeeeee@lemmy.zip 20 hours ago

    Saved you a click:

    After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.

    First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing assistance tool. The policy says, “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”

    The second exemption for LLMs is with translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with regular writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.

    • RIotingPacifist@lemmy.world 20 hours ago

      AIbros: we’re creating God!!!

      AI users: it can do translation & reformatting pretty well, but you’ve got to check it’s not chatting shit

      • halcyoncmdr@piefed.social 19 hours ago

        The takeaway from all LLM-based AI is that the user needs to be smart enough to do whatever they’re asking anyway. All output needs to be verified before being used or relied upon.

        The “AI” is just streamlining the process to save time.

        Relying on it otherwise is stupid and just proves instantly that you are incompetent.

      • youcantreadthis@quokk.au 17 hours ago

        Fucking hate those anti human filth pushing slop into everything. I want to take one apart with power tools.

      • XLE@piefed.social 17 hours ago

        I don’t think AI users would say it does reformatting either (if they’re honest): if you tell a chatbot to reformat text without changing it, it will change the text, because it does not understand the concept of not changing text. Getting burned by that once should be enough to learn the lesson.

    • Goodlucksil@lemmy.dbzer0.com 6 hours ago

      To save you another few clicks: this is the discussion (RfC) that implemented the changes, and the policy is linked at the top.

    • MissesAutumnRains@lemmy.blahaj.zone 20 hours ago

      Seems pretty reasonable to use it as a grammar checker. As long as it’s not changing content, just form or readability, that’s a decent use for it, at least for a purely educational resource like Wikipedia.

    • daychilde@lemmy.world 20 hours ago

      Liar. I already read the article before opening the comments. YOU SAVED ME NOTHING.

      ;-)

    • ji59@hilariouschaos.com 20 hours ago

      So, it should be used reasonably, as it should have always been.

    • errer@lemmy.world 18 hours ago

      Wikipedia probably wants to sell training access to LLM companies. That access is only valuable if Wikipedia remains a high-quality, slop-free source.

      I think even AI zealots think there should be silos of content to train from that are fully human generated. Training slop on slop makes the slop even worse.

      • Grimy@lemmy.world 17 hours ago

        Sell licenses to what? It’s already all Creative Commons, iirc.

      • SuspciousCarrot78@lemmy.world 17 hours ago

        AI already trains on Wikipedia.

        commoncrawl.org

      • MountingSuspicion@reddthat.com 17 hours ago

        This was only done because the editors pushed to minimize AI involvement. There’s a comment here already mentioning that: lemmy.world/comment/22826863

    • FauxPseudo@lemmy.world 16 hours ago

      Seems like there should be a third exception for occasions where the article is about LLM-generated text. Editors should be able to quote it when it’s appropriate for an article.

      • Zagorath@quokk.au 14 hours ago

        That is a reasonable exception to no-AI policies in research papers and newspaper articles, but not for Wikipedia. As a tertiary source, Wikipedia has a strict “no original research” policy. Using AI to provide examples of AI output would be original research, and should not be done.

        Quoting AI output shared in primary and secondary sources should be allowed for that reason, though.

  • albert_inkman@lemmy.world 2 hours ago

    This is actually fascinating from a discourse perspective. The RfC mentions that AI detectors are unreliable, which is the whole problem.

    I work on mapping public opinion across thousands of responses using AI as a tool to find patterns, not to detect individual writers. The difference matters.

    We can detect patterns across a corpus without needing to prove any single person wrote it. That scale of analysis is what lets us see where opinion clusters, not just label individual posts.

    Wikipedia’s ban is probably the right call for their use case. They need verifiable authorship for accountability. But we shouldn’t conflate that with not being able to use AI for understanding large-scale discourse.

    • Blackfeathr@lemmy.world 17 minutes ago

      You’re not working on anything, clanker.

      For those wondering, check the timestamps in this account’s comment history, especially comments from 4 days ago or older. Fully formatted multi-paragraph comments posted 10-30 seconds apart. This is an LLM-controlled account.

  • SpaceNoodle@lemmy.world 19 hours ago

    An extremely measured and level-headed response. Kudos to Wikipedia for maintaining high standards.

    • kazerniel@lemmy.world 18 hours ago

      It has to be said, they originally changed their stance due to considerable editor pushback when they tried to introduce LLM summaries at the top of articles. So kudos to the editor community’s resistance! ✊

      • SpaceNoodle@lemmy.world 18 hours ago

        Good point. The real strength of Wikipedia truly lies in the editors.

  • SunlessGameStudios@lemmy.world 20 hours ago

    I know at least one writing major who won an award for his volunteer work at Wikipedia. He did it as a hobby. They don’t really need AI.

    • antonim@lemmy.world 7 hours ago

      How do you win an award from Wikipedia?

  • yucandu@lemmy.world 16 hours ago

    Banned the people who openly admit it, anyway.

    • aliser@lemmy.world 15 hours ago

      There are AI detectors, although I’m not sure about their accuracy.

  • Mwa@thelemmy.club 14 hours ago

    W Wikipedia. It would be better to remove the exceptions, but it’s fine tbh.

  • amateurcrastinator@lemmy.world 7 hours ago

    But how do they know it is AI-written?

  • webp@mander.xyz 19 hours ago

    Why do they need AI at all? Wikipedia existed long before it and was doing fine.

    • AmbitiousProcess@piefed.social 19 hours ago

      You could make that argument about any tool Wikipedia editors use. Why should they need spellcheck? They were typing words just fine before.

      …except it just makes it easier to spot errors or get little suggestions on how you could reword something, and thus makes the whole process a little smoother.

      It’s not strictly necessary, but this could definitely be helpful to people for translation and proofreading. Doesn’t have to be something people are wholly reliant on to still be beneficial to their ability to edit Wikipedia.

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com 19 hours ago

      Why should we use (insert tool) when we did just fine before?

      Because when used correctly it can be great for helping you be more productive and find errors or make improvements. The two exceptions are for grammar, which AI does a surprisingly good job with. Would you have gotten mad if they used Grammarly >5 years ago? Having it rewrite an entire article is going to be a bad idea, but asking it to rephrase a sentence, or to check your phrasing for potential issues, is a much safer thing. Not everyone who speaks Spanish uses it the same way. Some words are innocuous in some regions but offensive in others.

      • REDACTED@infosec.pub 19 hours ago

        Why fire, berries fine

      • webp@mander.xyz 18 hours ago

        Call me mad, call me crazy, but AI shouldn’t be altering databases of knowledge, especially when it is so inconsistent. If there is a question about whether certain words are appropriate, why can’t you ask another human being? They have forums for a reason, or someone else comes along and fixes it. Or look at a dictionary. The amount of energy spent for dubious information, holy. It’s not like there is a shortage of human beings on Earth.

  • hperrin@lemmy.ca 16 hours ago

    Good news. Hopefully they’ll get rid of those two exceptions in the future.

    • JohnEdwa@sopuli.xyz 15 hours ago

      Would be pretty shitty to have to disable any AI-based grammar/spellcheckers every time you edit Wikipedia, and to not be allowed to use translation tools.

      Because those are the two exceptions.

      • antonim@lemmy.world 7 hours ago

        Spell- and grammar-checking is useless anyway. If you don’t have at least one word underlined with red in every sentence, you’re not writing anything intellectually serious. 🧐

      • hperrin@lemmy.ca 15 hours ago

        Why? That’s how they’ve been doing it for 25 years.

  • davidgro@lemmy.world 19 hours ago

    I hoped the exceptions would be like “Quoted example text of LLM output, when it’s clearly labeled and separate from the article text.”

    • baltakatei@sopuli.xyz 16 hours ago

      That exception probably would be twisted into permission to add an “AI summary” section to each article.

      • davidgro@lemmy.world 14 hours ago

        Ugh. Yeah, it would have to be worded carefully, you’re right.

  • phoenixz@lemmy.ca 19 hours ago

    So in other words, when used responsibly as a tool with limitations, AI has it’s uses? Though very environmentally unfriendly uses?

    • Slashme@lemmy.world 16 hours ago

      *its
