Articles already have a summary at the top due to the page format, why was AI shoved into the process?
The Wikimedia Foundation Pauses an Experiment That Showed Wikipedia Users AI-Generated Summaries at the Top of Some Articles, Following an Editor Backlash.
Submitted 3 weeks ago by Pro@programming.dev to technology@lemmy.world
Comments
cannedtuna@lemmy.world 3 weeks ago
Because AI
Mac@mander.xyz 3 weeks ago
Grok, please ELI5 this comment so I can understand it
prole@lemmy.blahaj.zone 3 weeks ago
I can’t wait until this “put LLMs in everything” phase is over.
Ulrich@feddit.org 3 weeks ago
So they:
- Didn't ask editors/users
- Noticed loud and overwhelmingly negative feedback
- "Paused" the program
They still don’t get it. There’s very little practical use for LLMs in general, and certainly not in scholastic spaces. The content is all user-generated anyway, so what’s even the point? It’s not saving them any money.
Also, it seems like a giant waste of resources for a company that constantly runs giant banners asking for money, claiming to be basically on the verge of closing up every time you visit their site.
ooo@sh.itjust.works 3 weeks ago
If her list were straight talk:
- We're gonna make up shit
- But don't worry, we'll manually label it, what could go wrong
- Dang, no one was fooled, let's figure out a different way to pollute everything with alternative facts
SilverShark@lemmy.world 3 weeks ago
I also think that generating blob summaries just feeds the brain-rot stuff we see everywhere on the web that's destroying people's attention spans. Wikipedia is one of the good places to read something that's long enough, not just some quick, simplistic, brain-rot-inducing blob.
benjhm@sopuli.xyz 3 weeks ago
Since much (so-called) "AI" basic training data depends on Wikipedia, wouldn't this create a feedback loop that could quickly degenerate?
Petter1@lemm.ee 3 weeks ago
Yes
plyth@feddit.org 3 weeks ago
Only if the summary is included in the training data.
phoenixz@lemmy.ca 3 weeks ago
I passionately hate the corpo speech she’s using. This fake list of “things she’s done wrong but now she’ll do them right, pinky promise!!” whilst completely ignoring the actual reason for the pushback they’ve received (which boils down to “fuck your AI, keep it out”) is typical management behavior after they were caught trying to screw over the workers in some way.
We’re going to screw you over one way or the other, we just should have communicated it better!
Basically this.
SpicyLizards@reddthat.com 3 weeks ago
I don't see how AI could benefit Wikipedia. The power consumption alone isn't worth it. Wikipedia is one of the rare AI-free zones, which is part of why it's good.
KnitWit@lemmy.world 3 weeks ago
I canceled my recurring donation over this about a week ago, explaining that this was the reason. One of their people sent me a lengthy response that I appreciated. I'm still going to wait a year before I reinstate it; hopefully they'll have fully moved on from this idea by then.
DigDoug@lemmy.world 3 weeks ago
If they thought this would be well-received they wouldn’t have sprung it on people. The fact that they’re only “pausing the launch of the experiment” means they’re going to do it again once the backlash has subsided.
RIP Wikipedia, it was a fun 24 years.
pastermil@sh.itjust.works 3 weeks ago
Not everything is black and white, you know. Just because they made this blunder doesn't mean they're down for good. The fact that they're willing to listen to feedback, whatever their reasons, is still a good sign.
Also keep in mind that the organization that runs it has a lot of people, each with their own agenda, some bad, but some extremely useful.
I mean yeah, sure, do 'leave' Wikipedia if you want. I'm curious where you'd go.
DigDoug@lemmy.world 3 weeks ago
Me saying "RIP" was an attempt at hyperbole. That being said, shoehorning AI into something whose big selling point is that it's user-made is a gigantic misstep. Maybe they'll listen to everybody, but given that they tried it at all, I can't see them properly backing down, especially when it was worded as "pausing" the experiment.
Richat@lemmy.ml 3 weeks ago
the fact they're willing to listen to feedback, whatever their reason was, is a good sign
Oh, you have so much to learn about companies fucking their users over if you think this is the end of them trying to shove AI into Wikipedia
count_dongulus@lemmy.world 3 weeks ago
Summarization is one of the things LLMs are pretty good at. Same for the other thing where they talked about auto-generating the “simple article” variants that are normally managed by hand to dumb down content.
But if they’re pushing these tools, they need to be pushed as handy tools for editors to consider leveraging, not forced behavior for end users.
davidgro@lemmy.world 3 weeks ago
Summaries that look good are something LLMs can do, but not summaries that actually have a higher ratio of important to unimportant information than the source, nor ones that stay accurate. That last one is absolutely mandatory for something like an encyclopedia.
prole@lemmy.blahaj.zone 3 weeks ago
The only application I’ve kind of liked so far has been the one on Amazon that summarizes the content of the reviews. Seems relatively accurate in general.
sentient_loom@sh.itjust.works 3 weeks ago
If we need summaries, let’s let a human being write the summaries. We are already experts at writing. We love doing it.
propitiouspanda@lemmy.cafe 3 weeks ago
not forced behavior for end users.
This is what I’m constantly criticizing. It’s fine to have more options, but they should be options and not mandatory.
No, having to scroll past an AI summary for every fucking article is not an ‘option.’ Having the option to hide it forever (or even better, opt-in), now that’s a real option.
I'd really love to see the opt-in/opt-out data for AI. I guarantee businesses aren't including the option, or aren't recording the data, because they know it would show people don't want it, and then they'd have to follow the data!
SufferingSteve@feddit.nu 3 weeks ago
Lol, the source data for all AI is itself starting to be AI-summarized.
Have you ever tried to zip a zipfile?
But then on the other hand, as compilers become better, they become more efficient at compiling their own source code…
lennivelkant@discuss.tchncs.de 3 weeks ago
Yeah but the compilers compile improved versions. Like, if you manually curated the summaries to be even better, then fed it to AI to produce a new summary you also curate… you’ll end up with a carefully hand-trained LLM.
SufferingSteve@feddit.nu 3 weeks ago
So if the AI-generated summaries are better than man-made summaries, this wouldn't be an issue, would it?
sentient_loom@sh.itjust.works 3 weeks ago
Is there a way for us to complain to Wikipedia about this? I contribute money every year, and I will 100% stop if they keep shoving more LLM slop down my throat.
Kusimulkku@lemm.ee 3 weeks ago
It does sound like it could be handy
OmegaLemmy@discuss.online 3 weeks ago
I don’t think Wikipedia is for the benefit of users anymore, what even are the alternatives? Leftypedia?
Fizz@lemmy.nz 3 weeks ago
Noo Wikipedia why would you do this
dan1101@lemm.ee 3 weeks ago
Yeah, as more organizations implement LLMs, Wikipedia has the opportunity to become more reliable and authoritative by comparison. Don't mess that opportunity up with "AI."
espentan@lemmy.world 3 weeks ago
These days, most companies that work with web based products are under pressure from upper management to “use AI”, as there’s a fear of missing out if they don’t. Now, management doesn’t necessarily have any idea what they should use it for, so they leave that to product managers and such. They don’t have any idea, either, and so they look at what features others have built and find a way to adapt one or more of those to fit their own products.
Slap on the back, job well done, clueless upper management happy, even though money and time have been spent and revenue remains the same.
jjjalljs@ttrpg.network 3 weeks ago
I’ve already posted this a few times, but Ed Zitron wrote a long article about what he calls “Business Idiots”. Basically, people in decision making positions who are out of touch with their users and their products. They make bad decisions, and that’s a big factor in why everything kind of sucks now.
wheresyoured.at/the-era-of-the-business-idiot/ (it’s long)
I think a lot of us have this illusion that higher-ranking people are smarter, more visionary, or whatever. But I think no. I think a lot of people are just kind of stupid, surrounded by other stupid people, cushioned from real, personal consequences. On top of that, for many enterprises, the incentives don't line up with the users'. At least Wikipedia isn't profit-driven, but you can probably think of some things you've used that got more annoying with updates, like Google putting more ads up top, or any website redesign that yields more ad space and worse navigation.
ChicoSuave@lemmy.world 3 weeks ago
The sad truth is that AI empowers malicious actors to create a bigger impact on workload and standards than humans alone can keep up with. An AI running triage on article changes, flagging or reporting edits that need more input, would be ideal. But threat mitigation and integrity preservation don't seem to be high on their priorities.