Wales’s quote isn’t nearly as bad as the headline makes it out to be:
Wales explains that the article was originally rejected several years ago; someone then tried to improve it and resubmitted it, only to get the exact same template rejection again.
“It’s a form letter response that might as well be ‘Computer says no’ (that article’s worth a read if you don’t know the expression),” Wales said. “It wasn’t a computer who says no, but a human using AFCH, a helper script […] In order to try to help, I personally felt at a loss. I am not sure what the rejection referred to specifically. So I fed the page to ChatGPT to ask for advice. And I got what seems to me to be pretty good. And so I’m wondering if we might start to think about how a tool like AFCH might be improved so that instead of a generic template, a new editor gets actual advice. It would be better, obviously, if we had lovingly crafted human responses to every situation like this, but we all know that the volunteers who are dealing with a high volume of various situations can’t reasonably have time to do it. The templates are helpful - an AI-written note could be even more helpful.”
That said, it still reeks of “CEO speak,” and of trying to find a place to shove AI in.
More NLP could absolutely be useful to Wikipedia, especially for flagging spam and malicious edits for human editors to review. That’s an excellent task for dirt-cheap, small, open models, because a human checks every flag and a modest error rate costs little. And it’s a huge existing problem that needs solving.
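For the curious, here’s roughly what that looks like: a minimal sketch using Hugging Face’s zero-shot classification pipeline. The model choice (facebook/bart-large-mnli), the labels, the threshold, and the `flag_edit` helper are all my own illustrative placeholders, not anything Wikipedia actually runs:

```python
# Minimal sketch: flag suspicious edits for human review with a small,
# open model. Everything here is an assumption for illustration.
from transformers import pipeline

# bart-large-mnli is a freely available ~400M-parameter model; any
# small open classifier fine-tuned on edit data would do as well.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["spam", "vandalism", "good-faith edit"]
THRESHOLD = 0.8  # arbitrary; tune against a labeled sample of edits

def flag_edit(diff_text: str) -> bool:
    """Return True if the edit should be queued for a human reviewer."""
    result = classifier(diff_text, candidate_labels=LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    # A false positive only costs a reviewer a glance, which is exactly
    # why cheap, imperfect models are a good fit for this job.
    return top_label in ("spam", "vandalism") and top_score > THRESHOLD

if __name__ == "__main__":
    print(flag_edit("BUY CHEAP WATCHES AT totally-legit-deals.example"))
```

The point isn’t this particular model; it’s that the whole thing runs on commodity hardware, costs effectively nothing per edit, and never talks to a new editor directly.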
…Using an expensive, proprietary API to give new editors error-prone yet “pretty good”-sounding suggestions is not.
This is the problem. Not natural language processing itself, but the seemingly contagious compulsion among executives to find somewhere to shove it when the technical extent of their knowledge is typing something into ChatGPT.
ramsay@lemmy.world 1 hour ago
I will stop donating to Wikipedia if they use AI
Corn@lemmy.ml 1 hour ago
Wikipedia already has a decade’s operating costs in savings.