jimmy wales is also the president and co-founder of fandom
to give you an idea of who that guy is
Submitted 3 weeks ago by tonytins@pawb.social to technology@lemmy.world
https://www.404media.co/jimmy-wales-wikipedia-ai-chatgpt/
I mean, the Wikipedia page does say it was sold in 2018. Not sure how it was before but it’s not surprising that it enshittified by now.
I guess in his defense it wasn’t too bad before 2018, as far as I can remember. Most of the enshittification of fandom I can remember has happened since.
Obligatory plug for BreezeWiki. Makes that shit actually usable.
Oh yeah, that website’s pretty great. It has really in-depth wikis about games, like fallout.fandom.com/wiki/Caesar's_Legion
So I guess you mean that Wales guy is pretty great then
Oh, you mean the fallout.wiki/wiki/Caesar's_Legion ?
The user content on fandom is generally pretty good, at least for the wikis I frequent. It’s everything else about the site which is awful – the pop-ups, the completely irrelevant auto-playing videos, how it’s constantly trying to shove other fandom wikis into your attention.
I’m sure the site is improved with userscripts and such, and I am already using adblock, but it’s pretty unforgivable IMO.
I will stop donating to Wikipedia if they use AI
Wikipedia already has a decade’s worth of operating costs in savings.
No they don’t, because they blow it on inflated exec wages.
What’s funny is that for enormous systems with network effects, we are trying to use mechanisms intended for smaller businesses, like a hot dog kiosk.
IRL we have a thing for those, it’s called democracy.
On the Internet it’s either anarchy or monarchy, sometimes bureaucratic dictatorship, but in that area even Soviet-style collegial rule is something not yet present.
I recently read that McPherson article about Unix and racism, and how our whole perception of correct computing (modularity, encapsulation, object-orientation, even the whole KISS philosophy) is based on that era’s changes in society and reactions to them. I mean, the real world is continuous and you can quantize it into discrete elements in many ways. Some unfit for your task. All unfit for some task.
So - first, I like the Usenet model.
Second, cryptography is good.
Third, cryptographic ownership of a limited resource is … fine, blockchains are maybe not so stupid. But it’s not really necessary, because one can choose between a few retrieved versions of the same article, based on a web of trust or whatever else. No need to have only one right version.
Fourth, we already have a way to turn a sequence of interdependent actions into state information; it’s called a filesystem.
Fifth, Unix with its hierarchies is really not the only thing in existence, there’s BTRON, and even BeOS had a tagged filesystem.
Sixth, interop and transparency are possible with cryptography.
Seventh, all these also apply to a hypothetical service over global network.
Eighth, of course, is that the global network doesn’t have to be globally visible/addressable to operate globally for spreading data, so even the Internet itself is not as necessary as the actual connectivity over which those change messages will propagate where needed and synchronize.
Ninth, for Wikipedia you don’t need as much storage as for, say, Internet Archive.
And tenth - with all these, one can make a Wikipedia-like decentralized system with democratic governance, based on rather primitive principles (apart, of course, from the cryptography involved).
(Yes, Briar impressed me.)
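The “choose between a few versions of the same article based on a web of trust” idea from the third point can be sketched in a few lines. This is only a toy illustration: every name and trust weight below is made up, and real signatures would of course be cryptographic.

```python
# Toy sketch: pick among retrieved versions of one article by summing the
# trust we assign to whoever signed each version. In a real system the
# signers would be verified cryptographically; here they are just strings.
trust = {"alice": 0.9, "bob": 0.6, "mallory": 0.1}  # invented weights

def version_score(signers: list[str]) -> float:
    # Unknown signers contribute nothing to the score.
    return sum(trust.get(s, 0.0) for s in signers)

versions = {
    "rev-a": ["alice", "bob"],   # endorsed by two trusted peers
    "rev-b": ["mallory"],        # endorsed only by a distrusted peer
}
best = max(versions, key=lambda v: version_score(versions[v]))
print(best)
```

No need for one canonical version: each client simply ranks whatever versions it manages to retrieve.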
How do you prevent sybil attacks without making it overly expensive to vote?
Why is leadership always so vapid and disconnected from reality?
Because this is one of the rare times he sat down at the keyboard to do the real work being done by people in this organization, and he realized that it’s hard and he wants a shortcut. He sees his time as more valuable and sees this task as wasting it, but it is their primary task, and one they do as volunteers because they are passionate about it. He’s not going to get a lot of traction by telling them that the thing they do for free, because they love it, isn’t worth anyone’s time.
I think commenters here don’t actually edit Wikipedia. Wales was instrumental in Wikipedia’s principles and organization, aside from Sanger’s first year. He handpicked the first administrators to make sure the project would continue its anarchistic organization and prevent a hierarchy from having a bigger say in content matters.
I would characterize Wales as a long-retired leader rather than leadership.
I swear these people have never been around a cathedral and thought about how it was built.
Because that’s what being in a position of power does to a mf
Remember you can download all of Wikipedia in your language and safely store it on a drive buried in your backyard, for after they rewrite history and eliminate freedom of speech.
Already got it downloaded. It’s only like 100 - 150 gigabytes or something like that. Got it on my PC, my laptop, and my external hard drive. I don’t trust the powers that be to keep it intact anymore so I’d rather have my own copy, even if outdated.
Some people can’t really stop seeing conspiracies everywhere.
As long as billionaires are campaigning to destroy it, there is no place for that comment you made.
I don’t think an existing conspiracy is necessary, it’s just a cheap way to help protect yourself and others against something that could happen one day.
By downloading it every month and seeding its torrent (totally legal!), you are also helping to keep Wikimedia accountable.
tbh i somehow didnt even realize that wikipedia is one of the few super popular sites not trying to shove ai down my throat every 5 seconds
i’m grateful now
Don’t count your chickens before they hatch. Jimmy Wales founded Wikipedia and, according to this article, has already used ChatGPT in a review process once.
damn T_T
To all our readers on Lemmy,
Please don’t scroll past this. This Friday, for the 1st time recently, we interrupt your reading to humbly ask you to support Wikipedia’s independence. Only 2% of our readers give. Many think they’ll give later, but then forget. If you donate just £2, or whatever you can this Friday, Wikipedia could keep thriving for years. We don’t run ads, and we never have. We rely on our readers for support. We serve millions of people, but we run on a fraction of what other top sites spend. Wikipedia is special. It is like a library or a public park where we can all go to learn. We ask you, humbly: please don’t scroll away. If Wikipedia has given you £2 worth of knowledge this year, take a minute to donate. Show the world that access to neutral information matters to you. Thank you.
He can stick AI inside his own ass
if jimmy wales puts ai in wikipedia i stg imma scream
The editor community rejected the idea so overwhelmingly that Wikipedia paused the planned experiment in June, hopefully for good.
The problem with LLMs and other generative AI is that they’re not completely useless. People’s jobs are on the line much of the time, so it would really help if they were completely useless, but they’re not. Generative AI is certainly not as good as its proponents claim, and critically, when it fucks up, it can be extremely hard for a human to tell, which eats away a lot of the benefit, but it’s not completely useless. For the most basic example, give an LLM a block of text, ask it how to improve the grammar or make a point clearer, compare the AI-generated result with the original, and take whatever parts you think the AI improved.
Everybody knows this, but we’re all pretending it’s not the case because we’re caring people who don’t want the world to be drowned in AI hallucinations, we don’t want to have the world taken over by confidence tricksters who just fake everything with AI, and we don’t want people to lose their jobs. But sometimes, we are so busy pretending that AI is completely useless that we forget that it actually isn’t completely useless. The reason they’re so dangerous is that they’re not completely useless.
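That “compare the AI result with the original and take only what improved” workflow is easy to support with a plain diff. A minimal sketch, with the LLM call itself left out entirely (`suggestion` stands in for whatever the model returned):

```python
import difflib

def review_suggestion(original: str, suggestion: str) -> list[str]:
    """Word-level diff so a human can accept or reject each change.

    `suggestion` would come from an LLM; here it is just a string.
    """
    diff = difflib.ndiff(original.split(), suggestion.split())
    # Keep only removed/added words; drop unchanged words and "?" guide lines.
    return [d for d in diff if d.startswith(("- ", "+ "))]

original = "Their going to review the articles tomorrow"
suggestion = "They're going to review the articles tomorrow"
print(review_suggestion(original, suggestion))
```

The human stays in the loop: nothing is accepted until someone reads the diff, which is exactly the step that guards against the hard-to-spot fuck-ups.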
It’s almost as if nuance and context matters.
How much energy does a human use to write a Wikipedia article? Now also measure the accuracy and completeness of the article.
Now do the same for AI.
Objective metrics are what’s missing, because much of what we hear is “PhD-level inference” when it’s still just a statistical, probabilistic generator.
It is completely useless as presented by the major players, who are trying to jam in models that try to do everything at the same time, and that is what we always talk about when discussing AI.
We aren’t talking about focused implementations that are limited to a certain set of data or designed for specific purposes. That is why we don’t need nuance, although the reminder that we aren’t talking about smaller-scale AI used by humans as tools is nice once in a while.
Honestly, translating the good articles from other languages would improve Wikipedia immensely.
For example, the Nanjing dialect article is pretty bare in English and very detailed in Mandarin
You can do that, that’s fine. But to verify an accurate translation, you need to know the subject matter and the language.
But you could probably also have used Google translate and then just fine tune the output yourself. Anyone could have done that at any point in the last 10 years.
Google translate is horrendously bad at Korean, especially with slang and accidental typos. Like nonsense bad.
As long as you can verify it is an accurate translation
Unless the process has changed in the last decade, article translations are a multi-step process, which includes translators and proof-readers. It’s easier to get volunteer proof-readers than volunteer translators. Adding AI for the translation step, but keeping the proof-reading step should be a great help.
But you could probably also have used Google translate and then just fine tune the output yourself. Anyone could have done that at any point in the last 10 years.
Have you ever used Google translate? Putting an entire Wikipedia article through it and then “fine tuning” it would be more work than translating it from scratch. Absolutely no comparison between Google translate and AI translations.
I recently have edited a small wiki page that was obviously written by someone that wasn’t proficient in English. I used AI to just reword what was already written and then I edited the output myself. It did a pretty good job. It was a page about some B-list Indonesian actress that I just stumbled upon and I didn’t want to put time and effort into it but the page really needed work done.
This is the goal of Abstract Wikipedia. meta.wikimedia.org/wiki/…/Abstract_Wikipedia
Wikipedia’s translation tool for porting articles between languages currently uses Google Translate, so I could see an LLM being an improvement, but LLMs are also way way costlier than normal translation models like Google Translate. Would it be worth it? And would the better LLM translations make editors less likely to reword the translation to make its tone better?
You can use an LLM to reword the translation to make the tone better. It’s literally what LLMs are designed to do.
He is nobody to Wikipedia now. He also failed to create a news site and a micro SNS.
Christ, I miss when I could click on an article and not be asked to sign up for it.
Oh, right! Thanks for reminding me. I tried to archive it the last time but it took forever.
You know, I remember way back in the day when…
# Interested in reading the rest of this comment?
Please sign up with your name, DOB, banking information, list of valuables, times you’re away from home, and an outline of your house key to “Yaztromo@lemmy.world”. It’s quick, easy, and fun!
…and that’s why I’m no longer welcome in New Zealand. Crazy!
As I have adblock mostly because of the abuse of trackers, I understand people trying to monetize their work.
Journalists monetizing their work is totally reasonable. The problem for me is that it seems unfair to ask that literally everyone trying to read an article have to sign up. Maybe I’m missing something.
Fuck AI
They’re trying to get rid of Wikipedia by saying they’re shit and doing things you’ll hate. Fight for no AI if that’s your thing, but read very carefully what’s happening. Wikipedia can NOT go away.
So I fed the page to ChatGPT to ask for advice. And I got what seems to me to be pretty good. And so I’m wondering if we might start to think about how a tool like AFCH might be improved so that instead of a generic template, a new editor gets actual advice. It would be better, obviously, if we had lovingly crafted human responses to every situation like this, but we all know that the volunteers who are dealing with a high volume of various situations can’t reasonably have time to do it. The templates are helpful - an AI-written note could be even more helpful.
This actually sounds like a plausibly decent use for an LLM. An initial review pass to take some of the load off the human review process isn’t a bad idea - he isn’t advocating for AI to write articles, just that it can be useful for copy-editing.
Important context: he’s not suggesting AIs writing content for Wikipedia. He’s suggesting using AI to provide feedback for new editors. Take that how you will.
Right, which makes it just as bad. Wikipedia has enough proofreaders. You don’t need AI for that, because the need is already filled.
This is entirely different from a book writer who is doing everything solo and has exactly one publishing window.
And writing feedback software has existed for decades. So AI adds nothing new. Again it is snake oil. It is always snake oil. Except when it’s bait and switch, to pretend it wasn’t snake oil in the first place.
Not sure about Wikipedia, but Conservapedia would find it very useful. In fact, since most of their entries are factually incorrect and appear as fantasy I think AI writing articles would save them a lot of time.
Only if used appropriately and in a safe manner.
Like a summary of article, translations etc
And definitely always highlighting what was generated by the AI
ok jimmy boy, will that ai also beg for donations? /s
Sit down Jimmy. Wikipedia has enough problems already, it doesn’t need more to be added by AI.
WikipedAI
Why would anyone want an editor that doesn’t fact check?
brucethemoose@lemmy.world 3 weeks ago
Whale’s quote isn’t nearly as bad as the byline makes it out to be:
That being said, it still wreaks of “CEO Speak.” And trying to find a place to shove AI in.
More NLP could absolutely be useful to Wikipedia, especially for flagging spam and malicious edits for human editors to review. This is an excellent task for dirt cheap, small and open models, where an error rate isn’t super important. And it’s a huge existing problem that needs solving.
…Using an expensive, proprietary API to give error prone yet “pretty good” sounding suggestions to new editors is not.
This is the problem. Not natural language processing itself, but the seemingly contagious compulsion among executives to find some place to shove it when the technical extent of their knowledge is typing something into ChatGPT.
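The flag-for-review pattern described above doesn’t need a fancy model to demonstrate. A toy sketch, with a crude keyword heuristic standing in for the small classification model (all marker strings and thresholds are invented):

```python
# Toy sketch: score edits and queue the suspicious ones for a human editor,
# rather than auto-reverting. A small error rate is tolerable because a
# human always makes the final call. The "model" here is just a heuristic.
SPAM_MARKERS = ("buy now", "click here", "free download", "!!!")

def spam_score(edit_text: str) -> float:
    text = edit_text.lower()
    hits = sum(marker in text for marker in SPAM_MARKERS)
    return min(1.0, hits / 2)  # two markers already saturates the score

def triage(edits: list[str], threshold: float = 0.5) -> list[str]:
    """Return only the edits a human editor should look at."""
    return [e for e in edits if spam_score(e) >= threshold]

queue = [
    "Fixed a typo in the second paragraph.",
    "BUY NOW!!! Click here for a free download!!!",
]
print(triage(queue))
```

Swapping the heuristic for a small open classification model keeps the same shape: dirt-cheap scoring up front, human judgment at the end.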
Pringles@sopuli.xyz 2 weeks ago
I think you mean reeks, which means to stink, having a foul odor.
cygnus@lemmy.ca 2 weeks ago
Those homophones have reeked havoc for too long!
brucethemoose@lemmy.world 2 weeks ago
Waves hands “You didn’t see anything.”
heavyboots@lemmy.ml 2 weeks ago
Thank you. Glad to know I am not the only one that got triggered, lol.
frezik@lemmy.blahaj.zone 2 weeks ago
This is another reason why I hate bubbles. There is something potentially useful in here. It needs to be considered very carefully. However, it gets to a point where everyone’s kneejerk reaction is that it’s bad.
I can’t even say that people are wrong for feeling that way. The AI bubble has affected our economy and lives in a multitude of ways that go far beyond any reasonable use. I don’t blame anyone for saying “everything under this is bad, period”. The reasonable uses of it are so buried in shit that I don’t expect people to even bother trying to reach into that muck to clean it off.
brucethemoose@lemmy.world 2 weeks ago
This bubble’s hate is pretty front-loaded though.
Dotcom was, well, a useful thing. I guess valuations were nuts, but it looks like the hate mostly came with the enshittified aftermath.
Crypto is a series of bubbles trying to prop up flavored pyramid schemes for a neat niche concept, but people largely figured that out after they popped.
Machine Learning is a long-running, useful field, but ever since ChatGPT caught investors’ eyes, the cart has felt so far ahead of the horse. The hate started, and got polarized, waaay before the bubble popped.
peoplebeproblems@midwest.social 2 weeks ago
So… I actually proposed a use case for NLP and LLMs in 2017. I don’t actually know if it was used.
But the use case was generating large sets of fake data that looked real enough for performance-testing enterprise-sized data transformations. That way we could skip a large portion of the risk associated with using actual customer data. We wouldn’t have to generate the data beforehand, we could validate logic with it, and we could just plop it into the replica non-production environment.
At the time we didn’t have any LLMs. So it didn’t go anywhere. But it’s always funny when I see all this “LLMs can do x” because I always think about how my proposal was to use it… For fake data.
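The fake-data idea can be sketched without any LLM at all; an LLM’s contribution would just be making the records more varied and realistic-looking. A minimal stdlib version (all field names and values are invented for illustration):

```python
import random

FIRST = ["Ada", "Grace", "Alan", "Edsger"]
LAST = ["Lovelace", "Hopper", "Turing", "Dijkstra"]

def fake_customer(rng: random.Random) -> dict:
    # Records shaped like customer data, but containing nothing real.
    first, last = rng.choice(FIRST), rng.choice(LAST)
    return {
        "name": f"{first} {last}",
        "email": f"{first.lower()}.{last.lower()}@example.com",
        "balance_cents": rng.randint(0, 1_000_000),
    }

def fake_batch(n: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)  # seeded, so runs are reproducible
    return [fake_customer(rng) for _ in range(n)]

batch = fake_batch(10_000)  # big enough to exercise a data transformation
print(len(batch))
```

Because the generator is seeded, the same performance test can be replayed in the replica environment without ever touching customer data.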