Comment on Cloudflare CEO warns AI and zero-click internet are killing the web's business model
jonathan7luke@lemmy.ml 2 days ago
This is part of a larger problem: AI tools are trained on (and profit from) content that is produced and hosted by others, who are now watching their traffic shift from humans to bots. For content sources that pay for hosting with ads, that shift means lost revenue. For content sources like Wikipedia, it means hosting costs rising significantly with the surge in bot traffic. Even if you want every website that depends on ad revenue to fail (which I don’t entirely agree with), AI is still damaging the open web in other ways. Sites like Wikipedia may soon be forced to lock content behind logins or deploy aggressive CAPTCHAs just to fight the bot traffic, which makes things worse for those of us who still prefer actual websites over AI summaries.
pinkapple@lemmy.ml 2 days ago
Nobody is scraping Wikipedia over and over to create datasets for AIs; there are already open datasets and API deals. And Wikipedia in particular has always published a data dump of the entire database, twice a month:
dumps.wikimedia.org
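If you just want article text programmatically, the public MediaWiki Action API hands it over without any scraping. A minimal sketch (Python stdlib only; the page title and contact address are placeholders, not anything specific):

```python
import json
import urllib.parse
import urllib.request

# Ask the public MediaWiki Action API for a plain-text intro extract
# instead of scraping rendered HTML. The title is just an example.
params = urllib.parse.urlencode({
    "action": "query",
    "prop": "extracts",
    "explaintext": 1,
    "exintro": 1,
    "titles": "Common Crawl",
    "format": "json",
})
url = "https://en.wikipedia.org/w/api.php?" + params

# Wikimedia asks API clients for a descriptive User-Agent; this one is a placeholder.
req = urllib.request.Request(url, headers={"User-Agent": "example-bot/0.1 (contact@example.org)"})
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

# The API keys results by internal page ID, so iterate over the values.
for page in data["query"]["pages"].values():
    print(page["title"])
    print(page.get("extract", "")[:300])
```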
TheOneCurly@lemm.ee 2 days ago
You clearly haven’t run a website recently. Until I set up Anubis last week, I was getting constant requests from dozens of different bot scrapers 24/7. That included the big ones.
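(For anyone unfamiliar: Anubis deters crawlers by making every client grind through a small proof-of-work before the page is served. A conceptual sketch of that idea, not Anubis’s actual code, with an arbitrary difficulty:)

```python
import hashlib
import os

DIFFICULTY = 4  # leading zero hex digits required; arbitrary, for illustration only

def make_challenge() -> str:
    """Server side: issue a random challenge string."""
    return os.urandom(16).hex()

def solve(challenge: str) -> int:
    """Client side: brute-force a nonce whose hash meets the difficulty.
    Cheap for one human page load, expensive at crawler scale."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int) -> bool:
    """Server side: one hash to check what took the client many."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

challenge = make_challenge()
nonce = solve(challenge)
print(verify(challenge, nonce))  # True
```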
pinkapple@lemmy.ml 1 day ago
Kay, and that has nothing to do with what I said. Scrapers and bots ≠ AI. It’s not even the same companies that make the unfree datasets. The scrapers and bots hitting your website are not some random “AI” feeding on data, lol. This is what some models are actually trained on; it’s already free, so it doesn’t need to be individually rescraped, and it’s mostly garbage-quality data: commoncrawl.org. Nobody wastes resources rescraping all this SEO-infested dump.
Your issue has more to do with SEO than anything else. Btw, before you diss Common Crawl: it’s used in research quite a lot, so it’s not some evil thing that threatens people’s websites. Add a robots.txt, maybe.
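For the self-identifying crawlers, a plain robots.txt along these lines is the published opt-out (these User-agent tokens come from the crawler operators’ own docs; honoring them is voluntary on the bot’s side):

```
# Published crawler tokens; compliance is voluntary.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Bytespider
Disallow: /
```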
TheOneCurly@lemm.ee 1 day ago
Oh ok, I’ll just ignore the constant requests from GPTBot, ByteSpider, and the hundreds of others that very plainly tell you, sometimes right in their user agent, that they’re grabbing content for training data. robots.txt is nice and all, but manually adding every single up-and-coming AI company is impossible. Like I said, Anubis is the first thing that’s gotten them all to even remotely calm down.
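If anyone wants to check their own traffic, here’s a rough sketch that tallies self-identified AI crawlers in an access log, assuming the common nginx/Apache “combined” format (the path and token list are just examples, not an exhaustive inventory):

```python
import re
from collections import Counter

# Hypothetical path; adjust for your server. Assumes the "combined"
# log format, where the user agent is the last quoted field.
LOG_PATH = "/var/log/nginx/access.log"

# Self-identifying AI crawler tokens; an illustrative, not exhaustive, list.
AI_TOKENS = ("GPTBot", "Bytespider", "CCBot", "ClaudeBot", "Amazonbot")

ua_pattern = re.compile(r'"([^"]*)"$')  # last quoted field = user agent

counts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = ua_pattern.search(line.strip())
        if not match:
            continue
        ua = match.group(1)
        for token in AI_TOKENS:
            if token in ua:
                counts[token] += 1

for token, n in counts.most_common():
    print(f"{token}: {n} requests")
```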
jonathan7luke@lemmy.ml 1 day ago
- diff.wikimedia.org/…/how-crawlers-impact-the-oper…
pinkapple@lemmy.ml 3 hours ago
Omg, exactly! Thanks. Yet nothing in there about having to put content behind logins to stop bots, because that kind of measure isn’t really a thing when you already provide data dumps and an API to Wikimedia Commons.
And the “source” for the traffic being scraping for training models boils down to: they’re blocking JavaScript, therefore bots, therefore crawlers. Just trust me, bro.