Comment on "The surreal joy of having an overprovisioned homelab" (2025) - from Anubis creator
tal@lemmy.today 6 days ago
What makes this worse is that git servers are the most pathologically vulnerable to the onslaught of doom from modern internet scrapers because, remember, they click on every link on every page.
The especially disappointing thing is that, for the specific case Xe was running into, a better-written scraper could recognize that this is a public git repository and just git clone the thing. Like, it's not even "this scraper is scraping data that I don't want it to have", it's "this scraper is too dumb to scrape the thing efficiently, and is wasting both the scraper's and the server's resources downloading innumerable redundant copies of the same data".
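A minimal sketch of what "recognize it and clone it" could look like, assuming the repo is served over git's standard smart-HTTP transport (the `/info/refs?service=git-upload-pack` endpoint); the URL handling and helper names here are placeholders, not anything from Xe's actual setup:

```python
import subprocess
import urllib.error
import urllib.request

def looks_like_git_repo(repo_url: str) -> bool:
    """Probe the smart-HTTP ref advertisement that public git repos expose."""
    probe = repo_url.rstrip("/") + "/info/refs?service=git-upload-pack"
    try:
        with urllib.request.urlopen(probe, timeout=10) as resp:
            ctype = resp.headers.get("Content-Type", "")
            return ctype.startswith("application/x-git-upload-pack-advertisement")
    except (urllib.error.URLError, TimeoutError):
        return False

def fetch_repo(repo_url: str, dest: str) -> None:
    """Clone once (shallowly) instead of requesting every commit/blob/diff page."""
    if looks_like_git_repo(repo_url):
        subprocess.run(["git", "clone", "--depth=1", repo_url, dest], check=True)
    else:
        raise ValueError(f"{repo_url} does not speak the git HTTP transport")
```

One clone plus an occasional git fetch replaces the thousands of per-commit, per-blob page loads that the web UI otherwise has to render.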
It’s probably just as well, since the protection is relevant for other websites, and he probably wouldn’t have done it if he hadn’t been getting his git repo hammered, but…
mic_check_one_two@lemmy.dbzer0.com 5 days ago
Sorta like how people complain about bots scraping Lemmy, even though federation already exists as a standardized protocol for distributing exactly this data. Any scraper that wanted to scrape Lemmy efficiently would just spin up their own instance and let federation do the scraping for them. It would even have the added benefit that they could set their server to ignore delete requests, so deleted posts/comments wouldn't get automatically removed from their copy. And then they could scrape as much as they wanted without impacting anyone else.
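As a rough illustration of how little scraping is actually needed: Lemmy speaks ActivityPub, so its objects can be fetched as machine-readable JSON just by asking for them. A real scraper-instance would subscribe to communities and have activity pushed to it; this one-off fetch (with a made-up post URL) is only a sketch of the idea:

```python
import json
import urllib.request

def fetch_activitypub_object(url: str) -> dict:
    """Ask a federated server for the ActivityPub (JSON-LD) form of an object."""
    req = urllib.request.Request(url, headers={"Accept": "application/activity+json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Hypothetical post URL; Lemmy posts, comments, and communities resolve the same way.
post = fetch_activitypub_object("https://lemmy.example/post/12345")
print(post.get("type"), post.get("id"))
```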
But they don’t want to do that, because it would require the smallest modicum of forethought. They don’t care that scrapers are trashing the Internet and running up massive bandwidth costs for the people hosting it. They just want the data, and they want it now. All of those “bots are flooding my server and eating all my bandwidth, so legitimate users can’t actually access the site” complaints are somebody else’s problem.
tal@lemmy.today 5 days ago
I bet that if someone went to the Internet Archive, they could pay to get timestamped snapshots of professionally-spidered content, with zero additional load on the websites themselves. I’m sure it’d cost something for all the hard drives and probably something for labor, but so does spidering the whole Internet yourself.