Comment on FediDB has stopped crawling until they get robots.txt support
WhoLooksHere@lemmy.world 2 months ago
From your own wiki link:
robots.txt is the filename used for implementing the Robots Exclusion Protocol, a standard used by websites to indicate to visiting web crawlers and other web robots which portions of the website they are allowed to visit.
How is FediDB not an “other web robot”?
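For what it's worth, honoring the protocol quoted above is trivial for a crawler: Python's standard library already parses robots.txt rules. A minimal sketch (the domain, user-agent string, and rules below are made up for illustration, not FediDB's actual configuration):

```python
# Sketch: how a crawler could check robots.txt before hitting an endpoint,
# using Python's stdlib. Rules and URLs here are hypothetical examples.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Normally you'd call rp.set_url(...) and rp.read(); here we parse
# an example policy inline that blocks all robots from the API.
rp.parse([
    "User-agent: *",
    "Disallow: /api/",
])

# A well-behaved robot asks before fetching:
print(rp.can_fetch("FediDB", "https://example.social/api/v1/stats"))  # False
print(rp.can_fetch("FediDB", "https://example.social/about"))         # True
```

Whether API endpoints count as "portions of the website" is exactly the dispute below, but mechanically nothing stops a stats crawler from respecting the same rules.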
rimu@piefed.social 2 months ago
Ok, if you want to focus on that single phrase and ignore the rest of the page, which documents decades of search-engine history without a single mention of API endpoints, that's fine. You can have the win on this; here's a gold star.
WhoLooksHere@lemmy.world 2 months ago
Okay, so why reinvent a standard when one already exists that serves functionally the same purpose, with the added benefit of implied consent?