I just think you're making it way simpler than it is... Why not implement 20 other standards that have been around for 30 years? Why not make software perfect and without issues? Why not anticipate what other people will do with your public API endpoints in the future?
There could be many reasons. They forgot, they didn't bother, they didn't consider themselves to be the same as a commercial Google or Yandex crawler... That's why I keep pushing for information and refuse to give a simple answer. It could be an honest mistake. It could be honest and correct behavior, and the other side is wrong, since it's not a crawler like Google or the AI copyright thieves... It could have been done maliciously. In my opinion, it's likely that it hadn't been an issue before; the situation changed, and now it needs a solution. And we're getting one. It seems FediDB at least took the crawler offline and they're working on robots.txt support. They did not refuse to do it. So it's fine. And I can't comment on why it hadn't been in place; I'm not involved with that project or the history of its development.
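For what it's worth, honoring robots.txt doesn't require much code. Here's a minimal sketch using only Python's standard-library `urllib.robotparser`; the bot name and paths are hypothetical examples, not FediDB's actual user agent or endpoints:

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_txt: str, user_agent: str, path: str) -> bool:
    """Return True if the given robots.txt text permits user_agent to fetch path."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, path)

# Hypothetical robots.txt that blocks a stats crawler from API endpoints
# while leaving everything else open:
ROBOTS = """\
User-agent: ExampleStatsBot
Disallow: /api/

User-agent: *
Allow: /
"""

print(allowed(ROBOTS, "ExampleStatsBot", "/api/v1/instance"))  # False
print(allowed(ROBOTS, "OtherBot", "/nodeinfo/2.0"))            # True
```

A crawler would fetch `/robots.txt` from each instance before hitting its endpoints and skip the instance (or the disallowed paths) when the check returns False.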
rimu@piefed.social 2 weeks ago
Let's see about that.
Wikipedia lists http://www.robotstxt.org as the official homepage of robots.txt and the "Robots Exclusion Protocol". In the FAQ at http://www.robotstxt.org/faq.html the first entry is "What is a WWW robot?" http://www.robotstxt.org/faq/what.html. It says:
That's not FediDB. That's not even nodeinfo.
WhoLooksHere@lemmy.world 2 weeks ago
From your own wiki link
How is FediDB not an "other web robot"?
rimu@piefed.social 2 weeks ago
Ok if you want to focus on that single phrase and ignore the whole rest of the page which documents decades of stuff to do with search engines and not a single mention of api endpoints, that's fine. You can have the win on this, here's a gold star.
WhoLooksHere@lemmy.world 2 weeks ago
Okay,
So why reinvent a standard when one already exists that serves functionally the same purpose, with implied consent built in?