Comment on: AI companies are violating a basic social contract of the web and ignoring robots.txt

Aatube@kbin.social 10 months ago
robots.txt is purely textual; you can't run JavaScript or log anything. Plus, one who doesn't intend to follow robots.txt wouldn't query it.

ShitpostCentral@lemmy.world 10 months ago
Your second point is a good one, but you absolutely can log the IP which requested robots.txt. That's just a standard part of any HTTP server ever; no JavaScript needed.

GenderNeutralBro@lemmy.sdf.org 10 months ago
You'd probably have to go out of your way to avoid logging this. I've always seen such logs enabled by default when setting up web servers.
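For reference, a robots.txt fetch in a default combined-format access log looks roughly like this (IP, timestamp, and user agent invented for illustration):

```
203.0.113.5 - - [01/Jan/2024:12:00:00 +0000] "GET /robots.txt HTTP/1.1" 200 64 "-" "ExampleBot/1.0"
```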
ricecake@sh.itjust.works 10 months ago
People not intending to follow it is the real reason not to bother, but it's trivial to track who downloaded the file and then hit something they were asked not to.
Like, ten minutes' work to do it right. You don't need JS to do it at all.
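A minimal sketch of that tracking idea in Python, assuming a combined-format access log; the regex, log paths, and disallowed prefixes here are illustrative, not from the comment:

```python
import re

# Matches the client IP and request path from a combined-log-format line.
# (Assumed log format; adjust the pattern for your server's config.)
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]*\] "(?:GET|HEAD) (\S+)')

def flag_scrapers(log_lines, disallowed_prefixes):
    """Return IPs that fetched robots.txt and later hit a disallowed path."""
    saw_robots = set()
    flagged = set()
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, path = m.groups()
        if path == "/robots.txt":
            saw_robots.add(ip)
        elif ip in saw_robots and any(path.startswith(p) for p in disallowed_prefixes):
            flagged.add(ip)
    return flagged
```

Feed it your access log and the prefixes your robots.txt disallows, and you get back the IPs that read the rules and then ignored them.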
BrianTheeBiscuiteer@lemmy.world 10 months ago
If it doesn't get queried, that's the fault of the web scraper. You don't need JS built into the robots.txt file either. Just add a line like:
Disallow: /here-there-be-dragons.html
Any client that hits that page (and maybe doesn’t pass a captcha check) gets banned. Or even better, they get a long stream of nonsense.
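A minimal sketch of that trap in Python, using the here-there-be-dragons.html path from the comment above; the ban set and nonsense generator are invented for illustration:

```python
import random
import string

TRAP_PATH = "/here-there-be-dragons.html"  # listed as Disallowed in robots.txt
banned_ips = set()

def nonsense(n_words=200):
    """A stream of gibberish to feed to banned scrapers."""
    return " ".join(
        "".join(random.choices(string.ascii_lowercase, k=random.randint(3, 10)))
        for _ in range(n_words)
    )

def handle_request(ip, path):
    """Ban any client that touches the trap; serve banned clients nonsense."""
    if path == TRAP_PATH:
        banned_ips.add(ip)
    if ip in banned_ips:
        return 200, nonsense()
    return 200, "real page content"
```

Returning 200 with garbage, rather than a 403, is the "even better" variant: the scraper can't easily tell it has been caught.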
4am@lemm.ee 10 months ago
server {
    server_name herebedragons.example.com;
    root /dev/random;
}
PlexSheep@feddit.de 10 months ago
Nice idea! Better use /dev/urandom though, as that is non-blocking. See here.

aniki@lemm.ee 10 months ago
That was really interesting. I always used urandom out of habit and wondered what the difference was.
aniki@lemm.ee 10 months ago
I wonder if Nginx would just load random into memory and crash if you did this.
gravitas_deficiency@sh.itjust.works 10 months ago
I actually love the data-poisoning approach. I think that sort of strategy is going to be an unfortunately necessary part of the future of the web.