If your reverse proxy is Traefik, I would suggest this plugin, which pulls the robots.txt from this GitHub repository.
I honestly should've set up a robots.txt a long time ago.
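For reference, a minimal robots.txt that asks all crawlers to stay away looks like this (served at the site root as `/robots.txt`):

```
# Ask all crawlers to skip the entire site
User-agent: *
Disallow: /
```

You can also target specific crawlers by name with separate `User-agent` blocks instead of `*`.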
xthexder@l.sw0.com 1 day ago
Unfortunately robots.txt only stops the well behaved scrapers. Even with disallow all, you’ll still get loads of bots. Setting up the web server to block those user agents would work a bit better, but even then there’s bots out there crawling using regular browser user agents.
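To illustrate the user-agent blocking approach, here's a rough sketch of what it might look like in nginx (the bot names are just examples of known AI crawlers, not an exhaustive or authoritative list):

```nginx
# Map common crawler user agents to a flag (case-insensitive regex match)
map $http_user_agent $block_bot {
    default     0;
    ~*GPTBot    1;
    ~*CCBot     1;
    ~*Bytespider 1;
}

server {
    listen 80;
    server_name example.com;

    # Refuse flagged bots outright
    if ($block_bot) {
        return 403;
    }

    # ... normal site config ...
}
```

As noted above, this only helps against bots that send an honest user agent; crawlers spoofing a regular browser string will sail right past it.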