As unscrupulous AI companies crawl for more and more data, the basic social contract of the web is falling apart.
Honestly, it seems like the social contract is being ignored in all aspects of society these days, which is why things seem so much worse now.
palordrolap@kbin.social 9 months ago
Put something in robots.txt that isn't supposed to be hit and is hard to hit by non-robots. Log and ban all IPs that hit it.
Imperfect, but can't think of a better solution.
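A minimal sketch of what that robots.txt entry could look like (the decoy filename is just a placeholder, and it should appear nowhere except this file):

```
# robots.txt
User-agent: *
# Decoy path: nothing links to it and no real content lives there, so any
# request for it means the client read this file and crawled it anyway,
# or is scraping exhaustively. Log and ban those IPs.
Disallow: /here-there-be-dragons.html
```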
lvxferre@mander.xyz 9 months ago
Good old honeytrap. I’m not sure, but I think that it’s doable.
Have a honeytrap page somewhere on your website. Make sure that legit users won't access it. Disallow crawling that page through robots.txt.
Then if some crawler still accesses it, you can simply record the hit in the logs and ban the relevant IPs. Or you could be really nasty and fill the page with poison: nonsensical text that looks like something a human would write.
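A rough sketch of the nasty variant, assuming the decoy path is proxied to a small Python handler by the real web server (the path, port, and word list are all made up):

```python
# honeytrap.py - log crawlers that hit the disallowed decoy page and feed
# them nonsense text. Sketch only; in practice the main web server would
# proxy just the trap path to this handler.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

TRAP_PATH = "/here-there-be-dragons.html"   # same path disallowed in robots.txt
WORDS = "lorem quantum herring velvet doorknob sincerely above lantern".split()

def poison(n_sentences: int = 200) -> str:
    """Generate plausible-looking but meaningless sentences."""
    sentences = []
    for _ in range(n_sentences):
        words = random.choices(WORDS, k=random.randint(6, 14))
        sentences.append(" ".join(words).capitalize() + ".")
    return " ".join(sentences)

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith(TRAP_PATH):
            # Record the offender; a cron job or fail2ban rule can ban these IPs.
            with open("trap_hits.log", "a") as log:
                log.write(f"{self.client_address[0]} {self.headers.get('User-Agent', '-')}\n")
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(b"<html><body><p>" + poison().encode() + b"</p></body></html>")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TrapHandler).serve_forever()
```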
CosmicTurtle@lemmy.world 9 months ago
I think I used to do something similar with email spam traps. Not sure if it's still around, but basically you could help build NaCL lists by posting an email address on your website somewhere that was visible in the source code but not visible to normal users, like in a div pushed way off the left side of the screen.
Anyway, spammers that do regular expression searches for email addresses would email it and get their IPs added to naughty lists.
I’d love to see something similar with robots.
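For reference, the hidden-address trick looked roughly like this (the address and offset are placeholders); harvesters that regex the raw HTML pick it up, while ordinary visitors never see it:

```html
<!-- Spam-trap address: positioned far off-screen so real visitors never
     see it, but scrapers that regex the raw HTML will happily harvest it. -->
<div style="position: absolute; left: -9999px;" aria-hidden="true">
  Contact: spamtrap-2024@example.com
</div>
```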
thefactremains@lemmy.world 9 months ago
Even better. Build a WordPress plugin to do this.
KairuByte@lemmy.dbzer0.com 9 months ago
I’m the idiot human that digs through robots.txt and the site map to see things that aren’t normally accessible by an end user.
Blackmist@feddit.uk 9 months ago
“Help, my website no longer shows up in Google!”
PM_Your_Nudes_Please@lemmy.world 9 months ago
Yeah, this is a pretty classic honeypot method. Basically, you make something reachable but ensure that no normal user would ever access it. Then you know that anyone who does access it is not a normal user.
I’ve even seen this done with Steam achievements before: there was a hidden game achievement which was only obtainable via hacking. So anyone who used hacks immediately outed themselves with a rare achievement that was visible on their profile.
Link@rentadrunk.org 9 months ago
That’s a bit annoying as it means you can’t 100% the game as there will always be one achievement you can’t get.
CileTheSane@lemmy.ca 9 months ago
There are tools that just flag you as having gotten an achievement on Steam, you don’t even have to have the game open to do it. I’d hardly call that ‘hacking’.
Aatube@kbin.social 9 months ago
robots.txt is purely textual; you can't run JavaScript or log anything. Plus, one who doesn't intend to follow robots.txt wouldn't query it.
BrianTheeBiscuiteer@lemmy.world 9 months ago
If it doesn’t get queried, that’s the fault of the web scraper. You don’t need JS built into the robots.txt file either. Just add a line like:
Disallow: /here-there-be-dragons.html
Any client that hits that page (and maybe doesn’t pass a captcha check) gets banned. Or even better, they get a long stream of nonsense.
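The “gets banned” part is often wired up with fail2ban watching the access log; a sketch assuming nginx’s combined log format and the decoy filename above (paths and ban time are placeholders):

```ini
# /etc/fail2ban/filter.d/bot-trap.conf
[Definition]
failregex = ^<HOST> .*"(GET|POST|HEAD) /here-there-be-dragons\.html

# /etc/fail2ban/jail.local (excerpt)
[bot-trap]
enabled  = true
port     = http,https
filter   = bot-trap
logpath  = /var/log/nginx/access.log
maxretry = 1
bantime  = 86400
```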
ShitpostCentral@lemmy.world 9 months ago
Your second point is a good one, but you absolutely can log the IP which requested robots.txt. That’s just a standard part of any HTTP server ever, no JavaScript needed.
ricecake@sh.itjust.works 9 months ago
People not intending to follow it is the real reason not to bother, but it’s trivial to track who downloaded the file and then hit something they were asked not to.
Like, 10 minutes’ work to do it right. You don’t need JS to do it at all.
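Roughly the ten-minute version, as a sketch: assuming a combined-format access log and the decoy path used earlier in the thread, this cross-references who fetched robots.txt with who then hit the disallowed page:

```python
# trap_report.py - list clients that fetched robots.txt and then requested a
# path robots.txt told them not to. Sketch only: assumes a combined-format
# access log and the decoy path discussed above.
import re
import sys

TRAP_PATH = "/here-there-be-dragons.html"
LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|HEAD) (\S+)')

fetched_robots, hit_trap = set(), set()

with open(sys.argv[1] if len(sys.argv) > 1 else "access.log") as log:
    for line in log:
        m = LINE.match(line)
        if not m:
            continue
        ip, path = m.groups()
        if path.startswith("/robots.txt"):
            fetched_robots.add(ip)
        elif path.startswith(TRAP_PATH):
            hit_trap.add(ip)

# IPs that read the rules and broke them anyway -- prime ban candidates.
for ip in sorted(hit_trap & fetched_robots):
    print(f"{ip}  read robots.txt, then hit {TRAP_PATH}")
# IPs that never bothered reading robots.txt at all.
for ip in sorted(hit_trap - fetched_robots):
    print(f"{ip}  hit {TRAP_PATH} without ever reading robots.txt")
```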
Ultraviolet@lemmy.world 9 months ago
Better yet, point the crawler to a massive text file of almost but not quite grammatically correct garbage to poison the model.
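A toy version of such a generator, sketched as a first-order Markov babbler seeded from any real prose (the seed file and output size are placeholders); the output stays locally plausible but globally meaningless:

```python
# babble.py - emit "almost but not quite grammatically correct" text by
# sampling word pairs from a seed corpus. A toy sketch; real poisoning
# would need far more volume and variety than this.
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the words that follow it in the seed text."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain: dict, n_words: int = 50000) -> str:
    word = random.choice(list(chain))
    out = [word]
    for _ in range(n_words - 1):
        followers = chain.get(word)
        if not followers:               # dead end: jump somewhere random
            word = random.choice(list(chain))
        else:
            word = random.choice(followers)
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    with open("seed.txt") as f:         # any real prose as the seed corpus
        chain = build_chain(f.read())
    print(babble(chain))
```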
odelik@lemmy.today 9 months ago
Maybe one of the lorem ipsum generators could help.
nullPointer@programming.dev 9 months ago
a bad-bot .htaccess trap.
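One common shape for that trap, assuming Apache 2.4 with mod_rewrite enabled and the decoy path used above (the denied IPs are placeholders):

```apache
# .htaccess sketch: anything requesting the decoy path gets a 403, and
# offender IPs collected from the trap log can be denied site-wide.
RewriteEngine On
RewriteRule ^here-there-be-dragons\.html$ - [F,L]

# IPs appended here by hand or by a script watching the trap log
<RequireAll>
    Require all granted
    Require not ip 203.0.113.42
    Require not ip 198.51.100.0/24
</RequireAll>
```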