Comment on A Project to Poison LLM Crawlers
GamingChairModel@lemmy.world 8 hours ago
> The Fediverse is designed specifically to publish its data for others to use in an open manner.
Sure, and if the AI companies want to configure their crawlers to actually use APIs and ActivityPub to fetch that data efficiently, great. The problem is that there have been crawlers that do things very inefficiently (whether through malice, ignorance, or misconfiguration) and repeatedly scrape the HTML of sites, driving up some hosting costs and effectively DoSing some of the sites.
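As a rough sketch of what "using ActivityPub" means here (the post URL and bot name are made up for illustration): instead of fetching the rendered HTML page, a well-behaved crawler can ask for the ActivityStreams JSON representation of the same object via standard HTTP content negotiation, which is far cheaper for the server to produce:

```python
import urllib.request

# Media types defined by the ActivityPub spec for requesting the JSON
# representation of an object rather than its HTML page.
ACTIVITYPUB_ACCEPT = (
    'application/ld+json; profile="https://www.w3.org/ns/activitystreams", '
    "application/activity+json"
)

def activitypub_request(url: str, user_agent: str = "ExampleBot/1.0") -> urllib.request.Request:
    """Build a request for the ActivityStreams JSON of a Fediverse object.

    A server that speaks ActivityPub will return compact JSON instead of
    rendering a full HTML page for every hit.
    """
    return urllib.request.Request(
        url,
        headers={"Accept": ACTIVITYPUB_ACCEPT, "User-Agent": user_agent},
    )

# Hypothetical post URL; passing req to urllib.request.urlopen() would
# fetch the JSON object instead of the HTML view.
req = activitypub_request("https://lemmy.world/post/12345")
```

The identifying User-Agent matters too: it lets admins rate-limit or block a misbehaving bot without collateral damage.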
If you put honeypot URLs in the mix, keep polite bots out with robots.txt, and keep humans out by hiding those links, you can serve poisoned responses only on URLs that nobody should be visiting, without worrying too much about collateral damage to legitimate visitors.
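A minimal sketch of that setup (the trap prefix, link, and poisoned payload are all hypothetical): disallow a trap path in robots.txt so polite bots skip it, hide the link from humans with CSS so only scrapers follow it, and serve poisoned content only under that path:

```python
# Hypothetical trap prefix; a real deployment would likely randomize it.
HONEYPOT_PREFIX = "/trap/"

# Polite crawlers that honor robots.txt never request anything under the trap.
ROBOTS_TXT = f"""User-agent: *
Disallow: {HONEYPOT_PREFIX}
"""

# The link exists in the HTML for crawlers to find, but is invisible to humans.
HIDDEN_LINK = f'<a href="{HONEYPOT_PREFIX}a1b2c3" style="display:none">archive</a>'

def respond(path: str) -> str:
    """Serve poisoned text for trap URLs, normal content everywhere else."""
    if path.startswith(HONEYPOT_PREFIX):
        return "poisoned gibberish for misbehaving scrapers"  # placeholder payload
    return "normal page content"
```

The only visitors who ever reach a trap URL are bots that both ignored robots.txt and followed a link no human can see, so poisoning those responses carries little risk to legitimate traffic.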
FaceDeer@fedia.io 8 hours ago
I have a sneaking suspicion that the vast majority of the people raging about AIs scraping their data are not raging about it being done inefficiently.
badgermurphy@lemmy.world 1 hour ago
Maybe not, but that is at least in part because they don't understand what the previous poster said. If these scrapers harvested data through API calls instead of crawling an entire domain's HTML, the load on the target's server resources would be far lighter, and one would think people would be less annoyed than when the same data is taken at that cost.
Their grievances with LLMs and their owners may not be limited to that, but they are certainly likely to include it.