As you've probably seen or heard, Drop Site News has published a list (from a Meta whistleblower) of "the roughly 100,000 top websites and content delivery network addresses scraped to train Meta's proprietary AI models" -- including quite a few fedi sites. Meta denies everything, of course, but they routinely lie through their teeth, so who knows. In any case, whether or not the specific details in the report are accurate, this is certainly a threat worth thinking about.

So I'm wondering what defenses fedi admins are using today to try to defeat scrapers: robots.txt, user-agent blocking, firewall-level blocking of IP ranges, Cloudflare or Fastly AI scraper blocking, Anubis, stuff you don't want to disclose ...
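
(For anyone starting from zero, here's a minimal robots.txt sketch. The user-agent tokens are the publicly documented ones for the major AI crawlers -- Meta documents meta-externalagent as its AI training crawler -- but which bots to block is your call, and this list isn't exhaustive:)

    # Advisory only: this blocks AI trainers that choose to honor robots.txt
    User-agent: GPTBot
    User-agent: ClaudeBot
    User-agent: CCBot
    User-agent: Google-Extended
    User-agent: FacebookBot
    User-agent: meta-externalagent
    Disallow: /

(Of course that's purely honor-system; anything that ignores robots.txt is what the user-agent blocking, firewall rules, and Anubis-style challenges are for.)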

And a couple of more open-ended questions:

  • Do you feel like your defenses against scraping are generally holding up pretty well?

  • Are there other approaches that you think might be promising that you just haven't had the time or resources to try?

  • Do you have any language in your terms of service that attempts to prohibit using your content for AI training?

Here's @FediPact's post with a link to the Drop Site News report and (in the replies) a rundown of the fedi instances and CDNs that appear on the list.

https://cyberpunk.lol/@FediPact/114999480874284493

@fediverse @fediversenews

#MastoAdmin #Meta #FediPact