Meta has scraped data from the most-trafficked domains on the internet—including news organizations, education platforms, niche forums, personal blogs, and even revenge porn sites—to train its artificial intelligence models, according to a leaked list obtained by Drop Site News.
By scraping data from roughly 6 million unique websites, including 100,000 of the top-ranked domains, Meta has amassed millions of pages of content to feed its AI-training pipeline.
The sites Meta scrapes include copyrighted content, pirated content, and adult videos—some of it potentially obtained or recorded illegally—as well as news and original content from prominent outlets and publishers.
They include mainstream businesses like Getty Images, Shopify, and Shutterstock, but also sources of extreme pornographic content, including websites advertising explicit sexual content and humiliation porn that exploits teenagers.
Lemmy really hates piracy… in this specific context.
And a lot of the extreme and extremist content going into these things is just Twitter. People post all kinds of shit from all kinds of places. At what point is this like clutching pearls over what the Internet Archive has saved? They’re trying to grab anything you could see.
It’s not some hacking and exfiltration campaign. Meta’s just bad at spidering. How do you go breadth-first across the entire internet and still DDoS any particular site? You don’t decide to check every DeviantArt account at the same time, you dolts.
keyhoh@piefed.social 7 months ago
If I scrape Meta's AI to develop my own, would that be fair game? I'm genuinely curious about the legality of this.
BrikoX@lemmy.zip 7 months ago
Technically you would be breaking the terms of service and license, but in a legal sense we don’t know if that would be enforceable. Still hasn’t been answered by the courts.
cm0002@lemmy.world 7 months ago
So far, OpenAI, Anthropic, et al. haven’t sued anyone over it, but they have cut account access when it’s discovered to be used for that purpose.
It’s how early versions of DeepSeek were reportedly trained, IIRC. It’s called distillation.
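For anyone curious what distillation actually means: a smaller "student" model is trained to mimic a larger "teacher" model's output distribution, typically by querying the teacher (e.g. through its API) and minimizing a divergence between the temperature-softened outputs. A minimal sketch of that loss—toy logits and names are illustrative, not any lab's actual pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature spreads probability mass, exposing the
    # teacher's "dark knowledge" about near-miss answers.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over softened distributions: the student
    # is pushed to match the teacher's whole distribution, not just
    # its top-1 answer.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy "scrape": the teacher's logits stand in for responses
# harvested from a larger model's API.
teacher = [4.0, 1.0, 0.5]
student_bad = [0.0, 3.0, 1.0]   # disagrees with the teacher
student_good = [3.9, 1.1, 0.4]  # closely mimics the teacher

print(distillation_loss(teacher, student_bad) >
      distillation_loss(teacher, student_good))  # prints True
```

In a real training loop this loss (or a soft/hard-label mix of it) is backpropagated through the student; the teacher is only ever queried, which is why API access alone is enough to do it.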