AI crawlers tend to overwhelm websites by scraping data in the least efficient way possible, effectively DDoSing a huge portion of the internet. Perplexity already scraped the net for training data and is now hammering it inefficiently for searches.
Cloudflare is just trying to keep the bots from overwhelming everything.
BetaDoggo_@lemmy.world 7 months ago
Perplexity (an "AI search engine" company with $500 million in funding) can't bypass Cloudflare's anti-bot checks. For each search, Perplexity scrapes the top results and summarizes them for the user. Cloudflare intentionally blocks Perplexity's scrapers because it considers them malicious traffic. Perplexity argues that its scraping is acceptable because it's user-initiated.
Personally I think Cloudflare is in the right here. The scraped sites get zero revenue from Perplexity searches (unless the user decides to go through the sources section and click the links), and Perplexity's scraping is unnecessarily traffic-intensive since they don't cache the scraped data.
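To make the "they don't cache" complaint concrete: a scraper that kept even a short-lived in-memory cache would hit each origin site once per page per cache window instead of once per user search. A minimal sketch (hypothetical names; a real crawler would also honor HTTP `ETag`/`Last-Modified` conditional requests rather than a plain TTL):

```python
import time

class ScrapeCache:
    """Tiny in-memory cache so a crawler re-fetches a URL only after its TTL expires."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (fetched_at, content)

    def get_or_fetch(self, url, fetch):
        entry = self.store.get(url)
        now = time.time()
        if entry and now - entry[0] < self.ttl:
            return entry[1], True   # cache hit: no request reaches the origin site
        content = fetch(url)        # cache miss: one real fetch
        self.store[url] = (now, content)
        return content, False

# Usage: the second lookup within the TTL never touches the site.
calls = []
def fake_fetch(url):
    calls.append(url)
    return f"<html>page at {url}</html>"

cache = ScrapeCache(ttl_seconds=300)
cache.get_or_fetch("https://example.com", fake_fetch)
cache.get_or_fetch("https://example.com", fake_fetch)
print(len(calls))  # → 1
```

Even a TTL this crude would cut the load for popular queries, since many users' searches surface the same top results within minutes of each other.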
lividweasel@lemmy.world 7 months ago
That seems almost maliciously stupid. We need to train a new model. Hey, where’d the data go? Oh well, let’s just go scrape it all again. Wait, did we already scrape this site? No idea, let’s scrape it again just to be sure.
rdri@lemmy.world 7 months ago
First we complain that AI steals and trains on our data. Then we complain when it doesn’t train. Cool.
ubergeek@lemmy.today 7 months ago
I think it boils down to “consent” and “remuneration”.
I run a website, that I do not consent to being accessed for LLMs. However, should LLMs use my content, I should be compensated for such use.
So these LLM startups ignore both consent and remuneration.
Most of these concepts have already been worked out in law if we treat websites much like real estate: the usual trespass laws, compensatory usage, and hell, even eminent domain if needed (e.g., a city government could "take over" the boosted-post feature to make sure alerts get pushed as widely and quickly as possible).
spankmonkey@lemmy.world 7 months ago
They do it this way in case the data has changed, similar to how a person would be viewing the current site. The training was for the basic understanding; the real-time scraping accounts for changes.
It is also horribly inefficient and works like a small-scale DDoS attack.
jballs@lemmy.world 7 months ago
It’s worth giving the article a read. It seems that they’re not using the data for training, but for real-time results.