Comment on A Project to Poison LLM Crawlers
douglasg14b@lemmy.world 5 days ago
I can get a 50Gb/s residential link where I am, and have a whole rack of servers.
Sounds like a good opportunity to crowdfund thousands and thousands of common scrapable instances that have random poisoning.
vane@lemmy.world 4 days ago
To be honest, bandwidth isn’t the problem because it’s text files. The problem is optimizing the network stack for lots of simultaneous connections, because they hit you from whole subnets with no delay between requests, so it’s literally a DDoS. You also have to cache those HTML files, because at some point the CPU becomes the bottleneck.
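For anyone wondering what that kind of network stack tuning looks like, here is a minimal sketch of the usual Linux sysctl knobs for surviving connection floods. All values are illustrative assumptions, not settings anyone in this thread actually posted; tune them for your own hardware.

```
# /etc/sysctl.d/99-scraper-flood.conf -- illustrative values, not a recommendation
net.core.somaxconn = 65535                 # deeper accept queue per listening socket
net.ipv4.tcp_max_syn_backlog = 65535       # absorb SYN floods from whole subnets
net.core.netdev_max_backlog = 65536        # queue more packets before the kernel drops them
net.ipv4.ip_local_port_range = 1024 65535  # more ephemeral ports for proxy-to-upstream traffic
net.ipv4.tcp_fin_timeout = 15              # recycle closing connections faster
net.ipv4.tcp_tw_reuse = 1                  # reuse TIME_WAIT sockets for new outbound connections
fs.file-max = 1048576                      # plenty of file descriptors for open sockets
```

Apply with `sudo sysctl --system`.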
douglasg14b@lemmy.world 4 days ago
This is assuming aggressively cached, yes.
Also, “just text files” is what every website is, minus media. And you can still EASILY get 10+ MB pages this way between the HTML, CSS, JS, and JSON, which are all text files.
A Gitea repo page, for example, is 400-500KB transferred (1.5-2.5MB decompressed), almost all of it text.
If you have a repo with 150 files and the scraper isn’t caching assets (many don’t), then you just served up ~60MB of HTML/CSS/JS (150 pages × ~400KB) alongside the actual repository assets.
vane@lemmy.world 4 days ago
I don’t know from theory or measurements, but I know that my 8 cores were depleted sooner than my bandwidth, and I only have like a 60 Mb/s uplink. My Linux network stack parameters are pretty aggressive. The way I figured out something wasn’t right was when I heard loud fan noise from the server in my room. I logged in, all cores were red, and the logs showed corporate fuckers trying to burn my house down.
douglasg14b@lemmy.world 4 days ago
I assume the Gitea instance itself was being hit directly, which would make sense. It has a whole rendering stack: it has to reach out to a database, get the data, render the actual webpage through a template, etc.
That’s a massive amount of work compared to serving up static files from, say, Nginx or Caddy. You can stick one of those in front of your servers and cache HTTP responses (to some degree anyway; how much depends on Gitea). A sketch of that setup follows below.
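A minimal sketch of that fronting cache in Nginx, assuming Gitea on its default port 3000; the hostname, cache sizes, and TTLs are illustrative assumptions, not anyone’s actual config:

```
# Illustrative caching reverse proxy in front of a Gitea instance.
# Values assume a poisoning/honeypot setup where stale pages are fine.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=gitea:50m
                 max_size=5g inactive=30d use_temp_path=off;

server {
    listen 80;
    server_name git.example.com;               # hypothetical hostname

    location / {
        proxy_pass http://127.0.0.1:3000;      # Gitea's default listen port
        proxy_cache gitea;
        proxy_cache_valid 200 30d;             # staleness doesn't matter here
        proxy_ignore_headers Cache-Control Expires;   # cache even if Gitea says not to
        proxy_cache_use_stale error timeout updating
                              http_500 http_502 http_503 http_504;
        add_header X-Cache-Status $upstream_cache_status;  # observe hit/miss while tuning
    }
}
```

With long TTLs and `proxy_ignore_headers`, almost every scraper hit is answered straight from the cache, and Gitea’s rendering stack only sees the occasional miss.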
Benchmarks like this show what kind of throughput you can expect from, say, a 4-core VM just serving up cached files: blog.tjll.net/reverse-proxy-hot-dog-eating-contes…
That works out to 90-400MB/s on 4 cores from the stats there, enough to saturate a 3Gb/s connection. And caching intentionally poisoned sites is crazy easy, since you don’t care whether the content is stale. Put a Cloudflare cache in front of it and it’s even easier.