Cloudflare announces AI Labyrinth, which uses AI-generated content to confuse and waste the resources of AI Crawlers and bots that ignore “no crawl” directives.
Submitted 1 year ago by Tea@programming.dev to technology@lemmy.world
https://blog.cloudflare.com/ai-labyrinth/
Comments
missandry351@lemmings.world 1 year ago
This is getting ridiculous. Can someone please ban AI? Or at least regulate it somehow?
petaqui@lemmings.world 1 year ago
As with everything, it has good things and bad things. We need to be careful and use it properly, and the same applies to those creating this technology.
gap_betweenus@lemmy.world 1 year ago
Once a technology or even an idea is there, you can’t really make it go away - AI is here to stay. Generative LLMs are just a small part.
Slaxis@discuss.tchncs.de 1 year ago
The problem is, how? I can set it up on my own computer using open source models and some of my own code. It’s really rough to regulate that.
x0x7@lemmy.world 1 year ago
Joke’s on them. I’m going to use AI to estimate the value of content, and now I’ll get the kind of content I want, though fake, that they will have to generate.
VeloRama@feddit.org 1 year ago
Should have called it “Black ICE”.
supersquirrel@sopuli.xyz 1 year ago
Definitely falls under the category of a Trap ICE card.
Greyfoxsolid@lemmy.world 1 year ago
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
ArchRecord@lemm.ee 1 year ago
Here’s the key distinction:
This only makes AI models unreliable if they ignore “don’t scrape my site” requests. If they respect the requests of the sites whose data they’re profiting from, then there’s no issue.
People want AI models to not be unreliable, but they also want them to operate with integrity in the first place, and not profit from the work of people who explicitly opted their work out of training.
A_Random_Idiot@lemmy.world 1 year ago
I’m a person.
I don’t want AI, period.
We can’t even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet Judgement Day.
DasSkelett@discuss.tchncs.de 1 year ago
This will only make models of bad actors who don’t follow the rules worse quality. You want to sell a good quality AI model trained on real content instead of other misleading AI output? Just follow the rules ;)
Doesn’t sound too bad to me.
cupcakezealot@lemmy.blahaj.zone 1 year ago
i mean this is just designed to thwart ai bots that refuse to follow robots.txt rules of people who specifically blocked them.
shads@lemy.lol 1 year ago
I find this amusing. Had a conversation with an older relative who asked about AI because I am “the computer guy” he knows. Explained basically how I understand LLMs to operate: that they are pattern matching to guess what the next token should be based on statistical probability. Explained that they sometimes hallucinate or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding, simply respinning fragments to try to generate a response that pleases the asker.
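A toy sketch of that “guess the next token from statistics” idea (real LLMs use neural networks over subword tokens, not word counts; the mini-corpus here is made up):

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny made-up corpus, then
# pick the most frequent continuation -- pure pattern matching.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def most_likely_next(token: str) -> str:
    # Frequency lookup only: no understanding, just statistics.
    return follows[token].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" (seen twice after "the"; "mat" once)
```

No comprehension anywhere in there, yet it still produces plausible-looking continuations, which is the point.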
He observed, “oh we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That’s good, religions that have become untethered from day to day practical life have never caused problems for anyone.”
Which I found scarily insightful.
A_Random_Idiot@lemmy.world 1 year ago
Oh good.
now I can add digital jihad by hallucinating AI to the list of my existential terrors.
Thank your relative for me.
umbraroze@lemmy.world 1 year ago
I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”
Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: why would anyone index those?
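For illustration, a hypothetical robots.txt in that spirit (the bot name and paths are examples, not any real site’s actual file):

```
# Block one specific AI crawler entirely
User-agent: GPTBot
Disallow: /

# Everyone else: skip the technical pages, index the articles
User-agent: *
Disallow: /w/
Allow: /wiki/
```

The whole thread is about crawlers that simply never read this file.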
EddoWagt@feddit.nl 1 year ago
They want everything. Does it exist, but it’s not in their dataset? Then they want it.
They want their ai to answer any question you could possibly ask it. Filtering out what is and isn’t useful doesn’t achieve that
T156@lemmy.world 1 year ago
Because it takes work to obey the rules, and you get less data for it. A theoretical competitor could ignore those rules, get more data, and gain some vague advantage from it.
I’d not be surprised if the crawlers they used were bare-basic utilities set up to just grab everything without worrying about rules and the like.
phoenixz@lemmy.ca 1 year ago
Because you are coming from the perspective of a reasonable person
These people are billionaires who expect to get everything for free. Rules are for the plebs, just take it already
pup_atlas@pawb.social 1 year ago
That’s what they are saying, though. These shouldn’t be thought of as “rules”; they are suggestions, near universally designed to point you to the most relevant content. Ignoring them isn’t “stealing something not meant to be captured”, it’s wasting the time and resources of your own infra on something very likely to be useless to you.
Randomgal@lemmy.ca 1 year ago
I’m glad we’re burning the forests even faster in the name of identity politics.
terrifyingtuba@lemmy.world 1 year ago
Do you identify as retarded?
raphaelarias@lemm.ee 1 year ago
What has this anything to do with identity politics?
xcutie@linux.community 1 year ago
Don’t feed the troll
surph_ninja@lemmy.world 1 year ago
I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.
biofaust@lemmy.world 1 year ago
I guess this is what the first iteration of the Blackwall looks like.
owl@infosec.pub 1 year ago
Gotta say “AI Labyrinth” sounds almost as cool.
gmtom@lemmy.world 1 year ago
“I used the AI to destroy the AI”
Fluke@lemm.ee 1 year ago
And consumed the power output of a medium country to do it.
Yeah, great job! 👍
LeninOnAPrayer@lemm.ee 1 year ago
We truly are getting dumber as a species. We’re facing climate change but running some of the most power-hungry processors in the world to spit out cooking recipes and homework answers for millions of people.
cantstopthesignal@sh.itjust.works 1 year ago
We had to kill the internet, to save the internet.
Asfalttikyntaja@sopuli.xyz 1 year ago
We have to kill the Internet, to save humanity.
digdilem@lemmy.ml 1 year ago
Surprised at the level of negativity here. Having had my sites repeatedly DDOSed offline by Claudebot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures to help.
dan@upvote.au 1 year ago
“thousands of times a second”
Modify your Nginx (or whatever web server you use) config to rate limit requests to dynamic pages, and cache them. For Nginx, you’d use either fastcgi_cache or proxy_cache depending on how the site is configured. Even if the pages change a lot, a cache with a short TTL (say 1 minute) can still help reduce load quite a bit while not letting them get too outdated.
Static content (and cached content) shouldn’t cause issues even if requested thousands of times per second. Following best practices like pre-compressing content using gzip, Brotli, and zstd helps a lot, too :)
Of course, this advice is just for “unintentional” DDoS attacks, not intentionally malicious ones. Those are often much larger and need different protection - often some protection on the network or load balancer before it even hits the server.
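A hypothetical sketch of that rate-limit-plus-short-TTL-cache setup (the upstream address, cache path, and zone names are made up for illustration):

```nginx
# Per-IP request throttling and a short-lived page cache.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
proxy_cache_path /var/cache/nginx keys_zone=pages:10m inactive=1m;

server {
    listen 80;

    location / {
        # Throttle bursts from any single client IP.
        limit_req zone=perip burst=20 nodelay;

        # Cache dynamic pages for 1 minute: content stays reasonably
        # fresh, but repeated crawler hits land on the cache instead
        # of the backend.
        proxy_cache pages;
        proxy_cache_valid 200 1m;

        proxy_pass http://127.0.0.1:8080;
    }
}
```

With a setup like this, even a scraper hammering the same URLs mostly hits the cache rather than regenerating the page each time.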
AWittyUsername@lemmy.world 1 year ago
I think the negativity is around the unfortunate fact that solutions like this shouldn’t be necessary.
drmoose@lemmy.world 1 year ago
Considering how many false positives Cloudflare serves, I see nothing but misery coming from this.
Xella@lemmy.world 1 year ago
Lol I work in healthcare and Cloudflare regularly blocks incoming electronic orders because the clinical notes “resemble” SQL injection. Nurses type all sorts of random stuff in their notes so there’s no managing that. Drives me insane!
Dave@lemmy.nz 1 year ago
In terms of Lemmy instances, if your instance is behind cloudflare and you turn on AI protection, federation breaks. So their tools are not very helpful for fighting the AI scraping.
Appoxo@lemmy.dbzer0.com 1 year ago
Can’t you configure exceptions for behaviours?
kandoh@reddthat.com 1 year ago
Burning 29 acres of rainforest a day to do nothing
zovits@lemmy.world 1 year ago
It certainly sounds like they generate the fake content once and serve it from cache every time: “Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval.”
kandoh@reddthat.com 1 year ago
Yeah, but you also add in the energy consumption of the data scrapers.
cantstopthesignal@sh.itjust.works 1 year ago
Bitcoin?
weremacaque@lemmy.world 1 year ago
You have thirteen hours in which to solve this labyrinth before your baby AI becomes one of us, forever.
cantstopthesignal@sh.itjust.works 1 year ago
While AI David Bowie sings you rock lullabies.
DomesticForeigner@lemm.ee 1 year ago
so sick and tired of ai this, ai that.
TorJansen@sh.itjust.works 1 year ago
And soon, the already AI-flooded net will be filled with so much nonsense that it becomes impossible for anyone to get some real work done. Sigh.
cantstopthesignal@sh.itjust.works 1 year ago
Some of us are only here to crank hog.
gac11@lemmy.world 1 year ago
AROOO!
cultsuperstar@lemmy.world 1 year ago
I introduce to you, the Trace Buster Buster!
If you’ve never seen the movie The Big Hit, it’s great.
Onsotumenh@discuss.tchncs.de 1 year ago
Why do I have the feeling that I will end up in that nightmare with my privacy-focused and ad-free browser setup? I already end up in captcha hell too often because of it.
drmoose@lemmy.world 1 year ago
Of course it will. Cloudflare has already ruined the web and it’s just another step further.
quack@lemmy.zip 1 year ago
Generating content with AI to throw off crawlers. I dread to think of the resources we’re wasting on this utter insanity now.
4am@lemm.ee 1 year ago
Imagine how much power is wasted on this unfortunate necessity.
Now imagine how much power will be wasted circumventing it.
Fucking clown world we live in
zovits@lemmy.world 1 year ago
From the article it seems like they don’t generate a new labyrinth every single time: “Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval.”
tfm@europe.pub 1 year ago
Demdaru@lemmy.world 1 year ago
On one hand, yes. On the other… imagine the frustration of the management at companies making and selling AI services. This is such a sweet thing to imagine.
Melvin_Ferd@lemmy.world 1 year ago
I just want to keep using uncensored AI that answers my questions. Why is this a good thing?
_cnt0@sh.itjust.works 1 year ago
This is so fucking retarded on so many levels. It’s time to regulate the shit out of “AI”.
baltakatei@sopuli.xyz 1 year ago
Relevant excerpt from part 11 of Anathem (2008) by Neal Stephenson:
Artificial Inanity
Note: Reticulum = Internet, syndev = computer, crap ~= spam

“Early in the Reticulum—thousands of years ago—it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information,” Sammann said.

“Crap, you once called it,” I reminded him.

“Yes—a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. They created syndevs whose sole purpose was to spew crap into the Reticulum. But it had to be good crap.”

“What is good crap?” Arsibalt asked in a politely incredulous tone.

“Well, bad crap would be an unformatted document consisting of random letters. Good crap would be a beautifully typeset, well-written document that contained a hundred correct, verifiable sentences and one that was subtly false. It’s a lot harder to generate good crap. At first they had to hire humans to churn it out. They mostly did it by taking legitimate documents and inserting errors—swapping one name for another, say. But it didn’t really take off until the military got interested.”

“As a tactic for planting misinformation in the enemy’s reticules, you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid–First Millennium A.R.”

“Exactly!” Sammann said. “Artificial Inanity systems of enormous sophistication and power were built for exactly the purpose Fraa Osa has mentioned. In no time at all, the praxis leaked to the commercial sector and spread to the Rampant Orphan Botnet Ecologies. Never mind. The point is that there was a sort of Dark Age on the Reticulum that lasted until my Ita forerunners were able to bring matters in hand.”

“So, are Artificial Inanity systems still active in the Rampant Orphan Botnet Ecologies?” asked Arsibalt, utterly fascinated.

“The ROBE evolved into something totally different early in the Second Millennium,” Sammann said dismissively.

“What did it evolve into?” Jesry asked.

“No one is sure,” Sammann said. “We only get hints when it finds ways to physically instantiate itself, which, fortunately, does not happen that often. But we digress. The functionality of Artificial Inanity still exists. You might say that those Ita who brought the Ret out of the Dark Age could only defeat it by co-opting it. So, to make a long story short, for every legitimate document floating around on the Reticulum, there are hundreds or thousands of bogus versions—bogons, as we call them.”

“The only way to preserve the integrity of the defenses is to subject them to unceasing assault,” Osa said, and any idiot could guess he was quoting some old Vale aphorism.

“Yes,” Sammann said, “and it works so well that, most of the time, the users of the Reticulum don’t know it’s there. Just as you are not aware of the millions of germs trying and failing to attack your body every moment of every day. However, the recent events, and the stresses posed by the Antiswarm, appear to have introduced the low-level bug that I spoke of.”

“So the practical consequence for us,” Lio said, “is that—?”

“Our cells on the ground may be having difficulty distinguishing between legitimate messages and bogons. And some of the messages that flash up on our screens may be bogons as well.”
codexarcanum@lemmy.dbzer0.com 1 year ago
One of my favorite books! Great world building and quite thought provoking!
truxnell@infosec.pub 1 year ago
Read Anathem last year, really enjoyed it!
_cryptagion@lemmy.dbzer0.com 1 year ago
Now this is an AI trap worth using. Don’t waste your money and resources hosting something yourself; let Cloudflare do it for you if you don’t want AI scraping your shit.
oppy1984@lemm.ee 1 year ago
Spiderman pointing at Spiderman meme.
fmstrat@lemmy.nowsci.com 1 year ago
And this, ladies and gentleman, is how you actually make profits on AI.
oldfart@lemm.ee 1 year ago
So the web is a corporate war zone now and you can choose feudal protection or being attacked from all sides. What a time to be alive.
theparadox@lemmy.world 1 year ago
There is also the corpo verified-ID route. In order to avoid the onslaught of AI bots and all that comes with them, you’ll need to sacrifice freedom, anonymity, and privacy like a good little peasant to prove you aren’t a bot… and so will everyone else. You’ll likely be forced to deal with whatever AI bots are forced upon you while within the walls, but better the enemy you know, I guess?
finitebanjo@lemmy.world 1 year ago
Cloudflare kind of real for this. I love it.
It makes perfect sense for them as a business, infinite automated traffic equals infinite costs and lower server stability, but at the same time how often do giant tech companies do things that make sense these days?
nyan@lemmy.cafe 1 year ago
Will it actually allow ordinary users to browse normally, though? Their other stuff breaks in minority browsers. Have they tested this well enough so that it won’t? (I’d bet not.)
NotProLemmy@lemmy.ml 1 year ago
They used AI to destroy AI