Cloudflare announces AI Labyrinth, which uses AI-generated content to confuse and waste the resources of AI Crawlers and bots that ignore “no crawl” directives.
Submitted 2 months ago by Tea@programming.dev to technology@lemmy.world
https://blog.cloudflare.com/ai-labyrinth/
Comments
Greyfoxsolid@lemmy.world 2 months ago
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
shads@lemy.lol 2 months ago
I find this amusing. I had a conversation with an older relative who asked about AI because I am “the computer guy” he knows. I explained, as best I understand it, how LLMs operate: they are pattern matching to guess what the next token should be based on statistical probability. I explained that they sometimes hallucinate, or go off on wild tangents, because of this, and that they can be really good at aping and regurgitating things, but there is no understanding, just respinning fragments to try to generate a response that pleases the asker.
He observed, “oh we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That’s good, religions that have become untethered from day to day practical life have never caused problems for anyone.”
Which I found scarily insightful.
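A toy sketch of the “guess the next token from a probability distribution” idea described above. The vocabulary and probabilities here are invented purely for illustration; a real LLM computes these distributions with a neural network over a huge vocabulary, not a lookup table.

```python
import random

# Toy "language model": for a given context word, a made-up distribution
# over possible next tokens. The numbers are purely illustrative.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.4, "dog": 0.35, "weather": 0.25},
    "cat": {"sat": 0.5, "ran": 0.3, "slept": 0.2},
}

def guess_next_token(context: str) -> str:
    """Sample the next token from the probability distribution for this context."""
    dist = NEXT_TOKEN_PROBS.get(context, {"<unknown>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(guess_next_token("the"))  # fluent-looking output, no understanding involved
```

Hallucination falls out of this picture naturally: the sampler happily produces a plausible-sounding token even when it has nothing real to say about the context.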
A_Random_Idiot@lemmy.world 2 months ago
Oh good.
now I can add digital jihad by hallucinating AI to the list of my existential terrors.
Thank your relative for me.
ArchRecord@lemm.ee 2 months ago
Here’s the key distinction:
This only makes AI models unreliable if they ignore “don’t scrape my site” requests. If they respect the requests of the sites whose data they’re profiting from, there’s no issue.
People want AI models to not be unreliable, but they also want them to operate with integrity in the first place, and not profit from the work of people who have explicitly opted their work out of training.
A_Random_Idiot@lemmy.world 2 months ago
I’m a person.
I don’t want AI, period.
We can’t even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet judgement day.
DasSkelett@discuss.tchncs.de 2 months ago
This will only degrade the quality of models from bad actors who don’t follow the rules. You want to sell a good-quality AI model trained on real content instead of other models’ misleading output? Just follow the rules ;)
Doesn’t sound too bad to me.
cupcakezealot@lemmy.blahaj.zone 2 months ago
i mean this is just designed to thwart ai bots that refuse to follow robots.txt rules of people who specifically blocked them.
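For reference, a minimal sketch of what “following robots.txt” looks like from the crawler’s side, using Python’s standard urllib.robotparser; the site and user-agent string below are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical site and crawler name, for illustration only.
SITE = "https://example.com"
USER_AGENT = "ExampleAIBot"

rp = RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

url = f"{SITE}/some-article"
if rp.can_fetch(USER_AGENT, url):
    print("robots.txt allows crawling", url)
else:
    print("robots.txt disallows it; a well-behaved crawler stops here")
```

Bots that skip this check, or ignore its answer, are the ones AI Labyrinth is built to lead into generated decoy pages.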
cultsuperstar@lemmy.world 2 months ago
I introduce to you, the Trace Buster Buster!
If you’ve never seen the movie The Big Hit, it’s great.
_cnt0@sh.itjust.works 2 months ago
This is so fucking retarded on so many levels. It’s time to regulate the shit out of “AI”.
nyan@lemmy.cafe 2 months ago
Will it actually allow ordinary users to browse normally, though? Their other stuff breaks in minority browsers. Have they tested this well enough so that it won’t? (I’d bet not.)
fubarx@lemmy.world 2 months ago
So this showed up last week: github.com/raminf/RoboNope-nginx
Similar vibe, minus the AI.
_cryptagion@lemmy.dbzer0.com 2 months ago
Now this is an AI trap worth using. Don’t waste your money and resources hosting something yourself; let Cloudflare do it for you if you don’t want AI scraping your shit.
NotProLemmy@lemmy.ml 2 months ago
They used AI to destroy AI
DomesticForeigner@lemm.ee 2 months ago
so sick and tired of ai this, ai that.
oppy1984@lemm.ee 2 months ago
Spiderman pointing at Spiderman meme.
fmstrat@lemmy.nowsci.com 2 months ago
And this, ladies and gentlemen, is how you actually make profits on AI.
x0x7@lemmy.world 2 months ago
Joke’s on them. I’m going to use AI to estimate the value of content, and now I’ll get the kind of content I want, though fake, that they will have to generate.