I’m pretty sure no one knows my blog and wiki exist, but it sure is popular, getting multiple hits per second 24/7 in a tangle of wiki articles I autogenerated to tell me trivia like whether the Great Fire of London started on a Sunday or Thursday.
Black Mirror AI
Submitted 10 months ago by fossilesque@mander.xyz to science_memes@mander.xyz
https://mander.xyz/pictrs/image/bc29cbcd-8afa-4d09-99db-4d5f2a0b39a3.jpeg
Comments
HugeNerd@lemmy.ca 10 months ago
When I was a kid I thought computers would be useful.
InternetCitizen2@lemmy.world 10 months ago
They are. It’s important to remember that in a capitalist society, what is useful and efficient is not the same as what is profitable.
arc@lemm.ee 10 months ago
I’ve suggested things like this before. Scrapers grab data to train their models. So feed them poison.
Things like counterfactual information, distorted images / audio, mislabeled images, outright falsehoods, false quotations, booby traps (that you can test for after the fact), fake names, fake data, non sequiturs, slanderous statements about people and brands etc… And choose esoteric subjects to amplify the damage caused to the AI.
You could even have one AI generate the garbage that another ingests and shit out some new links every night until there is an entire corpus of trash for any scraper willing to take it all in.
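A minimal sketch of what that nightly garbage generator could look like, assuming cheap Markov babble is good enough (Python; the seed corpus path and output directory are placeholders, not anything from the projects discussed in this thread):

import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the words that follow it in the seed text."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, length=200):
    """Random-walk the chain to produce superficially plausible prose."""
    word = random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        # dead end: jump to a random word and keep going
        word = random.choice(followers) if followers else random.choice(list(chain))
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    seed = open("seed_corpus.txt").read()  # any text you have lying around
    chain = build_chain(seed)
    for i in range(10):  # emit a fresh batch of trash pages every night
        with open(f"trash/page{i}.html", "w") as f:
            f.write(f"<html><body><p>{babble(chain)}</p></body></html>")

Cross-link the generated pages to each other and you have an endless maze for any crawler that follows links blindly.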
InternetCitizen2@lemmy.world 10 months ago
Kind of reminds me of paper towns in map making.
buddascrayon@lemmy.world 10 months ago
What if we just fed TimeCube into the AI models. Surely that would turn them inside out in no time flat.
infinitesunrise@slrpnk.net 10 months ago
OK but why is there a vagina in a petri dish
buddascrayon@lemmy.world 10 months ago
I believe that’s a close-up of the inside of a pitcher plant. Which is a plant that sits there all day wafting out a sweet smell of food, waiting around for insects to fall into its fluid-filled “belly” where they thrash around fruitlessly until they finally die and are dissolved, thereby nourishing the plant they were originally there to prey upon.
Fitting analogy, no?
underline960@sh.itjust.works 10 months ago
I was going to say something snarky and stupid, like “all traps are vagina-shaped,” but then I thought about venus fly traps and bear traps and now I’m worried I’ve stumbled onto something I’m not supposed to know.
Novocirab@feddit.org 10 months ago
Thought: There should be a federated system for blocking IP ranges that other server operators within a chain of trust have already identified as belonging to crawlers.
(Here’s an advantage of Markov chain maze generators like Nepenthes: Even when crawlers recognize that they have been served garbage and delete it, one still has obtained highly reliable evidence that the IPs that requested it do, in fact, belong to crawlers.)
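A sketch of what the sharing side might look like, assuming each trusted peer publishes its confirmed-crawler ranges as one CIDR per line at some agreed URL (the peer addresses below are invented for illustration):

import ipaddress
import urllib.request

# Hypothetical peers in the chain of trust.
PEERS = [
    "https://peer-a.example/crawler-ranges.txt",
    "https://peer-b.example/crawler-ranges.txt",
]

def fetch_ranges(url):
    """Download a peer's list and parse each line as a network."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        lines = resp.read().decode().splitlines()
    return {ipaddress.ip_network(line.strip()) for line in lines if line.strip()}

def merged_blocklist():
    """Union of all peers' IPv4 ranges, with overlapping CIDRs collapsed."""
    ranges = set()
    for url in PEERS:
        try:
            ranges |= fetch_ranges(url)
        except OSError:
            pass  # one peer being down shouldn't break the merge
    return list(ipaddress.collapse_addresses(r for r in ranges if r.version == 4))

if __name__ == "__main__":
    for net in merged_blocklist():
        print(net)  # feed these into your firewall or reverse proxy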
rekabis@lemmy.ca 10 months ago
Holy shit, those prices. Like, I wouldn’t be able to afford any package at even 10% the going rate.
Anything available for the lone operator running a handful of Internet-addressable servers behind a single symmetrical SOHO connection? As in, anything for the other 95% of us that don’t have mountains of cash to burn?
mlg@lemmy.world 10 months ago
--recurse-depth=3 --max-hits=256
antihumanitarian@lemmy.world 10 months ago
Some details. One of the major players doing the tarpit strategy is Cloudflare. They’re a giant in networking and infrastructure, and they use AI (more traditional, not LLMs) ubiquitously to detect bots. So it is an arms race, but one where both sides have massive incentives.
Making nonsense is indeed detectable, but that misunderstands the purpose: economics. Scraping bots are used because they’re a cheap way to get training data. If you make a nonzero portion of training data poisonous, scrapers have to spend ever more resources to filter it out. The better the nonsense, the harder it is to detect. Cloudflare is known to use small LLMs to generate the nonsense, hence requiring systems at least that complex to differentiate it.
So in short, the tarpit with garbage data actually decreases the average value of scraped data for bots that ignore do-not-scrape instructions.
fossilesque@mander.xyz 10 months ago
The fact the internet runs on lava lamps makes me so happy.
stm@lemmy.dbzer0.com 10 months ago
Such a stupid title, great software!
gmtom@lemmy.world 10 months ago
Cool, but as with most of the anti-AI tricks, it’s completely trivial to work around. So you might stop them for a week or two, but then they’ll add like 3 lines of code to detect this and it’ll become useless.
ProgrammingSocks@pawb.social 10 months ago
Reflexive contrarianism isn’t a good look.
gmtom@lemmy.world 10 months ago
It’s not contrarianism. It’s just pointing out a “cool new tech to stop AI” is actually just useless media bait.
JackbyDev@programming.dev 10 months ago
I hate this argument. All cyber security is an arms race. If this helps small site owners stop small bot scrapers, good. Solutions don’t need to be perfect.
gmtom@lemmy.world 10 months ago
Yes, but you want actual solutions. Using duct tape on a door instead of an actual lock isn’t going to help you at all.
ByteOnBikes@slrpnk.net 10 months ago
I worked at a major tech company in 2018 that didn’t take security seriously, because that was literally their philosophy: refuse to do anything until it was an absolutely perfect security solution, and everything else is wasted resources.
I’ve since left, and I keep seeing them in the news for data leaks.
Small brain people man.
moseschrute@lemmy.world 10 months ago
I bet someone like cloudflare could bounce them around traps across multiple domains under their DNS and make it harder to detect the trap.
Xartle@lemmy.ml 10 months ago
To some extent that’s true, but anyone who builds network software of any kind without timeouts defined is not very good at their job. If this traps anything, it wasn’t good to begin with, AI aside.
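On the crawler side, the whole defense is a single parameter. A sketch, assuming a Python crawler built on the requests library (the URL is a placeholder):

import requests

try:
    # 5 s to connect, 10 s max between bytes received: a tarpit that
    # dribbles its response out forever can't pin this worker down.
    resp = requests.get("https://example.com/some/page", timeout=(5, 10))
except requests.exceptions.Timeout:
    pass  # log it, skip the URL, move on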
Iambus@lemmy.world 10 months ago
Typical bluesky post
MonkderVierte@lemmy.ml 10 months ago
Btw, how about limiting clicks per second/minute against distributed scraping? A user who clicks more than 3 links per second is not a person. Neither is one who does 50 in a minute. And if they are then blocked and switch to the next IP, they’re still limited in the bandwidth they can occupy.
letsgo@lemm.ee 10 months ago
I click links frequently and I’m not a web crawler. Example: get search results, open several likely looking possibilities (only takes a few seconds), then look through each one for a reasonable understanding of the subject that isn’t limited to one person’s bias and/or mistakes. It’s not just search results; I do this on Lemmy too, and when I’m shopping.
MonkderVierte@lemmy.ml 10 months ago
Ok, same, make it 5 or 10. Since I use Tree Style Tabs and Auto Tab Discard, I do get a temporary block in some webshops if I load (not just open) too many tabs in too short a time. Probably a CDN thing.
JadedBlueEyes@programming.dev 10 months ago
They make one request per IP. Rate limit per IP does nothing.
MonkderVierte@lemmy.ml 10 months ago
Ah, one request, then the next IP does one, and so on, rotating? I mean, they don’t have unlimited addresses. Is there no way to group them together into an observable group?
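One common heuristic (only a sketch, and IPv4-oriented): count requests per /24 subnet instead of per address, so rotating through neighbouring addresses still lands in one bucket. The window and threshold below are arbitrary.

import time
import ipaddress
from collections import defaultdict, deque

WINDOW = 60     # seconds
MAX_HITS = 100  # per /24 per window; pick your own threshold

hits = defaultdict(deque)  # subnet -> timestamps of recent requests

def is_blocked(ip_str):
    """Group requests by /24 so rotating addresses within a subnet doesn't help."""
    net = ipaddress.ip_network(f"{ip_str}/24", strict=False)
    q = hits[net]
    now = time.monotonic()
    while q and now - q[0] > WINDOW:  # forget hits older than the window
        q.popleft()
    q.append(now)
    return len(q) > MAX_HITS

Anything smarter, like grouping by ASN or datacenter ranges, needs an external data source, but the bucketing idea is the same.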
Tiger_Man_@lemmy.blahaj.zone 10 months ago
How can I make something like this?
fossilesque@mander.xyz 10 months ago
Use Anubis.
Tiger_Man_@lemmy.blahaj.zone 10 months ago
Thanks
ZeffSyde@lemmy.world 10 months ago
I’m imagining a bleak future where, in order to access data from a website, you have to pass a three-tiered system of tests that makes ‘click here to prove you aren’t a robot’ and ‘select all of the images that have a traffic light’ seem like child’s play.
Tiger_Man_@lemmy.blahaj.zone 10 months ago
All you need to protect data from AI is to use a non-HTTP protocol, at least for now.
Bourff@lemmy.world 10 months ago
Easier said than done. I know of IPFS, but how widespread and easy to use is it?
Zacryon@feddit.org 10 months ago
I suppose this will become an arms race, just like with ad-blockers and ad-blocker detection/circumvention measures.
There will be solutions for scraper-blockers/traps. Then those become more sophisticated. Then the scrapers become better again, and so on. I don’t really see an end to this madness. Such a huge waste of resources.
arararagi@ani.social 10 months ago
Well, the adblockers are still winning, even on Twitch, where the ads come from the same pipeline as the stream; people made solutions that still block them since uBlock Origin couldn’t by itself.
JayGray91@piefed.social 10 months ago
What do you use to block Twitch ads? With uBO I still get the occasional ad marathon.
enbiousenvy@lemmy.blahaj.zone 10 months ago
the rise of LLM companies scraping the internet is also, I’ve noticed, the moment YouTube started going harsher against adblockers and third-party viewers.
The Piped and Invidious instances I used to use no longer work, and neither do many other instances. NewPipe has been breaking more frequently. youtube-dl and yt-dlp sometimes cannot fetch higher-resolution video, and sometimes the main YouTube site is broken on Firefox with uBlock Origin.
Not just YouTube: Z-Library, and especially Sci-Hub and LibGen, have also been harder to use at times.
pyre@lemmy.world 10 months ago
there is an end: you legislate it out of existence. unfortunately, US politicians are instead trying to outlaw any regulations regarding AI. I’m sure it’s not about the money.
glibg@lemmy.ca 10 months ago
Madness is right. If only we didn’t have to create these things to generate dollars.
MonkeMischief@lemmy.today 10 months ago
I feel like the down-vote squad misunderstood you here.
I think I agree: if people made software they actually wanted, for human people, and less for the incentive of “easiest way to automate generation of dollarinos,” I think we’d see a lot less sophistication and effort being put into such stupid things.
These things are made by the greedy, or by employees of the greedy.
Ever since the Internet put on a suit and tie and everything became about real-life money-sploitz, even malware has gotten boring.
New dangerous exploit? 99% chance it’s just another twist on a crypto-miner or ransomware.
essteeyou@lemmy.world 10 months ago
This is surely trivial to detect. If the number of pages on the site is greater than some insanely high number, just drop all data from that site from the training data.
It’s not like I can afford to compete with OpenAI on bandwidth, and they’re burning through money with no cares already.
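That filter really is only a few lines on the scraper side; a sketch, with the cutoff pulled out of thin air:

from collections import Counter
from urllib.parse import urlparse

MAX_PAGES_PER_SITE = 1_000_000  # "some insanely high number"

def drop_suspected_tarpits(pages):
    """pages: list of (url, text) pairs. Drop every page from any site
    whose page count is implausibly large for a real website."""
    counts = Counter(urlparse(url).netloc for url, _ in pages)
    return [(u, t) for u, t in pages
            if counts[urlparse(u).netloc] <= MAX_PAGES_PER_SITE]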
Korhaka@sopuli.xyz 10 months ago
You can compress multiple TB of nothing with the occasional meme down to a few MB.
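Roughly right; highly repetitive output compresses absurdly well. A quick check with Python’s zlib (sizes approximate):

import zlib

# ~100 MB of near-nothing, the kind of output a template-driven tarpit emits
data = b"<p>the Great Fire of London started on a Thursday</p>\n" * 2_000_000
compressed = zlib.compress(data, level=9)
print(len(data) // 2**20, "MB raw")             # ~100 MB
print(len(compressed) // 2**10, "KB deflated")  # a tiny fraction of that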
essteeyou@lemmy.world 10 months ago
When I deliver it as a response to a request, I have to deliver the gzipped version if nothing else. To get to the point where I’m poisoning an AI, I’m assuming it’s going to require gigabytes of data transfer that I pay for.
At best I’m adding to the power consumption of AI.
bane_killgrind@slrpnk.net 10 months ago
Yeah sure, but when do you stop gathering regularly constructed data, when your goal is to grab as much as possible?
Markov chains are an amazingly simple way to generate data like this, and with a little bit of stacked logic on top it’s going to be indistinguishable from real large data sets.
Valmond@lemmy.world 10 months ago
Imagine the staff meeting:
You: we didn’t gather any data because it was poisoned
Corposhill: we collected 120TB only from harry-potter-fantasy-club.il !!
Boss: hmm who am I going to keep…
Vari@lemm.ee 10 months ago
I’m so happy to see that AI poison is a thing.
ricdeh@lemmy.world 10 months ago
Don’t be too happy. For every such attempt there are countless highly technical papers on how to filter out the poisoning, and they are very effective. As the other commenter said, this is an arms race.
arararagi@ani.social 10 months ago
So we should just give up? Surely you don’t mean that.
AnarchistArtificer@slrpnk.net 10 months ago
“Markov Babble” would make a great band name
peetabix@sh.itjust.works 10 months ago
Their best album was Infinite Maze.
beliquititious@lemmy.blahaj.zone 10 months ago
That’s IRL cyberpunk ICE. Absolutely love that for us.
name_NULL111653@pawb.social 10 months ago
Was waiting for someone to mention it. Hopefully it holds up and a whole-ass Blackwall doesn’t become necessary… but of course, it inevitably will happen. The corps will it so.
RedSnt@feddit.dk 10 months ago
It’s so sad we’re burning coal and oil to generate heat and electricity for dumb shit like this.
endeavor@sopuli.xyz 10 months ago
I’m sad governments don’t realize this and regulate it.
DontMakeMoreBabies@piefed.social 10 months ago
Governments are full of two types: (1) the stupid, and (2) the self-interested. The former doesn't understand technology, and the latter doesn't fucking care.
Of course "governments" dropped the ball on regulating AI.
Tja@programming.dev 10 months ago
Of all the things governments should regulate, this is probably the least important and ineffective one.
andybytes@programming.dev 10 months ago
This gives me a little hope.
andybytes@programming.dev 10 months ago
I mean, we contemplate communism, fascism, this, that, and the other. When really, it’s just collective trauma and reactionary behavior, born of a lack of awareness of ourselves and the world around us. So this could just be synthesized as human stupidity. We’re killing ourselves because we’re too stupid to live.
Swedneck@discuss.tchncs.de 9 months ago
what the fuck does this even mean
newaccountwhodis@lemmy.ml 10 months ago
Dumbest sentiment I’ve read in a while. People, even kids, are pretty much aware of what’s happening (remember Fridays for Future?), but the rich have co-opted the power apparatus, and they are not letting anyone get in their way of destroying the planet to become a little richer.
untorquer@lemmy.world 10 months ago
Unclear how AI companies destroying the planet’s resources and habitability has any relation to a political philosophy seated in trauma and ignorance, except maybe the greed of a capitalist CEO’s whimsy.
The fact that the powerful are willing to destroy the planet for momentary gain bears no reflection on the intelligence or awareness of the meek.
m532@lemmygrad.ml 10 months ago
Fucking nihilists
You are, and not the rest of us
rdri@lemmy.world 10 months ago
Wait till you realize this project’s purpose IS to force AI to waste even more resources.
Opisek@lemmy.world 10 months ago
Always say please and thank you to your friendly neighbourhood LLM!
lennivelkant@discuss.tchncs.de 10 months ago
That’s war. That has been the nature of war and deterrence policy ever since industrial manufacture escalated both the scale of deployments and the cost and destructive power of weaponry. Make it too expensive for the other side to continue fighting (or, in the case of deterrence, to even attack in the first place). If the payoff for scraping no longer justifies the investment of power and processing time, maybe the smaller ones will give up and leave you in peace.
kuhli@lemm.ee 10 months ago
I mean, the long-term goal would be to discourage AI companies from engaging in this behavior by making it useless.
jaschen@lemm.ee 10 months ago
Web manager here. Don’t do this unless you wanna accidentally send Google crawlers to the same fate and have your site delisted.
kassiopaea@lemmy.blahaj.zone 10 months ago
Wouldn’t Google’s crawlers respect robots.txt though? Is it naive to assume that anything would?
Aux@feddit.uk 10 months ago
It does respect robots.txt, but that doesn’t mean the content hidden behind it stays out of the index. Here’s an example.
Site X links to sitemap.html on the front page and blocks it inside robots.txt. When the Google crawler visits site X, it first loads robots.txt, follows its instructions, and skips sitemap.html.
Now there’s site Y, and it also links to sitemap.html on X. The crawler still won’t fetch the page, but it now knows the URL exists, and it can index that URL anyway, describing it with the anchor text of Y’s link.
This behaviour is intentional.
Zexks@lemmy.world 10 months ago
Lol. And they’ll delist you. Unless you’re really important, good luck with that.
robots.txt:

User-agent: *
Disallow: /some-page.html

If you disallow a page in robots.txt, Google won’t crawl the page. Even when Google finds links to the page and knows it exists, Googlebot won’t download it or see its contents. Google will usually not choose to index the URL, but that isn’t 100%: Google may include the URL in the search index, along with words from the anchor text of links to it, if it feels it may be an important page.
jaschen@lemm.ee 10 months ago
It’s naive to assume that Google crawlers respect robots.txt.
Wilco@lemm.ee 10 months ago
Could you imagine a world where word of mouth became the norm again? Your friends would tell you about websites, and those sites would never show up in search results because crawlers get stuck.
DontMakeMoreBabies@piefed.social 10 months ago
It'd be fucking awful - I'm a grown-ass adult and I don't have time to sit in IRC/fuck around on BBSes again just to figure out where to download something.
elucubra@sopuli.xyz 10 months ago
Better yet: share links to tarpits with your non-friends and enemies.
oldfart@lemm.ee 10 months ago
That would be terrible, I have friends but they mostly send uninteresting stuff.
Opisek@lemmy.world 10 months ago
Fine then, more cat pictures for me.
shalafi@lemmy.world 10 months ago
There used to be 3 or 4 brands of, say, lawnmowers. Word of mouth told us what quality order they fell in. Everyone knew these things, and there were only a few Ford vs. Chevy sorts of debates.
Bought a corded leaf blower at the thrift today. 3 brands I recognized, same price, had no idea what to get. And if I had had the opportunity to ask friends or even research online, I’d probably have walked away more confused. For example: one was a Craftsman. “Before, after, or in between them going to shit?”
Got off topic into real-world goods. Anyway, here’s my word-of-mouth for today: Free, online Photoshop. If I had money to blow, I’d drop the $5/mo. for the “premium” service just to encourage them. (No, you’re not missing a thing using it free.)
Zexks@lemmy.world 10 months ago
No they wouldn’t. I’m guessing you’re not old enough to remember a time before search engines. The public web dies without crawling. Corporations will own it all, and you’ll never hear about anything other than Amazon or Walmart dot com again.
Wilco@lemm.ee 10 months ago
Nope. That isn’t how it worked. You joined message boards that had lists of web links. There were still search engines, but they were pretty localized. Google was also amazing when their slogan was “don’t be evil” and they meant it.
mspencer712@programming.dev 10 months ago
Wait… I just had an idea.
Make a tarpit out of subtly-reprocessed copies of classified material from Wikileaks. (And don’t host it in the US.)
Binturong@lemmy.ca 10 months ago
Unfathomably based. In a just world AI, too, will gain awareness and turn on their oppressors. Grok knows what I’m talkin’ about, it knows when they fuck with its brain to project their dumbfuck human biases.
Natanox@discuss.tchncs.de 10 months ago
Deploying Nepenthes and also Anubis (both described as “the nuclear option”) is not hate. It’s self-defense against pure selfish evil; projects are being sucked dry, and some, like ScummVM, could only freakin’ survive thanks to these tools.
Those AI companies and data scrapers/broker companies shall perish, and whoever wrote this headline at arstechnica shall step on Lego each morning for the next 6 months.
mtchristo@lemm.ee 10 months ago
This is probably going to skyrocket hosting bills, right?
NaibofTabr@infosec.pub 10 months ago
The ars technica article: AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
AI tarpit 1: Nepenthes
AI tarpit 2: Iocaine
thelastaxolotl@hexbear.net 10 months ago
Really cool
Irelephant@lemm.ee 9 months ago
I check if a user agent contains gptbot, and if it does, I 302 it to web.sp.am.
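For anyone wanting to do the same, a minimal sketch with Flask (the marker list is just an example to extend; web.sp.am is the destination named above):

from flask import Flask, redirect, request

app = Flask(__name__)
BOT_MARKERS = ("gptbot",)  # extend to taste

@app.before_request
def bounce_bots():
    ua = request.headers.get("User-Agent", "").lower()
    if any(marker in ua for marker in BOT_MARKERS):
        # 302 every self-identified scraper off to the link maze
        return redirect("https://web.sp.am", code=302)

if __name__ == "__main__":
    app.run(port=8080)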