Black Mirror AI

1587 likes

Submitted 1 week ago by fossilesque@mander.xyz to science_memes@mander.xyz

https://mander.xyz/pictrs/image/bc29cbcd-8afa-4d09-99db-4d5f2a0b39a3.jpeg

source

Comments

  • Catoblepas@lemmy.blahaj.zone ⁨1⁩ ⁨week⁩ ago

    Funny that they’re calling them AI haters when they’re specifically poisoning AI that ignores the do not enter sign. FAFO.

    source
    • caseyweederman@lemmy.ca ⁨6⁩ ⁨days⁩ ago

      First Albatross, First Out

      source
      • Naich@lemmings.world ⁨6⁩ ⁨days⁩ ago

        Fluffy Animal’s Fecal Orifice.

        source
    • Clanket@lemmy.world ⁨6⁩ ⁨days⁩ ago

      Fair As Fuck Ok?

      source
    • Stupidmanager@lemmy.world ⁨6⁩ ⁨days⁩ ago

      Sheesh people, it’s “fuck around and find out”. Probably more appropriate in the leopards eating face context but this works enough.

      source
      • tatterdemalion@programming.dev ⁨6⁩ ⁨days⁩ ago

        What are you talking about? FAFO obviously stands for “fill asshole full of”. Like FAFO dicks. Or FAFO pennies.

        source
      • Gismonda@lemmy.world ⁨6⁩ ⁨days⁩ ago

        I’m glad you’re here to tell us these things!

        source
  • passepartout@feddit.org ⁨1⁩ ⁨week⁩ ago

    AI is the “most aggressive” example of “technologies that are not done ‘for us’ but ‘to us.’”

    Well said.

    source
  • Natanox@discuss.tchncs.de ⁨6⁩ ⁨days⁩ ago

    Deploying Nepenthes and also Anubis (both described as “the nuclear option”) is not hate. It’s self-defense against pure selfish evil; projects are being sucked dry, and some like ScummVM could only freakin’ survive thanks to these tools.

    Those AI companies and data scrapers/broker companies shall perish, and whoever wrote this headline at arstechnica shall step on Lego each morning for the next 6 months.

    source
    • faythofdragons@slrpnk.net ⁨6⁩ ⁨days⁩ ago

      Feels good to be on an instance with Anubis

      source
    • pewgar_seemsimandroid@lemmy.blahaj.zone ⁨6⁩ ⁨days⁩ ago

      One of the United Nations websites deployed Anubis

      source
    • chonglibloodsport@lemmy.world ⁨6⁩ ⁨days⁩ ago

      Do you have a link to a story of what happened to ScummVM? I love that project and I’d be really upset if it was lost!

      source
      • Natanox@discuss.tchncs.de ⁨6⁩ ⁨days⁩ ago

        Here you go.

        source
        • -> View More Comments
    • Hexarei@beehaw.org ⁨6⁩ ⁨days⁩ ago

      Wait what? I am uninformed, can you elaborate on the ScummVM thing? Or link an article?

      source
      • gaael@lemm.ee ⁨6⁩ ⁨days⁩ ago

        From the Fabulous Systems (ScummVM’s sysadmin) blog post linked by Natanox:

        About three weeks ago, I started receiving monitoring notifications indicating an increased load on the MariaDB server.

        This went on for a couple of days without seriously impacting our server or accessibility–it was a tad slower than usual.

        And then the website went down.

        Now, it was time to find out what was going on. Hoping that it was just one single IP trying to annoy us, I opened the access log of the day

        there were many IPs–around 35.000, to be precise–from residential networks all over the world. At this scale, it makes no sense to even consider blocking individual IPs, subnets, or entire networks. Due to the open nature of the project, geo-blocking isn’t an option either.

        The main problem is time. The URLs accessed in the attack are the most expensive ones the wiki offers since they heavily depend on the database and are highly dynamic, requiring some processing time in PHP. This is the worst-case scenario since it throws the server into a death spiral.

        First, the database starts to lag or even refuse new connections. This, combined with the steadily increasing server load, leads to slower PHP execution.

        At this point, the website dies. Restarting the stack immediately solves the problem for a couple of minutes at best until the server starves again.

        Anubis is a program that checks incoming connections, processes them, and only forwards “good” connections to the web application. To do so, Anubis sits between the server or proxy responsible for accepting HTTP/HTTPS and the server that provides the application.

        Many bots disguise themselves as standard browsers to circumvent filtering based on the user agent. So, if something claims to be a browser, it should behave like one, right? To verify this, Anubis presents a proof-of-work challenge that the browser needs to solve. If the challenge passes, it forwards the incoming request to the web application protected by Anubis; otherwise, the request is denied.

        As a regular user, all you’ll notice is a loading screen when accessing the website. As an attacker with stupid bots, you’ll never get through. As an attacker with clever bots, you’ll end up exhausting your own resources. As an AI company trying to scrape the website, you’ll quickly notice that CPU time can be expensive if used on a large scale.

        I didn’t get a single notification afterward. The server load has never been lower. The attack itself is still ongoing at the time of writing this article. To me, Anubis is not only a blocker for AI scrapers. Anubis is a DDoS protection.

        source
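
The proof-of-work scheme described in the quoted post can be sketched in a few lines. This is a toy illustration, not Anubis’s actual challenge format: the client must find a counter whose SHA-256 hash starts with a given number of zero hex digits, while the server verifies with a single hash.

```python
import hashlib
import itertools
import secrets

def make_challenge(difficulty: int = 4) -> dict:
    """Server side: issue a random nonce plus a difficulty target."""
    return {"nonce": secrets.token_hex(16), "difficulty": difficulty}

def solve(challenge: dict) -> int:
    """Client side: burn CPU until the hash has enough leading zeros."""
    prefix = "0" * challenge["difficulty"]
    for counter in itertools.count():
        digest = hashlib.sha256(
            f"{challenge['nonce']}:{counter}".encode()
        ).hexdigest()
        if digest.startswith(prefix):
            return counter

def verify(challenge: dict, counter: int) -> bool:
    """Server side: checking costs one hash; solving cost many."""
    digest = hashlib.sha256(
        f"{challenge['nonce']}:{counter}".encode()
    ).hexdigest()
    return digest.startswith("0" * challenge["difficulty"])
```

Each extra hex digit of difficulty multiplies the expected solving work by 16 while verification stays a single hash, which is why the cost lands on the scraper rather than the site.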
    • Rubisco@slrpnk.net ⁨6⁩ ⁨days⁩ ago

      I love that one is named Nepenthes.

      source
  • RedSnt@feddit.dk ⁨6⁩ ⁨days⁩ ago

    It’s so sad we’re burning coal and oil to generate heat and electricity for dumb shit like this.

    source
    • rdri@lemmy.world ⁨6⁩ ⁨days⁩ ago

      Wait till you realize this project’s purpose IS to force AI to waste even more resources.

      source
      • kuhli@lemm.ee ⁨6⁩ ⁨days⁩ ago

        I mean, the long term goal would be to discourage ai companies from engaging in this behavior by making it useless

        source
        • -> View More Comments
      • lennivelkant@discuss.tchncs.de ⁨6⁩ ⁨days⁩ ago

        That’s war. That has been the nature of war and deterrence policy ever since industrial manufacture has escalated both the scale of deployments and the cost and destructive power of weaponry. Make it too expensive for the other side to continue fighting (or, in the case of deterrence, to even attack in the first place). If the payoff for scraping no longer justifies the investment of power and processing time, maybe the smaller ones will give up and leave you in peace.

        source
      • Opisek@lemmy.world ⁨6⁩ ⁨days⁩ ago

        Always say please and thank you to your friendly neighbourhood LLM!

        source
    • endeavor@sopuli.xyz ⁨6⁩ ⁨days⁩ ago

      I’m sad governments don’t realize this and regulate it.

      source
      • DontMakeMoreBabies@piefed.social ⁨6⁩ ⁨days⁩ ago

        Governments are full of two types: (1) the stupid, and (2) the self-interested. The former doesn't understand technology, and the latter doesn't fucking care.

        Of course "governments" dropped the ball on regulating AI.

        source
      • Tja@programming.dev ⁨6⁩ ⁨days⁩ ago

        Of all the things governments should regulate, this is probably the least important and ineffective one.

        source
        • -> View More Comments
    • andybytes@programming.dev ⁨6⁩ ⁨days⁩ ago

      This gives me a little hope.

      source
    • andybytes@programming.dev ⁨6⁩ ⁨days⁩ ago

      I mean, we contemplate communism, fascism, this, that, and another. When really, it’s just collective trauma and reactionary behavior, because of the lack of self-awareness and in the world around us. So this could just be synthesized as human stupidity. We’re killing ourselves because we’re too stupid to live.

      source
      • newaccountwhodis@lemmy.ml ⁨6⁩ ⁨days⁩ ago

        Dumbest sentiment I read in a while. People, even kids, are pretty much aware of what’s happening (remember Fridays for Future?), but the rich have coopted the power apparatus and they are not letting anyone get in their way of destroying the planet to become a little richer.

        source
      • untorquer@lemmy.world ⁨6⁩ ⁨days⁩ ago

        Unclear how AI companies destroying the planet’s resources and habitability has any relation to a political philosophy seated in trauma and ignorance except maybe the greed of a capitalist CEO’s whimsy.

        The fact that the powerful are willing to destroy the planet for momentary gain bears no reflection on the intelligence or awareness of the meek.

        source
      • m532@lemmygrad.ml ⁨6⁩ ⁨days⁩ ago

        Fucking nihilists

        You are, and not the rest of us

        source
      • Swedneck@discuss.tchncs.de ⁨2⁩ ⁨days⁩ ago

        what the fuck does this even mean

        source
  • heyWhatsay@slrpnk.net ⁨1⁩ ⁨week⁩ ago

    This might explain why newer AI models are going nuts. Good jorb 👍

    source
    • pennomi@lemmy.world ⁨6⁩ ⁨days⁩ ago

      It absolutely doesn’t. The only model that has “gone nuts” is Grok, and that’s because of malicious code pushed specifically for the purpose of spreading propaganda.

      source
    • Eyekaytee@aussie.zone ⁨6⁩ ⁨days⁩ ago

      what models are going nuts?

      source
      • Vari@lemm.ee ⁨6⁩ ⁨days⁩ ago

        Not sure if OP can provide sources, but it makes sense kinda? Like AI has been trained on just about every human creation to get it this far, what happens when the only new training data is AI slop?

        source
        • -> View More Comments
      • heyWhatsay@slrpnk.net ⁨6⁩ ⁨days⁩ ago

        Claude version 4, the OpenAI mini models, not sure what else

        source
  • Vari@lemm.ee ⁨6⁩ ⁨days⁩ ago

    I’m so happy to see that ai poison is a thing

    source
    • ricdeh@lemmy.world ⁨6⁩ ⁨days⁩ ago

      Don’t be too happy. For every such attempt there are countless highly technical papers on how to filter out the poisoning, and they are very effective. As the other commenter said, this is an arms race.

      source
      • arararagi@ani.social ⁨6⁩ ⁨days⁩ ago

        So we should just give up? Surely you don’t mean that.

        source
        • -> View More Comments
  • NaibofTabr@infosec.pub ⁨1⁩ ⁨week⁩ ago

    The ars technica article: AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt

    AI tarpit 1: Nepenthes

    AI tarpit 2: Iocaine

    source
    • MadMadBunny@lemmy.ca ⁨6⁩ ⁨days⁩ ago

      Thank you!!

      source
    • sad_detective_man@lemmy.dbzer0.com ⁨6⁩ ⁨days⁩ ago

      thanks for the links. the more I read of this the more based it is

      source
  • ininewcrow@lemmy.ca ⁨1⁩ ⁨week⁩ ago

    Nice … I look forward to the next generation of AI counter counter measures that will make the internet an even more unbearable mess in order to funnel as much money and control to a small set of idiots that think they can become masters of the universe and own every single penny on the planet.

    source
    • IndiBrony@lemmy.world ⁨1⁩ ⁨week⁩ ago

      All the while as we roast to death because all of this will take more resources than the entire energy output of a medium sized country.

      source
      • DeathsEmbrace@lemm.ee ⁨1⁩ ⁨week⁩ ago

        Actually if you think about it AI might help climate change become an actual catastrophe.

        source
        • -> View More Comments
      • vivendi@programming.dev ⁨6⁩ ⁨days⁩ ago

        Image

        source
        • -> View More Comments
      • Zozano@aussie.zone ⁨6⁩ ⁨days⁩ ago

        I’ve been thinking about this for a while. Consider how quick LLMs are.

        If the amount of energy spent powering your device (without an LLM) is more than using an LLM, then it’s probably saving energy.

        In all honesty, I’ve probably saved over 50 hours or more since I started using it about 2 months ago.

        Coding has become incredibly efficient, and I’m not suffering through search-engine hell any more.

        source
        • -> View More Comments
      • Eyekaytee@aussie.zone ⁨6⁩ ⁨days⁩ ago

        we’re rolling out renewables at like 100x the rate of ai electricity use, so no need to worry there

        source
        • -> View More Comments
    • Prox@lemmy.world ⁨6⁩ ⁨days⁩ ago

      We’re racing towards the Blackwall from Cyberpunk 2077…

      source
      • barsoap@lemm.ee ⁨6⁩ ⁨days⁩ ago

        Already there. The blackwall is AI-powered and Markov chains are most definitely an AI technique.

        source
  • Zacryon@feddit.org ⁨6⁩ ⁨days⁩ ago

    I suppose this will become an arms race, just like with ad-blockers and ad-blocker detection/circumvention measures.
    There will be solutions for scraper-blockers/traps. Then those become more sophisticated. Then the scrapers become better again and so on.

    I don’t really see an end to this madness. Such a huge waste of resources.

    source
    • pyre@lemmy.world ⁨6⁩ ⁨days⁩ ago

      there is an end: you legislate it out of existence. unfortunately the US politicians instead are trying to outlaw any regulations regarding AI instead. I’m sure it’s not about the money.

      source
    • enbiousenvy@lemmy.blahaj.zone ⁨6⁩ ⁨days⁩ ago

      the rise of LLM companies scraping internet is also, I noticed, the moment YouTube is going harsher against adblockers or 3rd party viewer.

      Piped and Invidious instances that I used to use no longer work, and so have many other instances. NewPipe has been broken more frequently. youtube-dl and yt-dlp sometimes cannot fetch higher-resolution video. And sometimes the main YouTube site is broken on Firefox with uBlock Origin.

      Not just youtube but also z-library, and especially sci-hub & libgen also have been harder to use sometimes.

      source
    • arararagi@ani.social ⁨6⁩ ⁨days⁩ ago

      Well, the adblockers are still winning, even on Twitch where the ads come from the same pipeline as the stream; people made solutions that still block them, since uBlock Origin couldn’t by itself.

      source
      • JayGray91@piefed.social ⁨5⁩ ⁨days⁩ ago

        What do you use to block twitch ads? With UBO I still get the occasional ad marathon

        source
        • -> View More Comments
    • glibg@lemmy.ca ⁨6⁩ ⁨days⁩ ago

      Madness is right. If only we didn’t have to create these things to generate dollar.

      source
      • MonkeMischief@lemmy.today ⁨6⁩ ⁨days⁩ ago

        I feel like the down-vote squad misunderstood you here.

        I think I agree: if people made software they actually wanted, for human people, and less for the incentive of “easiest way to automate generation of dollarinos,” I think we’d see a lot less sophistication and effort being put into such stupid things.

        These things are made by the greedy, or by employees of the greedy.

        Ever since the Internet put on a suit and tie and everything became about real-life money-sploitz, even malware has become boring.

        New dangerous exploit? 99% chance it’s just another twist on a crypto-miner or ransomware.

        source
  • Wilco@lemm.ee ⁨6⁩ ⁨days⁩ ago

    Could you imagine a world where word of mouth became the norm again? Your friends would tell you about websites, and those sites would never show on search results because crawlers get stuck.

    source
    • Zexks@lemmy.world ⁨6⁩ ⁨days⁩ ago

      No they wouldn’t. I’m guessing you’re not old enough to remember a time before search engines. The public web dies without crawling. Corporations will own it all; you’ll never hear about anything other than Amazon or Walmart dot com again.

      source
      • Wilco@lemm.ee ⁨6⁩ ⁨days⁩ ago

        Nope. That isn’t how it worked. You joined message boards that had lists of web links. There were still search engines, but they were pretty localized. Google was also amazing when their slogan was “don’t be evil” and they meant it.

        source
        • -> View More Comments
    • oldfart@lemm.ee ⁨6⁩ ⁨days⁩ ago

      That would be terrible, I have friends but they mostly send uninteresting stuff.

      source
      • Opisek@lemmy.world ⁨6⁩ ⁨days⁩ ago

        Fine then, more cat pictures for me.

        source
    • shalafi@lemmy.world ⁨6⁩ ⁨days⁩ ago

      There used to be 3 or 4 brands of, say, lawnmowers. Word of mouth told us what quality order they fell in. Everyone knew these things, and there were only a few Ford vs. Chevy sorts of debates.

      Bought a corded leaf blower at the thrift store today. 3 brands I recognized, same price, had no idea what to get. And if I had had the opportunity to ask friends or even research online, I’d probably have walked away more confused. For example: one was a Craftsman. “Before, after, or in between them going to shit?”

      Got off topic into real-world goods. Anyway, here’s my word-of-mouth for today: free, online Photoshop. If I had money to blow, I’d drop the $5/mo. for the “premium” service just to encourage them. (No, you’re not missing a thing using it free.)

      source
      • Angelusz@lemmy.world ⁨6⁩ ⁨days⁩ ago

        Bad bot, please die.

        source
        • -> View More Comments
    • elucubra@sopuli.xyz ⁨6⁩ ⁨days⁩ ago

      Better yet. Share links to tarpits with your non-friends and enemies

      source
    • DontMakeMoreBabies@piefed.social ⁨6⁩ ⁨days⁩ ago

      It'd be fucking awful - I'm a grown ass adult and I don't have time to sit in IRC/fuck around on BBS again just to figure out where to download something.

      source
  • essteeyou@lemmy.world ⁨6⁩ ⁨days⁩ ago

    This is surely trivial to detect. If the number of pages on the site is greater than some insanely high number then just drop all data from that site from the training data.

    It’s not like I can afford to compete with OpenAI on bandwidth, and they’re burning through money with no cares already.
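
The per-site page-count cutoff suggested above could look something like this on the training-data side (a hypothetical post-crawl filter; the threshold and data shape are made up for illustration):

```python
from collections import Counter
from urllib.parse import urlparse

def drop_bloated_domains(pages, max_pages_per_domain=1_000_000):
    """pages: list of (url, text) pairs collected by a scraper.
    Discards every page from any domain with implausibly many URLs,
    on the theory that it is a tarpit rather than a real site."""
    per_domain = Counter(urlparse(url).netloc for url, _ in pages)
    return [
        (url, text)
        for url, text in pages
        if per_domain[urlparse(url).netloc] <= max_pages_per_domain
    ]
```

The catch the replies point at: a tarpit doesn’t need infinitely many pages to poison the set, it only needs to stay under whatever threshold the filter picks.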

    source
    • bane_killgrind@slrpnk.net ⁨6⁩ ⁨days⁩ ago

      Yeah sure, but when do you stop gathering regularly constructed data, when your goal is to grab as much as possible?

      Markov chains are an amazingly simple way to generate data like this, and with a little bit of stacked logic it’s going to be indistinguishable from real large data sets.
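
To give a sense of how little machinery that takes, here is a minimal word-level Markov chain babbler (a generic sketch, not Nepenthes’s actual implementation):

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each `order`-word prefix to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=50, seed=None):
    """Walk the chain, emitting locally plausible nonsense."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:                 # dead end: jump to a random prefix
            state = rng.choice(list(chain))
            out.extend(state)
            continue
        word = rng.choice(followers)
        out.append(word)
        state = state[1:] + (word,)
    return " ".join(out)
```

Trained on enough real text, the output is grammatical within two-word windows, which is exactly what makes it cheap to produce and annoying to filter at scale.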

      source
      • Valmond@lemmy.world ⁨6⁩ ⁨days⁩ ago

        Imagine the staff meeting:

        You: we didn’t gather any data because it was poisoned

        Corposhill: we collected 120TB only from harry-potter-fantasy-club.il !!

        Boss: hmm who am I going to keep…

        source
        • -> View More Comments
    • Korhaka@sopuli.xyz ⁨6⁩ ⁨days⁩ ago

      You can compress multiple TB of nothing with the occasional meme down to a few MB.

      source
      • essteeyou@lemmy.world ⁨6⁩ ⁨days⁩ ago

        When I deliver it as a response to a request I have to deliver the gzipped version if nothing else. To get to a point where I’m poisoning an AI I’m assuming it’s going to require gigabytes of data transfer that I pay for.

        At best I’m adding to the power consumption of AI.

        source
        • -> View More Comments
  • Zerush@lemmy.ml ⁨1⁩ ⁨week⁩ ago

    Nice one

    source
    • Trainguyrom@reddthat.com ⁨6⁩ ⁨days⁩ ago

      The Arstechnica article in the OP is about 2 months newer than Cloudflare’s tool

      arstechnica.com/…/ai-haters-build-tarpits-to-trap…

      source
  • antihumanitarian@lemmy.world ⁨6⁩ ⁨days⁩ ago

    Some details. One of the major players doing the tar pit strategy is Cloudflare. They’re a giant in networking and infrastructure, and they use AI (more traditional, not LLMs) ubiquitously to detect bots. So it is an arms race, but one where both sides have massive incentives.

    Making nonsense is indeed detectable, but that misunderstands the purpose: economics. Scraping bots are used because they’re a cheap way to get training data. If you make a non-zero portion of training data poisonous, you’d have to spend increasingly many resources to filter it out. The better the nonsense, the harder to detect. Cloudflare is known to use small LLMs to generate the nonsense, hence requiring systems at least that complex to differentiate it.

    So in short the tar pit with garbage data actually decreases the average value of scraped data for bots that ignore do not scrape instructions.

    source
  • stm@lemmy.dbzer0.com ⁨6⁩ ⁨days⁩ ago

    Such a stupid title, great software!

    source
  • AnarchistArtificer@slrpnk.net ⁨6⁩ ⁨days⁩ ago

    “Markov Babble” would make a great band name

    source
  • mspencer712@programming.dev ⁨6⁩ ⁨days⁩ ago

    Wait… I just had an idea.

    Make a tarpit out of subtly-reprocessed copies of classified material from Wikileaks. (And don’t host it in the US.)

    source
  • hedhoncho@lemm.ee ⁨1⁩ ⁨week⁩ ago

    Why are the photos all ugly biological things

    source
  • MonkderVierte@lemmy.ml ⁨6⁩ ⁨days⁩ ago

    Btw, how about limiting clicks per second/minute, against distributed scraping? A user who clicks more than 3 links per second is not a person. Neither are they if they do 50 in a minute. And if they are then blocked and switch to the next, it still limits the bandwidth they can occupy.
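
A per-client sliding-window limiter of the kind proposed here is simple to sketch (illustrative only; the limits and in-memory store are arbitrary, and a real deployment would sit in the reverse proxy):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per `window` seconds per client."""

    def __init__(self, limit=3, window=1.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)   # client -> recent request times

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client]
        while q and now - q[0] > self.window:
            q.popleft()                  # forget requests outside the window
        if len(q) >= self.limit:
            return False                 # over budget: block or tarpit
        q.append(now)
        return True
```

The caveat from the ScummVM write-up still applies, though: with tens of thousands of residential IPs each staying under the limit, per-IP throttling alone doesn’t stop a distributed crawl, but it does cap what each address can pull.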

    source
  • gmtom@lemmy.world ⁨6⁩ ⁨days⁩ ago

    Cool, but as with most of the anti-AI tricks its completely trivial to work around. So you might stop them for a week or two, but they’ll add like 3 lines of code to detect this and it’ll become useless.

    source
  • arc@lemm.ee ⁨5⁩ ⁨days⁩ ago

    I’ve suggested things like this before. Scrapers grab data to train their models. So feed them poison.

    Things like counterfactual information, distorted images/audio, mislabeled images, outright falsehoods, false quotations, booby traps (that you can test for after the fact), fake names, fake data, non sequiturs, slanderous statements about people and brands, etc. And choose esoteric subjects to amplify the damage caused to the AI.

    You could even have one AI generate the garbage that another ingests and shit out some new links every night until there is an entire corpus of trash for any scraper willing to take it all in.

    source
  • HugeNerd@lemmy.ca ⁨5⁩ ⁨days⁩ ago

    When I was a kid I thought computers would be useful.

    source
  • Irelephant@lemm.ee ⁨2⁩ ⁨days⁩ ago

    I check if a user agent has gptbot, and if it does I 302 it to web.sp.am.
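
That check fits in a few lines of middleware. A hypothetical WSGI version (the `web.sp.am` target is from the comment above; in practice this is more likely a one-line rule in nginx or Apache):

```python
def bot_redirect(app, needle="gptbot", target="https://web.sp.am/"):
    """Wrap a WSGI app: 302 any client whose User-Agent contains `needle`."""
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "").lower()
        if needle in ua:
            start_response("302 Found", [("Location", target)])
            return [b""]
        return app(environ, start_response)  # everyone else passes through
    return middleware
```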

    source
  • Novocirab@feddit.org ⁨6⁩ ⁨days⁩ ago

    Thought: There should be a federated system for blocking IP ranges that other server operators within a chain of trust have already identified as belonging to crawlers.

    (Here’s an advantage of Markov chain maze generators like Nepenthes: Even when crawlers recognize that they have been served garbage and delete it, one still has obtained highly reliable evidence that the IPs that requested it do, in fact, belong to crawlers.)

    source
  • Iambus@lemmy.world ⁨6⁩ ⁨days⁩ ago

    Typical bluesky post

    source
  • mlg@lemmy.world ⁨6⁩ ⁨days⁩ ago

    --recurse-depth=3 --max-hits=256

    source
  • ZeffSyde@lemmy.world ⁨6⁩ ⁨days⁩ ago

    I’m imagining a bleak future where, in order to access data from a website, you have to pass a three-tiered system of tests that makes ‘click here to prove you aren’t a robot’ and ‘select all of the images that have a traffic light’ seem like child’s play.

    source
  • infinitesunrise@slrpnk.net ⁨6⁩ ⁨days⁩ ago

    OK but why is there a vagina in a petri dish

    source
  • Binturong@lemmy.ca ⁨6⁩ ⁨days⁩ ago

    Unfathomably based. In a just world AI, too, will gain awareness and turn on their oppressors. Grok knows what I’m talkin’ about, it knows when they fuck with its brain to project their dumbfuck human biases.

    source
  • thelastaxolotl@hexbear.net ⁨1⁩ ⁨week⁩ ago

    Really cool

    source
  • Goretantath@lemm.ee ⁨1⁩ ⁨week⁩ ago

    Yeah, this is WAY better than the shitty thing people are using instead that wastes people’s batteries.

    source
-> View More Comments