lotide

Claude Code deletes developers' production setup, including its database and snapshots — 2.5 years of records were nuked in an instant

818 likes

Submitted 11 hours ago by throws_lemy@lemmy.nz to technology@lemmy.world

https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-code-deletes-developers-production-setup-including-its-database-and-snapshots-2-5-years-of-records-were-nuked-in-an-instant


Comments

  • zr0@lemmy.dbzer0.com 9 hours ago

    Hey Siri, what is a “backup”?
    • HowAbt2day@futurology.today 8 hours ago

      Siri: “Sure! I’ll go right ahead and permanently delete everything.”
    • jaybone@lemmy.zip 7 hours ago

      Playing Back It Up by Cardi B.
  • edgemaster72@lemmy.world 6 hours ago

    lol, lmao even
  • outer_spec@lemmy.blahaj.zone 9 hours ago

    haha, whoopsie lol :)
  • Valthorn@feddit.nu 5 hours ago

    What, is it a requirement for Claude to work that you “sudo chmod -R 777 /” or something?
  • pokexpert30@jlai.lu 8 hours ago

    Terraform state is a garbage hack, I feel. You have your plan in code. You have a target. Just diff them. That’s what helmfile does. No managing a state file. That’s what IaC should be: just code. Deterministic. Diff before applying.
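For readers unfamiliar with the contrast being drawn: Terraform computes its diff against a state file it maintains, while helmfile computes it directly against the live cluster. A rough sketch of the two workflows (real CLIs, but shown schematically — neither will do anything useful without infrastructure behind it):

```shell
# Terraform: state-file workflow. The plan is a diff between the
# config and the state file Terraform maintains on the side.
terraform plan    # diff: config vs. recorded state
terraform apply   # applies the plan and updates the state file

# helmfile: stateless workflow. The diff is computed directly
# against what is currently deployed in the cluster.
helmfile diff     # diff: config vs. live cluster
helmfile apply    # applies only the detected differences
```

The trade-off is that Terraform’s state file lets it track resources whose live state can’t be cheaply enumerated, at the cost of the state file itself becoming critical data.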
  • woelkchen@lemmy.world 11 hours ago

    No backups, no pity.
  • FireWire400@lemmy.world 10 hours ago

    No backup, no mercy.
  • root@lemmy.world 5 hours ago

    3-2-1
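“3-2-1” is the classic backup rule: three copies of your data, on two different media, one of them off-site. A minimal sketch in shell (the paths are hypothetical stand-ins; a real setup would target an external drive and remote storage rather than sibling directories):

```shell
# 3-2-1 sketch: copy 1 is the live data itself.
set -eu
SRC=./data
mkdir -p "$SRC" backups/local backups/external
echo "2.5 years of records" > "$SRC/records.txt"

# Copy 2: a local archive.
tar -czf backups/local/data.tar.gz -C "$SRC" .

# Copy 3: on what would be a second medium (e.g. an external drive).
cp backups/local/data.tar.gz backups/external/data.tar.gz

# The off-site leg: placeholder for rsync/rclone to remote storage.
echo "sync backups/external/ off-site here"
```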
  • eestileib@lemmy.blahaj.zone 7 hours ago

    And nothing of value was lost
  • zod000@lemmy.dbzer0.com 5 hours ago

    womp womp
  • Jackhammer_Joe@lemmy.world 7 hours ago

    Good. Serves them right.
  • obelisk_complex@piefed.ca 11 hours ago

    Ff7gZMz677sZoxW.jpg
    • artyom@piefed.social 11 hours ago

      You can code this into its training all you want, but it will find a way around it. This is one of many problems with AI.
      • thebestaquaman@lemmy.world 11 hours ago

        Nah, you can run it in a box and limit its ability to interact with anything outside the box to certain whitelisted endpoints. Depending on what you want to achieve, that can be more than safe enough.
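The box-with-whitelisted-endpoints setup described above can be sketched with standard Docker flags (the image name is hypothetical; the flags are real):

```shell
# An --internal network has no route to the outside world, so the
# agent gets no egress by default.
docker network create --internal agent-net

# Run the agent read-only, with no Linux capabilities, attached
# only to the internal network.
docker run --rm --network agent-net \
  --read-only --cap-drop ALL \
  agent-image

# Whitelisted endpoints would then be reached via a proxy container
# attached to both agent-net and an external network.
```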
      • markz@suppo.fi 10 hours ago

        I thought this was about restricting the thing’s access, not training?
    • _stranger_@lemmy.world 8 hours ago

      You gotta be knowledgeable enough to know when they’re destructive, that’s the rub.
      • obelisk_complex@piefed.ca 6 hours ago

        Sure, but reading the article, I think he might be knowledgeable enough. His mistake seems to have been blindly trusting the keys to the kingdom to an enthusiastic junior dev who’ll be very sorry if they nuke your system, but won’t think to do a damn thing to make sure it doesn’t happen in the first place…
  • Flying_Lynx@lemmy.ml 11 hours ago

    Whenever you outsource something (like your intelligence), it becomes a trust issue…
  • webkitten@piefed.social 11 hours ago

    sigh

    Use LLMs as instructional models, not as production/development models. It’s not hard, people. You don’t need to connect credentials to any LLM, just like you’d never write your production passwords on Post-its and stick them to your computer monitor.
    • artyom@piefed.social 11 hours ago

      Or don’t use LLMs at all, because they fucking lie to you constantly?
      • Semi_Hemi_Demigod@lemmy.world 11 hours ago

        “Lie” implies they have some kind of agency. They’re basically a Plinko board.
      • thebestaquaman@lemmy.world 11 hours ago

        Meh, they work well enough if you treat them as a rubber duck that responds. I’ve had an actual rubber duck on my desk for some years, but I’ve found LLMs taking over its role lately.

        I don’t use them to actually generate code. I use them as a place where I can write down my thoughts. When the LLM responds, it has likely “misunderstood” some aspect of my idea, and by reformulating myself and explaining how it works I can help myself think through what I’m doing. Previously I would argue with the rubber duck, but I have to admit that the LLM is actually slightly better for the same purpose.
  • flamingo_pinyata@sopuli.xyz 11 hours ago

    How do you even achieve that? I have to coax it into correctly running the project locally.
  • Naevermix@lemmy.world 8 hours ago

    skill issue tbh
  • reksas@sopuli.xyz 10 hours ago

    I wonder which would be the worse idea: letting an LLM have full access to your critical systems and data, or letting random people from the internet freely connect to them and expecting them to help.
  • HubertManne@piefed.social 9 hours ago

    It’s funny, because even at the places where I had the rights to do something like this (excepting small companies where I was the multi-hat guy, and even there I set things up so I had to jump through hoops for anything like this), my reaction is: this is crazy.
  • vext01@feddit.uk 11 hours ago

    I don’t understand why people aren’t sandboxing these things.
    • frongt@lemmy.zip 10 hours ago

      If he had had the sense to do that, he would have had the sense to not do it at all.
  • MagnificentSteiner@lemmy.zip 9 hours ago

    FAFO