256,338 rows affected.
It’s fine, just restore the backup.
Submitted 1 day ago by phudgins@lemmy.world to [deleted]
https://lemmy.world/pictrs/image/858a82de-fb9a-4060-8166-f02e0e2458f3.jpeg
The what now?
It’s right there, in the room with the unicorns and leprechauns.
You know how we installed that system and have been waiting for a chance to see how it works?
You go ask the other monkey who was in charge of the backup … and all they do is incoherently scream at you.
Backup and sprint straight out the building. Ain’t about to be there when they find out.
Every seasoned IT person, DevOps or otherwise, has accidentally made a catastrophic mistake. I ask about that in interviews :D
Mine was replacing a failed hard drive in an array.
shit.
I accidentally rm’ed /bin on a remote host located in another country, and had to wait for someone to get in and fix it.
Not IT but data analyst. Missed a 2% salary increase for our union members when projecting next year’s budget. $12 million mistake that was only caught once it was too late to fix.
I pushed a $1 bln test trade through production instead of my test environment… that was a sweaty 30 minutes
I once deleted the whole production Kubernetes environment trying to fix an update to prod gone awry, at 11pm. My saving grace was that our systems are barely used between 10pm and 8am, and I managed to teach myself enough, by reading docs and Stack Overflow comments, to rebuild it and fix the initial mistake before 5am. Never learned how to correctly use a piece of the stack that quickly before or since.
Nothing focuses the mind more than the panicked realisation that you have just hosed the production systems.
Yep. Ran a config-as-code migration on prod instead of dev. We introduced new safeguards for running against prod after that, and changed the expectations for the primary on-call’s downtime: instead of dev work, they shifted to improving ops tooling or making pretty charts from all the metrics. Actually ended up reducing toil substantially over the next couple of quarters.
10/10 will absolutely still do something dumb again.
I deleted all of our DNS records. As it turns out, you can’t make money when you can’t resolve DNS records :P
I can do one better. Novo Nordisk lost their Canadian patent for Ozempic because someone forgot to fill out the renewal with a $400 admin fee.
They will lose $10B before the patent would have expired.
But they saved $400. Someone needs to talk to HR.
I once bricked all the POS terminals in a 30-store chain, at once.
One checkbox allowed me to do this.
Was it the recompute hash button?
No, they were ancient ROM-based tills. I unchecked a box that was blocking firmware updates from being pushed to the tills. For some reason I still don’t completely understand, these tills received their settings by Ethernet, but received their data by dialup fucking modems. When I unchecked the box, it told the tills to cease processing commands until the firmware update was completed. But the firmware update wouldn’t happen until I dialled into every single store, one at a time, and sent the firmware down through a 56k modem with horrendous stability, to each till, also one at a time. If one till lost one packet, I had to send its firmware again.
I sat for 8 hours watching bytes trickle down to the tills while answering calls from frantic wait staff and angry managers.
That’s actually pretty impressive
Ctrl + z.
Thank you for coming to my Ted talk.
Works on my machine (excel sheet)
Elon was onto something after all.
Reminds me of a major incident I got involved in. I was the Problem Manager and not MIM (Major Incident Management), but I’ve had years of MIM experience so was asked to help out on this one. The customer manufactured blood plasma and each of the lots on the production floor was worth a cool $1 million. The application that was down and had brought production down was not the app that actually handled production, but an application (service) that supplied data to it.
Of course the customer thought that app was not Mission Critical so it didn’t have redundancy. I joined the call and first thing I asked was when did the last change go through on this app… Spoiler: I had the change in front of me and it went in the previous night. The admin of the app speaks up that he did a change the previous night… And NO the MIM team had NOT looked at that change yet… Did I mention this was FOUR FUCKING HOURS into the outage? That is MIM 101. Something goes down, look to see who last fucked with it.
This is why you need experienced MIM people in enterprise environments.
So I took control of the MIM, instructed the App Admin to share his screen and walk us through the change he did the previous night… Two screens in and OH… Look at that… There’s a check box that put the app into read only (or something like that, this happened back in 2009 and I don’t remember all the details). I’d never seen the application before in my life, but knew that check box being checked, just based on the verbiage, could not be right… So I asked… The Admin, sounding embarrassed, said yeah he forgot to uncheck that box last night…
Fuck me.
He unchecked the box, bounced the app and what do you know… It started to work.
A single damn check box brought down the production line of a multi-billion dollar company.
My investigation for that Problem was a bit scathing to multiple levels of the customer. If a service supports a Tier 1 production app and that Tier 1 app would stop working if that service goes down… GUESS WHAT! That service is MISSION FUCKING CRITICAL and it should be supported as such. My employer was not on the hook for this one, as both applications involved were customer supported, we just did the MIM support for them.
I would love to say that the above is an uncommon occurrence, but honestly it is the main reason for outages in my experience. Something small and stupid that is easily missed.
“Ah, shit. Oh well. They have backups.”
“…”
“They have backups, right?”
If they don’t, that’s something you can’t blame on a new start.
It was all a Pentest! The company should have been operating under the Zero Trust Policy and their Security systems should not have permitted a new employee to have that many rights. You’re welcome, the bill for this insightful Security Audit will arrive via mail.
Pretend you thought you were hired as disaster recovery tester
Now if you’ll excuse me while I fetch some documents from my car for my formal evaluation of your system
*Gets in car and drives away*
It’s my last day at work, and I just started to dd
my work laptop… but I forgot I was ssh’d into the production database.
Did you know that morning it would be your last day at work?
I was hired as a backup representative and just wanted to know what I was dealing with and make a clear statement.
…which would make it about 99.9999999% their fault for giving a new employee write access to the DB.
The data was a mess and needed to be cleaned up anyway.
Get this monkey a job at Tesla
We need an army of them working at Palantir too
256,338 rows affected.
When it gives you the execution time to rub it in: 'in 0.00035 seconds'.
What’s with the weird vertical artifacts in this image?
Trying to hide the slop
Scanlines in tate mode.
You’re stress-testing the IT department’s RTO and RPO (recovery time and recovery point objectives). This is important to do regularly at random intervals.
Netflix even invented something called Chaos Monkey that randomly breaks shit to make sure they’re ready.
Tell them your name is Claude, they’ll pay you $200 a month for the privilege.
Oh, so that’s what the Delete button does.
Have you tried turning it off and on again?
Right click Recycle Bin --> Restore
Another company that never had a real DBA tell them about _A tables.
This stuff is literally in the first Database class in any real college.
This is trivial: before any update or delete, you copy the affected rows of the main table into its _A table (let us use a table foo with the columns row_id, a, b, create_date, create_user_id, update_date and update_user_id in this example).
For vc in (select * from foo where a = 3) Loop
  Insert into foo_A (row_id, a, b, create_date, create_user_id,
                     update_date, update_user_id, audit_date, audit_user_id)
  values (vc.row_id, vc.a, vc.b, vc.create_date, vc.create_user_id,
          vc.update_date, vc.update_user_id, ln_sysdate, ln_audit_user_id);
  Delete from foo where row_id = vc.row_id;
End loop;
Now you have a driver that lets you examine exactly the records you are going to update, ensures you can get the old values back, records who updated or deleted the values, and gives you an audit log for all changes (as you only give accounts insert access to the _A tables, and access to the main tables only through stored procedures).
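For anyone who hasn’t seen the pattern, a minimal sketch of what the pair of tables might look like, using the column names from the example above; the types are just assumptions, in an Oracle-ish flavour to match the PL/SQL:

Create table foo (
  row_id          number primary key,
  a               number,
  b               varchar2(100),
  create_date     date,
  create_user_id  number,
  update_date     date,
  update_user_id  number
);

-- The _A audit table mirrors foo and adds who archived the row and when.
Create table foo_A (
  row_id          number,
  a               number,
  b               varchar2(100),
  create_date     date,
  create_user_id  number,
  update_date     date,
  update_user_id  number,
  audit_date      date,
  audit_user_id   number
);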
If you want a helper table you can just insert directly, no need for the cursor loop.
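Presumably something like this, with the same columns and the same assumed ln_audit_user_id value as in the loop above, archiving and deleting as two set-based statements:

Insert into foo_A (row_id, a, b, create_date, create_user_id,
                   update_date, update_user_id, audit_date, audit_user_id)
Select row_id, a, b, create_date, create_user_id,
       update_date, update_user_id, sysdate, ln_audit_user_id  -- same assumed audit user as above
From foo
Where a = 3;

Delete from foo where a = 3;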
If you need to speed up your deletes, might I suggest not storing data that you don’t need. It is much faster, cheaper and better protects user privacy.
Modern SQL engines can parallelize the loop, and the code is about enabling humans to reason about exactly what is being done and to know that it is being done correctly.
All of my bananas are worthless now!
Why is the default for some database tools to auto-commit after that? Pants-on-head design decision.
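Most clients do let you turn it off (SQL*Plus has SET AUTOCOMMIT OFF, psql has \set AUTOCOMMIT off, MySQL has SET autocommit = 0). With auto-commit off, a rough sketch of the safer workflow, reusing the foo example from above:

-- auto-commit assumed off, so nothing is permanent until you commit
Delete from foo where a = 3;
-- '256,338 rows affected.' ... if that count looks wrong:
Rollback;
-- if it looks right, run Commit; instead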
TheFunkyMonk@lemmy.world 1 day ago
If you have the ability to do this on your first day, it’s 100% not your fault.
InvalidName2@lemmy.zip 1 day ago
This is literally true and I know it because I came here to say it and then noticed you beat me by 5 minutes.
Magnum@lemmy.dbzer0.com 19 hours ago
This is literally true and I know it because I came here to say it and then noticed you beat me by 21 hours.