No, in “DevOps” environments, “configuration changes” are most of what you do every day
Comment on Cloudflare blames massive internet outage on 'latent bug'
A_norny_mousse@feddit.org 22 hours ago
a routine configuration change
Honest question (I don’t work in IT): this sounds like a contradiction, or at the very least a deliberately placating choice of words. Isn’t a config change the opposite of routine?
floquant@lemmy.dbzer0.com 15 hours ago
fushuan@lemmy.blahaj.zone 21 hours ago
They probably mean that they made a change in a config file that gets uploaded in their weekly or bi-weekly change window, and that the file was malformed for whatever reason, which made the process that reads it crash. The main process depends on that process, and the whole chain failed.
Things to improve:
- Make the pipeline more resilient: if you have a “bot detection module” that expects a file, and that file is malformed, it shouldn’t crash the whole thing. If the bot detection module crashes, handle it, fire an alert, but accept the request until it’s fixed.
- Have a check on uploaded files to ensure that nothing outside the expected values and format gets deployed: if a file doesn’t comply with the expected format, the upload fails and the prod environment doesn’t crash.
- Have proper validation of updated config files to ensure that if something is amiss, nothing crashes and the program makes a controlled decision: if the file is wrong, instead of crashing, the module returns an informed value and lets the main program decide whether to keep going (a rough sketch of this follows below).
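Something like the last two points can be as simple as validating the file before it ever reaches the running module, and falling back to the last known-good version when validation fails. A minimal Python sketch, assuming a made-up JSON schema (this is not Cloudflare’s actual pipeline):

```python
import json

REQUIRED_FIELDS = {"features", "version"}  # made-up schema, just for illustration
MAX_FEATURES = 200                         # arbitrary sanity limit

def validate_config(raw: str) -> dict:
    """Reject a malformed file before it ever reaches the running module."""
    cfg = json.loads(raw)                  # raises JSONDecodeError on broken JSON
    if not isinstance(cfg, dict):
        raise ValueError("config must be a JSON object")
    missing = REQUIRED_FIELDS - cfg.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    if len(cfg["features"]) > MAX_FEATURES:
        raise ValueError("feature list larger than expected")
    return cfg

def load_config(raw: str, last_good: dict) -> dict:
    """Controlled decision: keep serving with the last known-good config on failure."""
    try:
        return validate_config(raw)
    except ValueError as exc:              # json.JSONDecodeError is a ValueError too
        print(f"ALERT: rejected config update: {exc}")  # fire an alert instead of crashing
        return last_good
```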
I’m sure they have several of these and sometimes shit happens, but for something as critical as CloudFlare to not have automated integration tests in a testing environment before anything touches prod is pretty bad.
groet@feddit.org 19 hours ago
it shouldn’t crash the whole thing. If the bot detection module crashes, handle it, fire an alert, but accept the request until it’s fixed.
Fail open vs fail closed. Bot detection is a security feature. If the security feature fails, do you disable it and allow unchecked access to client data? Or do you value integrity over availability?
Imagine the opposite: they disable the feature and during that timeframe some customers get hacked. Those hacks could have been prevented by the bot detection (which the customers are paying for).
Yes, bot detection is not the most critical security feature and probably not the reason someone gets hacked, but having “fail closed” as the default for all security features is absolutely a valid policy. Changing that policy should not be the lesson from this disaster.
fushuan@lemmy.blahaj.zone 17 hours ago
You don’t get hacking protection from bots; you get protection from DDoS attacks. Yeah, some customers would have gone down; instead, everyone went down… I said that instead of crashing the system they should have something that takes an intentional decision and informs properly about what’s happening. That decision might have been to close.
You can keep the policy and still inform everyone much better about what’s happening. Half a day is a wild amount of downtime; if this had been properly managed it wouldn’t have taken that long.
Yes, bot detection is not the most critical…
So you agree that the correct approach would have been to control this instead of crashing everything outright: being able to make an informed decision to open or close things, with opening suggested in the case of bot detection. What’s the point of your complaint if you do agree? C’mon.
groet@feddit.org 16 hours ago
You don’t get hacking protection from bots
I disagree. I don’t know the details of Cloudflare’s bot detection, but there are many automated vulnerability scanners that this could protect against.
I said that instead of crashing the system they should have something that takes an intentional decision and informs properly about what’s happening.
I agree. Every crash is a failure by the designers. Instead, it should be caught by the program and result in a useful error state. They probably have something like that, but it didn’t work because the crash was too severe.
What’s the point of your complaint if you do agree?
I am not complaining. I am informing you that you are missing an angle in your consideration. You can never prevent every crash. So when designing your product you have to consider what should happen if every safeguard fails and you get an uncontrolled crash. In that case you have to design for “fail open” or “fail closed”. Cloudflare fucked up. The crash should not have happened, and if it did, it should have been caught. It wasn’t. They fucked up. But I agree with the result of the fuck-up causing a fail-closed state.
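For what that design decision could look like in practice, here is a minimal Python sketch of a top-level guard that applies a pre-decided fail-open or fail-closed policy when a security module crashes (the `bot_detector` object and the names are made up, not Cloudflare’s actual code):

```python
import logging
from enum import Enum

log = logging.getLogger("edge-proxy")

class FailurePolicy(Enum):
    FAIL_OPEN = "fail_open"      # on crash, let the request through unchecked
    FAIL_CLOSED = "fail_closed"  # on crash, block the request

def check_request(request, bot_detector, policy=FailurePolicy.FAIL_CLOSED):
    """Run bot detection, but survive an uncontrolled crash inside it."""
    try:
        return bot_detector.is_allowed(request)  # normal path
    except Exception as exc:
        # Uncontrolled failure inside the security module: don't take the whole
        # proxy down with it; apply the pre-decided policy instead and alert.
        log.error("bot detection crashed: %s", exc)
        return policy is FailurePolicy.FAIL_OPEN
```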
monkeyslikebananas2@lemmy.world 22 hours ago
Not really. Sometimes processes are designed where engineers make a change in reaction to, or in preparation for, something. They could have easily made a mistake when making a change like that.
123@programming.dev 22 hours ago
E.g., companies that advertise during a large sporting event might preemptively scale up (or warm up, depending on the language/runtime) their servers in preparation for a large load increase following an ad or a mention of a coupon or promo code. Failing to capture the demand it could generate would be seen as wasted $$$.
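As a rough illustration, such a pre-event scale-up can be as crude as a scheduled job that bumps the desired instance count ahead of the ad slot. A hypothetical Python sketch, where `set_desired_instances` stands in for whatever autoscaling API the platform actually exposes:

```python
from datetime import datetime, timedelta, timezone

def set_desired_instances(service: str, count: int) -> None:
    # Stand-in for the real, cloud-specific autoscaling API call.
    print(f"scaling {service} to {count} instances")

def preemptive_scale(ad_airtime: datetime, warmup: timedelta = timedelta(minutes=30)) -> None:
    """Scale up ahead of the expected spike so instances are warm when the ad airs."""
    scale_at = ad_airtime - warmup
    if datetime.now(timezone.utc) >= scale_at:
        set_desired_instances("checkout-frontend", 50)  # made-up service name and count
    else:
        print(f"too early, will scale at {scale_at.isoformat()}")

# Example: ad slot scheduled for a made-up airtime.
preemptive_scale(datetime(2026, 2, 8, 23, 30, tzinfo=timezone.utc))
```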
NotMyOldRedditName@lemmy.world 22 hours ago
I don’t think it was a bug in making the configuration change; I think there was a bug as a result of that change.
That specific combination of changes may not have been tested, or applied in production, for months, and it just happened to surface today, hence the “latent” part.
monkeyslikebananas2@lemmy.world 21 hours ago
Yeah, I just read the postmortem. My response was more about the confusion that any configuration change is inherently non-routine.