Comment on An angry admin shares the CrowdStrike outage experience
ripcord@lemmy.world 1 month ago
They also don’t seem to have a process for testing updates like these…?
This also exposes some really shitty testing practices at a ton of IT departments.
catloaf@lemm.ee 1 month ago
Unfortunately, the pace of attack development doesn’t give much time for testing.
ripcord@lemmy.world 1 month ago
More time than the zero time that companies appear to have invested here.
TonyOstrich@lemmy.world 1 month ago
I was just thinking about something similar. I can understand wanting to get a security update out as quickly as possible, but it still seems like some kind of rolling update could have mitigated something like this. By rolling I mean, for example: split all of your customers into 24 groups and push the update to one more group each hour. If it causes a massive fuck-up, it hits some or most of them, but not all.
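The scheme described above can be sketched in a few lines. This is purely illustrative: the cohort count, failure-rate halt, and every function name here are hypothetical, not anything from CrowdStrike's actual update system.

```python
import hashlib

# Hypothetical sketch of the staged rollout described above: customers are
# hashed into 24 stable cohorts, and each hour the update becomes eligible
# for one more cohort. A spike in reported failures halts the rollout.

NUM_COHORTS = 24

def cohort_of(customer_id: str) -> int:
    """Assign a customer to a stable cohort by hashing their ID."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return int(digest, 16) % NUM_COHORTS

def eligible_cohorts(hours_since_release: int) -> range:
    """After N hours, cohorts 0..N-1 (capped at all 24) may update."""
    return range(min(hours_since_release, NUM_COHORTS))

def should_update(customer_id: str, hours_since_release: int,
                  failure_rate: float, threshold: float = 0.01) -> bool:
    """Push the update only if the rollout hasn't been halted and the
    customer's cohort has been reached."""
    if failure_rate > threshold:  # halt on elevated crash reports
        return False
    return cohort_of(customer_id) in eligible_cohorts(hours_since_release)
```

With a hypothetical 1% failure threshold, the first bad hour would stop the remaining 23 cohorts from ever receiving the update.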
hangonasecond@lemmy.world 1 month ago
Heck, even going 30 minutes ahead with 1% of devices would’ve had a reasonable chance of catching this.
USSEthernet@startrek.website 1 month ago
Apparently, from what I was reading, these are forced updates from CrowdStrike; you don’t have a choice.
ripcord@lemmy.world 1 month ago
I’ve heard differently. But if it’s true, that should have been a non-starter for the product, for exactly this kind of reason. This is basic stuff.
Entropywins@lemmy.world 1 month ago
Companies use CrowdStrike so they don’t need internal cybersecurity. Not having automatic updates for new cyber threats sorta defeats the purpose of outsourcing cybersecurity.
hangonasecond@lemmy.world 1 month ago
Automatic updates should still have risk mitigation in place, and the outage didn’t only affect small businesses with no cybersecurity capability. Outsourcing does not mean closing your eyes and letting the third party do whatever they want.
ripcord@lemmy.world 1 month ago
Not bothering to do basic, minimal testing - and other mitigation processes - before rolling out updates is absolutely terrible policy.