We finally know what caused the global tech outage - and how much it cost
Submitted 3 months ago by WhatsHerBucket@lemmy.world to technology@lemmy.world
https://www.cnn.com/2024/07/24/tech/crowdstrike-outage-cost-cause/index.html?cid=ios_app
Comments
dditty@lemm.ee 3 months ago
$5.4B so far, not including lost worker productivity or damage to brand reputations, so that’s a very conservative estimate. And cybersecurity insurance will supposedly only cover up to 20% of that (but good luck getting even that much). What a clusterf***
Empricorn@feddit.nl 3 months ago
And that $5,400,000,000 loss estimate is only Fortune 500 companies!
11111one11111@lemmy.world 3 months ago
No, it’s effectively all of them, because all the companies outside the Fortune 500 combined wouldn’t have a net worth large enough to move the needle. So technically they may not be included, but they’d be covered by whatever amount was rounded up to make the even $5.4B.
0x0@programming.dev 3 months ago
On Wednesday, CrowdStrike released a report outlining the initial results of its investigation into the incident, which involved a file that helps CrowdStrike’s security platform look for signs of malicious hacking on customer devices.
whatwhatwhatwhat@lemmy.world 3 months ago
The fact that they weren’t already doing staggered releases is mind-boggling. I work for a company with a minuscule fraction of CrowdStrike’s user base / value, and even we do staggered releases.
foggenbooty@lemmy.world 3 months ago
They do have staggered releases, but it’s a bit more complicated. The client that you run does have versioning, and you can choose to lag behind the current build, but this was a bad definition update. Most people want the latest definitions to protect themselves from zero-days. The whole thing is complicated and a bit wonky, but the real issue here is CrowdStrike’s kernel driver not validating the content of the definition before loading it.
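Not CrowdStrike’s actual code or file format (neither is public), just a minimal C sketch of the kind of pre-load validation being described; DEF_MAGIC, DEF_ENTRY_SIZE, and def_header_t are all made-up names:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define DEF_MAGIC      0x44454631u   /* "DEF1", made up */
#define DEF_ENTRY_SIZE 64u           /* fixed per-entry size, made up */

typedef struct {
    uint32_t magic;        /* file-type marker */
    uint32_t entry_count;  /* number of detection entries that follow */
    uint32_t payload_len;  /* bytes of entry data after the header */
} def_header_t;

/* Returns 0 only if the blob is structurally plausible; anything else
 * is rejected before the driver ever walks its contents. */
int validate_definition(const uint8_t *buf, size_t len)
{
    def_header_t hdr;

    if (buf == NULL || len < sizeof(hdr))
        return -1;                      /* too small to even hold a header */

    memcpy(&hdr, buf, sizeof(hdr));     /* copy out to avoid unaligned reads */

    if (hdr.magic != DEF_MAGIC)
        return -1;                      /* not a definition file at all */
    if (hdr.payload_len != len - sizeof(hdr))
        return -1;                      /* declared size disagrees with reality */
    if ((uint64_t)hdr.entry_count * DEF_ENTRY_SIZE != hdr.payload_len)
        return -1;                      /* entry count does not match payload */

    return 0;
}
```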
AA5B@lemmy.world 3 months ago
“a bug in CrowdStrike’s cloud-based testing system”
Always blame the tests. There are so many dark patterns in this industry, including blaming QA for being the last group to touch a release, that I never believe “it’s the tests.”
There’s usually something more systemic going on: something like this gets missed by project management and developers, or maybe they have a blind spot and assume it could never happen, or maybe there’s a lack of communication or planning, or maybe they outsourced testing to the cheapest offshore providers, or maybe everyone is under huge time pressure. But sure, “it’s the tests.”
aStonedSanta@lemm.ee 3 months ago
There was probably one dude at CrowdStrike going, “Uh, hey guys???” 😆
Plopp@lemmy.world 3 months ago
Couldn’t it, though? 🤔
IANAD, and AFAIU, not in kernel mode. Things like trying to read nonexistent memory in kernel mode are supposed to crash the system, because continuing could be worse.
0x0@programming.dev 3 months ago
I meant, couldn’t they test for a NULL pointer?
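For illustration only, a tiny C sketch of that guard, using a made-up entry table; in a real kernel driver the unchecked version would page-fault in the kernel and bugcheck (BSOD) rather than merely crash one process:

```c
#include <stddef.h>
#include <stdio.h>

typedef struct { const char *pattern; } detect_entry_t;  /* made-up entry type */

/* Unchecked: dereferences whatever the definition file handed us.
 * In kernel mode, a NULL here means a page fault in the kernel. */
void process_unchecked(detect_entry_t **table, size_t n)
{
    for (size_t i = 0; i < n; i++)
        printf("loading rule: %s\n", table[i]->pattern);  /* crashes on NULL */
}

/* Checked: a malformed table fails the load instead of taking the box down. */
int process_checked(detect_entry_t **table, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i] == NULL || table[i]->pattern == NULL)
            return -1;  /* reject the update; do not dereference */
        printf("loading rule: %s\n", table[i]->pattern);
    }
    return 0;
}
```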
cheddar@programming.dev 3 months ago
The company routinely tests its software updates before pushing them out to customers, CrowdStrike said in the report. But on July 19, a bug in CrowdStrike’s cloud-based testing system — specifically, the part that runs validation checks on new updates prior to release — ended up allowing the software to be pushed out “despite containing problematic content data.”
It is time to write tests for tests!
Passerby6497@lemmy.world 3 months ago
My thought is to have a set of machines that have to run the update for a while: all of them have to pass to allow it to move forward, and if any single machine doesn’t pass, it halts any further rollout.
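A gate like that is only a few lines; here’s a hedged C sketch, with the part that would actually poll each canary machine stubbed out:

```c
#include <stdbool.h>
#include <stdio.h>

#define CANARY_COUNT 8  /* made-up fleet size */

/* Stub: in reality this would ask each canary machine whether it is
 * still healthy after running the new update for a soak period. */
static bool canary_healthy(int machine_id)
{
    (void)machine_id;
    return true;  /* placeholder result */
}

/* All canaries must pass; any single failure halts the wider rollout. */
bool rollout_gate(void)
{
    for (int i = 0; i < CANARY_COUNT; i++) {
        if (!canary_healthy(i)) {
            fprintf(stderr, "canary %d failed; halting rollout\n", i);
            return false;
        }
    }
    return true;
}

int main(void)
{
    if (rollout_gate())
        puts("all canaries passed; continuing staged rollout");
    return 0;
}
```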
essteeyou@lemmy.world 3 months ago
Oh, finally, I have been waiting for so long.
Wispy2891@lemmy.world 3 months ago
This CrowdStrike stuff seems like an expensive subscription.
I saw a lot of photos of crashed ad screens.
Why the hell are corps paying this much money for Windows + CrowdStrike for a glorified digital picture frame?? Wouldn’t it be 100x cheaper to do it with some embedded stuff instead of having a full desktop computer running a full desktop OS???
sugar_in_your_tea@sh.itjust.works 3 months ago
Yeah, an RPi or similar with a screen would be more than plenty for this, and the Pi Zero is really small. Connect that to a central Linux server with a hot backup or two (through local DNS) and you’ll have a hard time crashing it.
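A hedged C sketch of the hot-backup-through-local-DNS idea, assuming a hypothetical signage.lan name and port 8080: the client re-resolves the name on every attempt and tries each address behind it, so repointing the DNS record at a backup server is the entire failover.

```c
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

/* Connect to whichever signage server the local DNS currently points at.
 * Re-resolving on every call means a record change fails over immediately. */
int connect_to_signage_server(void)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;    /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("signage.lan", "8080", &hints, &res) != 0)
        return -1;                    /* DNS itself is down; caller retries */

    for (ai = res; ai != NULL; ai = ai->ai_next) {  /* primary, then backups */
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                    /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;                        /* -1 if nothing was reachable */
}
```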
Varyk@sh.itjust.works 3 months ago
And the stockades?
Any word on the stockades?
Bishma@discuss.tchncs.de 3 months ago
George Kurtz has only crashed the world twice so he has one strike to go, I guess.
Blue_Morpho@lemmy.world 3 months ago
You can only fail upwards at the executive level. He went from CTO to CEO on his last global crash. What’s next? Running for President?
No risk, all rewards.
c0smokram3r@midwest.social 3 months ago
Wowowow! This is insane! 😨🤯
Semi_Hemi_Demigod@lemmy.world 3 months ago
For the rest of history, this sort of thing will mention CrowdStrike, or it might even be called a “crowdstrike.”
You can’t buy that kind of marketing
riodoro1@lemmy.world 3 months ago
Ok. Can we get a solar storm next? I want Linux servers out this time too.
sugar_in_your_tea@sh.itjust.works 3 months ago
Best I can do is an xz vuln where half the Linux servers go down for maintenance.
unexpectedteapot@lemmy.ml 3 months ago
Do we actually know? We might know that CrowdStrike was the cause, but we don’t actually know what went wrong and how it happened. It is unfree, proprietary, closed-source software; we just have to take their word for it, which for all intents and purposes is PR, in line with the fact that it comes from a profit-driven organisation.
lightsblinken@lemmy.world 3 months ago
this is exactly the question that needs answering… the PIR is bullshit
JasonDJ@lemmy.zip 3 months ago
Pretty soon we are gonna have to start deciding if it’s safer for enterprise computers to run without AV or AMP.
bigFab@lemmy.world 3 months ago
Beautiful
Imgonnatrythis@sh.itjust.works 3 months ago
“CrowdStrike said it also plans to move to a staggered approach to releasing content updates so that not everyone receives the same update at once, and to give customers more fine-grained control over when the updates are installed.”
Hol up. So they like still get to exist? Microsoft and affected industries just gonna kinda move past this?
BakerBagel@midwest.social 3 months ago
Haven’t seen anything from the affected major players. Obviously CrowdStrike isn’t going to say they’re fucked long-term; they have to act like this is just a little hiccup and move on. Lawsuits are absolutely incoming.
Ledivin@lemmy.world 3 months ago
We’ll see how fucked they are from SLA breaches/etc., and then we’ll see how many companies jump ship to an alternative. We won’t see the real fallout from this event for months or years.
Modern_medicine_isnt@lemmy.world 3 months ago
Newsflash: SolarWinds still exists too. Not sure I could name a company that screwed up this big and actually paid the price.
Imgonnatrythis@sh.itjust.works 3 months ago
Yeah, what was I thinking. United Airlines went bankrupt, literally beat people up on their planes, and still got taxpayer payouts, and it’s still around paying investor dividends today.
TheLimiter@lemmy.world 3 months ago
Two days ago my company sent out an all hands email that we’re going company wide with Crowdstrike.
LodeMike@lemmy.today 3 months ago
Companies using CrowdStrike and Windows aren’t really the type to be active about this sort of thing.
11111one11111@lemmy.world 3 months ago
What do you mean by this?
JasonDJ@lemmy.zip 3 months ago
I wasn’t affected, but I bet a lot of admins, as pissed as they were, were thinking, “I could easily fuck up this bad or worse.”
jeeva@lemmy.world 3 months ago
Yeah, what’s the jokey parable thing?