Scaling has a budget, I’m sure. They’ll only pay for so much.
Comment on Disney+ cancellation page crashes as customers rush to quit after Kimmel suspension
Bongles@lemmy.zip 2 weeks ago
On one hand, could be a “crash”. On the other hand, tons of websites break when they get a little extra traffic.
Side tangent: it seems odd to me that this is still a thing. Most company websites aren’t hosted on premises, so do services like (I assume) AWS not scale up when there’s extra traffic? Squarespace has been advertising for years that it will scale up under load. I’ve never tested it, though.
RememberTheApollo_@lemmy.world 2 weeks ago
DreamlandLividity@lemmy.world 2 weeks ago
If your page is just static (no login, no interaction, everyone always sees the same thing), then it scales easily: scaling just means copying the site to more servers. Now imagine a user adds a comment. That comment has to reach every copy of your site, so a single write creates more work the more servers you use. This is where scaling becomes a complex science that you have to deliberately prepare for as a software developer.
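A toy sketch of the point above (hypothetical, not any real site’s architecture): with N replicas, reads spread out across servers, but every write has to be applied to all N copies, so write work grows with the replica count.

```python
# Hypothetical illustration: reads scale out across replicas,
# but one user's comment fans out into N writes.

class Replica:
    def __init__(self):
        self.comments = []

    def read(self):
        # Any single replica can serve this read on its own.
        return list(self.comments)

    def write(self, comment):
        self.comments.append(comment)

def post_comment(replicas, comment):
    # Write amplification: one user action triggers len(replicas) writes.
    for r in replicas:
        r.write(comment)

replicas = [Replica() for _ in range(3)]
post_comment(replicas, "first!")
assert all(r.read() == ["first!"] for r in replicas)
```

Adding a fourth replica would make each read cheaper to serve overall, but would also make every comment cost one more write, which is the trade-off the comment describes.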
BCsven@lemmy.ca 2 weeks ago
Caching servers. They self-replicate when a change is committed, then send a signal back to the main server that the task has completed.
DreamlandLividity@lemmy.world 2 weeks ago
I am not sure what you are trying to say?
BCsven@lemmy.ca 2 weeks ago
Oh right, I skipped a part. It’s not really a dev-complexity prep issue. You build the database that serves the comments etc. as if in one place, then you deploy cache servers for scaling. They self-replicate, so a comment in California gets committed to the database, the server in New York pulls the info over from the Cali change and sends back that it is synced with the change, and vice versa. The caching servers do the work, not your program.
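The pattern described here can be sketched roughly like this (all names are hypothetical): a single primary database holds the committed changes, and cache servers pull anything they haven’t seen yet, then acknowledge back how far they have synced.

```python
# Hypothetical sketch of a primary database plus pull-based cache servers
# that acknowledge once they are in sync.

class Primary:
    def __init__(self):
        self.log = []    # committed changes, in order
        self.acks = {}   # cache name -> number of changes acknowledged

    def commit(self, change):
        self.log.append(change)

class CacheServer:
    def __init__(self, name, primary):
        self.name = name
        self.primary = primary
        self.data = []

    def sync(self):
        # Pull any changes we haven't seen, then ack back to the primary.
        missing = self.primary.log[len(self.data):]
        self.data.extend(missing)
        self.primary.acks[self.name] = len(self.data)

primary = Primary()
caches = [CacheServer("california", primary), CacheServer("new_york", primary)]

primary.commit("comment from CA user")
for c in caches:
    c.sync()

assert all(c.data == primary.log for c in caches)
```

The application only ever talks to the primary; keeping the caches current is the replication layer’s job, which is the “the caching servers do the work, not your program” point.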
andros_rex@lemmy.world 2 weeks ago
I feel like Disney has internal stuff? I listened to a podcast where an ex-employee changed the fonts on a bunch of stuff to Wingdings, etc., and made everything unusable.
okmko@lemmy.world 2 weeks ago
It could also be a lack of graceful failure. What we see as a crash may stem from some unavailability deep in a long pipeline of services.
pinball_wizard@lemmy.zip 2 weeks ago
Side tangent, seems odd to me this is still a thing. Most company websites aren’t hosted on premises, so do these services like (i assume) AWS not scale for when there’s traffic?
Scaling is only for companies that have not been allowed to purchase and enshittify every serious competitor. (Pixar, Marvel, HBO…)
Kissaki@feddit.org 2 weeks ago
You have to design for scalability, and bottlenecks can be anywhere. Even if their virtual servers’ CPU and RAM can scale up, other things may be the limit: maybe the connection to the DB, maybe the DB itself is elsewhere and doesn’t scale. You can’t really make a reasonable guess from the outside.
And mass cancellation is not usually a load pattern they would design around; handling it smoothly doesn’t add value for them.