eleitl@lemmy.zip 2 days ago
“and database snapshots that Grigorev had counted on as backups” – yes, this is exactly how you run “production”.
Nighed@feddit.uk 2 days ago
With some of the cloud providers, their built-in backups are tied to the resource. So even if you have super-duper geo-zone-redundant backups going back years, they still get nuked if you drop the server.
It's always felt a bit stupid, but support can normally still restore the backups.
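If you're on AWS RDS, for example, one hedge is to copy the newest automated snapshot into a manual one, since manual snapshots outlive the instance. A minimal sketch with boto3 (the instance identifier and naming scheme here are made up):

```python
# Minimal sketch, assuming AWS RDS and boto3; identifiers are hypothetical.
# Automated RDS snapshots are tied to the instance and disappear with it;
# copying the latest one to a *manual* snapshot decouples it.
import boto3

rds = boto3.client("rds")

def detach_latest_snapshot(instance_id: str) -> str:
    # List the automated (instance-linked) snapshots for this instance.
    snaps = rds.describe_db_snapshots(
        DBInstanceIdentifier=instance_id, SnapshotType="automated"
    )["DBSnapshots"]
    latest = max(snaps, key=lambda s: s["SnapshotCreateTime"])
    # Manual snapshot names can't contain colons, so rewrite the "rds:" prefix.
    target = latest["DBSnapshotIdentifier"].replace("rds:", "detached-")
    # The manual copy survives deletion of the source instance.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
        TargetDBSnapshotIdentifier=target,
    )
    return target
```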
eleitl@lemmy.zip 1 day ago
That’s because these are not backups. With backups you still have your data even if the cloud provider has gone away.
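The bar is something like a nightly dump landing on a disk the provider can't touch. A minimal sketch, assuming a Postgres database and hypothetical connection details:

```python
# Minimal sketch of a provider-independent backup; host/user/paths are
# hypothetical, and auth is assumed via ~/.pgpass or PGPASSWORD.
import subprocess
from datetime import date

def dump_offsite(host: str, db: str, user: str, out_dir: str) -> str:
    out = f"{out_dir}/{db}-{date.today().isoformat()}.dump"
    # Custom-format dump (-Fc) written somewhere outside the cloud account.
    subprocess.run(
        ["pg_dump", "-h", host, "-U", user, "-d", db, "-Fc", "-f", out],
        check=True,
    )
    return out
```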
Nighed@feddit.uk 1 day ago
They are backups: you potentially get copies of the data in multiple locations across continents.
BUT I agree, you're relying entirely on the provider for it. Lots of vendor lock-in in the industry, unfortunately.
EffortlessGrace@piefed.social 22 hours ago
Is everyone in commercial software development finally saying, “Fuck it, we’ll run the shit ourselves”?
I’m an infrastructure and devops noob here; take my words with a grain of salt.
I need GPU clusters with ECC VRAM for research, and I found it's cheaper to buy high-ish-performance compute for my own office once than to pay AWS/Azure/GCS/etc. forever, or at least every time I want to train a custom DNN model. Sometimes I use Linode, but only for monitoring. But I can run shit at will and I have data sovereignty.
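Rough arithmetic of the kind that pushed me that way; every number below is an assumption for illustration, not a quote from any provider:

```python
# Back-of-the-envelope break-even; all figures are assumed, not real quotes.
box_cost = 15_000.0      # one-time: workstation with ECC-VRAM GPUs (assumed)
power_per_hr = 0.30      # electricity while training, USD/hour (assumed)
cloud_per_hr = 4.00      # comparable cloud GPU instance, USD/hour (assumed)

# Hours of training after which owning beats renting:
break_even_hrs = box_cost / (cloud_per_hr - power_per_hr)
print(f"break-even after ~{break_even_hrs:,.0f} GPU-hours")
# ~4,054 GPU-hours here, i.e. under six months of continuous training.
```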
Has the paradigm shifted back to developing *and* serving things in-house, now that big-tech vendor lock-in/tie-ins have so many dark patterns that scaling with them isn't cost-effective? Or is it just my own pipe dream?