Some things you should determine first:
- Total amount of data you will be backing up
- Frequency of backups
- Number of copies to keep
Plug these numbers into the cost calculators for whatever cloud service you’re hoping to use, because this honestly won’t be the cheapest way to store data off-site if the provider charges for data transfer, like S3 does for egress when you restore.
I know Cloudflare’s R2 service doesn’t charge for ingress or egress (for now), but you might be able to find something even cheaper if you’re only backing up certain types of data that can be easily compressed.
I’d also investigate cheap ways to just store an off-site drive with your data: office/work, a family member’s house, a friend’s house, etc.
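As a rough sketch of the arithmetic those calculators do (the data size, copy count, and per-GB rate below are made-up placeholders, not any particular provider’s pricing):

```shell
#!/bin/sh
# Rough monthly storage-cost estimate.
# All numbers are hypothetical -- substitute your own data size,
# retention count, and your provider's actual per-GB rate.
DATA_GB=200         # total data backed up
COPIES=4            # number of retained copies
PRICE_PER_GB=0.015  # USD per GB-month (example rate only)

awk -v gb="$DATA_GB" -v n="$COPIES" -v p="$PRICE_PER_GB" \
    'BEGIN { printf "~$%.2f/month\n", gb * n * p }'
# prints "~$12.00/month"
```

Transfer fees, per-request fees, and retrieval fees (for cold-storage tiers) come on top of this, which is why the real calculators ask for backup frequency too.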
RadDevon@lemmy.zip 3 days ago
OK, cool. That’s helpful. Thank you!
I know in general you can just grab a docker volume and then point at it with a new container later, but I was under the impression that backing up a database in particular in this way could leave you with a database in a bad state after restoring. Fingers crossed that was just bad info. 😅
supersheep@lemmy.world 3 days ago
In theory the database can end up in an invalid state if you back up the volume while the database container is running. What I do for most containers is temporarily stop them, back up the Docker volume, and then restart the container.
scrubbles@poptalk.scrubbles.tech 3 days ago
Seconded, and great callout @RadDevon@lemmy.zip. Yes, part of my script was to stop the container gracefully, tar the volume, start it again, and then copy the tar somewhere. It “should” be fine. In a production environment where you need zero downtime I’d take a different approach, but we’re selfhosters. Just schedule it for 2 am or something.
RadDevon@lemmy.zip 3 days ago
Is your script something you can share? I’d love to see your approach. I can definitely live with a few minutes of down time in the early morning.
RadDevon@lemmy.zip 3 days ago
Much simpler than my solution. I’ll look into this. Thank you!