Comment on Cheapest way to back up a *lot* of data?
roofuskit@lemmy.world 2 days ago
How much of that 50 terabytes is media downloaded from the Internet? Because the cheapest way would be to trust that it’s already backed up on the Internet, and then use one of the usual services like B2 by Backblaze or Storagebox by Hetzner to back up the rest.
morto@piefed.social 2 days ago
But… can we trust that we will have stuff available on the internet in the future?
BootLoop@sh.itjust.works 2 days ago
My TV shows and movies I don’t bother with. But my 150 GB mp3 library I keep backed up, because it’s much smaller and I know some of that stuff is not readily available online.
bridgeenjoyer@sh.itjust.works 2 days ago
Exactly. Regimes want to kill this as fast as they can to milk us of every penny with their shitty services. I don’t trust that any sites will stay up.
kumi@feddit.online 2 days ago
You can replicate and do regular monitoring that backups are still accessible.
If one goes down, you hopefully have time to figure out a replacement before the other(s) do.
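A minimal sketch of that kind of monitoring, assuming each backup replica can be probed by URL (the URLs and the injectable `fetch` parameter are hypothetical, the latter just to make the check testable):

```python
from urllib.request import urlopen
from urllib.error import URLError

def check_backups(urls, fetch=urlopen, timeout=10):
    """Return the subset of backup locations that are currently unreachable."""
    down = []
    for url in urls:
        try:
            fetch(url, timeout=timeout)  # any response counts as "still there"
        except (URLError, OSError):
            down.append(url)
    return down
```

Run it from cron and alert when the returned list is non-empty; that gives you the window to replace a dead replica before the others follow.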
RamRabbit@lemmy.world 2 days ago
Yeah, if 90% of that is movies/shows, then you really don’t need a backup of that as you can always re-download it.
Inkstainthebat@pawb.social 1 day ago
It’s kinda complicated, because a good chunk of that is data that is technically redownloadable but has been tweaked (most of my movies are a multiplexed high-res English version merged with audio from a lower-res dub). Either way, thank you for the suggestions.
roofuskit@lemmy.world 1 day ago
I suggest making a script that uses existing software to extract the dubbed audio, then backing that up and leaving the high-quality video to the web to back up.
I know it’s less than ideal, but you can automate both extracting it and muxing it back in. It may take some effort to set up, but it’s well worth avoiding the huge recurring costs of backing up that amount of data.
Just an idea to consider.
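The extract-and-remux idea could be sketched around ffmpeg like this (a sketch, not a tested script; the filenames and the assumption that the dub is audio track 1 are hypothetical):

```python
from pathlib import Path

def extract_audio_cmd(movie: Path, track: int = 1) -> list[str]:
    """Build an ffmpeg command that stream-copies one audio track out of a container."""
    out = movie.with_suffix(f".track{track}.mka")
    return [
        "ffmpeg", "-i", str(movie),
        "-map", f"0:a:{track}",  # the dubbed audio track (assumed index)
        "-c", "copy",            # stream copy: no re-encode, no quality loss
        str(out),
    ]

def remux_cmd(video: Path, audio: Path, out: Path) -> list[str]:
    """Build an ffmpeg command that muxes the saved dub back into a redownloaded video."""
    return [
        "ffmpeg", "-i", str(video), "-i", str(audio),
        "-map", "0",    # everything from the fresh download
        "-map", "1:a",  # plus the backed-up dub track
        "-c", "copy",
        str(out),
    ]

# Back up only the small .mka file; pass cmd to subprocess.run(cmd, check=True)
# to actually invoke ffmpeg.
cmd = extract_audio_cmd(Path("Movie.2019.1080p.mkv"))
```

Since both steps are stream copies, the extraction and remux are fast and lossless; only the dub tracks need to go to paid storage.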
irmadlad@lemmy.world 2 days ago
Because the cheapest way would be to trust that it’s already backed up on the Internet
That’s a shit load of downloading. LOL wow!
roofuskit@lemmy.world 2 days ago
I have 56TB of storage and the majority of that is definitely downloaded media. They call us data hoarders for a reason.
irmadlad@lemmy.world 2 days ago
Oh sure, I understand data hoarding. I was just thinking, restoring 50 TB from the internet is going to take more than a fortnight.
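The fortnight estimate holds up; a rough sketch of the arithmetic (decimal terabytes, sustained download rates assumed):

```python
def restore_days(terabytes: float, mbit_per_s: float) -> float:
    """Days to download `terabytes` at a sustained rate of `mbit_per_s`."""
    bits = terabytes * 1e12 * 8           # decimal TB -> bits
    seconds = bits / (mbit_per_s * 1e6)   # Mbit/s -> bit/s
    return seconds / 86_400

print(round(restore_days(50, 100), 1))   # ~46 days at 100 Mbit/s
print(round(restore_days(50, 1000), 1))  # ~4.6 days at 1 Gbit/s
```

So even a full gigabit line needs the better part of a week, and a typical 100 Mbit/s connection is looking at a month and a half.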
roofuskit@lemmy.world 2 days ago
It’s literally downloading the same amount of data you would be backing up, and you won’t be charged hourly for downloading it from the internet as opposed to a large storage service.
Appoxo@lemmy.dbzer0.com 1 day ago
Imagine having to back up 50 TB to S3 :p
Not everyone has a symmetric connection.
i_stole_ur_taco@lemmy.ca 2 days ago
This is a good compromise. When I was tight on backup space, I just had a “backup” script that ran nightly and wrote all the media file names to a text file and pushed that to my backup.
It would mean tons of redownloading if my storage array failed, but it was preferable to spending hundreds of dollars I didn’t have on new hardware.
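A minimal version of that kind of manifest "backup" might look like this (the extension list and paths are hypothetical; a real nightly cron job would point at the actual media root):

```python
from pathlib import Path

def write_manifest(media_root: Path, manifest: Path) -> int:
    """List every media file under media_root into a text file; return the count."""
    exts = {".mkv", ".mp4", ".avi", ".mp3", ".flac"}
    files = sorted(
        str(p.relative_to(media_root))
        for p in media_root.rglob("*")
        if p.suffix.lower() in exts
    )
    manifest.write_text("\n".join(files) + "\n")
    return len(files)
```

The manifest is a few megabytes instead of tens of terabytes, and after a failure it tells you exactly what to go redownload.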
roofuskit@lemmy.world 2 days ago
If you back up the databases of the modern-day Arrs, it’s essentially the same thing, and redownloading is already built into the software, which will fetch everything again for you. That’s my solution. I keep backups of those, of my Home Assistant, my Immich library, my Nextcloud, etc… Pirated media is, for the most part, already backed up in several places out there.
Zikeji@programming.dev 2 days ago
This is what I do. Well, I back up the entire container, but it’s functionally the same.
There’s only a few pieces of media that I have backed up manually due to their rarity, but even those I don’t really care about.