Comment on Incremental backups to optical media: tar, dar, or something else?
traches@sh.itjust.works 2 days ago
Ohhh boy, after so many people suggested I put simple files directly on the disks, I went back and rethought some things. I think I’m landing on a solution that does everything and doesn’t require me to manually manage all these files:
- fd (and any number of other programs) can produce lists of files that have been modified since a given date.
- fpart can produce lists of files that add up to a given size.
- xorrisofs can accept lists of files to add to an ISO.

So if I fd a list of new files (or don’t, for the first backup), pipe them into fpart to chunk them up, and then pass those lists into xorrisofs to create ISOs, I’ve solved almost every problem.
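Something like this, roughly (untested; the date, the source path, and the 23 GiB chunk size are placeholders, and the flags deserve a read of the man pages before I trust them):

```bash
# Rough sketch, not tested. LAST_BACKUP, SRC, and the chunk size are placeholders.
LAST_BACKUP="2024-01-01"
SRC="/tank/photos"

# 1. List files modified since the last backup (drop --changed-after for the first full run).
fd --type f --changed-after "$LAST_BACKUP" . "$SRC" > new-files.txt

# 2. Split the list into chunks that fit a 25 GB BD-R, with some headroom (size in bytes).
fpart -s $((23 * 1024 * 1024 * 1024)) -i new-files.txt -o chunk   # writes chunk.N lists

# 3. One ISO per chunk. -path-list reads the file list; -r/-J add Rock Ridge/Joliet names.
#    (Keeping the original directory layout inside the ISO may need -graft-points pathspecs.)
for list in chunk.*; do
  xorrisofs -r -J -V "photos-$(date +%F)-${list##*.}" -o "${list}.iso" -path-list "$list"
done
```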
- The disks have plain files and folders on them, no special software is needed to read them. My wife could connect a drive, pop the disk in, and the photos would be right there organized by folder.
- Incremental updates can be accomplished by keeping track of when the last backup was.
- The fpart lists are also a greppable index; I can use them to find particular files easily.
Downsides:
- Change detection is naive. Just mtime. Good enough?
- Renames will still produce new copies. Solution: don’t rename files. They’re organized well enough, stop messing with it.
- Deletions will be disregarded.
- There isn’t much rhyme or reason to how fpart splits up files. The first backup will be a bit chaotic. I don’t think I really care.
Honestly those downsides look quite tolerable given the benefits. Is there some software that will produce and track a checksum database?
Off to do some testing to make sure these things work like I think they do!
nibbler@discuss.tchncs.de 17 hours ago
your first two points can be mitigated by using checksums. it’s trivial to name the file after its checksum, but ugly. save checksums separately? save checksums in file metadata (xattrs)? this can be a bit tricky 🤣 I believe zfs already has the checksum, so the job would be to just compare lists.
restoring is just as easy; creation gets more complicated and thus more error-prone
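for the metadata route, something like this (filename made up; note most copy tools drop xattrs unless told to preserve them, so they wouldn’t travel onto the disc):

```bash
# store the checksum in the file's own metadata as a user.* extended attribute
HASH=$(sha256sum photo.jpg | cut -d' ' -f1)
setfattr -n user.sha256 -v "$HASH" photo.jpg
getfattr -n user.sha256 photo.jpg   # read it back later to compare
```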
traches@sh.itjust.works 14 hours ago
I’ve been thinking through how I’d write this. With so many files it’s probably worth using sqlite, and then I can match them up by joining on the hash. Deletions and new files can be found with different join conditions. I found a tool called ‘hashdeep’ that can checksum everything, though for incremental runs I’ll probably skip hashing if the size, times, and filename haven’t changed. I’m thinking nushell for the plumbing? It runs everywhere, though they have breaking changes frequently. Maybe rust?
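Roughly what I have in mind (untested; the paths and table names are made up):

```bash
# Build a manifest of size + sha256 + path for the whole tree
# (hashdeep: -c pick algorithm, -r recursive, -l relative paths).
hashdeep -c sha256 -r -l /tank/photos > manifest-$(date +%F).txt

# Later, audit the tree against an older manifest: -k loads known hashes, -a audits.
hashdeep -c sha256 -r -l -a -k manifest-2024-01-01.txt /tank/photos

# Hypothetical sqlite step: with the old and new manifests imported into tables
# prev(hash, path) and curr(hash, path), deletions and renames fall out of joins.
sqlite3 catalog.db <<'SQL'
-- content that disappeared entirely = true deletions
SELECT prev.path FROM prev LEFT JOIN curr ON prev.hash = curr.hash WHERE curr.hash IS NULL;
-- same content at a different path = renames/moves, no need to re-burn
SELECT prev.path, curr.path FROM prev JOIN curr ON prev.hash = curr.hash WHERE prev.path <> curr.path;
SQL
```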
ZFS checksums are done at the block level, and after compression and encryption. I don’t think they’re meant for this purpose.
nibbler@discuss.tchncs.de 13 hours ago
never heard of nushell, but it sounds interesting… it’s not the default anywhere yet, though. I’d go for bash, perl or maybe python? your comments on zfs make a lot of sense, and invalidate my respective thoughts :D
traches@sh.itjust.works 12 hours ago
I only looked at how zfs tracks checksums because of your suggestion! Hashing 2TB will take a minute; would be nice to avoid.
Nushell is neat, I’m using it as my login shell. Good for this kind of data-wrangling but also a pre-1.0 moving target.