If the data has value, then yes, duplication is a good thing up to a point. The thesis is that only 10% of the data has value, though, and therefore duplicating the other 90% is a waste of resources.
The real problem is figuring out which 10% of the data has value, which may be more obvious in some cases than others.
futatorius@lemm.ee 6 days ago
Git only duplicates opaque binary blobs; textual content gets delta-compressed when objects are packed. And it’s bad practice to version-control binary blobs anyway: the more correct approach is to version the source from which the blob is generated.
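To make the storage point concrete, here’s a minimal Python sketch of git’s classic SHA-1 object format (not git’s own code, just the documented scheme). Because the object ID is a pure function of the content, identical content within a repo is only ever stored once:

```python
import hashlib

def git_blob_oid(content: bytes) -> str:
    # Git's object ID for a blob: SHA-1 over the header "blob <size>\0"
    # plus the raw content. (On disk, loose objects are this byte string,
    # zlib-compressed; packfiles additionally delta-compress similar objects.)
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Identical content always yields the same ID, so a file that sits
# unchanged across thousands of commits is stored exactly once.
print(git_blob_oid(b"hello world\n"))
# 3b18e512dba79e4c8300dd08aeb37f8e728b8dad, same as `git hash-object`
```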
lemmyng@lemmy.ca 6 days ago
You’re missing the forest for the trees here.
Given identical client setups, two clones of a git repo are identical. That’s duplication, and it’s an intentional feature to allow concurrent development.
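You can check that for yourself with a quick Python sketch (assumes `git` is on your PATH; the repo and identities are throwaway values for illustration):

```python
import subprocess, tempfile, pathlib

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    origin, a, b = root / "origin", root / "a", root / "b"

    # Make a throwaway repo with one commit.
    git("init", "--quiet", str(origin))
    (origin / "file.txt").write_text("hello\n")
    git("-C", str(origin), "add", "file.txt")
    git("-C", str(origin), "-c", "user.name=t", "-c", "user.email=t@t",
        "commit", "--quiet", "-m", "init")

    # Clone it twice: each clone carries the full object store.
    for dest in (a, b):
        git("clone", "--quiet", str(origin), str(dest))

    # Same tip commit in both clones: identical history, fully duplicated.
    assert git("-C", str(a), "rev-parse", "HEAD") == \
           git("-C", str(b), "rev-parse", "HEAD")
```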
A CDN works by replicating content across many edge locations. Anycast is then used to serve each request from whichever of those locations is closest, which couldn’t be done reliably without content duplication.
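As a toy model of that pattern (edge names and latencies invented here, and “nearest” reduced to lowest latency): push a full copy everywhere, then let any edge answer:

```python
# Each edge holds its own complete copy of every object.
EDGES = {
    "fra": {"latency_ms": 12, "store": {}},
    "iad": {"latency_ms": 85, "store": {}},
    "nrt": {"latency_ms": 190, "store": {}},
}

def replicate(key: str, content: bytes) -> None:
    # Deliberate duplication: a full copy goes to every edge location.
    for edge in EDGES.values():
        edge["store"][key] = content

def fetch(key: str) -> bytes:
    # Anycast-style selection: the request lands on the "nearest" edge.
    # Any edge can answer only because every edge holds an identical copy.
    nearest = min(EDGES.values(), key=lambda e: e["latency_ms"])
    return nearest["store"][key]

replicate("/index.html", b"<h1>hello</h1>")
print(fetch("/index.html"))
```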
Blockchains work by checking new blocks against previous blocks. To fully guarantee the validity of a block you have to validate every block before it, going all the way back to the start of the chain. This is why each full node on a chain keeps a complete local copy of it. Duplication.
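A stripped-down hash chain in Python shows why (this is the general pattern, not any particular chain’s block format): each block commits to its predecessor’s hash, so verifying the tip means re-walking the entire history you hold locally:

```python
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256(f"{prev_hash}:{payload}".encode()).hexdigest()

def build_chain(payloads: list) -> list:
    chain, prev = [], "0" * 64  # all-zero hash stands in for the genesis parent
    for p in payloads:
        h = block_hash(prev, p)
        chain.append({"prev": prev, "payload": p, "hash": h})
        prev = h
    return chain

def verify(chain: list) -> bool:
    # Validating the newest block requires every earlier block: this is
    # why a full node needs the complete chain on hand.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or \
           block_hash(prev, block["payload"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["tx1", "tx2", "tx3"])
assert verify(chain)
chain[1]["payload"] = "tampered"
assert not verify(chain)  # any edit invalidates every later block
```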
My point is that we have a lot of processes that rely on full or partial duplication of data, for several purposes: concurrency, faster content delivery, verification, etc. Duplicated data is a feature, not a bug.