Re: [Duplicity-talk] Rollup Functionality and Parity?


From: Colin Ryan
Subject: Re: [Duplicity-talk] Rollup Functionality and Parity?
Date: Mon, 18 Aug 2008 15:35:30 -0400
User-agent: Thunderbird 2.0.0.16 (Windows/20080708)

Kenneth Loafman wrote:
Colin Ryan wrote:
In thinking about the uses of duplicity, I'm always concerned about the
trade-off between having to do the occasional large full backup versus
"infinite incrementals". I see the "infinite incremental" approach as
the dirty little secret of the hosted storage business where no-one
seems to acknowledge the sensitivity of this technique to the corruption
of even a single bit in a single file.

That is the main risk of almost any backup system, including duplicity.
 That's the reason we recommend regular full backups, plus local and
remote copies of each.

Yes, this point is almost philosophical in nature. Am I correct in assuming that with duplicity the corruption of an incremental would leave anything prior in the backup chain accessible?
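
(If so, I suppose getting at the still-good data would just be a matter of asking duplicity for a point in time before the damaged increment, roughly along these lines; the URL and the 3-day offset are only placeholders:)

    import subprocess

    # Restore the tree as it stood 3 days ago, i.e. before the damaged
    # increment.  Assumes PASSPHRASE is already exported; the scp URL and
    # the "3D" offset are illustrative only.
    subprocess.check_call(
        ["duplicity", "-t", "3D",
         "scp://user@offsite//backups/colin", "/tmp/restore-check"])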

My concern about the latter is that, should even one incremental in the
chain become corrupt, everything from that point on (I assume) is
unrecoverable. So I was wondering if there is any technique that could
be used to "periodically" roll up the incrementals on the remote
repository side into a full, creating a "new single full" which
contains all the incrementals, while allowing duplicity to simply
continue on with incremental backups on the client end. This would
simply - for what it's worth - reduce the number of files that must be
100% intact, but would allow one to always run duplicity in just
incremental mode while periodically generating a full.

Such a rollup would be possible, but it would require a lot of network
bandwidth, equivalent to restoring all of the changed files and their
increments, then writing them back to the host as a single incremental
backup, verifying, then deleting the intermediate incrementals.


I guess I was looking for some insights regarding how one might do this out of band. Imagine, if you will, that you have your local filesystem and you are moving that off to a remote site. From a system local to the remote site (thereby localizing the network traffic), could one somehow manipulate and roll up the full + incrementals into a "new full", in such a way that the duplicity client on the local system simply sees the backend as having a recent incremental? We would then continue taking incrementals off site and "artificially build" the fulls on the backend.
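
Purely as a back-of-the-envelope sketch of what I mean (duplicity has no built-in rollup, so this just rebuilds a fresh chain out of band on a host sitting next to the storage; the file:// URL, the staging path and the passphrase handling are all made up):

    import os
    import subprocess

    BACKEND = "file:///srv/duplicity/colin"   # repository, local to the rollup host
    STAGING = "/var/tmp/duplicity-rollup"     # scratch space; must not exist yet

    env = dict(os.environ, PASSPHRASE="...")  # the rollup host has to hold the key

    # 1. Materialize the current state of the chain (full + every increment).
    subprocess.check_call(["duplicity", "restore", BACKEND, STAGING], env=env)

    # 2. Write a brand-new full from the restored tree back to the same backend.
    subprocess.check_call(["duplicity", "full", STAGING, BACKEND], env=env)

    # 3. Sanity-check the new full against the restored tree.
    subprocess.check_call(["duplicity", "verify", BACKEND, STAGING], env=env)

    # 4. Once it verifies, keep only the newest full chain and drop the old one.
    subprocess.check_call(
        ["duplicity", "remove-all-but-n-full", "1", "--force", BACKEND], env=env)

Whether the client could then seamlessly keep laying incrementals on top of the rebuilt chain is exactly the part I'm unsure about; its view of the signatures would have to line up with what the rebuild produced.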

As a side note, has anyone put any thought into using Par2 parity files
on the tar files that duplicity generates? Yes, this would increase the
back-end storage, but it would allow the file to be recovered provided
the corruption affected no more than 5-10-20% of the file.

Yes, par2 has been studied.  It's in the plans, but down the list a bit.
Of course, finding 5-10% of a file corrupted should alert you to some
serious hardware and/or network problems.
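
For what it's worth, wrapping the volumes after the fact needs nothing from duplicity itself. Something as small as the following, assuming the par2 command-line tool is installed on the storage host and using a made-up repository path, would cover the 10% case:

    import glob
    import subprocess

    REPO = "/srv/duplicity/colin"   # made-up path to the backend files

    # Create ~10% parity alongside every duplicity volume; running
    # "par2 repair <volume>.par2" later would fix corruption within that budget.
    for volume in glob.glob(REPO + "/duplicity-*.gpg"):
        subprocess.check_call(["par2", "create", "-r10", volume + ".par2", volume])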

Cheers

Not sure I've explained my thoughts clearly but...

Looks like it.

...Ken

