Re: [Duplicity-talk] Create full backup from incremental


From: Eric O'Connor
Subject: Re: [Duplicity-talk] Create full backup from incremental
Date: Fri, 17 Apr 2015 10:42:49 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.6.0

On 04/17/2015 06:35 AM, Kenneth Loafman wrote:
> I can't help but think that the space and network requirements are
> going to be the same as that of a full backup or restore.

Hmm, I think the only way the space and network requirements would be
equal is if every file changed between every backup. I haven't touched
~/documents/old_laptop/highschool/floppies/homework-17.rtf in many
years, and I don't regularly edit my photos or music (which are also the
biggest pain point when doing full backups).
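
To put rough numbers on it (the figures below are made up, purely to
illustrate the ratio), here's the back-of-the-envelope version in Python:

    # Hypothetical sizes, just for illustration: how much goes over the
    # wire for a fresh full backup versus an incremental when only a
    # small fraction of the data has changed since the last run.
    total_gb = 250.0     # photos, music, old documents, the works
    changed_gb = 2.0     # what actually changed since the last backup

    full_upload = total_gb           # a new full re-sends everything
    incremental_upload = changed_gb  # an incremental sends only the changes

    print(f"full: {full_upload:.0f} GB, incremental: {incremental_upload:.0f} GB")
    print(f"a full costs {full_upload / incremental_upload:.0f}x the traffic")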

On 04/17/2015 06:35 AM, Kenneth Loafman wrote:
> Rather than doing that, why not implement a backup where instead of
> doing an incremental based on the current chain, you reset the
> incremental process and start over with a second chain off the same
> full.  I have no name for that, and actually just thought of it, but
> it would solve some of the problems that people seem to have with
> backups. That way you reuse your base full backup and still have
> incrementals.

Well, then the size of each successive chain grows without bound as you
add and remove files. For example, if you start off with a small backup
set and then add your music library, every new chain has to re-upload
the music library, because it diffs against the original full. You also
still have a processing step at restore time, and it's comforting to
know that a semi-recent version is stored verbatim, without needing to
apply diffs to your tar volumes.
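
Here's a toy model of that (my own sketch, hypothetical numbers, not
anything duplicity actually does) showing why a chain restarted off the
same old full has to re-send everything added since that full was taken:

    # Toy model of "second chain off the same full".
    base_full_gb = 10.0        # data that existed when the full was taken
    added_later_gb = 50.0      # e.g. a music library added after the full
    churn_per_backup_gb = 1.0  # ordinary day-to-day changes

    def first_incremental_of_restarted_chain():
        # A fresh chain diffs against the old full, so everything added
        # since that full goes over the wire again.
        return added_later_gb + churn_per_backup_gb

    def next_incremental_of_current_chain():
        # Continuing the existing chain only sends the recent changes.
        return churn_per_backup_gb

    print("restarted chain:", first_incremental_of_restarted_chain(), "GB")
    print("continued chain:", next_incremental_of_current_chain(), "GB")

And every time you reset the chain, you pay that added_later_gb again.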

On 04/17/2015 06:40 AM, Scott Hannahs wrote:
> I am still not clear how this scheme could be implemented without
> the remote machine having all the files and lengths etc.  But this
> meta data is not supposed to be in the clear on the remote machine
> ever. Thus if it is local then all the incremental files would need
> to be transferred back to the local machine for combining with the
> full. Not saving bandwidth which I believe is the original intent.

The remote machine (say, S3) doesn't have any use for files and lengths
-- it's just a dumb bucket of bits. Anyway, Duplicity already stores a
bunch of metadata locally, such as a rolling checksum for every file
that's backed up. Unless that local metadata became corrupted or lost,
why would it need to be repeatedly transferred back?
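
For what it's worth, here's a minimal sketch of the rolling-checksum idea
(a simplified rsync/librsync-style weak checksum, not duplicity's actual
code) to show why locally cached per-block signatures are enough to build
the next delta without pulling old volumes back down:

    # Simplified rsync-style weak rolling checksum. The block size and
    # data below are arbitrary; the point is that the signatures are tiny
    # and can live in a local cache, so computing the next delta never
    # requires downloading the previous backup volumes.
    BLOCK = 2048

    def weak_checksum(block: bytes) -> int:
        # Adler-32-ish sum: cheap to compute and cheap to "roll" forward
        # one byte at a time when scanning a changed file.
        a = sum(block) % 65536
        b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % 65536
        return (b << 16) | a

    def signatures(data: bytes) -> list[int]:
        # What gets kept locally: one small checksum per block, not the data.
        return [weak_checksum(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

    old_sigs = set(signatures(b"old file contents " * 500))
    new_data = b"old file contents " * 400 + b"new tail " * 100
    new_sigs = signatures(new_data)

    # Blocks whose checksums match a stored signature can be referenced
    # instead of re-uploaded.
    unchanged = sum(1 for s in new_sigs if s in old_sigs)
    print(f"{unchanged} of {len(new_sigs)} blocks need no re-upload")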

Anyway, it sounds like this isn't wanted, so I'll be on my way. Cheers :)

Eric



