

From: Eric B.
Subject: [Duplicity-talk] Re: Re: Duplicity best practices?
Date: Sat, 15 Mar 2008 11:18:02 -0400

> > I'm just trying to figure out how to use that effectively myself.  Right
> > now, my script runs the verify but always fails.  I have to do additional
> > testing and see where the problems lie, but I would think/expect that good
> > business practice would require a full verify after any backup to ensure
> > that the backup was done properly and is successful.  However, I am not
> > sure what to do if a verify fails; does one delete the most recent backup
> > and try again?
>
> First off, make sure the verification is *supposed* to succeed. For
> example, if you are taking a backup of a live file system undergoing
> changes, the expectation is that it will not succeed (if possible in
> the environment, file system snapshots are greatly recommended).

Agreed.  I have been running into exactly that issue, where I need to make a 
snapshot of the fs prior to running duplicity on it.  However, unless I 
misunderstand how duplicity is meant to be used, I am having a lot of trouble 
getting it to work with the paths inside my snapshot.  The problem is that 
once I create the snapshot, I need to mount it somewhere to get access to it. 
So, for instance, if I mount my snapshot at /mount/snapshot, then I am not 
sure how I would run duplicity.
 # duplicity --include /etc /mount/snapshot file:///duplicity

doesn't seem to work; duplicity tells me that /etc cannot be found.  Is there 
a way to tell duplicity that all paths given to the --include 
/ --include-filelist parameters are relative to the source (ie: 
/mount/snapshot) rather than to / itself?  Or is that how it is supposed to 
work already and I'm just doing something wrong on my end?
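
My best guess from the man page is that the selection paths have to be 
written as they appear under the source directory rather than at their 
original location, something along these lines (the --exclude '**' to limit 
the backup to just that subtree is only my reading of the docs, and the paths 
are just my example setup):

 # duplicity --include /mount/snapshot/etc --exclude '**' /mount/snapshot file:///duplicity

but I haven't been able to confirm whether that is the intended usage, or 
whether there is a cleaner way to re-root the paths.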


> Are your failures consistent with files legitimately being modified?

Yes, they are at the moment.  The question is rather what to do if the verify 
fails legitimately.  Is there a way to delete only the backup that was just 
made?  ie: if the last duplicity run was an incremental, delete that 
incremental backup; if it was a full, delete the full backup?  From what I 
read in the duplicity documentation, I can't figure out how to do that from 
within duplicity itself.
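
The closest thing I can find is the remove-older-than command, which prunes 
whole backup sets by age rather than dropping just the most recent one, e.g. 
(the 1D cutoff and the target URL are only my example):

 # duplicity remove-older-than 1D --force file:///duplicity

but that doesn't seem to give me a way to say "throw away only the set the 
last run created", which is what I would want after a failed verify.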


> > The other thing I saw "missing" is a discussion relating to the --volsize
> > parameter.  I have given this a lot of consideration, and I get the
> > feeling that keeping the backup in a single volume is the best method; my
> > reasoning being that there would be less chance of finding a corrupt
> > piece.  However, I am not entirely sure about this theory and would love
> > to hear feedback/comments about it.  For my needs, I am backing up to a
> > local NAS, so I don't need to worry about slow network transfers;
> > however, I don't know if a single volume is good for remote uploads or
> > not.  Any thoughts?
>
> (1) A huge volume size means you need a huge amount of free space in
> your temp dir.

Why?  Wouldn't the same total amount of free space be required either way? 
Even if I were to split a 2G backup into 100 files, I would still need 2G of 
total space in tmp, wouldn't I?

> As for corruption: Depends on what you are trying to protect
> against. Bitflips in the data aren't going to be less likely under
> typical circumstances. I guess meta data corruption such that a file
> somehow becomes unavailable might become slightly less likely. The
> question is whether that is offset by the risk of triggering some edge
> case in some software involved in handling huge files...

Agreed.  I think it is largely a question of the backend storage.  If, 
indeed, it is a remote upload to S3, or a webdav folder, etc, then it might 
be more efficient to use smaller files.  But my gut tells me that the greater 
the number of files, the greater the chance for something to go wrong; a file 
not copied properly, an error while reassembling the files, etc.  But using 
5G backup files seems excessive as well.  I would think that something in the 
range of 250MB-500MB would probably be a decent compromise...  but that is 
based on nothing but pure gut feeling.
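
If I do end up tuning it, my understanding is that it is just a matter of 
passing --volsize with the size in megabytes (the 250 here is only my guess 
at a reasonable value, and the paths are carried over from my earlier 
example):

 # duplicity --volsize 250 /mount/snapshot file:///duplicity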

Thanks,

Eric






