Re: [Duplicity-talk] Re: Duplicity best practices?


From: Peter Schuller
Subject: Re: [Duplicity-talk] Re: Duplicity best practices?
Date: Fri, 14 Mar 2008 19:12:48 +0100
User-agent: Mutt/1.5.17 (2007-11-01)

> I'm just trying to figure out how to use that effectively myself.  Right 
> now, my script runs the verify but always fails.  I have to do additional 
> testing and see where the problems lie, but I would think/expect that good 
> business practice would require a full verify after any backup to ensure 
> that the backup was done properly and is successful.  However, I am not sure 
> what to do if a verify fails; does one delete the most recent backup and try 
> again?

First off, make sure the verification is *supposed* to succeed. For
example, if you are backing up a live file system that is undergoing
changes, the expectation is that verification will not succeed (if the
environment allows it, backing up from a file system snapshot is
strongly recommended).
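
For instance, a minimal sketch of the snapshot approach using LVM; the
volume group, mount point and backup URL are only placeholders for
whatever your setup actually uses:

    # Create and mount a read-only snapshot of the source volume
    lvcreate --snapshot --size 1G --name home-snap /dev/vg0/home
    mount -o ro /dev/vg0/home-snap /mnt/home-snap

    # Back up the frozen snapshot, then verify against that same snapshot
    duplicity /mnt/home-snap scp://user@nas/backups/home
    duplicity verify scp://user@nas/backups/home /mnt/home-snap

    # Clean up
    umount /mnt/home-snap
    lvremove -f /dev/vg0/home-snap

Since the snapshot does not change between the backup and the verify,
a failing verify then actually means something.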

If verification does not succeed even though it should, that points to
some kind of bug, or to an actual problem with the backups.

Are your failures consistent with files legitimately being modified?
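
If I remember correctly, bumping the verbosity makes verify list the
individual files it considers changed, which should make it obvious
whether they are files that are legitimately being written to (the URL
and paths are again just examples):

    duplicity verify -v8 scp://user@nas/backups/home /home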

> The other thing I saw "missing" is a discussion relating to the --volsize 
> parameter.  I have given this a lot of consideration, and I get the feeling 
> that keeping the backup in a single volume is the best method; my reasoning 
> being that there would be less chance of finding a corrupt piece.  However, 
> I am not entirely sure about this theory and would love to hear 
> feedback/comments about it.  For my needs, I am backing up to a local NAS so 
> I don't need to worry about slow network transfers, however, I don't know if 
> a single volume is good for remote uploads or not.  Any thoughts?

(1) A huge volume size means you need a huge amount of free space in
your temp dir.

(2) Huge sizes do not necessarily play well with all backends (e.g.,
FTP servers with 2 GB limits, Amazon S3 with its per-file limit).

(3) Connectivity errors, or other problems, will cause the *entire*
backup to be retried instead of just a manageably small volume.

That said, I personally tend to raise the volume size to 256 MB or so
to keep things reasonable. With several gigs, or tens/hundreds of gigs,
being backed up, the default 5 MB volume size yields an unreasonable
number of files.
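
The volume size is given in megabytes on the command line; something
along these lines (source path and URL being examples only):

    duplicity --volsize 256 /home scp://user@nas/backups/home

With 256 MB volumes, a 50 GB backup ends up as roughly 200 volume
files, instead of roughly 10000 with the 5 MB default.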

As for corruption: it depends on what you are trying to protect
against. Bit flips in the data are not going to be any less likely
under typical circumstances. I suppose metadata corruption that somehow
makes a file unavailable might become slightly less likely with a
single volume. The question is whether that is offset by the risk of
triggering some edge case in whatever software ends up handling huge
files...

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <address@hidden>'
Key retrieval: Send an E-Mail to address@hidden
E-Mail: address@hidden Web: http://www.scode.org
