
Re: [Duplicity-talk] Large backups to S3


From: Peter Schuller
Subject: Re: [Duplicity-talk] Large backups to S3
Date: Sat, 3 Apr 2010 12:13:40 -0400

> How does duplicity handle S3 bucket overflows?  I have several hundred GB of
> data in a large directory tree that I would like to start backing up to S3.
> I'm really loath to attempt to partition this data into buckets by hand.
> I assume that duplicity can do some sort of magic for me?

What "overflow" are you concerned with? Unless you're backing up
petabytes of data (or whatever the limit would be) you can effectively
consider your S3 bucket to be infinite in size.
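As for partitioning it by hand: duplicity splits each backup into
fixed-size volume files on its own, so a single bucket (or a prefix within
one) is enough. Roughly something like this (bucket name and path are just
placeholders, and the exact S3 URL scheme depends on your duplicity
version; credentials come from the environment):

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    # back up /data into one bucket; duplicity uploads it as many volume files
    duplicity --volsize 100 /data s3+http://my-backup-bucket/data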

If you're backing up hundreds of gigs, I would suspect the concerns
you'll have include:

* Cost. You'll have to make periodic full backups, as you don't want an
ever-growing chain of incremental backups, and each new full means
uploading and storing the complete data set again (see the sketch after
this list).
* Time to completion. Without parallelization, backing up hundreds of GB
of data will likely take significant time.
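
To keep the incremental chain bounded, a common pattern is to let
duplicity start a new full backup after a given age and prune old chains.
A minimal sketch, using the same hypothetical bucket as above and example
time intervals:

    # start a new full chain once the last full is older than one month,
    # otherwise do an incremental against it
    duplicity --full-if-older-than 1M /data s3+http://my-backup-bucket/data
    # keep only the two most recent full chains to cap storage cost
    duplicity remove-all-but-n-full 2 --force s3+http://my-backup-bucket/data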

-- 
/ Peter Schuller



