duplicity-talk

Re: [Duplicity-talk] Big bandwidth bill from Rackspace


From: Joseph D. Wagner
Subject: Re: [Duplicity-talk] Big bandwidth bill from Rackspace
Date: Mon, 10 Jun 2013 14:47:13 -0700
User-agent: Roundcube Webmail/0.8.6

On 06/10/2013 12:20 pm, address@hidden wrote:

On 10.06.2013 21:10, Lou wrote:

I recently got hit with a 200 USD bill from Rackspace for bandwidth
while using duply against their Cloud Files backend. I'd like to find
out why - the backup source didn't change much, and the daily e-mail
reports that duply generates don't show any big numbers. Does anyone
know if perhaps the "verify" part of backup_verify_purge is doing
something traffic-intensive?

Yes. Essentially it restores every file of the latest backup and
compares it to the original path in the local file system. Therefore
pretty much all of the last full and incremental volumes have to be
downloaded again.

Why does it need to download the entire file? Why not just download a list of files and their checksums (CRC32/64, or if paranoid, SHA-1/256/512)? This would minimize bandwidth usage for verify operations and be just as effective as comparing the whole file.

Joseph D. Wagner


