I'm trying to perform a test restore of the S3 backups I've been making of a
large fileserver (approx. 700 GB, approx. 100,000 files), which comes to 236
volumes (with volsize set to 2000, i.e. 2000 MB per volume) in a full backup.
The duplicity version is duplicity-0.6.17-1.el6.x86_64.
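For reference, the backup is created with a command roughly along these lines
(the source path and bucket/prefix below are placeholders, not the real ones):

    duplicity full --s3-use-new-style --volsize 2000 \
        /data/fileserver s3+http://<bucket>/<prefix>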
I've made two attempts to perform a full test restore to another machine, and
in both cases it eventually gave up. It got to volume 127 and then kept
failing with:
    Download s3+http://...//duplicity-full.20120304T000513Z.vol127.difftar.gpg
    failed (attempt #9993, reason: IncompleteRead: IncompleteRead(0 bytes read,
    1622573204 more expected))
Eventually, when it hit the retry limit of 9999, it printed the following, and
it is now hung, doing nothing:
    BackendException: Error downloading
    s3+http://...//duplicity-full.20120304T000513Z.vol127.difftar.gpg
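As a next step I could try fetching the problem volume outside duplicity, to
rule out the object itself, e.g. with something like s3cmd (the bucket name
below is a placeholder):

    s3cmd get s3://<bucket>/duplicity-full.20120304T000513Z.vol127.difftar.gpg \
        /tmp/vol127.difftar.gpg

If that download also truncates, the problem is presumably on the S3/network
side rather than in duplicity.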
A couple of other things I notice. The duplicity process has grown to almost
4 GB (of virtual memory, per the ps output below) - is this expected? (The
machine has plenty of RAM and swap, so this in itself shouldn't be a problem.)
    root 31336 9.1 1.8 3801340 613756 pts/2 Sl+ Apr18 673:44
    /usr/bin/python /usr/bin/duplicity --s3-use-new-style --num-retries 9999
    --tempdir /opt/restore-tmp --archive-dir /opt/restore-archive restore
    s3+http://.../ /opt/restore-test
Also, there are lots of gpg processes running (presumably left behind?).
However, it's nowhere near one per volume: there are 38 gpg processes, and 126
volumes had been processed before we hit the problem.
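For the record, I counted them with something like the following (the [g]pg
pattern just keeps grep from matching its own command line):

    ps ax | grep '[g]pg' | wc -l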
Any thoughts? Is anyone else successfully using duplicity on filesystems this
large?