
Re: [Duplicity-talk] --num-retries doesn't work when fetching signature


From: Simon Blandford
Subject: Re: [Duplicity-talk] --num-retries doesn't work when fetching signature files
Date: Wed, 14 Jan 2009 08:55:27 +0000
User-agent: Thunderbird 2.0.0.18 (X11/20081120)

Hi Kenneth,

I now know the problem with the signature files was a scripting error on my part: I was only getting 5 retries because I wasn't setting the --num-retries option correctly. However, I am still getting timeouts during bucket listing. The output below is from a restore command with -v9. This is using duplicity-0.5.06 and python-boto-1.6a.

I notice that S3 does sporadically become unresponsive and needs retrying, not just from duplicity but also when using Amazon's EC2 tools. And this includes accessing S3 from an EC2 instance, not just from outside Amazon's infrastructure.
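For what it's worth, the retry behaviour I'd expect around the bucket listing amounts to a loop like the sketch below. This is not duplicity's actual code; `flaky_list` is a stand-in for the real boto call, and the fixed delay is just an assumption:

```python
import socket
import time

def retry(fn, num_retries=5, delay=1.0, exceptions=(socket.timeout, IOError)):
    """Call fn(), retrying on transient errors up to num_retries times."""
    for attempt in range(num_retries):
        try:
            return fn()
        except exceptions:
            if attempt == num_retries - 1:
                raise  # out of retries: propagate the last error
            time.sleep(delay)  # simple fixed pause between attempts

# Example: a stand-in listing call that times out twice, then succeeds.
calls = {"n": 0}
def flaky_list():
    calls["n"] += 1
    if calls["n"] < 3:
        raise socket.timeout("The read operation timed out")
    return ["vol1.difftar.gpg", "vol2.difftar.gpg"]

result = retry(flaky_list, num_retries=5, delay=0)
print(result)  # -> ['vol1.difftar.gpg', 'vol2.difftar.gpg']
```

If every S3 access in duplicity went through something like this, honouring --num-retries, the sporadic timeouts above would only be fatal after the full retry budget was exhausted.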

Regards,
SimonB

...
Listed s3+http://some.company.com/duplicity-inc.2008-11-24T23:35:42Z.to.2008-11-25T23:35:55Z.vol20.difftar.gpg
Listed s3+http://some.company.com/duplicity-inc.2008-11-24T23:35:42Z.to.2008-11-25T23:35:55Z.vol21.difftar.gpg
Listed s3+http://some.company.com/duplicity-inc.2008-11-24T23:35:42Z.to.2008-11-25T23:35:55Z.vol22.difftar.gpg
Listed s3+http://some.company.com/duplicity-inc.2008-11-24T23:35:42Z.to.2008-11-25T23:35:55Z.vol23.difftar.gpg
Using temporary directory /tmp/duplicity-fxmnVt-tempdir
Traceback (most recent call last):
  File "/usr/bin/duplicity", line 583, in ?
    with_tempdir(main)
  File "/usr/bin/duplicity", line 577, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 506, in main
    globals.archive_dir).set_values()
  File "/usr/lib64/python2.4/site-packages/duplicity/collections.py", line 524, in set_values
    backend_filename_list = self.backend.list()
  File "/usr/lib64/python2.4/site-packages/duplicity/backends/botobackend.py", line 210, in list
    for k in self.bucket.list(prefix = self.key_prefix, delimiter = '/'):
  File "/usr/lib/python2.4/site-packages/boto/s3/bucketlistresultset.py", line 30, in bucket_lister
    delimiter=delimiter)
  File "/usr/lib/python2.4/site-packages/boto/s3/bucket.py", line 200, in get_all_keys
    body = response.read()
  File "/usr/lib64/python2.4/httplib.py", line 460, in read
    return self._read_chunked(amt)
  File "/usr/lib64/python2.4/httplib.py", line 509, in _read_chunked
    value += self._safe_read(chunk_left)
  File "/usr/lib64/python2.4/httplib.py", line 555, in _safe_read
    chunk = self.fp.read(min(amt, MAXAMOUNT))
  File "/usr/lib64/python2.4/httplib.py", line 971, in read
    s = self._read()
  File "/usr/lib64/python2.4/httplib.py", line 947, in _read
    buf = self._ssl.read(self._bufsize)
sslerror: The read operation timed out


Kenneth Loafman wrote:
Yes, they should be.  Would you mind re-running duplicity with the -v9
option and posting the log from the command line down?  If it's long,
post the first 200 lines, skip the middle, and post the last 200 lines.

...Ken

Simon Blandford wrote:
Sorry, forgot to say I'm running duplicity-0.5.03 on CentOS 5.3.

Simon Blandford wrote:
I set --num-retries to 50, but Duplicity sometimes gives up after 5
attempts when trying to fetch signature files. This causes some backups
or restores to fail.

Shouldn't all access to S3 be retried --num-retries number of times?


------------------------------------------------------------------------

_______________________________________________
Duplicity-talk mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/duplicity-talk




