From: Kenneth Loafman
Subject: Re: gpg: [don't know]: invalid packet (ctb=14) - WAS: [Duplicity-talk] Rollup Functionality and Parity?
Date: Wed, 20 Aug 2008 06:52:30 -0500
User-agent: Thunderbird 2.0.0.16 (X11/20080724)

address@hidden wrote:
> Thanks for your answer
> 
>>> As I have to use an FTP backup space and have now hit the "gpg ctb 14"
>>> error for the 4th time, I am interested in a best practice or some
>>> other solution besides doing a full backup, which is what I do
>>> every time this occurs.
>>>     
>>
>> Well, my take on it is that if you are having repeated issues like
>> that, it is either because of some bug or incompatibility that
>> systematically introduces incorrect data, or because of actual
>> corruption (bitrot, a bit flipped in transit, etc.).
>>
>> If there is a bug in duplicity, it should be fixed; but if there is
>> actual corruption going on, the question is whether you might want to
>> switch your backup location to some place that doesn't corrupt files.
>>
>> That is not to say I am against improving error recovery in
>> duplicity; I am 100% for it.
>>   
> 
> I agree with that. First I have to find out when the error actually
> occurs. Is it
> a) during backup creation,
> b) on the local filesystem (write and read),
> c) during transfer, or
> d) on the remote fs (even after it has been stored for a while)?
> 
>>> I like the idea of parity, as this could help if there is minor
>>> damage. But a maybe even easier way (at least easier to implement)
>>> could be this:
>>> a) do a backup to a remote filesystem but keep a copy of the
>>> backup files locally
>>> b) verify the backup files against their signature
>>> c) retransfer only the defective files (repeating step b a limited
>>> number of times, and reporting a severe error if it keeps failing);
>>> a rough sketch follows below
>>>     
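>>> A rough, untested sketch of how (a)-(c) could be approximated by
>>> hand today, comparing md5 checksums rather than verifying the
>>> duplicity signatures; the host and paths are placeholders:
>>>
>>>   for f in /var/backups/dup/duplicity-*.gpg; do
>>>       # checksum the local copy of this volume
>>>       local_sum=$(md5sum < "$f" | cut -d' ' -f1)
>>>       # stream the remote copy back and checksum it too
>>>       remote_sum=$(lftp -e "cat /backup/${f##*/}; quit" \
>>>                    ftp.example.com | md5sum | cut -d' ' -f1)
>>>       # retransfer only a volume whose checksums differ
>>>       if [ "$local_sum" != "$remote_sum" ]; then
>>>           lftp -e "put $f -o /backup/${f##*/}; quit" ftp.example.com
>>>       fi
>>>   done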
>>
>> [snip]
>>
>>  
>>> Does this all make sense to you guys? Any advice on how to get
>>> around the gpg error at all, other than not using FTP space?
>>>     
>>
>> Ignoring the cause of the problem, keeping local files might be a good
>> way to make recovery more efficient. But I am really skeptical about
>> the usefulness of the backup if you are systematically getting
>> corrupted or incorrectly created files.
>>   
> 
> Well, that's the point: I couldn't make out a pattern. All I can
> say is that it happens from time to time.
> I don't even know whether it is gpg at all, as the error is not
> meaningful enough.
> 
>> Particularly if it is corruption, since that would mean existing full
>> backups could get blown away. If it is a software bug that causes files
>> to be written incorrectly, at least it only affects things insofar as
>> incrementals are made. Your problem in this case, if I remember
>> correctly from previous posts, is that ideally you would just like to
>> be able to "forget" a previous incremental and re-do it, instead of
>> re-doing a full backup? That is, the problem for you isn't that the
>> initial full backup becomes corrupt after its creation?
>>   
> 
> Yeah, but mainly because I can't detect which incremental/full
> (backup file set) is actually damaged.
> 
>> One possibility is to run your duplicity session locally and use the
>> features of a client like lftp to upload the files (mirror -R on the
>> directory and it acts similarly to rsync in that it figures out which
>> files are missing and uploads them).
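>> For example, something along these lines might work (untested; the
>> local path and host are placeholders):
>>
>>   # back up to a local directory first
>>   duplicity /home/me file:///var/backups/dup
>>   # then mirror that directory to the FTP server; like rsync, only
>>   # missing or changed files get uploaded
>>   lftp -e "mirror -R /var/backups/dup /backup; quit" ftp.example.com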
>>   
> 
> Worth a try. In case of an error I would only have to compare the
> contents of the two repositories.
> 
>> This could be a good way to find out what's going on as well. For
>> example, if you still run into the same problem when backing up
>> locally, that means the problem is not any kind of server-side
>> corruption (unless you're massively unlucky and it happens in both
>> places). Alternatively, if the local copy stays good, you should be
>> able to discover whether the contents of the remote files have
>> changed relative to the local ones.
>>
>> That's if you'd be willing to investigate a bit more to try to narrow
>> down the problem and see if it is a duplicity bug or not.
>>   
> 
> That's why I posted this here: I really want to find out what's going on.
> 
>> As for keeping a local copy of the files, and implementing par2
>> support, both are on my personal todo list. Not sure when I will get
>> to them.
>>   
> 
> Possibly that would only fight the symptoms, though, and probably not
> prevent the error itself.
> 
>> Hmm. In the meantime, one possible improvement could be to simply
>> emit the md5sums of uploaded files as part of regular duplicity
>> logging. Say that gets emitted at verbosity 5; you could then run your
>> backups at verbosity 5 with the output logged to some file (e.g. using
>> script(1)). The next time you hit this problem, you would be able to
>> go back and find the checksum of the offending file at the time it was
>> uploaded. This would perhaps be the easiest way to get a feel for
>> whether the contents of the file actually changed. It won't, however,
>> say whether the file was corrupted in transit or corrupted server-side
>> after the transfer.
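>> Until something like that exists, a manual approximation might look
>> like this (untested; paths and host are placeholders, and it assumes
>> you also keep a local copy of the volumes as discussed above):
>>
>>   # capture the whole session, including all verbosity 5 output
>>   script -c "duplicity -v5 /home/me ftp://user@ftp.example.com/backup" \
>>          backup.log
>>   # record checksums of the locally kept volumes for later comparison
>>   md5sum /var/backups/dup/duplicity-*.gpg >> volume-md5s.log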
>>
>> Would that be of interest to you?
>>   
> 
> Could you give me an example of the output? Would it show the name of
> the offending file? Because right now, even at verbosity 9, I
> unfortunately don't see it. Please see the attached log.
> 
> Thanks a lot .. ede

After looking at the log, it seems to me that GPG errors are getting
only one try.  Once the pipe is broken (the GPG task has exited on
error), we retry the IO, but do not back up and restart GPG itself.

I'll look into it.

...Thanks,
...Ken

