duplicity-talk

From: edgar . soldin
Subject: Re: gpg: [don't know]: invalid packet (ctb=14) - WAS: [Duplicity-talk] Rollup Functionality and Parity?
Date: Wed, 20 Aug 2008 13:26:13 +0200
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.16) Gecko/20080708 Thunderbird/2.0.0.16 Mnenhy/0.7.5.0

Thanks for your answer

As I have to use an ftp backup space and have now hit the "gpg ctb=14" error for the fourth time.. I am interested in a best practice or some other solution besides doing a full backup, which is what I do every time this occurs.

Well, my take on it is that if you are having any kind of repeated
issues like that, it would either be because of some bug or
incompatibility that systematically introduces incorrect data, or
because of actual corruption (bitrot, a bit flip in transit, etc.).

If there is a bug in duplicity, that should be fixed; but if there is
actual corruption going on, the question is whether you might want to
switch your backup location to some place that doesn't corrupt files.

This does not mean I am not 100% for improving error recovery in
duplicity.

I agree on that .. I first have to find out when the error actually occurs. Is it
a) during backup creation,
b) on the local filesystem (write and read),
c) during transfer, or
d) on the remote fs (even after it has been stored for a while)?
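
One way to narrow that down might be to checksum the same volume at each of those points and see where the digest first changes (if a) and b) match but the re-downloaded copy differs, it was c) or d)). A minimal sketch; the paths are placeholders for wherever a copy was kept at that stage, and this is not part of duplicity:

import hashlib

def md5_of(path):
    # hex MD5 digest of a file, read in 1 MiB chunks
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

stages = {
    "a) right after creation":    "/backup/stage-a/duplicity-inc.vol1.difftar.gpg",
    "b) read back from local fs": "/backup/local/duplicity-inc.vol1.difftar.gpg",
    "d) re-downloaded from ftp":  "/tmp/redownload/duplicity-inc.vol1.difftar.gpg",
}
for stage, path in stages.items():
    print(stage, md5_of(path))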

I like the idea of parity, as this could help if there is minor damage. But a maybe even easier way (at least easier to implement) could be:
a) doing the backup to a remote filesystem but keeping a copy of the backup files locally,
b) verifying the backup files against their signatures,
c) retransferring only the defective files (then repeating step b a limited number of times, and reporting a severe error if it still fails).
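
Roughly, steps a) to c) could look like the sketch below. It assumes a local copy of every volume plus an MD5 checksum recorded at creation time (plain checksums instead of the duplicity signatures, just to keep it short); fetch_remote() and upload_file() are made-up stand-ins for whatever ftp client is used, and none of this is existing duplicity code.

import hashlib

MAX_RETRIES = 3

def md5_of(path):
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def resend_defective(expected, fetch_remote, upload_file):
    """expected: {volume name: md5 recorded at creation time};
    fetch_remote(name) downloads the remote copy and returns a local path;
    upload_file(name) re-sends the locally kept copy."""
    for name, want in expected.items():
        for _ in range(MAX_RETRIES):
            if md5_of(fetch_remote(name)) == want:
                break                      # remote copy is intact
            upload_file(name)              # step c) retransfer the defective file
        else:
            # still failing after MAX_RETRIES attempts -> severe error
            raise RuntimeError("%s is still corrupt after %d retries"
                               % (name, MAX_RETRIES))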

[snip]

Does this all make sense to you guys .. any advice on how to circumvent the gpg error at all, other than not using ftp space?

Ignoring the cause of the problem, keeping local files might be a good
way to make recovery more efficient. But I am really sceptical about the
usefulness of the backup if you are systematically having corruption
or incorrect creation of files.

Well .. that's the point .. I couldn't make out a pattern. All I can
say is that it happens from time to time.
I don't even know whether it is gpg at all, as the error message is not meaningful enough.

Particularly if it is corruption, since that would mean existing full
backups could get blown away as well. If it is a software bug that causes
files to be written incorrectly, it at least only affects things insofar
as incrementals are made. Your problem in this case, if I remember
correctly from previous posts, is that ideally you would just like to
be able to "forget" a previous incremental and re-do it, instead of
re-doing a full backup? That is, the problem for you isn't that the
initial full backup becomes corrupt after initial creation?

yeah .. but mainly because I can't detect which incremental/full
(backup file set) is actually damaged

One possibility is to run your duplicity session locally and use the
features of a client like lftp to upload the files (mirror -R on the
directory and it acts similarly to rsync in that it figures out which
files are missing and uploads them).
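
A minimal sketch of that workflow, wrapped in Python only for illustration; the source directory, local target and ftp URL are made-up placeholders:

import subprocess

SOURCE = "/home"                                  # what gets backed up (placeholder)
LOCAL  = "/backup/duplicity"                      # local copy of the volumes (placeholder)
REMOTE = "ftp://user@ftp.example.com/backup"      # the ftp space (placeholder)

# 1) run the duplicity session against the local filesystem only
subprocess.check_call(["duplicity", SOURCE, "file://" + LOCAL])

# 2) reverse-mirror the directory to the ftp host; lftp's "mirror -R" only
#    uploads files that are missing or differ, similar to rsync
subprocess.check_call(["lftp", "-e", "mirror -R %s .; quit" % LOCAL, REMOTE])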

worth a try.. in case of an error I only have to compare the contents of
the two repositories

This could be a good way to find out what's going on as well. For
example, if you still run into the same problem when backing up
locally, that means the problem is not with any kind of server-side
corruption (unless you're massively unlucky and this happens in both
cases). Alternatively if everything works well you should be able to
discover that the contents of the remote and local files have suddenly
changed.

That's if you'd be willing to investigate a bit more to try to narrow
down the problem and see if it is a duplicity bug or not.

That's why I put this here .. I really want to find out what's going on here.

With regards to supporting keeping a local copy of files, and
implementing par2, it is on my personal todo list. Not sure when I
will look at it.
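
None of this exists in duplicity yet, but run externally over the finished volumes it could look roughly like the following; the redundancy level and paths are made up for the example:

import glob
import subprocess

VOLUME_DIR = "/backup/duplicity"   # local copy of the volumes (placeholder)

# create ~10% recovery data next to every volume
for volume in glob.glob(VOLUME_DIR + "/*.difftar.gpg"):
    subprocess.check_call(["par2", "create", "-r10", volume + ".par2", volume])

# later, a suspect volume can be checked and, within the redundancy limit,
# repaired from its recovery files:
#   subprocess.check_call(["par2", "repair", volume + ".par2"])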

That could only fight the symptoms, though .. it probably wouldn't
prevent the error itself.

Hmm. In the meantime, one possible improvement could be to simply
emit the md5sums of uploaded files as part of regular duplicity
logging. Say that gets emitted at verbosity 5; you could then run your
backups at verbosity 5 with the output logged to some file (e.g. using
script(1)). The next time you have this problem, you will be able to
go back and find the checksum of the offending file at the time it was
uploaded. This would perhaps be the easiest way to get a feel for
whether the contents of the file actually changed. It won't, however,
say whether the file was corrupted in transit or corrupted server-side
after the transfer.
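
Just to illustrate the idea (this is not what duplicity currently prints, and the message format is made up), the logging could be as simple as hashing each volume right before the upload:

import hashlib
import logging

log = logging.getLogger("duplicity")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_md5_before_upload(path):
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    # something along these lines could be emitted at verbosity 5
    log.info("Uploading %s md5=%s", path, digest)

# log_md5_before_upload("/backup/duplicity/duplicity-inc.vol1.difftar.gpg")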

Would that be of interest to you?

Could you give me an example output? .. would it show the name of the
offending file? .. because right now, even at verbosity 9, I
unfortunately don't get it .. pls see the attached log

thanks a lot .. ede

--
public class WhoDidIt{ // A comment. I love comments
  private static Person sender;

  public static void main (String[] foo){

    sender = new Person();
    sender.setName(new String[]{"Edgar", "Soldin"});

    Address address = new Address();
    address.setStreet("Stadtweg 119");
    address.setZip(39116);
    address.setCity("Magdeburg");
    address.setCountry("Germany");

    sender.setAddress(address);

    sender.setMobilePhone(" +49(0)171-2782880 ");
    sender.setWebSiteUrl(" http://www.soldin.de ");
    sender.setEmail(" address@hidden ");
    sender.setPGPPublicKey(" http://www.soldin.de/edgar_soldin.asc ");
    sender.setGender(true);

    System.out.println(sender.toString());
  }
}


Attachment: verify.zip
Description: Zip archive

