From: edgar.soldin
Subject: gpg: [don't know]: invalid packet (ctb=14) - WAS: [Duplicity-talk] Rollup Functionality and Parity?
Date: Tue, 19 Aug 2008 11:32:23 +0200
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.16) Gecko/20080708 Thunderbird/2.0.0.16 Mnenhy/0.7.5.0

Hi Guys,

Since I have to use an FTP backup space and have now hit the "gpg ctb 14" error for the fourth time, I am interested in a best practice or some other solution besides doing a full backup, which is what I do every time this occurs.

I like the idea of parity, since it could help with minor damage. But a maybe even easier way (at least easier to implement) could be the following (see the sketch just below the list):
a) do the backup to a remote filesystem, but keep a copy of the backup files locally
b) verify the remote backup files against their signatures
c) retransfer only the defective files (then repeat step b, up to a limited count, and report a severe error if it still fails after multiple attempts)
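
A minimal sketch of what steps a) to c) could look like, assuming the local copies kept in step a) serve as the reference and the remote store is plain FTP. The paths, hostname and credentials are made up, and SHA-256 hashing of the local copy stands in for whatever signature scheme would really be used:

    import ftplib
    import hashlib
    import os

    LOCAL_DIR = "/var/backups/duplicity"   # hypothetical path to local copies (step a)
    REMOTE_HOST = "ftp.example.org"        # hypothetical FTP backup space
    REMOTE_DIR = "/backup"
    MAX_RETRIES = 3                        # step c: give up after this many retransfers

    def local_digest(path):
        # Hash the locally kept copy of a backup volume.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def remote_digest(ftp, name):
        # Step b: download the remote volume and hash it for comparison.
        h = hashlib.sha256()
        ftp.retrbinary("RETR " + name, h.update)
        return h.hexdigest()

    ftp = ftplib.FTP(REMOTE_HOST, "user", "secret")  # assumed credentials
    ftp.cwd(REMOTE_DIR)
    for name in sorted(os.listdir(LOCAL_DIR)):
        want = local_digest(os.path.join(LOCAL_DIR, name))
        retries = 0
        while remote_digest(ftp, name) != want:
            if retries == MAX_RETRIES:
                raise RuntimeError("severe error: %s still corrupt after %d retransfers"
                                   % (name, MAX_RETRIES))
            # Step c: retransfer only the defective file, then re-check.
            with open(os.path.join(LOCAL_DIR, name), "rb") as f:
                ftp.storbinary("STOR " + name, f)
            retries += 1
    ftp.quit()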

All I am worried about is my data. In the current state I could lose up to one month of changes if I didn't manually check the logs daily and do full backups whenever the error occurs. This is why I always verify the whole backup (the biggest is 3 GB). That retransmits all the data, but it is the only way to stumble across the error mentioned and be safe. Only a few megabytes change over the course of one month (the full backup period), so it wouldn't lengthen the whole backup routine much if only the deltas were double-checked for saving/transmission errors.
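(To put rough illustrative numbers on that, assuming a 1 Mbit/s uplink, which is my assumption and not a figure from this thread: re-verifying the full 3 GB means moving about 24,000 Mbit, i.e. close to 7 hours of transfer, while double-checking a few megabytes of deltas would take well under a minute.)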

Does this all make sense to you guys? Any advice on how to circumvent the gpg error at all, other than not using FTP space?

Sincerely Ede
--


> Done locally would still require some programming on the duplicity side,
> but it would indeed be possible to take the current full, merge in all
> the incrementals and build a new full.  This would be equivalent to
> doing a new full and would not have any advantage that I can see.

I was proposing that "locally" in this case means local to the duplicity backend, i.e. the remote system to which duplicity sends its data. The advantage, in theory, would be that fulls are generated right next to the repository and don't need to be sent over the slow Internet link from the source of the files.

> Most files are not changed between full backups.  The incrementals could
> be rolled up a lot faster, leaving only the last full and the rolled-up
> incremental.

Ahh that is an interesting thought.
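
To make the rolled-up-incremental idea concrete, here is a toy sketch (my own illustration, not duplicity's actual tar/librsync-based format), modelling each backup set as a plain dict from path to content, with None marking a deletion:

    def rollup(full, incrementals):
        # Fold a chain of incrementals into the full, oldest delta first,
        # so that only a single full backup set remains.
        merged = dict(full)
        for delta in incrementals:
            for path, content in delta.items():
                if content is None:
                    merged.pop(path, None)   # file was deleted in this delta
                else:
                    merged[path] = content   # file was added or changed
        return merged

    full = {"a.txt": "v1", "b.txt": "v1"}
    inc1 = {"a.txt": "v2"}                   # a.txt changed
    inc2 = {"b.txt": None, "c.txt": "v1"}    # b.txt deleted, c.txt added
    print(rollup(full, [inc1, inc2]))        # {'a.txt': 'v2', 'c.txt': 'v1'}

Since most files appear in no delta at all, folding the small deltas together touches far less data than rebuilding a full from the source, which is the speedup being described.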

Thanks for your feedback Ken.


------------------------------------------------------------------------

_______________________________________________
Duplicity-talk mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/duplicity-talk





--
public class WhoDidIt { // A comment. I love comments
 private static Person sender;

 public static void main (String[] foo){

 sender = new Person();
 sender.setName(new String[]{"Edgar", "Soldin"});

 Address address = new Address();
 address.setStreet("Stadtweg 119");
 address.setZip(39116);
 address.setCity("Magdeburg");
 address.setCountry("Germany");

 sender.setAddress(address);

 sender.setMobilePhone(" +49(0)171-2782880 ");
 sender.setWebSiteUrl(" http://www.soldin.de ");
 sender.setEmail(" address@hidden ");
 sender.setPGPPublicKey(" http://www.soldin.de/edgar_soldin.asc ");
 sender.setGender(true);

 System.out.println(sender.toString());
 }
}




