From: edgar . soldin
Subject: integrity check - Re: [Duplicity-talk] Some questions from a new user
Date: Mon, 07 Sep 2009 15:42:46 +0200
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.1) Gecko/20090715 Thunderbird/3.0b3
> > I know we have the "remove-all-but-n-full" option, but from what I
> > understand this also keeps any associated incremental sets.
>
> That's not such a bad idea at all. If you deleted all of the
> incremental files younger than the backup, the next incremental run
> would, in effect, be a differential.
I like this ... it could be implemented as 'remove-all-incr'.

Still, this does not protect against a possible data integrity error in the existing full backup.
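The 'remove-all-incr' behaviour could be sketched roughly as below. This is a hypothetical helper, not duplicity code, and the (kind, timestamp) tuples are an invented stand-in for duplicity's real backup-set objects:

```python
# Hypothetical sketch of the proposed 'remove-all-incr' rule: keep the
# full backups, mark every incremental set for deletion. After deleting,
# the next incremental run diffs against the last full alone, so it is
# effectively a differential.
def remove_all_incr(sets):
    """Partition backup sets into (keep, delete) lists."""
    keep = [s for s in sets if s[0] == "full"]
    delete = [s for s in sets if s[0] == "incr"]
    return keep, delete

chain = [("full", 1), ("incr", 2), ("incr", 3), ("full", 4), ("incr", 5)]
keep, delete = remove_all_incr(chain)
print(keep)    # [('full', 1), ('full', 4)]
print(delete)  # [('incr', 2), ('incr', 3), ('incr', 5)]
```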
Idea: integrity check command

The reason for not doing full backups regularly is the slow upload channel. But usually this is combined with a pretty fast download channel (not always, but often). Before deleting backups, the leftovers should be checked for integrity. We could verify the last full backup against the source data, but that does not make sense, as portions of it may have changed since and would show up as differences. As far as I understand, the tar/gzip/gpg combination already catches defective data, although very conservatively, by aborting the running verify/restore process. Therefore, wouldn't it make sense to introduce an integrity check that simply does a verify, downloading and unpacking the data without actually comparing it to the source? Additionally, if there are already checksums in the backup, they could be used. If not, they could be added in the future and used then.
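The "unpack without comparing to the source" idea can be illustrated with plain gzip: decompress each volume end-to-end and discard the output, and gzip's built-in CRC32 flags a damaged stream. A minimal sketch, not duplicity code (real volumes would also pass through gpg, which performs its own integrity check):

```python
# Hypothetical sketch: detect corruption in a gzipped backup volume by
# decompressing it fully and discarding the output. gzip stores a CRC32
# of the uncompressed data, so a flipped byte surfaces as an exception.
import gzip
import os
import tempfile
import zlib

def volume_is_intact(path, chunk_size=64 * 1024):
    """Return True if the gzip stream decompresses cleanly, else False."""
    try:
        with gzip.open(path, "rb") as f:
            while f.read(chunk_size):  # read to EOF; CRC checked at the end
                pass
        return True
    except (OSError, EOFError, zlib.error):
        return False

# Demonstration with throwaway files: a clean volume passes, a copy with
# a single flipped byte fails.
tmp = tempfile.mkdtemp()
good = os.path.join(tmp, "vol1.gz")
with gzip.open(good, "wb") as f:
    f.write(b"backup payload " * 1000)

bad = os.path.join(tmp, "vol1-damaged.gz")
data = bytearray(open(good, "rb").read())
data[len(data) // 2] ^= 0xFF  # flip one byte in the middle of the stream
open(bad, "wb").write(bytes(data))

print(volume_is_intact(good))  # True
print(volume_is_intact(bad))   # False
```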
@Ken: Are there checksums?

This check could be run instead of regular full backups to assure us that the old backup data we rely on is still intact.
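If per-volume checksums were recorded at backup time, the check could skip the unpacking step entirely and just re-hash whatever it downloads. A hypothetical sketch; the (path, digest) manifest format here is invented for illustration, not duplicity's actual manifest:

```python
# Hypothetical sketch of the checksum idea: record a SHA-1 per volume at
# backup time, then re-hash downloaded volumes and report mismatches.
import hashlib
import os
import tempfile

def sha1_of(path, chunk_size=64 * 1024):
    """Incrementally hash a file so large volumes don't fill memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def damaged_volumes(manifest):
    """manifest: iterable of (path, expected_hexdigest); return bad paths."""
    return [p for p, expected in manifest if sha1_of(p) != expected]

# Demonstration with a throwaway file.
tmp = tempfile.mkdtemp()
vol = os.path.join(tmp, "vol1.difftar.gz")
with open(vol, "wb") as f:
    f.write(b"pretend volume data")
manifest = [(vol, sha1_of(vol))]
print(damaged_volumes(manifest))  # [] -> everything matches

with open(vol, "ab") as f:        # simulate bit rot / a truncated upload
    f.write(b"!")
print(damaged_volumes(manifest))  # the damaged volume's path
```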
The command could be: check-integrity [last-full|<age>]

regards ede