Re: [Duplicity-talk] broken backup but no errors until restore
From: edgar . soldin
Subject: Re: [Duplicity-talk] broken backup but no errors until restore
Date: Tue, 13 Nov 2012 11:39:32 +0100
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:16.0) Gecko/20121026 Thunderbird/16.0.2
On 13.11.2012 11:09, Igor Balic wrote:
> Hello,
>
> this backup job ran on 22.10.2012:
>
> Reading globbing filelist /root/duplicity-list-home2
>
> Local and Remote metadata are synchronized, no sync needed.
>
> Warning, found the following local orphaned signature files:
>
> duplicity-new-signatures.20120225T032124Z.to.20120226T032111Z.sigtar.part
>
> duplicity-new-signatures.20120226T032111Z.to.20120226T151230Z.sigtar.part
>
> Warning, found the following orphaned backup file:
>
> [duplicity-inc.20120226T032111Z.to.20120226T151230Z.manifest.part]
>
> Last full backup date: Sat Sep 22 04:27:03 2012
>
> Last full backup is too old, forcing full backup
>
> Warning, found the following local orphaned signature files:
>
> duplicity-new-signatures.20120225T032124Z.to.20120226T032111Z.sigtar.part
>
> duplicity-new-signatures.20120226T032111Z.to.20120226T151230Z.sigtar.part
>
> Warning, found the following orphaned backup file:
>
> [duplicity-inc.20120226T032111Z.to.20120226T151230Z.manifest.part]
>
> --------------[ Backup Statistics ]--------------
> StartTime 1350878635.72 (Mon Oct 22 06:03:55 2012)
> EndTime 1350916625.87 (Mon Oct 22 16:37:05 2012)
> ElapsedTime 37990.15 (10 hours 33 minutes 10.15 seconds)
> SourceFiles 1776644
> SourceFileSize 75438832465 (70.3 GB)
> NewFiles 1776644
> NewFileSize 75438832099 (70.3 GB)
> DeletedFiles 0
> ChangedFiles 0
> ChangedFileSize 0 (0 bytes)
> ChangedDeltaSize 0 (0 bytes)
> DeltaEntries 1776644
> RawDeltaSize 74450597216 (69.3 GB)
> TotalDestinationSizeChange 57067456204 (53.1 GB)
> Errors 0
> -------------------------------------------------
>
> All seemed well. But today I wanted to restore some user's files and it
> failed:
>
> duplicity -t 6D --no-encryption --name /cache/xyz.com/home2/
> --file-to-restore home2/file file:///backup/xyz.com/home2/ /tmp/file
>
> Local and Remote metadata are synchronized, no sync needed.
> Last full backup date: Mon Oct 22 06:03:41 2012
> Traceback (most recent call last):
>   File "/usr/bin/duplicity", line 1251, in <module>
>     with_tempdir(main)
>   File "/usr/bin/duplicity", line 1244, in with_tempdir
>     fn()
>   File "/usr/bin/duplicity", line 1198, in main
>     restore(col_stats)
>   File "/usr/bin/duplicity", line 538, in restore
>     restore_get_patched_rop_iter(col_stats)):
>   File "/usr/lib/python2.6/dist-packages/duplicity/patchdir.py", line 520, in Write_ROPaths
>     for ropath in rop_iter:
>   File "/usr/lib/python2.6/dist-packages/duplicity/patchdir.py", line 492, in integrate_patch_iters
>     for patch_seq in collated:
>   File "/usr/lib/python2.6/dist-packages/duplicity/patchdir.py", line 377, in yield_tuples
>     setrorps( overflow, elems )
>   File "/usr/lib/python2.6/dist-packages/duplicity/patchdir.py", line 366, in setrorps
>     elems[i] = iter_list[i].next()
>   File "/usr/lib/python2.6/dist-packages/duplicity/patchdir.py", line 98, in filter_path_iter
>     for path in path_iter:
>   File "/usr/lib/python2.6/dist-packages/duplicity/patchdir.py", line 111, in difftar2path_iter
>     tarinfo_list = [tar_iter.next()]
>   File "/usr/lib/python2.6/dist-packages/duplicity/patchdir.py", line 327, in next
>     self.set_tarfile()
>   File "/usr/lib/python2.6/dist-packages/duplicity/patchdir.py", line 321, in set_tarfile
>     self.current_fp = self.fileobj_iter.next()
>   File "/usr/bin/duplicity", line 574, in get_fileobj_iter
>     backup_set.volume_name_dict[vol_num],
> KeyError: 879
>
> Odd error, so I checked the backup files with collection-status:
>
> Found primary backup chain with matching signature chain:
> -------------------------
> Chain start time: Mon Oct 22 06:03:41 2012
> Chain end time: Tue Nov 13 04:37:15 2012
> Number of contained backup sets: 23
> Total number of contained volumes: 148
>  Type of backup set:     Time:                      Num volumes:
>  Full                    Mon Oct 22 06:03:41 2012             36
>  Incremental             Tue Oct 23 04:34:46 2012              1
>  Incremental             Wed Oct 24 04:29:35 2012              5
>  Incremental             Thu Oct 25 04:32:50 2012              7
>  Incremental             Fri Oct 26 04:33:04 2012              9
>  Incremental             Sat Oct 27 04:32:37 2012              2
>  Incremental             Sun Oct 28 04:31:42 2012              1
>  Incremental             Mon Oct 29 04:32:23 2012              1
>  Incremental             Tue Oct 30 04:35:13 2012             13
>  Incremental             Wed Oct 31 04:31:53 2012              2
>  Incremental             Thu Nov 1 04:35:11 2012               2
>  Incremental             Fri Nov 2 04:29:02 2012               1
>  Incremental             Sat Nov 3 04:33:24 2012               1
>  Incremental             Sun Nov 4 04:35:06 2012               1
>  Incremental             Mon Nov 5 04:34:11 2012               1
>  Incremental             Tue Nov 6 04:35:15 2012              32
>  Incremental             Wed Nov 7 04:35:55 2012               8
>  Incremental             Thu Nov 8 04:34:50 2012               2
>  Incremental             Fri Nov 9 04:35:29 2012               9
>  Incremental             Sat Nov 10 04:37:26 2012             11
>  Incremental             Sun Nov 11 04:39:09 2012              1
>  Incremental             Mon Nov 12 04:37:14 2012              1
>  Incremental             Tue Nov 13 04:37:15 2012              1
>
> Big problem here. Normally a full backup contains 1700+ volumes of 50 MB; this
> time there are only 36. The backup job's reported size looks right (53 GB), and
> the duration matches too. Oddly enough, it only wrote 36 volumes during that
> time, with no errors.
>
> There were no network or power outages, and the disk has 400 GB of free space.
> The other backup jobs (we run about 10) seem fine so far; I am checking them now.
>
> Is this a known bug? Version is 0.6.17.
>
no. looks to me like there are volumes for the full missing on the backend.
do you still have the log from the full backup? can you verify that it states
it created more than 36 volumes?
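alternatively, count the volume files the full actually left in the backend
directory. with --no-encryption a full's volumes are named
duplicity-full.<timestamp>.vol<N>.difftar.gz, so a grep over the listing gives
the count. sketch below; the scratch directory merely stands in for the real
file:///backup/xyz.com/home2/ target, and the timestamp and 36-volume count
are made up to mirror the collection-status report:

```shell
# scratch dir standing in for file:///backup/xyz.com/home2/
backend=$(mktemp -d)

# fake the 36 volumes that collection-status reported (hypothetical timestamp)
for i in $(seq 1 36); do
    touch "$backend/duplicity-full.20121022T040341Z.vol$i.difftar.gz"
done

# count the full-backup volumes actually present on the backend
ls "$backend" | grep -c '^duplicity-full\..*\.vol[0-9][0-9]*\.difftar'   # prints 36
```

against the real backend, point the listing at the actual target directory; if
that count disagrees with what the full's log claims, volumes were lost.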
try to use the verify command on the full. if it comes up with the same error,
then you can be sure the full is already damaged and the backup chain therefore
broken.
situations like that can be avoided by running 'verify' periodically. i cannot
stress that enough.
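for the record, a verify run for this job might look like the below. the
options mirror the restore command quoted above; the local source directory
/home2 is an assumption, substitute whatever path the full was actually taken
from:

```shell
# verify compares the archive at the URL against the local source tree;
# /home2 is an assumed source path -- adjust it to the real one
duplicity verify --no-encryption --name /cache/xyz.com/home2/ \
    file:///backup/xyz.com/home2/ /home2
```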
..ede/duply.net