From: Alexander Sashnov
Subject: [Bug-ddrescue] Algorithm of skipping bad blocks is not fully described in the 'info' file
Date: Fri, 31 May 2013 12:19:30 +0700
The algorithm of skipping bad blocks is not fully described in the info file.
The info file says:
-----------------------------------------------------
4 Algorithm
***********
2) (First phase; Copying) Read the non-tried parts of the input file,
marking the failed blocks as non-trimmed and skipping beyond them, until
all the rescue domain is tried. Only non-tried areas are read in large
blocks. Trimming, splitting and retrying are done sector by sector.
...
-----------------------------------------------------
I have found that it tries to read small blocks inside the bad area
instead of going to the next cluster.
I provide a test in the attachment.
In my test I create a block device with only 128 sectors and
try to read it with 4 KiB clusters.
Bad blocks reside in the sector range [27,43],
so I have 16 clusters, and clusters 3, 4 and 5 contain bad blocks.
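The cluster arithmetic of this setup can be double-checked with a short Python sketch (my illustration, not part of the attached test; sector and cluster sizes as stated above):

```python
SECTOR = 512                 # bytes per sector
CLUSTER_SECTORS = 8          # --cluster-size=8 -> 4 KiB clusters
TOTAL_SECTORS = 128          # size of the test device

bad_sectors = range(27, 44)  # bad sector range [27,43], inclusive

clusters = TOTAL_SECTORS // CLUSTER_SECTORS
bad_clusters = sorted({s // CLUSTER_SECTORS for s in bad_sectors})

print(clusters)              # -> 16
print(bad_clusters)          # -> [3, 4, 5]
```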
I run GNU ddrescue as follows, with no log file existing at start:
ddrescue -vv --no-split --cluster-size=8 --skip-size=4Ki
With strace I observed the lseeks it performs. They are the following:
4
8
12
16
20.5
25
25.5
26
26.5
27
27.5
28
28.5
29
32
36
40
44
48
52
56
60
My expectation was that it would try linearly, successful or not, on
even cluster boundaries:
4
8
12
16
24
32
36
....
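The discrepancy can be counted directly from the two seek lists above (a small Python check, restricted to offsets up to 36 where the expected list ends; the numbers are copied verbatim from the strace output and my expected sequence):

```python
# lseek offsets observed with strace, up to 36
observed = [4, 8, 12, 16, 20.5, 25, 25.5, 26, 26.5, 27,
            27.5, 28, 28.5, 29, 32, 36]
# offsets I would expect from a linear, cluster-aligned pass
expected = [4, 8, 12, 16, 24, 32, 36]

extra = sorted(set(observed) - set(expected))
missing = sorted(set(expected) - set(observed))

print(len(extra))  # -> 10 seeks into the bad area the expected pattern avoids
print(missing)     # -> [24], the single try the expected pattern would make
```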
So currently GNU ddrescue makes 10 extra unsuccessful tries
instead of 1.
The other problem here is log file blow-up. Instead of 3
records after the first --no-split pass:
cluster_0 cluster_2 +
cluster_3 cluster_5 *
cluster_6 cluster_7 +
it really has a lot more:
# Rescue Logfile. Created by GNU ddrescue version 1.17-rc3
# Command line: /home/alex/ddrescue-1.17-rc3/ddrescue -vv --no-split --cluster-size=8 --skip-size=4Ki /dev/mapper/disk_with_bad_blocks /tmp/bbbtest/rescued_image.bin /tmp/bbbtest/rescue.log
# current_pos current_status
0x00006000 +
# pos size status
0x00000000 0x00003000 +
0x00003000 0x00000200 -
0x00003200 0x00000C00 /
0x00003E00 0x00000600 -
0x00004400 0x00000C00 /
0x00005000 0x00000600 -
0x00005600 0x00000800 /
0x00005E00 0x00000200 -
0x00006000 0x0000A000 +
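The nine records above do chain contiguously and cover the whole 128-sector (64 KiB) device, so nothing is lost, just over-fragmented; a quick Python check on the excerpt (records copied verbatim):

```python
# (pos, size, status) records from the rescue log excerpt above
records = [
    (0x00000000, 0x00003000, '+'),
    (0x00003000, 0x00000200, '-'),
    (0x00003200, 0x00000C00, '/'),
    (0x00003E00, 0x00000600, '-'),
    (0x00004400, 0x00000C00, '/'),
    (0x00005000, 0x00000600, '-'),
    (0x00005600, 0x00000800, '/'),
    (0x00005E00, 0x00000200, '-'),
    (0x00006000, 0x0000A000, '+'),
]

# each record must start where the previous one ends
for (pos, size, _), (nxt, _, _) in zip(records, records[1:]):
    assert pos + size == nxt

total = records[-1][0] + records[-1][1]
print(total == 128 * 512)  # -> True: the log covers the whole device
print(len(records))        # -> 9 records where 3 would suffice
```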
This is a serious problem for me. I am trying to rescue data
from a 1 TB HDD with a strange defect: it has alternating
~700 MB readable and unreadable areas throughout.
I stopped it after rescuing 32 GB of data; the log file size is
already ~100 MB.
It is full of records like this:
...
0xC0982800 0x00010000 *
0xC0992800 0x00000200 -
0xC0992A00 0x00020000 *
0xC09B2A00 0x00000200 -
0xC09B2C00 0x00040000 *
0xC09F2C00 0x00000200 -
0xC09F2E00 0x00080000 *
0xC0A72E00 0x00000200 -
0xC0A73000 0x00100000 *
0xC0B73000 0x00000200 -
0xC0B73200 0x00200000 *
0xC0D73200 0x00000200 -
0xC0D73400 0x00400000 *
0xC1173400 0x00000200 -
0xC1173600 0x0000EA00 *
...
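Note the pattern in this excerpt: every failed read ('-') is a single sector (0x200), and each skipped non-trimmed run ('*') is exactly twice the previous one, which suggests the skip size doubles after each consecutive error, leaving a one-sector record at every probe. That interpretation is my reading of the log, but the doubling itself is verifiable from the numbers above:

```python
# '*' (non-trimmed) sizes from the log excerpt, in order
sizes_skipped = [0x00010000, 0x00020000, 0x00040000, 0x00080000,
                 0x00100000, 0x00200000, 0x00400000]
# '-' (failed read) sizes from the same excerpt
failed_reads = [0x00000200] * 7

# every non-trimmed run is exactly double the previous one
assert all(b == 2 * a for a, b in zip(sizes_skipped, sizes_skipped[1:]))

# every failed probe is a single 512-byte sector
assert all(s == 512 for s in failed_reads)

print("skip size doubles on each consecutive error")
```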
instead of:
...
mb_offset_0 mb_offset_700 +
mb_offset_700 mb_offset_1200 *
mb_offset_1200 mb_offset_1800 +
mb_offset_1800 mb_offset_2400 *
...
---
Alexander Sashnov.
test_on_badblocks.tgz
Description: application/compressed-tar