Re: [Bug-ddrescue] Speed and Benchmarking tests
From: Scott Dwyer
Subject: Re: [Bug-ddrescue] Speed and Benchmarking tests
Date: Wed, 26 Mar 2014 20:44:57 -0400
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0
I have created an accelerated method of testing ddrescue speed
performance, and would like to share the first of my results. The
results are at the bottom of this message, and are also attached as a
spreadsheet. These tests are designed to compare timing differences
between different options and versions, but be aware that the timings
do not scale linearly to normal (real-time) tests. To better illustrate
that point, results are included that compare one of the tests run both
accelerated and normally.
One thing you may notice from the test results is that when there are
more errors, the total recovery time can be faster with --cluster-size
set to 1. This is because when a read of a normal cluster size contains
failed sector(s), there is no way to tell which sector was bad, so the
whole block has to be marked as non-trimmed, which means the bad sector
must be read again during trimming. And every error takes time to
process. First, the drive itself can take 2 to 4 seconds to process the
error (this varies between drives and with the nature of the error).
Second, the Linux kernel also likes to perform its own retries. My
calculations say that normally it will try 15 times, and with the
--direct option it will only try 5 times (this could vary on different
systems). So 5 retries multiplied by an average of 3 seconds equals
15 seconds per error. This time can add up fast, and it is why
accelerated results are not quite the same as real normal results.
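To put that overhead in perspective, here is a minimal back-of-the-envelope
sketch in Python. The 3 second average and 5 retries are the rough figures
quoted above, and the error counts come from the test disks described
further down; treat the output as an estimate only, not a measurement.

# Rough estimate of error-handling overhead in a real (non-accelerated) run.
# Assumed figures from the discussion above: ~3 s per failed read attempt,
# 5 kernel retries when using --direct.
AVG_SECONDS_PER_ATTEMPT = 3
RETRIES_WITH_DIRECT = 5

seconds_per_error = AVG_SECONDS_PER_ATTEMPT * RETRIES_WITH_DIRECT  # ~15 s

# Error counts taken from the test disk descriptions below.
for disk, errors in [("disk1", 87), ("disk2", 1797),
                     ("disk3", 2093), ("disk4", 6575)]:
    overhead_hours = errors * seconds_per_error / 3600
    print(f"{disk}: ~{overhead_hours:.1f} hours spent just servicing errors")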
So why not just always read the disk one sector at a time to make it
finish faster? If you do that, you do not get the bulk of the
recoverable data quickly. Part of the goal of a rescue is to get the
most recoverable data first and fast, and then work on the harder
parts. This can be a tricky trade-off, and part of the reason for these
test results is to help people understand that.
The tests are all based on real ddrescue log files. I have asked for
more finished log files, but am disappointed to report that I received
no replies to that request, so I used the best that I have. I am still
open to receiving more actual logs. I am also open to testing different
options, although understand that I will be picky, as a set of tests is
time-consuming and can mean an overnight run of testing.
TESTING RESULTS:
Most Data Recovered Time / Total Recovery Time - in minutes
                 1.18-pre6      1.18-pre7      1.18-pre8
                 most /total    most /total    most /total
disk1-cluster1   18.96/18.96    18.86/18.86    18.86/18.86
disk1-default     1.4 / 3.63     1.4 / 3.66     1.42/ 3.65
disk1-skip5M      1.42/ 3.68     1.67/ 2.03     1.42/ 3.68
disk2-cluster1   21.87/30.66    21.8 /25.71    21.7 /33.16
disk2-default     3.07/21.78     3.07/20.95     2.6 /20.5
disk2-skip5M      3.07/19.96     6.93/13.16     2.98/19.11
disk3-cluster1   24.1 /30.08    24.13/25.16    24   /31.93
disk3-default     6.1 /33.6      6.05/33.58     6.1 /37.26
disk3-skip5M      5.95/36.23    15.27/19.9      6.43/33.28
disk4-cluster1   15.47/34.04    15.47/28.81    15.4 /40.71
disk4-default     5.73/47.68     5.77/47.95     5.13/48.53
disk4-skip5M      3.28/47.21    11.08/27.53     6   /44.65
cluster1 = run with option --cluster-size=1
default = run with default options
skip5M = run with option --skip-size=5M
disk1 = 4GiB, errors = 87, errsize = 44544 B, errors are in a few small groups
disk2 = 4GiB, errors = 1797, errsize = 1323 kB, errors are sort of grouped
disk3 = 4GiB, errors = 2093, errsize = 1274 kB, errors grouped to 1 of 2 heads
disk4 = 1GiB, errors = 6575, errsize = 4579 kB, errors grouped to 1 of 2 heads
Note that all the disks have lots of small errors of 1-3 sectors in size.
The difference is in the grouping, and how much good data lies between
the bad areas.
The "Most Data Recoverd Time" is the point in the gnuplot where the rate
sharply
slows and flattens out. This does not mean that the same amount of data was
recovered in this time between the different tests. It is a relative reading
to give a way to compare, although I tried to match as much as possible.
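For anyone who wants to find that point without eyeballing the plot, here
is a minimal sketch in Python. The (minutes, bytes recovered) sample format
and the 5% flatness threshold are my own assumptions, not part of the
actual test setup.

def most_data_recovered_time(samples, flat_fraction=0.05):
    # samples: list of (minutes, bytes_recovered) pairs, sorted by time.
    # Returns the first time at which the instantaneous recovery rate
    # falls below flat_fraction of the peak rate seen so far.
    peak_rate = 0.0
    for (t0, b0), (t1, b1) in zip(samples, samples[1:]):
        rate = (b1 - b0) / (t1 - t0)
        peak_rate = max(peak_rate, rate)
        if peak_rate > 0 and rate < flat_fraction * peak_rate:
            return t1
    return samples[-1][0]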
All of this testing is done at a highly accelerated speed. In a real
recovery, the times would be much more exaggerated. See the comparison
below.
                 1.18-pre7-accelerated    1.18-pre7-normal
                 most /total              most /total
disk1-cluster1   18.86/18.86              55.01/55.01
disk1-default     1.4 / 3.66              34.37/68.4
disk1-skip5M      1.67/ 2.03              34.37/69
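A quick calculation from the disk1 totals above shows just how non-linear
the scaling is between the accelerated and normal runs (a minimal Python
sketch, using only the numbers in the table):

# Ratio of normal to accelerated total recovery time, per test configuration.
pairs = {
    "cluster1": (18.86, 55.01),
    "default": (3.66, 68.4),
    "skip5M": (2.03, 69.0),
}
for name, (accel, normal) in pairs.items():
    print(f"disk1-{name}: normal total is about {normal / accel:.0f}x the accelerated total")

The ratio ranges from roughly 3x to 34x, which is why the accelerated
timings should only be used to compare options against each other, not to
predict real recovery times.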
Attachment: ddrescue_results_spreadsheet_post.zip (Zip archive)