[Bug-ddrescue] ddrescue slower than dd
From: Adrien Cordonnier
Subject: [Bug-ddrescue] ddrescue slower than dd
Date: Thu, 10 Apr 2014 22:56:00 +0100
Hi Antonio,
Thank you very much for your answer. As I described, ddrescue was slower
than dd, so I investigated further while following your advice, i.e.
running ddrescue without any options.
I had plenty of time, with an expected recovery time of 7 weeks for the
remaining good area. Eventually, I found how to make ddrescue proceed
at an acceptable speed, and it completed the good areas in 2 days
instead of 6 weeks.
In short, it works much better if the partition image is written to its
own partition rather than to a file on an NTFS partition.
In more detail, ddrescue proceeded at 11 kB/s. There had not been any
bad sectors for two days (1.8 GB), yet dd was able to copy this same
area in 69 seconds (26.9 MB/s). I tried a different computer, ddrescue
1.16 and 1.17, eSATA and USB2 or 2xUSB2, and different destinations.
Eventually, it appeared that when the destination is a large 495 GB
image file on an NTFS partition, recovery is slow. If the destination
is a "small" (2 GB) image file on an NTFS partition, it is fast. If the
destination is its own partition on the same drive, it is also fast.
So I copied my image file to a partition on the same drive and proceeded
with the recovery (with option -f). ddrescue recovered the good areas at
5.3 MB/s. This is not as fast as dd, but it was a good enough speed.
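For reference, the final command looked roughly like this (a sketch; the
device names are examples, not my actual ones):

  # read from the failing partition, write directly to a dedicated
  # partition; -f (--force) is required to overwrite a block device
  ddrescue -f /dev/sdb1 /dev/sdc1 rescue.log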
Note: I am using Ubuntu 13.10.
You also led me to discover lzip. I understand it uses the same
compression algorithm as the 7z file format, which is what I meant by
7zip compression. It is now installed on my computers.
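Its command-line interface turned out to be gzip-like; for example (the
file names are just placeholders):

  lzip -9 -k rescue.log    # compress hardest, -k keeps the original
  lzip -d rescue.log.lz    # decompress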
Cheers,
Adrien
2014-02-17 23:47 GMT+00:00 Antonio Diaz Diaz <address@hidden>:
>
> Hello Adrien.
>
>
> Adrien Cordonnier wrote:
>>>
>>> BTW, would anybody here find it useful if ddrescue could produce
>>> compressed logs of rates and reads? I think they may become pretty
>>> large (especially the reads log).
>>
>>
>> I think this is a really good idea. Actually I subscribed to the list last
>> week because ddrescue became really slow, probably because of the size of
>> the log file.
>
>
> I don't think the slowness of ddrescue is caused by the size of the
> logfile. A logfile of 7 MB written to disc every 5 minutes amounts to
> only 23 kB/s (7 MB / 300 s).
>
> I was referring to the reads log (the one activated by option
> --log-reads), which can grow to 55 MB after reading just 100 GB without
> errors using the default cluster size.
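>
> For illustration, such a run might look like this (a sketch; the device
> and file names are placeholders):
>
>   ddrescue --log-reads=reads.log /dev/sdb image.img rescue.log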
>
>
>
>> The disk to rescue is 500 GB with bad sectors mainly at 50% and 75%. I
>> backed up the first 50% and then the last 25% with the -r option. I saw
>> that the speed was sometimes 11-15 MB/s and sometimes 5 MB/s in the
>> third quarter. Thus I stupidly ran ddrescue a third time with a minimum
>> speed of 10 MB/s to get the fast areas first. The speed decreased to
>> around 9 MB/s, so I backed up almost nothing more and the log file grew
>> to 7 MB. Now, ddrescue's speed has decreased to 10 kB/s, 1000 times
>> slower than dd at the same position.
>
>
> A lot of things sound incorrect in this description. For example, the
> --min-read-rate option (I suppose this is what you mean by "minimum
> speed") has no effect when retrying (the -r option). Also, "dd is 1000
> times faster than ddrescue" sounds pretty suspicious.
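>
> For the copying phase it would be used roughly like this (device and
> file names are placeholders):
>
>   ddrescue --min-read-rate=10M /dev/sdb image.img rescue.log
>
> This makes ddrescue skip ahead when the read rate falls below the given
> value, leaving the slow areas for a later pass.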
>
>
>
>> I think it would be good to have the option to keep the previous versions
>> of the log file.
>
>
> 1) Using an old version of the logfile just makes ddrescue forget about
> some of the areas already tried, and you don't want that.
>
> 2) The logfile is a critical resource in the rescue. I do not plan to
> ever compress it or otherwise decrease its reliability.
>
> 3) 7zip is an archiver for Windows systems, the least adequate kind of
> program to use in combination with POSIX programs like ddrescue. We are
> not talking about compressing a regular file on disc; we are talking
> about compressing a stream on the fly through pipes.
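>
> That is, the intended use case is something like this generic
> illustration, which a graphical Windows archiver cannot take part in:
>
>   tar -cf - /some/directory | lzip > backup.tar.lz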
>
>
>
>> I suggest 7zip compression because it gives much better compression.
>> For example, my 7 MB log file (available if you want it) compresses to
>> between 300 kB and 500 kB with zip, gzip or bzip2, but to only 84 kB
>> with 7zip (with default file-roller parameters).
>
>
> For this task I would only use bzip2 or lzip (see the NOTE here[1]), but lzip
> is much better than bzip2 for this kind of file:
>
> -rw-r--r-- 1 55343627 2014-02-17 20:55 readlog
> -rw-r--r-- 1  1343741 2014-02-17 20:55 readlog.bz2
> -rw-r--r-- 1  4154545 2014-02-17 20:55 readlog.gz
> -rw-r--r-- 1   351966 2014-02-17 20:55 readlog.lz
> -rw-r--r-- 1   513932 2014-02-17 20:55 readlog.xz
>
> [1] http://www.nongnu.org/zutils/zutils.html
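>
> The listing above was produced with the stock tools, along the lines
> of the following (-k keeps the original file):
>
>   bzip2 -k readlog; gzip -k readlog; lzip -k readlog; xz -k readlog
>   ls -l readlog*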
>
>
>
>> I am interested in any ideas you may have for proceeding with the
>> backup of my disk. Currently, I am considering either:
>> a) running dd on 0-50%, 55-70% and 75-100%, and asking ddrescue to
>> finish the work by guessing what the log file should be; or
>> b) writing a python script to simplify my 7 MB log file by replacing
>> each non-tried / 1 good sector / non-tried triple of lines with one
>> long non-tried area.
>
>
> I would just run ddrescue without options and let it do its job, unless
> I had proof that it was not behaving properly.
>
> If you do option 'a', remember to give dd the conv=noerror,sync option,
> or else you will ruin your rescue. (And be prepared to combine the
> generated logfile with the real one using ddrescuelog, as sketched
> below.)
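>
> Something along these lines (a sketch only; every number and file name
> is hypothetical, and the exact ddrescuelog option spelling should be
> checked against your version's manual):
>
>   # copy a known-good range; conv=noerror,sync zero-pads unreadable
>   # blocks so the output stays aligned, and notrunc keeps the rest of
>   # the image file intact
>   dd if=/dev/sdb of=image.img bs=4096 conv=noerror,sync,notrunc \
>      skip=67000000 seek=67000000 count=18000000
>
>   # then merge a logfile describing that range (here dd.log) with the
>   # real one, e.g. with ddrescuelog's OR operation, which writes the
>   # result to standard output
>   ddrescuelog --or-logfile=dd.log rescue.log > merged.log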
>
> Option 'b' makes no sense, as ddrescue would then have to read again
> sectors that had already been read.
>
>
>
> Best regards,
> Antonio.
>
> _______________________________________________
> Bug-ddrescue mailing list
> address@hidden
> https://lists.gnu.org/mailman/listinfo/bug-ddrescue