
Re: [Bug-ddrescue] Suggestion / feature request - bad head mapping


From: Scott Dwyer
Subject: Re: [Bug-ddrescue] Suggestion / feature request - bad head mapping
Date: Thu, 4 Jan 2018 17:22:53 -0500
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.5.2

Part 2:

The third copy pass reads all the leftover non-tried blocks, from the beginning, in order, with no skipping. This pass still reads in blocks (128 sectors, or 65536 bytes, by default). The reason the copy is done in three passes is that the first two passes can grab a big chunk of good data quickly, while the third pass deals with the leftovers. And the leftovers are located in and around the bad areas of the disk, which is exactly what we were trying to avoid on the first two passes.
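For reference, a plain rescue that runs through all of the copy passes and later phases described here looks something like this (the device name, image name, and map/log file name are only placeholders for your own setup; -d selects direct disc access for the input device):

  ddrescue -d /dev/sdX rescue.img rescue.map

The last argument is the map (log) file; keep it with the rescue, because it is what lets ddrescue be stopped and resumed without losing track of which blocks are in which state.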

The next phase after the copy phase is the trimming phase. This phase reads all the non-trimmed blocks in order. To be more accurate, it reads one sector at a time forwards from the start of each block until it encounters an error, then reads one sector at a time backwards from the end of that block until it encounters another error. In other words, it reads inward from both edges of each block until it hits a read error. Whatever untried data is left in the middle is marked as non-scraped. Depending on the nature of the errors, this phase can take a long time.

After the trimming phase comes the scraping phase (used to be the splitting phase). The scraping phase reads all the non-scraped blocks in order, one sector at a time from the beginning of each block. It continues to do this until it reaches the end, at which point all the sectors on the disk have been attempted. Depending on the nature of the errors, this phase can take a very long time.

Once the trimming phase is done, if you have only a small error size you may wish to attempt retries of the bad areas by using the --retry-passes option (-r). Just understand that by doing this you are focusing on a bad spot, and that could make it worse (which is why we save it for last). You also may not get any more successful reads, so don’t expect any miracles from retries. Also, if you have a large error size, the retries will take a very long time, possibly close to as long as the whole recovery has taken so far.
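For example, to add three retry passes over whatever bad sectors remain (device and file names are placeholders again):

  ddrescue -d --retry-passes=3 /dev/sdX rescue.img rescue.map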

Ddrescue version 1.19 has some very good options for controlling which phases and copy passes you wish to run. For instance, you can skip the trimming phase with the --no-trim option, and you can skip the scraping phase with the --no-scrape option. Note that if you use --no-trim, the scraping phase will still try to run, but because there was no trimming there should be no areas marked as non-scraped, so the scraping phase will not do anything. To continue the rescue including these phases, just remove the options from your ddrescue command.
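So a run that skips both of those phases would look roughly like this (again, the device and file names are placeholders):

  ddrescue -d --no-trim --no-scrape /dev/sdX rescue.img rescue.map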

The copy passes can also be controlled with the --cpass option. So if you wanted to do only copy passes 1 and 2, you would use the options --cpass=1,2 --no-trim --no-scrape. This stops the rescue after the first two copy passes so you can examine it and decide the best way to continue. Just be aware of something very important: if you don’t run copy pass 3, you will most likely have non-tried areas left over! Also, if copy pass 3 has not completed (either because of the options used or because you stopped the rescue with ctrl-C) and you then run the ddrescue command again, it will start with copy pass 1 again.
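Put together, such a first run might look like this (placeholders as before):

  ddrescue -d --cpass=1,2 --no-trim --no-scrape /dev/sdX rescue.img rescue.map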

The following assumes a default ddrescue command (no special pass options). If you stop ddrescue during copy pass 1 and then resume, it will continue from where it left off. If you stop ddrescue during copy pass 2 or 3, it will start with copy pass 1 again, but from the last input position.

So now that you know this, if you only performed the first two copy passes (--cpass=1,2 --no-trim --no-scrape), then when you resume you may wish to use the option --cpass=3 (and drop --no-trim and --no-scrape) so it will do what it should next.
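Using the same image and map file as before, the resumed command would look roughly like:

  ddrescue -d --cpass=3 /dev/sdX rescue.img rescue.map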

I intend my next post to be about how to implement the commands, but it may be a bit before I can get to it.



