qemu-block

Re: [Qemu-block] write_zeroes/trim on the whole disk


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-block] write_zeroes/trim on the whole disk
Date: Sat, 24 Sep 2016 23:19:53 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0

On 24.09.2016 21:24, Alex Bligh wrote:
> On 24 Sep 2016, at 18:47, Vladimir Sementsov-Ogievskiy <address@hidden> wrote:
>
>> I just wanted to say that if we want the possibility of clearing the whole
>> disk in one request for qcow2, we have to take 512 as the granularity for
>> such requests (with X = 9). And this is too small: 1 TiB will be the upper
>> bound for the request.
>
> Sure. But I do not see the value in optimising these huge commands to run as
> single requests. If you want to do that, do it properly and have a
> negotiation-phase flag that supports 64-bit request lengths.

And add an additional request type, with another magic number in the first field and a 64-bit length field? If such a solution is appropriate for NBD, it is of course OK with me. I proposed something like this in my first letter: "Increase length field of the request to 64bit". Changing the existing request message type is wrong of course, but creating an additional one should be OK.
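
A minimal sketch of what such an additional request type could look like next to the current one; NBD_EXT_REQUEST_MAGIC and struct nbd_ext_request are assumptions for illustration, not part of the protocol:

#include <stdint.h>

/* Current NBD request layout (all fields big-endian on the wire). */
struct nbd_request {
    uint32_t magic;   /* NBD_REQUEST_MAGIC (0x25609513) */
    uint32_t type;    /* NBD_CMD_WRITE_ZEROES, NBD_CMD_TRIM, ... */
    uint64_t handle;
    uint64_t from;    /* offset in bytes */
    uint32_t len;     /* 32-bit length: the limit under discussion */
} __attribute__((packed));

/* Hypothetical extended request: a different magic number in the first
 * field distinguishes it, and the length field is widened to 64 bits,
 * enough to cover a whole disk in one WRITE_ZEROES/TRIM. */
struct nbd_ext_request {
    uint32_t magic;   /* NBD_EXT_REQUEST_MAGIC (assumed constant) */
    uint32_t type;
    uint64_t handle;
    uint64_t from;
    uint64_t len;     /* 64-bit length */
} __attribute__((packed));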


>> Full backup, for example:
>>
>> 1. target can do fast write_zeroes: clear the whole disk (great if we can
>>    do it in one request, without splitting, etc.), then back up all data
>>    except zero or unallocated areas (saving a lot of time by skipping them).
>> 2. target cannot do fast write_zeroes: just back up all data. We need not
>>    clear the disk, as we will not save time by doing so.
>>
>> So here we do not need splitting in general. Just clear all or do not
>> clear at all.
> As I said, within the current protocol you cannot tell whether a target
> supports 'fast write zeroes', and indeed the support may be partial - for
> instance with a QCOW2 backend, a write that is not cluster aligned would
> likely only partially satisfy the command by deallocating bytes. There is no
> current flag for 'supports fast write zeroes' and (given the foregoing) it
> isn't evident to me exactly what it would mean.

I suggest adding this flag as a negotiation-phase flag, exposing support for the whole feature (a separate command or flag for clearing the whole disk). "Fast" here means that we can do it in one request: write_zeroes (of any size, up to the whole disk) is fast if it does not take more time than a usual write (restricted to 2G).
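
A sketch of the backup strategy this implies; NBD_FLAG_SEND_FAST_ZEROES and the helper functions are assumptions for illustration only, nothing here exists in the protocol today:

#include <stdint.h>

/* Hypothetical negotiation flag; not part of the current NBD protocol. */
#define NBD_FLAG_SEND_FAST_ZEROES (1 << 10)

void clear_whole_disk(void);   /* one big WRITE_ZEROES, as proposed */
void copy_data_chunks(void);   /* copy only allocated, non-zero areas */
void copy_all_chunks(void);    /* copy everything */

void full_backup(uint16_t export_flags)
{
    if (export_flags & NBD_FLAG_SEND_FAST_ZEROES) {
        /* Case 1: target zeroes fast - clear once, then skip zero and
         * unallocated areas while copying. */
        clear_whole_disk();
        copy_data_chunks();
    } else {
        /* Case 2: no fast zeroing - clearing would not save time,
         * so just copy all data. */
        copy_all_chunks();
    }
}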


> It seems however you could support your use case by simply iterating through
> the backup disk, using NBD_CMD_WRITE for the areas that are allocated and
> non-zero, and using NBD_CMD_WRITE_ZEROES for the areas that are not allocated
> or zeroed. This technique would not require a protocol change (beyond the
> existing NBD_CMD_WRITE_ZEROES extension), works irrespective of whether the
> target supports fast write zeroes or not, works irrespective of differences
> in cluster allocation size between source and target, is far simpler, and
> has the added advantage of turning the existing zeroes-but-not-holes areas
> into holes (optional, if you can tell the difference between zeroes and
> holes on the source media). It also works in a single pass. Yes, you need to
> split requests up, but you need to split requests up ANYWAY to cope with
> NBD_CMD_WRITE's 2^32-1 length limit (I strongly advise you not to use more
> than 2^31). And you probably want to parallelise reads and writes and have
> more than one write in flight, all of which suggests you are going to be
> breaking up requests anyway.
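
A sketch of that iteration; chunk_is_data(), read_source(), nbd_write() and nbd_write_zeroes() are assumed helpers (wrapping NBD_CMD_WRITE and NBD_CMD_WRITE_ZEROES), not real client APIs:

#include <stdint.h>
#include <stdbool.h>

#define MAX_REQ (1u << 31)   /* stay at 2^31 per the advice above */

bool chunk_is_data(uint64_t off, uint32_t len);   /* allocated and non-zero? */
void read_source(uint64_t off, uint32_t len, void *buf);
void nbd_write(uint64_t off, uint32_t len, const void *buf); /* NBD_CMD_WRITE */
void nbd_write_zeroes(uint64_t off, uint32_t len);  /* NBD_CMD_WRITE_ZEROES */

void copy_disk(uint64_t disk_size, uint32_t chunk, void *buf)
{
    /* chunk must be <= MAX_REQ; a single pass over the disk. */
    for (uint64_t off = 0; off < disk_size; off += chunk) {
        uint32_t len = (disk_size - off < chunk)
                       ? (uint32_t)(disk_size - off) : chunk;
        if (chunk_is_data(off, len)) {
            read_source(off, len, buf);
            nbd_write(off, len, buf);
        } else {
            nbd_write_zeroes(off, len);   /* also turns zeroes into holes */
        }
    }
}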

This is slow; see my first letter. Iterative zeroing of qcow2 is slow.

Why is a separate command/flag for clearing the whole disk better for me than a block-based solution with split requests? I want to clear the whole disk, and I don't want to introduce new functionality which I don't need for now. I need to clear the whole disk, but with the block-based solution I get a lot of code which solves a different task and only indirectly solves mine. I.e., instead of simple_realisation+simple_usage+nice_solution_for_my_task I get harder_realisation+harder_usage+ugly_solution_for_my_task.

I understand that we must take into account that such functionality (large requests) will likely be needed in the future, so a more generic solution is better for the protocol. So I suggest a compromise:

negotiation-phase flag NBD_FLAG_SEND_BIG_REQUEST : command flag NBD_CMD_FLAG_BIG_REQUEST is supported for WRITE_ZEROES and TRIM
negotiation-phase flag NBD_FLAG_SEND_BIG_REQUEST_REGION : non-zero length is supported for a big request

flag NBD_CMD_FLAG_BIG_REQUEST is set and length = 0 -> request on the whole disk; offset must be 0
flag NBD_CMD_FLAG_BIG_REQUEST is set and length > 0 -> request on (offset*block_size, length*block_size); length*block_size must be <= disk_size (only if NBD_FLAG_SEND_BIG_REQUEST_REGION is negotiated)
flag NBD_CMD_FLAG_BIG_REQUEST is unset -> usual request on (offset, length)
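
A sketch of how a server could decode these proposed semantics; the flag value and decode_range() are assumptions for illustration, nothing here exists in the protocol today:

#include <stdint.h>
#include <stdbool.h>
#include <errno.h>

/* Proposed command flag; placeholder value. */
#define NBD_CMD_FLAG_BIG_REQUEST (1 << 4)

/* Translate a WRITE_ZEROES/TRIM request into a byte range.
 * Returns 0 on success, -EINVAL on a malformed request. */
int decode_range(uint16_t cmd_flags, uint64_t offset, uint32_t length,
                 uint64_t disk_size, uint32_t block_size,
                 bool region_negotiated,
                 uint64_t *start, uint64_t *bytes)
{
    if (!(cmd_flags & NBD_CMD_FLAG_BIG_REQUEST)) {
        *start = offset;                  /* usual request, in bytes */
        *bytes = length;
        return 0;
    }
    if (length == 0) {                    /* whole-disk request */
        if (offset != 0)
            return -EINVAL;               /* offset must be 0 */
        *start = 0;
        *bytes = disk_size;
        return 0;
    }
    /* Region form, counted in blocks; needs the extra negotiation flag. */
    if (!region_negotiated)
        return -EINVAL;
    *start = offset * block_size;
    *bytes = (uint64_t)length * block_size;
    if (*bytes > disk_size || *start + *bytes > disk_size)
        return -EINVAL;                   /* length*block_size <= disk_size */
    return 0;
}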

....

or a separate command/flag for clearing the whole disk, and a separate block-based solution in the future if needed.

....

or a new request type with a 64-bit length


--
Best regards,
Vladimir




