qemu-block
Re: [Qemu-block] [PATCH 0/9] nbd block status base:allocation


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-block] [PATCH 0/9] nbd block status base:allocation
Date: Fri, 9 Mar 2018 22:28:35 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0

09.03.2018 22:08, Eric Blake wrote:
On 02/15/2018 07:51 AM, Vladimir Sementsov-Ogievskiy wrote:
Hi all.

Here is a minimal realization of the base:allocation context of the NBD
block-status extension, which allows getting block status through
NBD.

Vladimir Sementsov-Ogievskiy (9):
   nbd/server: add nbd_opt_invalid helper
   nbd: change indenting in nbd.h
   nbd: BLOCK_STATUS for standard get_block_status function: server part
   block/nbd-client: save first fatal error in nbd_iter_error
   nbd/client: fix error messages in nbd_handle_reply_err
   nbd: BLOCK_STATUS for standard get_block_status function: client part
   iotests.py: tiny refactor: move system imports up
   iotests: add file_path helper
   iotests: new test 206 for NBD BLOCK_STATUS
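For readers unfamiliar with the extension: base:allocation is a metadata context negotiated over NBD, after which the client can issue NBD_CMD_BLOCK_STATUS requests and receive extents flagged as holes and/or zeroes. The sketch below is not taken from this series; it just packs and parses the relevant wire structures, using constants as given in the NBD protocol spec (the helper names are hypothetical):

```python
import struct

# Constants as defined in the NBD protocol specification.
NBD_REQUEST_MAGIC = 0x25609513
NBD_CMD_BLOCK_STATUS = 7

# Extent flags reported by the base:allocation metadata context.
NBD_STATE_HOLE = 1 << 0  # extent is not allocated
NBD_STATE_ZERO = 1 << 1  # extent reads as zeroes

def block_status_request(cookie, offset, length):
    """Pack an NBD_CMD_BLOCK_STATUS request header (28 bytes, big-endian):
    magic(4) flags(2) type(2) cookie(8) offset(8) length(4)."""
    return struct.pack(">IHHQQI", NBD_REQUEST_MAGIC, 0,
                       NBD_CMD_BLOCK_STATUS, cookie, offset, length)

def parse_extents(payload):
    """Decode the payload of an NBD_REPLY_TYPE_BLOCK_STATUS chunk:
    a 4-byte metadata context id followed by (length, flags) pairs."""
    ctx_id, = struct.unpack_from(">I", payload, 0)
    extents = [struct.unpack_from(">II", payload, off)
               for off in range(4, len(payload), 8)]
    return ctx_id, extents
```

For example, `block_status_request(1, 0, 65536)` yields the 28-byte header asking for the status of the first 64 KiB of the export.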

I'd really like to send a PULL request for NBD on Monday, in order to make the 2.12 softfreeze deadline (this is a new feature, so if we miss Tuesday, we have to wait until 2.13 or whatever the next release is called).  Where do you stand on rebasing this, and what help can I offer? (I know you have factored out some of the patches in another thread that I'm in the middle of reviewing as well; you can submit the later patches even before the earlier ones land, and use a 'Based-on:' tag in the cover letter to make the dependencies between the series obvious.)


I'm now at the start of "Re: [PATCH 6/9] nbd: BLOCK_STATUS for standard get_block_status function: client part", and I think rebasing on byte-based is too much for me at the moment (10:20 pm =). I'll do my best on Monday morning, as early as I can.

--
Best regards,
Vladimir
