Re: [Qemu-devel] [PATCH] nbd: strict nbd_wr_syncv


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-devel] [PATCH] nbd: strict nbd_wr_syncv
Date: Tue, 16 May 2017 13:16:51 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0

16.05.2017 12:51, Paolo Bonzini wrote:
> On 16/05/2017 11:32, Vladimir Sementsov-Ogievskiy wrote:
>> 16.05.2017 12:10, Vladimir Sementsov-Ogievskiy wrote:
>>> 15.05.2017 12:43, Vladimir Sementsov-Ogievskiy wrote:
>>>> I mean, make negotiation behave like normal nbd communication:
>>>> non-blocking socket + yield, so that other coroutines can do their
>>>> work while the nbd-negotiation coroutine waits for io.
> Some callers of bdrv_open may not allow reentrancy.  For example:
>
>         handle_qmp_command
>         -> qmp_dispatch
>         -> do_qmp_dispatch
>         -> qmp_marshal_blockdev_add
>         -> qmp_blockdev_add
>         -> bds_tree_init
>         -> bdrv_open
>
> You cannot return to the monitor before qmp_blockdev_add is done,
> otherwise you don't have a return value for handle_qmp_command to pass
> to monitor_json_emitter.

Hmm. What about doing something like bdrv_pread (ultimately, bdrv_prwv_co) does for the non-coroutine case, i.e. calling aio_poll in a loop until the coroutine finishes?
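A minimal sketch of that pattern, modelled on bdrv_prwv_co(); note that nbd_negotiate_sync, NBDNegotiateCo and nbd_negotiate_one below are hypothetical names just to show the shape, not existing QEMU functions:

#include "qemu/osdep.h"
#include "qemu/coroutine.h"
#include "block/aio.h"
#include "io/channel.h"

/* Stand-in for the real negotiation body (hypothetical name). */
static int coroutine_fn nbd_negotiate_one(QIOChannel *ioc);

typedef struct NBDNegotiateCo {
    QIOChannel *ioc;
    int ret;
    bool done;
} NBDNegotiateCo;

static void coroutine_fn nbd_negotiate_co_entry(void *opaque)
{
    NBDNegotiateCo *nco = opaque;

    nco->ret = nbd_negotiate_one(nco->ioc); /* may yield on I/O */
    nco->done = true;                       /* terminates the poll loop below */
}

static int nbd_negotiate_sync(QIOChannel *ioc)
{
    NBDNegotiateCo nco = { .ioc = ioc, .ret = -EINPROGRESS };

    if (qemu_in_coroutine()) {
        /* Fast path: already in coroutine context, just call it. */
        nbd_negotiate_co_entry(&nco);
    } else {
        Coroutine *co = qemu_coroutine_create(nbd_negotiate_co_entry, &nco);
        qemu_coroutine_enter(co);
        /* Drive the event loop until the coroutine terminates, like
         * bdrv_prwv_co() does. */
        while (!nco.done) {
            aio_poll(qemu_get_aio_context(), true);
        }
    }
    return nco.ret;
}

That way qmp_blockdev_add still gets its return value synchronously, while the negotiation coroutine itself stays non-blocking inside.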


>>> Also, one more question here: in nbd_negotiate_write(), why do we need
>>> qio_channel_add_watch? write_sync will yield with qio_channel_yield()
>>> until the io completes, so why use two mechanisms to wake up a coroutine?
>> Hmm, these nbd_negotiate_* functions were introduced in 1a6245a5b, when
>> nbd_wr_syncv was working through qemu_co_sendv_recvv, which just yields
>> without setting any handlers. But now nbd_wr_syncv works through
>> qio_channel_yield(), which sets handlers, so the code with the extra
>> watch looks wrong.
> Yes, I think you're right about that.
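
For reference, roughly what the code in question looks like, paraphrased from nbd/server.c as of 1a6245a5b rather than quoted verbatim (write_sync() is nbd's internal helper that ends up in nbd_wr_syncv()):

static gboolean nbd_negotiate_continue(QIOChannel *ioc,
                                       GIOCondition condition,
                                       void *opaque)
{
    qemu_coroutine_enter(opaque);
    return TRUE;
}

static ssize_t nbd_negotiate_write(QIOChannel *ioc, void *buffer, size_t size)
{
    ssize_t ret;
    /* Mechanism 1: an explicit watch that re-enters the coroutine when
     * the channel becomes writable... */
    guint watch = qio_channel_add_watch(ioc, G_IO_OUT,
                                        nbd_negotiate_continue,
                                        qemu_coroutine_self(), NULL);

    /* ...but mechanism 2 already lives below: on EAGAIN, nbd_wr_syncv()
     * now calls qio_channel_yield(ioc, G_IO_OUT), which registers its
     * own handler and re-enters the coroutine by itself, making the
     * watch above redundant. */
    ret = write_sync(ioc, buffer, size);
    g_source_remove(watch);
    return ret;
}

So the fix is simply to drop the watch/remove pair and let qio_channel_yield() do the wake-up.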

Ok, I'll make a patch for it and finish the LOG->errp conversion.


> Paolo


--
Best regards,
Vladimir



