
Re: [Qemu-devel] [PATCH 0/3] *** make netlayer re-entrant ***


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH 0/3] *** make netlayer re-entrant ***
Date: Thu, 7 Mar 2013 10:31:03 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Mar 07, 2013 at 10:06:52AM +0800, liu ping fan wrote:
> On Wed, Mar 6, 2013 at 5:30 AM, mdroth <address@hidden> wrote:
> > On Sun, Mar 03, 2013 at 09:21:19PM +0800, Liu Ping Fan wrote:
> >> From: Liu Ping Fan <address@hidden>
> >>
> >> This series aims to make the netlayer re-entrant, so the netlayer
> >> can run outside the big lock safely.
> >
> > I think most of the locking considerations are still applicable either
> > way, but this series seems to be written under the assumption that
> > we'll be associating hubs/ports with separate AioContexts to facilitate
> > driving the event handling outside of the iothread. Is this the case?
> >
> Yes.
> > From what I gathered from the other thread, the path forward was to
> > replace the global iohandler list that we currently use to drive
> > NetClient events with a GSource and GMainContext, rather than relying
> > on AioContexts.
> >
> I'm not quite sure about that. AioContext seems to be built on GSource,
> so I think they are similar, and AioContext is easy to reuse.
> 
> > I do agree that the event handlers currently grouped under
> > iohandler.c:io_handlers look like a nice fit for AioContexts, but other
> > things like slirp and chardevs seem better served by a more general
> > mechanism like GSources/GMainContexts. The chardev flow control patches
> > seem to be doing something similar already as well.
> >
> I have made some fixes for this series, apart from the concern about
> GSource/AioContext.  I hope to discuss it clearly in the next version
> and fix it there too.  BTW, what do we gain from AioContext beyond what
> GSource already provides?

This is a good discussion.  I'd like to hear more about using glib event
loop concepts instead of rolling our own.  Here are my thoughts after
exploring the glib main loop vs AioContext.

AioContext supports two things:

1. BHs, which are similar to GIdle for scheduling functions that get
   called from the event loop.

   Note that GIdle doesn't work in aio_poll() because we don't integrate
   the glib event loop there.

2. aio_poll(), which goes a step beyond iohandlers because the
   ->io_flush() handler signals whether there are pending aio requests.
   This way aio_poll() can be called in a loop until all pending
   requests have finished.

   Imagine block/iscsi.c which has a TCP socket to the iSCSI target.
   We're ready to receive from the socket but we only want to wait until
   all pending requests complete.  That means the socket fd is always
   looking for G_IO_IN events but we shouldn't wait unless there are
   actually iSCSI requests pending.

   This feature is important for the synchronous (nested event loop)
   functionality that QEMU relies on for bdrv_drain_all() and
   synchronous I/O (bdrv_read(), bdrv_write(), bdrv_flush()).
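
To make the two features above concrete, here is a minimal sketch against
the AioContext API of roughly this era (aio_bh_new()/qemu_bh_schedule()
for BHs, aio_set_fd_handler() with an io_flush callback).  The iscsi_*
names and the IscsiState struct are hypothetical stand-ins loosely
modelled on block/iscsi.c, and exact signatures may differ between QEMU
versions:

#include "block/aio.h"    /* AioContext, QEMUBH, aio_set_fd_handler() */

/* Hypothetical per-connection state, loosely modelled on block/iscsi.c. */
typedef struct IscsiState {
    int fd;                 /* TCP socket to the iSCSI target */
    int pending_requests;   /* outstanding aio requests */
    QEMUBH *completion_bh;
} IscsiState;

/* BH: runs from the event loop, much like a GIdle callback would. */
static void iscsi_completion_bh(void *opaque)
{
    IscsiState *s = opaque;
    /* ... complete finished requests, invoke user callbacks ... */
}

/* io_read: the fd is always watched for incoming data. */
static void iscsi_read_cb(void *opaque)
{
    IscsiState *s = opaque;
    /* ... read the response PDU from s->fd ... */
    qemu_bh_schedule(s->completion_bh);
}

/* io_flush: non-zero means "requests are still pending", so aio_poll()
 * keeps waiting on this fd; zero means the fd can be ignored. */
static int iscsi_process_flush(void *opaque)
{
    IscsiState *s = opaque;
    return s->pending_requests > 0;
}

static void iscsi_attach(IscsiState *s, AioContext *ctx)
{
    s->completion_bh = aio_bh_new(ctx, iscsi_completion_bh, s);
    aio_set_fd_handler(ctx, s->fd,
                       iscsi_read_cb,        /* io_read */
                       NULL,                 /* io_write */
                       iscsi_process_flush,  /* io_flush */
                       s);
}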

The glib equivalent to aio_poll() is g_main_context_iteration():

https://developer.gnome.org/glib/2.30/glib-The-Main-Event-Loop.html#g-main-context-iteration

But note that the return value is different.  g_main_context_iteration()
tells you if any event handlers were called.  aio_poll() tells you
whether more calls are necessary in order to reach a quiescent state
(all requests completed).
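
As an illustration of that difference, the two "drain" loops look almost
identical but mean different things (ctx and mctx below are whichever
AioContext and GMainContext the caller owns; this is only a hedged
sketch of the pattern, not code from the tree):

/* AioContext: keep iterating until no requests are pending, i.e. until
 * every io_flush() handler reports 0.  This is the nested event loop
 * that bdrv_drain_all() and synchronous I/O rely on. */
while (aio_poll(ctx, true)) {
    /* more work was found; not yet quiescent */
}

/* glib: keep iterating until no event handlers were dispatched in this
 * iteration.  That only tells you the loop went idle once, not that the
 * subsystem has reached a quiescent state. */
while (g_main_context_iteration(mctx, FALSE)) {
    /* handlers ran; pending work may or may not remain */
}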

I guess it's time to look at the chardev flow control patches to see how
glib event loop concepts are being used.
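
One more data point for the GSource-versus-AioContext question raised
earlier in the thread: an AioContext already exposes itself as a GSource
via aio_get_g_source(), so it can be attached to a GMainContext rather
than competing with it.  A hedged sketch (attaching to the default main
context is purely illustrative, and the aio_context_new() signature has
varied across versions):

AioContext *ctx = aio_context_new();
GSource *src = aio_get_g_source(ctx);

/* Drive the AioContext from a glib main loop instead of (or in addition
 * to) calling aio_poll() on it directly. */
g_source_attach(src, g_main_context_default());
g_source_unref(src);    /* the main context now holds its own reference */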

Stefan


