Re: [Qemu-devel] [RFC v2 0/8] monitor: allow per-monitor thread


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [RFC v2 0/8] monitor: allow per-monitor thread
Date: Thu, 7 Sep 2017 10:15:09 +0100
User-agent: Mutt/1.8.3 (2017-05-23)

* Peter Xu (address@hidden) wrote:
> On Wed, Sep 06, 2017 at 12:54:28PM +0100, Daniel P. Berrange wrote:
> > On Wed, Sep 06, 2017 at 12:31:58PM +0100, Dr. David Alan Gilbert wrote:
> > > * Daniel P. Berrange (address@hidden) wrote:
> > > > This does imply that you need to separate monitor I/O processing from
> > > > the command execution thread, but I see no need for all commands to suddenly
> > > > become async. Just allowing interleaved replies is sufficient from the
> > > > POV of the protocol definition. This interleaving is easy to handle from
> > > > the client POV - just requires a unique 'serial' in the request by the
> > > > client, that is copied into the reply by QEMU.
> > > 
> > > OK, so for that we can just take Marc-André's syntax and call it 'id':
> > >   https://lists.gnu.org/archive/html/qemu-devel/2017-01/msg03634.html
> > > 
> > > then it's up to the caller to ensure those ids are unique.
> > 
> > Libvirt has in fact generated a unique 'id' for every monitor command
> > since day 1 of supporting QMP.
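
For reference, QMP already supports this kind of tagging on the wire: a client
may add an "id" member to any request, and QEMU copies it unaltered into the
matching reply, so interleaved replies can still be correlated.  The command
and id values below are purely illustrative:

    { "execute": "query-status", "id": "libvirt-42" }

    { "return": { "status": "running", "singlestep": false, "running": true },
      "id": "libvirt-42" }
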
> > 
> > > I do worry about two things:
> > >   a) With this the caller doesn't really know which commands could run
> > >   in parallel - for example, if we've got a recovery command that's
> > >   executed by this non-locking thread, that's OK; we expect that
> > >   to be doable in parallel.  If in the future though we do
> > >   what you initially suggested and have a bunch of commands get
> > >   routed to the migration thread (say), then those would suddenly
> > >   operate in parallel with other commands that were previously
> > >   synchronous.
> > 
> > We could still have an opt-in for async commands, e.g. default to executing
> > all commands in the main thread, unless the client issues an explicit
> > "make it async" command to switch to allowing the migration thread to
> > process them asynchronously.
> > 
> >  { "execute": "qmp_allow_async",
> >    "data": { "commands": [
> >        "migrate_cancel",
> >    ] } }
> > 
> > 
> >  { "return": { "commands": [
> >        "migrate_cancel",
> >    ] } }
> > 
> > The server response contains the subset of commands from the request
> > for which async is supported.
> > 
> > That gives good negotiation ability going forward as we incrementally
> > support async on more commands.
> 
> I think this goes back to the discussion on which design we'd like to
> choose.  IMHO the whole async idea plus the per-command id is indeed
> cleaner and nicer, and I believe it can benefit not only libvirt
> but also other QMP users.  The problem is, I have no idea how long
> it'll take to get such a feature - I believe it will require both
> QEMU and libvirt to support it.  And it'll be a pity if postcopy
> recovery cannot work only because we cannot guarantee a stable
> monitor.

libvirt will need changes for postcopy recovery however we do it,
so we need to do it the way they want.

I think Dan's suggestion isn't as hard as it initially sounded; the first
thing to try would be moving all the monitor I/O onto another thread
and feeding all commands to the main thread for execution - that sounds
like the hard part.
(I'm not sure how multiple monitors would interact for this.)
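
A very rough sketch of that split, using plain GLib rather than QEMU's actual
monitor/chardev code, with all names invented for illustration: the I/O thread
runs its own GMainContext and hands each parsed command to the default (main)
context for execution.

    /* Sketch only: plain GLib, hypothetical names, not the real QEMU monitor code. */
    #include <glib.h>

    /* A parsed command handed from the monitor I/O thread to the main thread. */
    typedef struct {
        char *json;
    } MonCommand;

    /* Runs in the default (main) context: execute the command there. */
    static gboolean execute_command(gpointer data)
    {
        MonCommand *cmd = data;
        g_print("main thread executes: %s\n", cmd->json);
        g_free(cmd->json);
        g_free(cmd);
        return G_SOURCE_REMOVE;              /* one-shot */
    }

    /* Monitor I/O thread: owns its own context; it only parses and queues. */
    static gpointer monitor_io_thread(gpointer data)
    {
        GMainContext *io_ctx = g_main_context_new();
        GMainLoop *io_loop = g_main_loop_new(io_ctx, FALSE);

        /* In the real thing a watch on the monitor chardev would be attached
         * to io_ctx and its callback would build MonCommand objects; here we
         * just queue one canned command to the default (main) context. */
        MonCommand *cmd = g_new0(MonCommand, 1);
        cmd->json = g_strdup("{\"execute\": \"migrate_cancel\", \"id\": 1}");
        g_main_context_invoke(NULL, execute_command, cmd);

        g_main_loop_run(io_loop);            /* keep servicing monitor I/O */
        g_main_loop_unref(io_loop);
        g_main_context_unref(io_ctx);
        return NULL;
    }

    int main(void)
    {
        GMainLoop *main_loop = g_main_loop_new(NULL, FALSE);  /* stands in for QEMU's main loop */
        g_thread_new("mon-io", monitor_io_thread, NULL);
        g_main_loop_run(main_loop);
        return 0;
    }
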

Dave

> I'm curious whether there are other requirements (besides postcopy
> recovery) that would want an always-alive monitor to run some
> lock-free commands.  If there are, I'd be more inclined to first
> provide a work-around solution like "-qmp-lockfree", and we can
> provide a better solution afterwards, when the whole async QMP
> work is ready.
> 
> > 
> > >   b) I still worry about how the various I/O channels will behave on
> > >   another thread.  But that's more of a general feeling than anything
> > >   specific.
> > 
> > The only complexity will be around making sure the Chardev code uses
> > the right GMainContext for any watches on the underlying QIOChannel,
> > so that we poll() from the custom thread instead of the main thread.
> > IOW, as long as all I/O is done from the single thread everything
> > should work fine.
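
Concretely, that "right GMainContext" point looks something like the sketch
below (plain GLib only, with GIOChannel standing in for QEMU's QIOChannel and
the names invented for illustration): the watch source is attached to the
per-monitor context instead of the default one, so the custom thread does the
poll().

    /* Sketch only: plain GLib, with GIOChannel standing in for QEMU's QIOChannel. */
    #include <glib.h>
    #include <unistd.h>

    static gboolean monitor_readable(GIOChannel *chan, GIOCondition cond, gpointer data)
    {
        /* Read and parse monitor input here; this runs in whichever thread
         * polls the context the watch was attached to. */
        return TRUE;                          /* keep the watch installed */
    }

    /* Attach the monitor's read watch to the per-monitor context instead of
     * the default (main-loop) context, so the custom thread does the poll(). */
    static void attach_monitor_watch(GIOChannel *chan, GMainContext *mon_ctx)
    {
        GSource *src = g_io_create_watch(chan, G_IO_IN | G_IO_HUP);
        g_source_set_callback(src, (GSourceFunc)monitor_readable, NULL, NULL);
        g_source_attach(src, mon_ctx);        /* NULL here would mean the main context */
        g_source_unref(src);
    }

    int main(void)
    {
        GMainContext *mon_ctx = g_main_context_new();
        GIOChannel *chan = g_io_channel_unix_new(STDIN_FILENO);  /* stand-in for the monitor chardev */
        attach_monitor_watch(chan, mon_ctx);

        GMainLoop *loop = g_main_loop_new(mon_ctx, FALSE);        /* would run in the monitor thread */
        g_main_loop_run(loop);
        return 0;
    }
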
> > 
> > Regards,
> > Daniel
> > -- 
> > |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> > |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> > |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
> -- 
> Peter Xu
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


