Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support


From: Peter Xu
Subject: Re: [Qemu-devel] [RFC 00/15] QMP: out-of-band (OOB) execution support
Date: Tue, 19 Sep 2017 10:23:12 +0800
User-agent: Mutt/1.5.24 (2015-08-30)

On Mon, Sep 18, 2017 at 11:40:40AM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (address@hidden) wrote:
> > On Fri, Sep 15, 2017 at 04:17:07PM +0100, Dr. David Alan Gilbert wrote:
> > > * Stefan Hajnoczi (address@hidden) wrote:
> > > > On Fri, Sep 15, 2017 at 01:29:13PM +0100, Daniel P. Berrange wrote:
> > > > > On Fri, Sep 15, 2017 at 01:19:56PM +0100, Dr. David Alan Gilbert 
> > > > > wrote:
> > > > > > * Daniel P. Berrange (address@hidden) wrote:
> > > > > > > On Fri, Sep 15, 2017 at 01:06:44PM +0100, Dr. David Alan Gilbert 
> > > > > > > wrote:
> > > > > > > > * Daniel P. Berrange (address@hidden) wrote:
> > > > > > > > > On Fri, Sep 15, 2017 at 11:49:26AM +0100, Stefan Hajnoczi 
> > > > > > > > > wrote:
> > > > > > > > > > On Fri, Sep 15, 2017 at 11:50:57AM +0800, Peter Xu wrote:
> > > > > > > > > > > On Thu, Sep 14, 2017 at 04:19:11PM +0100, Stefan Hajnoczi 
> > > > > > > > > > > wrote:
> > > > > > > > > > > > On Thu, Sep 14, 2017 at 01:15:09PM +0200, Marc-André 
> > > > > > > > > > > > Lureau wrote:
> > > > > > > > > > > > > There should be a limit in the number of requests the 
> > > > > > > > > > > > > thread can
> > > > > > > > > > > > > queue. Before the patch, the limit was enforced by 
> > > > > > > > > > > > > system socket
> > > > > > > > > > > > > buffering I think. Now, should oob commands still be 
> > > > > > > > > > > > > processed even if
> > > > > > > > > > > > > the queue is full? If so, the thread can't be 
> > > > > > > > > > > > > suspended.
> > > > > > > > > > > > 
> > > > > > > > > > > > I agree.
> > > > > > > > > > > > 
> > > > > > > > > > > > Memory usage must be bounded.  The number of requests 
> > > > > > > > > > > > is less important
> > > > > > > > > > > > than the amount of memory consumed by them.
> > > > > > > > > > > > 
> > > > > > > > > > > > Existing QMP clients that send multiple QMP commands 
> > > > > > > > > > > > without waiting for
> > > > > > > > > > > > replies need to rethink their strategy because OOB 
> > > > > > > > > > > > commands cannot be
> > > > > > > > > > > > processed if queued non-OOB commands consume too much 
> > > > > > > > > > > > memory.
> > > > > > > > > > > 
> > > > > > > > > > > Thanks for pointing this out.  Yes, the memory usage
> > > > > > > > > > > problem is valid, as Markus pointed out as well in
> > > > > > > > > > > previous discussions (in the "Flow Control" section of
> > > > > > > > > > > that long reply).  Hopefully this series basically works
> > > > > > > > > > > from a design perspective; then I'll add the flow control
> > > > > > > > > > > in the next version.
> > > > > > > > > > > 
> > > > > > > > > > > Regarding what we should do if the limit is reached:
> > > > > > > > > > > Markus provided a few options, but the one I prefer most
> > > > > > > > > > > is that we don't respond, but send an event showing that
> > > > > > > > > > > a command was dropped.  However, I would like it not to
> > > > > > > > > > > be queued, but to be a direct reply (after all, it's an
> > > > > > > > > > > event, and we should not need to care much about its
> > > > > > > > > > > ordering).  Then we can get rid of the babysitting of
> > > > > > > > > > > those "to be failed" requests asap, meanwhile we don't
> > > > > > > > > > > lose anything IMHO.
> > > > > > > > > > > 
> > > > > > > > > > > I think I also missed at least a unit test for this new 
> > > > > > > > > > > interface.
> > > > > > > > > > > Again, I'll add it after the whole idea is proved solid.  
> > > > > > > > > > > Thanks,
> > > > > > > > > > 
> > > > > > > > > > Another solution: the server reports available receive 
> > > > > > > > > > buffer space to
> > > > > > > > > > the client.  The server only guarantees immediate OOB 
> > > > > > > > > > processing when
> > > > > > > > > > the client stays within the receive buffer size.
> > > > > > > > > > 
> > > > > > > > > > Clients wishing to take advantage of OOB must query the 
> > > > > > > > > > receive buffer
> > > > > > > > > > size and make sure to leave enough room.
> > > > > > > > > 
> > > > > > > > > I don't think having to query it ahead of time is 
> > > > > > > > > particularly nice,
> > > > > > > > > and of course it is inherently racy.
> > > > > > > > > 
> > > > > > > > > I would just have QEMU emit an event when it pauses
> > > > > > > > > processing of the incoming commands due to a full queue.
> > > > > > > > > If the event includes the ID of the last queued command,
> > > > > > > > > the client will know which (if any) of its outstanding
> > > > > > > > > commands are delayed.  Another event can be sent when it
> > > > > > > > > resumes reading.
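Such a pause/resume event pair could look something like the sketch
below.  The event names and fields here are invented for illustration
only; they are not an existing QEMU API:

```json
{"event": "COMMAND_QUEUE_FULL",
 "data": {"last-queued-id": "req-42"}}

{"event": "COMMAND_QUEUE_RESUMED"}
```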
> > > > > > > > 
> > > > > > > > Hmm and now we're implementing flow control!
> > > > > > > > 
> > > > > > > > a) What exactly are the current semantics/buffer sizes?
> > > > > > > > b) When do clients send multiple QMP commands on one channel 
> > > > > > > > without
> > > > > > > > waiting for the response to the previous command?
> > > > > > > > c) Would one queue entry for each class of commands/channel work?
> > > > > > > >   (Where a class of commands is currently 'normal' and 'oob')
> > > > > > > 
> > > > > > > I do wonder if we need to worry about request limiting at all 
> > > > > > > from the
> > > > > > > client side.  For non-OOB commands, clients will wait for a
> > > > > > > reply before sending a 2nd non-OOB command, so you'll never
> > > > > > > get a deep queue.
> > > > > > > 
> > > > > > > OOB commands are supposed to be things which can be handled 
> > > > > > > quickly
> > > > > > > without blocking, so even if a client sent several commands at 
> > > > > > > once
> > > > > > > without waiting for replies, they're going to be processed 
> > > > > > > quickly,
> > > > > > > so whether we temporarily block reading off the wire is a minor
> > > > > > > detail.
> > > > > > 
> > > > > > Let's just define it so that it can't - you send an OOB command
> > > > > > and wait for its response before sending another on that channel.
> > > > > > 
> > > > > > > IOW, I think we could just have a fixed 10-command queue and
> > > > > > > apps could just pretend that there's an infinite queue, and
> > > > > > > nothing bad would happen from the app's POV.
> > > > > > 
> > > > > > Can you justify 10 as opposed to 1?
> > > > > 
> > > > > Semantically I don't think it makes a difference if the OOB commands 
> > > > > are
> > > > > being processed sequentially by their thread.  A queue of
> > > > > length >1 would only matter for non-OOB commands if an app was
> > > > > filling the pipeline with non-OOB requests, as then that could
> > > > > block reading of OOB commands.
> > > > 
> > > > To summarize:
> > > > 
> > > > The QMP server has a lookahead of 1 command so it can dispatch
> > > > out-of-band commands.  If 2 or more non-OOB commands are queued at the
> > > > same time then OOB processing will not occur.
> > > > 
> > > > Is that right?
> > > 
> > > I think my view is slightly more complex:
> > >   a) There's a pair of queues for each channel
> > >   b) There's a central pair of queues on the QMP server
> > >     one for OOB commands and one for normal commands.
> > >   c) Each queue is only really guaranteed to be one deep.
> > > 
> > >   That means that each one of the channels can send a non-OOB
> > > command without getting in the way of a channel that wants
> > > to send one.
> > 
> > But the current version should not be that complex:
> > 
> > Firstly, the parser thread will only be enabled for QMP+NO_MIXED
> > monitors.
> > 
> > Then, we only have a single global queue for QMP non-OOB commands, and
> > we don't have a response queue yet.  We still respond just like before,
> > in a synchronous way (I explained why - for OOB we don't need that
> > complexity IMHO).
> 
> I think the discussion started because of two related comments:
>   Marc-André said:
>      'There should be a limit in the number of requests the thread can
> queue'
>   and Stefan said:
>      'Memory usage must be bounded.'
> 
> Actually, neither of those cases really worried me (because they only
> happen if the client keeps pumping commands, and that seems to be its
> fault).
> 
> However, once you start adding a limit, you've got to be careful - if
> you just added a limit to the central queue, then what happens if that
> queue is filled by non-OOB commands?

Ah!  So I misunderstood "a pair of queues for each channel".  I
thought it meant the input and output of a single monitor, while I
think it actually means "OOB channel" and "non-OOB channel".

My plan (or rather, this version) starts with only one global queue
for non-OOB commands.  There is no queue for OOB commands at all.  As
discussed below [1], if we receive an OOB command, we execute it
directly and reply to the client.  So here the "request queue" will
only queue non-OOB commands.  Maybe the name "request queue" sounds
confusing here.

If so, we should not have the above problem, right?  Even if the
queue is full (of course there will only be non-OOB commands in the
queue), the parsing is still working, and we will still be able to
handle OOB ones:

  req = parse(stream);

  if (is_oob(req)) {
    execute(req);
    return;
  }

  if (queue_full(req_queue)) {
    emit_full_event(req);
    return;
  }

  enqueue(req_queue, req);

So again, this version is simplified from the previous discussion (no
OOB queue, only a non-OOB queue; no response queue, only a request
queue; etc.), but I hope it can work.

Thanks,

> 
> Dave
> 
> > When we parse a command, we execute it directly if it is OOB;
> > otherwise we put it onto the request queue.  Request queue handling
> > is done by a main-thread QEMUBH.  That's all.

[1]

> > 
> > Would this "simple version" suffice to implement this whole OOB idea?
> > 
> > (Again, I really don't think we need to limit the queue length to 1,
> >  though we can make it small)
> > 
> > -- 
> > Peter Xu
> --
> Dr. David Alan Gilbert / address@hidden / Manchester, UK

-- 
Peter Xu


