From: Amit Shah
Subject: Re: [Qemu-devel] [PATCH 00/12][RFC] char: add flow control and fix guest_[open|close]
Date: Thu, 4 Aug 2011 20:16:38 +0530
User-agent: Mutt/1.5.21 (2010-09-15)

(Adding Gerd to cc)

On (Thu) 04 Aug 2011 [08:11:23], Anthony Liguori wrote:
> On 08/04/2011 01:45 AM, Amit Shah wrote:
> >On (Mon) 01 Aug 2011 [09:22:58], Anthony Liguori wrote:
> >>The char layer has been growing some nasty warts for some time now as
> >>we ask it to do things it was never intended to do.  It's been long
> >>overdue for an overhaul, and it's become evident to me that we need to
> >>address this first before adding any more features to the char layer.
> >>
> >>This series is the start of sanitizing the char layer.  It effectively
> >>turns the char layer into an internal pipe.  It supports flow control
> >>using an intermediate ring queue for each direction.
> >
> >Let's break down what we have now:
> >
> >* chardev -> guest (backend writes):
> >    we have connect/disconnect notifications, and we have an ugly
> >    can_read() implementation that is executed each time iohandlers are
> >    run.  However, it gives us reasonable flow control.
> 
> It has one major flaw.  It assumes that the backends can implement
> polling to determine when the front end can read.
> 
> This makes it impossible to implement a backend that isn't
> associated with a file descriptor (like a QMP server as required by
> the guest agent).

OK; is a ring the best way to get these 'into the fold'?  How about a
separate, qemu-specific poll() for such code?  Does glib have any
support for this?  Should we look at extending glib with such library
code instead?
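
To make the glib question concrete: glib's GSource does allow event
sources that aren't backed by an fd, via prepare/check callbacks that
decide readiness from arbitrary state.  Something along these lines
(just a rough, untested sketch; CharSource and its 'pending' flag are
made up for illustration):

#include <glib.h>

typedef struct {
    GSource source;
    gboolean pending;   /* set by the backend when it has queued data */
} CharSource;

static gboolean char_source_prepare(GSource *source, gint *timeout)
{
    *timeout = -1;      /* no timeout of our own; rely on other sources */
    return ((CharSource *)source)->pending;
}

static gboolean char_source_check(GSource *source)
{
    return ((CharSource *)source)->pending;
}

static gboolean char_source_dispatch(GSource *source, GSourceFunc cb,
                                     gpointer user_data)
{
    ((CharSource *)source)->pending = FALSE;
    return cb ? cb(user_data) : TRUE;
}

static GSourceFuncs char_source_funcs = {
    char_source_prepare,
    char_source_check,
    char_source_dispatch,
    NULL,
};

g_source_new(&char_source_funcs, sizeof(CharSource)) plus
g_source_attach() would hook such a source into the main loop next to
the fd-based ones, which might already cover the in-qemu QMP server
case without inventing our own poll().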

> >* guest -> chardev (frontend writes):
> >    we don't get chardev connect/disconnect events, nor do we get to
> >    know when the chardev is overwhelmed with data so that we can stop
> >    sending more until it has some room in its queue.
> 
> We do have connect/disconnect events -- that's open/close.

chardev connect/disconnect events aren't useful right now.  E.g., a TCP
or unix-socket disconnect is only noticed when the socket gets
re-connected, and that's because select() can't give us POLLHUP.
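
For reference, this is the kind of information we'd like to get back:
poll() reports a peer hangup explicitly via POLLHUP, which is exactly
the bit select() can't hand us.  A rough, untested illustration of the
check we'd want to be able to make:

#include <poll.h>
#include <stdio.h>

static void check_chardev_fd(int fd)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN | POLLOUT };

    if (poll(&pfd, 1, 0) > 0) {
        if (pfd.revents & POLLHUP) {
            printf("peer went away: we could raise a CLOSED event here\n");
        }
        if (pfd.revents & POLLIN) {
            printf("data pending: safe to call the read handler\n");
        }
        if (pfd.revents & POLLOUT) {
            printf("writable: the frontend can push more data\n");
        }
    }
}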

> >    This is because we need poll semantics instead of select to get
> >    POLLIN and POLLOUT events, from which we can tell what state the
> >    chardev is in.  There's no call corresponding to the existing
> >    qemu_chr_can_read(), which essentially confirms whether a chardev
> >    is able to handle data for output.  This series only adds a
> >    qemu_char_fe_can_write(); it doesn't add callbacks for
> >    connect/disconnect events since that depends on polling.
> 
> There are already events for connect/disconnect so I'm not sure what
> you're talking about.

(see above)

> >The problem, I think, with adding a buffer is that it just solves the
> >flow control problem without solving the connect/disconnect events
> >issue by switching to poll,
> 
> I don't understand at all what you're saying here :-)
> 
> Hopefully you'd agree, that from a design perspective, the closer a
> chrdrv looks to a normal unix socket, the more correct it is.

What I wanted to say there, without knowing about the special in-qemu
QMP server usage, is: we already have an fd for host-side chardevs, so
introducing a ring isn't beneficial.

For the frontend (as well as to take care of the QMP server case you
cite), we can have something like a QEMUFD that supports open, read,
write, close and poll() semantics.  Getting edge notifications from
frontends shouldn't really be a problem, and all of this can be tied
into the main loop.

Is that workable?  It removes the need for a ring for sure.
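
Roughly what I have in mind -- nothing here exists in the tree, the
names and fields are only illustrative:

#include <stddef.h>
#include <sys/types.h>

typedef struct QEMUFD QEMUFD;

typedef struct QEMUFDOps {
    int     (*open)(QEMUFD *qfd);
    ssize_t (*read)(QEMUFD *qfd, void *buf, size_t len);
    ssize_t (*write)(QEMUFD *qfd, const void *buf, size_t len);
    void    (*close)(QEMUFD *qfd);
    /* returns a POLLIN/POLLOUT/POLLHUP-style mask; a backend without a
     * real fd (the QMP server case) computes this from its own queues */
    int     (*poll)(QEMUFD *qfd);
} QEMUFDOps;

struct QEMUFD {
    const QEMUFDOps *ops;
    void *opaque;       /* backend- or frontend-private state */
};

An fd-backed chardev would implement poll() by calling the real
poll(2) on its fd; an in-process backend would answer from its queue
state.  The main loop would only ever see the QEMUFD interface.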

> After this series, we have:
> 
> 1) read/write methods that behave like unix read/write (except zero
> indicates EAGAIN, not EOF).
> 
> 2) OPENED/CLOSE events that map to accept/EOF
> 
> 3) edge events for readability that semantically map to epoll()
> 
> I think that's pretty complete.  I don't see anything that's missing.

(keeping for context)
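
Just to check that I read points 1-3 the same way you do, here's the
write path as I understand it from a frontend's perspective (every
name below is made up for illustration, none of this is from the
series):

#include <string.h>
#include <sys/types.h>

typedef struct Frontend {
    unsigned char buf[4096];
    size_t        len;          /* bytes still queued for the backend */
} Frontend;

/* stand-in for the char layer's write call: returns the number of bytes
 * accepted, 0 when the backend can't take more right now (EAGAIN, not EOF) */
static ssize_t chr_be_write(const unsigned char *buf, size_t len)
{
    (void)buf;
    return (ssize_t)len;        /* dummy: pretend everything was accepted */
}

static void frontend_flush(Frontend *f)
{
    while (f->len > 0) {
        ssize_t n = chr_be_write(f->buf, f->len);
        if (n == 0) {
            /* backend full: stop here and continue from the edge
             * "writable again" notification instead of busy-looping */
            return;
        }
        memmove(f->buf, f->buf + n, f->len - (size_t)n);
        f->len -= (size_t)n;
    }
}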

                Amit


