From: Eric Blake
Subject: Re: [Qemu-devel] [PATCH 1/2] virtio-serial: report frontend connection state via monitor
Date: Thu, 29 May 2014 14:57:52 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0

On 05/29/2014 02:36 PM, Laszlo Ersek wrote:

> Emitting the events unconditionally shouldn't hurt anything, I think. I
> added the property for two reasons:
> 
> - I was sure someone would ask for it. :)

Someone still might :)  But it may be a premature optimization.

> 
> - The property is device-level. The example I gave in the blurb uses
> -global with qemu:arg for simplicity, but the intent is that libvirt set
> it only for the one (or few) virtio-serial port(s) where it actually
> intends to communicate with the guest agent(s). Libvirt can set up
> several virtio-serial ports (to be read & written by other programs on
> the host side via their respective chardevs (*)), and if libvirt doesn't
> connect to those, then qemu shouldn't spam it with uninteresting events.
> (If libvirt does set the property for more than one virtio-serial port,
> then it can distinguish the ports by the ids carried in the events.)

Yes, libvirt would be matching the id carried in the event message
before deciding whether to act or ignore an event, particularly if the
events are unconditional.
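To illustrate the filtering being described: if the events fire unconditionally, the management side can simply match on the device id carried in the event payload. This is a hypothetical sketch, not libvirt's actual code; the event name and payload shape follow the patch under discussion, and the id value is invented.

```python
# Sketch of management-side filtering of an unconditional per-port event.
# Event name and payload fields are taken from the patch under discussion;
# the channel id below is a made-up example.

INTERESTING_IDS = {"org.qemu.guest_agent.0"}

def handle_event(event):
    """Return the event payload if it concerns a port we care about,
    otherwise None (the event is ignored)."""
    if event.get("event") != "VSERPORT_CHANGE":
        return None  # some other QMP event entirely
    data = event["data"]
    if data["id"] not in INTERESTING_IDS:
        return None  # a channel libvirt is not connected to; drop it
    return data

evt = {"event": "VSERPORT_CHANGE",
       "data": {"id": "org.qemu.guest_agent.0", "open": True}}
handle_event(evt)
```

The point is that the filtering cost on the libvirt side is one dictionary lookup per event, which is part of why the per-device property may not pay for its complexity.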

> 
> (*) In fact I regularly use this feature: it lets me hack on qga and
> test it from the host side (with "socat" or otherwise), without libvirt
> "interfering" with the traffic, but nonetheless setting up the unix
> domain socket for me, which is nice.

You do make a good point that turning events on or off per-device does
some filtering and thus reduces the workload of libvirt - but a
micro-optimization may not be worth the complexity since the event
doesn't happen often in the common case (in two senses - how common is
it for someone to plug in a channel with libvirt other than the monitor
or guest agent; and how common is it for a guest to open/close a virtio
serial device).

> 
>>
>> Also, since these events are triggered in relation to guest activity, I
>> think they need to be rate-limited (a guest that repeatedly opens and
>> closes the device as fast as possible should not cause an event storm).
> 
> I did notice some throttling code in the event emission code... Let me see.
> 
> set_guest_connected() [hw/char/virtio-console.c]
>   monitor_protocol_event() [monitor.c]
>     monitor_protocol_event_queue()
> 
> Ah okay. So rate limiting is indeed an attribute of the event. I didn't
> know that. Apparently, the rates are configured statically, in
> monitor_protocol_event_init() [monitor.c]. Any idea what limit I should set?

Probably set it to default to the same rate limit as we have for the
memballoon event, since that is another example of a guest-triggered
virtio event.  Libvirt has code to tweak the
guest-stats-polling-interval QOM property when it wants a throttling
period faster or slower than the default (which I think is 1 second?).

For this particular event, libvirt will probably want to expose to
management whether the agent is responsive or not (especially since
libvirt is adding even more agent commands such as virDomainFSFreeze),
so just like memballoon, libvirt will probably also expose a way to
tweak the throttling period in the off chance that the default wasn't
good enough.  But as long as the knob is there, qemu doesn't really have
to worry about much else.
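For the shape of the throttling itself, the behavior being discussed (events within one throttle period get coalesced so only the latest state is delivered) can be sketched like this. This is a minimal model for illustration, not QEMU's actual monitor code; the class name and API are invented.

```python
# Minimal model of per-event rate limiting with coalescing: within one
# throttle period only the most recent payload survives, and it is
# delivered once the period expires.  Not QEMU's monitor.c code.

class ThrottledEvent:
    def __init__(self, rate_ms):
        self.rate_ms = rate_ms
        self.last_emit = None   # timestamp (ms) of last delivery
        self.pending = None     # latest suppressed payload, if any

    def push(self, now_ms, payload):
        """Return the payload to deliver now, or None if suppressed."""
        if self.last_emit is None or now_ms - self.last_emit >= self.rate_ms:
            self.last_emit = now_ms
            return payload
        self.pending = payload  # coalesce: keep only the newest state
        return None

    def flush(self, now_ms):
        """Deliver the coalesced payload once the period has expired."""
        if self.pending is not None and now_ms - self.last_emit >= self.rate_ms:
            payload, self.pending = self.pending, None
            self.last_emit = now_ms
            return payload
        return None

ev = ThrottledEvent(rate_ms=1000)
ev.push(0, {"open": True})     # delivered immediately
ev.push(10, {"open": False})   # suppressed (within the 1s period)
ev.push(20, {"open": True})    # suppressed, replaces the pending payload
ev.flush(1000)                 # coalesced latest state is delivered
```

A guest that flips the port open/closed in a tight loop thus costs at most one event per throttle period, which is exactly the storm protection being asked for.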


>> Do we _really_ need two separate events?  Why not just one event
>> VSERPORT_CHANGE, along with an additional data member 'open':'bool' that
>> is true when the guest opened the port, and false when the guest closed
>> the port?
> 
> That's a valid remark entirely, and I did contemplate the question, but
> other events seem to exist for CONNECT and DISCONNECT cases separately,
> already:
> 
> - SPICE_CONNECTED, SPICE_DISCONNECTED -- even documented together
> - VNC_CONNECTED, VNC_DISCONNECTED -- here too the accompanying data
>   structure seems to be identical between connect & disconnect
> 
> I just wanted to follow that pattern, but I don't insist.

Anyone else have an opinion?

You made me think more about it.  One event per state change doesn't
scale - the moment you add more states, you have to add more events; but
until management apps write new event handlers, the new state is
effectively invisible.  On the other hand, a single event with
information about the latest state is nicer, in that the existing
management event handler will automatically start seeing the new state
(of course, it implies that the event handlers have to be written to
gracefully handle the addition of a new state value that was not
documented at the time the management app wrote the handler).  So based
on _that_ argument, I'm now leaning 60:40 towards using 1 event.

(Or put another way, I think the VNC_{DIS,}CONNECTED events are not
going to scale well if VNC introduces a third state, and that SPICE_ was
just blindly copying VNC_ without thinking about the ramifications)
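The graceful-degradation requirement on the handler side can be made concrete. In this sketch a single event carries a state field, and a handler written today still surfaces a state value added tomorrow instead of silently dropping it. The state names here are invented for illustration; the patch under discussion only has open/closed.

```python
# Sketch of a forward-compatible handler for a single state-change event.
# Known states are handled specifically; an unknown (future) state still
# flows through the same handler rather than requiring a new event type.
# State names are hypothetical.

KNOWN_STATES = {"open", "closed"}

def on_state_change(data):
    state = data["state"]
    if state in KNOWN_STATES:
        return "port %s is now %s" % (data["id"], state)
    # A state value added after this handler was written is still
    # visible, just reported generically.
    return "port %s entered unrecognized state %r" % (data["id"], state)

on_state_change({"id": "channel0", "state": "open"})
on_state_change({"id": "channel0", "state": "draining"})  # hypothetical future state
```

With the two-event design, that hypothetical third state would have been invisible until the management app grew a third handler.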

> Anyway, it's of course easier to drop the property, so if you think that
> controlling the event on the level of the individual virtio-serial port
> is overkill, I can remove it.

You may want to wait for other reviewers to chime in, but in my mind
simpler is better.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
