Re: [Qemu-devel] top(1) utility implementation in QEMU
From: Daniel P. Berrange
Subject: Re: [Qemu-devel] top(1) utility implementation in QEMU
Date: Mon, 26 Sep 2016 17:28:46 +0100
User-agent: Mutt/1.7.0 (2016-08-17)
On Mon, Sep 26, 2016 at 07:14:33PM +0530, prashanth sunder wrote:
> Hi All,
>
> Summary of the discussion and the different approaches we covered on
> IRC regarding a top(1) tool for qemu:
>
> Implement unique naming for all event loop resources. Sometimes a
> string literal can be used but other times the unique name needs to be
> generated at runtime (e.g. filename for an fd).
>
> Approach 1)
> For a built-in QMP implementation:
> We have callbacks from fds, BHs and Timers
> So every time one of them is registered, we add it to the list (what
> we see through QMP), and when it is unregistered, we remove it from
> the list.
> Ex: aio_set_fd_handler(fd, NULL, NULL, NULL) - unregistering an fd -
> will remove the fd from the list.
>
> QMP API:
> set-event-loop-profiling enable=on/off
> [interval=seconds] [iothread=name] and it emits a QMP event with
> [{name, counter, time_elapsed}]
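On the wire, the proposed command and event might look like the following. This is purely a sketch of the proposal above: the command name, event name, field names, and the iothread id "iothread0" are all hypothetical; none of this exists in QEMU.

```
-> { "execute": "set-event-loop-profiling",
     "arguments": { "enable": true, "interval": 5, "iothread": "iothread0" } }
<- { "return": {} }

<- { "event": "EVENT_LOOP_PROFILE",
     "data": [ { "name": "fd:/dev/net/tun",
                 "counter": 1024,
                 "time_elapsed": 7300000 } ] }
```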
>
> Pros:
> It works on all systems.
> Cons:
> Information present inside glib is exposed only via systemtap tracing
> - these will not be available via QMP.
> For example - I/O in chardevs, network IO etc
There are other downsides to the QMP approach:
- Emitting data via QMP will change the behaviour of the system
itself, since QMP will trigger usage of the main event loop
which is the thing being traced. The degree of disturbance
will depend on the interval for emitting events
- If the interval is small and you're monitoring more than one
guest at a time, then the overhead of QMP could start to get
quite significant across the host as a whole. This was
  mentioned at the summit wrt the existing I/O stats exposed by
  QEMU for block / net device backends.
- The 'top' tool does not actually have direct access to
  QMP for any libvirt guests, and we're unlikely to want to
expose such QMP events via libvirt in any kind of supported
API, as they're very use-case specific in design. So at best
the app would have to use libvirt QMP passthrough which is
acceptable for developer / test environments, but not
something that's satisfactory for production deployments.
> Approach 2)
> Using Trace:
> Add trace event for each type of event loop resource (timer, fd, bh,
> etc) in order to see when a resource fires.
> Write top(1)-like SystemTap script to get data from the trace backend.
>
> Pros:
> No performance overhead using trace
Nothing is zero overhead, but more specifically it would avoid
the problem of the "top" tool data transport interfering with
the very data it is trying to measure from the event loop.
It also makes it easier to pull in data from other sources. For example
you don't need to extend QMP for each new bit of internal state/data
that the top tool wants access to. You can get access to data that
QEMU doesn't have, such as in glib, or even in the kernel.
>
> Cons:
> The data available from trace depends on the trace backend that qemu
> is configured with.
> It depends on the availability of SystemTap and is backend-specific.
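A top(1)-like consumer for this approach could be a SystemTap script along the following lines. This is a sketch only: the probe name `qemu.system.x86_64.aio_fd_handler_run` and the meaning of `$arg1` are hypothetical, since the actual probe names would depend on which trace events get added and on QEMU being built with the dtrace/SystemTap trace backend.

```
global fires

/* hypothetical trace event; $arg1 assumed to be the resource name */
probe qemu.system.x86_64.aio_fd_handler_run {
    fires[user_string($arg1)]++
}

/* print the top 10 busiest resources every 5 seconds */
probe timer.s(5) {
    printf("%-40s %s\n", "RESOURCE", "FIRES")
    foreach (name in fires- limit 10) {
        printf("%-40s %d\n", name, fires[name])
    }
    delete fires
}
```

Because the script runs outside QEMU, collecting and printing this data never touches the event loop being measured, which is exactly the property the QMP approach lacks.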
>
> Approach 3)
> Use Trace and extract trace backend data through QMP
>
> Pros:
> No performance overhead using trace
Not sure why you're claiming that - anything that feeds trace
data over QMP is going to have a potentially significant effect
as it'll send traffic through the event loop which is what is
being analysed.
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|