From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH] Simple performance logging and network limiting based on trace option
Date: Fri, 31 Oct 2014 11:13:04 +0000
User-agent: Mutt/1.5.23 (2014-03-12)

On Thu, Oct 30, 2014 at 03:05:11PM +0100, harald Schieche wrote:
> > Missing commit description:
> > 
> > What problem are you trying to solve?
> > 
> 
> I want to log storage throughput (I/O operations per second) and
> network speed (packets and bandwidth per second).

QEMU offers the query-blockstats QMP command to poll I/O statistics for
block devices.
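Polling it once a second and diffing the counters gives you IOPS without
touching QEMU at all.  Roughly (illustrative values, output trimmed):

   -> { "execute": "query-blockstats" }
   <- { "return": [ { "device": "drive0",
                      "stats": { "rd_operations": 1200,
                                 "wr_operations": 340,
                                 "rd_bytes": 4915200,
                                 "wr_bytes": 1392640,
                                 ... } } ] }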

Nowadays a lot of KVM users bypass the QEMU network subsystem and use
the vhost-net Linux host kernel module instead.  That is the
highest-performance and most actively developed networking path.  Are
you sure you don't want to use vhost-net?
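Enabling it is just a netdev option, for example (sketch; the id and
device names are placeholders):

   -netdev tap,id=net0,vhost=on \
   -device virtio-net-pci,netdev=net0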

> I want to limit the network traffic to a specific bandwidth.

You can use the host kernel's firewall or traffic shaping features to do
that when using a tap device (most common production configuration).
For example, libvirt offers this feature and uses tc under the hood.
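With a plain tap device you can also do it by hand, e.g. a token bucket
filter (interface name tap0 assumed; a root qdisc shapes the
host-to-guest direction, the other direction needs an ingress policer):

   tc qdisc add dev tap0 root tbf rate 10mbit burst 32kbit latency 400ms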

> > It is simplest to have unconditional trace events and calculate
> > latencies during trace file analysis.  That way no arbitrary constants
> > like 1 second are hard-coded into QEMU.
> 
> We already have an unconditional trace event (paio_submit), but maybe there
> are too many calls to it.

If you add the BlockDriverState *bs pointer to the paio_submit call,
then you can distinguish between drives.
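In the trace-events file that would look roughly like this (sketch only;
the existing argument list in your tree may differ):

   paio_submit(void *bs, void *acb, void *opaque, int64_t sector_num, int nb_sectors, int type) "bs %p acb %p opaque %p sector_num %"PRId64" nb_sectors %d type %d"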

However, tracing is not meant to be a stable interface for building
other features.  Trace events can change and are mainly used for
interactive or ad-hoc instrumentation.

If you build a tool on top of trace events, be prepared to actively
maintain it as the set of trace events evolves over time.  It's not a
stable ABI.

> > > diff --git a/net/queue.c b/net/queue.c
> > > index f948318..2b0fef7 100644
> > > --- a/net/queue.c
> > > +++ b/net/queue.c
> > > @@ -23,7 +23,9 @@
> > >  
> > >  #include "net/queue.h"
> > >  #include "qemu/queue.h"
> > > +#include "qemu/timer.h"
> > >  #include "net/net.h"
> > > +#include "trace.h"
> > >  
> > >  /* The delivery handler may only return zero if it will call
> > >   * qemu_net_queue_flush() when it determines that it is once again able
> > > @@ -58,6 +60,15 @@ struct NetQueue {
> > >      unsigned delivering : 1;
> > >  };
> > >  
> > > +static int64_t bandwidth_limit;     /* maximum number of bits per second */
> > 
> > Throttling should be per-device, not global.
> 
> Maybe this would be better, but this patch is meant to be as simple as possible.

Everything in the network subsystem is per-NetClientState.  It doesn't
make sense to introduce global state just because it's easier.
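A per-device limit would live in that struct instead, something like
(sketch only, field name made up):

   /* include/net/net.h */
   struct NetClientState {
       ...existing fields...
       int64_t bandwidth_limit;    /* bits per second, 0 = unlimited */
   };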

> > > +static int64_t limit_network_performance(int64_t start_clock,
> > > +                                         int64_t bytes)
> > > +{
> > > +    int64_t clock = get_clock();
> > > +    int64_t sleep_usecs = 0;
> > > +    if (bandwidth_limit > 0) {
> > > +        sleep_usecs = (bytes * 8 * 1000000LL) / bandwidth_limit -
> > > +                      (clock - start_clock) / 1000LL;
> > > +    }
> > > +    if (sleep_usecs > 0) {
> > > +        usleep(sleep_usecs);
> > 
> > This does more than limit the network performance, it can also freeze
> > the guest.
> > 
> > QEMU is event-driven.  The event loop thread is not allowed to block or
> > sleep - otherwise the vcpu threads will block when they try to acquire
> > the QEMU global mutex.
> > 
> 
> Yes, it freezes the guest. That's not ideal, but it keeps things simple.

I won't merge this approach.
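
A non-blocking variant would arm a timer and resume the queue from its
callback instead of sleeping, along these lines (untested sketch, names
made up):

   static void throttle_timer_cb(void *opaque)
   {
       NetQueue *queue = opaque;

       /* budget available again, resume delivery */
       qemu_net_queue_flush(queue);
   }

   /* setup, e.g. in qemu_new_net_queue(): */
   throttle_timer = timer_new_ns(QEMU_CLOCK_REALTIME, throttle_timer_cb, queue);

   /* instead of usleep(sleep_usecs): */
   timer_mod(throttle_timer,
             qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + sleep_usecs * 1000);

That keeps the event loop running while packets are held back.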
