Re: [Qemu-devel] interrupt mitigation for e1000


From: Luigi Rizzo
Subject: Re: [Qemu-devel] interrupt mitigation for e1000
Date: Wed, 25 Jul 2012 11:56:55 +0200
User-agent: Mutt/1.4.2.3i

On Wed, Jul 25, 2012 at 11:53:29AM +0300, Avi Kivity wrote:
> On 07/24/2012 07:58 PM, Luigi Rizzo wrote:
> > I noticed that the various NIC modules in qemu/kvm do not implement
> > interrupt mitigation, which is very beneficial as it dramatically
> > reduces exits from the hypervisor.
> > 
> > As a proof of concept i tried to implement it for the e1000 driver
> > (patch below), and it brings tx performance from 9 to 56Kpps on
> > qemu-softmmu, and from ~20 to 140Kpps on qemu-kvm.
> > 
> > I am going to measure the rx interrupt mitigation in the next couple
> > of days.
> > 
> > Is there any interest in having this code in?
> 
> Indeed.  But please drop the #ifdef MITIGATIONs.

Thanks for the comments. The #ifdef MITIGATION blocks were only temporary,
to point out the differences and to run the performance comparisons.
Similarly, the magic thresholds below will be replaced with
appropriately commented #defines.

Note:
On real hardware, interrupt mitigation is controlled by a total of four
registers (TIDV, TADV, RDTR, RADV), which set the delays with a
granularity of 1024 ns; see

http://www.intel.com/content/dam/doc/manual/pci-pci-x-family-gbe-controllers-software-dev-manual.pdf
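
Just to illustrate the scale (a sketch only; e1000_delay_ns is a
hypothetical helper, not code from the tree):

    #include <stdint.h>

    /* The e1000 delay timers count in 1024 ns (1.024 us) units, so even
     * the maximum 16-bit value is only ~67 ms, while typical settings
     * are tens of microseconds -- well below a ms-resolution timer. */
    static inline uint64_t e1000_delay_ns(uint16_t reg_val)
    {
        return (uint64_t)reg_val * 1024;
    }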

An exact emulation of the feature is hard, because the timer resolution we
have is much coarser (in the ms range). So I am inclined to use a different
approach, similar to the one I have implemented, namely:
- for the first few packets (whether 1, 4 or 5 will be decided on the host),
  an interrupt is reported immediately;
- subsequent interrupts are delayed through qemu_bh_schedule_idle()
  (which is unpredictable but efficient; I tried qemu_bh_schedule()
  but it completely defeats mitigation);
- when the TX or RX ring is close to full, an interrupt is again
  delivered immediately.

This approach also has the advantage of not requiring specific support
in the OS drivers; a rough sketch of the decision logic follows.
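
In rough pseudo-C (a sketch of the policy just described, not the final
patch; MIT_FIRST_PKTS and MIT_RING_SLACK stand for the commented #defines
that will replace the magic numbers):

    #define MIT_FIRST_PKTS  4   /* placeholder: report the first few at once */
    #define MIT_RING_SLACK  5   /* placeholder: slack left before a full ring */

    /* len is the ring size, pending the not-yet-processed descriptors */
    if (s->tx_ics_count <= MIT_FIRST_PKTS ||
        s->tx_ics_count + pending >= len - MIT_RING_SLACK) {
        set_ics(s, 0, s->int_cause);        /* deliver immediately */
        s->tx_ics_count = 0;
        s->int_cause = 0;
    } else {
        qemu_bh_schedule_idle(s->int_bh);   /* defer, coalescing later ones */
    }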

cheers
luigi

> > +
> > +#ifdef MITIGATION
> > +    QEMUBH *int_bh;        // interrupt mitigation handler
> > +    int tx_ics_count;      // pending tx int requests
> > +    int rx_ics_count;      // pending rx int requests
> > +    int int_cause;         // interrupt cause
> 
> Use uint32_t for int_cause, also a correctly sized type for the packet
> counts.
> 
> >  
> > +#ifdef MITIGATION
> > +    /* We transmit the first few packets immediately, or we do so
> > +     * if we are approaching a full ring. In the latter case, also
> > +     * send an ics.
> > +     */
> > +{
> > +    int len, pending;
> > +    len = s->mac_reg[TDLEN] / sizeof(desc);
> > +    pending = s->mac_reg[TDT] - s->mac_reg[TDH];
> > +    if (pending < 0)
> > +        pending += len;
> > +    /* ignore requests after the first few ones, as long as
> > +     * we are not approaching a full ring.
> > +     * Otherwise, deliver packets to the backend.
> > +     */
> > +    if (s->tx_ics_count > 4 && s->tx_ics_count + pending < len - 5)
> > +        return;
> 
> Where do the 4 and 5 come from?  Shouldn't they be controlled by the
> guest using a device register?
> 
> >      }
> > +#ifdef MITIGATION
> > +    s->int_cause |= cause; // remember the interrupt cause.
> > +    s->tx_ics_count += pending;
> > +    if (s->tx_ics_count >= len - 5) {
> > +        // if the ring is about to become full, generate an interrupt
> 
> Another magic number.
> 
> > +        set_ics(s, 0, s->int_cause);
> > +        s->tx_ics_count = 0;
> > +        s->int_cause = 0;
> > +    } else {       // otherwise just schedule it for later.
> > +        qemu_bh_schedule_idle(s->int_bh);
> > +    }
> > +}
> > +#else /* !MITIGATION */
> >      set_ics(s, 0, cause);
> > +#endif
> >  }
> >  
> 
> -- 
> error compiling committee.c: too many arguments to function
> 
> 


