From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [regression] dataplane: throughput -40% by commit 580b6b2aa2
Date: Fri, 27 Jun 2014 16:50:15 +0200
User-agent: Mutt/1.5.23 (2014-03-12)

On Fri, Jun 27, 2014 at 02:21:06PM +0200, Kevin Wolf wrote:
> On 27.06.2014 at 14:01, Stefan Hajnoczi wrote:
> > On Thu, Jun 26, 2014 at 11:14:16PM +0800, Ming Lei wrote:
> > > Hi Stefan,
> > > 
> > > I found that VM block I/O throughput has decreased by more than 40%
> > > on my laptop, and it looks much worse in my server environment.
> > > It is caused by your commit 580b6b2aa2:
> > > 
> > >           dataplane: use the QEMU block layer for I/O
> > > 
> > > I ran fio with the config below to test random reads:
> > > 
> > > [global]
> > > direct=1
> > > size=4G
> > > bsrange=4k-4k
> > > timeout=20
> > > numjobs=4
> > > ioengine=libaio
> > > iodepth=64
> > > filename=/dev/vdc
> > > group_reporting=1
> > > 
> > > [f]
> > > rw=randread
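
(For reference: saved as, say, randread.fio -- the filename is an
assumption -- this job file is run with "fio randread.fio".  With
numjobs=4 and iodepth=64 it keeps up to 256 4k reads in flight
against /dev/vdc.)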
> > > 
> > > Together with the throughput drop, latency is improved a little.
> > > 
> > > With this commit, the I/O blocks submitted to the filesystem become
> > > much smaller than before, and more io_submit() calls have to be made
> > > to the kernel, which means the effective iodepth may become much lower.
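
For readers unfamiliar with the batching being discussed, here is a
minimal standalone sketch (not QEMU code) of the difference between
batched and per-request Linux AIO submission with libaio.  The device
path and queue depth of 64 are taken from the fio job above; everything
else is illustrative.

/* Minimal sketch, not QEMU code: batched vs. per-request Linux AIO
 * submission with libaio.  Build with: gcc batch.c -laio */
#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define QDEPTH 64
#define BLKSZ  4096

int main(void)
{
    io_context_t ctx = 0;
    struct iocb iocbs[QDEPTH];
    struct iocb *ptrs[QDEPTH];
    struct io_event events[QDEPTH];
    void *buf;
    int fd, i;

    fd = open("/dev/vdc", O_RDONLY | O_DIRECT);
    if (fd < 0 || io_setup(QDEPTH, &ctx) != 0) {
        perror("setup");
        return 1;
    }

    for (i = 0; i < QDEPTH; i++) {
        if (posix_memalign(&buf, BLKSZ, BLKSZ) != 0) {
            return 1;
        }
        io_prep_pread(&iocbs[i], fd, buf, BLKSZ, (long long)i * BLKSZ);
        ptrs[i] = &iocbs[i];
    }

    /* Batched: a single syscall queues all 64 requests, so the host
     * device sees the full iodepth at once. */
    if (io_submit(ctx, QDEPTH, ptrs) != QDEPTH) {
        perror("io_submit");
        return 1;
    }

    /* Unbatched, one syscall per request, would instead be:
     *
     *     for (i = 0; i < QDEPTH; i++)
     *         io_submit(ctx, 1, &ptrs[i]);
     *
     * 64 syscalls rather than one, and the device may start draining
     * the queue before it is ever full. */

    io_getevents(ctx, QDEPTH, QDEPTH, events, NULL);
    io_destroy(ctx);
    close(fd);
    return 0;
}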
> > > 
> > > I am not surprised by the result, since I previously compared VM I/O
> > > performance between qemu and lkvm, which has no big qemu lock
> > > problem and handles I/O in a dedicated thread.  Even so, lkvm's
> > > block I/O throughput is still much worse than qemu's, because lkvm
> > > doesn't submit block I/O in batches the way the previous dataplane
> > > code did, IMO.
> > > 
> > > But now that you have changed the way I/O is submitted, could you
> > > share the motivation for the change? Is the throughput drop expected?
> > 
> > Thanks for reporting this.  40% is a serious regression.
> > 
> > We were expecting a regression since the custom Linux AIO codepath has
> > been replaced with the QEMU block layer (which offers features like
> > image formats, snapshots, I/O throttling).
> > 
> > Let me know if you get stuck working on a patch.  Implementing batching
> > sounds like a good idea.  I never measured the impact when I wrote the
> > ioq code; it just seemed like a natural way to structure the code.
> > 
> > Hopefully this 40% number is purely due to batching and we can get most
> > of the performance back.
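
One plausible shape for reintroducing such batching on top of the block
layer is a plug/unplug pair modelled on the kernel's blk_plug: requests
are collected while plugged and flushed with a single io_submit() on
unplug.  The sketch below is purely hypothetical; none of these names
are an existing QEMU API in this thread.

/* Hypothetical plug/unplug batching sketch; none of these names are
 * an existing QEMU API in this thread. */
#include <libaio.h>
#include <stdbool.h>

#define MAX_BATCH 64

typedef struct {
    struct iocb *pending[MAX_BATCH];
    int n;
    bool plugged;
} IOQueue;

/* Start collecting requests instead of submitting them one by one. */
static void ioq_plug(IOQueue *q)
{
    q->plugged = true;
}

/* Queue a request; submission is deferred while plugged. */
static int ioq_enqueue(io_context_t ctx, IOQueue *q, struct iocb *iocb)
{
    if (!q->plugged || q->n == MAX_BATCH) {
        return io_submit(ctx, 1, &iocb);   /* fall back to direct submit */
    }
    q->pending[q->n++] = iocb;
    return 1;
}

/* Flush everything collected since ioq_plug() with a single syscall. */
static int ioq_unplug(io_context_t ctx, IOQueue *q)
{
    int ret = q->n ? io_submit(ctx, q->n, q->pending) : 0;

    q->n = 0;
    q->plugged = false;
    return ret;
}

A virtqueue handler could then call ioq_plug() before draining the
ring, enqueue each request, and ioq_unplug() once the ring is empty,
restoring the one-syscall-per-batch behaviour of the old code.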
> 
> Shouldn't it be easy enough to take the old code, remove the batching
> there and then measure if you get the same 40%?

Yes, that's a good idea.

Stefan
