Re: [Qemu-devel] [PATCH] block: fix bdrv_exceed_iops_limits wait computation


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH] block: fix bdrv_exceed_iops_limits wait computation
Date: Thu, 21 Mar 2013 16:14:58 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Mar 21, 2013 at 09:04:20PM +0800, Zhi Yong Wu wrote:
> On Thu, 2013-03-21 at 10:17 +0100, Stefan Hajnoczi wrote:
> > On Thu, Mar 21, 2013 at 09:18:27AM +0800, Zhi Yong Wu wrote:
> > > On Wed, 2013-03-20 at 16:12 +0100, Stefan Hajnoczi wrote:
> > > > On Wed, Mar 20, 2013 at 03:56:33PM +0100, Benoît Canet wrote:
> > > > > > But I don't understand why bs->slice_time is modified instead of
> > > > > > keeping it constant at 100 ms:
> > > > > >
> > > > > >     bs->slice_time = wait_time * BLOCK_IO_SLICE_TIME * 10;
> > > > > >     bs->slice_end += bs->slice_time - 3 * BLOCK_IO_SLICE_TIME;
> > > > > >     if (wait) {
> > > > > >         *wait = wait_time * BLOCK_IO_SLICE_TIME * 10;
> > > > > >     }
> > > > > 
> > > > > In bdrv_exceed_bps_limits there is an equivalent to this with a 
> > > > > comment.
> > > > > 
> > > > > ---------
> > > > >     /* When the I/O rate at runtime exceeds the limits,
> > > > >      * bs->slice_end need to be extended in order that the current statistic
> > > > >      * info can be kept until the timer fire, so it is increased and tuned
> > > > >      * based on the result of experiment.
> > > > >      */
> > > > >     bs->slice_time = wait_time * BLOCK_IO_SLICE_TIME * 10;
> > > > >     bs->slice_end += bs->slice_time - 3 * BLOCK_IO_SLICE_TIME;
> > > > >     if (wait) {
> > > > >         *wait = wait_time * BLOCK_IO_SLICE_TIME * 10;
> > > > >     }
> > > > > ----------
> > > > 
> > > > The comment explains why slice_end needs to be extended, but not why
> > > > bs->slice_time should be changed (except that it was tuned as the result
> > > > of an experiment).
> > > > 
> > > > Zhi Yong: Do you remember a reason for modifying bs->slice_time?
> > > Stefan,
> > >   In some cases the bare I/O speed on the physical machine is very fast,
> > > and when the I/O speed is limited to a lower value, I/O needs to wait for
> > > a relatively long time (i.e. wait_time). wait_time should be smaller than
> > > slice_time, but if slice_time is constant, wait_time may not reach its
> > > expected value, so the throttling function will not work well.
> > >   For example, suppose the bare I/O speed is 100 MB/s, the I/O throttling
> > > limit is 1 MB/s, and slice_time is constant and set to 50 ms (an assumed
> > > value) or smaller. If the current I/O is to be throttled to 1 MB/s, its
> > > wait_time is expected to be 100 ms (an assumed value), which is bigger than
> > > the current slice_time, so the I/O throttling function will not throttle
> > > the actual I/O speed well. In this case, slice_time needs to be adjusted
> > > to a more suitable value that depends on wait_time.
> > 
> > When an I/O request spans a slice:
> > 1. It must wait until enough resources are available.
> > 2. We extend the slice so that existing accounting is not lost.
> > 
> > But I don't understand what you say about a fast host.  The bare metal
> I mean that a fast host is one with very high bare-metal throughput.
> > throughput does not affect the throttling calculation.  The only values
> > that matter are bps limit and slice time:
> > 
> > In your example the slice time is 50ms and the current request needs
> > 100ms.  We need to extend slice_end to at least 100ms so that we can
> > account for this request.
> > 
> > Why should slice_time be changed?
> It isn't a required choice; if you have a better way, we can do it your
> way. I thought that if wait_time was big in the previous slice window,
> slice_time should also be adjusted to be a bit bigger accordingly for the
> next slice window.
> > 
> > >   In some other cases the bare I/O speed is very slow and the I/O
> > > throttling limit is high; slice_time also needs to be adjusted
> > > dynamically based on wait_time.
> > 
> > If the host is slower than the I/O limit there are two cases:
> This is not what I mean; I mean that the bare I/O speed is faster than the
> I/O limit, but their gap is very small.
> 
> > 
> > 1. Requests are below the I/O limit.  We do not throttle; the host is slow
> > but that's okay.
> > 
> > 2. Requests are above the I/O limit.  We throttle them, but actually the
> > host will slow them down further to the bare metal speed.  This is also fine.
> > 
> > Again, I don't see a need to change slice_time.
> > 
> > BTW I discovered one thing that Linux blk-throttle does differently from
> > QEMU I/O throttling: we do not trim completed slices.  I think trimming
> > avoids accumulating values which may lead to overflows if the slice
> > keeps getting extended due to continuous I/O.
> QEMU I/O throttling is not completely the same as the Linux blk-throttle approach.
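
To make the numbers in the example above concrete, here is a minimal
standalone sketch of the wait computation under discussion; the constants,
names, and request size are illustrative, and this is not the actual
bdrv_exceed_iops_limits() / bdrv_exceed_bps_limits() code:

#include <stdint.h>
#include <stdio.h>

/* Simplified model: a request that would exceed the bps limit must wait,
 * and if that wait reaches past the end of the current slice, slice_end
 * has to be pushed out so the accounting for the request is not reset
 * while it is still waiting. */

#define NS_PER_SEC 1000000000.0

int main(void)
{
    double bps_limit = 1.0 * 1024 * 1024;      /* throttle to 1 MiB/s    */
    double slice_ns  = 50.0 * 1000 * 1000;     /* 50 ms slice (example)  */
    double request   = 100.0 * 1024;           /* a 100 KiB request      */

    /* Time the request must wait so the average rate stays at bps_limit. */
    double wait_ns = request / bps_limit * NS_PER_SEC;   /* ~98 ms */

    printf("wait = %.0f ms, slice = %.0f ms\n", wait_ns / 1e6, slice_ns / 1e6);
    if (wait_ns > slice_ns) {
        /* The request spans the slice; extend slice_end at least past the
         * wait, otherwise the slice ends (and the counters reset) before
         * the throttled request is allowed to run. */
        printf("extend slice_end by at least %.0f ms\n",
               (wait_ns - slice_ns) / 1e6);
    }
    return 0;
}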

There is a reason why blk-throttle implements trimming, and it could be
important for us too.  So I calculated how long it would take to
overflow int64_t with 2 GByte/s of continuous I/O.  The result is 136
years, so it does not seem to be necessary in practice yet :).
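
That estimate can be reproduced with a quick back-of-the-envelope check; the
sketch below assumes the accumulated byte count is held in an int64_t and
reads "2 GByte/s" as 2 GiB/s:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Assumed: the accumulated byte counter is an int64_t, so it overflows
     * at INT64_MAX. */
    const double bytes_per_sec = 2.0 * 1024 * 1024 * 1024;  /* 2 GiB/s of continuous I/O */
    const double secs_per_year = 365.0 * 24 * 60 * 60;

    double seconds_to_overflow = (double)INT64_MAX / bytes_per_sec;
    printf("~%.0f years until overflow\n", seconds_to_overflow / secs_per_year);
    /* Prints ~136 years, matching the figure above. */
    return 0;
}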

Stefan


