Re: [Qemu-devel] [PATCH V7 0/5] Continuous Leaky Bucket Throttling
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH V7 0/5] Continuous Leaky Bucket Throttling
Date: Fri, 30 Aug 2013 11:53:20 +0200
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, Aug 29, 2013 at 11:37:20AM +0200, Benoît Canet wrote:
> > 1. We keep modifying the timer in bdrv_io_limits_intercept() on each
> > request even when it has already been set. I think we'll set it to
> > the same absolute timestamp, modulo numerical issues. Should we
> > avoid doing this?
>
> I could check that the timer is not pending before setting it.
Paolo is making timer_pending() very cheap so this sounds good.
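A minimal sketch of the guard I have in mind, using a made-up one-shot timer struct (the helpers below only mirror the shape of timer_pending()/timer_mod(), they are not the real QEMU API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for a one-shot timer; NOT QEMU's QEMUTimer. */
typedef struct {
    bool pending;       /* true while armed and not yet fired */
    int64_t expire_ns;  /* absolute deadline */
    int mod_calls;      /* counts how often the timer was (re)armed */
} Timer;

static bool timer_is_pending(const Timer *t)
{
    return t->pending;
}

static void timer_arm(Timer *t, int64_t expire_ns)
{
    t->pending = true;
    t->expire_ns = expire_ns;
    t->mod_calls++;
}

/* Arm the throttling timer only when it is not already pending, so
 * back-to-back intercepted requests do not re-arm it needlessly. */
static void throttle_maybe_arm(Timer *t, int64_t expire_ns)
{
    if (!timer_is_pending(t)) {
        timer_arm(t, expire_ns);
    }
}
```

Since each intercepted request would compute the same absolute deadline anyway, skipping the re-arm changes nothing observable.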
> >
> > 2. bdrv_io_limits_resched() only wakes up requests of the same type
> > (read/write). Does this mean that BPS_TOTAL/IOPS_TOTAL requests
> > will have to wait until the other request type timer expires instead
> > of piggybacking on request completion?
> >
> > Is this a problem? If no, then why piggyback on request completion
> > at all since apparently it works fine when we don't wake up the other
> > request type?
>
> It only wakes up the same request type to be consistent with the
> two request queues and two timers strategy.
> The ultimate goal of this is to be able to do:
> block_set_io_throttle virtio1 0 0 0 0 3000 1
> The code can cope with this and do independent throttling for reads and
> writes.
I understand why there are separate queues for r/w requests. What I'm
getting at is that bdrv_io_limits_resched() in its current form is not
needed:
Resources are refilled as time passes, not by completing requests, so
there should be no need to act when a request completes.
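As a toy model of that point: the bucket drains continuously with elapsed time, so whether a new request fits depends only on the clock, never on a completion. All struct fields and names below are illustrative, not the series' actual code:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy continuous leaky bucket: the level leaks at leak_rate units per
 * second; a request of `size` units fits once level + size stays
 * within capacity.  Illustrative only, not the patch series' structs. */
typedef struct {
    double level;      /* units currently in the bucket */
    double capacity;   /* burst limit (bucket size) */
    double leak_rate;  /* units leaked per second */
    double last_ns;    /* timestamp of the last leak/account */
} Bucket;

/* Drain the bucket for the time elapsed since the last update. */
static void bucket_leak(Bucket *b, double now_ns)
{
    double elapsed_s = (now_ns - b->last_ns) / 1e9;
    b->level -= b->leak_rate * elapsed_s;
    if (b->level < 0) {
        b->level = 0;
    }
    b->last_ns = now_ns;
}

/* Account a request if it fits; a false return means the caller must
 * queue it and let the timer wake it up later. */
static bool bucket_try_account(Bucket *b, double now_ns, double size)
{
    bucket_leak(b, now_ns);
    if (b->level + size > b->capacity) {
        return false;
    }
    b->level += size;
    return true;
}
```

Note that nothing here is updated on completion: time alone refills the budget.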
bdrv_io_limits_resched() is not necessary (if it were,
BPS_TOTAL/IOPS_TOTAL wouldn't work, since bdrv_io_limits_resched() does
not handle the other request type).
bdrv_io_limits_intercept() should wake the next request after calling
throttle_account() so we can submit as many requests as possible right
away, instead of waiting for the first request to complete before
submitting the next request.
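A rough sketch of that suggestion, with the bucket deliberately reduced to a plain remaining-budget counter (all names hypothetical, not QEMU's):

```c
#include <assert.h>

/* Toy model: `budget` is how many I/O units the throttle still allows.
 * When a request is intercepted and accounted, we immediately start as
 * many queued requests as the budget permits, instead of waiting for a
 * completion to kick them.  Illustrative names only. */
typedef struct {
    double budget;  /* units the bucket still allows right now */
} Throttle;

/* Account queued requests greedily; returns how many could be
 * submitted right away.  The rest stay queued until the timer fires. */
static int submit_queued(Throttle *t, const double *sizes, int n)
{
    int started = 0;
    while (started < n && sizes[started] <= t->budget) {
        t->budget -= sizes[started];
        started++;
    }
    return started;
}
```

The point is that the greedy loop runs at submission time, so queued requests never wait on an unrelated completion.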
After this change:
1. Submitting a request also kicks queued requests. We always submit as
many requests as allowed by the bucket.
2. If we need to wait, the timer will wake us up when more resources are
available.
Stefan
- [Qemu-devel] [PATCH V7 0/5] Continuous Leaky Bucket Throttling, Benoît Canet, 2013/08/28
- [Qemu-devel] [PATCH V7 1/5] throttle: Add a new throttling API implementing continuous leaky bucket., Benoît Canet, 2013/08/28
- [Qemu-devel] [PATCH V7 2/5] throttle: Add units tests, Benoît Canet, 2013/08/28
- [Qemu-devel] [PATCH V7 3/5] block: Enable the new throttling code in the block layer., Benoît Canet, 2013/08/28
- [Qemu-devel] [PATCH V7 4/5] block: Add support for throttling burst max in QMP and the command line., Benoît Canet, 2013/08/28
- [Qemu-devel] [PATCH V7 5/5] block: Add iops_size to do the iops accounting for a given io size., Benoît Canet, 2013/08/28
- Re: [Qemu-devel] [PATCH V7 0/5] Continuous Leaky Bucket Throttling, Stefan Hajnoczi, 2013/08/29
- Re: [Qemu-devel] [PATCH V7 0/5] Continuous Leaky Bucket Throttling, Benoît Canet, 2013/08/29