Re: [Qemu-block] [PATCH 00/21] new backup architecture
From: Stefan Hajnoczi
Subject: Re: [Qemu-block] [PATCH 00/21] new backup architecture
Date: Tue, 31 Jan 2017 10:20:35 +0000
User-agent: Mutt/1.7.1 (2016-10-04)
On Fri, Dec 23, 2016 at 05:28:43PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> This is a new architecture for backup. It solves several current problems:
> 1. Intersecting requests: currently, at request start we wait for all
> intersecting requests, which means that
> a. we may wait even for clusters unrelated to our request
> b. copying is not fully asynchronous: if we are going to copy clusters
> 1, 2, 3, 4 while 2 and 4 are already in flight, why should we wait for 2
> and 4 to be fully copied? Why not start 1 and 3 in parallel with 2 and 4?
>
> 2. The notifier request is internally synchronous: if the notifier starts
> copying clusters 1, 2, 3, 4, they are copied one by one in a synchronous
> loop.
>
> 3. The notifier waits for the full copy of the corresponding clusters,
> when actually it may only need to wait for the _read_ operations to finish.
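The difference between the old serial scheme and the proposed parallel one can be sketched as follows. This is an illustrative Python asyncio analogy, not QEMU's actual C coroutine API; the cluster numbers match the example in the quoted text, and `copy_cluster` is a hypothetical stand-in for a single cluster read+write:

```python
import asyncio

# Sketch: clusters 2 and 4 are already "in flight" (started by the guest
# write notifier); the backup job wants clusters 1-4.  The old scheme waits
# for every intersecting request before starting; the new scheme starts 1
# and 3 immediately and merely awaits the existing copies of 2 and 4.

async def copy_cluster(n, log):
    log.append(("start", n))
    await asyncio.sleep(0.01)     # stands in for the read+write of a cluster
    log.append(("done", n))

async def backup_clusters(clusters, in_flight, log):
    tasks = []
    for n in clusters:
        if n in in_flight:
            tasks.append(in_flight[n])   # await the existing copy, no re-copy
        else:
            tasks.append(asyncio.create_task(copy_cluster(n, log)))
    await asyncio.gather(*tasks)

async def main():
    log = []
    # clusters 2 and 4 were already started by the notifier
    in_flight = {n: asyncio.create_task(copy_cluster(n, log)) for n in (2, 4)}
    await backup_clusters([1, 2, 3, 4], in_flight, log)
    return log

log = asyncio.run(main())
starts = [n for ev, n in log if ev == "start"]
# All four clusters start before any single one finishes, and 2 and 4 are
# copied exactly once.
print(starts)
```

The point of the sketch is only ordering: with per-cluster tasks, waiting becomes per-cluster too, so unrelated clusters never block each other.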
Please include benchmark results since this is a performance
optimization. I think this new level of complexity is worth it because
it should be possible to achieve significantly higher throughput, but
data is still necessary.
The cover letter mentions spawning 24 coroutines. Did you compare the
memory footprint against the old backup architecture? Sometimes users
complain when they notice QEMU using significantly more memory than in
previous versions. If there's a good justification or a way to minimize
the impact then it's fine, but please check.
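A rough back-of-envelope for the worst case, assuming QEMU's ucontext coroutine backend reserves a 1 MiB stack per coroutine (the assumed default; actual resident usage is typically lower, since stack pages are faulted in on demand):

```python
# Hypothetical estimate of extra stack reservation for the worker pool.
# COROUTINE_STACK_SIZE is an assumption about the build configuration,
# not a measured value.
COROUTINE_STACK_SIZE = 1 * 1024 * 1024   # 1 MiB per coroutine (assumed)
workers = 24                             # from the cover letter

extra_mib = workers * COROUTINE_STACK_SIZE // (1024 * 1024)
print(extra_mib)  # MiB of stack address space reserved
```

Even if the reservation is modest, a measured before/after RSS comparison would answer the question definitively.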
Stefan