From: Jitendra Kolhe
Subject: Re: [Qemu-devel] [PATCH v1] migration: skip sending ram pages released by virtio-balloon driver.
Date: Tue, 15 Mar 2016 18:50:45 +0530
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0

On 3/11/2016 8:09 PM, Jitendra Kolhe wrote:
> You mean the total live migration time for the unmodified qemu and the
> 'you modified for test' qemu are almost the same?


I am not sure I understand the question, but if 'you modified for test'
means the modifications to save_zero_page() below, then the answer is no.
Here is what I tried. Let's say we have 3 versions of qemu (timings below
are for a 16GB idle guest with 12GB ballooned out):

v1. Unmodified qemu - absolutely no code change - Total migration time
= ~7600ms (I rounded this one to ~8000ms)
v2. Modified qemu 1 - with the proposed patch set (which skips both the
zero-page scan and migrating control information for ballooned-out pages)
- Total migration time = ~5700ms
v3. Modified qemu 2 - only with the changes to save_zero_page() discussed
in the previous mail (and of course using the proposed patch set only to
maintain the bitmap of ballooned-out pages) - Total migration time is
irrelevant in this case.
Total zero-page scan time = ~1789ms
Total (save_page_header + qemu_put_byte(f, 0)) = ~556ms
Everything seems to add up here (may not be exact): 5700 + 1789 + 556 =
~8000ms.
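
For reference, below is a minimal standalone sketch (not the actual patch
and not QEMU's real ram.c code; all names, types and the bitmap layout are
made up for illustration) of the v2 control flow measured above: consult a
per-page bitmap of pages released by the virtio-balloon driver before
paying for either the zero-page scan or the header/zero-byte write.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define GUEST_PAGE_SIZE 4096UL
#define BITS_PER_ULONG  (8 * sizeof(unsigned long))

/* Hypothetical bitmap with one bit per guest page; a set bit means the
 * page was released by the virtio-balloon driver. */
static unsigned long *balloon_bitmap;

static bool page_released_by_balloon(uint64_t page_nr)
{
    return balloon_bitmap &&
           ((balloon_bitmap[page_nr / BITS_PER_ULONG] >>
             (page_nr % BITS_PER_ULONG)) & 1);
}

static bool buffer_is_zero_sketch(const uint8_t *p, size_t len)
{
    /* Stand-in for the zero-page scan whose total cost is the ~1789ms
     * measured above. */
    for (size_t i = 0; i < len; i++) {
        if (p[i]) {
            return false;
        }
    }
    return true;
}

/* Returns -1 if the page is skipped entirely (v2 behaviour), 1 if it would
 * be sent as a zero page, 0 if it needs a full copy. */
static int save_page_sketch(uint64_t page_nr, const uint8_t *host_addr)
{
    if (page_released_by_balloon(page_nr)) {
        /* v2: no zero scan, no save_page_header(), no zero byte on the
         * wire for this page. */
        return -1;
    }
    if (buffer_is_zero_sketch(host_addr, GUEST_PAGE_SIZE)) {
        /* v1/v3: the save_page_header() + qemu_put_byte(f, 0) cost
         * (~556ms above) would be paid here; omitted because this sketch
         * has no QEMUFile. */
        return 1;
    }
    return 0;
}

int main(void)
{
    uint8_t *page = calloc(1, GUEST_PAGE_SIZE);
    printf("page 0 -> %d\n", save_page_sketch(0, page));
    free(page);
    return 0;
}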

I see two factors that we have not considered in this sum: (a) the
overhead of migrating the balloon bitmap to the target, and (b) as you
mentioned below, the overhead of qemu_clock_get_ns().

I missed one more factor: testing each page against the balloon bitmap
during migration, which consumes around ~320ms for the same configuration.
If we remove this overhead, which is introduced by the proposed patch set,
from the above calculation, we almost get back the total migration time of
unmodified qemu: 5700 - 320 + 1789 + 556 = ~7700ms.
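
As to how per-page numbers such as the ~1789ms zero-page scan or the
~320ms bitmap test can be accumulated, one possible scheme is sketched
below (a standalone, hypothetical sketch, not the instrumentation actually
used; inside QEMU one would use qemu_clock_get_ns() rather than
clock_gettime()). Note that the clock reads themselves add overhead, which
is the qemu_clock_get_ns() factor mentioned above.

#define _POSIX_C_SOURCE 199309L
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Hypothetical stand-in for the per-page balloon-bitmap test; the pattern
 * roughly mimics 12GB of a 16GB guest being ballooned out. */
static int page_is_ballooned_out(uint64_t page_nr)
{
    return (page_nr % 4) != 3;
}

int main(void)
{
    const uint64_t nr_pages = (16ULL << 30) / 4096;  /* 16GB guest, 4K pages */
    uint64_t bitmap_test_ns = 0;
    uint64_t skipped = 0;

    for (uint64_t i = 0; i < nr_pages; i++) {
        uint64_t t0 = now_ns();
        int ballooned = page_is_ballooned_out(i);
        bitmap_test_ns += now_ns() - t0;
        if (ballooned) {
            skipped++;
        }
    }

    printf("bitmap test total: %.1f ms, pages skipped: %" PRIu64 "\n",
           bitmap_test_ns / 1e6, skipped);
    return 0;
}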

Thanks,
- Jitendra



