
Re: [Qemu-devel] [PATCH 6/8] migration: implementation of hook_ram_sync


From: Denis V. Lunev
Subject: Re: [Qemu-devel] [PATCH 6/8] migration: implementation of hook_ram_sync
Date: Thu, 8 Oct 2015 19:51:58 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0

On 10/07/2015 12:44 PM, Paolo Bonzini wrote:

On 07/10/2015 08:20, Denis V. Lunev wrote:
All calls of this hook will be from ram_save_pending().

At the first call of this hook we need to save the initial
size of VM memory and put the migration thread to sleep for
decent period (downtime for example). During this period
guest would dirty memory.

The second call is also the last. There we make our estimate of the
dirty-bytes rate, assuming that the time between the two
synchronizations of the dirty bitmap differs negligibly from the
downtime.

An alternative to this approach is to obtain the size of the data
“transmitted” through the transport.
This would use before_ram_iterate/after_ram_iterate, right?

However, this
way creates large time and memory overheads:
1) Transmitted guest memory pages are copied to QEMUFile's buffer
   (~8 sec per 4 GB VM)
Note that they are not if you implement writev_buffer.

Yep, but we would have to set up an iovec entry for each page;
please see below.
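For illustration of the point being made (a hypothetical helper, not QEMU's actual writev_buffer path): with a writev-style interface each guest page becomes one iovec entry referencing the page in place, instead of being memcpy'd into QEMUFile's internal buffer.

```c
#include <string.h>
#include <sys/uio.h>

#define GUEST_PAGE_SIZE 4096

/* Sketch: build one iovec entry per guest page so the transport can
 * send the pages without an intermediate copy.  Returns the number of
 * entries filled. */
static size_t fill_iov_from_pages(struct iovec *iov, size_t max_iov,
                                  void **pages, size_t npages)
{
    size_t i, n = npages < max_iov ? npages : max_iov;

    for (i = 0; i < n; i++) {
        iov[i].iov_base = pages[i];       /* no copy: reference the page */
        iov[i].iov_len  = GUEST_PAGE_SIZE;
    }
    return n;
}
```

This avoids the per-page copy, but still costs one iovec setup per page, which is the overhead being weighed here.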

2) Dirty memory pages are processed one by one (~60 msec per 4 GB VM)
That however improves the accuracy, doesn't it?

Paolo
For the estimate we need, the result is the count of dirtied
pages per second, thus I do not think that this will make a
difference.

Though the approach proposed by David in the letter below is much
better from the point of view of overhead, and the result presented
in the original description as (2), i.e. ~60 msec per 4 GB VM, was
obtained that way. Sorry that this was not clearly stated in the
description.

Den


