Re: [Qemu-devel] [PATCH] migration: Improve bandwidth estimation


From: Pierre Riteau
Subject: Re: [Qemu-devel] [PATCH] migration: Improve bandwidth estimation
Date: Wed, 14 Sep 2011 19:17:16 +0200

There has been some discussion about migration downtime today, so I am trying
again: does anyone have comments on this patch?

On 2 May 2011, at 14:19, Pierre Riteau wrote:

> Any comment on this patch?
> 
> On 31 March 2011, at 22:30, Pierre Riteau wrote:
> 
>> In the current migration code, bandwidth is estimated by measuring the
>> time spent in the ram_save_block loop and dividing by the number of sent
>> bytes. However, because of buffering, the time spent in this loop is
>> usually much less than the actual time required to send data on the
>> wire. Try to improve this by measuring the time spent between two calls
>> to ram_save_live instead.
>> 
>> Signed-off-by: Pierre Riteau <address@hidden>
>> ---
>> arch_init.c |    9 +++++++--
>> 1 files changed, 7 insertions(+), 2 deletions(-)
>> 
>> diff --git a/arch_init.c b/arch_init.c
>> index 0c09f91..7b822fe 100644
>> --- a/arch_init.c
>> +++ b/arch_init.c
>> @@ -175,6 +175,7 @@ static int ram_save_block(QEMUFile *f)
>> }
>> 
>> static uint64_t bytes_transferred;
>> +static int64_t prev_time;
>> 
>> static ram_addr_t ram_save_remaining(void)
>> {
>> @@ -254,6 +255,7 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>>    uint64_t bytes_transferred_last;
>>    double bwidth = 0;
>>    uint64_t expected_time = 0;
>> +    int64_t current_time;
>> 
>>    if (stage < 0) {
>>        cpu_physical_memory_set_dirty_tracking(0);
>> @@ -286,6 +288,8 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>>        /* Enable dirty memory tracking */
>>        cpu_physical_memory_set_dirty_tracking(1);
>> 
>> +        prev_time = qemu_get_clock_ns(rt_clock);
>> +
>>        qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE);
>> 
>>        QLIST_FOREACH(block, &ram_list.blocks, next) {
>> @@ -296,7 +300,6 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>>    }
>> 
>>    bytes_transferred_last = bytes_transferred;
>> -    bwidth = qemu_get_clock_ns(rt_clock);
>> 
>>    while (!qemu_file_rate_limit(f)) {
>>        int bytes_sent;
>> @@ -308,8 +311,10 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>>        }
>>    }
>> 
>> -    bwidth = qemu_get_clock_ns(rt_clock) - bwidth;
>> +    current_time = qemu_get_clock_ns(rt_clock);
>> +    bwidth = current_time - prev_time;
>>    bwidth = (bytes_transferred - bytes_transferred_last) / bwidth;
>> +    prev_time = current_time;
>> 
>>    /* if we haven't transferred anything this round, force expected_time to
>>     * a very high value, but without crashing */
>> -- 
>> 1.7.4.2
>> 
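For reference, a minimal standalone sketch of the timing change above.
This is illustrative C only, not the QEMU code: clock_ns(),
start_estimation(), and estimate_bandwidth() are hypothetical stand-ins
for qemu_get_clock_ns(rt_clock) and the surrounding ram_save_live logic.

    #include <stdint.h>
    #include <time.h>

    static uint64_t bytes_transferred;   /* total migrated bytes so far */
    static int64_t prev_time;            /* timestamp of the previous pass */

    /* Stand-in for qemu_get_clock_ns(rt_clock). */
    static int64_t clock_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    /* Called once when dirty tracking is enabled (stage 1). */
    static void start_estimation(void)
    {
        prev_time = clock_ns();
    }

    /* Called at the end of each pass: bytes per nanosecond over the
     * whole interval since the previous pass, which includes the time
     * buffered data spent draining to the wire, not just the time
     * spent inside the copy loop. If nothing was sent this round the
     * result is 0; the patch guards against that case afterwards. */
    static double estimate_bandwidth(uint64_t bytes_before_pass)
    {
        int64_t now = clock_ns();
        double bwidth = (double)(bytes_transferred - bytes_before_pass)
                        / (double)(now - prev_time);
        prev_time = now;  /* start of the next interval */
        return bwidth;
    }

    int main(void)
    {
        start_estimation();
        bytes_transferred += 4096;        /* pretend one page was sent */
        return estimate_bandwidth(0) >= 0 ? 0 : 1;
    }

Because prev_time is refreshed at the end of each pass, every estimate
spans the full interval between two ram_save_live calls, charging
buffering delays against the bandwidth figure instead of timing only
the ram_save_block loop.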
> 
> -- 
> Pierre Riteau -- PhD student, Myriads team, IRISA, Rennes, France
> http://perso.univ-rennes1.fr/pierre.riteau/
> 
> 



