
Re: [RFC v4 0/3] migration: reduce time of loading non-iterable vmstate


From: Chuang Xu
Subject: Re: [RFC v4 0/3] migration: reduce time of loading non-iterable vmstate
Date: Fri, 23 Dec 2022 11:11:47 -0800
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0) Gecko/20100101 Thunderbird/102.4.2


On 2022/12/23 11:50 PM, Peter Xu wrote:
Chuang,

On Fri, Dec 23, 2022 at 10:23:04PM +0800, Chuang Xu wrote:
In this version:

- add more information to the cover letter.
- remove the changes to virtio_load().
- add rcu_read_locked() to detect whether the RCU lock is held.

The duration of loading non-iterable vmstate accounts for a significant
portion of downtime (measured from the timestamp when the source QEMU stops
to the timestamp when the target QEMU starts). Most of that time is spent
committing memory region changes repeatedly.

This series packs all the memory region changes made while loading
non-iterable vmstate into a single memory transaction. As the number of
devices grows, this greatly improves performance.
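
The core mechanism is QEMU's existing memory transaction API; a minimal
sketch of the idea (not the exact patch):

    /* Defer all memory region commits while the non-iterable device
     * state is loaded; address space updates are queued, not applied.
     */
    memory_region_transaction_begin();

    /* ... load all non-iterable device vmstate here ... */

    /* Apply every queued update at once: one flatview reconstruction
     * instead of one per memory region change.
     */
    memory_region_transaction_commit();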

Here are the test1 results:
test info:
- Host
  - Intel(R) Xeon(R) Platinum 8260 CPU
  - NVIDIA Mellanox ConnectX-5
- VM
  - 32 CPUs 128GB RAM VM
  - 8 16-queue vhost-net devices
  - 16 4-queue vhost-user-blk devices.

        time of loading non-iterable vmstate   downtime
before  about 150 ms                           740+ ms
after   about 30 ms                            630+ ms
Have you investigated why multi-queue adds so much downtime overhead in
the same environment, compared to [1] below?
I have analyzed the downtime in detail. Both stopping and starting the device are 
time-consuming. 

For stopping vhost-net devices, vhost_net_stop_one() is called once more
for each additional queue pair, and each call runs vhost_virtqueue_stop()
twice in vhost_dev_stop() (once for the rx ring and once for the tx ring).
For example, we need to call vhost_virtqueue_stop() 32 (= 16 * 2) times to
stop a 16-queue vhost-net device. In vhost_virtqueue_stop(), QEMU needs to
negotiate with the vhost user daemon. The same is true for vhost-net
device startup.

For stopping vhost-user-blk devices, vhost_virtqueue_stop() is called once
more for each additional queue. For example, we need to call
vhost_virtqueue_stop() 4 times to stop a 4-queue vhost-user-blk device. The
same is true for vhost-user-blk device startup. The call fanout of both
stop paths is sketched below.
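
Heavily condensed from hw/net/vhost_net.c and hw/virtio/vhost.c (the
function names are real, the bodies are not):

    /* vhost-net: one vhost_dev per queue pair, each with 2 vrings */
    for (i = 0; i < queue_pairs; i++) {
        vhost_net_stop_one(...);            /* ends in vhost_dev_stop() */
    }

    /* inside vhost_dev_stop(): */
    for (i = 0; i < hdev->nvqs; i++) {      /* nvqs == 2 for vhost-net  */
        vhost_virtqueue_stop(...);          /* round trip to the daemon */
    }
    /* => 16-queue vhost-net: 16 * 2 = 32 vhost_virtqueue_stop() calls.
     * vhost-user-blk is a single vhost_dev with nvqs == num-queues, so
     * a 4-queue device takes 4 calls.
     */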

It seems that the vhost-user-blk device is less affected by the number of
queues than the vhost-net device. However, the vhost-user-blk device needs
to set up its inflight I/O tracking when it is started. The negotiation
with SPDK in this step is also time-consuming. I tried moving this step to
the startup phase of the target QEMU, before the migration starts. In my
tests, this optimization greatly reduces the vhost-user-blk device startup
time and thus the downtime. I'm not sure whether this is too hacky. If you
are interested, maybe we can discuss it further.
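
The shape of what I tried, in case it helps the discussion (hypothetical
sketch; vhost_dev_get_inflight() and vhost_dev_set_inflight() are the
existing helpers that vhost_user_blk_start() uses, the only new idea here
is calling them earlier, before the migration starts):

    /* Hypothetical: negotiate the inflight region with the backend
     * ahead of time on the target, so that vhost_user_blk_start()
     * during the downtime window can skip this round trip to SPDK.
     */
    static int vhost_user_blk_prefetch_inflight(VHostUserBlk *s)
    {
        int ret;

        if (!s->inflight->addr) {
            ret = vhost_dev_get_inflight(&s->dev, s->queue_size,
                                         s->inflight);
            if (ret < 0) {
                return ret;
            }
        }
        return vhost_dev_set_inflight(&s->dev, s->inflight);
    }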

      
(This result is different from that of v1. Someone may have changed
something on my host..., but it does not affect the demonstration of the
optimization effect.)


In test2, we keep the number of devices the same as in test1 and reduce
the number of queues per device:

Here are the test2 results:
test info:
- Host
  - Intel(R) Xeon(R) Platinum 8260 CPU
  - NVIDIA Mellanox ConnectX-5
- VM
  - 32 CPUs 128GB RAM VM
  - 8 1-queue vhost-net devices
  - 16 1-queue vhost-user-blk devices.

        time of loading non-iterable vmstate   downtime
before  about 90 ms                            about 250 ms
after   about 25 ms                            about 160 ms
[1]

In test3, we keep the number of queues per device the same as in test1 and
reduce the number of devices:

Here are the test3 results:
test info:
- Host
  - Intel(R) Xeon(R) Platinum 8260 CPU
  - NVIDIA Mellanox ConnectX-5
- VM
  - 32 CPUs 128GB RAM VM
  - 1 16-queue vhost-net device
  - 1 4-queue vhost-user-blk device.

        time of loading non-iterable vmstate   downtime
before  about 20 ms                            about 70 ms
after   about 11 ms                            about 60 ms


As we can see from the test results above, both the number of queues and
the number of devices have a great impact on the time of loading
non-iterable vmstate. As the number of devices and queues grows, there are
more memory region commits, and the time spent on flatview reconstruction
also increases.
The downtime measured in precopy can be more complicated than in postcopy,
because the switchover point is calculated by QEMU based on the configured
downtime limit, and the measured value also contains part of the RAM
migration. Postcopy should be more accurate on that, because there's no
such calculation and no RAM is transferred during the downtime.

However, postcopy downtime is not accurate either in its implementation in
postcopy_start(), where the downtime is measured right after we flush the
packed data; right below that there's already an idea for optimizing it:

    if (migrate_postcopy_ram()) {
        /*
         * Although this ping is just for debug, it could potentially be
         * used for getting a better measurement of downtime at the source.
         */
        qemu_savevm_send_ping(ms->to_dst_file, 4);
    }

So maybe I'll have a look there.
The current downtime calculation is indeed inaccurate, because the
source-side calculation does not take into account the time consumed by
various factors on the destination side. Maybe we can consider transmitting
some key timestamps to the destination, and let the destination calculate
the actual downtime after startup.
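
Purely as an illustration of that idea (nothing below exists in QEMU
today, and it assumes the two hosts' clocks are reasonably synchronized):

    /* Source, at the moment the vCPUs are stopped: */
    uint64_t src_stop_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
    /* ... ship src_stop_ns to the destination in the migration
     * stream (hypothetical) ...
     */

    /* Destination, right after the VM resumes: */
    uint64_t dst_resume_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
    uint64_t real_downtime_ms = (dst_resume_ns - src_stop_ns) / 1000000;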

Besides the above, personally I'm happy with the series; one trivial
comment on patch 2, but not a huge deal. I don't expect you'll get any more
comments before the end of this year... but let's wait until after the Xmas
holiday.

Thanks!

I'll further modify patch 2 according to your comments.

Merry Christmas!
