From: Chunguang Li
Subject: [Qemu-devel] VM I/O performance drops dramatically during storage migration with drive-mirror
Date: Mon, 28 May 2018 18:17:10 +0800 (GMT+08:00)

Hi, everyone.

Recently I have been doing some tests on VM storage+memory migration with
KVM/QEMU/libvirt. I use the following migrate command through virsh: "virsh
migrate --live --copy-storage-all --verbose vm1 qemu+ssh://192.168.1.91/system
tcp://192.168.1.91". I have checked the libvirt debug output and made sure
that the drive-mirror + NBD migration method is used.
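
(In case it helps others reproduce the check: the QMP traffic that libvirt
sends shows up in the libvirtd debug log once debug logging is enabled. The
filter and log path below are only an example; on the source host the log
should contain the "drive-mirror" command, and on the destination host the
"nbd-server-start"/"nbd-server-add" commands.)

    # /etc/libvirt/libvirtd.conf
    log_filters="1:qemu"
    log_outputs="1:file:/var/log/libvirt/libvirtd.log"

    # after restarting libvirtd and starting the migration:
    grep -E "drive-mirror|nbd-server" /var/log/libvirt/libvirtd.log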

Inside the VM, I use an I/O benchmark (Iometer) to generate an OLTP workload,
and I record the I/O performance (IOPS) before, during and after the
migration. When the migration begins, the IOPS drops by 30%-40%. This is
reasonable, because the migration I/O competes with the workload I/O.
However, during the last period of the migration (about 66 seconds in my
case), the IOPS drops dramatically, from about 170 to less than 10. The
figure for this experiment is attached to this email.
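
(One way to line the IOPS curve up with the progress of the copy job is to
sample the block job state on the source host about once per second while the
migration runs. "vm1" is the domain name from the command above; the output
should include the mirror job's len/offset and, on recent QEMU versions, its
"ready" flag:)

    while true; do
        echo -n "$(date +%s) "
        virsh qemu-monitor-command vm1 '{"execute":"query-block-jobs"}'
        sleep 1
    done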

I want to figure out what causes this period of very low IOPS. First, I added
some printf()s to the QEMU code and found that this period occurs just before
the memory migration phase. (BTW, the memory migration is very fast, only
about 5 seconds.) So I think this period should be the last phase of the
"drive-mirror" process in QEMU. I then tried to read the "drive-mirror" code
in QEMU, but failed to understand it very well.

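(As a cross-check that does not require patching QEMU, the phase boundaries
can also be timestamped from the outside by listening to QMP events through
libvirt, assuming a reasonably recent libvirt. BLOCK_JOB_READY should mark
the point where drive-mirror has finished its bulk copy and is only mirroring
new guest writes; the job-end and migration events give rough timestamps for
the switchover:)

    virsh qemu-monitor-event vm1 --loop --timestamp
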
Does anybody know what may cause this period of very low IOPS? Thank you very
much.

Some details of this experiment:
The VM disk image file is 30GB (format=raw, cache=none, aio=native), and
Iometer operates on a 10GB file inside the VM. The OLTP workload consists of
33% writes and 67% reads (8KB request size, all random). The VM memory size
is 4GB, most of which should be zero pages, so the memory migration is very
fast.
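
(For anyone who wants to approximate the workload without Iometer, a roughly
equivalent fio command is sketched below. The file path, runtime and queue
depth are placeholders, not the exact Iometer access specification:)

    fio --name=oltp --filename=/path/to/testfile --size=10G \
        --rw=randrw --rwmixread=67 --bs=8k --direct=1 \
        --ioengine=libaio --iodepth=16 --time_based --runtime=600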

--
Chunguang Li, Ph.D. Candidate
Wuhan National Laboratory for Optoelectronics (WNLO)
Huazhong University of Science & Technology (HUST)
Wuhan, Hubei Prov., China

Attachment: iometer_oltp_IOPS.jpg
Description: JPEG image

