From: lihuiba
Subject: [Qemu-devel] extremely low IOPS performance of QCOW2 image format on an SSD RAID1
Date: Mon, 23 Jun 2014 10:06:51 +0800 (CST)

Hi, all

I'm using a qcow2 image stored on an SSD RAID1 (2 x Intel S3500), and I'm benchmarking the
system with fio. Although the throughput in the VM (with KVM and virtio enabled) is acceptable (67%
of the host's throughput), the IOPS performance is extremely low -- only 2% of the host's IOPS.

I was initially using qemu-1.1.2, and I also tried qemu-1.7.1 for comparison. There was no significant
difference.

In contrast, a raw image and LVM both perform very well: they usually achieve 90%+ of the host's
throughput and 60%+ of its IOPS. So the problem must lie in the qcow2 image format.

I also observed that, when I run the 4 KB IOPS benchmark in the VM on a qcow2 image, fio in the VM
reports a read rate of 9.x MB/s, while iostat on the host reports the SSD being read at 150+ MB/s. So
QEMU or qcow2 must be amplifying the amount read by nearly 16 times.
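
A rough calculation (my own guess, not something I have verified): 150 MB/s divided by 9 MB/s is
roughly 16, which happens to equal qcow2's default 64 KiB cluster size divided by the 4 KiB request
size. So it looks as if each 4 KiB guest read triggers roughly one cluster-sized read on the host,
perhaps of the qcow2 L2 metadata tables (which are one cluster each) -- but that is only an assumption
on my part.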

So, how can I fix or tune qcow2 to get around this performance problem?
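
For example, would it help to recreate the image with metadata preallocation and/or a larger cluster
size? Something along these lines (the image name, size, and the 2M cluster size below are just
placeholders I made up, not what I actually tested):

qemu-img create -f qcow2 -o preallocation=metadata,cluster_size=2M test.qcow2 100G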

Thanks!


PS:
1. qemu parameters:
-enable-kvm -cpu qemu64 -rtc base=utc,clock=host,driftfix=none -usb -device usb-tablet -nodefaults -nodefconfig -no-kvm-pit-reinjection -global kvm-pit.lost_tick_policy=discard -machine pc,accel=kvm -vga std -k en-us -smp 8 -m 4096 -boot order=cdn -vnc :1 -drive file=$1,if=none,id=drive_0,cache=none,aio=native -device virtio-blk-pci,drive=drive_0,bus=pci.0,addr=0x5 -drive file=$2,if=none,id=drive_2,cache=none,aio=native -device virtio-blk-pci,drive=drive_2,bus=pci.0,addr=0x7

2. fio parameters for IOPS:
fio --filename=/dev/vdb --direct=1 --ioengine=libaio --iodepth 32 --thread --numjobs=1 --rw=randread --bs=4k --size=100% --runtime=60s --group_reporting --name=test

3. fio parameters for throughput:
fio --filename=/dev/vdb --direct=1 --ioengine=psync --thread --numjobs=3 --rw=randread --bs=1024k --size=100% --runtime=60s --group_reporting --name=test


