From: Zhang Haoyu
Subject: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial
Date: Fri, 29 Aug 2014 15:45:30 +0800

Hi, all

I started a VM with virtio-serial (default number of ports: 31) and found that 
virtio-blk performance degraded by about 25%. The problem is 100% reproducible.
without virtio-serial:
4k-read-random 1186 IOPS
with virtio-serial:
4k-read-random 871 IOPS
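
(For anyone who wants to reproduce: a 4k random-read job of roughly this shape 
gives comparable numbers; fio is only a sketch here, and the file name and size 
are placeholders, not necessarily the tool and settings behind the figures above:

  fio --name=randread-4k --rw=randread --bs=4k --direct=1 --iodepth=1 --ioengine=windowsaio --runtime=60 --size=1g --filename=test.dat
)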

But if I use the max_ports=2 option to limit the maximum number of virtio-serial 
ports, the I/O performance degradation is much less severe, about 5%.
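
For reference, the limit is set on the controller like this (a sketch; the id is 
arbitrary):

  -device virtio-serial-pci,id=virtio-serial0,max_ports=2

If I read the virtio spec correctly, a controller with N ports carries 2*N+2 
virtqueues, so max_ports=2 shrinks the controller from 64 virtqueues (31 ports) 
down to 6.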

Also, IDE performance does not degrade when virtio-serial is present.

[environment]
Host OS: Linux 3.10
QEMU: 2.0.1
Guest OS: Windows Server 2008

[qemu command]
/usr/bin/kvm -id 1587174272642 \
 -chardev socket,id=qmp,path=/var/run/qemu-server/1587174272642.qmp,server,nowait \
 -mon chardev=qmp,mode=control \
 -vnc :0,websocket,to=200 \
 -enable-kvm \
 -pidfile /var/run/qemu-server/1587174272642.pid \
 -daemonize \
 -name win2008-32 \
 -smp sockets=1,cores=1 \
 -cpu core2duo \
 -nodefaults \
 -vga cirrus \
 -no-hpet \
 -k en-us \
 -boot menu=on,splash-time=8000 \
 -m 2048 \
 -usb \
 -drive if=none,id=drive-ide0,media=cdrom,aio=native \
 -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200 \
 -drive file=/sf/data/local/images/host-00e081de43d7/cea072c4294f/win2008-32.vm/vm-disk-1.qcow2,if=none,id=drive-ide2,cache=none,aio=native \
 -device ide-hd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=100 \
 -netdev type=tap,id=net0,ifname=158717427264200,script=/sf/etc/kvm/vtp-bridge \
 -device e1000,mac=FE:FC:FE:D3:F9:2B,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 \
 -rtc driftfix=slew,clock=rt,base=localtime \
 -global kvm-pit.lost_tick_policy=discard \
 -global PIIX4_PM.disable_s3=1 \
 -global PIIX4_PM.disable_s4=1
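
The command line above does not include the virtio-serial controller; in the 
degraded case it would be added with its defaults, along these lines (a sketch; 
the chardev path and port name are placeholders, not taken from the real 
command):

 -device virtio-serial-pci,id=virtio-serial0 \
 -chardev socket,path=/var/run/qemu-server/1587174272642.port1,server,nowait,id=channel1 \
 -device virtserialport,chardev=channel1,name=com.example.port1,id=port1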

Any ideas?

Thanks,
Zhang Haoyu



