qemu-devel
Re: [Qemu-devel] I/O performance degradation with Virtio-Blk-Data-Plane


From: Jonghwan Choi
Subject: Re: [Qemu-devel] I/O performance degradation with Virtio-Blk-Data-Plane
Date: Thu, 05 Sep 2013 10:18:28 +0900

Thanks for your reply.

> 
> 1. The fio results so it's clear which cases performed worse and by how
>    much.
>
When I set vcpu = 8, read performance decreased by about 25%.
In my test, I got the best performance with vcpu = 1.

> 2. The fio job files.
> 
[testglobal]
description=high_iops
exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
group_reporting=1
rw=read
direct=1
ioengine=sync
bs=4m
numjobs=1
size=2048m
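(For comparison across vcpu counts, a variant of this job using asynchronous I/O with a deeper queue might be worth testing as well. This is only a sketch; the ioengine/iodepth/numjobs values below are suggestions, not part of the original test.)

```ini
[testglobal]
description=high_iops_aio
exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
group_reporting=1
rw=read
direct=1
; libaio with iodepth > 1 keeps multiple requests in flight,
; which exercises the dataplane thread more than sync I/O does
ioengine=libaio
iodepth=32
bs=4m
; scaling numjobs with the vcpu count shows whether the
; regression tracks guest CPU contention
numjobs=4
size=2048m
```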

> 3. The QEMU command-line to launch the guest.
> 
<domain type='kvm' id='6' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>vm1</name>
  <uuid>76d6fca9-904d-1e3e-77c7-67a048be9d60</uuid>
  <memory unit='KiB'>50331648</memory>
  <currentMemory unit='KiB'>50331648</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-1.4'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/var/lib/libvirt/images/vm1/ubuntu-kvm/tmpJQKe7w.raw'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
...
<qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.scsi=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.config-wce=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
  </qemu:commandline>
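(For reference, the libvirt passthrough arguments above correspond roughly to setting these experimental virtio-blk properties directly on the QEMU command line. This is a sketch only; the drive/device IDs are assumed, and the `...` stands for the rest of the machine options generated by libvirt.)

```shell
qemu-system-x86_64 ... \
  -drive file=/var/lib/libvirt/images/vm1/ubuntu-kvm/tmpJQKe7w.raw,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native \
  -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0,scsi=off,config-wce=off,x-data-plane=on
```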

> 4. The host disk configuration (e.g. file systems, local
>    SATA/FibreChannel/NFS, etc).
>
-> SSD 
> 5. The basic host specs including RAM and number of logical CPUs.
 -> Host: 256GB RAM, 31 logical CPUs; Guest: 48GB RAM, 8 VCPUs


Thanks.
Best Regards.

> -----Original Message-----
> From: Stefan Hajnoczi [mailto:address@hidden]
> Sent: Wednesday, September 04, 2013 5:59 PM
> To: Jonghwan Choi
> Cc: address@hidden
> Subject: Re: [Qemu-devel] I/O performance degradation with Virtio-Blk-
> Data-Plane
> 
> On Mon, Sep 02, 2013 at 05:24:09PM +0900, Jonghwan Choi wrote:
> > Recently I measured I/O performance with virtio-blk-data-plane.
> > There was something strange in the test.
> > When the vcpu count is 1, I/O performance increases,
> > but when the vcpu count is 2 or more, I/O performance decreases.
> >
> > I used the 3.10.9 stable kernel, QEMU 1.4.2, and fio 2.1.
> >
> > What should I check in my test?
> 
> It's hard to say without any details on your benchmark configuration.
> 
> In general, x-data-plane=on performs well with SMP guests and with
> multiple disks.  This is because the dataplane threads can process I/O
> requests without contending on the QEMU global mutex or the iothread
> event loop.
> 
> In order to investigate further, please post:
> 
> 1. The fio results so it's clear which cases performed worse and by how
>    much.
> 
> 2. The fio job files.
> 
> 3. The QEMU command-line to launch the guest.
> 
> 4. The host disk configuration (e.g. file systems, local
>    SATA/FibreChannel/NFS, etc).
> 
> 5. The basic host specs including RAM and number of logical CPUs.
> 
> Stefan



