From: Ming Lei
Subject: Re: [Qemu-devel] [regression] dataplane: throughput -40% by commit 580b6b2aa2
Date: Thu, 26 Jun 2014 23:47:53 +0800

On Thu, Jun 26, 2014 at 11:43 PM, Paolo Bonzini <address@hidden> wrote:
> Il 26/06/2014 17:37, Ming Lei ha scritto:
>
>> On Thu, Jun 26, 2014 at 11:29 PM, Paolo Bonzini <address@hidden>
>> wrote:
>>>
>>> Il 26/06/2014 17:14, Ming Lei ha scritto:
>>>
>>>> Hi Stefan,
>>>>
>>>> I found that VM block I/O throughput has decreased by more than 40%
>>>> on my laptop, and it looks much worse in my server environment.
>>>> It is caused by your commit 580b6b2aa2:
>>>>
>>>>           dataplane: use the QEMU block layer for I/O
>>>>
>>>> I run fio with below config to test random read:
>>>>
>>>> [global]
>>>> direct=1
>>>> size=4G
>>>> bsrange=4k-4k
>>>> timeout=20
>>>> numjobs=4
>>>> ioengine=libaio
>>>> iodepth=64
>>>> filename=/dev/vdc
>>>> group_reporting=1
>>>>
>>>> [f]
>>>> rw=randread
>>>>
>>>> Along with the throughput drop, the latency improves a little.
>>>>
>>>> With this commit, the I/O blocks submitted to the filesystem become
>>>> much smaller than before, and more io_submit() calls have to be made
>>>> to the kernel, which means the effective iodepth may become much
>>>> lower; the sketch below this quote illustrates the difference.
>>>>
>>>> I am not surprised by the result, since I compared VM I/O
>>>> performance between qemu and lkvm before. lkvm has no big qemu
>>>> lock problem and handles I/O in a dedicated thread, but lkvm's block
>>>> I/O is still much worse than qemu's in terms of throughput, because
>>>> lkvm doesn't submit block I/O in batches the way the previous
>>>> dataplane did, IMO.
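
To illustrate the batching point above: with libaio, queueing many
prepared requests into a single io_submit() call keeps the device queue
deep, while issuing one call per request pays a syscall for every I/O.
A minimal sketch, not QEMU code; the helper names are made up, and
iocbs[] is assumed to be prepared beforehand with io_prep_pread():

    /* Batching n prepared requests into one io_submit() call versus
     * issuing one call per request.  The batched form keeps the
     * kernel-side queue deep; the per-request form pays a syscall per
     * I/O and lowers the effective iodepth.  Assumes libaio (-laio). */
    #include <libaio.h>

    static int submit_batched(io_context_t ctx, struct iocb **iocbs, int n)
    {
        return io_submit(ctx, n, iocbs);   /* one syscall for all n requests */
    }

    static int submit_one_by_one(io_context_t ctx, struct iocb **iocbs, int n)
    {
        int i, ret;

        for (i = 0; i < n; i++) {
            ret = io_submit(ctx, 1, &iocbs[i]);   /* one syscall per request */
            if (ret < 0) {
                return ret;
            }
        }
        return n;
    }
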
>>>
>>>
>>>
>>> What is your elevator setting in both the host and the guest?  Usually
>>> deadline gives the best performance.
>>
>>
>> The test was based on cfq, but I just ran a quick test with deadline
>> and saw no obvious difference.
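
As an aside on checking the elevator: below is a minimal sketch of
reading and switching it through sysfs from C (an echo into the same
file does the same job from a shell). The device name "vdc" is an
assumption taken from the fio config above; on the host, substitute
the SSD's block device. Needs root:

    /* Read the current elevator, then select deadline via sysfs. */
    #include <stdio.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/sys/block/vdc/queue/scheduler", "r");

        if (f && fgets(line, sizeof(line), f)) {
            printf("current: %s", line);   /* active one shown in [brackets] */
        }
        if (f) {
            fclose(f);
        }

        f = fopen("/sys/block/vdc/queue/scheduler", "w");
        if (!f) {
            perror("scheduler");
            return 1;
        }
        fprintf(f, "deadline\n");          /* select the deadline elevator */
        return fclose(f) == 0 ? 0 : 1;
    }
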
>
>
> Can you give us your QEMU command line?

The data.img is placed on ext4 over an SSD, and a basically similar
result can be observed with /dev/nullb1 as the backend too.

/home/tom/git/other/qemu/x86_64-softmmu/qemu-system-x86_64 \
    -name 'kvm-test'  \
    -M pc  \
    -vga none  \
    -drive id=drive_image1,if=none,format=raw,cache=none,aio=native,file=/mnt/ssd/img/f19-fs.img \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=1,scsi=off,config-wce=off,x-data-plane=on,bus=pci.0,addr=02 \
    -drive id=drive_image3,if=none,format=raw,cache=none,aio=native,file=/dev/nullb1 \
    -device virtio-blk-pci,id=image3,drive=drive_image3,bootindex=3,scsi=off,config-wce=off,x-data-plane=on,bus=pci.0,addr=04 \
    -drive id=drive_image2,if=none,format=raw,cache=none,aio=native,file=/mnt/ssd/img/data.img \
    -device virtio-blk-pci,id=image2,drive=drive_image2,bootindex=2,scsi=off,config-wce=off,x-data-plane=on,bus=pci.0,addr=03 \
    -netdev user,id=idabMX4S,hostfwd=tcp::5000-:22  \
    -device virtio-net-pci,mac=9a:be:bf:c0:c1:c2,id=idDyAIbK,vectors=4,netdev=idabMX4S,bus=pci.0,addr=08 \
    -m 1024  \
    -smp 4,maxcpus=4  \
    -kernel /mnt/ssd/git/linux-2.6/linux-2.6-next/arch/x86_64/boot/bzImage \
    -append 'earlyprintk console=ttyS0 mem=1024M rootfstype=ext4 root=/dev/vda rw virtio_blk.queue_depth=128 loglevel=9 no_console_suspend ip=dhcp ftrace_dump_on_oops'  \
    -nographic  \
    -rtc base=utc,clock=host,driftfix=none \
    -enable-kvm



Thanks,
-- 
Ming Lei


