Re: [Qemu-devel] virtio-scsi vs. virtio-blk


From: ronnie sahlberg
Subject: Re: [Qemu-devel] virtio-scsi vs. virtio-blk
Date: Thu, 9 Aug 2012 22:52:56 +1000

On Thu, Aug 9, 2012 at 10:31 PM, Stefan Priebe - Profihost AG
<address@hidden> wrote:
> Am 09.08.2012 14:19, schrieb Paolo Bonzini:
>
>> Il 09/08/2012 14:08, Stefan Priebe - Profihost AG ha scritto:
>>>
>>>
>>> virtio-scsi:
>>> rand 4k:
>>>    write: io=822448KB, bw=82228KB/s, iops=20557, runt= 10002msec
>>>    read : io=950920KB, bw=94694KB/s, iops=23673, runt= 10042msec
>>> seq:
>>>    write: io=2436MB, bw=231312KB/s, iops=56, runt= 10784msec
>>>    read : io=3248MB, bw=313799KB/s, iops=76, runt= 10599msec
>>>
>>> virtio-blk:
>>> rand 4k:
>>>    write: io=896472KB, bw=89051KB/s, iops=22262, runt= 10067msec
>>>    read : io=1710MB, bw=175073KB/s, iops=43768, runt= 10002msec
>>> seq:
>>>    write: io=4008MB, bw=391285KB/s, iops=95, runt= 10489msec
>>>    read : io=5748MB, bw=570178KB/s, iops=139, runt= 10323msec
>>
>>
>> Thanks; some overhead is expected, but not this much.  The
>> sequential case is especially bad; what disk is this?
>
>
> Right now this is an external iSCSI NexentaStor. Locally I can't get this
> bandwidth or these iops to test against.
>
>
>
>> Things to test include:
>>
>> - using the deadline I/O scheduler on at least the host, and possibly
>> the guest too
>
> The guest uses noop right now. The disk host is a NexentaStor running
> OpenSolaris. I use libiscsi, so in both cases (virtio-blk and
> virtio-scsi) the disks are not visible to the host.
>
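For context, the quoted fio numbers work out to virtio-scsi delivering roughly the following fractions of virtio-blk's throughput (random 4k compared by iops, sequential by bandwidth); a quick awk sketch of the arithmetic:

```shell
# Ratios computed directly from the fio results quoted above.
awk 'BEGIN {
    printf "rand 4k write: %.0f%%\n", 100 * 20557 / 22262
    printf "rand 4k read : %.0f%%\n", 100 * 23673 / 43768
    printf "seq write    : %.0f%%\n", 100 * 231312 / 391285
    printf "seq read     : %.0f%%\n", 100 * 313799 / 570178
}'
```

So random writes are close (about 92%), while random reads and both sequential directions sit near half of virtio-blk's rate.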

And if you mount the disks locally on the host using open-iscsi, and
access them as /dev/sg* from qemu, what performance do you get?


virtio-blk would first go through SCSI emulation and then call out to
block/iscsi.c to translate back into SCSI commands to send to libiscsi,

while virtio-scsi (I think) would treat libiscsi as a generic SCSI
passthrough device, i.e. all commands just go straight through
bdrv_aio_ioctl(SG_IO).


If anything, I think the codepaths should be shorter in the
virtio-scsi case, and it should perform better due to the lack of SCSI
emulation and SCSI re-encoding.


Can you also try using normal scsi-generic and see how it performs
compared to virtio-blk/-scsi?

git show 983924532f61091fd90d1f2fafa4aa938c414dbb
This commit shows how to set up libiscsi with passthrough via an
emulated scsi hba.
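A rough sketch of what that setup looks like on the qemu command line; the portal and IQN are placeholders and the exact device options are from memory, so check the commit message above for the authoritative form:

```shell
# Hypothetical libiscsi passthrough setup: the iscsi:// LUN is opened by
# block/iscsi.c and exposed through scsi-generic on an emulated HBA.
ISCSI_URL="iscsi://192.0.2.10/iqn.2012-08.example.com:nexenta/0"  # placeholder
QEMU_CMD="qemu-system-x86_64 \
 -drive file=$ISCSI_URL,if=none,id=lun0 \
 -device lsi53c895a,id=hba0 \
 -device scsi-generic,bus=hba0.0,drive=lun0"
echo "$QEMU_CMD"
```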


As virtio-blk and virtio-scsi both use libiscsi, I think the bottleneck
might be either the interface between the guest and qemu, or the
difference for the guest between talking to the local SCSI emulation
and talking to the passthrough remote target.

Also, is it possible to map the LUNs locally on the host using
open-iscsi and then use the scsi-generic devices /dev/sg* with qemu,
to see how that compares?
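A hypothetical sketch of that comparison setup; the portal, IQN, and /dev/sg* node are placeholders, and the iscsiadm login lines are commented out so the sketch itself has no side effects:

```shell
# Log into the target with open-iscsi on the host, then hand the
# resulting scsi-generic node to qemu as a passthrough device.
PORTAL="192.0.2.10:3260"                 # placeholder portal
TARGET="iqn.2012-08.example.com:nexenta" # placeholder IQN
SG_DEV="/dev/sg3"                        # whichever node the login creates (see lsscsi -g)

# iscsiadm -m discovery -t sendtargets -p "$PORTAL"
# iscsiadm -m node -T "$TARGET" -p "$PORTAL" --login

QEMU_CMD="qemu-system-x86_64 \
 -drive file=$SG_DEV,if=none,id=lun0 \
 -device virtio-scsi-pci,id=scsi0 \
 -device scsi-generic,bus=scsi0.0,drive=lun0"
echo "$QEMU_CMD"
```

With this, the host kernel's iSCSI initiator handles the transport, so any difference from the libiscsi numbers points at the userspace iSCSI path rather than the virtio interface.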


