
Re: [Qemu-devel] virtio-scsi vs. virtio-blk


From: Stefan Priebe - Profihost AG
Subject: Re: [Qemu-devel] virtio-scsi vs. virtio-blk
Date: Thu, 09 Aug 2012 15:39:50 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20120714 Thunderbird/14.0


On 09.08.2012 15:15, Paolo Bonzini wrote:
> On 09/08/2012 14:52, ronnie sahlberg wrote:

>>> The guest uses the noop scheduler right now. The disk host is
>>> NexentaStor running OpenSolaris. I use libiscsi right now, so the
>>> disks are not visible to the host in either case (virtio-blk or
>>> virtio-scsi).

>> And if you mount the disks locally on the host using open-iscsi, and
>> access them as /dev/sg* from qemu, what performance do you get?

Good question.
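
For reference, the setup ronnie is describing would look roughly like the
sketch below; the target IQN, portal address, and /dev/sg node are
placeholders, and the exact -device syntax may vary by QEMU version:

    # On the host: log in to the target with open-iscsi so the LUN
    # shows up as a local /dev/sgN node
    iscsiadm -m node -T iqn.2012-08.example:target0 -p 192.168.0.1 --login
    lsscsi -g    # find the matching /dev/sg* device

    # Hand that SCSI device to the guest through virtio-scsi
    qemu-system-x86_64 ... \
        -device virtio-scsi-pci,id=scsi0 \
        -drive if=none,id=lun0,file=/dev/sg0 \
        -device scsi-generic,drive=lun0,bus=scsi0.0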

>> virtio-blk would first go through QEMU's SCSI emulation and then call
>> out to block/iscsi.c to translate back into SCSI commands to send via
>> libiscsi,
>>
>> while virtio-scsi (I think) would treat libiscsi as a generic SCSI
>> passthrough device, i.e. all commands just go straight through
>> bdrv_aio_ioctl(SG_IO).
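
To make the two paths concrete, here is a sketch of the configurations
being compared (the iscsi:// URL is a placeholder, and whether scsi-block
works on top of libiscsi depends on the libiscsi version, as Paolo points
out below):

    # virtio-blk: the guest sees a plain block device; QEMU's block layer
    # and block/iscsi.c translate requests back into SCSI commands
    -drive file=iscsi://192.168.0.1/iqn.2012-08.example:target0/0,if=virtio

    # virtio-scsi + scsi-block: SCSI commands from the guest are passed
    # through to the target more or less unmodified
    -device virtio-scsi-pci,id=scsi0 \
    -drive file=iscsi://192.168.0.1/iqn.2012-08.example:target0/0,if=none,id=lun0 \
    -device scsi-block,drive=lun0,bus=scsi0.0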

> I think he's not using scsi-block or scsi-generic, because libiscsi 1.0
> didn't support that.
>
> scsi-generic would indeed incur some overhead because it does not do
> scatter/gather I/O directly, but scsi-hd and scsi-block do not have this
> overhead. In any case, that should be visible in the output of perf if
> it is significant.
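
Since Paolo suggests looking at perf output, a minimal way to do that on
the host would be something like the following (the process name is an
assumption; adjust it to however QEMU was started):

    # Record host-side CPU samples for the running QEMU process while
    # the benchmark is active, then inspect where the time goes
    perf record -p $(pidof qemu-system-x86_64) -- sleep 30
    perf report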

Thanks for your help and replies. I'm a little lost with all these comments. So what should I check or do next?

Stefan


