Re: [Qemu-devel] Re: [PATCH] virtio-blk: add SG_IO passthru support


From: Nicholas A. Bellinger
Subject: Re: [Qemu-devel] Re: [PATCH] virtio-blk: add SG_IO passthru support
Date: Thu, 30 Apr 2009 13:55:52 -0700

On Thu, 2009-04-30 at 22:13 +0200, Christoph Hellwig wrote:
> On Wed, Apr 29, 2009 at 12:37:20PM +0100, Paul Brook wrote:
> > How exactly does it introduce additional latency? A scsi command block is 
> > hardly large or complicated. Are you suggesting that a 16/32byte scsi 
> > command 
> > takes significantly longer to process than a 16byte virtio command 
> > descriptor? I'd expect any extra processing to be a small fraction of the 
> > host syscall latency, let alone the latency of the physical host adapter. 
> > It 
> > probably even fits on the same CPU cache line.
> 
> Encoding the scsi CDB is additional work but I would be surprised if it
> is measurable.  Just using scsi cdbs would be simple enough; the bigger
> issue is emulating a full-blown scsi bus, because then you need to do all
> kinds of queueing decisions at the target level etc. and drag in a
> complicated scsi stack, not just a simple block driver in the guest.  And
> at least on current linux kernels that does introduce measurable latency.
>
> Now it might be possible to get that latency down to a level where we
> can ignore it but when doing all this additional work there always will
> be additional overhead.
> 

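As a sanity check on the size comparison in the quote above: the virtio-blk request header really is 16 bytes, the same order as a 16-byte SCSI CDB. A sketch of the header layout as described by the virtio spec (field names follow the spec; this is not QEMU's actual definition):

```c
#include <stdint.h>

/* Layout of the virtio-blk request header per the virtio spec;
 * fields are little-endian on the wire.  Total size: 16 bytes,
 * comparable to a 16-byte SCSI CDB. */
struct virtio_blk_outhdr {
    uint32_t type;      /* VIRTIO_BLK_T_IN / VIRTIO_BLK_T_OUT / ... */
    uint32_t ioprio;    /* request priority */
    uint64_t sector;    /* offset in 512-byte sectors */
};
```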
/me puts on SCSI target mode hat

The other obvious benefit of allowing passthrough SCSI block devices
into KVM guests via virtio-blk is that at some point those SCSI block
devices could come from a local target mode stack representing LVM block
devices as SCSI-3 storage, or, say, FILEIO on top of a btrfs mount also
presenting SCSI-3 storage, alongside the usual hardware passthrough for
SCSI devices accessible on the KVM host.

The important part is that KVM guests using SG_IO passthrough with
virtio-blk (assuming the devices actually show up as SCSI devices in the
KVM guest) would be able to take advantage of the existing Linux SCSI
I/O fencing functionality that is used for H/A and for cluster
filesystems like GFS in RHEL, etc.
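That fencing rides on SCSI-3 persistent reservations, which a guest could issue through SG_IO once the device appears as a SCSI node. A minimal userspace sketch of such a passthrough request (the fd is assumed to be an open `/dev/sgN` or `/dev/sdX` node; error handling omitted):

```c
#include <scsi/sg.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

/* Build a PERSISTENT RESERVE IN (READ KEYS) CDB -- the command that
 * fencing tools such as sg_persist send to list registered keys. */
static void build_prin_cdb(uint8_t cdb[10], uint16_t alloc_len)
{
    memset(cdb, 0, 10);
    cdb[0] = 0x5e;              /* PERSISTENT RESERVE IN */
    cdb[1] = 0x00;              /* service action: READ KEYS */
    cdb[7] = alloc_len >> 8;    /* allocation length, big-endian */
    cdb[8] = alloc_len & 0xff;
}

/* Issue the CDB via the SG_IO ioctl -- the same request a virtio-blk
 * SG_IO passthrough would forward from the guest to the host device. */
static int prin_read_keys(int fd, uint8_t *buf, uint16_t len)
{
    uint8_t cdb[10], sense[32];
    struct sg_io_hdr hdr;

    build_prin_cdb(cdb, len);
    memset(&hdr, 0, sizeof(hdr));
    hdr.interface_id = 'S';
    hdr.dxfer_direction = SG_DXFER_FROM_DEV;
    hdr.cmd_len = sizeof(cdb);
    hdr.cmdp = cdb;
    hdr.dxfer_len = len;
    hdr.dxferp = buf;
    hdr.mx_sb_len = sizeof(sense);
    hdr.sbp = sense;
    hdr.timeout = 20000;        /* milliseconds */

    return ioctl(fd, SG_IO, &hdr);
}
```

From the guest's point of view nothing changes: it fills in the same sg_io_hdr, and the host side forwards the CDB, data, and sense buffer through virtio-blk to the real (or emulated) target.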

Also, using scsi_dh_alua in Linux KVM guests against SCSI-3 compatible
block devices via the passthrough means you could do some interesting
things with controlling bandwidth and paths using asymmetric access
port states from local storage into the Linux KVM guest.
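The asymmetric access states mentioned here are what scsi_dh_alua reads with the REPORT TARGET PORT GROUPS command; a guest with SG_IO passthrough could issue the same query itself. A sketch of the CDB construction (per SPC-3; offsets are from the standard, not from any particular driver):

```c
#include <stdint.h>
#include <string.h>

/* Build a REPORT TARGET PORT GROUPS CDB (MAINTENANCE IN, service
 * action 0x0a).  The returned descriptors carry each port group's
 * asymmetric access state: active/optimized, active/non-optimized,
 * standby, unavailable, ... */
static void build_rtpg_cdb(uint8_t cdb[12], uint32_t alloc_len)
{
    memset(cdb, 0, 12);
    cdb[0] = 0xa3;                      /* MAINTENANCE IN */
    cdb[1] = 0x0a;                      /* REPORT TARGET PORT GROUPS */
    cdb[6] = alloc_len >> 24;           /* allocation length, big-endian */
    cdb[7] = (alloc_len >> 16) & 0xff;
    cdb[8] = (alloc_len >> 8) & 0xff;
    cdb[9] = alloc_len & 0xff;
}
```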

How much latency overhead a SCSI passthrough (with a pseudo SCSI bus) in
the KVM guest would add compared to native virtio-blk would be the key
metric.  If the KVM SCSI passthrough talked in SCSI target mode directly
to drivers/scsi (instead of going through Block -> SCSI, for example),
it would actually involve less overhead on the KVM host, IMHO.

Using BLOCK and FILEIO devices with SCSI-3 emulation on top (and then
passing them into the KVM guest) would obviously add processing
overhead, but for those folks interested in SCSI I/O fencing in the KVM
guest using existing tools, that overhead would be a reasonable tradeoff
on machines with a fast memory and point-to-point bus architecture.

--nab

> > 
> > Paul
> ---end quoted text---
