From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v7 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support
Date: Wed, 30 Nov 2016 09:01:40 +0000
User-agent: Mutt/1.7.1 (2016-10-04)

On Mon, Nov 28, 2016 at 02:17:56PM +0000, Stefan Hajnoczi wrote:
> Please take a look at vhost-user-scsi, which folks from Nutanix are
> currently working on.  See "[PATCH v2 0/3] Introduce vhost-user-scsi and
> sample application" on qemu-devel.  It is a true zero-copy local I/O tap
> because it shares guest RAM.  This is more efficient than cross memory
> attach's single memory copy.  It does not require running the server as
> root.  This is the #1 thing you should evaluate for your final
> architecture.
> 
> vhost-user-scsi works on the virtio-scsi emulation level.  That means
> the server must implement the virtio-scsi vring and device emulation.
> It is not a block driver.  By hooking in at this level you can achieve
> the best performance but you lose all QEMU block layer functionality and
> need to implement your own SCSI target.  You also need to consider live
> migration.

To clarify why I think vhost-user-scsi is best suited to your
requirements for performance:

With vhost-user-scsi the qnio server would be notified by kvm.ko via
eventfd when the VM submits new I/O requests to the virtio-scsi HBA.
The QEMU process is completely bypassed for I/O request submission and
the qnio server processes the SCSI command instead.  This avoids the
context switch to QEMU and then to the qnio server.  With cross memory
attach QEMU first needs to process the I/O request and hand it to
libqnio before the qnio server can be scheduled.
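
In case it helps to picture the flow: in a vhost-user backend the kick
eventfd for each virtqueue is handed over by QEMU on the vhost-user
socket (VHOST_USER_SET_VRING_KICK) and is then signalled by kvm.ko
directly.  A rough sketch in C (not from any of the patches;
process_vring() is a placeholder for the server's own virtio-scsi
processing):

    #include <poll.h>
    #include <sys/eventfd.h>

    /* Placeholder for the server's vring work: pop descriptors, handle
     * the SCSI command, push used elements, signal the call eventfd. */
    static void process_vring(void) { /* ... */ }

    static void wait_for_kicks(int kickfd)
    {
        struct pollfd pfd = { .fd = kickfd, .events = POLLIN };

        for (;;) {
            if (poll(&pfd, 1, -1) < 0) {
                continue;                   /* e.g. EINTR */
            }

            eventfd_t count;
            eventfd_read(kickfd, &count);   /* clear the notification */
            process_vring();
        }
    }

No QEMU thread runs anywhere in that loop, which is exactly the context
switch saving described above.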

The vhost-user-scsi qnio server has shared memory access to guest RAM
and is therefore able to do zero-copy I/O into guest buffers.  Cross
memory attach always incurs a memory copy.
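
To make "zero-copy" concrete: QEMU passes file descriptors for the
guest memory regions over the vhost-user socket
(VHOST_USER_SET_MEM_TABLE), the backend mmaps them once, and after that
a guest physical address taken from a vring descriptor is just a table
lookup plus pointer arithmetic.  Simplified sketch (struct and function
names are mine, error handling omitted):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    struct mem_region {
        uint64_t gpa;       /* guest physical start address */
        uint64_t size;
        void    *hva;       /* local mapping of the region fd */
    };

    static struct mem_region regions[8];
    static unsigned nregions;

    /* One call per region announced in VHOST_USER_SET_MEM_TABLE. */
    static void map_region(unsigned i, int fd, uint64_t gpa,
                           uint64_t size, uint64_t mmap_offset)
    {
        regions[i].gpa  = gpa;
        regions[i].size = size;
        regions[i].hva  = mmap(NULL, size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, mmap_offset);
        nregions = i + 1;
    }

    /* Translate a guest physical address into a pointer the server can
     * hand straight to preadv()/pwritev() - no intermediate buffer. */
    static void *gpa_to_hva(uint64_t gpa)
    {
        for (unsigned i = 0; i < nregions; i++) {
            if (gpa >= regions[i].gpa &&
                gpa < regions[i].gpa + regions[i].size) {
                return (char *)regions[i].hva + (gpa - regions[i].gpa);
            }
        }
        return NULL;    /* not guest memory */
    }

With cross memory attach the server has no mapping of guest RAM, so it
has to pull the data into its own buffers with process_vm_readv() (and
push it back with process_vm_writev()), which is where the extra copy
comes from.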

Using this high-performance architecture requires significant changes
though.  vhost-user-scsi hooks into the stack at a different layer so a
QEMU block driver is not used at all.  QEMU also wouldn't use libqnio.
Instead everything would live in your qnio server process (not part of
QEMU).

You'd have to rethink the resiliency strategy because you currently rely
on the QEMU block driver connecting to a different qnio server if the
local qnio server fails.  In the vhost-user-scsi world it's more like
having a physical SCSI adapter - redundancy and multipathing are used to
achieve resiliency.

For example, virtio-scsi HBA #1 would connect to the local qnio server
process.  virtio-scsi HBA #2 would connect to another local process
called the "proxy process" which forwards requests to a remote qnio
server (using libqnio?).  If HBA #1 fails then I/O is sent to HBA #2
instead.  I/O can fail back to HBA #1 once that path becomes
operational again.
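
Roughly speaking (the device and option names assume the proposed
vhost-user-scsi syntax from that series, and the socket paths are made
up), the guest would be started with two HBAs along these lines:

    # HBA #1: local qnio server listening on a vhost-user socket
    -chardev socket,id=vus0,path=/var/run/qnio-local.sock
    -device vhost-user-scsi-pci,id=scsi0,chardev=vus0

    # HBA #2: local proxy process forwarding to a remote qnio server
    -chardev socket,id=vus1,path=/var/run/qnio-proxy.sock
    -device vhost-user-scsi-pci,id=scsi1,chardev=vus1

Both HBAs would expose the same LUNs and the guest would run
dm-multipath (or equivalent) across them, so failover and failback
happen without guest applications noticing.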

If the qnio server is supposed to run in a VM instead of directly in the
host environment then it's worth looking at the vhost-pci work that Wei
Wang <address@hidden> is working on.  The email thread is called
"[PATCH v2 0/4] *** vhost-user spec extension for vhost-pci ***".  The
idea here is to allow inter-VM virtio device emulation so that instead
of terminating the virtio-scsi device in the qnio server process on the
host, you can terminate it inside another VM with good performance
characteristics.

Stefan
