From: Paolo Bonzini
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH v3 2/6] block: Add VFIO based NVMe driver
Date: Fri, 7 Jul 2017 12:06:26 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.0

On 06/07/2017 19:38, Keith Busch wrote:
> On Wed, Jul 05, 2017 at 09:36:31PM +0800, Fam Zheng wrote:
>> This is a new protocol driver that exclusively opens a host NVMe
>> controller through VFIO. It achieves better latency than linux-aio by
>> completely bypassing the host kernel's vfs/block layer.
>>
>>     $rw-$bs-$iodepth  linux-aio     nvme://
>>     ----------------------------------------
>>     randread-4k-1     8269          8851
>>     randread-512k-1   584           610
>>     randwrite-4k-1    28601         34649
>>     randwrite-512k-1  1809          1975
>>
>> The driver also integrates with the polling mechanism of iothread.
>>
>> This patch is co-authored by Paolo and me.
>>
>> Signed-off-by: Fam Zheng <address@hidden>
> 
> I haven't had much time to do a thorough review, but from the brief look
> so far the implementation looks fine to me.
> 
> I am wondering, though, if an NVMe vfio driver can be done as its own
> program that qemu can link to. The SPDK driver comes to mind as such an
> example, but it may create undesirable dependencies.

I think there's room for both (and for PCI passthrough too).  SPDK as
"its own program" is what vhost-user-blk provides, in the end.

This driver is simpler for developers to test than SPDK.  For cloud
providers that want to provide a stable guest ABI but also want a faster
interface for high-performance PCI SSDs, it offers a different
performance/ABI stability/power consumption tradeoff than either PCI
passthrough or SPDK's poll-mode driver.
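
To make the comparison concrete (the PCI address and namespace below are
made up, and the nvme:// syntax is the one from the commit message; in
both cases the controller must first be unbound from the host nvme
driver and bound to vfio-pci):

    # PCI passthrough: the whole controller is handed to the guest
    -device vfio-pci,host=0000:01:00.0

    # this driver: QEMU drives the controller through VFIO and exposes
    # it to the guest as an ordinary block device
    -drive file=nvme://0000:01:00.0/1,if=none,id=drive0 \
    -device virtio-blk-pci,drive=drive0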

The driver is also useful when tuning the QEMU event loop, because its
higher performance makes it easier to see some second-order effects that
appear at higher queue depths (e.g. faster driver -> more guest
interrupts -> lower performance).
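
The knobs involved are the iothread's polling properties; a typical
setup for such an experiment (values and ids are only examples) is:

    -object iothread,id=io0,poll-max-ns=32768 \
    -drive file=nvme://0000:01:00.0/1,if=none,id=drive0 \
    -device virtio-blk-pci,drive=drive0,iothread=io0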

Paolo