
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC v1] block/NVMe: introduce a new vhost NVMe host device to QEMU
Date: Mon, 29 Jan 2018 15:29:12 +0000
User-agent: Mutt/1.9.1 (2017-09-22)

On Mon, Jan 15, 2018 at 04:01:55PM +0800, Changpeng Liu wrote:
> The NVMe 1.3 specification introduces a new NVMe admin command,
> Doorbell Buffer Config, which lets the driver write doorbell values
> to a shadow buffer instead of MMIO registers. This can significantly
> improve guest performance for emulated NVMe devices inside a VM.

If I understand correctly the Shadow Doorbell Buffer offers two
optimizations:

1. The guest driver only writes to the MMIO register when EventIdx has
   been reached.  This eliminates some MMIO writes.

2. The device may poll the Shadow Doorbell Buffer so that command
   processing can begin before guest driver performs an MMIO write.

Is this correct?

> Similar to the existing vhost-user-scsi solution, this commit adds a
> new vhost_user_nvme host device to the VM. I/O is processed by the
> slave I/O target, so users can implement a user-space NVMe driver in
> the slave I/O target.
> 
> Users can start QEMU with: -chardev socket,id=char0,path=/path/vhost.0 \
> -device vhost-user-nvme,chardev=char0,num_io_queues=2.

Each new feature has a cost in terms of maintenance, testing,
documentation, and support.  Users need to be educated about the role of
each available storage controller and how to choose between them.

I'm not sure why QEMU should go in this direction since it makes the
landscape more complex and harder to support.  You've said the
performance is comparable to vhost-user-blk.  So what does NVMe offer
that makes this worthwhile?

A cool NVMe feature would be the ability to pass through individual
queues to different guests without SR-IOV.  In other words, bind a queue
to a namespace subset so that multiple guests can be isolated from each
other.  That way the data path would not require vmexits.  The control
path and device initialization would still be emulated by QEMU so the
hardware does not need to provide the full resources and state needed
for SR-IOV.  I looked into this but came to the conclusion that it would
require changes to the NVMe specification because the namespace is a
per-command field.
