

From: Liu, Changpeng
Subject: Re: [Qemu-devel] [RFC v1] block/NVMe: introduce a new vhost NVMe host device to QEMU
Date: Tue, 30 Jan 2018 01:19:54 +0000


> -----Original Message-----
> From: Stefan Hajnoczi [mailto:address@hidden]
> Sent: Monday, January 29, 2018 11:29 PM
> To: Liu, Changpeng <address@hidden>
> Cc: address@hidden; Harris, James R <address@hidden>; Busch,
> Keith <address@hidden>; address@hidden; address@hidden;
> address@hidden
> Subject: Re: [RFC v1] block/NVMe: introduce a new vhost NVMe host device to
> QEMU
> 
> On Mon, Jan 15, 2018 at 04:01:55PM +0800, Changpeng Liu wrote:
> > The NVMe 1.3 specification introduces a new NVMe admin command,
> > Doorbell Buffer Config, which lets the guest driver write a shadow
> > doorbell buffer instead of MMIO registers, so it can significantly
> > improve guest performance for emulated NVMe devices inside a VM.
> 
> If I understand correctly the Shadow Doorbell Buffer offers two
> optimizations:
> 
> 1. The guest driver only writes to the MMIO register when EventIdx has
>    been reached.  This eliminates some MMIO writes.
Correct.
> 
> 2. The device may poll the Shadow Doorbell Buffer so that command
>    processing can begin before the guest driver performs an MMIO write.
> 
> Is this correct?
The guest should write the shadow doorbell memory every time it submits a new
request. Whether the guest also writes the PCI doorbell register depends on the
slave target's EventIdx feedback.
The slave target can poll the shadow doorbell for new requests.
This can eliminate the MMIO writes on the submission data path.
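For illustration, a minimal sketch of the guest-side logic just described
(assuming the usual wrap-safe EventIdx comparison; function and variable names
are illustrative, not taken from any particular driver):

#include <stdbool.h>
#include <stdint.h>

/* Wrap-safe check: has new_tail moved past the EventIdx the device
 * advertised since the previous doorbell value old_tail?  (16-bit
 * queue indexes.)
 */
static bool sq_need_mmio(uint16_t event_idx, uint16_t new_tail, uint16_t old_tail)
{
    return (uint16_t)(new_tail - event_idx - 1) < (uint16_t)(new_tail - old_tail);
}

/* Called by the guest driver after queueing new commands on one SQ. */
static void sq_ring_doorbell(volatile uint32_t *shadow_db,
                             const volatile uint32_t *event_idx,
                             volatile uint32_t *mmio_db,
                             uint16_t new_tail)
{
    uint16_t old_tail = (uint16_t)*shadow_db;

    *shadow_db = new_tail;        /* the slave target may poll this */
    __sync_synchronize();         /* order shadow write before EventIdx read */

    if (sq_need_mmio((uint16_t)*event_idx, new_tail, old_tail))
        *mmio_db = new_tail;      /* MMIO write (vmexit) only when requested */
}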
> 
> > Similar to the existing vhost-user-scsi solution, this commit adds a
> > new vhost_user_nvme host device to the VM; the I/O is processed by
> > the slave I/O target, so users can implement a user-space NVMe driver
> > in the slave I/O target.
> >
> > Users can start QEMU with: -chardev socket,id=char0,path=/path/vhost.0 \
> > -device vhost-user-nvme,chardev=char0,num_io_queues=2.
> 
> Each new feature has a cost in terms of maintenance, testing,
> documentation, and support.  Users need to be educated about the role of
> each available storage controller and how to choose between them.
> 
> I'm not sure why QEMU should go in this direction since it makes the
> landscape more complex and harder to support.  You've said the
> performance is comparable to vhost-user-blk.  So what does NVMe offer
> that makes this worthwhile?
Good question. From the test results, this solution performs about the same as
vhost-user-blk. This is still ongoing work, and I don't have a *MUST*
justification yet.
> 
> A cool NVMe feature would be the ability to pass through individual
> queues to different guests without SR-IOV.  In other words, bind a queue
> to a namespace subset so that multiple guests can be isolated from each
> other.  That way the data path would not require vmexits.  The control
> path and device initialization would still be emulated by QEMU so the
> hardware does not need to provide the full resources and state needed
> for SR-IOV.  I looked into this but came to the conclusion that it would
> require changes to the NVMe specification because the namespace is a
> per-command field.


