From: Liu, Changpeng
Subject: Re: [Qemu-devel] [RFC v1] block/NVMe: introduce a new vhost NVMe host device to QEMU
Date: Wed, 17 Jan 2018 00:53:37 +0000


> -----Original Message-----
> From: Paolo Bonzini [mailto:address@hidden]
> Sent: Wednesday, January 17, 2018 1:07 AM
> To: Liu, Changpeng <address@hidden>; address@hidden
> Cc: Harris, James R <address@hidden>; Busch, Keith
> <address@hidden>; address@hidden; address@hidden;
> address@hidden
> Subject: Re: [RFC v1] block/NVMe: introduce a new vhost NVMe host device to
> QEMU
> 
> On 15/01/2018 09:01, Changpeng Liu wrote:
> > The NVMe 1.3 specification introduces a new NVMe admin command,
> > Doorbell Buffer Config, which lets the driver write a shadow doorbell
> > buffer in memory instead of MMIO registers, greatly improving guest
> > performance for NVMe devices emulated inside a VM.
> >
> > Similar to the existing vhost-user-scsi solution, this commit adds a
> > new vhost_user_nvme host device to the VM; the I/O is processed in
> > the slave I/O target, so users can implement a userspace NVMe driver
> > in the slave I/O target.
> >
> > Users can start QEMU with: -chardev socket,id=char0,path=/path/vhost.0 \
> > -device vhost-user-nvme,chardev=char0,num_io_queues=2.
> 
> Hi Changpeng,
> 
> I have two comments on this series.
> 
> First, the new command in NVMe 1.3 is great.  However, please first add
> support for the doorbell buffer config in hw/block/nvme.c.  There is no
> need to tie support for the new command to a completely new external
> server architecture.  Emulated NVMe can be enhanced to use iothreads and
> (when the doorbell buffer is configured) ioeventfd, and that should come
> before enhancements for external vhost-like servers.
Yes, adding doorbell buffer config support in hw/block/nvme.c would be a
great efficiency improvement; this vhost-like approach is a complementary
solution that provides an end-to-end userspace software stack.
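
For background, the shadow doorbell fast path looks roughly like this (my
own sketch based on the NVMe 1.3 spec; function and variable names are
illustrative, not taken from hw/block/nvme.c or any driver):

/* Per NVMe 1.3 Doorbell Buffer Config (admin opcode 0x7C): PRP1 points
 * at the shadow doorbell page, PRP2 at the EventIdx page.  After setup,
 * the driver updates the shadow value in guest memory and only issues
 * an MMIO doorbell write when the new value crosses the EventIdx the
 * device published.  All names below are illustrative. */
#include <stdbool.h>
#include <stdint.h>

/* Standard wraparound test: true iff new_idx has passed event_idx
 * since old_idx (same trick as virtio's event index). */
static bool nvme_need_mmio_db(uint32_t event_idx, uint32_t new_idx,
                              uint32_t old_idx)
{
    return (uint32_t)(new_idx - event_idx - 1) <
           (uint32_t)(new_idx - old_idx);
}

static void nvme_ring_sq_doorbell(volatile uint32_t *shadow_db,
                                  const volatile uint32_t *event_idx,
                                  volatile uint32_t *mmio_db,
                                  uint32_t new_tail)
{
    uint32_t old_tail = *shadow_db;

    *shadow_db = new_tail;        /* fast path: plain memory write */
    __sync_synchronize();         /* order shadow write vs. EventIdx read */
    if (nvme_need_mmio_db(*event_idx, new_tail, old_tail)) {
        *mmio_db = new_tail;      /* slow path: MMIO write, VM exit */
    }
}

The point is that in the common case the driver only touches guest memory;
with a vhost-style target polling the shadow buffer, submission can take
no VM exit at all, which is where most of the efficiency comes from.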
> 
> Second, virtio-based vhost-user remains QEMU's preferred method for
> high-performance I/O in guests.  Discard support is missing and that is
> important for SSDs; that should be fixed in the virtio spec.  Are there
> any other features where virtio-blk is lagging behind NVMe?
Previously I had a patch adding DISCARD support to virtio-blk, but I
couldn't figure out how to update the spec patch using svn. Is there a git
repository I can use to update the virtio-blk spec? I think I can pick up
the feature again.
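
For concreteness, a discard request could look something like this
(hypothetical names and values, not from any accepted spec):

/* Hypothetical virtio-blk DISCARD request layout, following the usual
 * virtio-blk request header convention.  The type value and all field
 * names are illustrative only. */
#include <stdint.h>

#define VIRTIO_BLK_T_DISCARD  11          /* hypothetical type value */

struct virtio_blk_req_hdr {
    uint32_t type;        /* VIRTIO_BLK_T_DISCARD */
    uint32_t reserved;
    uint64_t sector;      /* unused for discard; ranges go in the payload */
};

/* The data buffer would carry one or more of these ranges. */
struct virtio_blk_discard_range {
    uint64_t sector;       /* first sector of the range */
    uint32_t num_sectors;  /* range length in sectors */
    uint32_t flags;        /* e.g. bit 0 = unmap rather than plain discard */
};

The device would then complete the request with the usual single status
byte, the same as for VIRTIO_BLK_T_IN/OUT.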
From an efficiency standpoint, I compared this solution with vhost-user-blk.
Indeed, the two solutions perform at almost the same level: IOPS, CPU
utilization inside the guest, and KVM events (VM_EXIT, KVM_FAST_MMIO,
KVM_MSI_SET_IRQ, KVM_MSR) are nearly identical.
> 
> Thanks,
> 
> Paolo
