From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v4 11/11] virtio-blk: add x-data-plane=on|off performance feature
Date: Tue, 4 Dec 2012 15:19:30 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Dec 04, 2012 at 01:20:20PM +0200, Michael S. Tsirkin wrote:
> On Thu, Nov 29, 2012 at 04:55:48PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Nov 29, 2012 at 03:45:55PM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Nov 29, 2012 at 03:12:35PM +0200, Michael S. Tsirkin wrote:
> > > > On Thu, Nov 22, 2012 at 04:16:52PM +0100, Stefan Hajnoczi wrote:
> > > > > The virtio-blk-data-plane feature is easy to integrate into
> > > > > hw/virtio-blk.c.  The data plane can be started and stopped similar to
> > > > > vhost-net.
> > > > > 
> > > > > Users can take advantage of the virtio-blk-data-plane feature
> > > > > using the new -device virtio-blk-pci,x-data-plane=on property.
> > > > > 
> > > > > The x-data-plane name was chosen because at this stage the feature is
> > > > > experimental and likely to see changes in the future.
> > > > > 
> > > > > If the VM configuration does not support virtio-blk-data-plane
> > > > > an error message is printed.  Although we could fall back to
> > > > > regular virtio-blk, I prefer the explicit approach since it
> > > > > prompts the user to fix their configuration if they want the
> > > > > performance benefit of virtio-blk-data-plane.
> > > > 
> > > > Not only that, this affects features exposed to the guest so it
> > > > really can't be transparent.
> > > > 
> > > > Which reminds me - shouldn't some features be turned off?
> > > > For example, VIRTIO_BLK_F_SCSI?
> > > 
> > > Yes, virtio-blk-data-plane only starts when you give -device
> > > virtio-blk-pci,scsi=off,x-data-plane=on.  If you use scsi=on an error
> > > message is printed.
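
For reference, a complete command line looks something like this (the image
path and drive options are only an example; the important bits are
format=raw, scsi=off, and x-data-plane=on):

  qemu-system-x86_64 ... \
      -drive if=none,id=drive0,file=test.img,format=raw,cache=none,aio=native \
      -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on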
> > > 
> > > > > Limitations:
> > > > >  * Only format=raw is supported
> > > > >  * Live migration is not supported
> > > > 
> > > > This is probably fixable long term?
> > > 
> > > Absolutely.  There are two parts:
> > > 
> > > 1. Marking written memory dirty so live RAM migration can work.  Missing
> > >    today; an easy cheat is to switch off virtio-blk-data-plane and silently
> > >    switch to regular virtio-blk emulation while memory dirty logging is
> > >    enabled.  The longer-term solution is to actually communicate the
> > >    dirty information back to the memory API.
> > > 
> > > 2. Synchronizing virtio-blk-data-plane vring state with virtio-blk so
> > >    save/load works.  This should be relatively straightforward.
> > > 
> > > I don't want to gate this patch series on live migration support but it
> > > is on my TODO list for virtio-blk-data-plane after this initial series
> > > has been merged.
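
To illustrate part 1 above: the rough idea is that the data plane would
report the guest memory ranges it writes back through the memory API,
along the lines of the sketch below (just a sketch, not code from the
series; the exact entry point and types are from memory and may differ):

  /* After completing a request that wrote into guest memory, report the
   * written range so the dirty bitmap used by live RAM migration sees it.
   */
  memory_region_set_dirty(get_system_memory(), guest_addr, len);

Until something like that is in place, temporarily switching back to
regular virtio-blk emulation while dirty logging is active is the easy
way out.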
> > > 
> > > > >  * Block jobs, hot unplug, and other operations fail with -EBUSY
> > > > 
> > > > Hmm I don't see code to disable PCI unplug in this patch.
> > > > I expected no_hotplug to be set.
> > > > Where is it?
> > > 
> > > It uses the bdrv_in_use() mechanism.
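
Concretely (a sketch of the mechanism, not the literal hunks): when the
data plane starts it marks the BlockDriverState as in use, so block jobs
and hot unplug paths that check bdrv_in_use() fail with -EBUSY instead of
racing with the data plane thread:

  bdrv_set_in_use(s->bs, 1);    /* data plane start */
  ...
  bdrv_set_in_use(s->bs, 0);    /* data plane stop */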
> > 
> > Hmm, but the PCI device can still go away if the
> > guest ejects it.  Does this work fine?
> 
> Any comment?

Sorry for the delay.

virtio_blk_exit() is called when the device is freed.  The code destroys
the data plane thread - this includes draining requests and then
terminating the thread.
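
Roughly (simplified from the actual code):

  /* in virtio_blk_exit(): safe even if the guest already ejected the
   * device, because destroy drains in-flight requests before it
   * terminates the thread.
   */
  virtio_blk_data_plane_destroy(s->dataplane);
  s->dataplane = NULL;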

I tested with pci_del, so the guest is cooperating, but virtio_blk_exit()
does not assume that the data plane thread has already been stopped.

Is this what you were asking?

Stefan


