From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] live migration vs device assignment (motivation)
Date: Mon, 28 Dec 2015 10:51:12 +0200

On Sun, Dec 27, 2015 at 01:45:15PM -0800, Alexander Duyck wrote:
> On Sun, Dec 27, 2015 at 1:21 AM, Michael S. Tsirkin <address@hidden> wrote:
> > On Fri, Dec 25, 2015 at 02:31:14PM -0800, Alexander Duyck wrote:
> >> The PCI hot-plug specification calls out that the OS can optionally
> >> implement a "pause" mechanism which is meant to be used for
> >> high-availability environments.  What I am proposing is basically
> >> extending the standard SHPC-capable PCI bridge so that we can support
> >> DMA page dirtying for everything hosted on it, add a vendor-specific
> >> block to the config space so that the guest can notify the host that
> >> it will do page dirtying, and add a mechanism to indicate that all
> >> hot-plug events during the warm-up phase of the migration are pause
> >> events instead of full removals.
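> >>
> >> To make that concrete, here is a rough sketch of what such a
> >> vendor-specific capability could look like; every field name and
> >> offset is hypothetical, purely for illustration (u8 is the Linux
> >> kernel byte type):
> >>
> >>     /* Hypothetical vendor-specific capability for migration support.
> >>      * PCI_CAP_ID_VNDR (0x09) is the standard vendor-specific
> >>      * capability ID; the fields after the header are invented.
> >>      */
> >>     struct migration_vendor_cap {
> >>         u8 cap_id;      /* PCI_CAP_ID_VNDR */
> >>         u8 cap_next;    /* config-space offset of next capability */
> >>         u8 cap_len;     /* total length of this capability */
> >>         u8 mig_caps;    /* bit 0: DMA page dirtying supported
> >>                          * bit 1: pause-on-hotplug supported */
> >>         u8 mig_ctrl;    /* bit 0: guest will dirty DMA pages
> >>                          * bit 1: treat hot-unplug as a pause event */
> >>         u8 mig_status;  /* bit 0: migration warm-up phase active */
> >>         u8 reserved[2];
> >>     };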
> >
> > Two comments:
> >
> > 1. A vendor-specific capability will always be problematic.
> > It's better to register a capability ID with the PCI-SIG.
> >
> > 2. There are actually several capabilities:
> >
> > A. support for memory dirtying
> >         If not supported, we must stop the device before migration.
> >
> >         This is supported by core guest OS code, using patches
> >         similar to the ones you posted (see the sketch below,
> >         after (B)).
> >
> >
> > B. support for device replacement
> >         This is a faster form of hotplug, where the device is removed
> >         and later another device using the same driver is inserted in
> >         the same slot.
> >
> >         This is a possible optimization, but I am convinced
> >         (A) should be implemented independently of (B).
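> >
> >         A rough guest-side sketch for (A), using the Linux PCI
> >         config accessors; the offsets and bits follow the
> >         hypothetical capability layout above, not any real device:
> >
> >         #include <linux/pci.h>
> >
> >         /* Offsets/bits in the hypothetical vendor capability. */
> >         #define MIG_CAP_OFF_CAPS  3     /* mig_caps byte */
> >         #define MIG_CAP_OFF_CTRL  4     /* mig_ctrl byte */
> >         #define MIG_CAP_DIRTY     0x01  /* DMA page dirtying */
> >
> >         /* Opt in to DMA page dirtying; on failure the device must
> >          * simply be stopped before migration, as described above.
> >          */
> >         static int migration_enable_dirtying(struct pci_dev *pdev)
> >         {
> >                 u8 caps;
> >                 int pos;
> >
> >                 pos = pci_find_capability(pdev, PCI_CAP_ID_VNDR);
> >                 if (!pos)
> >                         return -ENODEV;
> >
> >                 pci_read_config_byte(pdev, pos + MIG_CAP_OFF_CAPS, &caps);
> >                 if (!(caps & MIG_CAP_DIRTY))
> >                         return -EOPNOTSUPP;
> >
> >                 /* Tell the host the guest will dirty DMA pages. */
> >                 pci_write_config_byte(pdev, pos + MIG_CAP_OFF_CTRL,
> >                                       MIG_CAP_DIRTY);
> >                 return 0;
> >         }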
> >
> 
> My thought on this was that we don't need much to implement either
> feature, really only a bit or two for each one.  I had thought about
> extending the PCI Advanced Features capability, but for now it might
> make more sense to just implement it as a vendor capability for the
> QEMU-based bridges instead of trying to make this a true PCI
> capability, since I am not sure this would apply to physical hardware
> in any way.  The fact is the PCI Advanced Features capability is
> essentially just a vendor-specific capability with a different ID

Interesting. I see it more as a backport of PCI Express
features to PCI.

> so if we were to use 2 bits that are currently reserved in the
> capability, we could later merge the functionality without much
> overhead.

Don't do this. You must not touch reserved bits.

> I fully agree that the two implementations should be separate, but
> nothing says we have to implement them completely differently.  If we
> are just using 3 bits for capability, status, and control of each
> feature, there is no reason for them to be stored in separate
> locations.

True.
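
And to make the "3 bits for capability, status, and control" idea
concrete, both features could share a single byte-wide register.
The bit assignments below are purely illustrative:

    #define MIG_DIRTY_CAP       (1 << 0) /* dirty tracking supported */
    #define MIG_DIRTY_CTRL      (1 << 1) /* guest enables dirty tracking */
    #define MIG_DIRTY_STATUS    (1 << 2) /* dirty tracking active */
    #define MIG_REPLACE_CAP     (1 << 3) /* device replacement supported */
    #define MIG_REPLACE_CTRL    (1 << 4) /* guest accepts replacement */
    #define MIG_REPLACE_STATUS  (1 << 5) /* replacement in progress */
                                         /* bits 6-7 stay reserved */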

> >> I've been poking around in the kernel and QEMU code, and the part I
> >> have been trying to sort out is how to get the QEMU-based PCI bridge
> >> to use the SHPC driver, because from what I can tell the driver never
> >> actually gets loaded on the device, as it is left under the control
> >> of ACPI hot-plug.
> >
> > There are ways, but you can just use PCI Express; it's easier.
> 
> That's true.  I should probably just give up on trying to make this
> work with the i440fx machine type.  I could probably move over to q35,
> and once that is done we could look at something like the PCI Advanced
> Features solution for the PCI bridge drivers.
> 
> - Alex
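
For reference, with q35 the assigned device can sit behind a PCI
Express root port with native hotplug.  Something along these lines
should do it (ioh3420 is QEMU's Intel root port model; the host
address 01:00.0 is just a placeholder):

    qemu-system-x86_64 -machine q35 -m 4G \
        -device ioh3420,id=rp1,bus=pcie.0,chassis=1,slot=1 \
        -device vfio-pci,host=01:00.0,bus=rp1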

Once we have a decent idea of what's required, I can write
an ECN for the PCI Code and ID Assignment Specification.
That's cleaner than vendor-specific stuff that's tied to
a specific device/vendor ID.

-- 
MST


