From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [RFC PATCH V2 00/10] Qemu: Add live migration support for SRIOV NIC
Date: Tue, 1 Dec 2015 17:02:40 +0200

On Tue, Dec 01, 2015 at 02:26:57PM +0800, Lan, Tianyu wrote:
> 
> 
> On 11/30/2015 4:01 PM, Michael S. Tsirkin wrote:
> >It is still not very clear what it is you are trying to achieve, and
> >whether your patchset achieves it.  You merely say "adding live
> >migration" but it seems pretty clear this isn't about being able to
> >migrate a guest transparently, since you are adding a host/guest
> >handshake.
> >
> >This isn't about functionality either: I think that on KVM, it isn't
> >hard to live migrate if you can do a host/guest handshake, even today,
> >with no kernel changes:
> >1. before migration, expose a pv nic to guest (can be done directly on
> >   boot)
> >2. use e.g. a serial connection to move IP from an assigned device to pv nic
> >3. maybe move the mac as well
> >4. eject the assigned device
> >5. detect eject on host (QEMU generates a DEVICE_DELETED event when this
> >    happens) and start migration
> >
> 
> This looks like the bonding driver solution

Why does it? Unlike bonding, this doesn't touch the data path or
any kernel code. Just run a script from the guest agent.
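
To make steps 2 and 3 above concrete, here is a rough sketch of what such a
guest-side script could look like on a Linux guest, driving plain iproute2
from userspace. The interface names, the single-IPv4-address assumption and
the choice of Python are all illustrative; nothing here comes from the
patchset:

#!/usr/bin/env python
# Illustrative guest-agent helper: move the IP configuration and the MAC
# from the assigned VF to the paravirtual NIC before the VF is ejected.
# Only a single IPv4 address is handled; routes, IPv6 etc. are left out
# to keep the sketch short.
import subprocess

VF_IF = "eth0"   # assigned SR-IOV VF - example name
PV_IF = "eth1"   # virtio-net standby NIC - example name

def sh(*args):
    subprocess.check_call(args)

# current MAC of the VF
with open("/sys/class/net/%s/address" % VF_IF) as f:
    mac = f.read().strip()

# first IPv4 address/prefix on the VF, e.g. "192.168.1.10/24"
out = subprocess.check_output(
    ["ip", "-4", "-o", "addr", "show", "dev", VF_IF]).decode()
addr = out.split()[3]

# step 2: move the IP from the assigned device to the pv nic
sh("ip", "addr", "flush", "dev", VF_IF)
sh("ip", "addr", "add", addr, "dev", PV_IF)

# step 3: maybe move the mac as well (virtio-net accepts a runtime change)
sh("ip", "link", "set", "dev", PV_IF, "down")
sh("ip", "link", "set", "dev", PV_IF, "address", mac)
sh("ip", "link", "set", "dev", PV_IF, "up")

Steps 4 and 5 (ejecting the device and starting migration) are driven from
the host; see the QMP sketch further down.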

> which puts the PV NIC and the VF
> in one bonded interface in active-backup mode. The bonding driver
> will switch from the VF to the PV NIC automatically when the VF is unplugged during
> migration. This is the only available solution for VF NIC migration.

It really isn't. For one, there is also teaming.

> But
> it requires the guest OS to do specific configuration inside and to rely on
> the bonding driver, which prevents it from working on Windows.
> From the performance side,
> putting the VF and the virtio NIC under a bonded interface affects their
> performance even when no migration is taking place. These factors block the use of VF
> NIC passthrough in some use cases (especially in the cloud) that require
> migration.

That's really up to the guest. You don't need to do bonding;
you can just move the IP and MAC from userspace, which is
possible on most OSes.

Or write something in the guest kernel that is more lightweight if you are
so inclined. What we are discussing here is the host-guest interface,
not the in-guest interface.
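
For the host side of that handshake, steps 4 and 5 can be driven entirely
over QMP, again without kernel changes. A minimal sketch, assuming QMP
listens on a UNIX socket and the VF was assigned with id=hostdev0; the
socket path, device id and migration URI are made up for the example:

#!/usr/bin/env python
# Illustrative host-side driver: unplug the assigned device, wait for
# QEMU's DEVICE_DELETED event, then start migration.
import json, socket

QMP_SOCK = "/var/run/vm0.qmp"        # example QMP socket path
DEV_ID = "hostdev0"                  # example id of the assigned VF
MIGRATE_URI = "tcp:dst-host:4444"    # example migration target

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect(QMP_SOCK)
f = s.makefile("rw")

def send(cmd, **args):
    f.write(json.dumps({"execute": cmd, "arguments": args}) + "\n")
    f.flush()

def recv():
    return json.loads(f.readline())

recv()                       # QMP greeting
send("qmp_capabilities")
recv()

# step 4: eject the assigned device
send("device_del", id=DEV_ID)

# step 5: wait until the guest has actually released it, then migrate.
# (A real tool would demultiplex command replies and events properly;
# this naive loop just skips everything until DEVICE_DELETED shows up.)
while True:
    msg = recv()
    if msg.get("event") == "DEVICE_DELETED" and \
       msg.get("data", {}).get("device") == DEV_ID:
        break

send("migrate", uri=MIGRATE_URI)
recv()

On the destination the reverse direction - hot-adding a VF again and letting
the guest script move the IP/MAC back - would be handled the same way.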

> The current solution we proposed changes the NIC driver and QEMU. The guest OS
> doesn't need to do anything special for migration.
> It's easy to deploy


Except of course these patches don't even work properly yet.

And when they do, even minor changes in the host-side NIC hardware across
migration will break guests in hard-to-predict ways.

> and
> all changes are in the NIC driver, so a NIC vendor can implement migration
> support just in their driver.

Kernel code and hypervisor code are not easier to develop and deploy than
a userspace script.  If that is all the motivation there is, that's a
pretty small return on investment.

-- 
MST


