Re: [Qemu-devel] live migration vs device assignment (motivation)


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] live migration vs device assignment (motivation)
Date: Thu, 10 Dec 2015 16:23:44 +0000
User-agent: Mutt/1.5.24 (2015-08-30)

* Lan, Tianyu (address@hidden) wrote:
> 
> 
> On 12/10/2015 7:41 PM, Dr. David Alan Gilbert wrote:
> >>> Ideally, it would be able to leave the guest driver unmodified, but that
> >>> requires the hypervisor or qemu to be aware of the device, which means we
> >>> may need a driver in the hypervisor or qemu to handle the device on behalf
> >>> of the guest driver.
> >Can you answer the question of when do you use your code -
> >    at the start of migration or
> >    just before the end?
> 
> Just before stopping the VCPU in this version; we inject the VF mailbox irq
> to notify the driver if the irq handler is installed.
> The Qemu side will also check this via the faked PCI migration capability,
> and the driver will set the status during its device open() or resume()
> callback.

OK, hmm - I can see that would work in some cases; but:
   a) It wouldn't work if the guest was paused; the management can pause it
      before starting migration or during migration - so you might need to
      hook the pause as well, so that's a bit complicated.

   b) How long does qemu wait for the guest to respond, and what does it do if
      the guest doesn't respond?  How do we recover?

   c) How much work does the guest need to do at this point?

   d) It would be great if we could find a more generic way of telling the
      guest it's about to migrate rather than via the PCI registers of one
      device; imagine what happens if you have a few different devices using
      SR-IOV - we'd have to tell them all with separate interrupts.  Perhaps
      we could use a virtio channel or an ACPI event or something?

> >>>> > It would be great if we could avoid changing the guest; but at least
> >>>> > your guest driver changes don't actually seem to be that hardware
> >>>> > specific; could your changes actually be moved to the generic PCI
> >>>> > level so they could be made to work for lots of drivers?
> >>>
> >>> It is impossible to use one common solution for all devices unless the
> >>> PCIe spec documents it clearly, and I think one day it will be there.
> >>> But before that, we need some workarounds in the guest driver to make it
> >>> work, even if it looks ugly.
> 
> Yes, so far there is no hardware migration support, and it's hard to modify
> bus-level code. It would also block an implementation on Windows.

Well, there was agraf's trick, although that's a lot more complicated at the
qemu level, but it should work with no guest modifications.  Michael's point
about dirty page tracking is neat; I think that simplifies it a bit if it can
track dirty pages.

Dave

> >Dave
> >
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


