From: Kirti Wankhede
Subject: Re: [Qemu-devel] [PATCH V3 0/4] vfio: Introduce Live migration capability to vfio_mdev device
Date: Mon, 5 Mar 2018 18:32:56 +0530

Hi Yulei Zhang,

This series is the same as the previous version, that is, there is no
pre-copy phase; it only takes care of the stop-and-copy phase.
As we discussed at KVM Forum 2017 in October, there should be
provision for a pre-copy phase.

Thanks,
Kirti

On 3/5/2018 11:30 AM, Yulei Zhang wrote:
> Summary
> 
> This RFC series would like to resume the discussion about how to
> introduce live migration capability to vfio mdev devices.
> 
> By adding a new vfio subtype region, VFIO_REGION_SUBTYPE_DEVICE_STATE,
> the mdev device will be marked as migratable if the new region exists
> during initialization.
> 
> The intention behind the new region is to use it for saving and
> restoring the mdev device state during migration. Accesses to this
> region are trapped and forwarded to the mdev device driver. The first
> byte of the new region also controls the running state of the mdev
> device, so during migration, after the mdev device is stopped, QEMU can
> retrieve the device state from this region and transfer it to the
> target VM side to restore the mdev device.
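
A minimal sketch of the stop/start write described above, assuming the first
byte uses 1 = running and 0 = stopped; the helper name and the VFIORegion
field are illustrative, not the series' actual code:

    /*
     * Hedged sketch: set the running state by writing the first byte of
     * the device-state subregion.  vdev->device_state is an assumed
     * VFIORegion field; 1 = running, 0 = stopped is the assumed contract.
     */
    static void vfio_mdev_set_running(VFIOPCIDevice *vdev, uint8_t running)
    {
        VFIORegion *region = &vdev->device_state; /* hypothetical field */

        if (pwrite(vdev->vbasedev.fd, &running, sizeof(running),
                   region->fd_offset) != sizeof(running)) {
            error_report("vfio: failed to set device running state to %u",
                         running);
        }
    }
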
> 
> In addition, we add one new ioctl, VFIO_IOMMU_GET_DIRTY_BITMAP, to help
> synchronize the mdev device's dirty pages during migration. Currently
> it only covers the static copy; in the future we would like to add a
> new interface for pre-copy.
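
A rough sketch of how the proposed ioctl might be driven from the
VFIOContainer side and fed into QEMU's dirty memory tracking; the struct
layout shown is only an assumption for illustration, the authoritative
definition is the series' linux-headers/linux/vfio.h change:

    /*
     * Hedged sketch, QEMU context (linux/vfio.h, sys/ioctl.h and
     * exec/ram_addr.h assumed included): query the device dirty bitmap
     * for a range and mark those pages dirty in QEMU.  The ioctl
     * argument layout below is assumed, not the series' definition.
     */
    struct vfio_iommu_get_dirty_bitmap {      /* assumed layout */
        __u32 argsz;
        __u32 flags;
        __u64 start_addr;
        __u64 page_nr;
        __u8  dirty_bitmap[];
    };

    static void vfio_sync_device_dirty_pages(VFIOContainer *container,
                                             hwaddr start, hwaddr size)
    {
        struct vfio_iommu_get_dirty_bitmap *db;
        uint64_t pages = size / qemu_real_host_page_size;
        size_t bitmap_size = DIV_ROUND_UP(pages, 8);

        db = g_malloc0(sizeof(*db) + bitmap_size);
        db->argsz = sizeof(*db) + bitmap_size;
        db->start_addr = start;
        db->page_nr = pages;

        if (ioctl(container->fd, VFIO_IOMMU_GET_DIRTY_BITMAP, db)) {
            error_report("vfio: failed to get dirty bitmap: %m");
        } else {
            /* feed the returned bitmap into QEMU's dirty page tracking */
            cpu_physical_memory_set_dirty_lebitmap(
                (unsigned long *)db->dirty_bitmap, start, pages);
        }
        g_free(db);
    }
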
> 
> Below is the vfio_mdev device migration sequence; a sketch of the VM
> state change callback wiring follows the diagram.
> Source VM side:
>                       start migration
>                               |
>                               V
>                get the VM state change callback, write to the
>                subregion's first byte to stop the mdev device
>                               |
>                               V
>                query the dirty page bitmap from the iommu container
>                and add it into QEMU's dirty list for synchronization
>                               |
>                               V
>                save the device state (read from the vfio device
>                      subregion) into QEMUFile
> 
> Target VM side:
>                  restore the mdev device after getting the
>                    saved state context from QEMUFile
>                               |
>                               V
>                    get the VM state change callback,
>                    write to the subregion's first byte to
>                       start the mdev device and put it in
>                       running state
>                               |
>                               V
>                       finish migration
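
The sequence on both sides hinges on the VM state change callback; a rough
sketch of that wiring, reusing the hypothetical vfio_mdev_set_running()
helper from the earlier sketch (names are illustrative, not the series'
actual code):

    /*
     * Hedged sketch: stop/start the mdev device when the VM changes
     * state, by writing the first byte of the device-state subregion
     * (assumed contract: 1 = running, 0 = stopped).
     */
    static void vfio_vm_change_state_handler(void *opaque, int running,
                                             RunState state)
    {
        VFIOPCIDevice *vdev = opaque;

        vfio_mdev_set_running(vdev, running ? 1 : 0);
    }

    /* registered once when the device is realized */
    qemu_add_vm_change_state_handler(vfio_vm_change_state_handler, vdev);
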
> 
> V2->V3:
> 1. Rebase the patches onto the QEMU stable 2.10 branch.
> 2. Use a common name for the subregion instead of one specific to
>    Intel IGD.
> 
> V1->V2:
> Per Alex's suggestion:
> 1. Use a device subtype region instead of a VFIO PCI fixed region.
> 2. Remove the unnecessary ioctl; use the first byte of the subregion
>    to control the running state of the mdev device.
> 3. For dirty page synchronization, implement the interface with
>    VFIOContainer instead of the vfio pci device.
> 
> Yulei Zhang (4):
>   vfio: introduce a new VFIO subregion for mdev device migration support
>   vfio: Add vm status change callback to stop/restart the mdev device
>   vfio: Add struct vfio_vmstate_info to introduce put/get callback
>     function for vfio device status save/restore
>   vfio: introduce new VFIO ioctl VFIO_IOMMU_GET_DIRTY_BITMAP
> 
>  hw/vfio/common.c              |  34 +++++++++
>  hw/vfio/pci.c                 | 171 +++++++++++++++++++++++++++++++++++++++++-
>  hw/vfio/pci.h                 |   1 +
>  include/hw/vfio/vfio-common.h |   1 +
>  linux-headers/linux/vfio.h    |  29 ++++++-
>  5 files changed, 232 insertions(+), 4 deletions(-)
> 


