From: Tian, Kevin
Subject: Re: [Qemu-devel] [RFC V2 0/4] vfio: Introduce Live migration capability to vfio_mdev device
Date: Mon, 31 Jul 2017 06:54:59 +0000

> From: Zhang, Yulei
> Sent: Tuesday, May 9, 2017 3:59 PM
> 
> Summary
> 
> This RFC series introduces the live migration capability for
> vfio_mdev devices.
> 
> As vfio_mdev devices currently don't support migration, we introduce
> a new vfio subtype region,
> VFIO_REGION_SUBTYPE_INTEL_IGD_DEVICE_STATE,
> for the Intel vGPU device. During vfio device initialization, the
> mdev device is marked migratable if the new region exists.
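> 
> The region is discovered through the existing region-info capability
> chain. A minimal probing sketch (pairing the new subtype with
> VFIO_REGION_TYPE_PCI_VENDOR_SPECIFIC is an assumption here, not
> something this cover letter states):
> 
> /*
>  * Illustrative sketch only: probe a vfio device region for the
>  * device-state subtype proposed by this series.  The capability-chain
>  * walk uses the existing VFIO_DEVICE_GET_REGION_INFO /
>  * VFIO_REGION_INFO_CAP_TYPE interface; the type value checked below
>  * is an assumption.
>  */
> #include <stdbool.h>
> #include <stdlib.h>
> #include <sys/ioctl.h>
> #include <linux/vfio.h>
> 
> static bool region_is_device_state(int device_fd, unsigned int index)
> {
>     struct vfio_region_info *info = calloc(1, sizeof(*info));
>     struct vfio_info_cap_header *hdr;
>     bool found = false;
> 
>     info->argsz = sizeof(*info);
>     info->index = index;
>     if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, info)) {
>         goto out;
>     }
>     if (info->argsz > sizeof(*info)) {
>         /* capability chain present: fetch the full region info */
>         info = realloc(info, info->argsz);
>         if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, info)) {
>             goto out;
>         }
>     }
>     if (!(info->flags & VFIO_REGION_INFO_FLAG_CAPS)) {
>         goto out;
>     }
>     for (hdr = (void *)info + info->cap_offset; ;
>          hdr = (void *)info + hdr->next) {
>         if (hdr->id == VFIO_REGION_INFO_CAP_TYPE) {
>             struct vfio_region_info_cap_type *cap = (void *)hdr;
> 
>             found = cap->type == VFIO_REGION_TYPE_PCI_VENDOR_SPECIFIC &&
>                     cap->subtype ==
>                         VFIO_REGION_SUBTYPE_INTEL_IGD_DEVICE_STATE;
>             break;
>         }
>         if (!hdr->next) {
>             break;
>         }
>     }
> out:
>     free(info);
>     return found;
> }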

Looking at your series, there is really nothing specific to vGPU or
even Intel vGPU regarding device state save/restore...

> 
> The intention of adding the new region is to use it for vfio_mdev
> device state save and restore during migration. Accesses to this
> region are trapped and forwarded to the vfio_mdev device driver, and
> we use the first byte of the new region to control the running state
> of the mdev device.
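> 
> With that layout, stopping or restarting the device from user space
> is a one-byte write at the start of the subregion. A minimal sketch,
> assuming a simple stop/run encoding that this cover letter does not
> actually define:
> 
> #include <stdint.h>
> #include <unistd.h>
> 
> /* Placeholder encoding: the real values are defined by the mdev
>  * vendor driver in this series, not spelled out in this cover letter. */
> #define MDEV_DEV_STOPPED 0
> #define MDEV_DEV_RUNNING 1
> 
> /*
>  * device_fd:     the vfio device file descriptor
>  * region_offset: file offset of the device-state subregion, taken
>  *                from VFIO_DEVICE_GET_REGION_INFO
>  */
> static int mdev_set_run_state(int device_fd, uint64_t region_offset,
>                               uint8_t state)
> {
>     /* The access is trapped and forwarded to the mdev device driver. */
>     return pwrite(device_fd, &state, 1, region_offset) == 1 ? 0 : -1;
> }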
> 
> Meanwhile we add one new ioctl, VFIO_IOMMU_GET_DIRTY_BITMAP, to help
> with mdev device dirty page synchronization.
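> 
> The argument layout of the new ioctl lives in the kernel side of the
> series; purely as an illustration (the structure below is a guess at
> a plausible layout, not the series' definition), querying the
> container could look like:
> 
> #include <stdint.h>
> #include <stdlib.h>
> #include <sys/ioctl.h>
> #include <linux/vfio.h>   /* with this series' header update applied */
> 
> /* HYPOTHETICAL layout -- the real structure is defined by the kernel
>  * patches that add VFIO_IOMMU_GET_DIRTY_BITMAP and is not reproduced
>  * in this cover letter. */
> struct dirty_bitmap_request {
>     uint32_t argsz;
>     uint32_t flags;
>     uint64_t start_addr;     /* guest physical start address       */
>     uint64_t page_nr;        /* number of pages to query           */
>     uint8_t  dirty_bitmap[]; /* one bit per page, filled by kernel */
> };
> 
> static struct dirty_bitmap_request *
> query_dirty_bitmap(int container_fd, uint64_t start, uint64_t pages)
> {
>     size_t sz = sizeof(struct dirty_bitmap_request) + (pages + 7) / 8;
>     struct dirty_bitmap_request *req = calloc(1, sz);
> 
>     req->argsz = sz;
>     req->start_addr = start;
>     req->page_nr = pages;
>     if (ioctl(container_fd, VFIO_IOMMU_GET_DIRTY_BITMAP, req)) {
>         free(req);
>         return NULL;   /* caller would treat all pages as dirty */
>     }
>     return req;        /* caller frees after syncing its dirty log */
> }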
> 
> So the vfio_mdev device migration sequence would be as follows (a
> rough QEMU-side sketch of the source side follows the diagram):
> Source VM side:
>                       start migration
>                               |
>                               V
>                get the vm state change callback, write to the
>                subregion's first byte to stop the mdev device
>                               |
>                               V
>                query the dirty page bitmap from the iommu container
>                and add it into the qemu dirty list for synchronization
>                               |
>                               V
>                save the device state into the QEMUFile, which is
>                      read from the vfio device subregion
> 
> Target VM side:
>                  restore the mdev device after getting the
>                    saved state context from the QEMUFile
>                               |
>                               V
>                    get the vm state change callback,
>                    write to the subregion's first byte to
>                       start the mdev device and put it in
>                       running state
>                               |
>                               V
>                       finish migration
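> 
> Put together, the source-side steps map onto QEMU roughly as in the
> sketch below (type and field names are placeholders for what the
> patches actually add; error handling and the run-state encoding are
> assumed as before):
> 
> /* QEMU includes omitted: qemu/osdep.h, sysemu/sysemu.h and the
>  * QEMUFile/migration headers of the tree this series is based on. */
> 
> /* Hypothetical per-device bookkeeping, standing in for the fields
>  * the series adds to the vfio PCI device state. */
> typedef struct {
>     int      device_fd;      /* vfio device fd                       */
>     uint64_t region_offset;  /* offset of the device-state subregion */
>     uint64_t region_size;    /* size of the device-state subregion   */
> } MdevMigState;
> 
> /* VM state change callback: stops the mdev device by writing the
>  * subregion's first byte when the VM is stopped for migration. */
> static void mdev_vm_state_change(void *opaque, int running,
>                                  RunState state)
> {
>     MdevMigState *m = opaque;
>     uint8_t run = running ? 1 : 0;   /* placeholder encoding */
> 
>     pwrite(m->device_fd, &run, 1, m->region_offset);
> }
> 
> /* Streams the device state, read back from the subregion, into the
>  * migration stream. */
> static void mdev_save_state(QEMUFile *f, MdevMigState *m)
> {
>     uint8_t *buf = g_malloc(m->region_size);
> 
>     pread(m->device_fd, buf, m->region_size, m->region_offset);
>     qemu_put_buffer(f, buf, m->region_size);
>     g_free(buf);
> }
> 
> The handler would be registered with
> qemu_add_vm_change_state_handler() at device realize time, matching
> the vm status change callback added in patch 2.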
> 
> V1->V2:
> Per Alex's suggestion:
> 1. use a device subtype region instead of a fixed VFIO PCI region.
> 2. remove the unnecessary ioctl; use the first byte of the subregion
>    to control the running state of the mdev device.
> 3. for dirty page synchronization, implement the interface on the
>    VFIOContainer instead of the vfio pci device.
> 
> Yulei Zhang (4):
>   vfio: introduce a new VFIO sub region for mdev device migration
>     support
>   vfio: Add vm status change callback to stop/restart the mdev device
>   vfio: Add struct vfio_vmstate_info to introduce put/get callback
>     function for vfio device status save/restore
>   vfio: introduce new VFIO ioctl VFIO_IOMMU_GET_DIRTY_BITMAP
> 
>  hw/vfio/common.c              |  32 +++++++++
>  hw/vfio/pci.c                 | 164 +++++++++++++++++++++++++++++++++++++++++-
>  hw/vfio/pci.h                 |   1 +
>  include/hw/vfio/vfio-common.h |   1 +
>  linux-headers/linux/vfio.h    |  26 ++++++-
>  5 files changed, 220 insertions(+), 4 deletions(-)
> 
> --
> 2.7.4



