From: Tian, Kevin
Subject: Re: [Qemu-devel] [RFC 5/5] vifo: introduce new VFIO ioctl VFIO_DEVICE_PCI_GET_DIRTY_BITMAP
Date: Wed, 28 Jun 2017 06:04:10 +0000

> From: Alex Williamson [mailto:address@hidden
> Sent: Wednesday, June 28, 2017 3:45 AM
> 
> On Tue, 27 Jun 2017 08:56:01 +0000
> "Zhang, Yulei" <address@hidden> wrote:
> 
> > > -----Original Message-----
> > > From: Qemu-devel [mailto:qemu-devel-address@hidden] On Behalf Of Alex Williamson
> > > Sent: Tuesday, June 27, 2017 4:19 AM
> > > To: Zhang, Yulei <address@hidden>
> > > Cc: Tian, Kevin <address@hidden>; address@hidden;
> > > address@hidden; address@hidden; Zheng, Xiao
> > > <address@hidden>; Wang, Zhi A <address@hidden>
> > > Subject: Re: [Qemu-devel] [RFC 5/5] vifo: introduce new VFIO ioctl
> > > VFIO_DEVICE_PCI_GET_DIRTY_BITMAP
> > >
> > > On Tue,  4 Apr 2017 18:28:04 +0800
> > > Yulei Zhang <address@hidden> wrote:
> > >
> > > > New VFIO ioctl VFIO_DEVICE_PCI_GET_DIRTY_BITMAP is used to sync the
> > > > pci device dirty pages during the migration.
> > >
> > > If this needs to exist, it needs a lot more documentation.  Why is this
> > > a PCI specific device ioctl?  Couldn't any vfio device need this?
> > >
> > > > Signed-off-by: Yulei Zhang <address@hidden>
> > > > ---
> > > >  hw/vfio/pci.c              | 32 ++++++++++++++++++++++++++++++++
> > > >  hw/vfio/pci.h              |  2 ++
> > > >  linux-headers/linux/vfio.h | 14 ++++++++++++++
> > > >  3 files changed, 48 insertions(+)
> > > >
> > > > diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
> > > > index 833cd90..64c851f 100644
> > > > --- a/hw/vfio/pci.c
> > > > +++ b/hw/vfio/pci.c
> > > > @@ -32,6 +32,7 @@
> > > >  #include "pci.h"
> > > >  #include "trace.h"
> > > >  #include "qapi/error.h"
> > > > +#include "exec/ram_addr.h"
> > > >
> > > >  #define MSIX_CAP_LENGTH 12
> > > >
> > > > @@ -39,6 +40,7 @@ static void vfio_disable_interrupts(VFIOPCIDevice *vdev);
> > > >  static void vfio_mmap_set_enabled(VFIOPCIDevice *vdev, bool enabled);
> > > >  static VMStateDescription vfio_pci_vmstate;
> > > >  static void vfio_vm_change_state_handler(void *pv, int running, RunState state);
> > > > +static void vfio_log_sync(MemoryListener *listener, MemoryRegionSection *section);
> > > >
> > > >  /*
> > > >   * Disabling BAR mmaping can be slow, but toggling it around INTx can
> > > > @@ -2869,6 +2871,11 @@ static void vfio_realize(PCIDevice *pdev, Error **errp)
> > > >      vfio_setup_resetfn_quirk(vdev);
> > > >      qemu_add_vm_change_state_handler(vfio_vm_change_state_handler, vdev);
> > > >
> > > > +    vdev->vfio_memory_listener = (MemoryListener) {
> > > > +           .log_sync = vfio_log_sync,
> > > > +    };
> > > > +    memory_listener_register(&vdev->vfio_memory_listener, &address_space_memory);
> > > > +
> > > >      return;
> > > >
> > > >  out_teardown:
> > > > @@ -2964,6 +2971,7 @@ static void vfio_vm_change_state_handler(void *pv, int running, RunState state)
> > > >      if (ioctl(vdev->vbasedev.fd, VFIO_DEVICE_PCI_STATUS_SET, vfio_status)) {
> > > >          error_report("vfio: Failed to %s device\n", running ? "start" : "stop");
> > > >      }
> > > > +    vdev->device_stop = running ? false : true;
> > > >      g_free(vfio_status);
> > > >  }
> > > >
> > > > @@ -3079,6 +3087,30 @@ static int vfio_device_get(QEMUFile *f, void *pv, size_t size, VMStateField *fie
> > > >      return 0;
> > > >  }
> > > >
> > > > +static void vfio_log_sync(MemoryListener *listener, MemoryRegionSection *section)
> > > > +{
> > > > +    VFIOPCIDevice *vdev = container_of(listener, struct VFIOPCIDevice, vfio_memory_listener);
> > > > +
> > > > +    if (vdev->device_stop) {
> > > > +        struct vfio_pci_get_dirty_bitmap *d;
> > > > +        ram_addr_t size = int128_get64(section->size);
> > > > +        unsigned long page_nr = size >> TARGET_PAGE_BITS;
> > > > +        unsigned long bitmap_size = (BITS_TO_LONGS(page_nr) + 1) * sizeof(unsigned long);
> > > > +        d = g_malloc0(sizeof(*d) + bitmap_size);
> > > > +        d->start_addr = section->offset_within_address_space;
> > > > +        d->page_nr = page_nr;
> > > > +
> > > > +        if (ioctl(vdev->vbasedev.fd, VFIO_DEVICE_PCI_GET_DIRTY_BITMAP, d)) {
> > > > +            error_report("vfio: Failed to fetch dirty pages for migration\n");
> > > > +            goto exit;
> > > > +        }
> > > > +        cpu_physical_memory_set_dirty_lebitmap((unsigned long*)&d->dirty_bitmap, d->start_addr, d->page_nr);
> > > > +
> > > > +exit:
> > > > +        g_free(d);
> > > > +    }
> > > > +}
> > > > +
> > > >  static void vfio_instance_init(Object *obj)
> > > >  {
> > > >      PCIDevice *pci_dev = PCI_DEVICE(obj);
> > > > diff --git a/hw/vfio/pci.h b/hw/vfio/pci.h
> > > > index bd98618..984391d 100644
> > > > --- a/hw/vfio/pci.h
> > > > +++ b/hw/vfio/pci.h
> > > > @@ -144,6 +144,8 @@ typedef struct VFIOPCIDevice {
> > > >      bool no_kvm_intx;
> > > >      bool no_kvm_msi;
> > > >      bool no_kvm_msix;
> > > > +    bool device_stop;
> > > > +    MemoryListener vfio_memory_listener;
> > > >  } VFIOPCIDevice;
> > > >
> > > >  uint32_t vfio_pci_read_config(PCIDevice *pdev, uint32_t addr, int len);
> > > > diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
> > > > index fa17848..aa73ee1 100644
> > > > --- a/linux-headers/linux/vfio.h
> > > > +++ b/linux-headers/linux/vfio.h
> > > > @@ -502,6 +502,20 @@ struct vfio_pci_status_set{
> > > >
> > > >  #define VFIO_DEVICE_PCI_STATUS_SET     _IO(VFIO_TYPE, VFIO_BASE + 14)
> > > >
> > > > +/**
> > > > + * VFIO_DEVICE_PCI_GET_DIRTY_BITMAP - _IOW(VFIO_TYPE, VFIO_BASE + 15,
> > > > + *                                 struct vfio_pci_get_dirty_bitmap)
> > > > + *
> > > > + * Return: 0 on success, -errno on failure.
> > > > + */
> > > > +struct vfio_pci_get_dirty_bitmap{
> > > > +       __u64          start_addr;
> > > > +       __u64          page_nr;
> > > > +       __u8           dirty_bitmap[];
> > > > +};
> > > > +
> > > > +#define VFIO_DEVICE_PCI_GET_DIRTY_BITMAP _IO(VFIO_TYPE, VFIO_BASE + 15)
> > > > +
> > >
> > > Dirty since when?  Since the last time we asked?  Since the device was
> > > stopped?  Why is anything dirtied after the device is stopped?  Is this
> > > any pages the device has ever touched?  Thanks,
> > >
> > > Alex
> > Dirty since the device start operation and before it was stopped. We track
> > all the guest pages that the device was using before it was stopped, and
> > leverage this dirty bitmap for page sync during migration.
> 
> I don't understand how this is useful or efficient.  This implies that
> the device is always tracking dirtied pages even when we don't care
> about migration.  Don't we want to enable dirty logging at some point
> and track dirty pages since then?  Otherwise we can just assume the
> device dirties all pages and get rid of this ioctl.  Thanks,
> 

Agree. Regarding the interface definition, we'd better follow the general
dirty logging scheme as Alex pointed out, possibly through another
ioctl cmd to enable/disable logging. However, a vendor-specific
implementation may choose to ignore the cmd while always tracking
dirty pages, as on Intel Processor Graphics. Below is some background.
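
To make the idea concrete, here is a rough sketch of what such a generic
enable/disable ioctl could look like (purely hypothetical names, written in
the style of the definitions quoted above; this is not an existing VFIO
interface):

    struct vfio_device_dirty_log_control {
            __u32   argsz;
            __u32   flags;
    #define VFIO_DIRTY_LOG_START   (1 << 0)  /* begin tracking dirtied pages */
    #define VFIO_DIRTY_LOG_STOP    (1 << 1)  /* stop tracking and drop state */
    };
    #define VFIO_DEVICE_DIRTY_LOG_CONTROL   _IO(VFIO_TYPE, VFIO_BASE + 16)

A vendor driver that always tracks dirty pages (like Intel Processor
Graphics described below) could simply treat both flags as no-ops.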

CPU dirty logging is done through either CPU page faults or HW dirty
bit logging (e.g. Intel PML). However, there is a gap on the DMA side today.
DMA page faults require both IOMMU and device support (through
PCI ATS/PRS), which is not widely available today and is mostly for
special types of workloads (e.g. Shared Virtual Memory). Regarding
the dirty bit, at least VT-d doesn't support it today.

So the alternative option is to rely on the mediation layer to track the
dirty pages, since workload submissions on vGPU are mediated. It's
feasible for simple devices such as NICs, which have a clear definition
of descriptors, so it's easy to scan them and capture which pages will be
dirtied. However, doing the same thing for a complex GPU (meaning
scanning all GPU commands, shader instructions, indirect structures,
etc.) is way too complex and insufficient. Today we only scan
privileged commands for security purposes, which are only a very
small portion of all possible cmds.

Then in reality we chose a simplified approach: instead of tracking
incrementally dirtied pages since the last query, we treat all pages which
are currently mapped in the GPU page tables as dirtied. To avoid the
overhead of walking the global page table (GGTT) and all active per-process
page tables (PPGTTs) upon each query, we choose to always maintain a bitmap
which is updated when mediating guest updates to those GTT entries.
It adds negligible overhead at run-time since those operations are
already mediated. A rough sketch of that update path is below.
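
For illustration only (the struct, field and function names here are made
up, not taken from the actual GVT-g code): each vGPU keeps a bitmap indexed
by guest page frame number, and the existing GTT write-trap handler sets a
bit whenever the guest maps a page into a GPU page table. Answering a
VFIO_DEVICE_PCI_GET_DIRTY_BITMAP query is then just a copy of this bitmap.

    /* Hypothetical sketch of the bookkeeping described above. */
    struct vgpu_dirty_track {
            unsigned long *bitmap;      /* one bit per guest page frame */
            unsigned long  max_gfn;     /* size of the tracked range    */
    };

    /* Called from the (already mediated) GTT entry update path. */
    static void vgpu_note_gtt_map(struct vgpu_dirty_track *t, unsigned long gfn)
    {
            if (gfn < t->max_gfn)
                    set_bit(gfn, t->bitmap);   /* page now reachable by the GPU */
    }

Since guest GTT updates are trapped anyway, the extra set_bit() per update
is what keeps the run-time overhead negligible.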

Every time Qemu queries the dirty map, it will likely get
a similarly large dirty bitmap (not exactly the same, since the GPU page
tables are being changed), and it will therefore exit the iterative memory
copy very soon, which ends up like below:

1st round: Qemu copies all the memory (say 4GB) to the other machine
2nd round: Qemu queries the vGPU dirty map (usually several hundreds
of MBs) and combines it with the CPU dirty map to copy
3rd round: Qemu gets a similar amount of dirty pages and then exits the
pre-copy phase since the dirty set doesn't converge
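
As a rough sanity check on those sizes (assuming the 10Gb link mentioned
below sustains on the order of 1GB/s of payload): the first 4GB pass takes
a few seconds while the guest keeps running, and each later pass of a few
hundred MBs takes a few hundred ms. Because the vGPU portion stays at
roughly the same size on every pass, the final stop-and-copy is bounded by
that few-hundred-MB transfer, which is consistent with the ~300ms shutdown
time measured below.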

Although it's not that efficient, not having to stop the service for the
whole 4GB memory copy still saves a lot. In our measurements the service
shutdown time is ~300ms over a 10Gb link when running 3D benchmarks
(e.g. 3DMark, Heaven, etc.) and media transcoding workloads, while
copying the whole system memory may easily take seconds, long enough to
trigger a TDR. Though the service shutdown time is bigger than in the usual
server-based scenario, it's somewhat OK for interactive usages (e.g. VDI)
or offline transcoding usages. You may take a look at our demo at:

https://www.youtube.com/watch?v=y2SkU5JODIY

In a nutshell, our current dirty logging implementation is a bit
awkward due to arch limitations, but it does work well for some
scenarios. Most importantly, I agree we should design the interface
in a more general way to enable/disable dirty logging, as stated
earlier.

Hope the above makes the whole background clearer. :-)

Thanks
Kevin


