From: Peter Xu
Subject: Re: [PATCH v3 24/24] vfio/migration: Multifd device state transfer support - send side
Date: Mon, 16 Dec 2024 12:33:34 -0500
On Thu, Dec 12, 2024 at 11:53:05PM +0100, Maciej S. Szmigiero wrote:
> On 12.12.2024 15:54, Avihai Horon wrote:
> >
> > On 11/12/2024 1:06, Maciej S. Szmigiero wrote:
> > >
> > > On 9.12.2024 10:28, Avihai Horon wrote:
> > > >
> > > > On 17/11/2024 21:20, Maciej S. Szmigiero wrote:
> > > > >
> > > > > From: "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>
> > > > >
> > > > > Implement the multifd device state transfer via additional per-device
> > > > > thread inside save_live_complete_precopy_thread handler.
> > > > >
> > > > > Switch between doing the data transfer in the new handler and doing it
> > > > > in the old save_state handler depending on the
> > > > > x-migration-multifd-transfer device property value.
> > > > >
> > > > > Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
> > > > > ---
> > > > >  hw/vfio/migration.c  | 155 +++++++++++++++++++++++++++++++++++++++++++
> > > > >  hw/vfio/trace-events |   2 +
> > > > >  2 files changed, 157 insertions(+)
> > > > >
> > > > > diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> > > > > index b54879fe6209..8709672ada48 100644
> > > > > --- a/hw/vfio/migration.c
> > > > > +++ b/hw/vfio/migration.c
> > > > > @@ -771,6 +771,24 @@ static int vfio_save_setup(QEMUFile *f, void *opaque, Error **errp)
> > > > >      uint64_t stop_copy_size = VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE;
> > > > >      int ret;
> > > > >
> > > > > +    /*
> > > > > +     * Make a copy of this setting at the start in case it is changed
> > > > > +     * mid-migration.
> > > > > +     */
> > > > > +    if (vbasedev->migration_multifd_transfer == ON_OFF_AUTO_AUTO) {
> > > > > +        migration->multifd_transfer = vfio_multifd_transfer_supported();
> > > > > +    } else {
> > > > > +        migration->multifd_transfer =
> > > > > +            vbasedev->migration_multifd_transfer == ON_OFF_AUTO_ON;
> > > > > +    }
> > > > > +
> > > > > +    if (migration->multifd_transfer && !vfio_multifd_transfer_supported()) {
> > > > > +        error_setg(errp,
> > > > > +                   "%s: Multifd device transfer requested but unsupported in the current config",
> > > > > +                   vbasedev->name);
> > > > > +        return -EINVAL;
> > > > > +    }
> > > > > +
> > > > >      qemu_put_be64(f, VFIO_MIG_FLAG_DEV_SETUP_STATE);
> > > > >
> > > > >      vfio_query_stop_copy_size(vbasedev, &stop_copy_size);
> > > > > @@ -942,13 +960,32 @@ static int vfio_save_iterate(QEMUFile *f, void *opaque)
> > > > >      return !migration->precopy_init_size && !migration->precopy_dirty_size;
> > > > >  }
> > > > >
> > > > > +static void vfio_save_multifd_emit_dummy_eos(VFIODevice *vbasedev, QEMUFile *f)
> > > > > +{
> > > > > +    VFIOMigration *migration = vbasedev->migration;
> > > > > +
> > > > > +    assert(migration->multifd_transfer);
> > > > > +
> > > > > +    /*
> > > > > +     * Emit dummy NOP data on the main migration channel since the actual
> > > > > +     * device state transfer is done via multifd channels.
> > > > > +     */
> > > > > +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> > > > > +}
> > > > > +
> > > > >  static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> > > > >  {
> > > > >      VFIODevice *vbasedev = opaque;
> > > > > +    VFIOMigration *migration = vbasedev->migration;
> > > > >      ssize_t data_size;
> > > > >      int ret;
> > > > >      Error *local_err = NULL;
> > > > >
> > > > > +    if (migration->multifd_transfer) {
> > > > > +        vfio_save_multifd_emit_dummy_eos(vbasedev, f);
> > > > > +        return 0;
> > > > > +    }
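The tri-state property resolution in vfio_save_setup() above can be modeled in isolation roughly as follows. This is a sketch outside of QEMU: the helper names resolve_multifd_transfer() and multifd_transfer_config_valid() are made up for illustration, with only the enum mirroring QEMU's OnOffAuto:

```c
#include <stdbool.h>

/* Mirrors QEMU's OnOffAuto tri-state property values. */
typedef enum { ON_OFF_AUTO_AUTO, ON_OFF_AUTO_ON, ON_OFF_AUTO_OFF } OnOffAuto;

/* Resolve the per-device x-migration-multifd-transfer property once at
 * setup time: AUTO follows whatever the current config supports, while
 * ON and OFF force the choice explicitly. */
bool resolve_multifd_transfer(OnOffAuto prop, bool supported)
{
    if (prop == ON_OFF_AUTO_AUTO) {
        return supported;
    }
    return prop == ON_OFF_AUTO_ON;
}

/* An explicit ON request on a config that cannot do multifd device
 * state transfer is a setup-time error, mirroring the error_setg()
 * path in the patch. */
bool multifd_transfer_config_valid(OnOffAuto prop, bool supported)
{
    return !(resolve_multifd_transfer(prop, supported) && !supported);
}
```

Resolving the property once at setup and stashing the result in migration->multifd_transfer is what guarantees the rest of the migration uses a stable value even if the property is changed mid-migration.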
> > > >
> > > > I wonder whether we should add a .save_live_use_thread SaveVMHandler
> > > > through which a device can indicate whether it wants to save its data
> > > > with the async or the sync handler.
> > > > This would allow the migration layer (i.e.,
> > > > qemu_savevm_state_complete_precopy_iterable) to know which handler to
> > > > call, instead of calling both of them and letting each device decide
> > > > implicitly.
> > > > IMHO it would make the code clearer and allow us to drop
> > > > vfio_save_multifd_emit_dummy_eos().
> > >
> > > I think it's not worth adding a new SaveVMHandler just for this specific
> > > use case, considering that it's easy to handle inside the driver by
> > > emitting that FLAG_END_OF_STATE.
> > >
> > > Especially since, for compatibility with other drivers that do not
> > > define that hypothetical new SaveVMHandler, leaving it undefined would
> > > need to have the same effect as it always returning "false".
> >
> > We already have such handlers, like .is_active, .has_postcopy and
> > .is_active_iterate.
> > Since VFIO migration with multifd involves a lot of threads and convoluted
> > code paths, I thought this could put some order (even if small) into
> > things, especially if it allows us to avoid
> > vfio_save_multifd_emit_dummy_eos(), which feels a bit hackish.
> >
> > But anyway, that's only my opinion, and I can understand why this could be
> > seen as overkill.
>
> @Cedric, @Peter:
> what's your opinion here?
>
> Is it better to add a new "flag" SaveVMHandler or keep handling
> the multifd/non-multifd transfer difference in the VFIO driver
> by emitting VFIO_MIG_FLAG_END_OF_STATE in
> vfio_save_complete_precopy() and vfio_save_state()?
>
> Note that this new "flag" SaveVMHandler would need to have
> semantics of disabling both save_live_complete_precopy and
> save_state handlers and enabling save_live_complete_precopy_thread
> instead.
If it's about adding one more global vmstate hook (even if only used in
vfio), only to conditionally disable two other random vmstate hooks, then
it isn't a very attractive idea to me, indeed.

PS: when I look at is_active (which is only used by two sites but needs to
be invoked in literally all the rest of the hooks, and I doubt whether it
can change during migration at all..), or has_postcopy (which is weird in
another way, e.g., should we simply forbid pmem+postcopy setups upfront?),
I doubt whether they were the best solution for those problems.. but those
are separate questions to ask.
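For reference, the dispatch such a hypothetical flag hook would imply could be sketched like this. None of these names exist in QEMU: SaveVMHandlers here is a cut-down stand-in with only the hooks under discussion, and savevm_complete_precopy_one() is invented for the sketch. A device that leaves the hook undefined gets the sync path, matching the compatibility requirement Maciej mentioned:

```c
#include <stdbool.h>
#include <stddef.h>

/* Cut-down stand-in for QEMU's SaveVMHandlers; only the hooks relevant
 * to this discussion are modeled. */
typedef struct {
    /* Hypothetical flag hook: NULL is treated as "always false". */
    bool (*save_live_use_thread)(void *opaque);
    int (*save_live_complete_precopy)(void *opaque);        /* sync path */
    int (*save_live_complete_precopy_thread)(void *opaque); /* multifd path */
} SaveVMHandlers;

/* Completion-phase dispatch: ask the device which path it wants instead
 * of calling both handlers and letting the unused one emit a dummy
 * END_OF_STATE marker. */
int savevm_complete_precopy_one(const SaveVMHandlers *ops, void *opaque)
{
    bool use_thread = ops->save_live_use_thread &&
                      ops->save_live_use_thread(opaque);

    if (use_thread) {
        return ops->save_live_complete_precopy_thread(opaque);
    }
    return ops->save_live_complete_precopy(opaque);
}

/* Demo callbacks: a device that opts into the threaded path, and a
 * legacy device that leaves the flag hook undefined. */
bool demo_use_thread(void *opaque) { (void)opaque; return true; }
int demo_sync(void *opaque) { (void)opaque; return 1; }
int demo_thread(void *opaque) { (void)opaque; return 2; }

const SaveVMHandlers demo_threaded = {
    .save_live_use_thread = demo_use_thread,
    .save_live_complete_precopy = demo_sync,
    .save_live_complete_precopy_thread = demo_thread,
};
const SaveVMHandlers demo_legacy = {
    .save_live_complete_precopy = demo_sync,
    .save_live_complete_precopy_thread = demo_thread,
};
```

The NULL-means-false default is the same convention QEMU already uses for optional hooks such as .is_active, which is what makes the compatibility story straightforward if this direction were ever taken.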
Thanks,
--
Peter Xu