qemu-devel
From: Peter Xu
Subject: Re: [PATCH v3 16/24] migration/multifd: Send final SYNC only after device state is complete
Date: Thu, 5 Dec 2024 14:02:37 -0500

On Tue, Nov 26, 2024 at 10:22:42PM +0100, Maciej S. Szmigiero wrote:
> On 26.11.2024 21:52, Fabiano Rosas wrote:
> > "Maciej S. Szmigiero" <mail@maciej.szmigiero.name> writes:
> > 
> > > From: "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>
> > > 
> > > Currently, ram_save_complete() sends a final SYNC multifd packet near
> > > the end of the function, after sending all of the remaining RAM data.
> > > 
> > > On the receive side, this SYNC packet will cause multifd channel threads
> > > to block, waiting for the final sem_sync posting in
> > > multifd_recv_terminate_threads().
> > > 
> > > However, multifd_recv_terminate_threads() won't be called until the
> > > migration is complete, which causes a problem if multifd channels are
> > > still required for transferring device state data after RAM transfer is
> > > complete but before finishing the migration process.
> > > 
> > > If device state transfer is possible, defer sending the final SYNC
> > > packet until the end of the post-switchover iterable data transfer
> > > instead.
> > > 
> > > Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
> > 
> > Reviewed-by: Fabiano Rosas <farosas@suse.de>
> > 
> > I wonder whether we could just defer the sync for the !device_state case
> > as well.
> > 
> 
> AFAIK this should work; I just wanted to be extra cautious about bit
> stream timing changes in case there is, for example, some race in an
> older QEMU version.

I see the issue, but maybe we don't even need this patch...

When I was working on commit 637280aeb2 previously, I forgot that the SYNC
messages go together with the FLUSH, which got removed.  It means that in
complete() we now always send SYNCs, but always without FLUSH.

On new binaries, it means the SYNCs are not collected properly by the
destination threads, so all the threads there will hang.

So yeah, at least for that part I'm the one to blame...

I think maybe VFIO doesn't need to change the generic sync path, because
logically speaking VFIO can also use multifd_send_sync_main() in its own
complete() hook to flush everything.  The trick here is that such a sync
doesn't need to be attached to any wire message (neither SYNC nor FLUSH,
which only RAM uses).  The sync is about "sync against all sender threads",
just like what we do with mapped-ram.  Mapped-ram tricked that path with a
use_packet check in the sender thread; for VFIO, however, we could instead
expose a new parameter to multifd_send_sync_main() saying "let's only sync
the threads".

I sent two small patches here:

https://lore.kernel.org/r/20241205185303.897010-1-peterx@redhat.com

The 1st patch should fix the SYNC message hang introduced by my commit
637280aeb2.  The 2nd patch introduces the flag I mentioned.  I think after
VFIO should be able to sync directly with:

  multifd_send_sync_main(MULTIFD_SYNC_THREADS);

Then maybe we don't need this patch anymore.  Please have a look.

PS: the two patches could already be ready to merge even before VFIO, if
they're properly reviewed and acked.

Thanks,

-- 
Peter Xu



