From: Peter Xu
Subject: [PATCH v3 7/7] migration/multifd: Document the reason to sync for save_setup()
Date: Fri, 6 Dec 2024 17:47:55 -0500
It's not straightforward to see why the src QEMU needs to sync multifd
during the setup() phase. After all, there's no page queued at that point.

For old QEMUs there's a solid reason: the EOS message requires the sync to
work. For new QEMUs, which no longer treat the EOS message as a sync
request, the reason is not obvious; one only finds out when the sync is
conditionally removed (in fact, the author did try that). Logically we
could still skip the sync on new machine types, but that would need a
separate compat field, which is overkill for such trivial overhead in the
setup() phase.

Let's instead document it properly, so that nobody tries this again and
has to redo the debugging, and nobody is left wondering why the sync ever
existed.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/ram.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
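
As a side note on the compat-field alternative mentioned in the commit
message: gating the sync per machine type could have looked roughly like
the stand-alone sketch below. Everything in it is invented for
illustration (machine_compat, migrate_multifd_setup_sync(),
ram_save_setup_sketch()); none of it is QEMU code.

/* Toy model of the rejected compat-field approach.  All names are
 * hypothetical; this only shows the shape of the extra knob the
 * commit message decided was not worth carrying. */
#include <stdbool.h>
#include <stdio.h>

struct machine_compat {
    bool multifd_setup_sync;    /* would be true on old machine types */
};

static bool migrate_multifd_setup_sync(const struct machine_compat *c)
{
    return c->multifd_setup_sync;
}

static void ram_save_setup_sketch(const struct machine_compat *c)
{
    /* ... multifd channel setup elided ... */
    if (migrate_multifd_setup_sync(c)) {
        /* legacy path: EOS needs the per-channel SYNC messages */
        printf("multifd_ram_flush_and_sync()\n");
    } else {
        /* new machine types could skip the sync entirely */
        printf("skip setup-phase sync\n");
    }
}

int main(void)
{
    struct machine_compat legacy = { .multifd_setup_sync = true };
    struct machine_compat modern = { .multifd_setup_sync = false };

    ram_save_setup_sketch(&legacy);
    ram_save_setup_sketch(&modern);
    return 0;
}

The patch keeps the unconditional sync instead: the cost is paid while
the VM is still running, so it never shows up in downtime, and no extra
compat field has to be carried forever.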
diff --git a/migration/ram.c b/migration/ram.c
index 5d4bdefe69..e5c590b259 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3036,6 +3036,31 @@ static int ram_save_setup(QEMUFile *f, void *opaque, Error **errp)
migration_ops->ram_save_target_page = ram_save_target_page_legacy;
}
+    /*
+     * This operation is unfortunate..
+     *
+     * For legacy QEMUs using per-section sync
+     * =======================================
+     *
+     * This must exist because the EOS below requires the SYNC messages
+     * per-channel to work.
+     *
+     * For modern QEMUs using per-round sync
+     * =====================================
+     *
+     * Logically such a sync is not needed, and recv threads should not
+     * run until setup is ready (using things like channels_ready on the
+     * src side).  Then we should be all fine.
+     *
+     * However, even if we add channels_ready to the recv side in new
+     * QEMUs, old QEMUs won't have it, so this sync is still needed to
+     * make sure multifd recv threads won't start processing guest pages
+     * before ram_load_setup() is properly done.
+     *
+     * Let's stick with this.  Fortunately, the overhead of syncing
+     * during setup is low because the VM is still running, so at least
+     * it is not accounted as part of downtime.
+     */
     bql_unlock();
     ret = multifd_ram_flush_and_sync(f);
     bql_lock();
--
2.47.0
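
The race the new comment describes can be modeled outside QEMU in a few
lines. Below is a toy pthread program (none of it is QEMU code; the
condition variable plays the role a channels_ready-style gate would play
on the recv side) showing why recv threads must not touch guest pages
before load-side setup is done:

/* Toy model of the ordering problem: receiver threads must not start
 * consuming page packets until load-side setup has completed.  Plain
 * pthreads; compile with: cc -pthread model.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool load_setup_done;

/* Stands in for a multifd recv thread.  The wait models the effect of
 * the setup-phase sync: no page processing until setup has finished. */
static void *recv_thread(void *arg)
{
    long channel = (long)arg;

    pthread_mutex_lock(&lock);
    while (!load_setup_done) {
        pthread_cond_wait(&cond, &lock);
    }
    pthread_mutex_unlock(&lock);

    printf("channel %ld: safe to process guest pages\n", channel);
    return NULL;
}

int main(void)
{
    enum { NCHANNELS = 4 };
    pthread_t th[NCHANNELS];

    for (long i = 0; i < NCHANNELS; i++) {
        pthread_create(&th[i], NULL, recv_thread, (void *)i);
    }

    usleep(1000);               /* stands in for ram_load_setup() */

    pthread_mutex_lock(&lock);
    load_setup_done = true;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);

    for (int i = 0; i < NCHANNELS; i++) {
        pthread_join(th[i], NULL);
    }
    return 0;
}

Remove the wait in recv_thread() and pages can be "processed" before
setup completes: exactly the failure mode the setup-phase flush-and-sync
guards against on QEMUs that have no such gate.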