Re: [PATCH 5/5] multifd: Only sync once each full round of memory
From: Leonardo Brás
Subject: Re: [PATCH 5/5] multifd: Only sync once each full round of memory
Date: Thu, 30 Jun 2022 23:29:28 -0300
User-agent: Evolution 3.44.2
Hello Juan,
On Tue, 2022-06-21 at 16:05 +0200, Juan Quintela wrote:
> We need to add a new flag to mean to sync at that point.
> Notice that we still synchronize at the end of setup and at the end of
> complete stages.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> migration/migration.c | 2 +-
> migration/ram.c | 42 ++++++++++++++++++++++++++++++------------
> 2 files changed, 31 insertions(+), 13 deletions(-)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index 3f79df0b70..6627787fc2 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -4283,7 +4283,7 @@ static Property migration_properties[] = {
> DEFAULT_MIGRATE_ANNOUNCE_STEP),
> /* We will change to false when we introduce the new mechanism */
> DEFINE_PROP_BOOL("multifd-sync-each-iteration", MigrationState,
> - multifd_sync_each_iteration, true),
> + multifd_sync_each_iteration, false),
>
> /* Migration capabilities */
> DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> diff --git a/migration/ram.c b/migration/ram.c
> index 2c7289edad..6792986565 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -81,6 +81,7 @@
> #define RAM_SAVE_FLAG_XBZRLE 0x40
> /* 0x80 is reserved in migration.h start with 0x100 next */
> #define RAM_SAVE_FLAG_COMPRESS_PAGE 0x100
> +#define RAM_SAVE_FLAG_MULTIFD_SYNC 0x200
>
> XBZRLECacheStats xbzrle_counters;
>
> @@ -1482,6 +1483,7 @@ retry:
> * associated with the search process.
> *
> * Returns:
> + * <0: An error happened
> * 0: no page found, give up
> * 1: no page found, retry
> * 2: page found
> @@ -1510,6 +1512,13 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
> pss->page = 0;
> pss->block = QLIST_NEXT_RCU(pss->block, next);
> if (!pss->block) {
> + if (!migrate_multifd_sync_each_iteration()) {
> + int ret = multifd_send_sync_main(rs->f);
> + if (ret < 0) {
> + return ret;
> + }
> + qemu_put_be64(rs->f, RAM_SAVE_FLAG_MULTIFD_SYNC);
> + }
> /*
> * If memory migration starts over, we will meet a dirtied page
> * which may still exists in compression threads's ring, so we
> @@ -2273,7 +2282,8 @@ static int ram_find_and_save_block(RAMState *rs)
> if (!get_queued_page(rs, &pss)) {
> /* priority queue empty, so just search for something dirty */
> int res = find_dirty_block(rs, &pss);
> - if (res == 0) {
> + if (res <= 0) {
> + pages = res;
> break;
> } else if (res == 1)
> continue;
> @@ -2943,11 +2953,13 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
> ram_control_before_iterate(f, RAM_CONTROL_SETUP);
> ram_control_after_iterate(f, RAM_CONTROL_SETUP);
>
> - if (migrate_multifd_sync_each_iteration()) {
> - ret = multifd_send_sync_main(f);
> - if (ret < 0) {
> - return ret;
> - }
(1) IIUC, the above lines were changed in 2/5 only to be reverted now.
Is that correct? Was it expected?
> + ret = multifd_send_sync_main(f);
> + if (ret < 0) {
> + return ret;
> + }
> +
> + if (!migrate_multifd_sync_each_iteration()) {
> + qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
(2) I have done some testing with this patchset (because of MSG_ZEROCOPY), and it
seems this part breaks migration from this build to 'older' builds
(same commits except for this patchset):
qemu-system-x86_64: Unknown combination of migration flags: 0x200
qemu-system-x86_64: error while loading state section id 2(ram)
qemu-system-x86_64: load of migration failed: Invalid argument
Which makes sense, since RAM_SAVE_FLAG_MULTIFD_SYNC does not exist in older
versions. Is this expected/desired?
Strangely enough, it seems to break even with this set on the sending side:
--global migration.multifd-sync-each-iteration=on
Was the idea of this config to allow migration to older qemu builds?
> }
> qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> qemu_fflush(f);
> @@ -3127,13 +3139,14 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
> return ret;
> }
>
> - if (migrate_multifd_sync_each_iteration()) {
> - ret = multifd_send_sync_main(rs->f);
> - if (ret < 0) {
> - return ret;
> - }
> + ret = multifd_send_sync_main(rs->f);
> + if (ret < 0) {
> + return ret;
> }
(3) Same as (1)
>
> + if (migrate_multifd_sync_each_iteration()) {
> + qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
> + }
> qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> qemu_fflush(f);
>
> @@ -3800,7 +3813,9 @@ int ram_load_postcopy(QEMUFile *f)
> }
> decompress_data_with_multi_threads(f, page_buffer, len);
> break;
> -
> + case RAM_SAVE_FLAG_MULTIFD_SYNC:
> + multifd_recv_sync_main();
> + break;
> case RAM_SAVE_FLAG_EOS:
> /* normal exit */
> if (migrate_multifd_sync_each_iteration()) {
> @@ -4079,6 +4094,9 @@ static int ram_load_precopy(QEMUFile *f)
> break;
> }
> break;
> + case RAM_SAVE_FLAG_MULTIFD_SYNC:
> + multifd_recv_sync_main();
> + break;
> case RAM_SAVE_FLAG_EOS:
> /* normal exit */
> if (migrate_multifd_sync_each_iteration()) {