From: Fabiano Rosas
Subject: Re: [PATCH 03/14] migration/multifd: Drop MultiFDSendParams.quit, cleanup error paths
Date: Wed, 31 Jan 2024 12:05:08 -0300
peterx@redhat.com writes:
> From: Peter Xu <peterx@redhat.com>
>
> Multifd send side has two fields to indicate error quits:
>
> - MultiFDSendParams.quit
> - &multifd_send_state->exiting
>
> Merge them into the global one: the replacement is done by changing all
> p->quit checks into checks of the global variable. The global check doesn't
> need any lock.
>
> A few more things done on top of this altogether:
>
> - multifd_send_terminate_threads()
>
> Move the xchg() of &multifd_send_state->exiting earlier, so that it covers
> the tracepoint, migrate_set_error() and migrate_set_state().
Good.
>
> - multifd_send_sync_main()
>
> In the 2nd loop, add one more check of the global variable to make sure we
> don't keep looping if QEMU has already decided to quit.
Yes, and also because we don't necessarily enter via multifd_send_pages()
every time.
>
> - multifd_tls_outgoing_handshake()
>
> Use multifd_send_terminate_threads() to set the error state. That has
> the added benefit of updating MigrationState.error too, so we can
> persist the first error we hit in that specific channel.
Makes sense.
>
> - multifd_new_send_channel_async()
>
> Take a similar approach to the above: drop the migrate_set_error() because
> multifd_send_terminate_threads() already covers it. Inline the helper
> multifd_new_send_channel_cleanup() along the way; it is not really needed.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
> migration/multifd.h | 2 --
> migration/multifd.c | 85 ++++++++++++++++++---------------------------
> 2 files changed, 33 insertions(+), 54 deletions(-)
>
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 35d11f103c..7c040cb85a 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -95,8 +95,6 @@ typedef struct {
> QemuMutex mutex;
> /* is this channel thread running */
> bool running;
> - /* should this thread finish */
> - bool quit;
> /* multifd flags for each packet */
> uint32_t flags;
> /* global number of generated multifd packets */
> diff --git a/migration/multifd.c b/migration/multifd.c
> index b8d2c96533..2c98023d67 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -372,6 +372,11 @@ struct {
> MultiFDMethods *ops;
> } *multifd_send_state;
>
> +static bool multifd_send_should_exit(void)
> +{
> + return qatomic_read(&multifd_send_state->exiting);
> +}
> +
> /*
> * The migration thread can wait on either of the two semaphores. This
> * function can be used to kick the main thread out of waiting on either of
> @@ -409,7 +414,7 @@ static int multifd_send_pages(void)
> MultiFDSendParams *p = NULL; /* make happy gcc */
> MultiFDPages_t *pages = multifd_send_state->pages;
>
> - if (qatomic_read(&multifd_send_state->exiting)) {
> + if (multifd_send_should_exit()) {
> return -1;
> }
>
> @@ -421,14 +426,11 @@ static int multifd_send_pages(void)
> */
> next_channel %= migrate_multifd_channels();
> for (i = next_channel;; i = (i + 1) % migrate_multifd_channels()) {
> - p = &multifd_send_state->params[i];
> -
> - qemu_mutex_lock(&p->mutex);
> - if (p->quit) {
> - error_report("%s: channel %d has already quit!", __func__, i);
> - qemu_mutex_unlock(&p->mutex);
> + if (multifd_send_should_exit()) {
> return -1;
> }
> + p = &multifd_send_state->params[i];
> + qemu_mutex_lock(&p->mutex);
> if (!p->pending_job) {
> p->pending_job++;
> next_channel = (i + 1) % migrate_multifd_channels();
Hm, I'm not sure it's correct to check 'exiting' outside of the
lock. While the read itself is atomic, it is not atomic in relation to
pending_job...
...looking closer, it seems we can do what you suggest, because
p->pending_job is not touched by multifd_send_thread in case of
error. This function may indeed miss the 'exiting' flag once, but since
pending_job > 0 it will loop to the next channel and only _then_ see
the 'exiting' flag.
> @@ -483,6 +485,16 @@ static void multifd_send_terminate_threads(Error *err)
> {
> int i;
>
> + /*
> + * We don't want to exit each threads twice. Depending on where
> + * we get the error, or if there are two independent errors in two
> + * threads at the same time, we can end calling this function
> + * twice.
> + */
> + if (qatomic_xchg(&multifd_send_state->exiting, 1)) {
> + return;
> + }
> +
> trace_multifd_send_terminate_threads(err != NULL);
>
> if (err) {
> @@ -497,26 +509,13 @@ static void multifd_send_terminate_threads(Error *err)
> }
> }
>
> - /*
> - * We don't want to exit each threads twice. Depending on where
> - * we get the error, or if there are two independent errors in two
> - * threads at the same time, we can end calling this function
> - * twice.
> - */
> - if (qatomic_xchg(&multifd_send_state->exiting, 1)) {
> - return;
> - }
> -
> for (i = 0; i < migrate_multifd_channels(); i++) {
> MultiFDSendParams *p = &multifd_send_state->params[i];
>
> - qemu_mutex_lock(&p->mutex);
> - p->quit = true;
Now that you removed this, we have decoupled kicking the threads from
setting the exit/error state, so this function could be split in two.
We could set the exiting flag where the error occurred (multifd
threads, thread creation, etc.) and "terminate the threads" at
multifd_save_cleanup(). That second part we already do, actually:
void multifd_save_cleanup(void)
{
    ...
    multifd_send_terminate_threads(NULL);    <--- see?

    for (i = 0; i < migrate_multifd_channels(); i++) {
        MultiFDSendParams *p = &multifd_send_state->params[i];

        if (p->running) {
            qemu_thread_join(&p->thread);
        }
    }
    ...
}
I think there's no reason anymore for the channels to kick each
other. They would all be waiting at p->sem and multifd_send_cleanup()
would kick + join them.
> qemu_sem_post(&p->sem);
> if (p->c) {
> qio_channel_shutdown(p->c, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
> }
> - qemu_mutex_unlock(&p->mutex);
> }
> }
>
> @@ -615,16 +614,13 @@ int multifd_send_sync_main(void)
> for (i = 0; i < migrate_multifd_channels(); i++) {
> MultiFDSendParams *p = &multifd_send_state->params[i];
>
> - trace_multifd_send_sync_main_signal(p->id);
> -
> - qemu_mutex_lock(&p->mutex);
> -
> - if (p->quit) {
> - error_report("%s: channel %d has already quit", __func__, i);
> - qemu_mutex_unlock(&p->mutex);
> + if (multifd_send_should_exit()) {
> return -1;
> }
>
> + trace_multifd_send_sync_main_signal(p->id);
> +
> + qemu_mutex_lock(&p->mutex);
> p->packet_num = multifd_send_state->packet_num++;
> p->flags |= MULTIFD_FLAG_SYNC;
> p->pending_job++;
> @@ -634,6 +630,10 @@ int multifd_send_sync_main(void)
> for (i = 0; i < migrate_multifd_channels(); i++) {
> MultiFDSendParams *p = &multifd_send_state->params[i];
>
> + if (multifd_send_should_exit()) {
> + return -1;
> + }
> +
> qemu_sem_wait(&multifd_send_state->channels_ready);
> trace_multifd_send_sync_main_wait(p->id);
> qemu_sem_wait(&p->sem_sync);
> @@ -671,7 +671,7 @@ static void *multifd_send_thread(void *opaque)
> qemu_sem_post(&multifd_send_state->channels_ready);
> qemu_sem_wait(&p->sem);
>
> - if (qatomic_read(&multifd_send_state->exiting)) {
> + if (multifd_send_should_exit()) {
> break;
> }
> qemu_mutex_lock(&p->mutex);
> @@ -786,12 +786,7 @@ static void multifd_tls_outgoing_handshake(QIOTask *task,
>
> trace_multifd_tls_outgoing_handshake_error(ioc, error_get_pretty(err));
>
> - migrate_set_error(migrate_get_current(), err);
> - /*
> - * Error happen, mark multifd_send_thread status as 'quit' although it
> - * is not created, and then tell who pay attention to me.
> - */
> - p->quit = true;
> + multifd_send_terminate_threads(err);
> multifd_send_kick_main(p);
> error_free(err);
> }
> @@ -857,22 +852,6 @@ static bool multifd_channel_connect(MultiFDSendParams *p,
> return true;
> }
>
> -static void multifd_new_send_channel_cleanup(MultiFDSendParams *p,
> - QIOChannel *ioc, Error *err)
> -{
> - migrate_set_error(migrate_get_current(), err);
> - /* Error happen, we need to tell who pay attention to me */
> - multifd_send_kick_main(p);
> - /*
> - * Although multifd_send_thread is not created, but main migration
> - * thread need to judge whether it is running, so we need to mark
> - * its status.
> - */
> - p->quit = true;
> - object_unref(OBJECT(ioc));
> - error_free(err);
> -}
> -
> static void multifd_new_send_channel_async(QIOTask *task, gpointer opaque)
> {
> MultiFDSendParams *p = opaque;
> @@ -889,7 +868,10 @@ static void multifd_new_send_channel_async(QIOTask *task, gpointer opaque)
> }
>
> trace_multifd_new_send_channel_async_error(p->id, local_err);
> - multifd_new_send_channel_cleanup(p, ioc, local_err);
> + multifd_send_terminate_threads(local_err);
> + multifd_send_kick_main(p);
> + object_unref(OBJECT(ioc));
> + error_free(local_err);
> }
>
> static void multifd_new_send_channel_create(gpointer opaque)
> @@ -921,7 +903,6 @@ int multifd_save_setup(Error **errp)
> qemu_mutex_init(&p->mutex);
> qemu_sem_init(&p->sem, 0);
> qemu_sem_init(&p->sem_sync, 0);
> - p->quit = false;
> p->pending_job = 0;
> p->id = i;
> p->pages = multifd_pages_init(page_count);