From: Emanuele Giuseppe Esposito
Subject: Re: [PATCH v6 15/18] job: detect change of aiocontext within job coroutine
Date: Tue, 7 Jun 2022 15:28:26 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.2.0
On 03/06/2022 at 18:59, Kevin Wolf wrote:
> On 14.03.2022 at 14:37, Emanuele Giuseppe Esposito wrote:
>> From: Paolo Bonzini <pbonzini@redhat.com>
>>
>> We want to make sure that access to job->aio_context is always done
>> under either the BQL or job_mutex. The problem is that using
>> aio_co_enter(job->aio_context, job->co) in job_start and job_enter_cond
>> makes the coroutine resume immediately, so we can't hold the job lock.
>> And caching it is not safe either, as it might change.
>>
>> job_start runs under the BQL, so it can freely read job->aio_context, but
>> job_enter_cond does not. In order to fix this, use aio_co_wake():
>> the advantage is that it won't use job->aio_context, but the
>> main disadvantage is that it won't be able to detect a change of
>> the job's AioContext.
>>
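
For reference, aio_co_wake() in util/async.c is roughly the following
(simplified; memory barriers and tracing omitted). It derives the target
AioContext from co->ctx, i.e. the context the coroutine last ran in, so
the caller never needs to read job->aio_context:

void aio_co_wake(Coroutine *co)
{
    /* Resume the coroutine in the AioContext it last ran in. */
    AioContext *ctx = qatomic_read(&co->ctx);

    aio_co_enter(ctx, co);
}
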
>> Calling bdrv_try_set_aio_context() will issue the following calls
>> (simplified):
>> * in terms of bdrv callbacks:
>> .drained_begin -> .set_aio_context -> .drained_end
>> * in terms of child_job functions:
>> child_job_drained_begin -> child_job_set_aio_context ->
>> child_job_drained_end
>> * in terms of job functions:
>> job_pause_locked -> job_set_aio_context -> job_resume_locked
>>
>> We can see that after setting the new aio_context, job_resume_locked
>> calls job_enter_cond again, which then invokes aio_co_wake(). But
>> while job->aio_context has been set in job_set_aio_context,
>> job->co->ctx has not changed, so the coroutine would be entered in
>> the wrong AioContext.
>>
>> Using aio_co_schedule() in job_resume_locked() might seem like a valid
>> alternative, but the problem is that the BH resuming the coroutine
>> does not run immediately, and if in the meanwhile another
>> bdrv_try_set_aio_context() is run (see test_propagate_mirror() in
>> test-block-iothread.c), the first schedule would target the wrong
>> AioContext, and the second set of drains wouldn't even manage
>> to schedule the coroutine, as job->busy would still be true from
>> the previous job_resume_locked().
>>
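
The deferred behaviour comes from aio_co_schedule() itself. Stripped of
tracing and the sanity checks, it is roughly the following (paraphrased
from util/async.c): the coroutine is only queued, and it actually runs
when the target AioContext processes its scheduling bottom half:

void aio_co_schedule(AioContext *ctx, Coroutine *co)
{
    /* Queue the coroutine; it runs only once ctx executes its
     * co_schedule_bh bottom half, not right away. */
    QSLIST_INSERT_HEAD_ATOMIC(&ctx->scheduled_coroutines,
                              co, co_scheduled_next);
    qemu_bh_schedule(ctx->co_schedule_bh);
}
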
>> The solution is to stick with aio_co_wake() and, every time the
>> coroutine resumes from yielding, detect whether job->aio_context
>> has changed. If so, we can reschedule it to the new context.
>>
>> The check for the AioContext change is done in job_do_yield_locked because:
>> 1) aio_co_reschedule_self requires to be called from the running coroutine
>> 2) since child_job_set_aio_context allows changing the AioContext only
>> while the job is paused, this is exactly the place where the coroutine
>> resumes, before running the JobDriver's code.
>>
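
For reference, aio_co_reschedule_self() (util/async.c) is roughly the
following, which is why it must be called from inside the coroutine
being moved (point 1 above):

void coroutine_fn aio_co_reschedule_self(AioContext *new_ctx)
{
    AioContext *old_ctx = qemu_get_current_aio_context();

    if (old_ctx != new_ctx) {
        /* Queue ourselves on the new context and yield; the coroutine
         * is re-entered from new_ctx's scheduling bottom half. */
        aio_co_schedule(new_ctx, qemu_coroutine_self());
        qemu_coroutine_yield();
    }
}
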
>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>> job.c | 24 +++++++++++++++++++++---
>> 1 file changed, 21 insertions(+), 3 deletions(-)
>>
>> diff --git a/job.c b/job.c
>> index 89c0e6bed9..10a5981748 100644
>> --- a/job.c
>> +++ b/job.c
>> @@ -543,11 +543,12 @@ void job_enter_cond_locked(Job *job, bool(*fn)(Job *job))
>> return;
>> }
>>
>> - assert(!job->deferred_to_main_loop);
>
> Why doesn't this assertion hold true any more?
Theoretically this assertion is useless, since once the new lock is used
we are in the same critical section, right? I don't recall any other
reason. I will restore it if I need to respin (depends on what we decide
about the feedback you provided on the other patches).
Thank you,
Emanuele
>
>> timer_del(&job->sleep_timer);
>> job->busy = true;
>> real_job_unlock();
>> - aio_co_enter(job->aio_context, job->co);
>> + job_unlock();
>> + aio_co_wake(job->co);
>> + job_lock();
>> }
>>
>> void job_enter(Job *job)
>> @@ -568,6 +569,8 @@ void job_enter(Job *job)
>> */
>> static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns)
>> {
>> + AioContext *next_aio_context;
>> +
>> real_job_lock();
>> if (ns != -1) {
>> timer_mod(&job->sleep_timer, ns);
>> @@ -579,6 +582,20 @@ static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns)
>> qemu_coroutine_yield();
>> job_lock();
>>
>> + next_aio_context = job->aio_context;
>> + /*
>> + * Coroutine has resumed, but in the meanwhile the job AioContext
>> + * might have changed via bdrv_try_set_aio_context(), so we need to move
>> + * the coroutine too in the new aiocontext.
>> + */
>> + while (qemu_get_current_aio_context() != next_aio_context) {
>> + job_unlock();
>> + aio_co_reschedule_self(next_aio_context);
>> + job_lock();
>> + next_aio_context = job->aio_context;
>> + }
>> +
>> +
>
> Extra empty line.
>
>> /* Set by job_enter_cond_locked() before re-entering the coroutine. */
>> assert(job->busy);
>> }
>> @@ -680,7 +697,6 @@ void job_resume_locked(Job *job)
>> if (job->pause_count) {
>> return;
>> }
>> -
>> /* kick only if no timer is pending */
>> job_enter_cond_locked(job, job_timer_not_pending_locked);
>> }
>
> This hunk looks unrelated.
>
> Kevin
>