Re: [PATCH v6 06/18] jobs: protect jobs with job_lock/unlock


From: Kevin Wolf
Subject: Re: [PATCH v6 06/18] jobs: protect jobs with job_lock/unlock
Date: Fri, 3 Jun 2022 18:40:52 +0200

On 14.03.2022 at 14:36, Emanuele Giuseppe Esposito wrote:
> Introduce the job locking mechanism throughout the whole job API,
> following the comments in job.h and the requirements of job-monitor
> (e.g. the functions in job-qmp.c, which assume the lock is held) and
> job-driver (e.g. mirror.c and all other JobDriver implementations,
> where the lock is not held).
> 
> Use the _locked helpers introduced earlier to differentiate
> between functions called with and without job_mutex held.
> This only applies to functions that are called in both cases;
> all the others will be renamed later.
> 
> job_{lock/unlock} is independent of real_job_{lock/unlock}.
> 
> Note: at this stage, job_{lock/unlock} and job lock guard macros
> are *nop*.
> 
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> ---
>  block.c             |  18 ++++---
>  block/replication.c |   8 ++-
>  blockdev.c          |  17 ++++--
>  blockjob.c          |  56 +++++++++++++-------
>  job-qmp.c           |   2 +
>  job.c               | 125 +++++++++++++++++++++++++++++++-------------
>  monitor/qmp-cmds.c  |   6 ++-
>  qemu-img.c          |  41 +++++++++------
>  8 files changed, 187 insertions(+), 86 deletions(-)
> 
> diff --git a/block.c b/block.c
> index 718e4cae8b..5dc46fde11 100644
> --- a/block.c
> +++ b/block.c
> @@ -4978,7 +4978,9 @@ static void bdrv_close(BlockDriverState *bs)
>  
>  void bdrv_close_all(void)
>  {
> -    assert(job_next(NULL) == NULL);
> +    WITH_JOB_LOCK_GUARD() {
> +        assert(job_next(NULL) == NULL);
> +    }
>      GLOBAL_STATE_CODE();

This series seems really hard to review patch by patch, in this case
because I would have to know whether you intended job_next() to be
called with the lock held or not. Nothing in job.h indicates either way
at this point in the series.
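
Even a one-line comment on the prototype would make the intended
contract reviewable within this patch, something like this in job.h
(just a sketch of the kind of annotation I mean, not code from the
patch):

    /* Get the next job in the job list; called with job_mutex held. */
    Job *job_next(Job *job);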

Patch 11 answers this by actually renaming it job_next_locked(), but
always having to refer to the final state after the whole series is
applied is really not how this should work. We split the work into
individual patches precisely so that the state after each single patch
makes sense on its own. Otherwise the whole series might as well be a
single patch. :-(

So I'd argue that patch 11 should probably come before this one.
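
With the rename in place first, the hunk above would be
self-documenting (sketch using the name from patch 11):

    WITH_JOB_LOCK_GUARD() {
        assert(job_next_locked(NULL) == NULL);
    }

and there would be no doubt about the locking contract while reviewing
this patch.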

Anyway, I guess I'll try to make my way to the end of the series quickly
and then somehow try to verify whatever the state is then.
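
(Side note for other reviewers: I read the commit message's note that
job_{lock/unlock} and the job lock guard macros are *nop* at this stage
as the lock functions simply being empty for now, roughly like this (my
paraphrase, not code from the patch):

    void job_lock(void)
    {
        /* nop for now; becomes a real mutex lock later in the series */
    }

    void job_unlock(void)
    {
        /* nop */
    }

so taking the "lock" compiles and is safe everywhere, but doesn't
actually protect anything yet.)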

Kevin



