From: Kevin Wolf
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH v3 4/6] blockjob: add block_job_start
Date: Tue, 8 Nov 2016 10:16:24 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

Am 08.11.2016 um 03:05 hat Jeff Cody geschrieben:
> On Mon, Nov 07, 2016 at 09:02:14PM -0500, John Snow wrote:
> > On 11/03/2016 08:17 AM, Kevin Wolf wrote:
> > >Am 02.11.2016 um 18:50 hat John Snow geschrieben:
> > >>+void block_job_start(BlockJob *job)
> > >>+{
> > >>+    assert(job && !block_job_started(job) && job->paused &&
> > >>+           !job->busy && job->driver->start);
> > >>+    job->paused = false;
> > >>+    job->busy = true;
> > >>+    job->co = qemu_coroutine_create(job->driver->start, job);
> > >>+    qemu_coroutine_enter(job->co);
> > >>+}
> > >
> > >We allow the user to pause a job while it's not started yet. You
> > >classified this as "harmless". But if we accept this, can we really
> > >unconditionally enter the coroutine even if the job has been paused?
> > >Can't a user expect that a job remains in paused state when they
> > >explicitly requested a pause and the job was already internally paused,
> > >like in this case by block_job_create()?
> > >
> > 
> > What will end up happening is that we'll enter the job, and then it'll pause
> > immediately upon entrance. Is that a problem?
> > 
> > If the jobs themselves don't check their pause state fastidiously, it
> > could be (block/backup does check -- but only after it creates a write
> > notifier).
> > 
> > Do we want a stronger guarantee here?
> > 
> > Naively I think it's OK as-is, but I could add a stronger boolean that
> > lets us know whether it's okay to start or not, and we could delay the
> > actual creation and start until the 'resume' comes in, if you'd like.
> > 
> > I'd like to avoid the complexity if we can help it, but perhaps I'm not
> > thinking carefully enough about the existing edge cases.
> > 
> 
> Is there any reason we can't just use job->pause_count here?  When the job
> is created, set job->paused = true, and job->pause_count = 1.  In the
> block_job_start(), check the pause_count prior to qemu_coroutine_enter():
> 
>     void block_job_start(BlockJob *job)
>     {
>         assert(job && !block_job_started(job) && job->paused &&
>               !job->busy && job->driver->start);
>         job->co = qemu_coroutine_create(job->driver->start, job);
>         job->paused = --job->pause_count > 0;
>         if (!job->paused) {
>             job->busy = true;
>             qemu_coroutine_enter(job->co);
>         }
>     }

Yes, something like this is what I had in mind.

> > >The same probably also applies to the internal job pausing during
> > >bdrv_drain_all_begin/end, though as you know there is a larger problem
> > >with starting jobs under drain_all anyway. For now, we just need to keep
> > >in mind that we can neither create nor start a job in such sections.
> > >
> > 
> > Yeah, there are deeper problems there. As long as the existing critical
> > sections don't allow us to create jobs (started or not) I think we're
> > probably already OK.

My point here was that we would like to get rid of that restriction
eventually, and if we add more and more things that depend on the
restriction, getting rid of it will only become harder.

But with the above code, I think this specific problem is solved.

Kevin
