Re: [Qemu-devel] [PATCH v2 2/2] QEMUBH: make AioContext's bh re-entrant


From: mdroth
Subject: Re: [Qemu-devel] [PATCH v2 2/2] QEMUBH: make AioContext's bh re-entrant
Date: Tue, 18 Jun 2013 17:26:52 -0500
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Jun 18, 2013 at 09:20:26PM +0200, Paolo Bonzini wrote:
> On 18/06/2013 17:14, mdroth wrote:
> > Could we possibly simplify this by introducing a recursive mutex that we
> > could use to protect the whole list loop and hold even during the cb?
> 
> If it is possible, we should avoid recursive locks.  They make it
> impossible to establish a lock hierarchy.  For example:
> 
> > I assume we can't hold the lock during the cb currently since we might
> > try to reschedule, but if it's a recursive mutex would that simplify
> > things?
> 
> If you have two callbacks in two different AioContexts, both of which do
> bdrv_drain_all(), you get an AB-BA deadlock.

I think I see what you mean. That problem exists regardless of whether we
introduce a recursive mutex, though, right? I guess the main issue is that
we'd be encouraging sloppy locking practices without addressing the root
problem?
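
Just to check my understanding, here's the scenario stripped down to
plain pthreads -- nothing QEMU-specific, all the names below are made
up for illustration:

#include <pthread.h>

/* Two locks standing in for two AioContexts' mutexes. */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* A callback dispatched in context A: the dispatcher already holds
 * lock_a across the cb (which is what a recursive mutex would let us
 * do), and draining touches every context, so it goes for lock_b. */
static void *cb_in_a(void *arg)
{
    pthread_mutex_lock(&lock_a);   /* held across the callback */
    pthread_mutex_lock(&lock_b);   /* "drain_all": A, then B */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

/* A callback dispatched in context B takes the same pair in the
 * opposite order: B, then A. */
static void *cb_in_b(void *arg)
{
    pthread_mutex_lock(&lock_b);   /* held across the callback */
    pthread_mutex_lock(&lock_a);   /* "drain_all": B, then A */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void)
{
    pthread_t ta, tb;
    pthread_create(&ta, NULL, cb_in_a, NULL);
    pthread_create(&tb, NULL, cb_in_b, NULL);
    pthread_join(ta, NULL);   /* can hang once each thread owns its first lock */
    pthread_join(tb, NULL);
    return 0;
}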

I'm just worried about what other subtle problems will pop up if we instead
rely heavily on memory barriers and inevitably forget one here or there, but
maybe that's just me not having a good understanding of when to use them.

But doesn't RCU provide higher-level interfaces for these kinds of things?
Is it possible to hide any of this behind our list interfaces?
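
For comparison, my rough picture of what the barriers in the patch
amount to is RCU-style publication, which liburcu does wrap behind
list macros (cds_list_add_rcu() and friends). A sketch with
simplified stand-in types and barrier macros -- not the actual patch:

typedef struct QEMUBH QEMUBH;
struct QEMUBH {
    QEMUBH *next;
    void (*cb)(void *opaque);
    void *opaque;
    int scheduled;
};

typedef struct AioContext {
    QEMUBH *first_bh;
} AioContext;

/* Full barriers as stand-ins for the real smp_wmb()/smp_rmb(). */
#define smp_wmb() __sync_synchronize()
#define smp_rmb() __sync_synchronize()

/* Writer: fully initialize the node before publishing it, so that a
 * reader that sees the new link also sees the node's contents. */
static void bh_publish(AioContext *ctx, QEMUBH *bh)
{
    bh->next = ctx->first_bh;
    smp_wmb();                 /* node contents before the link */
    ctx->first_bh = bh;
}

/* Reader: the mirror image -- fetch each pointer, barrier, then look
 * at what it points to.  This pairing is the part that RCU's
 * list-traversal macros could hide behind a list interface. */
static void bh_walk(AioContext *ctx)
{
    QEMUBH *bh;
    for (bh = ctx->first_bh; bh; bh = bh->next) {
        smp_rmb();             /* the link before the contents */
        if (bh->scheduled) {
            bh->scheduled = 0;
            bh->cb(bh->opaque);
        }
    }
}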

> 
> Also, the barriers for qemu_bh_schedule are needed anyway.  It's only
> those that order bh->next vs. ctx->first_bh that would go away.

I see; I guess it's unavoidable in some cases.
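
For the archive, my mental model of the part that stays, reusing the
stand-in types and barriers from the sketch above (again simplified,
not the patch itself):

/* Scheduling thread: whatever data the callback consumes must be
 * visible before the scheduled flag is. */
static void bh_schedule(QEMUBH *bh)
{
    /* caller has already written the data bh->cb will read */
    smp_wmb();             /* the data before the flag */
    bh->scheduled = 1;
    /* ...then kick the event loop (aio_notify() in QEMU) */
}

/* Dispatching thread: the mirror order. */
static void bh_dispatch(QEMUBH *bh)
{
    if (bh->scheduled) {
        bh->scheduled = 0;
        smp_rmb();         /* the flag before the data */
        bh->cb(bh->opaque);
    }
}

Those two barriers stay even if the list itself were walked under a
lock; only the bh->next vs. ctx->first_bh ordering would go away.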

> 
> Paolo
> 
> > I've been doing something similar with IOHandlers for the QContext
> > stuff, and that's the approach I took. This patch introduces the
> > recursive mutex:
> > 
> > https://github.com/mdroth/qemu/commit/c7ee0844da62283c9466fcb10ddbfadd0b8bfc53
> 


