Re: [Qemu-block] [PATCH v6 08/22] raw-posix: Add image locking support


From: Fam Zheng
Subject: Re: [Qemu-block] [PATCH v6 08/22] raw-posix: Add image locking support
Date: Wed, 22 Jun 2016 16:27:47 +0800
User-agent: Mutt/1.6.1 (2016-04-27)

On Fri, 06/17 15:07, Kevin Wolf wrote:
> Am 03.06.2016 um 10:49 hat Fam Zheng geschrieben:
> > virtlockd in libvirt locks the first byte, so we lock byte 1 to avoid
> > interfering with it.
> > 
> > Both file and host device protocols are covered.
> > 
> > The complication is with reopen. We have three different locking states,
> > namely "unlocked", "shared locked" and "exclusively locked".
> > 
> > There have three different states, "unlocked", "shared locked" and
> > "exclusively locked".
> 
> This seems to be a corrupted copy of the previous sentence. :-)

Right, fixing.

> 
> > When we reopen, the new fd may need a new locking mode. Moving to or away
> > from exclusive is a bit tricky because we cannot do it atomically. This
> > patch solves it by dup()-ing s->fd to s->lock_fd and avoiding close(), so
> > that there isn't a racy window where we drop the lock on one fd before
> > acquiring the exclusive lock on the other.
> > 
> > To make the logic easier to manage, and to allow better reuse, the code is
> > internally organized around a state transition table (old_lock -> new_lock).
> > 
> > Signed-off-by: Fam Zheng <address@hidden>
> 
> I must admit that I don't fully understand yet why we can't change the
> lock atomically and how s->lock_fd helps. In any case, I think it
> deserves comments in the code and not only in the commit message.

I'll add comments in the code too.
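For readers of the archive, here is a minimal sketch of the byte-1 locking idea
described in the commit message above. It is not the patch's code: the helper
name is hypothetical, and the use of plain F_SETLK is an assumption (the series
may use a different primitive, e.g. Linux OFD locks).

#include <fcntl.h>
#include <errno.h>
#include <stdbool.h>

/* Lock byte 1 of the image; byte 0 is left alone because virtlockd uses it. */
static int image_lock_byte1(int fd, bool exclusive)
{
    struct flock fl = {
        .l_whence = SEEK_SET,
        .l_start  = 1,                              /* byte 1, not byte 0 */
        .l_len    = 1,
        .l_type   = exclusive ? F_WRLCK : F_RDLCK,
    };

    if (fcntl(fd, F_SETLK, &fl) == -1) {
        return -errno;
    }
    return 0;
}

Which descriptor "owns" such a lock depends on the flavour: traditional POSIX
record locks belong to the process and are dropped as soon as the process
closes any fd for the file, while Linux OFD locks (F_OFD_SETLK) belong to the
open file description that dup() shares. Either way, keeping the lock on a
dedicated lock_fd that is never close()d is what lets it survive a reopen.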

> > +static const struct RawReopenFuncRecord {
> > +    BdrvLockfCmd old_lock;
> > +    BdrvLockfCmd new_lock;
> > +    RawReopenFunc func;
> > +    bool need_lock_fd;
> > +} reopen_functions[] = {
> > +    {BDRV_LOCKF_UNLOCK, BDRV_LOCKF_UNLOCK, raw_reopen_identical, false},
> > +    {BDRV_LOCKF_UNLOCK, BDRV_LOCKF_SHARED, raw_reopen_from_unlock, true},
> > +    {BDRV_LOCKF_UNLOCK, BDRV_LOCKF_EXCLUSIVE, raw_reopen_from_unlock, true},
> > +    {BDRV_LOCKF_SHARED, BDRV_LOCKF_UNLOCK, raw_reopen_to_unlock, false},
> > +    {BDRV_LOCKF_SHARED, BDRV_LOCKF_SHARED, raw_reopen_identical, false},
> > +    {BDRV_LOCKF_SHARED, BDRV_LOCKF_EXCLUSIVE, raw_reopen_upgrade, true},
> > +    {BDRV_LOCKF_EXCLUSIVE, BDRV_LOCKF_UNLOCK, raw_reopen_to_unlock, false},
> > +    {BDRV_LOCKF_EXCLUSIVE, BDRV_LOCKF_SHARED, raw_reopen_downgrade, true},
> > +    {BDRV_LOCKF_EXCLUSIVE, BDRV_LOCKF_EXCLUSIVE, raw_reopen_identical, false},
> > +};
> > +
> > +static int raw_reopen_handle_lock(BDRVReopenState *state,
> > +                                  RawReopenOperation op,
> > +                                  Error **errp)
> 
> I think we have one big problem here: We don't know whether raw_s->fd is
> already locked or not. If dup() and setting the new flags with fcntl()
> succeeded, it is, but if we had to fall back on qemu_open(), it isn't.
> 
> This means that doing nothing in the raw_reopen_identical case isn't
> right because reopening without intending to change anything about the
> locking could end up unlocking the image.
> 

Unless I'm missing something, we don't rely on that, because raw_s->fd is never
locked. Instead, raw_s->lock_fd, as a dup() of raw_s->fd, is what we actually
handle in raw_reopen_identical(), and it always has the correct state when
raw_reopen_handle_lock() is called.  (That is also an advantage of introducing
raw_s->lock_fd.)
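
To make the dispatch over the quoted table concrete, a rough sketch of how
raw_reopen_handle_lock() could look up and call the matching handler follows.
The handler signature and the fields holding the current and desired lock
modes are assumptions, not the patch's actual code.

/* Sketch only: everything marked "assumed" does not come from the patch. */
static int raw_reopen_handle_lock_sketch(BDRVReopenState *state,
                                         RawReopenOperation op,
                                         Error **errp)
{
    BDRVRawState *s = state->bs->opaque;
    BDRVRawReopenState *raw_s = state->opaque;
    BdrvLockfCmd old_lock = s->lock_mode;        /* assumed field */
    BdrvLockfCmd new_lock = raw_s->lock_mode;    /* assumed field */
    size_t i;

    for (i = 0; i < ARRAY_SIZE(reopen_functions); i++) {
        const struct RawReopenFuncRecord *rec = &reopen_functions[i];

        if (rec->old_lock == old_lock && rec->new_lock == new_lock) {
            /* Assumed handler signature; only the table entry is known. */
            return rec->func(state, op, rec->need_lock_fd, errp);
        }
    }
    /* The table covers every (old_lock, new_lock) pair, so we never get here. */
    abort();
}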

Fam


