
Re: [Qemu-devel] [PATCH v6 04/22] block: Introduce image file locking


From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH v6 04/22] block: Introduce image file locking
Date: Wed, 22 Jun 2016 17:04:12 +0800
User-agent: Mutt/1.6.1 (2016-04-27)

On Wed, 06/22 10:30, Kevin Wolf wrote:
> Am 22.06.2016 um 09:23 hat Fam Zheng geschrieben:
> > On Fri, 06/17 13:34, Kevin Wolf wrote:
> > > Am 03.06.2016 um 10:48 hat Fam Zheng geschrieben:
> > > > Block drivers can implement this new operation .bdrv_lockf to actually
> > > > lock the image in the protocol-specific way.
> > > > 
> > > > Signed-off-by: Fam Zheng <address@hidden>
> > > > ---
> > > >  block.c                   | 57 +++++++++++++++++++++++++++++++++++++++++++++++
> > > >  include/block/block.h     | 11 ++++++++-
> > > >  include/block/block_int.h |  5 +++++
> > > >  3 files changed, 72 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/block.c b/block.c
> > > > index 736432f..4c2a3ff 100644
> > > > --- a/block.c
> > > > +++ b/block.c
> > > > @@ -854,6 +854,50 @@ out:
> > > >      g_free(gen_node_name);
> > > >  }
> > > >  
> > > > +BdrvLockfCmd bdrv_get_locking_cmd(int flags)
> > > > +{
> > > > +    if (flags & BDRV_O_NO_LOCK) {
> > > > +        return BDRV_LOCKF_UNLOCK;
> > > > +    } else if (flags & BDRV_O_SHARED_LOCK) {
> > > > +        return BDRV_LOCKF_SHARED;
> > > > +    } else if (flags & BDRV_O_RDWR) {
> > > > +        return BDRV_LOCKF_EXCLUSIVE;
> > > > +    } else {
> > > > +        return BDRV_LOCKF_SHARED;
> > > > +    }
> > > > +}
> > > 
> > > It feels a bit counterintuitive to use the very same operation for
> > > "start of critical section, but don't actually lock" and "end of
> > > critical section".
> > > 
> > > Is there a specific reason why you chose this instead of separate
> > > .bdrv_lock/bdrv_unlock callbacks?
> > 
> > Because unlike open(2)/close(2), locking and unlocking are typically
> > implemented with one parameterized operation underneath (fcntl(2)), so I
> > naturally followed that.
> 
> Is it really "typically", or is this an idiosyncrasy of POSIX and we end
> up modeling our interface for one particular block driver even though
> for others a different interface might be preferable? Did you look at
> more interfaces?

Yes I did.

RBD has rbd_lock_exclusive, rbd_lock_shared and rbd_unlock; Windows has
LockFile and UnlockFile.

But I was also thinking about flock(2), lockf(3) and glfs_posix_lock(). None of
them uses an interface different from POSIX's, and I really have no personal
preference here.

Either way, we'd have to adapt one style to the other when we eventually
support locking in more drivers. For now, the first implementation, raw-posix,
fits fine.

Back to your concern about intuitiveness: there is no "start critical section,
but don't actually lock" semantic in this series; the block layer won't call
the driver function unless locking is needed.

> 
> Apart from that, we generally make our internal interfaces so that they
> are easy to use and code is easy to read rather than directly mapping
> POSIX everywhere. For example, our I/O functions never do short
> reads/writes even though POSIX allows them.

This is kind of a personal preference, and since I am not insisting on either
side, we can go the other way if you prefer.

Fam


