From: Daniel P. Berrange
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH v4 08/27] osdep: Add qemu_lock_fd and qemu_unlock_fd
Date: Wed, 11 May 2016 10:01:30 +0100
User-agent: Mutt/1.6.0 (2016-04-01)

On Wed, May 11, 2016 at 08:48:18AM +0800, Fam Zheng wrote:
> On Tue, 05/10 09:57, Daniel P. Berrange wrote:
> > On Tue, May 10, 2016 at 10:50:40AM +0800, Fam Zheng wrote:
> > > They are wrappers around POSIX fcntl file locking, with the additional
> > > interception of open/close (through qemu_open and qemu_close) to offer
> > > better semantics that preserve locks across the life cycles of multiple
> > > fds on the same file.  The reason for this semantic change relative to
> > > plain fcntl locks is to make the API cleaner for QEMU subsystems.
> > > 
> > > More precisely, we remove this "feature" of fcntl locks:
> > > 
> > >     If a process closes any file descriptor referring to a file, then
> > >     all of the process's locks on that file are released, regardless of
> > >     the file descriptor(s) on which the locks were obtained.
> > > 
> > > as long as the fd is always open/closed through qemu_open and
> > > qemu_close.
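
[The fcntl behaviour quoted above can be reproduced with a short standalone C sketch. Everything here is illustrative and not from the patch; the probe runs in a forked child because F_GETLK never reports a process's own locks as conflicts:]

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* From another process, report whether any write lock is held on path
 * (F_GETLK in the same process would not see our own locks). */
static int probe_lock(const char *path)
{
    pid_t pid = fork();
    if (pid == 0) {
        int fd = open(path, O_RDWR);
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        fcntl(fd, F_GETLK, &fl);
        _exit(fl.l_type != F_UNLCK);  /* 1 = some process holds a lock */
    }
    int status;
    waitpid(pid, &status, 0);
    return WEXITSTATUS(status);
}

/* Returns (locked_before << 1) | locked_after: a result of 2 means the
 * lock was held before close(fd2) and silently gone afterwards. */
static int demo_close_drops_lock(void)
{
    char path[] = "/tmp/fcntl-demo-XXXXXX";
    int fd1 = mkstemp(path);

    struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
    fcntl(fd1, F_SETLK, &fl);      /* write-lock the whole file via fd1 */

    int before = probe_lock(path);

    int fd2 = open(path, O_RDWR);  /* unrelated second fd to the file   */
    close(fd2);                    /* ...closing it drops fd1's lock!   */

    int after = probe_lock(path);

    close(fd1);
    unlink(path);
    return (before << 1) | after;
}
```

[The demo returns 2: the lock is visible before the second fd is closed and gone afterwards, which is exactly the quirk the patch tries to hide behind qemu_open/qemu_close.]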
> > 
> > You're not actually removing that problem - this is just hacking
> > around it in a manner that is racy and susceptible to silent failure.
> > 
> > 
> > > +static int qemu_fd_close(int fd)
> > > +{
> > > +    LockEntry *ent, *tmp;
> > > +    LockRecord *rec;
> > > +    char *path;
> > > +    int ret;
> > > +
> > > +    assert(fd_to_path);
> > > +    path = g_hash_table_lookup(fd_to_path, GINT_TO_POINTER(fd));
> > > +    assert(path);
> > > +    g_hash_table_remove(fd_to_path, GINT_TO_POINTER(fd));
> > > +    rec = g_hash_table_lookup(lock_map, path);
> > > +    ret = close(fd);
> > > +
> > > +    if (rec) {
> > > +        QLIST_FOREACH_SAFE(ent, &rec->records, next, tmp) {
> > > +            int r;
> > > +            if (ent->fd == fd) {
> > > +                QLIST_REMOVE(ent, next);
> > > +                g_free(ent);
> > > +                continue;
> > > +            }
> > > +            r = qemu_lock_do(ent->fd, ent->start, ent->len,
> > > +                             ent->readonly ? F_RDLCK : F_WRLCK);
> > > +            if (r == -1) {
> > > +                fprintf(stderr, "Failed to acquire lock on fd %d: %s\n",
> > > +                        ent->fd, strerror(errno));
> > > +            }
> > 
> > If another app has acquired the lock between the close and this attempt
> > to re-acquire the lock, then QEMU is silently carrying on without any
> > lock being held. For something that's intending to provide protection
> > against concurrent use I think this is not an acceptable failure
> > scenario.
> > 
> > 
> > > +int qemu_open(const char *name, int flags, ...)
> > > +{
> > > +    int mode = 0;
> > > +    int ret;
> > > +
> > > +    if (flags & O_CREAT) {
> > > +        va_list ap;
> > > +
> > > +        va_start(ap, flags);
> > > +        mode = va_arg(ap, int);
> > > +        va_end(ap);
> > > +    }
> > > +    ret = qemu_fd_open(name, flags, mode);
> > > +    if (ret >= 0) {
> > > +        qemu_fd_add_record(ret, name);
> > > +    }
> > > +    return ret;
> > > +}
> > 
> > I think the approach you have here is fundamentally not usable with
> > fcntl locks, because it is still using the pattern
> > 
> >    fd = open(path)
> >    lock(fd)
> >    if failed lock
> >       close(fd)
> >    ...do stuff.
> > 
> > 
> > As mentioned in the previous posting, I believe the block layer must
> > check whether it already has a lock held against "path" *before*
> > even attempting to open the file. Once QEMU has the file descriptor
> > open it is too late, because closing that fd will release any
> > pre-existing locks QEMU holds.
> 
> There will still be valid close() calls that temporarily release locks
> we need.  You are right that the "close + re-acquire" in qemu_fd_close()
> is racy. Any suggestion how this could be fixed?  Something like
> introducing a "close2" syscall that doesn't drop locks on other fds?
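
[Daniel's check-before-open pattern can be sketched as below. The names and the fixed-size registry are purely illustrative, not QEMU API; the point is only that the path is consulted before open(2) ever runs, so a second fd that could destroy existing locks is never created:]

```c
#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical process-wide registry of paths with live locks. */
#define MAX_LOCKED 64
static const char *locked_paths[MAX_LOCKED];

static int path_has_lock(const char *name)
{
    for (int i = 0; i < MAX_LOCKED; i++) {
        if (locked_paths[i] && strcmp(locked_paths[i], name) == 0) {
            return 1;
        }
    }
    return 0;
}

/* Refuse to open a path we already hold locks on, instead of creating
 * a second fd whose later close() would silently drop those locks. */
int qemu_open_checked(const char *name, int flags)
{
    if (path_has_lock(name)) {
        errno = EBUSY;
        return -1;
    }
    return open(name, flags);
}
```

[A caller that really needs a second handle would instead have to dup() or otherwise share the fd recorded for that path, rather than opening the file again.]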

I guess the problem scenario is with the "nolock" option - if one block
driver has an image open and locked, and another block driver opens that
same image with 'nolock', then you hit the potential close() problem.

One idea might be to have a delayed close(), i.e. don't actually close
the file descriptor until all locks on that file have been released by
the other block driver.
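
[The delayed-close idea might look roughly like this sketch: one record per path holding a single shared fd, a count of live locks, and a flag for a close() that had to be deferred. All names (path_record, record_close, ...) are hypothetical, not from QEMU:]

```c
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

struct path_record {
    char *path;
    int fd;            /* the one real fd for this path           */
    int lock_count;    /* locks currently held through this fd    */
    int close_wanted;  /* close() requested while locks were live */
    struct path_record *next;
};

static struct path_record *records;

/* Reuse an existing fd for the path if we have one, so no second
 * open/close cycle can ever drop locks held on the first fd. */
struct path_record *record_open(const char *path, int flags)
{
    struct path_record *rec;
    for (rec = records; rec; rec = rec->next) {
        if (rec->fd >= 0 && strcmp(rec->path, path) == 0) {
            return rec;
        }
    }
    rec = calloc(1, sizeof(*rec));
    rec->path = strdup(path);
    rec->fd = open(path, flags);
    rec->next = records;
    records = rec;
    return rec;
}

void record_lock(struct path_record *rec)
{
    struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
    fcntl(rec->fd, F_SETLK, &fl);
    rec->lock_count++;
}

void record_unlock(struct path_record *rec)
{
    struct flock fl = { .l_type = F_UNLCK, .l_whence = SEEK_SET };
    fcntl(rec->fd, F_SETLK, &fl);
    if (--rec->lock_count == 0 && rec->close_wanted) {
        close(rec->fd);        /* the deferred close, now safe */
        rec->fd = -1;
    }
}

int record_close(struct path_record *rec)
{
    if (rec->lock_count > 0) {
        rec->close_wanted = 1; /* defer until the last unlock  */
        return 0;
    }
    if (rec->fd >= 0) {
        close(rec->fd);
        rec->fd = -1;
    }
    return 0;
}
```

[Because the real close(2) only ever happens when lock_count is zero, the racy "close then re-acquire" window in the patch's qemu_fd_close() never opens.]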

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


