Re: [Qemu-devel] Fix refcounting in hugetlbfs quota handling


From: Christoph Hellwig
Subject: Re: [Qemu-devel] Fix refcounting in hugetlbfs quota handling
Date: Sat, 13 Aug 2011 00:20:03 +0200
User-agent: Mutt/1.5.17 (2007-11-01)

On Thu, Aug 11, 2011 at 04:40:59PM +1000, David Gibson wrote:
> Linus, please apply
> 
> hugetlbfs tracks the current usage of hugepages per hugetlbfs
> mountpoint.  To correctly track this when hugepages are released, it
> must find the right hugetlbfs super_block from the struct page
> available in free_huge_page().

A superblock is not a mountpoint; it's a filesystem instance.  You can happily
have a single filesystem mounted at multiple mount points.
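
For reference, the pre-patch bookkeeping being described looks roughly like
this (a simplified sketch with a hypothetical helper name, not the actual
hugetlbfs code):

	/*
	 * Sketch: how the pre-patch code reaches the filesystem instance
	 * from a huge page.  The page's private field holds the
	 * address_space the page was instantiated through; the
	 * super_block is then reached via the owning inode.
	 */
	static void hugetlb_release_quota_sketch(struct page *page)
	{
		struct address_space *mapping;
		struct super_block *sb;

		mapping = (struct address_space *)page_private(page);
		sb = mapping->host->i_sb;	/* inode, then filesystem instance */

		/* credit the page back to this filesystem's usage counters */
	}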

> However, this usage is buggy, because nothing ensures that the
> address_space is not freed before all the hugepages that belonged to
> it are.  In practice that will usually be the case, but if extra page
> references have been taken by e.g. drivers or kvm doing
> get_user_pages() then the file, inode and address space may be
> destroyed before all the pages.
> 
> In addition, the quota functions use the mapping only to get the inode
> then the super_block.  However, most of the callers already have the
> inode anyway and have to get the mapping from there.
> 
> This patch, therefore, stores a pointer to the inode instead of the
> address_space in the page private data for hugepages.

What's the point?  The lifetime of inode->i_mapping is exactly the
same as that of the inode, except for those few filesystems that use
one from a different inode (and then for the whole lifetime of the
inode), so I can't see how your patch will make a difference.
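
For anyone following along: for most filesystems the address_space is
embedded in the inode itself, which is why the two lifetimes coincide.
Abbreviated sketch of the relevant VFS fields (not a complete definition):

	struct inode {
		/* ... */
		struct address_space	*i_mapping;	/* usually == &i_data */
		struct address_space	i_data;		/* embedded in the inode */
		/* ... */
	};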

> More
> importantly it correctly adjusts the reference count on the inodes
> when they're added to the page private data.  This ensures that the
> inode (and therefore the super block) will not be freed before we use
> it from free_huge_page.

That seems like the real fix.  And even if you'd still do the other bits,
it should be a separate patch.
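
A minimal sketch of that separate fix, assuming the usual igrab()/iput()
pairing (the helper names here are made up for illustration):

	/*
	 * Sketch only: pin the inode while a huge page refers to it, and
	 * drop the pin when the page is freed, so the inode -- and hence
	 * its super_block -- cannot go away first.
	 */
	static void hugetlb_set_page_inode(struct page *page, struct inode *inode)
	{
		/* igrab() takes a reference, or returns NULL if the inode
		 * is already being torn down */
		set_page_private(page, (unsigned long)igrab(inode));
	}

	static void hugetlb_drop_page_inode(struct page *page)
	{
		struct inode *inode = (struct inode *)page_private(page);

		set_page_private(page, 0);
		if (inode) {
			/* the reference taken above kept the inode alive
			 * until here, so inode->i_sb is still valid */
			iput(inode);
		}
	}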



