mm/hugetlb: improve locking in dissolve_free_huge_pages()
Author:     Gerald Schaefer <gerald.schaefer@de.ibm.com>
AuthorDate: Sat, 8 Oct 2016 00:01:13 +0000 (17:01 -0700)
Commit:     Ben Hutchings <ben@decadent.org.uk>
CommitDate: Sat, 11 Nov 2017 13:33:43 +0000 (13:33 +0000)
commit eb03aa008561004257900983193d024e57abdd96 upstream.

For every pfn aligned to minimum_order (with a 2 MiB minimum hugepage size
on x86-64, that is every 512th pfn), dissolve_free_huge_pages() calls
dissolve_free_huge_page(), which takes the hugetlb spinlock, even if the
page is not a hugepage at all or is a hugepage that is in use.

Improve this by performing the PageHuge() and page_count() checks in
dissolve_free_huge_pages() itself, before calling dissolve_free_huge_page(),
so the spinlock is only taken for plausible candidates.  Because the page
state can change between the unlocked check and acquiring the lock,
dissolve_free_huge_page() must revalidate both checks while holding the
spinlock (a generic sketch of this check-then-revalidate pattern follows
the sign-off trailers below).

Link: http://lkml.kernel.org/r/20160926172811.94033-4-gerald.schaefer@de.ibm.com
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Rui Teng <rui.teng@linux.vnet.ibm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
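
The following is a minimal, self-contained userspace sketch of the pattern
this patch applies: a cheap lockless pre-check filters out candidates that
cannot qualify, and the same condition is then revalidated under the lock,
since the state may have changed in between.  This is not kernel code; the
hypothetical slot structure, slot_looks_free(), and dissolve_slot() stand in
for PageHuge()/page_count() and dissolve_free_huge_page(), and a pthread
mutex stands in for the hugetlb spinlock.

	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct slot {
		bool free;	/* analogous to "free huge page" state */
		int  refcount;	/* analogous to page_count() */
	};

	static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
	static struct slot table[16];

	/* Unlocked pre-check: may race, so the result is only a hint. */
	static bool slot_looks_free(const struct slot *s)
	{
		return s->free && s->refcount == 0;
	}

	/* Expensive path: takes the lock and revalidates before acting. */
	static int dissolve_slot(struct slot *s)
	{
		int rc = 0;

		pthread_mutex_lock(&table_lock);
		/* Revalidate: the slot may have changed since the hint. */
		if (s->free && s->refcount == 0)
			s->free = false;	/* "dissolve" it */
		else
			rc = -1;		/* lost the race */
		pthread_mutex_unlock(&table_lock);
		return rc;
	}

	static void dissolve_free_slots(void)
	{
		for (int i = 0; i < 16; i++) {
			/* Cheap filter avoids locking for every slot... */
			if (!slot_looks_free(&table[i]))
				continue;
			/* ...but the locked path must check again. */
			dissolve_slot(&table[i]);
		}
	}

	int main(void)
	{
		table[3].free = true;
		dissolve_free_slots();
		printf("slot 3 free after dissolve: %d\n", table[3].free);
		return 0;
	}

Note that the error semantics here are simplified: in the actual patch below,
a nonzero return from dissolve_free_huge_page() aborts the whole scan,
whereas this sketch merely reports that the race was lost for one slot.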
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 467d04b629488c9269b37fb90896d46bf4cdbf66..c892a6b51e8e4d24a694936ec62bdc5a8a587341 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1105,14 +1105,20 @@ out:
 int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
        unsigned long pfn;
+       struct page *page;
        int rc = 0;
 
        if (!hugepages_supported())
                return rc;
 
-       for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
-               if (rc = dissolve_free_huge_page(pfn_to_page(pfn)))
-                       break;
+       for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
+               page = pfn_to_page(pfn);
+               if (PageHuge(page) && !page_count(page)) {
+                       rc = dissolve_free_huge_page(page);
+                       if (rc)
+                               break;
+               }
+       }
 
        return rc;
 }