move_one_page() is awkward. It grabs an atomic_kmap of the source pte
(because it needs to know if there's really a page there) and then it needs
to allocate a pte for the dest. But it cannot allocate the dest pte while
holding the src's atomic kmap.
So it performs a little dance, peeking at the pagetables to predict whether
alloc_one_pte_map() might need to perform a pte page allocation.
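
Roughly, the dance looks like this (a sketch built from the mm/mremap.c
helpers visible in the diff below; simplified, not the literal function
body in this tree):

	static int move_one_page(struct vm_area_struct *vma,
			unsigned long old_addr, unsigned long new_addr)
	{
		struct mm_struct *mm = vma->vm_mm;
		int error = 0;
		pte_t *src, *dst;

		spin_lock(&mm->page_table_lock);
		src = get_one_pte_map_nested(mm, old_addr); /* atomic kmap */
		if (src) {
			/*
			 * Peek at the dest pagetables: if alloc_one_pte_map()
			 * would have to allocate a pte page (which may sleep),
			 * drop the source's atomic kmap first.
			 */
			if (!page_table_present(mm, new_addr)) {
				pte_unmap_nested(src);
				src = NULL;
			}
			dst = alloc_one_pte_map(mm, new_addr);
			if (src == NULL)	/* re-take the atomic kmap */
				src = get_one_pte_map_nested(mm, old_addr);
			if (src && dst)
				error = copy_one_pte(mm, src, dst);
			if (src)
				pte_unmap_nested(src);
			if (dst)
				pte_unmap(dst);
		}
		flush_tlb_page(vma, old_addr);
		spin_unlock(&mm->page_table_lock);
		return error;
	}
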
When I wrote this code I made the peek conditional on CONFIG_HIGHPTE. But that
was bogus: even in the !CONFIG_HIGHPTE case, get_one_pte_map_nested() will run
kmap_atomic() against the pte page, and kmap_atomic() disables preemption.
Net effect: with CONFIG_HIGHMEM && !CONFIG_HIGHPTE we can end up performing a
GFP_KERNEL pte page allocation while preemption is disabled. It triggers a
might_sleep() warning and indeed is buggy.
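
Schematically, the bad sequence was (function names are from this file;
the annotations are mine):

	src = get_one_pte_map_nested(mm, old_addr);
		/* -> pte_offset_map_nested() -> kmap_atomic(), which
		 * bumps the preempt count even when the pte page is
		 * lowmem (the !CONFIG_HIGHPTE case) */
	dst = alloc_one_pte_map(mm, new_addr);
		/* -> pte_alloc_map(), which may perform a GFP_KERNEL
		 * pte page allocation: might_sleep() sees in_atomic()
		 * and warns, and we really can schedule away while
		 * holding the atomic kmap */
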
So the patch removes the conditionality: even in the !CONFIG_HIGHPTE case we
still do the pagetable peek and drop the kmap if necessary.
(Arguably, we shouldn't be performing the kmap_atomic() at all if
!CONFIG_HIGHPTE: all it does there is pointlessly disable preemption.)
(Arguably, kmap_atomic() should not be disabling preemption if the target
page is not highmem. But we do it anyway at present, both for consistency
(i.e. debug coverage) and because the filemap.c pagecache copying functions
rely on kmap_atomic() disabling do_no_page() for all pages: see
do_no_page()'s use of in_atomic().)
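
For reference, the filemap.c pattern being relied upon looks like this (a
sketch of the file_read_actor()-style copy; declarations simplified):

	char *kaddr;
	unsigned long left;

	/*
	 * Fast path: copy under an atomic kmap.  If the user page is
	 * absent, the fault handler notices in_atomic() and refuses to
	 * sleep, so __copy_to_user() returns short instead.
	 */
	kaddr = kmap_atomic(page, KM_USER0);
	left = __copy_to_user(desc->buf, kaddr + offset, size);
	kunmap_atomic(kaddr, KM_USER0);

	if (left) {
		/* Slow path: sleeping kmap; faults can now be serviced */
		kaddr = kmap(page);
		left = __copy_to_user(desc->buf, kaddr + offset, size);
		kunmap(page);
	}
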
 	return pte;
 }
 
-#ifdef CONFIG_HIGHPTE /* Save a few cycles on the sane machines */
 static inline int page_table_present(struct mm_struct *mm, unsigned long addr)
 {
 	pgd_t *pgd;
 	pmd_t *pmd;
 
 	pgd = pgd_offset(mm, addr);
 	if (pgd_none(*pgd))
 		return 0;
 	pmd = pmd_offset(pgd, addr);
 	return pmd_present(*pmd);
 }
-#else
-#define page_table_present(mm, addr) (1)
-#endif
 
 static inline pte_t *alloc_one_pte_map(struct mm_struct *mm, unsigned long addr)
 {