From ba1f08f14b523e1722ee423eb729663e3fa5b192 Mon Sep 17 00:00:00 2001
From: Andrew Morton
Date: Fri, 7 Jan 2005 22:03:01 -0800
Subject: [PATCH] [PATCH] readpage-vs-invalidate fix

A while ago we merged a patch which tried to solve a problem wherein a
concurrent read() and invalidate_inode_pages() would cause the read() to
return -EIO because invalidate cleared PageUptodate() at the wrong time.

That patch tests for (page_count(page) != 2) in invalidate_complete_page()
and bails out if so.

Problem is, the page may be in the per-cpu LRU front-ends over in
lru_cache_add.  This elevates the refcount pending spillage of the page
onto the LRU for real.  That causes a false positive in
invalidate_complete_page(), causing the page to not get invalidated.  This
screws up the logic in my new O_DIRECT-vs-buffered coherency fix.

So let's solve the invalidate-vs-read race in a different manner.  Over on
the read() side, add an explicit check to see if the page was invalidated.
If so, just drop it on the floor and redo the read from scratch.

Note that only do_generic_mapping_read() needs treatment.
filemap_nopage(), filemap_getpage() and read_cache_page() are already
doing the oh-it-was-invalidated-so-try-again thing.
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/filemap.c  | 12 +++++++++++-
 mm/truncate.c |  4 ----
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index b832d146f6c4..57e39e6a6b3a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -802,11 +802,21 @@ readpage:
 		goto readpage_error;
 
 	if (!PageUptodate(page)) {
-		wait_on_page_locked(page);
+		lock_page(page);
 		if (!PageUptodate(page)) {
+			if (page->mapping == NULL) {
+				/*
+				 * invalidate_inode_pages got it
+				 */
+				unlock_page(page);
+				page_cache_release(page);
+				goto find_page;
+			}
+			unlock_page(page);
 			error = -EIO;
 			goto readpage_error;
 		}
+		unlock_page(page);
 	}
 
 	/*
diff --git a/mm/truncate.c b/mm/truncate.c
index 733aa033678d..b18ec4c41ae5 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -82,10 +82,6 @@ invalidate_complete_page(struct address_space *mapping, struct page *page)
 	}
 
 	BUG_ON(PagePrivate(page));
-	if (page_count(page) != 2) {
-		spin_unlock_irq(&mapping->tree_lock);
-		return 0;
-	}
 	__remove_from_page_cache(page);
 	spin_unlock_irq(&mapping->tree_lock);
 	ClearPageUptodate(page);
-- 
2.47.3