The recent introduction of ptep_set_access_flags() with the optimisation
of not flushing the TLB unfortunately broke ppc32 CPUs with no hash
table.
The data access exception code path in assembly for these doesn't
properly deal with the case where the TLB entry is present with the
wrong _PAGE_RW: it just calls do_page_fault() again instead of
replacing the TLB entry.
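
To make the failure mode concrete, here's roughly what the generic
write-fault reuse path in mm/memory.c looks like after that
optimisation (sketched from memory, so names and details are
approximate):

	flush_cache_page(vma, address);
	entry = maybe_mkwrite(pte_mkyoung(pte_mkdirty(pte)), vma);
	ptep_set_access_flags(vma, address, page_table, entry, 1);
	update_mmu_cache(vma, address, entry);
	/* no flush_tlb_page() here any more: on hash-based CPUs the
	 * DSI handler repairs the stale translation, but on a
	 * software-loaded TLB the old read-only entry survives, the
	 * store faults again, and we loop through do_page_fault() */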
Fixing the asm code for all the different CPU types affected (yah,
embedded PPCs all have different MMUs =P) is painful and needs testing
I can't do at the moment, so here's a fix that just flushes the TLB
page when changing the access flags on non-hash based machines. Please
apply.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
flush_hash_pages(mm->context, addr, ptephys, 1);
}
+/*
+ * Called by ptep_set_access_flags, must flush on CPUs for which the
+ * DSI handler can't just "fixup" the TLB on a write fault
+ */
+void flush_tlb_page_nohash(struct vm_area_struct *vma, unsigned long addr)
+{
+	if (Hash != 0)
+		return;
+	_tlbie(addr);
+}
+
/*
* Called at the end of a mmu_gather operation to make sure the
* TLB flush is completely done.
(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW);
pte_update(ptep, 0, bits);
}
+
#define ptep_set_access_flags(__vma, __address, __ptep, __entry, __dirty) \
-	__ptep_set_access_flags(__ptep, __entry, __dirty)
+	do { \
+		__ptep_set_access_flags(__ptep, __entry, __dirty); \
+		flush_tlb_page_nohash(__vma, __address); \
+	} while(0)
/*
* Macro to mark a page protection value as "uncacheable".
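
For reference, with both hunks applied, a call like
ptep_set_access_flags(vma, address, ptep, entry, 1) expands to roughly
this on a non-hash CPU (illustrative only, argument names mine):

	__ptep_set_access_flags(ptep, entry, 1); /* set DIRTY/ACCESSED/RW in the PTE */
	flush_tlb_page_nohash(vma, address);     /* Hash == 0, so _tlbie(address) */

The do { ... } while(0) wrapper just keeps the two statements together
so the macro stays safe when used as a single statement, e.g. in an
unbraced if().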