Correct a long-standing error in _pmap_unwire_pte_hold() affecting multiprocessors. Specifically, the error is conditioning the call to pmap_invalidate_page() on whether the pmap is active on the current CPU. This call must be unconditional. Regardless of whether the pmap is active on the CPU performing _pmap_unwire_pte_hold(), it could be active on another CPU. For example, a call to pmap_remove_all() by the page daemon could result in a call to _pmap_unwire_pte_hold() with the pmap inactive on the current CPU and active on another CPU. In such circumstances, failing to call pmap_invalidate_page() results in a stale TLB entry on the other CPU that still maps the now deallocated page table page. What happens next is typically a mysterious panic in pmap_enter() by the other CPU, either "pmap_enter: attempted pmap_enter on 4MB page" or "pmap_enter: pte vanished, va: 0x%lx". Both occur because the former page table page has been recycled and allocated to a new purpose. Consequently, it no longer contains zeroes.

See also Peter's i386/i386/pmap.c revision 1.448 and the related e-mail thread last year.

Many thanks to the engineers at Sandvine for providing clear and concise information until all of the pieces of the puzzle fell into place and for testing an earlier patch.

MT5 Candidate
commit a6bec8ad06
parent 5015e1ce1f
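To make the failure mode in the log message concrete, here is a deliberately simplified, hypothetical user-space model of the race; it is not kernel code, and none of its names (tlb_entry, invalidate_all_cpus, free_page_table_page) are FreeBSD interfaces. It only shows why a TLB flush that is conditional on "is this pmap current on my CPU" can leave a stale entry on another CPU, which is the bug the hunks below remove.

/*
 * Hypothetical model: two "CPUs" each cache a translation for a page
 * table page in a private "TLB".  If the CPU freeing the page flushes
 * only its own TLB (the old, buggy behaviour), the other CPU keeps a
 * stale entry that still points at the recycled page.
 */
#include <stdbool.h>
#include <stdio.h>

#define	NCPUS	2

struct tlb_entry {
	bool	valid;
	int	frame;		/* physical frame the cached translation maps */
};

static struct tlb_entry tlb[NCPUS];	/* one cached translation per CPU */

/* Stands in for pmap_invalidate_page(), which must reach every CPU. */
static void
invalidate_all_cpus(void)
{
	for (int cpu = 0; cpu < NCPUS; cpu++)
		tlb[cpu].valid = false;
}

static void
free_page_table_page(int current_cpu, bool unconditional)
{
	if (unconditional)
		invalidate_all_cpus();
	else
		tlb[current_cpu].valid = false;	/* buggy: flush only ourselves */
	/* The freed frame is immediately recycled for an unrelated purpose. */
}

int
main(void)
{
	/* Both CPUs have the page table page (frame 7) cached. */
	for (int cpu = 0; cpu < NCPUS; cpu++)
		tlb[cpu] = (struct tlb_entry){ .valid = true, .frame = 7 };

	/* CPU 0 (think: the page daemon) frees the page with the old logic. */
	free_page_table_page(0, false);
	printf("CPU 1 stale entry after conditional flush: %s\n",
	    tlb[1].valid ? "yes (bug)" : "no");

	/* Repeat with the corrected, unconditional shootdown. */
	for (int cpu = 0; cpu < NCPUS; cpu++)
		tlb[cpu] = (struct tlb_entry){ .valid = true, .frame = 7 };
	free_page_table_page(0, true);
	printf("CPU 1 stale entry after unconditional flush: %s\n",
	    tlb[1].valid ? "yes (bug)" : "no");
	return (0);
}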
@@ -1014,13 +1014,12 @@ _pmap_unwire_pte_hold(pmap_t pmap, vm_offset_t va, vm_page_t m)
 		pdppg = PHYS_TO_VM_PAGE(*pmap_pml4e(pmap, va) & PG_FRAME);
 		pmap_unwire_pte_hold(pmap, va, pdppg);
 	}
-	if (pmap_is_current(pmap)) {
-		/*
-		 * Do an invltlb to make the invalidated mapping
-		 * take effect immediately.
-		 */
-		pmap_invalidate_page(pmap, pteva);
-	}
+
+	/*
+	 * Do an invltlb to make the invalidated mapping
+	 * take effect immediately.
+	 */
+	pmap_invalidate_page(pmap, pteva);
 
 	vm_page_free_zero(m);
 	atomic_subtract_int(&cnt.v_wire_count, 1);
@@ -1030,18 +1030,13 @@ _pmap_unwire_pte_hold(pmap_t pmap, vm_page_t m)
 	 */
 	pmap->pm_pdir[m->pindex] = 0;
 	--pmap->pm_stats.resident_count;
+
 	/*
-	 * We never unwire a kernel page table page, making a
-	 * check for the kernel_pmap unnecessary.
+	 * Do an invltlb to make the invalidated mapping
+	 * take effect immediately.
 	 */
-	if ((pmap->pm_pdir[PTDPTDI] & PG_FRAME) == (PTDpde[0] & PG_FRAME)) {
-		/*
-		 * Do an invltlb to make the invalidated mapping
-		 * take effect immediately.
-		 */
-		pteva = VM_MAXUSER_ADDRESS + i386_ptob(m->pindex);
-		pmap_invalidate_page(pmap, pteva);
-	}
+	pteva = VM_MAXUSER_ADDRESS + i386_ptob(m->pindex);
+	pmap_invalidate_page(pmap, pteva);
 
 	vm_page_free_zero(m);
 	atomic_subtract_int(&cnt.v_wire_count, 1);