In pmap_set_pte(), make sure to enforce ordering by inserting a memory
fence. Under system load, the CPU has been found to change the order
in which the stores are made visible. When the tag is made visible
before the other TLB values, other CPUs may use the invalid TLB values
and do bad things.
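
For illustration only (the committed change is the single ia64_mf() call in
the diff below), the publish-before-tag ordering can be sketched in portable
C11; the struct and function names here are hypothetical:

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical stand-in for a translation entry; the tag validates it. */
struct xlat_entry {
	uint64_t pte;		/* mapping bits: PA, permissions, ... */
	uint64_t itir;		/* page size / protection key */
	_Atomic uint64_t tag;	/* entry only usable once the tag matches */
};

static void
xlat_publish(struct xlat_entry *e, uint64_t pte, uint64_t itir, uint64_t tag)
{

	e->pte = pte;
	e->itir = itir;
	/*
	 * Fence before the tag store so that a CPU observing the new tag
	 * also observes the pte/itir stores; this is the role ia64_mf()
	 * (a full memory fence) plays in pmap_set_pte().
	 */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&e->tag, tag, memory_order_relaxed);
}

int
main(void)
{
	struct xlat_entry e = { 0, 0, 0 };

	/* Arbitrary example values. */
	xlat_publish(&e, 0x123, 0x34, 0x9);
	return (0);
}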

While here (i.e., not a fix), don't return errors from pmap_remove_vhpt()
to callers of pmap_remove_pte(). Those callers don't check the return
value and as such don't do what is needed to keep a consistent state.
More importantly, pmap_remove_vhpt() can't really fail without that
failure indicating something unintended. Using KASSERT is therefore better.
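
For comparison only (a hypothetical userland analogue, not the kernel code):
an error return that callers never check is silently lost, whereas an
assertion makes the cannot-happen case fail loudly in a debug build, much as
KASSERT does in an INVARIANTS kernel:

#include <assert.h>

/* Hypothetical helper standing in for pmap_remove_vhpt(). */
static int
remove_entry(unsigned long va)
{

	(void)va;
	return (0);	/* can only fail if internal state is corrupt */
}

static void
remove_mapping(unsigned long va)
{
	int error;

	error = remove_entry(va);
	/* A failure here is a bug, not a recoverable condition. */
	assert(error == 0);
}

int
main(void)
{

	remove_mapping(0x1000);
	return (0);
}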

PR:		182999, 183227
Marcel Moolenaar 2014-01-20 18:37:35 +00:00
parent 0894229871
commit 229d894543

@@ -1303,6 +1303,8 @@ pmap_set_pte(struct ia64_lpte *pte, vm_offset_t va, vm_offset_t pa,
 	pte->itir = PAGE_SHIFT << 2;
 
+	ia64_mf();
+
 	pte->tag = ia64_ttag(va);
 }
@@ -1321,8 +1323,8 @@ pmap_remove_pte(pmap_t pmap, struct ia64_lpte *pte, vm_offset_t va,
 	 * First remove from the VHPT.
 	 */
 	error = pmap_remove_vhpt(va);
-	if (error)
-		return (error);
+	KASSERT(error == 0, ("%s: pmap_remove_vhpt returned %d",
+	    __func__, error));
 	pmap_invalidate_page(va);