Move INVLPG to pmap_quick_enter_page() from pmap_quick_remove_page().

If the processor prefetches TLB entries adjacent to the one being accessed
(as some processors have been reported to do), then the spin lock does not
prevent the situation described in the "AMD64 Architecture Programmer's
Manual Volume 2: System Programming", rev. 3.23, section 7.3.1, "Special
Coherency Considerations".

Reported and reviewed by:	alc
Sponsored by:	The FreeBSD Foundation
MFC after:	1 week
Differential revision:	https://reviews.freebsd.org/D37770
commit 231d75568f (parent cde70e312c)
Author: Konstantin Belousov
Date:   2022-12-29 22:48:51 +02:00


@@ -10423,6 +10423,13 @@ pmap_quick_enter_page(vm_page_t m)
                 return (PHYS_TO_DMAP(paddr));
         mtx_lock_spin(&qframe_mtx);
         KASSERT(*vtopte(qframe) == 0, ("qframe busy"));
+
+        /*
+         * Since qframe is exclusively mapped by us, and we do not set
+         * PG_G, we can use INVLPG here.
+         */
+        invlpg(qframe);
+
         pte_store(vtopte(qframe), paddr | X86_PG_RW | X86_PG_V | X86_PG_A |
             X86_PG_M | pmap_cache_bits(kernel_pmap, m->md.pat_mode, 0));
         return (qframe);
@@ -10435,14 +10442,6 @@ pmap_quick_remove_page(vm_offset_t addr)
         if (addr != qframe)
                 return;
         pte_store(vtopte(qframe), 0);
-
-        /*
-         * Since qframe is exclusively mapped by
-         * pmap_quick_enter_page() and that function doesn't set PG_G,
-         * we can use INVLPG here.
-         */
-        invlpg(qframe);
-
         mtx_unlock_spin(&qframe_mtx);
 }
 
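
For context, below is a condensed sketch of how the two functions read after
this change, reconstructed from the hunks above. The hunks come from the amd64
pmap; the lines outside the hunk context (local variable, VM_PAGE_TO_PHYS(),
dmaplimit check) and the surrounding declarations (qframe, qframe_mtx,
vtopte(), pte_store(), invlpg(), X86_PG_* bits) are assumed from that file and
not shown in the diff. The comment sentences about the prefetch window are an
added reading of the commit message, not part of the committed source.

vm_offset_t
pmap_quick_enter_page(vm_page_t m)
{
        vm_paddr_t paddr;

        paddr = VM_PAGE_TO_PHYS(m);
        if (paddr < dmaplimit)
                return (PHYS_TO_DMAP(paddr));   /* fast path via the direct map */
        mtx_lock_spin(&qframe_mtx);
        KASSERT(*vtopte(qframe) == 0, ("qframe busy"));

        /*
         * Since qframe is exclusively mapped by us, and we do not set
         * PG_G, we can use INVLPG here.  Invalidating before the new
         * PTE is stored flushes any stale translation this CPU may have
         * prefetched while a previous qframe mapping was valid; the
         * INVLPG formerly done in pmap_quick_remove_page() ran only on
         * the removing CPU and could not flush it.
         */
        invlpg(qframe);

        pte_store(vtopte(qframe), paddr | X86_PG_RW | X86_PG_V | X86_PG_A |
            X86_PG_M | pmap_cache_bits(kernel_pmap, m->md.pat_mode, 0));
        return (qframe);
}

void
pmap_quick_remove_page(vm_offset_t addr)
{

        if (addr != qframe)
                return;
        pte_store(vtopte(qframe), 0);   /* no INVLPG here any longer */
        mtx_unlock_spin(&qframe_mtx);
}

Invalidating on the enter side means each CPU flushes its own, possibly
prefetched, translation for qframe immediately before it installs a new PTE,
which the qframe_mtx spin lock alone cannot guarantee.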