Invalidate cache for the PDPTE page when using PAE paging but PAT is not supported.

According to SDM rev. 69 vol. 3, for PDPTE register loads:
- when PAT is not supported, access to the PDPTE page is performed as
  UC, see 4.9.1;
- when PAT is supported, the access is WB, see 4.9.2.

So potentially the CPU might load stale data from memory as PDPTEs if
both PAT and self-snoop are not implemented.  To be safe, add a full
local cache flush to pmap_cold() before the initial load of cr3, and
flush the PDPTE page in pmap_pinit() if PAT is not implemented.  (A
standalone sketch of the PAT check follows the diff below.)

Reviewed by:	markj
Sponsored by:	The FreeBSD Foundation
MFC after:	2 weeks
Differential revision:	https://reviews.freebsd.org/D19365
Konstantin Belousov, 2019-02-28 19:19:02 +00:00
commit bced332adf (parent 1ece6232d2)

@@ -564,6 +564,8 @@ __CONCAT(PMTYPE, cold)(void)
 	/* Now enable paging */
 #ifdef PMAP_PAE_COMP
 	cr3 = (u_int)IdlePDPT;
+	if ((cpu_feature & CPUID_PAT) == 0)
+		wbinvd();
 #else
 	cr3 = (u_int)IdlePTD;
 #endif
@@ -2040,6 +2042,14 @@ __CONCAT(PMTYPE, pinit)(pmap_t pmap)
 	}
 	pmap_qenter((vm_offset_t)pmap->pm_pdir, pmap->pm_ptdpg, NPGPTD);
+#ifdef PMAP_PAE_COMP
+	if ((cpu_feature & CPUID_PAT) == 0) {
+		pmap_invalidate_cache_range(
+		    trunc_page((vm_offset_t)pmap->pm_pdpt),
+		    round_page((vm_offset_t)pmap->pm_pdpt +
+		    NPGPTD * sizeof(pdpt_entry_t)));
+	}
+#endif
 	for (i = 0; i < NPGPTD; i++)
 		if ((pmap->pm_ptdpg[i]->flags & PG_ZERO) == 0)
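
For illustration only, here is a minimal, self-contained sketch of the
guard this change adds: detect PAT via CPUID leaf 1 (EDX bit 16) and,
when it is absent, flush the cache lines backing a page-table page so a
subsequent UC read cannot see stale memory.  The helper names, the
user-level clflush fallback, and the fixed 64-byte line size are
assumptions of the sketch; the kernel code above uses wbinvd() and
pmap_invalidate_cache_range() instead.

/*
 * Sketch only, not the FreeBSD implementation.  Compile on x86 with
 * GCC or Clang.
 */
#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>

#define	CPUID1_EDX_PAT		(1u << 16)	/* PAT supported */
#define	CPUID1_EDX_CLFSH	(1u << 19)	/* CLFLUSH supported */
#define	PAGE_SIZE_SKETCH	4096
#define	CACHE_LINE_SKETCH	64	/* real size: CPUID.1:EBX[15:8] * 8 */

static inline void
clflush(uintptr_t addr)
{
	__asm __volatile("clflush %0" : : "m" (*(char *)addr));
}

static void
flush_pdpt_page_if_no_pat(void *pg)
{
	unsigned int eax, ebx, ecx, edx;
	uintptr_t p;

	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
		return;
	if ((edx & CPUID1_EDX_PAT) != 0)
		return;		/* PDPTE loads are WB, nothing to flush. */
	if ((edx & CPUID1_EDX_CLFSH) != 0) {
		/* Flush every cache line of the page. */
		for (p = (uintptr_t)pg;
		    p < (uintptr_t)pg + PAGE_SIZE_SKETCH;
		    p += CACHE_LINE_SKETCH)
			clflush(p);
	} else {
		/* Only the privileged wbinvd instruction remains. */
		printf("no CLFLUSH: a full wbinvd would be needed\n");
	}
}

int
main(void)
{
	static uint64_t fake_pdpt[PAGE_SIZE_SKETCH / sizeof(uint64_t)];

	flush_pdpt_page_if_no_pat(fake_pdpt);
	return (0);
}

The diff itself mirrors the commit message: pmap_cold() performs a full
wbinvd() before the first load of cr3, while pmap_pinit() flushes only
the cache lines backing the PDPT page via pmap_invalidate_cache_range().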