Fix the undersupported option KERNLOAD, part 1: fix crashes in locore

when KERNLOAD is not a multiple of NBPDR (not the default) and PSE is
enabled (the default if the CPU supports it).  Addresses in PDEs must
be a multiple of NBPDR in the PSE case, but were not so in the crashing
case.

KERNLOAD defaults to NBPDR.  NBPDR is 4 MB for !PAE and 2 MB for PAE.
The default can be changed by editing i386/include/vmparam.h or using
makeoptions.  It can be changed to less than NBPDR to save real and
virtual memory at a small cost in time, or to more than NBPDR to waste
real and virtual memory.  It must be larger than 1 MB and a multiple of
PAGE_SIZE.  When it is less than NBPDR, it is necessarily not a multiple
of NBPDR.  This case has much larger bugs which will be fixed in part 2.

The fix is to use PSE only for physical addresses above <KERNLOAD
rounded _up_ to an NBPDR boundary>.  When the rounding is nonzero,
this leaves part of the kernel not using large pages.  Rounding down
would avoid this pessimization, but would break the setting of PAT bits
on i/o pages if it went below 1 MB.  Since rounding down always goes
below 1 MB when KERNLOAD < NBPDR, and the KERNLOAD > NBPDR case is not
useful, never round down.

Fix related style bugs (e.g., wrong literal values for NBPDR in comments).

Reviewed by:	kib
bde 2017-12-18 09:32:56 +00:00
parent 1e07b95bcf
commit 622efbbef8
2 changed files with 9 additions and 9 deletions

sys/i386/i386/locore.s

@@ -668,7 +668,7 @@ over_symalloc:
 no_kernend:
 	addl	$PDRMASK,%esi		/* Play conservative for now, and */
-	andl	$~PDRMASK,%esi		/* ... wrap to next 4M. */
+	andl	$~PDRMASK,%esi		/* ... round up to PDR boundary */
 	movl	%esi,R(KERNend)		/* save end of kernel */
 	movl	%esi,R(physfree)	/* next free page is at end of kernel */
@@ -784,7 +784,8 @@ no_kernend:
 /*
  * Create an identity mapping for low physical memory, including the kernel.
- * The part of this mapping that covers the first 1 MB of physical memory
+ * The part of this mapping given by the first PDE (for the first 4 MB or 2
+ * MB of physical memory)
  * becomes a permanent part of the kernel's address space.  The rest of this
  * mapping is destroyed in pmap_bootstrap().  Ordinarily, the same page table
  * pages are shared by the identity mapping and the kernel's native mapping.
@@ -815,10 +816,9 @@ no_kernend:
 #endif
 /*
- * For the non-PSE case, install PDEs for PTs covering the KVA.
- * For the PSE case, do the same, but clobber the ones corresponding
- * to the kernel (from btext to KERNend) with 4M (2M for PAE) ('PS')
- * PDEs immediately after.
+ * Install PDEs for PTs covering enough kva to bootstrap.  Then for the PSE
+ * case, replace the PDEs whose coverage is strictly within the kernel
+ * (between KERNLOAD (rounded up) and KERNend) by large-page PDEs.
 */
 	movl	R(KPTphys), %eax
 	movl	$KPTDI, %ebx
@@ -828,10 +828,10 @@ no_kernend:
 	je	done_pde
 	movl	R(KERNend), %ecx
-	movl	$KERNLOAD, %eax
+	movl	$(KERNLOAD + PDRMASK) & ~PDRMASK, %eax
 	subl	%eax, %ecx
 	shrl	$PDRSHIFT, %ecx
-	movl	$(KPTDI+(KERNLOAD/(1 << PDRSHIFT))), %ebx
+	movl	$KPTDI + ((KERNLOAD + PDRMASK) >> PDRSHIFT), %ebx
 	shll	$PDESHIFT, %ebx
 	addl	R(IdlePTD), %ebx
 	orl	$(PG_V|PG_RW|PG_PS), %eax

sys/i386/i386/pmap.c

@@ -662,7 +662,7 @@ pmap_set_pg(void)
 	endva = KERNBASE + KERNend;
 	if (pseflag) {
-		va = KERNBASE + KERNLOAD;
+		va = KERNBASE + roundup2(KERNLOAD, NBPDR);
 		while (va < endva) {
 			pdir_pde(PTD, va) |= pgeflag;
 			invltlb();	/* Flush non-PG_G entries. */