pmap on i386
- check for change in executable status in pmap_enter
- pmap_qenter and pmap_qremove only need to invalidate the range if one
of the pages has been referenced
- remove pmap_kenter/pmap_kremove as they were only used by pmap_qenter
and pmap_qremove
- in pmap_copy don't copy wired bit to destination pmap
- mpte was unused in pmap_enter_object - remove
- pmap_enter_quick_locked is not called on the kernel_pmap, remove check
- move pmap_remove_write specific logic out of tte_clear_phys_bit
- in pmap_protect check for removal of execute bit
- panic in the presence of a wired page in pmap_remove_all
- pmap_zero_range can call hwblkclr if offset is zero and size is PAGE_SIZE
  (see the sketch after this list)
- tte_clear_virt_bit is only used by pmap_change_wiring - thus it can be
greatly simplified
- pmap_invalidate_page need only be called in tte_clear_phys_bit if there
is a match with flags
- lock the pmap in tte_clear_phys_bit so that clearing the page bits is
atomic with invalidating the page
- these changes result in a ~100 second (~2.5%) reduction in buildworld time,
  building from a malloc backed disk to a malloc backed disk
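
A minimal sketch of the pmap_zero_range fast path noted above, assuming the
sparc64-style hwblkclr() block-clear primitive and the TLB_PHYS_TO_DIRECT()
direct-map translation macro; the exact signature is an assumption:

    static void
    pmap_zero_range(vm_paddr_t pa, vm_offset_t off, vm_size_t size)
    {
        char *va = (char *)TLB_PHYS_TO_DIRECT(pa);

        if (off == 0 && size == PAGE_SIZE)
            hwblkclr(va);          /* block-clear the whole page */
        else
            bzero(va + off, size); /* partial page: plain bzero */
    }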
usage to conform to that of tl0_trap - the separate code path
for unaligned faults was never getting used (and evidently doesn't
work), so ifdef it out for now
- The PCPU usage was to ensure that there were no faults on the stack while
  the tte_hash_bucket lock was held - but this can be avoided by making sure
  the address on the stack is already referenced (see the sketch after this list).
- PCPU removal obviates the need for critical_{enter, exit}
- rename skip_utrap to tl0_skip_utrap to indicate its use by the fill trap fault handler
- handle a null kstack by switching to the idle thread's stack and then going to trap
- correctly handle an unaligned or unmapped stack during a fill trap
- save off some extra data in the pcpu pad in ptl1_panic
- add an assert that PCB is valid in vm_machdep.c
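
A minimal sketch of the stack-referencing trick mentioned above, with
hypothetical names (hash_bucket_lock/unlock and tte_lookup_locked are
placeholders): the store to the on-stack variable forces any fault on that
stack page to happen before the bucket lock is taken, so no fault can occur
while it is held and no critical section is needed:

    static uint64_t
    tte_hash_lookup_sketch(struct tte_hash_bucket *bucket, vm_offset_t va)
    {
        volatile uint64_t data = 0;  /* references the stack slot up front */

        hash_bucket_lock(bucket);    /* placeholder lock primitive */
        data = tte_lookup_locked(bucket, va);
        hash_bucket_unlock(bucket);
        return (data);
    }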
at which the kernel should start allocating physical memory. The primary
purpose of this is to test 64-bit cleanness of the data path by setting
hw.physmemstart=4G so that all physical allocations are above 4GB. AMD64
and i386/PAE could also benefit from having this option.
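
A minimal sketch of how such a tunable could be applied while the boot-time
memory ranges are assembled; the mem_range layout and function names are
assumptions for illustration, and TUNABLE_ULONG_FETCH is the stock FreeBSD
tunable-fetch macro (its parser accepts suffixes such as "4G"):

    struct mem_range { vm_paddr_t mr_start; vm_size_t mr_size; };

    static u_long physmem_start;    /* 0 means no floor */

    static void
    fetch_physmemstart(void)
    {
        TUNABLE_ULONG_FETCH("hw.physmemstart", &physmem_start);
    }

    /* Trim one candidate range against the floor; returns 0 if the range
     * lies entirely below hw.physmemstart and should be skipped. */
    static int
    trim_physmemstart(struct mem_range *mr)
    {
        if (mr->mr_start + mr->mr_size <= physmem_start)
            return (0);
        if (mr->mr_start < physmem_start) {
            mr->mr_size -= physmem_start - mr->mr_start;
            mr->mr_start = physmem_start;
        }
        return (1);
    }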
is already bounded by hw.physmem to calculate phys_avail[] - previously only
real_phys_avail[] was being bounded by hw.physmem, so we were allocating memory
that wasn't mapped in the direct map
- shuffle memory range following kernel to the beginning of phys_avail
- have the direct area use 256MB pages where possible (see the sketch after
  this list)
- remove dead code from the end of pmap_bootstrap
- have pmap_alloc_contig_pages check all memory ranges in phys_avail before
giving up
- informal benchmarking indicates a ~5% speedup on buildworld
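
A minimal sketch of the large-page selection in the direct-map loop: use the
largest hardware page that is both aligned and fits what remains of the
range. The page-size constants mirror the sun4v sizes (8KB base, 4MB,
256MB); direct_map_enter() is a placeholder:

    #define PAGE_SIZE_4M    (4UL << 20)
    #define PAGE_SIZE_256M  (256UL << 20)

    static void
    direct_map_range(vm_paddr_t start, vm_paddr_t end)
    {
        vm_paddr_t pa;
        vm_size_t size;

        for (pa = start; pa < end; pa += size) {
            if ((pa & (PAGE_SIZE_256M - 1)) == 0 &&
                end - pa >= PAGE_SIZE_256M)
                size = PAGE_SIZE_256M;    /* 256MB mapping */
            else if ((pa & (PAGE_SIZE_4M - 1)) == 0 &&
                end - pa >= PAGE_SIZE_4M)
                size = PAGE_SIZE_4M;      /* 4MB mapping */
            else
                size = PAGE_SIZE;         /* 8KB base page */
            direct_map_enter(pa, size);   /* placeholder */
        }
    }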
Make part of John Birrell's KSE patch permanent.
Specifically, remove:
Any reference to the ksegrp structure. This feature was
never fully utilised and made things overly complicated.
All code in the scheduler that tried to make threaded programs
fair to unthreaded programs. Libpthread processes will already
do this to some extent and libthr processes already disable it.
Also:
Since this makes such a big change to the scheduler(s), take the opportunity
to rename some structures and elements that had to be moved anyhow.
This makes the code a lot more readable.
The ULE scheduler compiles again but I have no idea if it works.
The 4BSD scheduler still requires a little cleaning, and some functions that now do
almost nothing will go away, but I thought I'd do that as a separate commit.
Tested by David Xu, and Dan Eischen using libthr and libpthread.
- create real_phys_avail which includes all memory ranges to be added to the direct map
- merge in nucleus memory to real_phys_avail
- distinguish between tag VA and index VA in tsb_set_tte_real for cases where
  page_size != index_page_size (see the sketch after this list)
- clean up direct map loop
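
A minimal illustration of the tag-VA/index-VA distinction with simplified
types: the TSB slot is always selected with the index page size, while the
tag that identifies the entry uses the mapping's own page size, so the two
only coincide when page_size == index_page_size. The names below are
illustrative, not the actual tsb_set_tte_real code:

    struct tsb_entry { uint64_t tte_tag, tte_data; };  /* simplified */

    static struct tsb_entry *
    tsb_lookup_sketch(struct tsb_entry *tsb, uint64_t tsb_mask,
        vm_offset_t va, int index_page_shift, int page_shift)
    {
        uint64_t i = (va >> index_page_shift) & tsb_mask;  /* index VA */
        uint64_t tag = va >> page_shift;                   /* tag VA */

        return (tsb[i].tte_tag == tag ? &tsb[i] : NULL);
    }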
by default for sun4v where it is absolutely required.
This change moves the buffer from struct pcpu to the stack to avoid
using the critical section, which created a LOR in a couple of cases
due to interaction with the tty code and kqueue. The LOR can't be
fixed while keeping the critical section, and the pcpu buffer can't be
used without the critical section.
Putting the buffer on the stack was my initial solution, but it was
pointed out that the stress on the stack might cause problems
depending on the call path. We don't have a way of creating tests
for those possible cases, so it's best to leave this as an option
for the time being. In time we may get enough data to enable this
option more generally.
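
A minimal sketch of the two configurations; PRINTF_BUFR_SIZE is the real
option name, but the function shape below is simplified:

    void
    printf_sketch(const char *fmt, ...)
    {
    #ifdef PRINTF_BUFR_SIZE
        /* On-stack buffer: output from concurrent CPUs doesn't interleave
           and no critical section or pcpu storage is needed, at the cost
           of PRINTF_BUFR_SIZE bytes of kernel stack per call. */
        char bufr[PRINTF_BUFR_SIZE];
    #else
        char *bufr = NULL;    /* unbuffered: characters may interleave */
    #endif
        /* ... format into bufr (if present), then flush to the console ... */
    }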
retrieval, rather than using pad
- save the fault address in sfar for use by the alignment fixup handler
- mask off the trap number, so the context id doesn't confuse the UT_MAX
comparison
This change fixes alignment fixup handling, which is needed for traceroute
to work in spite of its copious unaligned accesses.
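
A minimal sketch of the masking fix; UT_MAX is the real bound on user trap
numbers, but UT_TRAPNO_MASK and the surrounding names are assumptions about
how the trap number shares a register with the context id:

    #define UT_TRAPNO_MASK  0x1ffUL    /* assumed trap-number field width */

    static void
    deliver_utrap_sketch(uint64_t trap_type, void (**utrap_table)(uint64_t))
    {
        uint64_t tt = trap_type & UT_TRAPNO_MASK;  /* drop context id bits */

        if (tt < UT_MAX && utrap_table[tt] != NULL)
            utrap_table[tt](tt);    /* hand off to the user trap handler */
    }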
as this happens much earlier in trap handling.
The fact that we continued to do this when it was no longer necessary caused
breakpoints to map to SIGILL as opposed to SIGTRAP :-(.