by UltraSPARC-IV and -IV+ as well as SPARC64 V, VI, VII and VIIIfx CPUs.
- Replace TLB_PCXR_PGSZ_MASK and TLB_SCXR_PGSZ_MASK with TLB_CXR_PGSZ_MASK,
which is just the complement of TLB_CXR_CTX_MASK, instead of trying to
assemble it from the page size bits, which vary across CPUs (sketched
after this list).
- Add macros for the remainder of the SFSR bits, which are useful for at
least debugging purposes.
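A minimal sketch of the mask relationship; the 13-bit context field
width is the UltraSPARC value and is assumed here:

    /* The context field occupies the low 13 bits of the context registers. */
    #define TLB_CXR_CTX_MASK    ((1UL << 13) - 1)
    /* Everything above it is page size bits, however many the CPU has. */
    #define TLB_CXR_PGSZ_MASK   (~TLB_CXR_CTX_MASK)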
to 43 bits, so update TD_PA_BITS accordingly. For the most part this
increase is transparent to the existing code, except when reading the
physical address from ASI_{D,I}TLB_DATA_ACCESS_REG, which we only do
in the loader and which was already adjusted in r182478, or from the
OFW translations node.
While at it, ensure we are only taking valid OFW mapping entries
into account.
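A hedged sketch of that check; the structure layout follows the style
of the sparc64 code but is assumed here (TD_V is the TTE valid bit):

    struct ofw_map {
        vm_offset_t om_start;   /* start of the virtual range */
        vm_size_t   om_size;    /* size of the range */
        u_long      om_tte;     /* TTE data, including TD_V */
    };

    for (i = 0; i < ntranslations; i++) {
        /* Skip entries whose TTE does not have the valid bit set. */
        if ((translations[i].om_tte & TD_V) == 0)
            continue;
        /* enter the firmware mapping into the kernel TSB */
    }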
no particular reason for them to be implemented in assembler and
having them in C allows easier extension as well as using more C
macros and {d,i}tlb_slot_max rather than hard-coding magic (and
actually spitfire-only) values.
- Fix the compilation of pmap_print_tte().
- Change pmap_print_tlb() to use ldxa() rather than re-rolling it
inline, as well as TLB_DAR_SLOT and {d,i}tlb_slot_max rather than
hard-coding magic (and actually spitfire-only) values (see the sketch
after this list).
- While at it, suffix the above-mentioned functions with "_sun4u" to
underline that they're architecture-specific.
- Use __FBSDID and macros instead of magic values in locore.S.
- Remove unused includes and smp_stack in locore.S.
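For illustration, the C version of the D-TLB half boils down to a loop
of this shape (the names follow the existing sparc64 headers, but the
exact spellings are assumed):

    void
    pmap_print_tlb_sun4u(void)
    {
        u_long data, tag;
        int slot;

        for (slot = 0; slot < dtlb_slot_max; slot++) {
            data = ldxa(TLB_DAR_SLOT(slot), ASI_DTLB_DATA_ACCESS_REG);
            tag = ldxa(TLB_DAR_SLOT(slot), ASI_DTLB_TAG_READ_REG);
            if ((data & TD_V) != 0)
                printf("slot %d: tag %#lx data %#lx\n", slot, tag, data);
        }
    }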
pages which represent actual physical memory we must strip off the fake
page in order to allow illegal aliases to be detected. Otherwise we map
the page uncacheable in both the virtual and physical caches and set the
side-effect bit, as is required for mapping device memory.
This fixes gstat on sparc64, which wants to mmap kernel memory through a
character device.
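A condensed sketch of the decision; whether PHYS_TO_VM_PAGE() is the
actual lookup used is an assumption here, and the TD_* names follow the
sparc64 tte.h style:

    /* If the physical address is backed by real memory, switch to the
     * real vm_page so the cache code can detect illegal aliases. */
    real = PHYS_TO_VM_PAGE(VM_PAGE_TO_PHYS(m));
    if (real != NULL)
        m = real;                    /* strip off the fake page */
    else {
        data &= ~(TD_CP | TD_CV);    /* uncacheable in both caches */
        data |= TD_E;                /* side-effect bit for device memory */
    }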
a mapping belongs to by setting it in the vm_page_t structure that backs
the tsb page that the tte for a mapping is in. This allows finding the
pmap that a mapping belongs to without keeping a pointer to it in the
tte itself.
- Remove the pmap pointer from struct tte and use the space to make the
tte pv lists doubly linked (TAILQs), like on other architectures; see
the sketch after this list. This makes entering or removing a mapping
O(1) instead of O(n), where n is the number of pmaps a page is mapped
by (including kernel_pmap).
- Use atomic ops for setting and clearing bits in the ttes, now that they
return the old value and can be easily used for this purpose.
- Use __builtin_memset for zeroing ttes instead of bzero, so that gcc will
inline it (4 inline stores using %g0 instead of a function call).
- Initially set the virtual colour for all the vm_page_ts to be equal to
their physical colour. This will be more useful once uma_small_alloc is
implemented, but basically pages with virtual colour equal to physical
colour are easier to handle at the pmap level because they can be safely
accessed through cacheable direct virtual to physical mappings with that
colour, without fear of causing illegal dcache aliases.
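As a sketch of the resulting layout (field names follow the sparc64
tte.h style; illustrative only):

    struct tte {
        u_long tte_vpn;              /* virtual page number and flags */
        u_long tte_data;             /* hardware TTE data */
        TAILQ_ENTRY(tte) tte_link;   /* doubly linked pv list */
    };

    /* Removal no longer has to walk the list from the head: */
    TAILQ_REMOVE(&m->md.tte_list, tp, tte_link);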
In total these changes give a minor performance improvement, about 1%
reduction in system time during buildworld.
value of the tag or data field.
Add macros for getting the page shift, size and mask for the physical page
that a tte maps (which may be one of several sizes).
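For illustration, such macros can be derived from the TTE size field,
assuming the 8K/64K/512K/4M page sizes step in factors of eight (three
shift bits per step); TTE_GET_SIZE() is an assumed helper that extracts
the size field:

    #define TTE_GET_PAGE_SHIFT(tp) \
        (PAGE_SHIFT + (TTE_GET_SIZE(tp) * 3))
    #define TTE_GET_PAGE_SIZE(tp) \
        (1UL << TTE_GET_PAGE_SHIFT(tp))
    #define TTE_GET_PAGE_MASK(tp) \
        (TTE_GET_PAGE_SIZE(tp) - 1)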
Use the new cache functions for invalidating single pages.
virtual page number in a much more convenient way: all in one piece. This
greatly simplifies the comparison for a matching tte and allows the fault
handlers to be much simpler, since they no longer have to load weird
masks.
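With the vpn in one piece, the match check reduces to roughly a single
compare; a sketch only, since the real test would also have to honour
the page size:

    #define tte_match(tp, va) \
        (((tp)->tte_data & TD_V) != 0 && \
         (tp)->tte_vpn == ((va) >> PAGE_SHIFT))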
Rewrite the tlb fault handlers to account for the new format. These are
also written to allow faults on the user tsb inside the fault handlers;
the kernel fault handler must be aware of this and not clobber the
other's registers. The faults do not yet occur due to other support
that is needed (and still under my desk).
Bug fixes from: tmm
Remove the modified tte bit and add a softwrite bit. Mappings are only
writeable if they have been written to; thus, in general, modify just
duplicates the write bit. The softwrite bit makes it easier to
distinguish mappings which should be writeable but are not yet modified.
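In terms of bit tests, this amounts to something like the following
(TD_SW and TD_W as the softwrite and hardware write bits; the helper
names are hypothetical):

    /* A mapping has been written iff the hardware write bit is set. */
    #define tte_is_modified(tp)   (((tp)->tte_data & TD_W) != 0)
    /* A mapping may become writeable iff the softwrite bit is set. */
    #define tte_may_write(tp)     (((tp)->tte_data & TD_SW) != 0)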
Move the exec bit down one; it was being sign-extended when used as an
immediate operand.
Use the lock bit to mean tsb page and remove the tsb bit. These are the
only form of locked (tsb) entries we support and we need to conserve bits
where possible.
Implement pmap_copy_page and pmap_is_modified and friends.
Detect mappings that are being upgraded from read-only to read-write
due to copy-on-write and update the write bit appropriately.
Make trap_mmu_fault do the right thing for protection faults, which is
necessary to implement copy-on-write correctly. Also handle a bunch
more userland trap types and add ktr traces.
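A condensed sketch of the protection-fault case; the control flow and
return convention are assumed here, and the real code also validates
the access against the vm map:

    /* A write fault on a softwrite mapping that is not yet writeable
     * is the copy-on-write upgrade: set the write bit and retry. */
    if ((tp->tte_data & TD_SW) != 0 && (tp->tte_data & TD_W) == 0) {
        tp->tte_data |= TD_W;
        return (0);
    }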
- mostly complete kernel pmap support, plus userland pmap support that
is tested but currently turned off
- low level assembly language trap, context switching and support code
- fully implemented atomic.h and supporting cpufunc.h
- some support for kernel debugging with ddb
- various header tweaks and filling out of machine dependent structures