Move inappropriate stuff in cpu.h elsewhere:
{s,g}et_intr_mask -> md_var.h
num_tlbentries -> tlb.h
Remove #define clockframe trapframe and fix clock, which was the only place
this was used.
All the rest of this stuff was unused.
# we're not quite minimal yet, since we duplicate a few status register things
# here...
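A sketch of the clockframe change (clock_intr() is an illustrative name,
not necessarily the exact function touched):

    /* Before, in cpu.h: clockframe was just an alias. */
    #define clockframe trapframe

    /* After: the clock code uses struct trapframe directly. */
    struct trapframe;
    void clock_intr(struct trapframe *tf);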
Inspired by: bde@
The current MIPS cpu_throw is a stub that calls mips_cpu_throw via an
obfuscated assembler call. Instead, delete the current cpu_throw, and
rename mips_cpu_throw to cpu_throw. This is nicer to the cache on each
context switch (since fixed jumps can be prefetched, while jumps
through a register can't). Incidentally, it also saves about 5 or 6
instructions.
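The result is a single entry point reached by a fixed-target call; a
sketch of the MI declaration (sys/proc.h convention):

    /* One real cpu_throw instead of a register-indirect trampoline;
     * a fixed symbol the I-cache can prefetch. */
    void cpu_throw(struct thread *old, struct thread *new) __dead2;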
Reviewed by: jmallet@
Use int32/intptr casts for exception vector names.
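A sketch of the cast pattern (MipsTLBMiss stands in for whichever
vector label is involved; the exact sites may differ): the 32-bit
value is sign-extended through int32_t, then widened through intptr_t:

    extern char MipsTLBMiss[];      /* exception vector entry label */

    /* Sign-extend the 32-bit address, then widen it to a pointer. */
    void *vec = (void *)(intptr_t)(int32_t)(intptr_t)MipsTLBMiss;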
Define MIPS_SR_INT_MASK again
Change MIPS_XKPHYS_CCA_* to MIPS_CCA_* since we can use them in many contexts
Minor gratuitous whitespace churn
The problem with setting it there is that the last CPU to come up
wins, it seems. This also removes one more ifdef in locore.S, a noble
goal too.
Since they are unused and pollute cpu.h, remove them.
Submitted by: bde (cpu.h pollution)
Approved in theory by: jmallet@
Merge changes for initial n64 support in pmap.c. Use direct mapped (XKPHYS)
access for a lot of operations that earlier needed temporary mapping. Add
support for using XKSEG for kernel mappings.
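For instance, a page can now be zeroed through the direct map; a
minimal sketch (assuming the MIPS_PHYS_TO_XKPHYS()/MIPS_CCA_CACHED
macros; the function name is illustrative):

    static void
    pmap_zero_page_direct(vm_paddr_t pa)
    {
            vm_offset_t va;

            /* Compose the unmapped, cached XKPHYS address; no
             * temporary kernel mapping (and no TLB traffic) needed. */
            va = MIPS_PHYS_TO_XKPHYS(MIPS_CCA_CACHED, pa);
            bzero((void *)va, PAGE_SIZE);
    }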
Reviewed by: imp
Obtained from: jmallett (http://svn.freebsd.org/base/user/jmallett/octeon)
The existing code only checked the alignment of the first mbuf and
didn't enforce the size constraints.
This commit introduces a simple function to check the alignment and
size of all mbufs in the list. This fixes the initial issue in the
PR.
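A minimal sketch of such a check (ALIGN_BYTES and MAX_SEG_SIZE are
placeholders for the driver's real constraints):

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Walk the whole chain; reject any mbuf that is misaligned or
     * too large, not just the first one. */
    static int
    mbuf_chain_ok(struct mbuf *m)
    {
            for (; m != NULL; m = m->m_next) {
                    if (((uintptr_t)mtod(m, void *) &
                        (ALIGN_BYTES - 1)) != 0)
                            return (0);
                    if (m->m_len > MAX_SEG_SIZE)
                            return (0);
            }
            return (1);
    }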
PR: kern/148307
Reviewed by: gonzo@
Updated PTE/PDE macros from http://svn.freebsd.org/base/user/jmallett/octeon
Introduce the pmap_segshift() macro, use pmap_segmap() in place of
pmap_pde(), and remove pmap_pde().
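Rough shape of the new accessors (the authoritative definitions live
in the jmallett branch above):

    /* pmap_segshift() yields the segment-table index for a VA;
     * pmap_segmap() looks up the directory entry, replacing the old
     * pmap_pde(). */
    #define pmap_segshift(v)   (((v) >> SEGSHIFT) & (NPDEPG - 1))
    #define pmap_segmap(m, v)  ((m)->pm_segtab[pmap_segshift((v))])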
Approved by: rrs (mentor)
Obtained from: jmallett@
If we save/restore the PageMask, the value set by the bootloader will
persist and will cause problems later in the TLB exception handler.
This caused a crash on AR71xx boards.
Also fix the EntryHi mask in pte.h.
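Instead of restoring a saved PageMask, program a known value; a sketch
(assuming the mips_wr_pagemask() accessor from cpufunc.h):

    /* 0 selects the base 4K page size; whatever the bootloader left
     * in PageMask is discarded instead of saved and restored. */
    mips_wr_pagemask(0);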
Reported by: Luiz Otavio O Souza <lists.br@gmail.com>
Tested by: Luiz Otavio O Souza <lists.br@gmail.com>
Approved by: rrs (mentor)
Initial support for n32 and n64 ABIs from
http://svn.freebsd.org/base/user/jmallett/octeon
Changes are:
- syscall, exception and trap support for n32/n64 ABIs
- 64-bit address space defines
- _jmp_buf for n32/n64
- casts between registers and ptr/int updated to work on n32/n64 (see
  the cast sketch below)
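A sketch of the cast pattern (variable names illustrative): register_t
is 64-bit on n32/n64, so pointers round-trip through intptr_t:

    register_t reg;
    void *p;

    p = (void *)(intptr_t)reg;              /* register -> pointer */
    reg = (register_t)(intptr_t)p;          /* pointer -> register */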
Approved by: rrs (mentor), jmallett
PTE flag cleanup from http://svn.freebsd.org/base/user/jmallett/octeon
- Rename PTE_xx flags to match their MIPS names
- Use the new pte_set/test/clear macros (sketched below) uniformly,
  instead of a mixture of mips_pg_xxx(), pmap_pte_x() macros and
  direct access.
- Remove unused macros and defines from pte.h and pmap.c
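The uniform accessors are essentially:

    #define pte_clear(pte, bit)  (*(pte) &= ~(bit))
    #define pte_set(pte, bit)    (*(pte) |= (bit))
    #define pte_test(pte, bit)   ((*(pte) & (bit)) == (bit))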
Discussed on freebsd-mips@
Approved by: rrs (mentor), jmallett
* Add some per-device sysctl entries which record the watchdog state -
whether it is armed; whether the last reboot was due to the watchdog.
* Add a per-device sysctl debug flag to enable logging of watchdog
  arming/disarming (see the sketch below).
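A sketch of the sysctl wiring in the driver's attach routine (node
names and softc fields are illustrative, not the driver's actual
ones):

    struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(dev);
    struct sysctl_oid_list *children =
        SYSCTL_CHILDREN(device_get_sysctl_tree(dev));

    SYSCTL_ADD_INT(ctx, children, OID_AUTO, "armed", CTLFLAG_RD,
        &sc->armed, 0, "watchdog is armed");
    SYSCTL_ADD_INT(ctx, children, OID_AUTO, "reboot_from_watchdog",
        CTLFLAG_RD, &sc->reboot_from_wd, 0,
        "last reboot was caused by the watchdog");
    SYSCTL_ADD_INT(ctx, children, OID_AUTO, "debug", CTLFLAG_RW,
        &sc->debug, 0, "log watchdog arming/disarming");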
Reviewed by: gonzo@
allow pmap_enter() to be performed on an unmanaged page that doesn't have
VPO_BUSY set. Having VPO_BUSY set really only matters for managed pages.
(See, for example, pmap_remove_write().)
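A sketch of the matching pmap_enter() assertion (flag spellings follow
this era's vm_page flags; the committed expression may differ):

    KASSERT((m->flags & (PG_FICTITIOUS | PG_UNMANAGED)) != 0 ||
        (m->oflags & VPO_BUSY) != 0,
        ("pmap_enter: managed page %p is not busy", m));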
Reduce the scope of the page queues lock and the number of
PG_REFERENCED changes in vm_pageout_object_deactivate_pages().
Simplify this function's inner loop using TAILQ_FOREACH(), and shorten
some of its overly long lines. Update a stale comment.
Assert that PG_REFERENCED may be cleared only if the object containing
the page is locked. Add a comment documenting this.
Assert that a caller to vm_page_requeue() holds the page queues lock,
and assert that the page is on a page queue.
Push down the page queues lock into pmap_ts_referenced() and
pmap_page_exists_quick(). (As of now, there are no longer any pmap
functions that expect to be called with the page queues lock held.)
Neither pmap_ts_referenced() nor pmap_page_exists_quick() should ever
be passed an unmanaged page. Assert this rather than returning "0"
and "FALSE" respectively.
ARM:
Simplify pmap_page_exists_quick() by switching to TAILQ_FOREACH().
Push down the page queues lock inside of pmap_clearbit(), simplifying
pmap_clear_modify(), pmap_clear_reference(), and pmap_remove_write().
Additionally, this allows for avoiding the acquisition of the page
queues lock in some cases.
PowerPC/AIM:
moea*_page_exists_quick() and moea*_page_wired_mappings() will never be
called before pmap initialization is complete. Therefore, the check
for moea_initialized can be eliminated.
Push down the page queues lock inside of moea*_clear_bit(),
simplifying moea*_clear_modify() and moea*_clear_reference().
The last parameter to moea*_clear_bit() is never used. Eliminate it.
PowerPC/BookE:
Simplify mmu_booke_page_exists_quick()'s control flow.
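The TAILQ_FOREACH() shape shared by these page_exists_quick()
simplifications looks roughly like this (pv list and field names vary
by architecture):

    static boolean_t
    pmap_page_exists_quick(pmap_t pmap, vm_page_t m)
    {
            pv_entry_t pv;
            int loops;

            loops = 0;
            TAILQ_FOREACH(pv, &m->md.pv_list, pv_list) {
                    if (pv->pv_pmap == pmap)
                            return (TRUE);
                    if (++loops >= 16)
                            break;
            }
            return (FALSE);
    }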
Reviewed by: kib@
the page is managed.
Don't set the machine-independent layer's dirty field for the page
being mapped in init_pte_prot(). (The dirty field is only supposed to
be set when a mapping is removed or write-protected and the page was
managed and modified.)
Determine whether or not to perform dirty bit emulation based on whether
or not the page is managed, i.e., pageable, not based on whether the page
is being mapped into the kernel address space. Nearly all of the kernel
address space consists of unmanaged pages, so this has negligible impact on
the overhead of dirty bit emulation for the kernel address space. However,
there can also exist unmanaged pages in the user address space. Previously,
dirty bit emulation was unnecessarily performed on these pages.
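A sketch of the resulting test in init_pte_prot() (flag names
approximate this era's MIPS pte.h):

    /* Unmanaged pages get the dirty/writable bit up front: there is
     * nothing to track, so no emulation fault is ever needed.
     * Managed pages start clean so the first write traps and can be
     * recorded. */
    if ((m->flags & (PG_FICTITIOUS | PG_UNMANAGED)) != 0)
            npte = PTE_V | PTE_D;
    else
            npte = PTE_V;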
Tested by: jchandra@
Use vm_contig_grow_cache() when vm_phys_alloc_contig() fails to
allocate MIPS page table pages. The current usage of VM_WAIT in case
of vm_phys_alloc_contig() failure is not correct, because:
"There is no guarantee that any of the available free (or cached) pages
after the VM_WAIT will fall within the range of suitable physical
addresses. Every time this function sleeps and a single page is freed
(or cached) by someone else, this function will be reawakened. With
a little bad luck, you could spin indefinitely."
We also add low and high parameters to vm_contig_grow_cache() and
vm_contig_launder() so that we restrict vm_contig_launder() to the range
of pages we are interested in.
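The resulting allocation loop looks roughly like this (the function
name and tries handling are simplified):

    static vm_page_t
    pmap_alloc_pte_page(vm_paddr_t low, vm_paddr_t high)
    {
            vm_page_t m;
            int tries;

            tries = 0;
            for (;;) {
                    /* Retry within [low, high] only; growing the
                     * cache is likewise restricted, so any progress
                     * is useful to this allocation. */
                    m = vm_phys_alloc_contig(1, low, high,
                        PAGE_SIZE, 0);
                    if (m != NULL)
                            break;
                    if (!vm_contig_grow_cache(tries, low, high))
                            return (NULL);
                    tries++;
            }
            return (m);
    }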
Reported by: alc
Reviewed by: alc
Approved by: rrs (mentor)