Merge changes for initial n64 support in pmap.c. Use direct-mapped (XKPHYS)
access for many operations that previously needed a temporary mapping. Add
support for using XKSEG for kernel mappings.
Reviewed by: imp
Obtained from: jmallett (http://svn.freebsd.org/base/user/jmallett/octeon)
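As a hedged illustration of the direct-map idea (the macro name and usage
below are assumptions, not taken from this change): on n64 a physical page
can be touched through XKPHYS without setting up a temporary mapping, for
example when zeroing a page:

    /*
     * Sketch only: access a physical page via the XKPHYS direct map
     * instead of a temporary kernel mapping.  Macro name assumed.
     */
    vm_offset_t va;

    va = MIPS_PHYS_TO_XKPHYS_CACHED(pa);    /* pa: physical address of the page */
    bzero((void *)va, PAGE_SIZE);           /* no temporary sysmap/pmap_kenter() needed */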
The existing code only checked the alignment of the first mbuf and
didn't enforce the size constraints.
This commit introduces a simple function to check the alignment and
size of all mbufs in the list. This fixes the initial issue in the
PR.
PR: kern/148307
Reviewed by: gonzo@
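A minimal sketch of such a whole-chain check (hypothetical helper and return
convention, not the committed function):

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /*
     * Hypothetical helper: verify that every mbuf in the chain has an
     * aligned data pointer and does not exceed a per-segment size limit.
     * Returns non-zero if the chain violates the constraints.
     */
    static int
    mbuf_chain_check(struct mbuf *m, int align, int maxseg)
    {
            for (; m != NULL; m = m->m_next) {
                    if (((uintptr_t)mtod(m, char *) & (align - 1)) != 0)
                            return (1);     /* misaligned segment */
                    if (m->m_len > maxseg)
                            return (1);     /* segment too large */
            }
            return (0);
    }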
Updated PTE/PDE macros from http://svn.freebsd.org/base/user/jmallett/octeon
Introduce a pmap_segshift() macro, use pmap_segmap() in place of pmap_pde(), and
remove pmap_pde().
Approved by: rrs (mentor)
Obtained from: jmallett@
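To make the relationship concrete, a hedged sketch of the segment-map style of
lookup (the shift value, the NPDEPG bound and the field names here are
assumptions, not the actual pte.h definitions):

    /* Sketch only: index the per-pmap segment table by the high VA bits. */
    #define SEGSHIFT                22      /* assumed: 1 << 22 bytes per segment */
    #define pmap_segshift(va)       (((va) >> SEGSHIFT) & (NPDEPG - 1))
    #define pmap_segmap(pmap, va)   ((pmap)->pm_segtab[pmap_segshift((va))])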
If we save/restore the PageMask, the value set by the bootloader will
persist and will cause problems later in the TLB exception handler.
This caused a crash on AR71xx boards.
Also fixes the EntryHi mask in pte.h
Reported by: Luiz Otavio O Souza <lists.br@gmail.com>
Tested by: Luiz Otavio O Souza <lists.br@gmail.com>
Approved by: rrs (mentor)
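A hedged sketch of the idea, assuming a mips_wr_pagemask()-style coprocessor-0
accessor: rather than restoring whatever value the bootloader left behind,
force PageMask to what the TLB exception handler expects before writing TLB
entries:

    /* Sketch only: do not save/restore the bootloader's PageMask. */
    mips_wr_pagemask(0);    /* 4K pages, matching the TLB exception handler */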
Initial support for n32 and n64 ABIs from
http://svn.freebsd.org/base/user/jmallett/octeon
Changes are:
- syscall, exception and trap support for n32/n64 ABIs
- 64-bit address space defines
- _jmp_buf for n32/n64
- casts between registers and ptr/int updated to work on n32/n64
Approved by: rrs(mentor), jmallett
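One pattern from that conversion, shown as a hedged example rather than the
exact diff: casts between machine registers and pointers go through
intptr_t/uintptr_t so the same code is correct on o32, n32 and n64:

    /* Sketch: ABI-clean conversions between a register value and a pointer. */
    register_t reg;
    void *p;

    p = (void *)(intptr_t)reg;          /* register -> pointer */
    reg = (register_t)(intptr_t)p;      /* pointer  -> register */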
PTE flag cleanup from http://svn.freebsd.org/base/user/jmallett/octeon
- Rename PTE_xx flags to match their MIPS names
- Use the new pte_set/test/clear macros uniformly, instead of a mixture
of mips_pg_xxx() macros, pmap_pte_x() macros, and direct access.
- Remove unused macros and defines from pte.h and pmap.c
Discussed on freebsd-mips@
Approved by: rrs(mentor), jmallett
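The accessor style is plain bit manipulation on the PTE; a hedged sketch (the
real definitions live in pte.h and may differ in detail):

    /* Sketch of the pte accessor style used throughout pmap.c. */
    #define pte_set(pte, bit)       (*(pte) |= (bit))
    #define pte_clear(pte, bit)     (*(pte) &= ~(bit))
    #define pte_test(pte, bit)      ((*(pte) & (bit)) == (bit))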
* Add some per-device sysctl entries which record the watchdog state -
whether it is armed; whether the last reboot was due to the watchdog.
* Add a per-device sysctl debug flag to enable logging watchdog arming/
disarming.
Reviewed by: gonzo@
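A hedged sketch of how such per-device nodes are typically attached (the softc
field names and node names are assumptions, not the committed ones):

    /* Sketch only: hang read-only state and a debug knob off the device's sysctl tree. */
    struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(dev);
    struct sysctl_oid *tree = device_get_sysctl_tree(dev);

    SYSCTL_ADD_INT(ctx, SYSCTL_CHILDREN(tree), OID_AUTO, "armed",
        CTLFLAG_RD, &sc->armed, 0, "watchdog is armed");
    SYSCTL_ADD_INT(ctx, SYSCTL_CHILDREN(tree), OID_AUTO, "reboot_from_watchdog",
        CTLFLAG_RD, &sc->reboot_from_watchdog, 0, "last reboot was caused by the watchdog");
    SYSCTL_ADD_INT(ctx, SYSCTL_CHILDREN(tree), OID_AUTO, "debug",
        CTLFLAG_RW, &sc->debug, 0, "log watchdog arming/disarming");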
allow pmap_enter() to be performed on an unmanaged page that doesn't have
VPO_BUSY set. Having VPO_BUSY set really only matters for managed pages.
(See, for example, pmap_remove_write().)
PG_REFERENCED changes in vm_pageout_object_deactivate_pages().
Simplify this function's inner loop using TAILQ_FOREACH(), and shorten
some of its overly long lines. Update a stale comment.
Assert that PG_REFERENCED may be cleared only if the object containing
the page is locked. Add a comment documenting this.
Assert that a caller to vm_page_requeue() holds the page queues lock,
and assert that the page is on a page queue.
Push down the page queues lock into pmap_ts_referenced() and
pmap_page_exists_quick(). (As of now, there are no longer any pmap
functions that expect to be called with the page queues lock held.)
Neither pmap_ts_referenced() nor pmap_page_exists_quick() should ever
be passed an unmanaged page. Assert this rather than returning "0"
and "FALSE" respectively.
ARM:
Simplify pmap_page_exists_quick() by switching to TAILQ_FOREACH().
Push down the page queues lock inside of pmap_clearbit(), simplifying
pmap_clear_modify(), pmap_clear_reference(), and pmap_remove_write().
Additionally, this allows for avoiding the acquisition of the page
queues lock in some cases.
PowerPC/AIM:
moea*_page_exists_quick() and moea*_page_wired_mappings() will never be
called before pmap initialization is complete. Therefore, the check
for moea_initialized can be eliminated.
Push down the page queues lock inside of moea*_clear_bit(),
simplifying moea*_clear_modify() and moea*_clear_reference().
The last parameter to moea*_clear_bit() is never used. Eliminate it.
PowerPC/BookE:
Simplify mmu_booke_page_exists_quick()'s control flow.
Reviewed by: kib@
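The "push down" pattern referred to throughout is the same everywhere; a
hedged sketch for one pmap function (the assertion flags and internals are
simplified assumptions):

    /* Sketch only: the function now takes the page queues lock itself. */
    int
    pmap_ts_referenced(vm_page_t m)
    {
            int count = 0;

            KASSERT((m->flags & (PG_FICTITIOUS | PG_UNMANAGED)) == 0,
                ("pmap_ts_referenced: page %p is not managed", m));
            vm_page_lock_queues();
            /* ... walk the page's pv list, test and clear referenced bits ... */
            vm_page_unlock_queues();
            return (count);
    }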
the page is managed.
Don't set the machine-independent layer's dirty field for the page being
mapped in init_pte_prot(). (The dirty field is only supposed to be set when
a mapping is removed or write-protected and the page was managed and
modified.)
Determine whether or not to perform dirty bit emulation based on whether
or not the page is managed, i.e., pageable, not based on whether the page
is being mapped into the kernel address space. Nearly all of the kernel
address space consists of unmanaged pages, so this has negligible impact on
the overhead of dirty bit emulation for the kernel address space. However,
there can also exist unmanaged pages in the user address space. Previously,
dirty bit emulation was unnecessarily performed on these pages.
Tested by: jchandra@
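A hedged sketch of the decision described above (the PTE and page flag names
are illustrative; the real logic lives in init_pte_prot()):

    /*
     * Sketch only: for a writeable mapping, pre-set the hardware dirty
     * bit for unmanaged pages; leave it clear for managed pages so the
     * first write faults and dirty-bit emulation marks the page modified.
     */
    if ((m->flags & (PG_FICTITIOUS | PG_UNMANAGED)) != 0)
            npte |= PTE_D;          /* unmanaged: no emulation needed */
    else if (m->dirty == VM_PAGE_BITS_ALL)
            npte |= PTE_D;          /* already fully dirty: skip the fault */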
fails to allocate MIPS page table pages. The current usage of VM_WAIT in
case of vm_phys_alloc_contig() failure is not correct, because:
"There is no guarantee that any of the available free (or cached) pages
after the VM_WAIT will fall within the range of suitable physical
addresses. Every time this function sleeps and a single page is freed
(or cached) by someone else, this function will be reawakened. With
a little bad luck, you could spin indefinitely."
We also add low and high parameters to vm_contig_grow_cache() and
vm_contig_launder() so that we restrict vm_contig_launder() to the range
of pages we are interested in.
Reported by: alc
Reviewed by: alc
Approved by: rrs (mentor)
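A hedged sketch of the resulting retry loop (the signatures are assumptions
based on the description above):

    /* Sketch only: grow the cache within [low, high] instead of a blind VM_WAIT. */
    for (tries = 0; tries < 3; tries++) {
            m = vm_phys_alloc_contig(npages, low, high, PAGE_SIZE, 0);
            if (m != NULL)
                    break;
            vm_contig_grow_cache(tries, low, high); /* launder/cache only pages in range */
    }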
When I pushed down the page queues lock into pmap_is_modified(), I created
an ordering dependence: A pmap operation that clears PG_WRITEABLE and calls
vm_page_dirty() must perform the call first. Otherwise, pmap_is_modified()
could return FALSE without acquiring the page queues lock because the page
is not (currently) writeable, and the caller to pmap_is_modified() might
believe that the page's dirty field is clear because it has not seen the
effect of the vm_page_dirty() call.
When I pushed down the page queues lock into pmap_is_modified(), I
overlooked one place where this ordering dependence is violated:
pmap_enter(). In a rare situation pmap_enter() can be called to replace a
dirty mapping to one page with a mapping to another page. (I say rare
because replacements generally occur as a result of a copy-on-write fault,
and so the old page is not dirty.) This change delays clearing PG_WRITEABLE
until after vm_page_dirty() has been called.
Fixing the ordering dependency also makes it easy to introduce a small
optimization: When pmap_enter() used to replace a mapping to one page with a
mapping to another page, it freed the pv entry for the first mapping and
later called the pv entry allocator for the new mapping. Now, pmap_enter()
attempts to recycle the old pv entry, saving two calls to the pv entry
allocator.
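In code form, the ordering rule reads roughly like this hedged sketch (helper
names mirror that era's vm_page API, but this is not the actual pmap_enter()
body):

    /*
     * Sketch only: when replacing a dirty, writeable mapping, record the
     * modification before PG_WRITEABLE can be cleared, so a concurrent
     * pmap_is_modified() cannot miss it.
     */
    if (pte_test(&origpte, PTE_D))
            vm_page_dirty(om);                      /* first: transfer the dirty state */
    vm_page_flag_clear(om, PG_WRITEABLE);           /* only then drop writeable status */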
will be called automatically by 'timer1clock()'.
Do profiling as often as possible by running it at the same frequency as
'timer1hz'. The statistics clock is run as close to 128 Hz as possible.
Pointed out by: mav@
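As a hedged sketch of the divider arithmetic implied above (variable names
assumed): profiling runs on every timer1 tick, while the statistics clock
divides timer1hz down to roughly 128 Hz:

    /* Sketch only: pick a divider so statclock() runs near 128 Hz. */
    stat_div = (timer1hz + 128 / 2) / 128;  /* round to the nearest divider */
    if (stat_div == 0)
            stat_div = 1;
    /* statclock() is then invoked once every stat_div timer1 ticks. */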
pmap_is_referenced(). Eliminate the corresponding page queues lock
acquisitions from vm_map_pmap_enter() and mincore(), respectively. In
mincore(), this allows some additional cases to complete without ever
acquiring the page queues lock.
Assert that the page is managed in pmap_is_referenced().
On powerpc/aim, push down the page queues lock acquisition from
moea*_is_modified() and moea*_is_referenced() into moea*_query_bit().
Again, this will allow some additional cases to complete without ever
acquiring the page queues lock.
Reorder a few statements in vm_page_dontneed() so that a race can't lead
to an old reference persisting. This scenario is described in detail by a
comment.
Correct a spelling error in vm_page_dontneed().
Assert that the object is locked in vm_page_clear_dirty(), and restrict the
page queues lock assertion to just those cases in which the page is
currently writeable.
Add object locking to vnode_pager_generic_putpages(). This was the one
and only place where vm_page_clear_dirty() was being called without the
object being locked.
Eliminate an unnecessary vm_page_lock() around vnode_pager_setsize()'s call
to vm_page_clear_dirty().
Change vnode_pager_generic_putpages() to the modern style of function
definition. Also, change the name of one of the parameters to follow
virtual memory system naming conventions.
Reviewed by: kib
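The object-lock rule for the dirty field can be summarized with a hedged
sketch of the assertion described above (simplified; the committed function
also covers the page-queues-lock case for writeable pages):

    /* Sketch only: the caller must hold the containing object's lock. */
    void
    vm_page_clear_dirty(vm_page_t m, int base, int size)
    {
            VM_OBJECT_LOCK_ASSERT(m->object, MA_OWNED);
            m->dirty &= ~vm_page_bits(base, size);
    }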
independent code. Move this code into mincore(), and eliminate the
page queues lock from pmap_mincore().
Push down the page queues lock into pmap_clear_modify(),
pmap_clear_reference(), and pmap_is_modified(). Assert that these
functions are never passed an unmanaged page.
Eliminate an inaccurate comment from powerpc/powerpc/mmu_if.m:
Contrary to what the comment says, pmap_mincore() is not simply an
optimization. Without a complete pmap_mincore() implementation,
mincore() cannot return either MINCORE_MODIFIED or MINCORE_REFERENCED
because only the pmap can provide this information.
Eliminate the page queues lock from vfs_setdirty_locked_object(),
vm_pageout_clean(), vm_object_page_collect_flush(), and
vm_object_page_clean(). Generally speaking, these are all accesses
to the page's dirty field, which are synchronized by the containing
vm object's lock.
Reduce the scope of the page queues lock in vm_object_madvise() and
vm_page_dontneed().
Reviewed by: kib (an earlier version)
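A hedged sketch of the machine-independent mincore() side described above
(variable names are illustrative):

    /* Sketch only: derive the "other mapping" flags from the pmap queries. */
    if ((mincoreinfo & MINCORE_MODIFIED_OTHER) == 0 && pmap_is_modified(m))
            mincoreinfo |= MINCORE_MODIFIED_OTHER;
    if ((mincoreinfo & MINCORE_REFERENCED_OTHER) == 0 && pmap_is_referenced(m))
            mincoreinfo |= MINCORE_REFERENCED_OTHER;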
Extend struct sysvec with three new elements:
sv_fetch_syscall_args - the method to fetch syscall arguments from
usermode into struct syscall_args. The structure is machine-dependent
(this might be reconsidered after all architectures are converted).
sv_set_syscall_retval - the method to set a return value for usermode
from the syscall. It is a generalization of
cpu_set_syscall_retval(9) to allow ABIs to override the way to set a
return value.
sv_syscallnames - the table of syscall names.
Use sv_set_syscall_retval in kern_sigsuspend() instead of hardcoding
the call to cpu_set_syscall_retval().
The new functions syscallenter(9) and syscallret(9) are provided that
use sv_*syscall* pointers and contain the common repeated code from
the syscall() implementations for the architecture-specific syscall
trap handlers.
Syscallenter() fetches the arguments, calls the syscall implementation from
the ABI sysent table, and sets up the return frame. The end-of-syscall
bookkeeping is done by syscallret().
Take advantage of single place for MI syscall handling code and
implement ptrace_lwpinfo pl_flags PL_FLAG_SCE, PL_FLAG_SCX and
PL_FLAG_EXEC. The SCE and SCX flags notify the debugger that the
thread is stopped at syscall entry or return point respectively. The
EXEC flag augments SCX and notifies debugger that the process address
space was changed by one of exec(2)-family syscalls.
The i386, amd64, sparc64, sun4v, powerpc and ia64 syscall()s are
changed to use syscallenter()/syscallret(). MIPS and arm are not
converted and use the mostly unchanged syscall() implementation.
Reviewed by: jhb, marcel, marius, nwhitehorn, stas
Tested by: marcel (ia64), marius (sparc64), nwhitehorn (powerpc),
stas (mips)
MFC after: 1 month
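A hedged sketch of what an architecture's trap-level syscall handler reduces
to after the conversion (simplified; the MI helpers handle the ktrace/ptrace
and return-value details):

    /* Sketch only: the MD handler just brackets the MI helpers. */
    void
    syscall(struct trapframe *tf)
    {
            struct thread *td = curthread;
            struct syscall_args sa;
            int error;

            td->td_frame = tf;
            error = syscallenter(td, &sa);  /* sv_fetch_syscall_args + dispatch via sysent */
            syscallret(td, error, &sa);     /* sv_set_syscall_retval + SCE/SCX/EXEC stops */
    }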
memory on a platform. Tested on the Sibyte with 256MB and 1GB memory
configurations.
- Replace vtophys() with MIPS_KSEG0_TO_PHYS() to convert a page table
page's virtual address to physical. We can safely do this because
page table pages are allocated out of KSEG0.
- Add an assertion to verify that when a page table page is freed it
contains all zeroes. We can now use it after allocation without
zeroing it.
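Both points fit in a hedged sketch (variable names and the KASSERT wording
are illustrative, not the committed text):

    /* Sketch only: a KSEG0 page table page needs no vtophys() lookup... */
    pa = MIPS_KSEG0_TO_PHYS((vm_offset_t)ptdva);

    /* ...and when one is freed, assert it is zero-filled so the next
       allocation can skip zeroing it. */
    for (i = 0; i < PAGE_SIZE / sizeof(pt_entry_t); i++)
            KASSERT(ptd[i] == 0, ("freed page table page %p not zeroed", ptd));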
DDB so that all the fields line up.
- Print out the tid of the per-CPU idlethread instead of the pid since
the idle process is now shared across all idle threads.
MFC after: 1 month
- Adds re-partitioning of the TLB per core for enabled threads.
- Adds a hardware thread id to cpuid mapping.
- Updates the rge driver so that packet distribution and message ring
handling threads are started based on hardware thread id.
- Removes unused early debugging code that set control registers.
- Coding style fixes.
Approved by: rrs (mentor)