for ARM.
This is quite ugly, because it has to work around a clang bug that does not
allow built-in functions to be defined, even ones that are expected to be
built as part of a library.
Reviewed by: ed
MADV_DONTNEED) and madvise(..., MADV_FREE). Specifically, introduce a new
pmap function, pmap_advise(), that operates on a range of virtual addresses
within the specified pmap, allowing for a more efficient implementation of
MADV_DONTNEED and MADV_FREE. Previously, the implementation of
MADV_DONTNEED and MADV_FREE relied on per-page pmap operations, such as
pmap_clear_reference(). Intuitively, the problem with this implementation
is that the pmap-level locks are acquired and released and the page table
traversed repeatedly, once for each resident page in the range
that was specified to madvise(2). A more subtle flaw with the previous
implementation is that pmap_clear_reference() would clear the reference bit
on all mappings to the specified page, not just the mapping in the range
specified to madvise(2).
Since our malloc(3) makes heavy use of madvise(2), this change can have a
measurable impact. For example, the system time for completing a parallel
"buildworld" on a 6-core amd64 machine was reduced by about 1.5% to 2.0%.
Note: This change only contains pmap_advise() implementations for a subset
of our supported architectures. I will commit implementations for the
remaining architectures after further testing. For now, a stub function is
sufficient because of the advisory nature of pmap_advise().
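Such a stub can be as simple as the following sketch (the signature follows
the pmap_advise() description above):

    void
    pmap_advise(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, int advice)
    {
            /* Nothing to do: the request is advisory, so ignoring it
             * is correct, merely not optimal. */
    }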
Discussed with: jeff, jhb, kib
Tested by: pho (i386), marcel (ia64)
Sponsored by: EMC / Isilon Storage Division
Promoting base pages to superpages can increase TLB coverage and allow for
efficient use of page table entries. This development provides FreeBSD/ARM
with a superpage management mechanism roughly equivalent to what we have
for the i386 and amd64 architectures.
1. Add a mechanism for automatic promotion of 4KB page mappings to 1MB section
   mappings (and demotion when no longer needed).
2. Managed and non-kernel mappings are now superpage-aware.
3. The functionality can be enabled by setting the "vm.pmap.sp_enabled" tunable
   to a non-zero value (either in loader.conf, as shown below, or by modifying
   the "sp_enabled" variable in the pmap-v6.c file). By default, automatic
   promotion is currently disabled.
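For example, to enable automatic promotion at boot, set the tunable in
loader.conf:

    # /boot/loader.conf
    vm.pmap.sp_enabled="1"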
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Reviewed by: alc
Sponsored by: The FreeBSD Foundation, Semihalf
This allows enabling and configuring the superpage reservation mechanism in
order to allocate and populate 256 4KB base pages (for the purpose of
promotion to a single 1MB superpage).
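For reference, a sketch of what this configuration amounts to, borrowing the
macro names used by the i386/amd64 vmparam.h (their use here is an
assumption):

    /* Macro names borrowed from i386/amd64: one reservation level, with
     * 2^8 = 256 4KB base pages per level-0 reservation, i.e. exactly one
     * 1MB superpage. */
    #define VM_NRESERVLEVEL         1
    #define VM_LEVEL_0_ORDER        8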
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Reviewed by: alc
Sponsored by: The FreeBSD Foundation, Semihalf
which is part of struct vmspace, allocated from the UMA_ZONE_NOFREE
zone. Initialize the pmap lock in the vmspace zone init function, and
remove pmap lock initialization and destruction from pmap_pinit() and
pmap_release().
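A minimal sketch of the resulting zone init hook (assuming the stock
PMAP_LOCK_INIT() and vmspace_pmap() helpers):

    static int
    vmspace_zinit(void *mem, int size, int flags)
    {
            struct vmspace *vm = mem;

            /* Sketch: the lock now lives for as long as the NOFREE zone
             * item itself, so pmap_pinit()/pmap_release() need not touch it. */
            PMAP_LOCK_INIT(vmspace_pmap(vm));
            return (0);
    }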
Suggested and reviewed by: alc (previous version)
Tested by: pho
Sponsored by: The FreeBSD Foundation
The TI uart hardware is ns16550-compatible, except that before it can
be used the clocks and power have to be enabled and a non-standard
mode control register has to be set to put the device in uart mode
(as opposed to IrDA or other serial protocols). This adds the extra
code as an extension to the standard ns8250 probe routine; the rest
of the driver is just the standard ns8250 code.
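Roughly, the probe extension amounts to this sketch (the clock ident and
register names below are hypothetical placeholders, not the driver's actual
ones):

    static int
    ti8250_bus_probe(struct uart_softc *sc)
    {

            /* Power up the module's clocks before touching any registers;
             * UART1_CLK is a hypothetical clock ident. */
            ti_prcm_clk_enable(UART1_CLK);

            /* Non-standard mode register (names hypothetical): select
             * plain uart mode rather than IrDA. */
            uart_setreg(&sc->sc_bas, REG_MDR1, MDR1_MODE_UART);

            /* The rest is the stock ns8250 probe. */
            return (ns8250_bus_probe(sc));
    }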
The MMCHS hardware is pretty much a standard SDHCI v2.0 controller with a
couple of quirks, which are now supported by sdhci(4) as of r254507.
This should work for all TI SoCs that use the MMCHS hardware, but it has
only been tested on AM335x right now, so this enables it on those platforms
but leaves the existing ti_mmchs driver in place for other OMAP variants
until they can be tested.
This initial incarnation lacks DMA support (coming soon). Even without it,
this improves performance pretty noticeably over the ti_mmchs driver,
primarily because it now does multi-block I/O.
There is no need to call vm_page_dirty() when clearing the "modified" flag, as
it is already set for that page in pmap_fault_fixup() or pmap_enter() thanks
to "modified" bit emulation.
Also, there is no need to check the PTE "referenced" or "writeable" flags. If
there is a request to clear a particular flag, we should just do it.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Reviewed by: gber
Sponsored by: The FreeBSD Foundation, Semihalf
The last input argument to pmap_modify_pv() should be a mask of flags to be
set. In pmap_change_wiring(), however, the raw wired status was passed, which
is a boolean and does not represent valid flags.
This commit fixes the issue so that the wired flag is passed to
pmap_modify_pv() properly.
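Illustratively (the argument layout here is approximate, based on the
description above):

    /* Before: a boolean passed where a flag mask is expected
     * (argument layout approximate). */
    pmap_modify_pv(m, pmap, va, PVF_WIRED, wired);

    /* After: the boolean translated into the actual flag bit. */
    pmap_modify_pv(m, pmap, va, PVF_WIRED, wired ? PVF_WIRED : 0);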
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Reviewed by: gber
Sponsored by: The FreeBSD Foundation, Semihalf
Revise L2_S_PROT_MASK to include all of the protection bits. Note that
clearing these bits does not always take away the corresponding permissions
(for some of them, permission is granted when the bit is cleared). The bits
are cleared here, but are then set or left cleared as appropriate in
pmap_set_prot(), pmap_enter_locked(), etc.
Clear L2_XN along with L2_S_PROT_MASK in pmap_set_prot() so that all
permission-related bits are cleared before the actual configuration.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Reviewed by: gber
Sponsored by: The FreeBSD Foundation, Semihalf
- PGA_WRITEABLE indicates that there *might be* a writable mapping for the
  particular page, so to avoid frequent sweeping of the pv_entries whenever
  pmap_nuke_pv(), pmap_modify_pv(), etc. are called, it is sufficient to
  clear that flag once there are no managed mappings for that page anymore
  (note that only pmap_enter() is authorized to set this flag).
- Avoid redundant checking for the PVF_WIRED flag when this flag cannot be
  set anyway.
- Clear PGA_WRITEABLE only once for each vm_page instead of redundantly
  clearing it in a loop when there are no writeable mappings to that page
  anymore (see the sketch below).
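A sketch of that last point, with an empty pv list standing in for "no
managed mappings left" (helper names from the generic vm_page API):

    /* Sketch: clear the flag once, after the last managed mapping is
     * gone, rather than on every pass of the pv-entry loop. */
    if (TAILQ_EMPTY(&pg->md.pv_list))
            vm_page_aflag_clear(pg, PGA_WRITEABLE);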
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Reviewed by: gber
Sponsored by: The FreeBSD Foundation, Semihalf
- Use the right address when calling kva_free().
(Is there any reason why the s3c2xx0 comes with its own version of bs_map/
bs_unmap? It seems to be just the same as in bus_space_generic.c.)
Unify the two concepts into a real, minimal sxlock, where shared
acquisition represents the soft busy and exclusive acquisition
represents the hard busy.
The old VPO_WANTED mechanism becomes the hard path for this new lock,
and it becomes per-page rather than per-object.
The vm_object lock becomes an interlock for this functionality:
it can be held in either read or write mode.
However, if the vm_object lock is held in read mode while acquiring
or releasing the busy state, the thread owner cannot make any
assumptions about the busy state unless it is also busying it.
Also:
- Add a new flag to directly share-busy pages while vm_page_alloc()
  and vm_page_grab() are being executed. This will be very helpful
  once these functions happen under a read object lock.
- Move the swapping sleep into its own per-object flag
The KPI is heavily changed; this is why the version is bumped.
It is very likely that some consumers of the VM KPI will need to change
their own code.
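In terms of the new interface, usage looks roughly like this (a sketch, not a
complete list of the new helpers; names follow the vm_page_sbusy()/
vm_page_xbusy() family this change introduces):

    /* Sketch: shared (soft) busy, multiple holders may coexist. */
    vm_page_sbusy(m);
    /* ... inspect the page ... */
    vm_page_sunbusy(m);

    /* Exclusive (hard) busy: a single owner, e.g. while the page's
     * identity or contents are changing. */
    vm_page_xbusy(m);
    /* ... modify the page ... */
    vm_page_xunbusy(m);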
Sponsored by: EMC / Isilon storage division
Discussed with: alc
Reviewed by: jeff, kib
Tested by: gavin, bapt (older version)
Tested by: pho, scottl
line boundary. It has never been 100% correct, and it can't work on SMP,
because nothing prevents another core from accessing data from an unrelated
buffer in the same cache line while we invalidate it. Just use bounce pages
instead.
Reviewed by: ian
Approved by: mux (mentor) (implicit)
Add support for the A20 timer.
Correct the interrupt offset, which depends on the chip.
Add basic code for the CPU configuration module.
For now, add a kernel config and dts file
(only an FDT-blob-related problem needs to be solved later in
order to have one kernel for both cubieboard1 and 2).
Approved by: ray@
transparent layering and better fragmentation.
- Normalize functions that allocate memory to use kmem_*
- Those that allocate address space are named kva_* (see the example after
  this list)
- Those that operate on maps are named kmap_*
- Implement recursive allocation handling for kmem_arena in vmem.
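For instance, a pure address-space allocation and its release now pair up
under the kva_* names (kva_alloc()/kva_free() as introduced here):

    vm_offset_t va;

    va = kva_alloc(PAGE_SIZE);      /* address space only, no backing pages */
    /* ... map pages into the range and use it ... */
    kva_free(va, PAGE_SIZE);        /* release the address range */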
Reviewed by: alc
Tested by: pho
Sponsored by: EMC / Isilon Storage Division