Provide both __sync_*-style and __atomic_*-style functions that perform
atomic operations on ARMv5 by using Restartable Atomic Sequences.
While there, clean up some pieces of code where a regular uint32_t is
sufficient to store register contents and a full reg_t is not needed.
Also sync this back to the MIPS code.
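A minimal sketch of what such a RAS-based helper can look like (the symbol
names are illustrative, and the registration of the start/end labels with the
kernel's RAS machinery is omitted; this is not the actual implementation):

    #include <stdint.h>

    uint32_t
    __sync_fetch_and_add_4(volatile uint32_t *ptr, uint32_t val)
    {
            uint32_t old, new;

            /*
             * The instructions between the two labels form the restartable
             * sequence: if the thread is preempted inside it, the kernel
             * rewinds the PC to the start label, so the read-modify-write
             * appears atomic on a uniprocessor ARMv5 core.
             */
            __asm volatile(
                ".globl ras_add_start; ras_add_start:\n"
                "\tldr  %0, [%3]\n"
                "\tadd  %1, %0, %2\n"
                "\tstr  %1, [%3]\n"
                ".globl ras_add_end; ras_add_end:\n"
                : "=&r" (old), "=&r" (new)
                : "r" (val), "r" (ptr)
                : "memory");

            return (old);
    }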
Check which VFP variant we are on; if it is a VFPv3 or v4 with 32
double registers, save them all. This fixes VFP support on the Raspberry Pi.
While here, clean up fmrx and fmxr to use the register names from vfp.h
as opposed to the raw register names.
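A hedged sketch of the detection step (MVFR0[3:0] describes the size of the
VFP register bank, with the value 2 meaning 32 double registers; assembler
support for the mvfr0 name is assumed here):

    #include <stdint.h>

    static int
    vfp_has_32_dregs(void)
    {
            uint32_t mvfr0;

            /* MVFR0[3:0] ("A_SIMD registers"): 1 = 16 d-regs, 2 = 32 d-regs. */
            __asm __volatile("fmrx %0, mvfr0" : "=r" (mvfr0));
            return ((mvfr0 & 0x0f) == 2);
    }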
Basically the situation is as follows:
- When using Clang + armv6, we should not need any intrinsics; Clang should
support these natively, but due to a target misconfiguration it currently
does not. We should fix this in Clang.
- When using Clang + noarmv6, provide __atomic_* functions that disable
interrupts (see the sketch below).
- When using GCC + armv6, we can provide __sync_* intrinsics, similar to
what we did for MIPS. As ARM and MIPS are quite similar, simply base
this implementation on the one I did for MIPS.
- When using GCC + noarmv6, disable the interrupts, like we do for
Clang.
This implementation still lacks functions for noarmv6 userspace. To be
done.
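For the interrupt-disabling variants, a minimal kernel-side sketch could look
like this (intr_disable()/intr_restore() are the existing MD interfaces; the
function name follows the __atomic_* libcall convention and is illustrative):

    #include <sys/types.h>
    #include <machine/cpufunc.h>

    uint32_t
    __atomic_fetch_add_4(volatile uint32_t *p, uint32_t val, int memorder __unused)
    {
            register_t s;
            uint32_t old;

            s = intr_disable();     /* no ldrex/strex, so make the RMW uninterruptible */
            old = *p;
            *p = old + val;
            intr_restore(s);
            return (old);
    }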
* Stop pretending we support anything other than ELF by removing code
surrounded by #ifdef __ELF__ ... #endif.
* Remove _JB_MAGIC_SETJMP and _JB_MAGIC__SETJMP; they are defined in
setjmp.h, which can be included from assembly.
* Fix the spelling of dependent.
* Rename END to _END, and add END and ASEND to complement ENTRY and ASENTRY,
respectively.
* Add macros to simplify accessing the Global Offset Table; some of these
will be used in the upcoming update to the setjmp functions.
In order to become independent of the Coherency Fabric frequency, configure
the Timer and Watchdog to operate in 25 MHz mode.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Copy the given range of mappings from the source map to the
destination map, thereby reducing the number of VM faults on fork.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
The pmap_enter_locked() implementation was very ambiguous and confusing.
Rearrange it so that each part of the mapping creation is separated.
Avoid walking through redundant conditions.
Extract vector_page specific PTE setup from normal PTE setting.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
Using PVF_MOD, PVF_REF and PVF_EXEC is redundant as we can get the proper
info from PTE bits.
When a mapping is marked as executable and has been referenced, we assume
that it has been executed. Similarly, when a mapping is writable and has
been referenced, the reference must have been due to a write access.
The PVF_MOD and PVF_REF flags are kept only for pmap_clearbit() usage,
to pass the information on which bit should be cleared.
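A hedged sketch of the idea, with illustrative macro names (see pte.h and
pmap-v6.c for the real bit definitions):

    /* Referenced + executable => executed; referenced + writable => modified. */
    static __inline int
    pte_executed(pt_entry_t pte)
    {
            return ((pte & L2_S_REF) != 0 && (pte & L2_XN) == 0);
    }

    static __inline int
    pte_modified(pt_entry_t pte)
    {
            /* APX clear means the mapping is writable. */
            return ((pte & L2_S_REF) != 0 && (pte & L2_APX) == 0);
    }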
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
Use pmap_find_pv() where needed instead of duplicating its code throughout
pmap-v6.
Avoid a possible NULL pointer dereference in pmap_enter_locked().
When trying to get m->md.pv_memattr, make sure that m != NULL; in
particular, m is set to NULL for the vector_page.
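A minimal sketch of the guard, as a hypothetical helper (the fallback
attribute is illustrative):

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    static vm_memattr_t
    pmap_page_memattr(vm_page_t m)
    {
            /* The vector_page is entered with m == NULL; avoid m->md then. */
            return (m != NULL ? m->md.pv_memattr : VM_MEMATTR_DEFAULT);
    }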
Do not set the PGA_REFERENCED flag in pmap_enter_pv().
On ARM, any new page reference results either in entering a new mapping by
calling pmap_enter(), etc., or in fixing up an existing mapping in
pmap_fault_fixup().
Therefore we set the PGA_REFERENCED flag in the cases mentioned above;
setting it again in pmap_enter_pv() is just a waste of cycles.
Delete unused pm_pdir pointer from the pmap structure.
Rearrange brackets in the fault-cause detection in trap.c.
Place the brackets correctly so that the course of the conditions can be
seen at a glance.
Unify naming in pmap-v6.c and improve style
Use naming that is common to the whole pmap and compatible with other pmaps,
and improve style where possible:
pm -> pmap
pg -> m
opg -> om
*pt -> *ptep
*pte -> *ptep
*pde -> *pdep
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
bit in PTE.
Enable the Access Flag in CPU control. With AF enabled, each valid mapping
needs to have the referenced bit set in its PTE in order to be cached
in the TLB.
The AP[0] bit is used as the reference flag.
All access permissions are encoded by AP[2:1], wherein AP[1] is in fact
"user enable" and AP[2] (APX) is "write disable".
All mappings are always set to be valid. Reference emulation is performed
by setting/clearing the reference flag in the PTE.
md.pvh_attrs is no longer necessary; however, pv_flags is still being used
for now.
Marking a vm_page as "dirty" or "referenced" is performed on:
- page or flag fault servicing in pmap_fault_fixup(), based on the fault
type
- vm_fault servicing in pmap_enter(), according to the desired protections
and the faulting access type
Redundant page marking has been removed, as on ARM we know exactly when a
particular page is referenced or is going to be written.
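A hedged sketch of the reference-emulation path described above (names are
modeled on pmap-v6 but are illustrative, and TLB maintenance is omitted):

    /*
     * New mappings are entered valid but with AP[0] (the reference flag,
     * L2_S_REF here) clear, so the first access raises an access-flag fault.
     */
    static int
    pmap_ref_fixup(pt_entry_t *ptep, vm_page_t m)
    {
            if ((*ptep & L2_S_REF) == 0) {
                    *ptep |= L2_S_REF;      /* the TLB may now cache the entry */
                    if (m != NULL)
                            vm_page_aflag_set(m, PGA_REFERENCED);
                    return (1);             /* fault handled */
            }
            return (0);
    }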
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
o Relax locking assertions for pmap_enter_object() and add them also
to architectures that currently don't have any
o Introduce VM_OBJECT_LOCK_DOWNGRADE(), which is basically a downgrade
operation on the per-object rwlock (a sketch follows below)
o Use all the mechanisms above to make vm_map_pmap_enter() work most of
the time with only read locks.
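A minimal sketch of the downgrade macro, assuming the per-object rwlock field
is named "lock" as in vm/vm_object.h:

    #include <sys/rwlock.h>
    #include <vm/vm_object.h>

    #define VM_OBJECT_LOCK_DOWNGRADE(object)                                \
            rw_downgrade(&(object)->lock)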
Sponsored by: EMC / Isilon storage division
Reviewed by: alc
PV entries are now roughly half the size.
Instead of using a shared UMA zone for 28 byte pv entries
(two 8-byte tailq nodes, a 4 byte pointer, a 4 byte address and 4 byte
flags), we allocate a page at a time per process.
This provides 252 pv entries per process (actually, per pmap address space)
and eliminates one of the 8-byte tailq entries since we now can track
per-process pv entries implicitly.
The pointer to the pmap can be eliminated by using address arithmetic to
find the metadata in the page header, which holds a single pointer shared by
all 252 entries. There is an 8-int bitmap for the freelist of those 252
entries.
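A hedged sketch of the layout being described (counts follow the text above;
field names are modeled on the i386/amd64 pv chunks and may differ from the
actual ARM definitions):

    #define _NPCM   8               /* ints in the freelist bitmap */
    #define _NPCPV  252             /* pv entries per page-sized chunk */

    struct pv_chunk {
            pmap_t                  pc_pmap;        /* single shared back-pointer */
            TAILQ_ENTRY(pv_chunk)   pc_list;        /* per-pmap chunk list */
            uint32_t                pc_map[_NPCM];  /* freelist bitmap, 1 = free */
            TAILQ_ENTRY(pv_chunk)   pc_lru;         /* for pmap_pv_reclaim() */
            struct pv_entry         pc_pventry[_NPCPV];
    };

    /* Address arithmetic recovers the chunk header, and thus the pmap. */
    #define PV_CHUNK(pv)    ((struct pv_chunk *)((uintptr_t)(pv) & ~(uintptr_t)PAGE_MASK))
    #define PV_PMAP(pv)     (PV_CHUNK(pv)->pc_pmap)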
When in a serious low-memory condition, allocation of another pv_chunk is
made possible by freeing some pages in pmap_pv_reclaim().
Added pv_entry/pv_chunk related statistics to pmap.
pv_entry/pv_chunk statistics can be accessed via sysctl vm.pmap.
Ported the PTE freelist for KVA allocation and maintenance from i386.
Using an idea from Stephan Uphoff, use the empty PTEs that correspond
to the unused KVA in the pv memory block to thread a freelist through.
This allows us to free pages that used to be used for pv entry chunks,
since we can now track holes in the KVA memory block.
As both ARM pmap.c and pmap-v6.c use the same header, but the pv_entry,
pmap and md_page structures are different, it was necessary to separate the
code designed for ARMv6/7 from that for the other ARMs.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Reviewed by: alc
Sponsored by: The FreeBSD Foundation, Semihalf
EABI ARM kernels or clang-compiled ARM kernels.
This fixes a crash seen in clang-compiled ARM
kernels that include WITNESS.
This code could be easily modified to walk the stack
for current clang-generated code (including EABI)
but Andrew Turner has raised concerns that the
stack frame currently emitted by clang isn't actually
required by EABI so such a change might cause problems
down the road.
In case anyone wants to experiment, the change
to support current clang-compiled kernels
involves simply setting FR_RFP=0 and FR_SCP=1.
order to match the MAXCPU concept. The change should also be useful
for consolidation and consistency.
Sponsored by: EMC / Isilon storage division
Obtained from: jeff
Reviewed by: alc
Keep the following access permissions:
APX  AP   Kernel  User
 1   01   R       N
 1   10   R       R
 0   01   R/W     N
 0   11   R/W     R/W
Avoid using APX|AP settings that are reserved in ARMv6:
- In case of unprivileged (user) access without permission to write,
the access permission bits were being set to APX|AP = 111, a value that
is reserved in ARMv6 (but valid for ARMv7).
Fix up faulting userland accesses properly:
- A wrong condition statement in pmap_fault_fixup() caused any genuine,
unprivileged access to be fixed up instead of simply doing nothing and
returning. Starting from now we ensure the proper reaction to illicit
user accesses.
The L2_S_PROT_R and L2_S_PROT_U names might be misleading, as they do not
reflect the real permission levels. This will be clarified in following
patches (switch to the AP[2:1] permissions model).
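For reference, a hedged sketch of the four retained encodings for L2 small
pages (AP[1:0] at bits 5:4 and APX at bit 9; the PROT_* macro names are
illustrative):

    #define L2_AP(x)        ((x) << 4)              /* AP[1:0] */
    #define L2_APX          (1 << 9)                /* APX: write disable */

    #define PROT_KR         (L2_APX | L2_AP(0x1))   /* 1|01: kernel R,  user none */
    #define PROT_KR_UR      (L2_APX | L2_AP(0x2))   /* 1|10: kernel R,  user R    */
    #define PROT_KRW        (L2_AP(0x1))            /* 0|01: kernel RW, user none */
    #define PROT_KRW_URW    (L2_AP(0x3))            /* 0|11: kernel RW, user RW   */
    /* APX|AP = 1|11 is reserved on ARMv6 and is avoided. */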
Obtained from: Semihalf
- On ARMADAXP B0 (GP development board) we are not able to use PCI because
the whole 32-bit address space is used by the 4GB of RAM.
- The change is required to destroy an unnecessary window in order to free
address space for PCI and other devices.
- Fix the offset value for SDRAM decoding windows.
Obtained from: Semihalf
to driver specific files.
- window initialization is done during device attach
- CESA TDMA decoding window values are set based on the DTS,
not copied from CPU registers
- remove unnecessary virtual mapping
- update dts file
Obtained from: Semihalf
fork_trampoline (thread entry point) assembler routines, because it's
not possible to unwind beyond those points.
Also insert STOP_UNWINDING in the exception_exit routine, to prevent an
unwind-loop at that point. This is just a stopgap until we get around
to instrumenting all assembler functions with proper unwind metadata.
exit the loop until after printing info about the current frame. Also,
if executing the unwind function for a frame doesn't change the values of
any registers, log that and exit the loop rather than looping endlessly.
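A hedged sketch of the no-progress check (the helper and its caller are
illustrative, not the actual db_trace.c code):

    #include <sys/param.h>
    #include <sys/systm.h>

    /*
     * Return 0 when the unwind step left every register untouched, so the
     * caller can log the failure and break out instead of looping forever.
     */
    static int
    unwind_made_progress(const uint32_t *prev, const uint32_t *cur, int nregs)
    {
            if (memcmp(prev, cur, nregs * sizeof(*cur)) == 0) {
                    printf("Unwind failure (no registers changed)\n");
                    return (0);
            }
            return (1);
    }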
Communication on src-committers, Sat, 27 Apr 2013 22:09:06 -0700,
Subject was: "Re: svn commit: r249997"
While here, fix the style of the main block comments in the files' headers.
sys/arm and sys/mips), squelching the clang 3.3 warnings about this.
Noticed by: tinderbox and many irate spectators
Submitted by: Luiz Otavio O Souza <loos.br@gmail.com>
PR: kern/177759
MFC after: 3 days
and kern.cam.ctl.disable tunable; those were introduced as a workaround
to make it possible to boot GENERIC on low memory machines.
With ctl(4) being built as a module and automatically loaded by ctladm(8),
this makes CTL work out of the box.
Reviewed by: ken
Sponsored by: FreeBSD Foundation