casuword(9) and others, use LDRT and STRT instructions to access
memory with the privileges of userspace. If the *RT instruction
faults on a kernel address, additional checks must be done so as not
to confuse the VM system with invalid kernel-mode faults.
Put ARM in line with other FreeBSD architectures and disallow usermode
buffers which intersect with the kernel address space in advance,
before any accesses are performed. In other words, vm_fault(9) is no
longer called when e.g. suword(9) stores to an invalid (i.e. not
userspace) address.
Also, switch ARM to use fueword(9) and casueword(9).
Note: there is a pending patch in D3617, which adds the special
processing for faults from LDRT and STRT. The addition of the
processing is useful for potential other uses of the instructions and
for completeness, but standard userspace accessors are better served
by not allowing such faults beforehand.
Reviewed by: andrew
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D3816
MFC after: 2 weeks
pre-VFPv3 processors, since they do require software support code to
handle denormals. For VFPv3 and later, enable flush-to-zero if the
hardware does not claim full denormal arithmetic support via the
VMVFR1_FZ field in the mvfr1 register.
The end result is that we use the correct FPU environment on Cortexes
with VFPv3, while ARM11 (e.g. rpi) is in non-compliant flush-to-zero
mode. At least CPUs without a complete hardware implementation of
IEEE 754 no longer cause an unhandled floating-point exception on
underflow, as they did before r288492.
Noted by: ian
Tested by: gjb
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
ARMv6/7:
- Define _SAVE() macro to allow unwind data to be conditionally defined for
ARM assembly code in the kernel.
- Use _SAVE() to provide unwind information for bcopy_page(), and two (of
many) instances of copyin() and copyout().
Reviewed by: andrew, imp
MFC after: 3 days
Sponsored by: University of Cambridge
self-consistent, there is no need for anything but a compiler barrier
in the implementation of atomic_thread_fence_*() on ARMv5. Split the
implementation of fences for ARMv4/5 and ARMv6; the former use
compiler barriers, the latter also perform hardware barriers.
An issue fixed by this change is faults from CP15 coprocessor
accesses in user mode. This was uncovered by the
pthread_once() changes in r287556.
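The idea, as a minimal sketch (not the literal atomic.h text; dmb()
stands for whatever hardware barrier the architecture level provides,
and __compiler_membar() is the compiler-only barrier from cdefs.h):

    #if __ARM_ARCH >= 6
    /* SMP-capable: a real hardware memory barrier is required. */
    #define atomic_thread_fence_rel()    dmb()
    #else
    /* ARMv4/5: uniprocessor and self-consistent, so it is enough
     * to keep the compiler from reordering across the fence. */
    #define atomic_thread_fence_rel()    __compiler_membar()
    #endif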
Reported by: Mattia Rossi <mattia.rossi.mailinglists@gmail.com>
Discussed with: alc, cognet, jhb
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
initial thread stack is not adjusted by the tunable, the stack is
allocated too early to get access to the kernel environment. See
TD0_KSTACK_PAGES for the thread0 stack sizing on i386.
The tunable was tested on x86 only. From visual inspection, it
seems that it might work on arm and powerpc. The arm
USPACE_SVC_STACK_TOP and powerpc USPACE macros seem to be already
incorrect for threads with a non-default kstack size. I only
changed the macros to use the variable instead of the constant, since
I cannot test them.
On arm64, mips and sparc64, some static data structures are sized by
KSTACK_PAGES, so the tunable is disabled.
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
vm_offset_t pmap_quick_enter_page(vm_page_t m)
void pmap_quick_remove_page(vm_offset_t kva)
These will create and destroy a temporary, CPU-local KVA mapping of a specified page.
Guarantees:
--Will not sleep and will not fail.
--Safe to call under a non-sleepable lock or from an ithread
Restrictions:
--Not guaranteed to be safe to call from an interrupt filter or under a spin mutex on all platforms
--Current implementation does not guarantee more than one page of mapping space across all platforms. MI code should not make nested calls to pmap_quick_enter_page.
--MI code should not perform locking while holding onto a mapping created by pmap_quick_enter_page
The idea is to use this in busdma, for bounce buffer copies as well as virtually-indexed cache maintenance on mips and arm.
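As a hypothetical sketch of the intended busdma use (the helper and
its arguments are invented for illustration):

    /* Copy a bounce buffer into a page that may have no existing KVA. */
    static void
    bounce_copy_to_page(vm_page_t m, const void *src, size_t len)
    {
            vm_offset_t kva;

            kva = pmap_quick_enter_page(m);  /* will not sleep or fail */
            memcpy((void *)kva, src, len);
            pmap_quick_remove_page(kva);     /* no nesting, no locking */
    }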
NOTE: the non-i386, non-amd64 implementations of these functions still need review and testing.
Reviewed by: kib
Approved by: kib (mentor)
Differential Revision: http://reviews.freebsd.org/D3013
provide the semantics defined by the C11 fences with the corresponding
memory_order.
atomic_thread_fence_acq() gives r | r, w, where r and w are read and
write accesses, and | denotes the fence itself.
atomic_thread_fence_rel() is r, w | w.
atomic_thread_fence_acq_rel() is the combination of the acquire and
release in a single operation. Note that reads after the acq+rel fence
could be made visible before writes preceding the fence.
atomic_thread_fence_seq_cst() orders all accesses before/after the
fence, and the fence itself is globally ordered against other
sequentially consistent atomic operations.
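For example, a release/acquire pair ordering a flag-protected handoff
(an illustrative sketch; the variables are invented and compute()/use()
stand in for real work):

    static int shared_data;
    static volatile int shared_flag;    /* volatile: loop re-reads it */

    /* Producer: make the data visible before the flag. */
    shared_data = compute();
    atomic_thread_fence_rel();          /* r, w | w */
    shared_flag = 1;

    /* Consumer: observe the flag before touching the data. */
    while (shared_flag == 0)
            continue;
    atomic_thread_fence_acq();          /* r | r, w */
    use(shared_data);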
Reviewed by: alc
Discussed with: bde
Sponsored by: The FreeBSD Foundation
MFC after: 3 weeks
discrimination between different subarch binaries, at least for mips
and arm. Arm is implemented, mips is still tbd, so not currently
exported. aarch64 does not export this because aarch64 binaries use
different tags and flags than arm.
Differential Revision: https://reviews.freebsd.org/D2611
1. Align to a 64-bit address so 64-bit data will be correctly aligned.
2. Add a comment explaining why.
3. Remove an unneeded value from the struct.
This fixes an issue where the struct may not be correctly aligned on the
stack in the syscall function. This can lead to accessing a 64-bit value
at a non-64-bit-aligned address, which will raise an exception and panic
the kernel.
We have been lucky in that on arm and armv6 both clang and gcc correctly
align the data, even without us asking them to; however, on armeb with
clang this turns out not to be the case. This tells the compiler we
really do need this to be aligned.
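A minimal sketch of the fix; the struct shape and member names here are
invented, only the attribute matters:

    struct sysc_frame {
            register_t      args[4];
            uint64_t        wide_arg;  /* needs an 8-byte boundary */
    } __aligned(8);  /* don't rely on the compiler doing it unasked */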
Reported and tested by: jmg (on armeb with clang)
MFC after: 1 week [1, 2]
Perform cache writebacks and invalidations in the correct (inner to outer
or vice versa) order, and add comments that explain that.
Consistently use 'va' as the variable name for virtual addresses.
Submitted by: Michal Meloun <meloun@miracle.cz>
For consistency with the naming conventions used by the other
implementations, kill armv7_sleep and keep armv7_cpu_sleep.
Differential Revision: https://reviews.freebsd.org/D2537
Submitted by: John Wehle
Reviewed by: ian@, andrew@
because the i386 pmap on which the new armv6 pmap is based had it, and in
r281707 pmap_lazyfix() was removed from the i386 pmap.
Discussed with: kib
Submitted by: Michal Meloun (via Svatopluk Kraus)
Offset for the power control register was specified incorrectly (it had
the same value as the prefetch control register). This change corrects
the offset value to 0xF80, per the ARM PL310 documentation.
Submitted by: Steve Kiernan <stevek@juniper.net>
Obtained from: Juniper Networks, Inc.
Previously we used pmap_kremove(), but with ARM_NEW_PMAP it does the remove
in a way that isn't SMP-coherent (which is appropriate in some circumstances
such as mapping/unmapping sf buffers). With matching enter/remove routines
for device mappings, each low-level implementation can do the right thing.
Reviewed by: Svatopluk Kraus <onwahe@gmail.com>
the PMC_IN_KERNEL() macro definition.
Add missing macros to extract the return address (LR) from the trapframe.
Discussed with: andrew
Obtained from: Cambridge/L41
Sponsored by: DARPA, AFRL
MFC after: 2 weeks
This is pretty much a complete rewrite based on the existing i386 code. The
patches have been circulating for a couple years and have been looked at by
plenty of people, but I'm not putting anybody on the hook as having reviewed
this in any formal sense except myself.
After this has gotten wider testing from the user community, ARM_NEW_PMAP
will become the default and various dregs of the old pmap code will be
removed.
Submitted by: Svatopluk Kraus <onwahe@gmail.com>,
Michal Meloun <meloun@miracle.cz>
Each platform performs the virtual memory split between kernel and user
space and assigns the kernel a certain amount of memory space. However,
it is sometimes reasonable to change the default values. Such a
situation may happen on systems where the demand for kernel buffers is
high, many devices occupy memory, etc. This of course comes at the cost
of decreasing the user space memory range, so it shall be used with
care. Most embedded systems will not suffer from this limitation but
rather take advantage of this potential, since the default behavior is
left unchanged.
Submitted by: Wojciech Macek <wma@semihalf.com>
Reviewed by: imp
Obtained from: Semihalf
used by other places that expect to unwind the stack, e.g. dtrace and
stack(9).
As I have written most of this code I'm changing the license to the
standard FreeBSD license. I have received approval from the other
developers who have changed any of the affected code.
Approved by: ian, imp, rpaulo, eadler (all license change)
Switch the cache line size during invalidations/flushes
to be read from the CP15 cache type register.
Submitted by: Wojciech Macek <wma@semihalf.com>
Reviewed by: ian, imp
Obtained from: Semihalf
the data the inline functions access together at the start of the bus_space
struct. The start-of part isn't so important, it's the grouping-together
that's the point: now all the most-accessed data should be in one cache line.
Suggested by: cognet
every operation to retrieve the bs_cookie value almost nothing actually uses.
The bus_space struct contains a private data pointer (poorly named bs_cookie,
now renamed to bs_privdata) which is used only by a few old armv4 xscale
implementations. The bus_space functions were all defined to take this
value as the first parameter instead of the bus_space_tag_t, requiring all
the inline macro and function expansions to dereference the tag to pass it
to another function, which never uses it. Now all the functions take the tag
as the first parameter and retrieve the privdata if they need it.
Also fix a couple bus_space_unmap() implementations that were calling
kva_free() instead of pmap_unmapdev().
Discussed with: cognet
that some #ifdef SMP code is also conditional on __ARM_ARCH >= 7; we don't
support SMP on armv6, but some drivers and modules are compiled with it
forced on via the compiler command line.
code in sys/kern/kern_dump.c. Most dumpsys() implementations are nearly
identical and simply redefine a number of constants and helper subroutines;
a generic implementation will make it easier to implement features around
kernel core dumps. This change does not alter any minidump code and should
have no functional impact.
PR: 193873
Differential Revision: https://reviews.freebsd.org/D904
Submitted by: Conrad Meyer <conrad.meyer@isilon.com>
Reviewed by: jhibbits (earlier version)
Sponsored by: EMC / Isilon Storage Division
mostly paves the way for the new pmap code, and shouldn't result in any
noticeable behavior differences.
Submitted by: Svatopluk Kraus <onwahe@gmail.com>,
Michal Meloun <meloun@miracle.cz>
The ancient gas we've been using interprets .align 0 as align to the
minimum required alignment for the current section. Clang's integrated
assembler interprets it as align to a byte boundary. Fortunately both
assemblers interpret a non-zero value as align to 2^N so just make sure
we have appropriate non-zero values everywhere.
The elftoolchain project includes these additional defines for various
userland programs. Given that arch-specific defines are still interesting
in the context of userland programs reading or writing ELF metadata, they
should be included in top-level ELF headers.
Remove duplicate defines from ARM and MIPS elf headers.
Submitted by: will (initial version)
Reviewed by: imp, will
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D844
are inline functions that handle all the routine maintenance operations
except the flush-all and invalidate-all routines which are required only
during early kernel init.
These inline functions should be very much faster than the old mechanism
that involved jumping through the big cpufuncs table, especially for
common operations such as invalidating a single TLB entry. Note that
nothing is calling these yet; this is just required infrastructure
for upcoming changes to the pmap-v6 code.
mechanism defined for armv7 (and also present on some armv6 chips including
the arm1176 used on rpi). The information is parsed into a global cpuinfo
structure, which will be used by (upcoming) new cache and tlb maintenance
code to handle cpu-specific variations of the maintenance sequences.
Submitted by: Svatopluk Kraus <onwahe@gmail.com>,
Michal Meloun <meloun@miracle.cz>
code, passing a 0/1 flag that indicates which type of abort it was. This
sets the stage for unifying the handling of page faults in a single routine.
Submitted by: Svatopluk Kraus <onwahe@gmail.com>,
Michal Meloun <meloun@miracle.cz>
If it seems like this is getting out of hand, I quite agree. I wonder if
it's safe, here in the 21st century, to lose the distinction between C and
ASM symbols?
around so that related things are more grouped together, rewrite comments.
No functional changes, this is all so that the functional changes in the
next commit will stand out.
'extra' entry points which are nested within or provide a synonym name
for another function. It's most likely not safe to be messing with the
IP and LR registers at anything other than the primary entry point to a
function. Anywhere beyond initial function entry, those registers may
be in use as scratch or variable registers.
- Eliminate unused irqframe
- Eliminate unused saframe
- Instead of splitting r4-sp storage between the stack and switchframe,
just put all the registers in switchframe and eliminate the un_32 struct.
Submitted by: Svatopluk Kraus <onwahe@gmail.com>,
Michal Meloun <meloun@miracle.cz>
If this feels like deja vu... the last time this was fixed in this file
only ARM_MMU_V6 was fixed, this time it's ARM_ARCH_V6 (and this time I
searched for other occurrences of pj4b in here).
It is automatically set when -fPIC is passed to the compiler.
Reviewed by: dim, kib
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D1179
and casuword(9), but do not mix the value read and the indication of a
fault.
I know (or remember) enough assembly to handle x86 and powerpc. For
arm, mips and sparc64, implement fueword() and casueword() as wrappers
around fuword() and casuword(), which means that the functions cannot
distinguish between -1 and a fault.
On architectures where fueword() and casueword() are native, implement
fuword() and casuword() using fueword() and casuword(), to reduce
assembly code duplication.
Sponsored by: The FreeBSD Foundation
Tested by: pho
MFC after: 2 weeks (ia64 needs treating)
registers and use it in the ARMv7 CPU functions.
The sysreg.h file has been checked by hand; however, it may contain errors
with the comments on when a register was first introduced. The ARMv7 cpu
functions have been checked by compiling both the previous and this version
and comparing the md5 of the object files.
Submitted by: Svatopluk Kraus <onwahe at gmail.com>
Submitted by: Michal Meloun <meloun at miracle.cz>
Reviewed by: ian, rpaulo
Differential Revision: https://reviews.freebsd.org/D795
In the fdt data we've written for ourselves, the interrupt properties
for GIC interrupts have just been a bare interrupt number. In standard
data that conforms to the published bindings, GIC interrupt properties
contain 3-tuples that describe the interrupt as shared vs private, the
interrupt number within the shared/private address space, and configuration
info such as level vs edge triggered.
The new gic_decode_fdt() function parses both types of data, based on the
#interrupt-cells property. Previously, each platform implemented a decode
routine and put a pointer to it into fdt_pic_table. Now they can just
list this function in their table instead if they use arm/gic.c.
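A sketch of the decode step, assuming the standard GIC binding (SPIs
map to hwirq 32 + n and PPIs to 16 + n; cell values are shown already
converted to host byte order):

    if (ncells == 1)
            irq = cells[0];         /* legacy bare interrupt number */
    else
            /* cells[0]: 0 = SPI, 1 = PPI; cells[1]: number within
             * that space; cells[2]: level/edge and polarity config. */
            irq = cells[1] + (cells[0] == 0 ? 32 : 16);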
header (Elf_Ehdr) to determine if a particular interpreter wants to
accept it or not. Use this mechanism to filter EABI arm on OABI arm
kernels, and vice versa. This method could also be used to implement
OABI on EABI arm kernels, if desired, or to allow a single mips kernel
to run o32, n32 and n64 binaries.
Differential Revision: https://reviews.freebsd.org/D609
By Richard Earnshaw at ARM
>
>GCC has for a number of years provided a set of pre-defined macros for
>use with determining the ISA and features of the target during
>pre-processing. However, the design was always somewhat cumbersome in
>that each new architecture revision created a new define and then
>removed the previous one. This meant that it was necessary to keep
>updating the support code simply to recognise a new architecture being
>added.
>
>The ACLE specification (ARM C Language Extensions)
>(http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.set.swdev/index.html)
>provides a much more suitable interface and GCC has supported this
>since gcc-4.8.
>
>This patch makes use of the ACLE pre-defines to map to the internal
>feature definitions. To support older versions of GCC a compatibility
>header is provided that maps the traditional pre-defines onto the new
>ACLE ones.
Stop using __FreeBSD_ARCH_armv6__ and switch to __ARM_ARCH >= 6 in the
couple of places in the tree. clang already implements ACLE. Add a define
that says we implement version 1.1, even though the implementation
isn't quite complete.
The MD allocators were very common, however there were some minor
differences. These differences were all consolidated in the MI
allocator, under ifdefs. The defines from machine/vmparam.h turn on
features required for a particular machine. For details look in the
comment in sys/sf_buf.h.
As a result, no MD code is left in sys/*/*/vm_machdep.c. Some arches
still have machine/sf_buf.h, which is usually quite small.
Tested by: glebius (i386), tuexen (arm32), kevlo (arm32)
Reviewed by: kib
Sponsored by: Netflix
Sponsored by: Nginx, Inc.
don't need any #ifdef stuff to use atomic_load/store_64() elsewhere in
the kernel. For armv4 the atomics are trivial to implement for kernel
code (just disable interrupts), less so for user mode, so this only has
the kernel mode implementations for now.
value shared across multiple cores is with atomic_load_64() and
atomic_store_64(), because the normal 64-bit load/store instructions
are not atomic on 32-bit arm. Luckily the ldrexd/strexd instructions
that are atomic are fairly cheap on armv6. Because it's fairly simple
to do, this implements all the ops for 64-bit, not just load/store.
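Illustrative use (the counter variable is invented):

    static volatile uint64_t stats_bytes;

    /* A plain 64-bit access compiles to a pair of 32-bit loads or
     * stores on 32-bit arm, so a concurrent writer could be observed
     * half-updated; the atomics avoid that. */
    uint64_t snap = atomic_load_64(&stats_bytes);
    atomic_store_64(&stats_bytes, snap + 1);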
Reviewed by: andrew, cognet
We have functions nested within functions, and places where we start a
function then never end it, we just jump to the middle of something else.
We tried to express this with nested ENTRY()/END() macros (which result
in .fnstart and .fnend directives), but it turns out there's no way to
express that nesting in ARM EHABI unwind info, and newer tools treat
multiple .fnstart directives without an intervening .fnend as an error.
These changes introduce two new macros, EENTRY() and EEND(). EENTRY()
creates a global label you can call/jump to just like ENTRY(), but it
doesn't emit a .fnstart. EEND() is a no-op that just documents the
conceptual endpoint that matches up with the same-named EENTRY().
This is based on patches submitted by Stepan Dyatkovskiy, but I made some
changes and added the EEND() stuff, so blame any problems on me.
Submitted by: Stepan Dyatkovskiy <stpworld@narod.ru>
handling. For statically linked apps this uses the __exidx_start/end
symbols set up by the linker. For dynamically linked apps it finds the
shared object that contains the given address and returns the location and
size of the exidx section in that shared object.
The dl_unwind_find_exidx() name is used by other BSD projects and Android,
and is mentioned in clang 3.5 comments as "the BSD interface" for finding
exidx data. GCC (in libgcc_s) expects the exact same API and functionality
to be provided by a function named __gnu_Unwind_Find_exidx(), so we provide
that with an alias ("strong reference").
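Typical use, as a sketch (the signature shown is the common
BSD/Android form and is assumed here):

    /* Returns the exidx section covering 'pc'; *pcount receives the
     * number of 8-byte index entries in that section. */
    _Unwind_Ptr dl_unwind_find_exidx(_Unwind_Ptr pc, int *pcount);

    int count;
    _Unwind_Ptr exidx = dl_unwind_find_exidx((_Unwind_Ptr)pc, &count);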
Reviewed by: kib@
MFC after: 1 week
memory ordering model allows writes to different devices to complete out
of order, leading to a situation where the write that clears an interrupt
source at a device can complete after a write that unmasks and EOIs the
interrupt at the interrupt controller, resulting in a spurious
re-interrupt.
This adds a generic barrier function specific to the needs of interrupt
controllers, and calls that function from the GIC and TI AINTC controllers.
There may still be other soc-specific controllers that need to make the call.
Reviewed by: cognet, Svatopluk Kraus <onwahe@gmail.com>
MFC after: 3 days
platform code, it is expected these will be merged in the future when the
ARM code is more complete.
Until more boards can be tested, only use this with the Raspberry Pi
and rename the functions on the other SoCs.
Reviewed by: ian@
Here, "suitably endowed" means that the System Control Coprocessor
(#15) has Performance Monitoring Registers, including a CCNT (Cycle
Count) register.
The CCNT register is used in a way similar to the TSC register in
x86 processors by the get_cyclecount(9) function. The entropy-harvesting
thread is a heavy user of this function, and will benefit from not
having to call binuptime(9) instead.
One problem with the CCNT register is that it is 32-bit only, so
the upper 32 bits of the returned number are always 0. The entropy
harvester does not care, but in case anyone else does, follow-up
work may include an interrupt trap to increment an upper-32-bit
counter on CCNT overflow.
Another problem is that the CCNT register is not readable in user-mode
code; it can be made readable by userland, but then it is also
writable, and so is a good chunk of the PMU system. For that reason,
the CCNT is not enabled for user-mode access in this commit.
Like the x86, there is one CCNT per core, so they don't all run in
perfect sync.
Reviewed by: ian@ (an earlier version)
Tested by: ian@ (same earlier version)
Committed from: WANDBOARD-QUAD
On modern ARM SoCs the L2 cache controller sits between the CPU and the
AXI bus, and most on-chip memory-mapped devices are on the AXI bus. We
map the device registers using the 'Device' memory attribute, which means
the memory is not cached, but writes to it are buffered. Ensuring that a
write has made it all the way to a device may require that the L2
controller take some action.
There is currently only one implementation of the new function, for the
PL310 cache controller. It invokes a function that the controller
manual calls "cache sync", but it actually has nothing to do with the
cache at all: it triggers a drain of all pending store buffer writes
and blocks until they complete.
The sheeva and xscale L2 controllers (which predate the concept of Device
memory) don't seem to have a corresponding function. It appears that the
standard armv5 drain_writebuf function includes draining all the way
through the L2 controller.
This was added ca. 2004 for the purpose of ensuring the caches were in the
right state after the debugger set a breakpoint. kdb_cpu_sync_icache()
was added in 2007 to handle that situation, and now the wbinv_all is
actually harmful because the operation isn't broadcast to other cores.
using armv7_idcache_wbinv_all, because wbinv_all doesn't broadcast the
operation to other cores. In elf_cpu_load_file() use icache_sync_all()
and explain why it's needed (and why other sync operations aren't).
As part of doing this, all callers of cpu_icache_sync_all() were
inspected to ensure they weren't relying on the old side effect of
doing a wbinv_all along with the icache work.
* Save the required VFP registers on context switch. If the exception bit
is set we need to save and restore the FPINST register, and if the fp2v
bit is also set we need to save and restore FPINST2.
* Move saving and restoring the floating point control registers to C.
* Clear the fpexc exception and fp2v flags on a floating-point exception.
* Signal a SIGFPE if the fpexc exception flag is set on an undefined
instruction. This is how the ARM core signals to software there is a
floating-point exception.
which was added by cognet in 2012, so remove the no-longer-applicable
license stuff that referred to all the old contents, and put in a
standard 2-clause BSD license (to cover the 6 lines of useful code left
in here).
swi_exit code in exception.S instead of having its own inline expansion
of the DO_AST and PULLFRAME macros. That means that now all references
to the PUSH/PULLFRAME and DO_AST macros are localized to exception.S,
so move the macros themselves into there and remove them from asmacros.h
never actually ran on these chips (other than using SA1 support in an
emulator to do the early porting to FreeBSD long long ago). The clutter
and complexity of some of this code keeps getting in the way of other
maintenance, so it's time to go.
enabled. In vfp_discard(), if the state in the VFP hardware belongs to
the thread which is dying, NULL out pcpu fpcurthread to indicate the
state currently in the hardware belongs to nobody.
Submitted by: Juergen Weiss
Pointy hat to: me
a leftover from the days when a low-level debugger had hooks in the
undefined exception vector and needed stack space to function. These days
it effectively isn't used because we switch immediately to the svc32 mode
stack on exception entry. For that, the single undef mode stack per core
that gets set up at init time works fine.
The stack wasn't necessary but it was harmful, because the space for it
was carved out of the normal per-thread svc32 stack, in effect cutting
that 8K stack in half. If svc32 mode used more than 4k of stack space it
wandered down into the undef mode stack, and then an undef exception would
overwrite a couple words on the stack while switching to svc32 mode,
corrupting the scv32 stack. Having another stack abut the bottom of the
svc32 stack also effectively mooted the guard page below the stack.
This work is based on analysis and patches submitted by Juergen Weiss.
The old code was full of complexity that would only matter if the
kernel itself used the VFP hardware. Now that's reduced to either killing
the userland process or panicking the kernel on an illegal VFP instruction.
This removes most of the complexity from the assembler code, reducing it
to just calling the save code if the outgoing thread used the VFP.
The routine that stores the VFP state now takes a flag that indicates
whether the hardware should be disabled after saving state. Right now it
always is, but this makes the code ready to be used by get/set_mcontext()
(doing so will be addressed in a future commit).
Remove the arm-specific pc_vfpcthread from struct pcpu and use the MI
field pc_fpcurthread instead.
Reviewed by: cognet
we've been using was actually just spinning due to ARM having redefined
the old 'wait for interrupt' operation via the system coprocessor as a nop
and replacing it with a WFI instruction.
implementation in arm/machdep.c. Most arm platforms either don't need to
do anything, or just need to call the standard eventtimer init routines.
A generic implementation that does that is now provided via weak linkage.
Any platform that needs to do something different can provide its own
implementation to override the generic one.
implementations for each of the chips we support. Most chips up through
armv6 can use the armv4 implementation which has a single coprocessor
opcode for this operation. The rather more complex armv7 implementation
comes from netbsd.
it into a bunch of different .c files. Remove declarations for the unused
mptramp() function from everywhere except AramadaXP (and I think it's
really not used there either, because the code that references it appears
to be insanely does-nothing in nature).
Invalidate the L1 PTE regardless of the existence of the corresponding
l2_bucket. This is relevant when a superpage is entered via
pmap_enter_object() and fixes a crash when entering a page in place of
a superpage that was not properly removed.
communicate the kernel's physical load address from where it's known in
initarm() into cpu_mp_start() which is called from non-arm code and
takes no parameters.
This adds the global variable and ensures that all the various copies
of initarm() set it. It uses the variable in cpu_mp_start(), eliminating
the last uses of KERNPHYSADDR outside of locore.S (where we can now
calculate it instead of relying on the constant).
a new physmem.c file. The new code provides helper routines that can be
used by legacy SoCs and newer FDT-based systems. There are routines to
add one or more regions of physically contiguous ram, and exclude one or
more physically contiguous regions of ram. Ram can be excluded from crash
dumps, from being given over to the vm system for allocation management,
or both. After all the included and excluded regions have been added,
arm_physmem_init_kernel_globals() processes the regions into the global
dump_avail and phys_avail arrays and realmem and physmem variables that
communicate memory configuration to the rest of the kernel.
Convert all existing SoCs to use the new helper code.
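A sketch of a converted SoC's setup; arm_physmem_init_kernel_globals()
is from the description above, while the region add/exclude helper
names and the addresses here are assumptions for illustration:

    /* Hand this board's ram to the helper code... */
    arm_physmem_hardware_region(0x80000000, 512 * 1024 * 1024);
    /* ...but keep the kernel image out of the page pool. */
    arm_physmem_exclude_region(kernstart, kernsize, EXFLAG_NOALLOC);
    /* Build dump_avail/phys_avail, realmem and physmem. */
    arm_physmem_init_kernel_globals();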
This was an optimization used only by a few xscale platforms. Part of
the optimization was to create a direct map for all physical pages, and
that resulted in making multiple mappings of pages in a way that bypassed
the logic in pmap.c to handle VIVT cache aliasing. It also just generally
made the code more complex and hard to maintain for all SoCs.
Reviewed by: cognet
the old way was to store pcpu in a register, and get curthread from pcpu,
which is not very atomic, and led to issues if the thread was migrated
to another core between the time we got the pcpu address and the time we
got curthread.
Instead, we now store curthread where pcpu used to be stored, and we
calculate the pcpu address based on the cpu id.
It turns out the version of gas we're using interprets the old '_all' mask
as 'fc' instead of 'fsxc'. That is, "all" doesn't really mean "all".
This was the cause of the "wrong-endian register restore" bug that's
been causing problems with some cortex-a9 chips. The 'endian' bit in the
spsr register would never get changed (it falls into the 'x' mask group)
and the first return-from-exception would fail if the chip had powered on
with garbage in the spsr register that included the big-endian bit. It's
unknown why this affected only certain cortex-a9 chips.
related to setting up static device mappings. Since it was only used by
arm/mv/mv_pci.c, it's now just static functions within that file, plus
one public function that gets called only from arm/mv/mv_machdep.c.
obsolete. This involves the following pieces:
- Remove it entirely on PowerPC, where it is not used by MD code either
- Remove all references to machine/fdt.h in non-architecture-specific code
(aside from uart_cpu_fdt.c, which is shared by ARM and MIPS and so is
somewhat non-arch-specific).
- Fix code relying on header pollution from machine/fdt.h includes
- Legacy fdtbus.c (still used on x86 FDT systems) now passes resource
requests to its parent (nexus). This allows x86 FDT devices to allocate
both memory and IO requests and removes the last notionally MI use of
fdtbus_bs_tag.
- On those architectures that retain a machine/fdt.h, unused bits like
FDT_MAP_IRQ and FDT_INTR_MAX have been removed.
Add support for setting trigger level and polarity in the GIC.
A new function pointer was added to nexus, corresponding to the
function which sets level/sense in the hardware (GIC).
Submitted by: Wojciech Macek <wma@semihalf.com>
Obtained from: Semihalf
Qualcomm Snapdragon S4 and Snapdragon 400/600/800 SoCs and has
architectural similarities to the ARM Cortex-A15. As for development
boards, the IFC6400 series embedded boards from Inforce Computing use
the Snapdragon S4 Pro/APQ8064.
Approved by: stas (mentor)
shifts into the sign bit. Instead use (1U << 31), which gets the
expected result.
This fix is not ideal as it assumes a 32-bit int, but it does fix the
issue for most cases.
A similar change was made in OpenBSD.
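The difference, in short (assuming a 32-bit int):

    int32_t  bad  = 1 << 31;   /* undefined: shifts into the sign bit */
    uint32_t good = 1U << 31;  /* well-defined unsigned shift */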
Discussed with: -arch, rdivacky
Reviewed by: cperciva
words, every architecture is now auto-sizing the kmem arena. This revision
changes kmeminit() so that the definition of VM_KMEM_SIZE_SCALE becomes
mandatory and the definition of VM_KMEM_SIZE becomes optional.
Replace or eliminate all existing definitions of VM_KMEM_SIZE. With
auto-sizing enabled, VM_KMEM_SIZE effectively became an alternate spelling
for VM_KMEM_SIZE_MIN on most architectures. Use VM_KMEM_SIZE_MIN for
clarity.
Change kmeminit() so that the effect of defining VM_KMEM_SIZE is similar to
that of setting the tunable vm.kmem_size. Whereas the macros
VM_KMEM_SIZE_{MAX,MIN,SCALE} have had the same effect as the tunables
vm.kmem_size_{max,min,scale}, the effects of VM_KMEM_SIZE and vm.kmem_size
have been distinct. In particular, whereas VM_KMEM_SIZE was overridden by
VM_KMEM_SIZE_{MAX,MIN,SCALE} and vm.kmem_size_{max,min,scale}, vm.kmem_size
was not. Remedy this inconsistency. Now, VM_KMEM_SIZE can be used to set
the size of the kmem arena at compile-time without that value being
overridden by auto-sizing.
Update the nearby comments to reflect the kmem submap being replaced by the
kmem arena. Stop duplicating the auto-sizing formula in every machine-
dependent vmparam.h and place it in kmeminit() where auto-sizing takes
place.
Reviewed by: kib (an earlier version)
Sponsored by: EMC / Isilon Storage Division
allocates kva space from the top down for the device mappings and builds
entries in an internal table which is automatically used later by
arm_devmap_bootstrap(). The platform code just calls the new
arm_devmap_add_entry() function as many times as it needs to (up to 32
entries allowed; most platforms use 2 or 3 at most).
There is also a new arm_devmap_lastaddr() function that returns the lowest
kva address allocated; this can be used to implement initarm_lastaddr()
which is used to initialize vm_max_kernel_address.
The new code is based on a similar concept developed for the imx family
SoCs recently. They will soon be converted to use this new common code.
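A sketch of a converted platform, using the function names from the
description above (the register addresses are invented):

    vm_offset_t
    initarm_lastaddr(void)
    {
            /* KVA for these is allocated from the top down. */
            arm_devmap_add_entry(0x48000000, 0x00100000); /* periphs */
            arm_devmap_add_entry(0x4a000000, 0x00100000); /* more */
            return (arm_devmap_lastaddr());
    }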
static device mappings, rather than as the first of the initializations
that a platform can hook into. This allows a platform to allocate KVA
from the top of the address space downwards for things like static device
mapping, and return the final "last usable address" result after that and
other early init work is done.
Because some platforms were doing work in initarm_lastaddr() that needs to
be done early, add a new initarm_early_init() routine and move the early
init code to that routine on those platforms.
Rename platform_devmap_init() to initarm_devmap_init() to match all the
other init routines called from initarm() that are designed to be
implemented by platform code.
Add a comment block that explains when these routines are called and the
type of work expected to be done in each of them.
new devmap.[ch] files. Emphasize the MD nature of these things by using
the prefix arm_devmap_ on the function and type names (already a few of
these things found their way into MI code, hopefully it will be harder to
do by accident in the future).
out common code related to mapping device memory into a new devmap.c file.
Remove the growing duplication of code that used pmap_devmap_find_pa() and
then did some math with the returned results to generate a virtual address,
and likewise in reverse to get a physical address. Now there are a pair
of functions, arm_devmap_vtop() and arm_devmap_ptov(), to do that. The
bus_space_map() implementations are rewritten in terms of these.
accessed through the direct map unless the kernel configuration actually
includes a direct map. Only a few configurations do, and for the rest the
unnecessary free page pool is a small pessimization.
Tested by: zbb
MFC after: 6 weeks
Use the values of the correct defines to determine a statement's
result. ARM_ARCH_ symbols are always defined, hence only their values
are relevant.
Reviewed by: cognet
Sheeva PJ4Bv6 - based chips were only prototypes for V7 class Armada
SoC family. Current in-tree support for PJ4Bv6 will not work, and there
should also be no platforms in active use that would incorporate that
CPU revision.
The only remaining user was the code that allocates bounce pages for armv4
busdma. It's not clear why bounce pages would need uncached memory, but
if that ever changes, kmem_alloc_attr() would be the way to get it.
really need it. That would be almost everywhere it was included. Add
it in a couple files that really do need it and were previously getting
it by accident via another header.
included by vm/pmap.h, which is a prerequisite for arm/machine/pmap.h
so there's no reason to ever include it directly.
Thanks to alc@ for pointing this out.
Promoting base pages to superpages can increase TLB coverage and allow for
efficient use of page table entries. This development provides FreeBSD/ARM
with superpages management mechanism roughly equivalent to what we have for
i386 and amd64 architectures.
1. Add mechanism for automatic promotion of 4KB page mappings to 1MB section
mappings (and demotion when not needed, respectively).
2. Managed and non-kernel mappings are now superpages-aware.
3. The functionality can be enabled by setting "vm.pmap.sp_enabled" tunable to
a non-zero value (either in loader.conf or by modifying "sp_enabled"
variable in pmap-v6.c file). By default, automatic promotion is currently
disabled.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Reviewed by: alc
Sponsored by: The FreeBSD Foundation, Semihalf
This allows for enabling and configuring superpages reservation mechanism in
order to allocate and populate 256 4KB base pages (for the purpose of
promotion to a 1MB superpage).
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Reviewed by: alc
Sponsored by: The FreeBSD Foundation, Semihalf
Revise L2_S_PROT_MASK to include all of the protection bits. Notice that
clearing these bits does not always take away the corresponding permissions
(for example, permission is granted when the bit is cleared). The bits are
cleared but are to be set or left cleared accordingly in pmap_set_prot(),
pmap_enter_locked(), etc.
Clear L2_XN along with L2_S_PROT_MASK in pmap_set_prot() so that all
permissions related bits are cleared before actual configuration.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Reviewed by: gber
Sponsored by: The FreeBSD Foundation, Semihalf
because an exception may happen at any time. The stack alignment rules
on ARM EABI state that the only place the stack must be 8-byte aligned
is on a function boundary.
If an exception happens while a function is setting up or tearing down
its stack frame, it may not be correctly aligned. There is also no
requirement for it to be when the function is a leaf node.
The fix is to align the stack after we have stored a backup of the old stack
pointer, but before we have stored anything in the trapframe. Along with
this we need to adjust the size of the trapframe by 4 bytes to ensure the
stack below it is also correctly aligned.
instruction set. Thumb-2 requires an if-then instruction to implement
conditional codes.
When building for ARM mode the if-then instructions do not generate any
assembled instruction as per the ARMv7-A Architecture Reference Manual, and
are safe to use.
While this allows the atomic instructions to be built, it doesn't mean we
fully support Thumb code. It works in small tests, but is still known to
fail in a large number of places.
While here add a check for the armv6t2 architecture.
Issues were noted by Bruce Evans and are present on all architectures.
On i386, a counter fetch should use atomic read of 64bit value,
otherwise carry from the increment on other CPU could be lost for the
given fetch, making error of 2^32. If 64bit read (cmpxchg8b) is not
available on the machine, it cannot be SMP and it is enough to disable
preemption around read to avoid the split read.
On x86 the counter increment is not atomic on purpose, which makes it
possible for the store of the incremented result to override a
just-zeroed per-cpu slot. The effect would be a counter going off by
arbitrary value after zeroing. Perform the counter zeroing on the
same processor which does the increments, making the operations
mutually exclusive. On i386, same as for the fetching, if the
cmpxchg8b is not available, machine is not SMP and we disable
preemption for zeroing.
PowerPC64 is treated the same as amd64.
For other architectures, the changes were made to allow the compilation to
succeed, without fixing the issues with zeroing or fetching. It
should be possible to handle them by using the 64bit loads and stores
atomic WRT preemption (assuming the architectures also converted from
using critical sections to proper asm). If an architecture does not
provide the facility, using a global (spin) mutex would be a
non-optimal but working solution.
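The i386 fetch problem, illustrated (a sketch, not the committed
code; 'slot' stands for a per-cpu counter slot):

    /* A plain read of a 64-bit per-cpu slot is two 32-bit loads; an
     * increment on another CPU can carry between the halves, making
     * the fetched value off by 2^32. */
    uint64_t racy = *(volatile uint64_t *)slot;  /* split read */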
Noted by: bde
Sponsored by: The FreeBSD Foundation
past a trapframe.
Use this macro in exception_exit, as it is the function the unwinder
enters, because the functions that store the frame set lr to point to
it.
check which variant we are on, and if it is VFPv3 or v4 and has 32
double registers, we save them. This fixes VFP support on the
Raspberry Pi.
While here clean fmrx and fmxr up to use the register names from vfp.h
as opposed to the raw register names.
* Stop pretending we support anything other than ELF by removing code
surrounded by #ifdef __ELF__ ... #endif.
* Remove _JB_MAGIC_SETJMP and _JB_MAGIC__SETJMP, they are defined in
setjmp.h, which is able to be included from asm.
* Fix the spelling of dependent.
* Rename END to _END and add END and ASEND to complement ENTRY and ASENTRY
respectively
* Add macros to simplify accessing the Global Offset Table, some of these
will be used in the upcoming update to the setjmp functions.
Using PVF_MOD, PVF_REF and PVF_EXEC is redundant as we can get the proper
info from PTE bits.
When the mapping is marked as executable and has been referenced we assume
that it has been executed. Similarly, when the mapping is set to be writable
and is referenced, it must have been due to write access to it.
PVF_MOD and PVF_REF flags are kept just for pmap_clearbit() usage,
to pass the information on which bit should be cleared.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
Use pmap_find_pv if needed instead of duplicating its code throughout
pmap-v6.
Avoid possible NULL pointer dereference in pmap_enter_locked()
When trying to get m->md.pv_memattr, make sure that m != NULL,
in particular that vector_page is set to be NULL.
Do not set PGA_REFERENCED flag in pmap_enter_pv().
On ARM any new page reference will result in either entering the new
mapping by calling pmap_enter, etc. or fixing-up the existing mapping in
pmap_fault_fixup().
Therefore we set PGA_REFERENCED flag in the earlier mentioned cases and
setting it later in pmap_enter_pv() is just waste of cycles.
Delete unused pm_pdir pointer from the pmap structure.
Rearrange brackets in the fault cause detection in trap.c
Place the brackets correctly in order to make the course of the
conditions immediately apparent.
Unify naming in pmap-v6.c and improve style
Use naming common for whole pmap and compatible with other pmaps,
improve style where possible:
pm -> pmap
pg -> m
opg -> om
*pt -> *ptep
*pte -> *ptep
*pde -> *pdep
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
bit in PTE.
Enable the Access Flag in CPU control. With AF enabled, each valid
mapping needs to have the referenced bit in the PTE set in order to be
cacheable in the TLB.
The AP[0] bit is to be used as the reference flag.
All access permissions are encoded by AP[2:1], wherein AP[1] is in fact
"user enable" and AP[2] (APX) is "write disable".
All mappings are always set to be valid. Reference emulation is
performed by setting/clearing the reference flag in the PTE.
md.pvh_attrs are no longer necessary; however, pv_flags are still
being used for now.
Marking vm_page as "dirty" or "referenced" is being performed on:
- page or flag fault servicing in pmap_fault_fixup(), basing on the fault
type
- vm_fault servicing in pmap_enter() according to the desired protections
and faulty access type
Redundant page marking has been removed as on ARM we know exactly when the
particular page is referenced or is going to be written.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Sponsored by: The FreeBSD Foundation, Semihalf
PV entries are now roughly half the size.
Instead of using a shared UMA zone for 28 byte pv entries
(two 8-byte tailq nodes, a 4 byte pointer, a 4 byte address and 4 byte
flags), we allocate a page at a time per process.
This provides 252 pv entries per process (actually, per pmap address space)
and eliminates one of the 8-byte tailq entries since we now can track
per-process pv entries implicitly.
The pointer to the pmap can be eliminated by doing address arithmetic
on the page header metadata to find a single pointer shared by
all 252 entries. There is an 8-int bitmap for the freelist of those 252
entries.
When in a serious low-memory condition, allocation of another pv_chunk
is possible by freeing some pages in pmap_pv_reclaim().
Added pv_entry/pv_chunk related statistics to pmap.
pv_entry/pv_chunk statistics can be accessed via sysctl vm.pmap.
Ported PTE freelist of KVA allocation and maintenance from i386.
Using an idea from Stephan Uphoff, use the empty pte's that correspond
to the unused kva in the pv memory block to thread a freelist through.
This allows us to free pages that used to be used for pv entry chunks
since we can now track holes in the kva memory block.
As both ARM pmap.c and pmap-v6.c use the same header while the
pv_entry, pmap and md_page structures are different, it was necessary
to separate the code designed for ARMv6/7 from that for other ARMs.
Submitted by: Zbigniew Bodek <zbb@semihalf.com>
Reviewed by: alc
Sponsored by: The FreeBSD Foundation, Semihalf
order to match the MAXCPU concept. The change should also be useful
for consolidation and consistency.
Sponsored by: EMC / Isilon storage division
Obtained from: jeff
Reviewed by: alc
Keep the following access permissions:
APX  AP   Kernel  User
 1   01     R      N
 1   10     R      R
 0   01    R/W     N
 0   11    R/W    R/W
Avoid using values reserved in ARMv6 APX|AP settings:
- In case of an unprivileged (user) access without permission to
  write, the access permission bits were being set to the value
  APX|AP = 111, which is reserved for ARMv6 (but valid for ARMv7).
Fix-up faulting userland accesses properly:
- A wrong condition statement in pmap_fault_fixup() caused any
  genuine, unprivileged access to be fixed up instead of just doing
  nothing and returning. Starting from now we ensure a proper reaction
  to illicit user accesses.
L2_S_PROT_R and L2_S_PROT_U names might be misleading as they do not
reflect real permission levels. It will be clarified in following
patches (switch to AP[2:1] permissions model).
Obtained from: Semihalf
Introduce counter(9) API, that implements fast and raceless counters,
provided (but not limited to) for gathering of statistical data.
See http://lists.freebsd.org/pipermail/freebsd-arch/2013-April/014204.html
for more details.
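Basic usage looks like this minimal sketch (see counter(9)):

    #include <sys/counter.h>

    static counter_u64_t pkts_in;

    pkts_in = counter_u64_alloc(M_WAITOK);  /* at init time */
    counter_u64_add(pkts_in, 1);            /* fast, raceless */
    printf("%ju\n", (uintmax_t)counter_u64_fetch(pkts_in));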
In collaboration with: kib
Reviewed by: luigi
Tested by: ae, ray
Sponsored by: Nginx, Inc.
add the ability for userland to be notified of changes on gpio pins via
a select(2)/read(2) interface.
Change the interrupt handler from filtered to threaded.
Because of the uiomove() calls in the new interface, change locking from
standard mutex to sx.
Add / restore the at91_gpio_high_z() function.
Reviewed by: imp (long ago)
register from a bus space resource.
Note that this macro is just for ARM, and is intended to have a short
lifespan. The DMA engines in some SoCs need the physical address of a
memory-mapped device register as one of the arguments for the transfer.
Several scattered ad-hoc solutions have been converted to use this macro,
which now also serves to mark the places where a more complete fix needs
to be applied (after that fix has been designed).
pages around, taking arrays of vm_page_t for both source and
destination. Starting offsets and the total transfer size are
specified.
The function implements an optimal copying algorithm using
platform-specific optimizations. For instance, on architectures
where the direct map is available, no transient mappings are created;
for i386 the per-cpu ephemeral page frame is used. The code was
typically borrowed from pmap_copy_page() for the same
architecture.
Only i386/amd64, powerpc aim and arm/arm-v6 implementations were
tested at the time of commit. High-level code, not committed yet to
the tree, ensures that the use of the function is only allowed after
explicit enablement.
For sparc64, the existing code has known issues and a stab is added
instead, to allow the kernel linking.
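The declaration, as sketched from the description above (assumed to
match the vm/pmap.h prototype):

    void pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset,
        vm_page_t mb[], vm_offset_t b_offset, int xfersize);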
Sponsored by: The FreeBSD Foundation
Tested by: pho (i386, amd64), scottl (amd64), ian (arm and arm-v6)
MFC after: 2 weeks