initial thread stack is not adjusted by the tunable; the stack is
allocated too early to have access to the kernel environment. See
TD0_KSTACK_PAGES for the thread0 stack sizing on i386.
The tunable was tested on x86 only. From visual inspection, it
seems that it might work on arm and powerpc. The arm
USPACE_SVC_STACK_TOP and powerpc USPACE macros seem to be already
incorrect for threads with a non-default kstack size. I only
changed the macros to use the variable instead of the constant, since
I cannot test them.
On arm64, mips and sparc64, some static data structures are sized by
KSTACK_PAGES, so the tunable is disabled.
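For illustration only (the SYSINIT placement and handler name below are
assumptions, not taken from this change), a minimal sketch of how such a
loader tunable is typically fetched once the kernel environment is
available; thread0's stack is allocated before that point, which is why
it cannot be resized:

    #include <sys/param.h>
    #include <sys/kernel.h>

    int kstack_pages = KSTACK_PAGES;    /* MD default from <machine/param.h> */

    static void
    kstack_pages_fetch(void *arg __unused)
    {
        /* Only effective once the kernel environment has been parsed. */
        TUNABLE_INT_FETCH("kern.kstack_pages", &kstack_pages);
    }
    SYSINIT(kstack_pages_tun, SI_SUB_TUNABLES, SI_ORDER_ANY,
        kstack_pages_fetch, NULL);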
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks
vm_offset_t pmap_quick_enter_page(vm_page_t m)
void pmap_quick_remove_page(vm_offset_t kva)
These will create and destroy a temporary, CPU-local KVA mapping of a specified page.
Guarantees:
--Will not sleep and will not fail.
--Safe to call under a non-sleepable lock or from an ithread
Restrictions:
--Not guaranteed to be safe to call from an interrupt filter or under a spin mutex on all platforms
--Current implementation does not guarantee more than one page of mapping space across all platforms. MI code should not make nested calls to pmap_quick_enter_page.
--MI code should not perform locking while holding onto a mapping created by pmap_quick_enter_page
The idea is to use this in busdma, for bounce buffer copies as well as virtually-indexed cache maintenance on mips and arm.
NOTE: the non-i386, non-amd64 implementations of these functions still need review and testing.
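As a usage sketch (the helper below is hypothetical and not part of this
change), a bounce-buffer copy under a non-sleepable lock might look like
this:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <vm/vm_page.h>

    /*
     * Hypothetical helper: copy len bytes into the physical page m at
     * offset, using only a temporary CPU-local mapping.
     */
    static void
    bounce_copyin_page(vm_page_t m, vm_offset_t offset, const void *src,
        size_t len)
    {
        vm_offset_t kva;

        KASSERT(offset + len <= PAGE_SIZE, ("copy crosses page boundary"));

        kva = pmap_quick_enter_page(m);  /* will not sleep, will not fail */
        bcopy(src, (void *)(kva + offset), len);
        pmap_quick_remove_page(kva);     /* no nesting, no locking in between */
    }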
Reviewed by: kib
Approved by: kib (mentor)
Differential Revision: http://reviews.freebsd.org/D3013
provide the semantics defined by the C11 fences with the corresponding
memory_order.
atomic_thread_fence_acq() gives r | r, w, where r and w are read and
write accesses, and | denotes the fence itself.
atomic_thread_fence_rel() is r, w | w.
atomic_thread_fence_acq_rel() is the combination of the acquire and
release fences in a single operation. Note that reads after the acq+rel
fence could be made visible before writes preceding the fence.
atomic_thread_fence_seq_cst() orders all accesses before/after the
fence, and the fence itself is globally ordered against other
sequentially consistent atomic operations.
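A minimal producer/consumer sketch (not part of this change) showing the
typical acquire/release pairing:

    #include <sys/types.h>
    #include <machine/atomic.h>

    static int data;
    static volatile int ready;

    static void
    producer(void)
    {
        data = 42;                      /* w before the fence */
        atomic_thread_fence_rel();      /* r, w | w */
        ready = 1;                      /* w after the fence */
    }

    static int
    consumer(void)
    {
        while (ready == 0)
            ;                           /* spin until the flag is observed */
        atomic_thread_fence_acq();      /* r | r, w */
        return (data);                  /* guaranteed to see 42 */
    }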
Reviewed by: alc
Discussed with: bde
Sponsored by: The FreeBSD Foundation
MFC after: 3 weeks
discrimination between different subarch binaries, at least for mips
and arm. Arm is implemented; mips is still TBD, so it is not currently
exported. aarch64 does not export this because aarch64 binaries use
different tags and flags than arm.
Differential Revision: https://reviews.freebsd.org/D2611
1. Align to a 64-bit address so 64-bit data will be correctly aligned.
2. Add a comment explaining why.
3. Remove an unneeded value from the struct.
This fixes an issue where the struct may not be correctly aligned on the
stack in the syscall function. This may lead to accessing a 64-bit value
at a non-64-bit-aligned address, which will raise an exception and panic
the kernel. We have been lucky in that, on arm and armv6, both clang and
gcc correctly align the data even without being asked to; however, on
armeb with clang this turns out not to be the case. This tells the
compiler we really do need this to be aligned.
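A minimal sketch of the idea (the struct and field names are
illustrative, not the actual syscall argument structure):

    #include <sys/types.h>

    /*
     * Force 64-bit alignment so 64-bit members can be accessed safely
     * even when an instance of the struct lives on the stack.
     */
    struct example_args {
        u_int       code;
        uint64_t    args[8];
    } __aligned(8);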
Reported and tested by: jmg (on armeb with clang)
MFC after: 1 week [1, 2]
Perform cache writebacks and invalidations in the correct (inner to outer
or vice versa) order, and add comments that explain that.
Consistently use 'va' as the variable name for virtual addresses.
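As an illustration of the ordering rule (the helper name is hypothetical;
the cpufunc wrappers are the existing arm ones):

    #include <sys/param.h>
    #include <machine/cpufunc.h>

    /* Hypothetical helper showing the ordering rules described above. */
    static void
    dcache_sync_range(vm_offset_t va, vm_size_t size)
    {
        /* Writeback inner to outer: dirty L1 lines must reach L2/memory. */
        cpu_dcache_wb_range(va, size);
        cpu_l2cache_wb_range(va, size);

        /*
         * Invalidate outer to inner: discard L2 first so a stale line
         * cannot be refetched into L1 after L1 has been invalidated.
         */
        cpu_l2cache_inv_range(va, size);
        cpu_dcache_inv_range(va, size);
    }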
Submitted by: Michal Meloun <meloun@miracle.cz>
For consistency with the naming conventions used by the other
implementations, kill armv7_sleep and keep armv7_cpu_sleep.
Differential Revision: https://reviews.freebsd.org/D2537
Submitted by: John Wehle
Reviewed by: ian@, andrew@
because the i386 pmap on which the new armv6 pmap is based had it, and in
r281707 pmap_lazyfix() was removed from the i386 pmap.
Discussed with: kib
Submitted by: Michal Meloun (via Svatopluk Kraus)
Offset for the power control register was specified incorrectly (it had
the same value as the prefetch control register). This change corrects
the offset value to 0xF80, per the ARM PL310 documentation.
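For reference, the two register offsets from the PL310 TRM (macro names
here are illustrative):

    #define PL310_PREFETCH_CTRL  0xF60  /* Prefetch Control Register */
    #define PL310_POWER_CTRL     0xF80  /* Power Control Register */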
Submitted by: Steve Kiernan <stevek@juniper.net>
Obtained from: Juniper Networks, Inc.
Previously we used pmap_kremove(), but with ARM_NEW_PMAP it does the remove
in a way that isn't SMP-coherent (which is appropriate in some circumstances
such as mapping/unmapping sf buffers). With matching enter/remove routines
for device mappings, each low-level implementation can do the right thing.
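A usage sketch (the surrounding driver code is hypothetical, and the
signatures are assumed to be pmap_kenter_device(va, size, pa) and
pmap_kremove_device(va, size)):

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_extern.h>
    #include <vm/pmap.h>

    /* Hypothetical: map and later unmap a device register window. */
    static vm_offset_t
    map_device_window(vm_paddr_t pa, vm_size_t size)
    {
        vm_offset_t va;

        va = kva_alloc(size);
        pmap_kenter_device(va, size, pa);
        return (va);
    }

    static void
    unmap_device_window(vm_offset_t va, vm_size_t size)
    {
        pmap_kremove_device(va, size);  /* matches pmap_kenter_device() */
        kva_free(va, size);
    }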
Reviewed by: Svatopluk Kraus <onwahe@gmail.com>
the PMC_IN_KERNEL() macro definition.
Add missing macros to extract the return address (LR) from the trapframe.
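A sketch of the kind of macros involved (definitions here are
illustrative, assuming the arm trapframe fields from <machine/frame.h>):

    #include <machine/frame.h>
    #include <machine/vmparam.h>

    /* Illustrative: is a sampled program counter a kernel address? */
    #define PMC_IN_KERNEL(pc)                                       \
        ((pc) >= VM_MIN_KERNEL_ADDRESS && (pc) < VM_MAX_KERNEL_ADDRESS)

    /* Illustrative: extract the saved return address from a trapframe. */
    #define PMC_TRAPFRAME_TO_SVC_LR(tf)  ((tf)->tf_svc_lr)
    #define PMC_TRAPFRAME_TO_USR_LR(tf)  ((tf)->tf_usr_lr)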
Discussed with: andrew
Obtained from: Cambridge/L41
Sponsored by: DARPA, AFRL
MFC after: 2 weeks
This is pretty much a complete rewrite based on the existing i386 code. The
patches have been circulating for a couple years and have been looked at by
plenty of people, but I'm not putting anybody on the hook as having reviewed
this in any formal sense except myself.
After this has gotten wider testing from the user community, ARM_NEW_PMAP
will become the default and various dregs of the old pmap code will be
removed.
Submitted by: Svatopluk Kraus <onwahe@gmail.com>,
Michal Meloun <meloun@miracle.cz>
Each platform performs a virtual memory split between kernel and user
space and assigns the kernel a certain amount of memory space. However,
it is sometimes reasonable to change the default values. Such a situation
may arise on systems where the demand for kernel buffers is high or many
devices occupy kernel memory, etc. This of course comes at the cost of
decreasing the user space memory range, so it should be used with care.
Most embedded systems will not suffer from this limitation but rather
take advantage of this potential, since the default behavior is left
unchanged.
Submitted by: Wojciech Macek <wma@semihalf.com>
Reviewed by: imp
Obtained from: Semihalf