Commit Graph

426 Commits

Author SHA1 Message Date
Andrew Turner
c5de72378c Rename device vfp to option VFP and retire the ARM_VFP_SUPPORT option. This
simplifies enabling VFP: previously both options had to be enabled, whereas now
only a single option is needed.

While here enable VFP on the PandaBoard.
2013-08-17 18:51:38 +00:00
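
The kernel-config impact of the entry above is a two-for-one replacement; a small
config-file sketch, inferred from the message itself rather than quoted from a real config:

    # before: both lines were required to enable VFP
    device          vfp
    options         ARM_VFP_SUPPORT

    # after: a single option is enough
    options         VFP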
Andrew Turner
65b412607b Remove fpe_sp_state as we don't support fpe. 2013-08-17 14:53:53 +00:00
Olivier Houchard
e137643ef3 Instead of just trying to do it for arm, make sure vm_kmem_size is properly
aligned in kmeminit(), where it'll work for any arch.

Suggested by:	alc
2013-08-09 22:30:54 +00:00
Olivier Houchard
c76853ec15 Make sure vm_kmem_size is aligned on a page boundary, since that's what vmem
expects.
2013-08-09 21:53:02 +00:00
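
A minimal C sketch of the alignment fix described in the two entries above, assuming the
standard round_page() macro; the exact placement inside kmeminit() is not shown in the message:

    /* vmem expects vm_kmem_size to be a multiple of PAGE_SIZE */
    vm_kmem_size = round_page(vm_kmem_size);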
Andrew Turner
d8e3f572e2 When entering exception handlers we may not have an aligned stack. This is
because an exception may happen at any time. The stack alignment rules in the
ARM EABI state that the only place the stack must be 8-byte aligned is at a
function boundary.

If an exception happens while a function is setting up or tearing down its
stack frame, the stack may not be correctly aligned. There is also no
requirement for it to be aligned when the function is a leaf.

The fix is to align the stack after we have stored a backup of the old stack
pointer, but before we have stored anything in the trapframe. Along with
this we need to adjust the size of the trapframe by 4 bytes to ensure the
stack below it is also correctly aligned.
2013-08-05 19:06:28 +00:00
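
The alignment step described above, expressed in C purely for illustration (the real change
lives in the exception-entry assembly): the old stack pointer is saved first so it can be
recorded in the trapframe, then the stack is rounded down to the 8-byte AAPCS boundary.

    /* illustrative only, not the actual assembly */
    uintptr_t old_sp = sp;                 /* preserved so it can go into the trapframe */
    sp = old_sp & ~(uintptr_t)7;           /* round down to an 8-byte boundary */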
Ganbold Tsagaankhuu
dd5c5e7147 Add identification for Cortex-A7 (R0) cores.
Reviewed by: cognet@
2013-08-01 10:06:19 +00:00
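
A sketch of what such an identification entry typically looks like; the constant name and
MIDR value below are assumptions based on the ARM-published Cortex-A7 part number, not
quoted from the commit:

    /* implementer 0x41 (ARM), architecture 0xf, part number 0xc07, r0p0 */
    #define CPU_ID_CORTEXA7         0x410fc070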
Olivier Houchard
18f8f46e9f Explicitly include <machine/pcb.h>, so that we get the definition of
struct pcb.

Submitted by:	Zbyszek Bodek <zbb@semihalf.com>
Pointy hat to: 	cognet
2013-07-29 12:55:37 +00:00
Olivier Houchard
36bc03ee96 Define KDB_STOPPEDPCB, so that we can access the backtraces of threads running
on other cores.
2013-07-29 08:07:35 +00:00
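
On other architectures this macro typically maps a stopped CPU to the PCB saved by the stop
IPI handler; a hedged sketch of the likely shape (exact definition assumed):

    #define KDB_STOPPEDPCB(pc)      &stoppcbs[(pc)->pc_cpuid]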
Andriy Gapon
a29cc9a34b Revert r253748,253749
This WIP should not have been committed yet.

Pointyhat to:	avg
2013-07-28 18:44:17 +00:00
Andriy Gapon
366d8bfb7b put contents of cpu.h under _KERNEL
no userland-serviceable parts inside

MFC after:	20 days
2013-07-28 18:32:27 +00:00
Andrew Turner
b18f8431a0 Start adding support to build bits of our code using the Thumb-2
instruction set. Thumb-2 requires an if-then (IT) instruction to implement
condition codes.

When building for ARM mode the if-then instructions do not generate any
assembled instruction, as per the ARMv7-A Architecture Reference Manual, and
are safe to use.

While this allows the atomic instructions to be built, it doesn't mean we
fully support Thumb code. It works in small tests, but is still known to
fail in a large number of places.

While here add a check for the armv6t2 architecture.
2013-07-20 09:24:48 +00:00
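
An illustrative fragment (not the committed code) showing why the if-then instruction
matters: in Thumb-2 a conditionally executed non-branch instruction must be introduced by
an IT block, while the same unified-syntax source assembled for ARM mode emits nothing for
the "it".

    #include <sys/types.h>

    static __inline int
    is_equal(uint32_t a, uint32_t b)
    {
            int r = 0;

            __asm __volatile(
                "cmp    %1, %2      \n"
                "it     eq          \n"    /* required for Thumb-2, emits nothing in ARM mode */
                "moveq  %0, #1      \n"
                : "+r" (r)
                : "r" (a), "r" (b)
                : "cc");
            return (r);
    }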
Konstantin Belousov
70a7dd5d5b Fix issues with zeroing and fetching the counters, on x86 and ppc64.
Issues were noted by Bruce Evans and are present on all architectures.

On i386, a counter fetch should use an atomic read of the 64-bit value,
otherwise a carry from an increment on another CPU could be lost for the
given fetch, introducing an error of 2^32.  If a 64-bit read (cmpxchg8b) is
not available on the machine, it cannot be SMP and it is enough to disable
preemption around the read to avoid the split read.

On x86 the counter increment is not atomic on purpose, which makes it
possible for the store of the incremented result to overwrite a just-zeroed
per-cpu slot.  The effect would be a counter going off by an arbitrary value
after zeroing.  Perform the counter zeroing on the same processor which does
the increments, making the operations mutually exclusive.  On i386, same as
for the fetching, if cmpxchg8b is not available, the machine is not SMP and
we disable preemption for zeroing.

PowerPC64 is treated the same as amd64.

For the other architectures, changes were made to allow the compilation to
succeed, without fixing the issues with zeroing or fetching.  It should be
possible to handle them by using 64-bit loads and stores that are atomic WRT
preemption (assuming the architectures are also converted from using critical
sections to proper asm).  If an architecture does not provide such a facility,
using a global (spin) mutex would be a non-optimal but working solution.

Noted by:  bde
Sponsored by:	The FreeBSD Foundation
2013-07-01 02:48:27 +00:00
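
A minimal sketch of the i386 fallback described above for machines without cmpxchg8b, where
disabling preemption is enough to avoid a torn 64-bit read (the helper name is hypothetical):

    #include <sys/param.h>
    #include <sys/systm.h>

    static uint64_t
    counter_fetch_slot(uint64_t *slot)
    {
            uint64_t v;

            critical_enter();       /* no SMP without cmpxchg8b, so this prevents a split read */
            v = *slot;
            critical_exit();
            return (v);
    }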
Aleksandr Rybalko
b94bc5ff9e Bump max number of IRQs for Cortex-Ax family to cover Exynos5 requirement.
Submitted by:	Ruslan Bukin <br@bsdpad.com>
2013-06-28 22:47:33 +00:00
Aleksandr Rybalko
57ae6edf31 Add identification for Cortex-A15 (R0) cores.
Submitted by:	Ruslan Bukin <br@bsdpad.com>
2013-06-28 22:31:17 +00:00
Andrew Turner
da01dd9e1e Add UNWINDSVCFRAME to provide the unwind pseudo ops to allow us to unwind
past a trapframe.

Use this macro in exception_exit, as it is the function the unwinder enters:
the functions that store the frame set lr to point to it.
2013-06-27 18:54:18 +00:00
Andrew Turner
93ef7ecb75 Fix the vfp code to work with the 16 register variants of the VFP unit. We
check which variant we are on and, if it is a VFPv3 or VFPv4 with 32 double
registers, we save all of them. This fixes VFP support on the Raspberry Pi.

While here clean fmrx and fmxr up to use the register names from vfp.h
as opposed to the raw register names.
2013-06-13 21:31:33 +00:00
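
A hedged sketch of the variant check described above; the field layout comes from the ARM
definition of MVFR0, and the fmrx() accessor form is assumed rather than copied from the
commit:

    /* MVFR0[3:0] ("A_SIMD registers"): 0x1 = 16 double registers, 0x2 = 32 */
    static int
    vfp_num_dregs(void)
    {
            uint32_t mvfr0 = fmrx(mvfr0);   /* Media and VFP Feature Register 0 */

            return (((mvfr0 & 0xf) == 0x2) ? 32 : 16);
    }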
Andrew Turner
864cbcb81b Merge in changes from NetBSD:
* Remove support for non-ELF files.
* Add the VFP setjmp magic numbers.
* Add the offsets for the VFP registers within the buffer.
2013-06-08 07:16:22 +00:00
Andrew Turner
0a79452954 Reduce the difference to NetBSD.
* Stop pretending we support anything other than ELF by removing code
  surrounded by #ifdef __ELF__ ... #endif.
* Remove _JB_MAGIC_SETJMP and _JB_MAGIC__SETJMP; they are defined in
  setjmp.h, which can be included from asm.
* Fix the spelling of dependent.
* Rename END to _END, and add END and ASEND to complement ENTRY and ASENTRY
  respectively.
* Add macros to simplify accessing the Global Offset Table; some of these
  will be used in the upcoming update to the setjmp functions.
2013-06-07 21:23:11 +00:00
Grzegorz Bernacki
3bc567b6ad Stop using PVF_MOD, PVF_REF & PVF_EXEC flags in pv_entry, use PTE.
Using PVF_MOD, PVF_REF and PVF_EXEC is redundant as we can get the proper
info from PTE bits.
When a mapping is marked as executable and has been referenced, we assume
that it has been executed. Similarly, when a mapping is writable and has been
referenced, the reference must have been a write access to it.
The PVF_MOD and PVF_REF flags are kept only for pmap_clearbit() usage,
to pass the information on which bit should be cleared.

Submitted by:   Zbigniew Bodek <zbb@semihalf.com>
Sponsored by:   The FreeBSD Foundation, Semihalf
2013-05-23 12:23:18 +00:00
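
A sketch of the inference described above, with hypothetical pte_is_*() helpers standing in
for the real PTE bit tests:

    if (pte_is_referenced(pte)) {
            vm_page_aflag_set(m, PGA_REFERENCED);
            /* referenced and writable => the page must have been written to */
            if (pte_is_writable(pte))
                    vm_page_dirty(m);
    }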
Grzegorz Bernacki
2b3e821bcc Improve, optimize and clean-up ARMv6/v7 memory management related code.
Use pmap_find_pv where needed instead of duplicating its code throughout
pmap-v6.c.

Avoid a possible NULL pointer dereference in pmap_enter_locked():
when trying to get m->md.pv_memattr, make sure that m != NULL;
in particular, the vector_page mapping is entered with m set to NULL.

Do not set the PGA_REFERENCED flag in pmap_enter_pv().
On ARM any new page reference will result in either entering a new
mapping by calling pmap_enter(), etc., or fixing up an existing mapping in
pmap_fault_fixup().
Therefore we set the PGA_REFERENCED flag in those cases, and setting it
again in pmap_enter_pv() is just a waste of cycles.

Delete unused pm_pdir pointer from the pmap structure.

Rearrange the brackets in the fault cause detection in trap.c.
Place the brackets correctly so that the flow of the conditions can be
seen at a glance.

Unify naming in pmap-v6.c and improve style
Use naming common for whole pmap and compatible with other pmaps,
improve style where possible:
pm   -> pmap
pg   -> m
opg  -> om
*pt  -> *ptep
*pte -> *ptep
*pde -> *pdep

Submitted by:   Zbigniew Bodek <zbb@semihalf.com>
Sponsored by:   The FreeBSD Foundation, Semihalf
2013-05-23 12:15:23 +00:00
Grzegorz Bernacki
b8b08befd0 Switch to AP[2:1] access permissions model. Store "referenced"
bit in PTE.

Enable the Access Flag in CPU control. With AF enabled, each valid mapping
needs to have the referenced bit set in its PTE in order to be cached
in the TLB.

The AP[0] bit is used as the reference flag.
All access permissions are encoded by AP[2:1], wherein AP[1] is in fact
"user enable" and AP[2] (APX) is "write disable".

All mappings are always set to be valid. Reference emulation is performed
by setting/clearing reference flag in PTE.

md.pvh_attrs is no longer necessary; however, pv_flags is still being used
for now.

Marking vm_page as "dirty" or "referenced" is being performed on:
- page or flag fault servicing in pmap_fault_fixup(), based on the fault
  type
- vm_fault servicing in pmap_enter() according to the desired protections
  and faulty access type
Redundant page marking has been removed as on ARM we know exactly when the
particular page is referenced or is going to be written.

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-05-23 12:07:41 +00:00
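
A sketch of the reference-emulation path described above, as it might appear in
pmap_fault_fixup(); the PTE bit macro is an assumption, and only the overall flow is taken
from the message:

    /* access (flag) fault: mark the mapping referenced so it can be cached in the TLB */
    if ((*ptep & L2_AP0_BIT) == 0) {        /* AP[0] serves as the reference flag */
            *ptep |= L2_AP0_BIT;
            vm_page_aflag_set(m, PGA_REFERENCED);
            cpu_tlb_flushID_SE(va);         /* drop the stale TLB entry */
    }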
Grzegorz Bernacki
4442f74b81 Port the new PV entry allocator from amd64/i386/mips to armv6/v7.
PV entries are now roughly half the size.
Instead of using a shared UMA zone for 28 byte pv entries
(two 8-byte tailq nodes, a 4 byte pointer, a 4 byte address and 4 byte
flags), we allocate a page at a time per process.
This provides 252 pv entries per process (actually, per pmap address space)
and eliminates one of the 8-byte tailq entries since we now can track
per-process pv entries implicitly.
The pointer to the pmap can be eliminated by using address arithmetic to
find the metadata at the start of the page, which holds a single pointer
shared by all 252 entries. There is an 8-int bitmap for the freelist of
those 252 entries.
When under serious memory pressure, another pv_chunk can be allocated by
freeing some pages via pmap_pv_reclaim().

Added pv_entry/pv_chunk related statistics to pmap.
pv_entry/pv_chunk statistics can be accessed via sysctl vm.pmap.

Ported the PTE freelist for KVA allocation and maintenance from i386.
Using an idea from Stephan Uphoff, use the empty PTEs that correspond
to the unused KVA in the pv memory block to thread a freelist through.
This allows us to free pages that used to be used for pv entry chunks,
since we can now track holes in the KVA memory block.

As ARM pmap.c and pmap-v6.c use the same header but their pv_entry, pmap and
md_page structures differ, the code designed for ARMv6/v7 had to be separated
from the code for the other ARM variants.

Submitted by:	Zbigniew Bodek <zbb@semihalf.com>
Reviewed by:	alc
Sponsored by:	The FreeBSD Foundation, Semihalf
2013-05-14 09:47:58 +00:00
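
A rough sketch of the per-page chunk layout implied by the numbers above (252 entries, one
shared pmap pointer, an 8-int free bitmap); the field names follow the i386 allocator this
was ported from and are not quoted from the commit:

    struct pv_chunk {
            pmap_t                  pc_pmap;        /* one back-pointer shared by all entries */
            TAILQ_ENTRY(pv_chunk)   pc_list;        /* chunks belonging to this pmap */
            uint32_t                pc_map[8];      /* free bitmap, one bit per pv_entry */
            TAILQ_ENTRY(pv_chunk)   pc_lru;         /* global list for pmap_pv_reclaim() */
            struct pv_entry         pc_pventry[252];
    };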
Attilio Rao
941646f5ec Rename VM_NDOMAIN to MAXMEMDOM and move it into machine/param.h in
order to match the MAXCPU concept.  The change should also be useful
for consolidation and consistency.

Sponsored by:	EMC / Isilon storage division
Obtained from:	jeff
Reviewed by:	alc
2013-05-07 22:46:24 +00:00
Grzegorz Bernacki
4c8add8a96 Fix L2 PTE access permissions management.
Keep the following access permissions:

APX     AP     Kernel     User
 1      01       R         N
 1      10       R         R
 0      01      R/W        N
 0      11      R/W       R/W

Avoid using an APX|AP combination that is reserved on ARMv6:
- For unprivileged (user) access without permission to write, the access
  permission bits were being set to APX|AP = 111, a value that is reserved
  on ARMv6 (but valid on ARMv7).

Fix up faulting userland accesses properly:
- A wrong condition statement in pmap_fault_fixup() caused any genuine,
  unprivileged access to be fixed up instead of simply doing nothing and
  returning. Starting from now we ensure the proper reaction to illicit
  user accesses.

The L2_S_PROT_R and L2_S_PROT_U names might be misleading as they do not
reflect the real permission levels. This will be clarified in the following
patches (the switch to the AP[2:1] permissions model).

Obtained from: Semihalf
2013-05-06 15:30:34 +00:00
Wojciech A. Koszek
735c7fe55e Add Xilinx Zynq ARM/FPGA SoC support to FreeBSD/arm port.
Submitted by:	Thomas Skibo <ThomasSkibo (at) sbcglobal.net>
Tested by:	wkoszek (ZedBoard)
Reviewed by:	wkoszek, freebsd-arm@ (no objections raised)
2013-04-27 23:07:49 +00:00
Gabor Kovesdan
ab3f6b347e - Correct misspellings of the word occurrence
Submitted by:	Christoph Mallon <christoph.mallon@gmx.de> (via private mail)
2013-04-17 11:40:10 +00:00
Gleb Smirnoff
4e76af6a41 Merge from projects/counters: counter(9).
Introduce the counter(9) API, which implements fast and raceless counters,
provided (but not limited to) for gathering statistical data.

See http://lists.freebsd.org/pipermail/freebsd-arch/2013-April/014204.html
for more details.

In collaboration with:	kib
Reviewed by:		luigi
Tested by:		ae, ray
Sponsored by:		Nginx, Inc.
2013-04-08 19:40:53 +00:00
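
A minimal in-kernel usage sketch of the counter(9) API introduced here, using its
allocation, increment, fetch and free primitives:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/counter.h>
    #include <sys/malloc.h>

    static counter_u64_t pkt_count;

    static void
    stats_example(void)
    {
            pkt_count = counter_u64_alloc(M_WAITOK);        /* per-CPU storage */
            counter_u64_add(pkt_count, 1);                  /* fast, raceless increment */
            printf("packets: %ju\n", (uintmax_t)counter_u64_fetch(pkt_count));
            counter_u64_free(pkt_count);
    }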
Gleb Smirnoff
17dece86fe Merge from projects/counters:
Pad struct pcpu so that its size is a divisor of PAGE_SIZE. This
is done to reduce memory waste in UMA_PCPU_ZONE zones.

Sponsored by:	Nginx, Inc.
2013-04-08 19:19:10 +00:00
Andrew Turner
64277b97f9 Hide non-assembler bits behind #ifndef __ASSEMBLER__ 2013-04-06 00:47:33 +00:00
Ian Lepore
63cdf42e8c Add userland access to at91 gpio functionality via ioctl calls. Also,
add the ability for userland to be notified of changes on gpio pins via
a select(2)/read(2) interface.

Change the interrupt handler from filtered to threaded.

Because of the uiomove() calls in the new interface, change locking from
standard mutex to sx.

Add / restore the at91_gpio_high_z() function.

Reviewed by:	imp (long ago)
2013-03-29 19:52:57 +00:00
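
A hedged userland sketch of the select(2)/read(2) notification interface described above;
the device path and the read payload format are assumptions, not taken from the commit:

    #include <sys/select.h>
    #include <fcntl.h>
    #include <unistd.h>

    static void
    wait_for_pin_change(void)
    {
            int fd = open("/dev/gpio", O_RDONLY);   /* device node name assumed */
            fd_set rfds;
            char buf[64];

            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            if (select(fd + 1, &rfds, NULL, NULL, NULL) > 0)
                    read(fd, buf, sizeof(buf));     /* payload format is driver-specific */
            close(fd);
    }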
Ian Lepore
fce4536cfd Add a couple forward declarations, so that board support routines don't have
to pre-include a bunch of header files they don't need just to use this one.
2013-03-29 18:43:10 +00:00
Aleksandr Rybalko
4117c1db9e o Switch to using physical addresses in rman for FDT.
o Remove the vtophys calls that were used to translate virtual addresses to physical ones for the case where rman carried virtual addresses.

Sponsored by:	The FreeBSD Foundation
2013-03-18 15:18:55 +00:00
Ian Lepore
33ff10ea55 Add a macro that gets the physical address of a memory mapped device
register from a bus space resource.

Note that this macro is just for ARM, and is intended to have a short
lifespan.  The DMA engines in some SoCs need the physical address of a
memory-mapped device register as one of the arguments for the transfer.
Several scattered ad-hoc solutions have been converted to use this macro,
which now also serves to mark the places where a more complete fix needs
to be applied (after that fix has been designed).
2013-03-17 03:04:43 +00:00
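
A short usage sketch of such a macro when programming a DMA engine; the macro spelling and
the surrounding names (sc, dma_setup, FIFO_REG_OFFSET) are assumptions:

    /* fragment: hand the DMA controller the physical address of a device FIFO register */
    bus_addr_t fifo_pa = BUS_SPACE_PHYSADDR(sc->mem_res, FIFO_REG_OFFSET);
    dma_setup(sc, fifo_pa);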
Andrew Turner
573447b6a5 Add an END macro to ARM. This is mostly used to tell gas where the bounds
of the functions are when creating the EABI unwind tables.
2013-03-16 02:48:49 +00:00
Konstantin Belousov
e8a4a618cf Add pmap function pmap_copy_pages(), which copies the content of the
pages around, taking array of vm_page_t both for source and
destination.  Starting offsets and total transfer size are specified.

The function implements an optimal algorithm for copying, using
platform-specific optimizations.  For instance, on the architectures
where a direct map is available, no transient mappings are created;
for i386 the per-cpu ephemeral page frame is used.  The code was
typically borrowed from pmap_copy_page() for the same
architecture.

Only i386/amd64, powerpc aim and arm/arm-v6 implementations were
tested at the time of commit. High-level code, not committed yet to
the tree, ensures that the use of the function is only allowed after
explicit enablement.

For sparc64, the existing code has known issues and a stub is added
instead, to allow the kernel to link.

Sponsored by:	The FreeBSD Foundation
Tested by:	pho (i386, amd64), scottl (amd64), ian (arm and arm-v6)
MFC after:	2 weeks
2013-03-14 20:18:12 +00:00
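
The interface described above, written out as a prototype; the argument names follow the
common pmap convention and are assumptions:

    /* copy xfersize bytes from offset a_offset in page run ma[]
     * to offset b_offset in page run mb[] */
    void    pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset,
                vm_page_t mb[], vm_offset_t b_offset, int xfersize);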
Olivier Houchard
6aee0b4448 Don't use an empty struct. 2013-03-11 10:56:46 +00:00
Andrew Turner
fb769e0f72 __FreeBSD_ARCH_armv6__ is undefined on clang. We can use __ARM_ARCH in
its place. This makes 'uname -p' correctly output 'armv6' on a kernel
built with clang.
2013-03-09 23:55:23 +00:00
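
A sketch of the kind of preprocessor test this implies; the exact form used in param.h is
assumed:

    #if defined(__FreeBSD_ARCH_armv6__) || (defined(__ARM_ARCH) && __ARM_ARCH >= 6)
    #define MACHINE_ARCH    "armv6"
    #endif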
Andrew Turner
078996e049 Fix stack alignment in the kernel to be on an 8 byte boundary as required
by AAPCS.
2013-03-06 06:19:56 +00:00
Andrew Turner
e40f53aa44 Move some virtual memory constants to the top of the file where they are on
other architectures [1].

While here:
 - Remove an unused and commented out include.
 - Add a comment describing the file that other copies have.
 - Fix the style of the defines and add a comment on what each one is.

Suggested by:	[1] alc
2013-03-02 05:02:29 +00:00
Andrew Turner
5f61931668 Increase the maximum text size on ARM to 64MiB. Without this, clang would be
sent a SIGABRT when it is loaded, as it is too large. This is the smallest
power-of-two MiB value that allows us to execute clang.

While here wrap it in an #ifndef to be consistent with the other
architectures.

Submitted by:	Daisuke Aoyama <aoyama at peach.ne.jp>
2013-03-01 21:59:23 +00:00
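
A sketch of what the change likely looks like in the platform vmparam.h; MAXTSIZ is the
conventional knob for the maximum text size and is taken here as an assumption:

    #ifndef MAXTSIZ
    #define MAXTSIZ         (64UL*1024*1024)        /* max text size */
    #endif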
Alan Cox
99c8999856 Copy the definition of VM_MAX_AUTOTUNE_MAXUSERS from i386. (See r242847.)
Tested by:	andrew
2013-03-01 08:30:31 +00:00
Olivier Houchard
d99fd70143 Export vfp_init() prototype, for use in the MP code. 2013-02-26 20:01:05 +00:00
Alan Cox
219d956550 Be more conservative in auto-sizing and capping the kmem submap. In
fact, use the same values here that we use on 32-bit x86 and MIPS.  Some
machines were reported to have problems with the more aggressive values.

Reported and tested by:	andrew
2013-02-26 08:17:34 +00:00
Alan Cox
5c9f7b1a91 Initialize vm_max_kernel_address on non-FDT platforms. (This should have
been included in r246926.)

The second parameter to pmap_bootstrap() is redundant.  Eliminate it.

Reviewed by:	andrew
2013-02-20 16:48:52 +00:00
Alan Cox
837a2c513d Place a cap on the size of the kernel's heap, also known as the kmem
submap.  Otherwise, after r246204, the auto-scaling logic in kern_malloc.c
tries to create a kmem submap that consumes the entire kernel map on a
Pandaboard with 1 GB of RAM.

Tested by:	gonzo
2013-02-18 01:22:20 +00:00
Alan Cox
fc23011bc3 On arm, like sparc64, the end of the kernel map varies from one type of
machine to another.  Therefore, VM_MAX_KERNEL_ADDRESS can't be a constant.
Instead, #define it to be a variable, vm_max_kernel_address, just like we
do on sparc64.

Reviewed by:	kib
Tested by:	ian
2013-02-18 01:02:48 +00:00
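
A sketch of the approach described above, mirroring how sparc64 handles it (declaration
form assumed):

    extern vm_offset_t vm_max_kernel_address;
    #define VM_MAX_KERNEL_ADDRESS   vm_max_kernel_address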
Andre Oppermann
1211375f6e Add VM_KMEM_SIZE_SCALE parameter set to 2 (50%) for all ARM platforms.
VM_KMEM_SIZE_SCALE specifies which fraction of the available physical
memory, after deduction of the kernel itself and other early statically
allocated memory, can be used for the kmem_map.  The kmem_map provides
for all UMA/malloc allocations in KVM space.

Previously ARM was using a fixed kmem_map size of (12*1024*1024) = 12MB
without regard to the effectively available memory.  This is too small for
recent ARM SoCs with more than 128MB of RAM.

For reference, a description of the other related kmem_map parameters:

 VM_KMEM_SIZE		default start size of kmem_map if SCALE is
			not defined
 VM_KMEM_SIZE_MIN	hard floor on the kmem_map size
 VM_KMEM_SIZE_MAX	hard ceiling on the kmem_map size
 VM_KMEM_SIZE_SCALE	fraction of the available real memory to
			be used for the kmem_map, limited by the
			MIN and MAX parameters.

Tested by:	ian
MFC after:	1 week
2013-02-01 10:26:31 +00:00
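
The parameter described above boils down to a one-line definition in the ARM vmparam.h; a
sketch, assuming the usual #ifndef guard:

    #ifndef VM_KMEM_SIZE_SCALE
    #define VM_KMEM_SIZE_SCALE      (2)     /* kmem_map may use up to 1/2 of real memory */
    #endif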
Ian Lepore
ae4f863c89 Eliminate the need for an intermediate array of indices into the arrays of
interrupt counts and names, by making the names into an array of fixed-length
strings that can be directly indexed.  This eliminates extra memory accesses
on every interrupt to increment the counts.

As a side effect, it also fixes a bug that would corrupt the names data
if a name was longer than MAXCOMLEN, which led to incorrect vmstat -i output.

Approved by:	cognet (mentor)
2013-01-19 00:50:12 +00:00
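
A sketch of the directly indexable name table described above; the array shape and
constants are assumptions:

    #define INTRNAME_LEN    (MAXCOMLEN + 1)
    char intrnames[NIRQ * INTRNAME_LEN];    /* one fixed-width slot per IRQ, indexed directly */
    u_long intrcnt[NIRQ];                   /* counts share the same index */
    /* the name for IRQ i lives at &intrnames[i * INTRNAME_LEN] */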
Andrew Turner
9fd57b44e4 * Correct KINFO_PROC_SIZE for ARM EABI.
* Update the syscall interface to pass in the syscall value in register r7.
2013-01-17 09:52:35 +00:00
Olivier Houchard
58fdb5f3e6 Don't define the rel/acq variants of some atomic operations as the regular
versions for armv6. 2013-01-15 22:08:03 +00:00
2013-01-15 22:08:03 +00:00