Commit Graph

1019 Commits

Author SHA1 Message Date
Nathan Whitehorn
619282986d Change the default MSR values used when starting userland and kernel
threads from compile-time defines to global variables. This removes a
significant amount of duplicated runtime patches to the compile-time
defines, centralizing the conditional logic in the early startup code.

Reviewed by:	jhibbits
2018-02-01 05:31:24 +00:00
Nathan Whitehorn
564ac41556 Fix build on 32-bit PowerPC, broken in r328537. 2018-02-01 05:28:02 +00:00
Wojciech Macek
d32802f0c3 PowerNV: fix compilation on non-NV platforms
Submitted by:          Wojciech Macek <wma@semihalf.com>
Obtained from:         Semihalf
Sponsored by:          IBM, QCM Technologies
2018-01-31 06:42:01 +00:00
Wojciech Macek
70bb600a0a PowerNV: move LPCR and LPID altering to cpudep_ap_early_bootstrap
It turns out that under some circumstances we can get a DSI or DSE before we set
LPCR and LPID, so we should set them as early as possible.

Authored by:           Patryk Duda <pdk@semihalf.com>
Submitted by:          Wojciech Macek <wma@semihalf.com>
Obtained from:         Semihalf
Sponsored by:          IBM, QCM Technologies
2018-01-29 09:27:02 +00:00
Nathan Whitehorn
eb1baf72ae Remove hard-coded trap-handling logic involving the segmented memory model
used with hashed page tables on AIM and place it into a new, modular pmap
function called pmap_decode_kernel_ptr(). This function is the inverse
of pmap_map_user_ptr(). With POWER9 radix tables, which mapping to use
becomes more complex than just AIM/BOOKE, and it is best to have it in
the same place as pmap_map_user_ptr().

Reviewed by:	jhibbits
2018-01-29 04:33:41 +00:00
Nathan Whitehorn
7790e46cf0 On AIM systems without a software-managed SLB, such as POWER9 systems using
either hardware segment tables or radix-tree-based page tables, do not try
to install SLB entries at trap boundaries.
2018-01-19 22:19:50 +00:00
Nathan Whitehorn
fc8ea4be2a Install the SLB miss trap-handling code in the SLB-based MMU driver set up,
to which it is specific, rather than in the generic AIM startup code. This
will be required to support the radix-table-based MMU introduced with POWER9.
2018-01-15 16:08:34 +00:00
Nathan Whitehorn
04329fa708 Move the pmap-specific code in copyinout.c that gets pointers to userland
buffers into a new pmap-module function pmap_map_user_ptr() that can
be implemented by the respective modules. This is required to implement
non-segment-based AIM-ish MMU systems such as the radix-tree page tables
introduced by POWER ISA 3.0 and present on POWER9.

Reviewed by:	jhibbits
2018-01-15 06:46:33 +00:00
Nathan Whitehorn
68b9c019aa Document places we assume that physical memory is direct-mapped at zero by
using a new macro PHYS_TO_DMAP, which deliberately has the same name as the
equivalent macro on amd64. This also sets the stage for moving the direct
map to another base address.
2018-01-13 23:14:53 +00:00
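
A minimal usage sketch, assuming the macro behaves like its amd64 namesake
(physical address in, direct-mapped kernel virtual address out):

    vm_paddr_t pa = vtophys(buf);          /* some physical address */
    vm_offset_t va = PHYS_TO_DMAP(pa);     /* its direct-map alias */
    bzero((void *)va, PAGE_SIZE);          /* access it without creating a mapping */
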
Jeff Roberson
ab3185d15e Implement NUMA support in uma(9) and malloc(9). Allocations from specific
domains can be done by the _domain() API variants.  UMA also supports a
first-touch policy via the NUMA zone flag.

The slab layer is now segregated by VM domains and is precise.  It handles
iteration for round-robin directly.  The per-cpu cache layer remains
a mix of domains according to where memory is allocated and freed.  Well
behaved clients can achieve perfect locality with no performance penalty.

The direct domain allocation functions have to visit the slab layer and
so require per-zone locks which come at some expense.

Reviewed by:	Attilio (a slightly older version)
Tested by:	pho
Sponsored by:	Netflix, Dell/EMC Isilon
2018-01-12 23:25:05 +00:00
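
A sketch of the new allocation paths; the exact _domain() signature and the
first-touch zone flag spelling are assumptions here:

    /* Hypothetical: allocate PAGE_SIZE bytes from NUMA domain 0. */
    void *p = malloc_domain(PAGE_SIZE, M_DEVBUF, 0, M_WAITOK | M_ZERO);

    /* Hypothetical: a zone whose slabs follow a first-touch policy. */
    zone = uma_zcreate("frobs", sizeof(struct frob), NULL, NULL, NULL,
        NULL, UMA_ALIGN_PTR, UMA_ZONE_NUMA);
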
Nathan Whitehorn
a891d21aac Make sure the first instruction of the low-memory spinloop is in the
cacheline being invalidated.

MFC after:	1 month
2017-12-31 05:38:19 +00:00
Nathan Whitehorn
70f654991a Add support for 64-bit PowerPC kernels to be directly loaded by kexec, which
is used as the bootloader on a number of PPC64 platforms. This involves the
following pieces:
- Making the first instruction a valid kernel entry point, since kexec
  ignores the ELF entry value. This requires a separate section and linker
  magic to prevent the linker from filling the beginning of the section
  with stubs.
- Adding an entry point at 0x60 past the first instruction for systems
  lacking firmware CPU shutdown support (notably PS3).
- Linker script changes to support the above.

MFC after:	1 month
2017-12-29 20:30:10 +00:00
Nathan Whitehorn
8469e0fe35 Maintain alignment of in-code 64-bit quantities by design rather than luck.
If these are not aligned, the linker has to emit a different type of
relocation that the early boot self-relocation code cannot handle, even
in principle, resulting in them being set to zero and the kernel crashing.

MFC after:	1 week
2017-12-29 20:25:15 +00:00
Pedro F. Giffuni
71e3c3083b sys/powerpc: further adoption of SPDX licensing ID tags.
Mainly focus on files that use the BSD 2-Clause license; however, the tool I
was using misidentified many licenses, so this was mostly a manual (and
error-prone) task.

The Software Package Data Exchange (SPDX) group provides a specification
to make it easier for automated tools to detect and summarize well known
opensource licenses. We are gradually adopting the specification, noting
that the tags are considered only advisory and do not, in any way,
supersede or replace the license texts.
2017-11-27 15:09:59 +00:00
Nathan Whitehorn
47f69f4f2b Use the cookie now set by loader to determine whether the value passed to
PowerPC kernels in r6 is actually metadata from loader(8) or gibberish
left in r6 by another boot loader (r6 is not required to be anything
under the PAPR/ePAPR/CHRP/OF standards).

Note that, as a result, systems need a new boot loader to boot PPC kernels
after this revision without ending up at a mountroot prompt. New boot
loaders are backwards compatible and can boot older kernels.

Reviewed by:	jhibbits
MFC after:	2 months
2017-11-26 03:53:20 +00:00
Nathan Whitehorn
50d82d6f6a Missed gate on __powerpc64__ for setting LPCR in r326207.
MFC after:	3 weeks
X-MFC-with:	r326207
2017-11-25 22:15:56 +00:00
Nathan Whitehorn
5bcc3e4277 Allow platform modules to set the size of large pages, as potentially
discovered from firmware, and better handle highly-discontiguous memory
and CPU maps.

MFC after:	3 weeks
2017-11-25 22:13:19 +00:00
Nathan Whitehorn
312fb3d8dd Invalidate TLB at boot using the correct IS settings on newer-than-POWER5
CPUs.

MFC after:	3 weeks
2017-11-25 22:10:10 +00:00
Nathan Whitehorn
66d6978c27 Missed platform_smp_timebase_sync() in r326205.
MFC after:	3 weeks
X-MFC-With:	r326205
2017-11-25 22:06:40 +00:00
Nathan Whitehorn
5d7c76afc6 Make n_slbs public in a more straightforward way. Some platforms (like
PowerNV) use firmware-assisted mechanisms to discover it and need access
to the variable.

MFC after:	3 weeks
2017-11-25 22:05:05 +00:00
Nathan Whitehorn
cb74659e0c Preserve the LPCR on new-ish (POWER7 and POWER8) CPUs, preventing exceptions
and such from ending up on the wrong CPU on SMP systems. It would be good to
have this be more generic somehow as POWER9s appear, but PPC does not
have feature bits, unfortunately.

MFC after:	3 weeks
2017-11-25 22:03:25 +00:00
Nathan Whitehorn
de2dd83fb9 Whether you can use mttb() or not is more complicated than whether PSL_HV
is set and the right thing to do may be platform-dependent (it requires
firmware on PowerNV, for instance). Make it a new platform method called
platform_smp_timebase_sync().

MFC after:	3 weeks
2017-11-25 21:59:59 +00:00
Jeff Roberson
8d6fbbb867 Replace many instances of VM_WAIT with blocking page allocation flags
similar to the kernel memory allocator.

This simplifies NUMA allocation because the domain will be known at wait
time and races between failure and sleeping are eliminated.  This also
reduces boilerplate code and simplifies callers.

A wait primitive is supplied for uma zones for similar reasons.  This
eliminates some non-specific VM_WAIT calls in favor of more explicit
sleeps that may be satisfied without new pages.

Reviewed by:	alc, kib, markj
Tested by:	pho
Sponsored by:	Netflix, Dell/EMC Isilon
2017-11-08 02:39:37 +00:00
Mark Johnston
b999e9c813 Implement mmu_page_init for AIM platforms.
As of r323290 we cannot rely on the vm_page array being
zero-initialized.

Reported and tested by:	andreast
MFC after:	1 week
2017-09-17 15:40:12 +00:00
Justin Hibbits
18e367f4aa Use the explicit expanded form of cmp.
Clang apparently requires the explicit form of this instruction, and rejects
uses which ignore the optional cmpD register.  This was the only use of the
shorthand form of the instruction, so just fix it up to match the others.

PR:		kern/215681
Submitted by:	Mark Millard
Reported by:	Mark Millard <markmi _AT_ dsl-only.net>
MFC after:	2 weeks
2017-01-18 03:42:21 +00:00
Mark Johnston
dbbaf04f1e Remove support for idle page zeroing.
Idle page zeroing has been disabled by default on all architectures since
r170816 and has some bugs that make it seemingly unusable. Specifically,
the idle-priority pagezero thread exacerbates contention for the free page
lock, and yields the CPU without releasing it in non-preemptive kernels. The
pagezero thread also does not behave correctly when superpage reservations
are enabled: its target is a function of v_free_count, which includes
reserved-but-free pages, but it is only able to zero pages belonging to the
physical memory allocator.

Reviewed by:	alc, imp, kib
Differential Revision:	https://reviews.freebsd.org/D7714
2016-09-03 20:38:13 +00:00
Justin Hibbits
cbc3c68d9a Add a kdb show command to print arbitrary SPRs on PowerPC
Summary:
There is often a need at the debugger to print arbitrary special
purpose registers (SPRs) on PowerPC.  Using a rewritable asm stub, print any SPR
provided on the command line.

Note, as there is no checking in this, attempting to print a nonexistent SPR
may cause a Program exception (illegal instruction, or boundedly undefined).

Note also that this relies on the kernel text pages being writable.  If in the
future this is made not the case, this will need to be reworked.

Test Plan:
Printing the Processor Version Register (PVR, SPR 287):

db> show spr 11f
SPR 287(11f): 80240012

Differential Revision: https://reviews.freebsd.org/D7403
2016-08-13 18:46:49 +00:00
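
The rewritable stub works because mfspr encodes the SPR number inside the
instruction word itself; a sketch of building the opcode to patch in
(standard split-field mfspr encoding, function name illustrative):

    /* Encode "mfspr r3, spr": 0x7c6002a6 is the r3 template; the SPR
     * number is split into two 5-bit halves within the instruction. */
    static uint32_t
    mfspr_insn(u_int spr)
    {
        return (0x7c6002a6 | ((spr & 0x1f) << 16) |
            (((spr >> 5) & 0x1f) << 11));
    }
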
Nathan Whitehorn
0081393d79 Do not rely on firmware having pre-enabled the MMU in a reasonable way for
late boot: enable it explicitly after installing the page tables. If booting
from an FDT, also make sure to escape the firmware's MMU context early
before overwriting firmware page tables.

Approved by:	re (gjb)
2016-06-29 14:40:43 +00:00
Nathan Whitehorn
d929c32b7f Enter 64-bit mode as early as possible in the 64-bit PowerPC boot sequence.
Most of the effect of setting MSR[SF] is that the CPU will stop ignoring
the high 32 bits of registers containing addresses in load/store
instructions. As such, the kernel was setting it only when it began to
need access to high memory. MSR[SF] also affects the operation of some
conditional instructions, however, and so setting it at late times could
subtly break code at very early times. This fixes use of the FDT mode in
loader, and FDT boot more generally, on 64-bit PowerPC systems.

Hardware provided by: IBM LTC
Approved by: re (kib)
2016-06-26 18:43:42 +00:00
Pedro F. Giffuni
d9c9c81c08 sys: use our roundup2/rounddown2() macros when param.h is available.
rounddown2 tends to produce longer lines than the original code, and
when the code has a high indentation level it was not really
advantageous to do the replacement.

This tries to strike a balance between readability using the macros
and flexibility of having the expressions, so not everything is
converted.
2016-04-21 19:57:40 +00:00
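
For reference, the power-of-two rounding these macros perform (their
definitions in sys/param.h; y must be a power of two):

    #define rounddown2(x, y) ((x) & ~((y) - 1))
    #define roundup2(x, y)   (((x) + ((y) - 1)) & ~((y) - 1))

    /* e.g. roundup2(4097, 4096) == 8192; rounddown2(4097, 4096) == 4096 */
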
Justin Hibbits
54ac2713d5 Add VM_MEMATTR_CACHEABLE support for AIM, for parity with Book-E.
Not used right now, but may be in the future anyway.
2016-03-01 00:50:39 +00:00
Svatopluk Kraus
a1e1814d76 As <machine/pmap.h> is included from <vm/pmap.h>, there is no need to
include it explicitly when <vm/pmap.h> is already included.

Reviewed by:	alc, kib
Differential Revision:	https://reviews.freebsd.org/D5373
2016-02-22 09:02:20 +00:00
Nathan Whitehorn
ca496abd5a Remove dead code and dead comments, most notably the implementation of the
now-obsolete setfault(). No NetBSD code exists in the AIM locore files, so
update the copyrights there.
2016-01-10 18:00:01 +00:00
Andreas Tobler
76cbcfdcfc Fix booting of 32-bit kernels on 64-bit G5 hardware.
For rs6000, most memory insns and addi/addis do not allow GPR0 for RA
(they use literal zero there instead). So use a 'b' constraint to make
sure to have a base register other than GPR0.
GCC-4.7 and up handles this with allocating r9 instead of r0.
2016-01-02 22:04:37 +00:00
Nathan Whitehorn
86c94d24fa Switch setting MSR[SF] to C code. This removes any CPU-specific code
(MSR[SF] is a Book 3-S thing) in the 64-bit locore64.S.
2016-01-02 18:10:53 +00:00
Nathan Whitehorn
7c259020fb Make ELFv2 powerpc64 kernels build and run. Loader support will come in a
separate commit.
2015-11-29 07:16:08 +00:00
Nathan Whitehorn
8accb33404 Use what we really mean (powerpc_lwsync()) rather than the Linux-compat
mb() here and provide some more documentation on what, exactly, makes this
code safe.

Requested by and discussed with:	kib, alc
2015-11-24 16:10:21 +00:00
Nathan Whitehorn
4a38fe54fa Make native page table access endian-safe. Even on CPUs running in
little-endian mode, the hardware page table is big-endian. This is a
no-op on all currently supported systems.

MFC after:	1 month
2015-11-17 16:09:26 +00:00
Nathan Whitehorn
509142e189 Where appropriate, use the endian-flipping OF_getencprop() instead of
OF_getprop() to get encode-int encoded values from the OF tree. This is
a no-op at present, since all existing PowerPC ports are big-endian, but
it is a correctness improvement and will be required if we have a
little-endian kernel at some future point.

Where it is totally impossible for the code ever to be used on a
little-endian system (much of powerpc/powermac, for instance), I have not
necessarily made the appropriate changes.

MFC after:	1 month
2015-11-17 16:07:43 +00:00
Konstantin Belousov
edc8222303 Make kstack_pages a tunable on arm, x86, and powerpc. On i386, the
initial thread stack is not adjusted by the tunable, the stack is
allocated too early to get access to the kernel environment. See
TD0_KSTACK_PAGES for the thread0 stack sizing on i386.

The tunable was tested on x86 only.  From the visual inspection, it
seems that it might work on arm and powerpc.  The arm
USPACE_SVC_STACK_TOP and powerpc USPACE macros seems to be already
incorrect for the threads with non-default kstack size.  I only
changed the macros to use variable instead of constant, since I cannot
test.

On arm64, mips and sparc64, some static data structures are sized by
KSTACK_PAGES, so the tunable is disabled.

Sponsored by:	The FreeBSD Foundation
MFC after:	2 weeks
2015-08-10 17:18:21 +00:00
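
A usage sketch, assuming the knob is exposed under the variable's name:

    # /boot/loader.conf
    kern.kstack_pages="4"
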
Jason A. Harmening
713841afb2 Add two new pmap functions:
vm_offset_t pmap_quick_enter_page(vm_page_t m)
void pmap_quick_remove_page(vm_offset_t kva)

These will create and destroy a temporary, CPU-local KVA mapping of a specified page.

Guarantees:
--Will not sleep and will not fail.
--Safe to call under a non-sleepable lock or from an ithread

Restrictions:
--Not guaranteed to be safe to call from an interrupt filter or under a spin mutex on all platforms
--Current implementation does not guarantee more than one page of mapping space across all platforms. MI code should not make nested calls to pmap_quick_enter_page.
--MI code should not perform locking while holding onto a mapping created by pmap_quick_enter_page

The idea is to use this in busdma, for bounce buffer copies as well as virtually-indexed cache maintenance on mips and arm.

NOTE: the non-i386, non-amd64 implementations of these functions still need review and testing.

Reviewed by:	kib
Approved by:	kib (mentor)
Differential Revision:	http://reviews.freebsd.org/D3013
2015-08-04 19:46:13 +00:00
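
A bounce-copy sketch using only the guarantees listed above (no locking is
performed while the temporary mapping is live):

    static void
    copy_into_page(vm_page_t m, const void *src, size_t len)
    {
        vm_offset_t kva;

        kva = pmap_quick_enter_page(m);   /* cannot sleep, cannot fail */
        bcopy(src, (void *)kva, len);     /* len <= PAGE_SIZE */
        pmap_quick_remove_page(kva);
    }
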
Justin Hibbits
96f3c2adbe Fix userland program exception handling for powerpc64.
It appears that the linker will not handle 64-bit relocations at addresses that
are not aligned to 8-byte boundaries.  Prior to this change the line:

  .llong generictrap

was aligned to a 4-byte address, and the linker replaced that with an 8-byte
0x0.  Aligning that address to 8 bytes caused the linker to generate the proper
relocation.  As a follow-through, the dblow from trap_subr33.S used the code
sequence 'lwz %r1, TRAP_GENTRAP(0)', so this reproduces the analogue of that for
64-bit.
2015-07-16 05:13:08 +00:00
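
In assembler terms the fix boils down to forcing 8-byte alignment before
emitting the pointer, along these lines (illustrative):

      .p2align 3              /* 2^3 = 8-byte alignment, so the linker can
                                 emit a normal 64-bit relocation */
      .llong  generictrap
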
Justin Hibbits
3f3cffedce Merge booke and aim interrupt.c files.
Summary:
Both booke and AIM interrupt.c files contain nearly identical code.  This merges
the two files, to reduce duplication.

Reviewers: #powerpc, marcel

Reviewed By: marcel

Subscribers: imp

Differential Revision: https://reviews.freebsd.org/D2991
2015-07-06 05:08:57 +00:00
Justin Hibbits
0936003e3d Use the correct type for physical addresses.
On Book-E, physical addresses are actually 36 bits, not 32.  This is
currently worked around by ignoring the top bits.  However, in some cases, the
boot loader configures CCSR to something above the 32-bit mark.  This is stage 1
in updating the pmap to handle 36-bit physaddr.
2015-07-04 19:00:38 +00:00
Justin Hibbits
6c8df58287 Unify booke and AIM machdep.
Much of the code was common to begin with.  There is one nit, which is likely
not an issue at all.  With the old code, the AIM machdep would __syncicache()
the entire kernel core at setup.  However, in the unified setup, that seems to
hang on the MPC7455, perhaps because it's running later than before.  Removing
this allows it to boot just fine.  Examining the code, the FreeBSD loader
already does syncicache of the full kernel, and each module loaded, so this
doesn't appear to be an actual problem.

Initial code by Nathan Whitehorn.
2015-04-30 01:24:25 +00:00
Justin Hibbits
28cbb9b173 Unify Book-E and AIM trap.c
Summary:
Book-E and AIM trap.c are almost identical, except for a few bits.  This is step
1 in unifying them.

This also renumbers EXC_DEBUG, to not conflict with AIM vector numbers.  Since
this is the only one thus far that is used in the switch statement in trap(),
it's the only one renumbered.  If others get added to the switch, which conflict
with AIM numbers, they should also be renumbered.

Reviewers: #powerpc, marcel, nwhitehorn

Reviewed By: marcel

Subscribers: imp

Differential Revision: https://reviews.freebsd.org/D2215
2015-04-05 02:42:52 +00:00
Ryan Stone
f2c2231e0c Fix integer truncation bug in malloc(9)
A couple of internal functions used by malloc(9) and uma truncated
a size_t down to an int.  This could cause any number of issues
(e.g. indefinite sleeps, memory corruption) if any kernel
subsystem tried to allocate 2GB or more through malloc.  zfs would
attempt such an allocation when run on a system with 2TB or more
of RAM.

Note to self: When this is MFCed, sparc64 needs the same fix.

Differential revision:	https://reviews.freebsd.org/D2106
Reviewed by:	kib
Reported by:	Michael Fuckner <michael@fuckner.net>
Tested by:	Michael Fuckner <michael@fuckner.net>
MFC after:	2 weeks
2015-04-01 12:42:26 +00:00
Nathan Whitehorn
1cd30eb6dd Deallocate any leftover page table entries in the LPAR at boot. This
prevents contamination from a previous kernel (e.g. after shutdown -r).
2015-03-13 00:08:58 +00:00
Nathan Whitehorn
c3e2821f5f Make assembly slightly more idiomatic (and able to be handled by clang's
integrated assembler).
2015-03-07 20:27:00 +00:00
Nathan Whitehorn
5c845fde2e Make 32-bit PowerPC kernels, like 64-bit PowerPC kernels, position-independent
executables. The goal here, not yet accomplished, is to let the e500 kernel
run under QEMU by setting KERNBASE to something that fits in low memory and
then having the kernel relocate itself at runtime.
2015-03-07 20:14:46 +00:00
Nathan Whitehorn
f14cf38dbe The AIM DAR (data access fault address register) and Book-E DEAR registers
have the same meaning and occupy the same memory address in the trapframe
courtesy of union. Avoid some pointless #ifdef by spelling them both 'DAR'
in the trapframe.
2015-03-04 21:06:57 +00:00
Nathan Whitehorn
d1295abdc0 Move Book-E/AIM dependent bits for setting user PMAP during thread switch
out of cpu_switch() and into pmap_activate() where they belong. This also
removes all the #ifdef from cpu_switch().
2015-03-04 16:45:31 +00:00
Nathan Whitehorn
d4eb568e07 Fix uninitialized variable. 2015-02-27 20:32:09 +00:00
Nathan Whitehorn
827cc9b981 New pmap implementation for 64-bit PowerPC processors. The main focus of
this change is to improve concurrency:
- Drop global state stored in the shadow overflow page table (and all other
  global state)
- Remove all global locks
- Use per-PTE lock bits to allow parallel page insertion
- Reconstruct state when requested for evicted PTEs instead of buffering
  it during overflow

This drops total wall time for make buildworld on a 32-thread POWER8 system
by a factor of two and system time by a factor of three, providing performance
20% better than similarly clocked Core i7 Xeons per-core. Performance on
smaller SMP systems, where PMAP lock contention was not as much of an issue,
is nearly unchanged.

Tested on:	POWER8, POWER5+, G5 UP, G5 SMP (64-bit and 32-bit kernels)
Merged from:	user/nwhitehorn/ppc64-pmap-rework
Looked over by:	jhibbits, andreast
MFC after:	3 months
Relnotes:	yes
Sponsored by:	FreeBSD Foundation
2015-02-24 21:37:20 +00:00
Nathan Whitehorn
35f612b88a Kernel support for the Vector-Scalar eXtension (VSX) found on the POWER7
and POWER8. This instruction set unifies the 32 64-bit scalar floating
point registers with the 32 128-bit vector registers into a single bank
of 64 128-bit registers. Kernel support mostly amounts to saving and
restoring the wider version of the floating point registers and making
sure that both scalar FP and vector registers are enabled once a VSX
instruction is executed. get_mcontext() and friends currently cannot
see the high bits, which will require a little more work.

As the system compiler (GCC 4.2) does not support VSX, making use of this
from userland requires either newer GCC or clang.

Relnotes:	yes
Sponsored by:	FreeBSD Foundation
2015-02-22 21:40:27 +00:00
Rui Paulo
29d0137a8d Remove FreeBSD/wii.
This port failed to gain traction and probably only a couple of Wii consoles
ran FreeBSD all the way to single user mode with an md(4). IPC
support was never implemented, so it was impossible to use any peripherals.

Any further development, if any, will happen at https://github.com/rpaulo/wii.

Discussed with:	nathanw (a long time ago), jhibbits
2015-02-10 06:35:16 +00:00
Nathan Whitehorn
9ddcd32269 Set thread priorities on multithreaded CPUs so that threads holding a
spinlock are high-priority and threads waiting for a spinlock are set to
low priority.
2015-02-10 00:55:42 +00:00
Nathan Whitehorn
0eb1bfa5e7 Simplify trapcode setup by placing a copy of the generic trap handler at
every possible trap address by default. This also makes sure the kernel
notices (and panics at) traps from newer CPUs that the kernel was not
expecting rather than executing gibberish memory.
2015-02-09 02:12:38 +00:00
Nathan Whitehorn
01fc52e76d Fix typo in r277561. 2015-01-24 01:58:15 +00:00
Nathan Whitehorn
ec336f0f0c Use relocation-safe methods to determine the sizes of the exception handlers.
A "size" symbol with its address set to the length of handler would be
shifted forward with all other addresses when relocations are processed.
Instead, just note the end and do the subtraction at runtime.
2015-01-23 07:36:51 +00:00
Nathan Whitehorn
88a6aee146 Add POWER7+ and POWER8 to the list of CPUs with 32 SLB slots. This is
mostly a no-op since all currently-supported instances of these CPUs give
the number of SLB slots in the device tree, but keep it here as well just
in case.
2015-01-21 19:11:15 +00:00
Nathan Whitehorn
7a28efd9ee Make sure to relocate tmpstk with everything else and avoid processing
non-relative relocations that the UART code makes for absent modules.
2015-01-21 19:09:15 +00:00
Nathan Whitehorn
554dab448e Make 64-bit AIM trap handlers relocatable by changing all absolute branch
instructions to call through pointers instead. In general, these are set
implicitly through relocation processing. One has to be set explicitly in
machdep.c, however, to fit one handler in the tiny (8 instruction) space
available.

Reviewed by:	andreast
Differential revision:	D1554
Tested on:	UP and SMP G5, Cell, POWER5+
2015-01-21 19:07:45 +00:00
Nathan Whitehorn
e5fadf2a31 On 64-bit PowerPC, use more native forms of the PPC 970 HID restore
sequences, like those used to read the HIDs. This is both easier to read
and avoids a miscompilation by GCC in certain circumstances. Also avoid
double restoration of HID4 and HID5.

MFC after:	2 weeks
2015-01-21 02:57:54 +00:00
Nathan Whitehorn
3dcd1c9585 Zero BSS explicitly if not started by loader(8). Add a check for the magic
values that ePAPR-compliant loaders (like skiboot) put in the register
loader uses for the metadata pointer to avoid confusing them.
2015-01-20 05:28:03 +00:00
Nathan Whitehorn
98cd7a6655 Add some initial infrastructure for relocating the kernel in place.
MFC after:	2 months
Differential revision:	D1554
2015-01-19 17:58:01 +00:00
Nathan Whitehorn
c5e8bb4f2e Provide a tunable (machdep.moea64_bpvo_pool_size) to set the bootstrap
PVO pool size. The default errs on the exceedingly large side, so absent
any intelligent automatic tuning, at least let the user set it to save
RAM on memory-constrained systems.

MFC after:	2 weeks
2015-01-19 05:14:07 +00:00
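
For example, a memory-constrained system could shrink the pool from
loader(8) (the value here is arbitrary):

    # /boot/loader.conf
    machdep.moea64_bpvo_pool_size="32768"
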
Nathan Whitehorn
9cecb88ce3 Use TOC to look up all kernel globals on powerpc64 instead of doing the
non-relocatable lis @ha, ori @l dance and hoping they are below 4 GB.

MFC after:	2 months
2015-01-18 20:00:33 +00:00
Nathan Whitehorn
bb80825435 Refactor PowerPC (especially AIM) init sequence to be less baroque.
MFC after:	2 months
2015-01-18 18:32:43 +00:00
Nathan Whitehorn
bf27800837 Do not remap Open Firmware mappings covered by the direct map. It's
pointless and wastes resources.

MFC after:	1 week
2015-01-14 02:18:29 +00:00
Mark Johnston
bdb9ab0dd9 Factor out duplicated code from dumpsys() on each architecture into generic
code in sys/kern/kern_dump.c. Most dumpsys() implementations are nearly
identical and simply redefine a number of constants and helper subroutines;
a generic implementation will make it easier to implement features around
kernel core dumps. This change does not alter any minidump code and should
have no functional impact.

PR:		193873
Differential Revision:	https://reviews.freebsd.org/D904
Submitted by:	Conrad Meyer <conrad.meyer@isilon.com>
Reviewed by:	jhibbits (earlier version)
Sponsored by:	EMC / Isilon Storage Division
2015-01-07 01:01:39 +00:00
Nathan Whitehorn
44d29d4762 Allow booting with both a real Open Firmware tree and a flattened version of
the Open Firmware, as provided by petitboot, for example. Note that this is
not quite complete, since RTAS instantiation still depends on callable
firmware.

MFC after:	2 weeks
2015-01-01 22:26:12 +00:00
Mark Johnston
cafe874475 Restore the trap type argument to the DTrace trap hook, removed in r268600.
It's redundant at the moment since it can be obtained from the trapframe
on the architectures where DTrace is supported, but this won't be the case
with ARM.
2014-12-23 15:38:19 +00:00
Andreas Tobler
85859dfe8f Fix build for powerpc(32|64) kernels. 2014-12-10 18:13:14 +00:00
Justin Hibbits
a8920f67f3 Add support for dtrace:fbt on modules for PowerPC
Summary:
Revert the initial FBT-with-KDB changes for trap_subr*.S, and instead use the
db_trap filter function to handle dtrace trap filtering.  With this, the MMU is
enabled by the support code, simplifying the codepath altogether.

Test Plan: Tested on my G4 PowerBook

Reviewers: #powerpc, nwhitehorn

Reviewed By: nwhitehorn

Differential Revision: https://reviews.freebsd.org/D1207

MFC after:	3 weeks
2014-11-29 20:54:33 +00:00
Justin Hibbits
eaed5fd136 cpudep_ap_early_bootstrap() takes no arguments, so no need to give it one.
MFC after:	3 weeks
2014-11-20 06:32:47 +00:00
Davide Italiano
2be111bf7d Follow up to r225617. In order to maximize the re-usability of kernel code
in userland rename in-kernel getenv()/setenv() to kern_getenv()/kern_setenv().
This fixes a namespace collision with libc symbols.

Submitted by:   kmacy
Tested by:      make universe
2014-10-16 18:04:43 +00:00
Roger Pau Monné
c98a2727cc ddb: allow specifying the exact address of the symtab and strtab
When the FreeBSD kernel is loaded from Xen the symtab and strtab are
not loaded the same way as with the native boot loader. This patch adds
three new global variables to ddb that can be used to specify the
exact position and size of those tables, so they can be directly used
as parameters to db_add_symbol_table. A new helper is introduced, so callers
that used to set ksym_start and ksym_end can use this helper to set the new
variables.

It also adds support for loading them from the Xen PVH port, that was
previously missing those tables.

Sponsored by: Citrix Systems R&D
Reviewed by:	kib

ddb/db_main.c:
 - Add three new global variables: ksymtab, kstrtab, ksymtab_size that
   can be used to specify the position and size of the symtab and
   strtab.
 - Use those new variables in db_init in order to call db_add_symbol_table.
 - Move the logic in db_init to db_fetch_symtab in order to set ksymtab,
   kstrtab, ksymtab_size from ksym_start and ksym_end.

ddb/ddb.h:
 - Add prototype for db_fetch_ksymtab.
 - Declare the extern variables ksymtab, kstrtab and ksymtab_size.

x86/xen/pv.c:
 - Add support for finding the symtab and strtab when booted as a Xen
   PVH guest. Since Xen loads the symtab and strtab as NetBSD expects
   to find them we have to adapt and use the same method.

amd64/amd64/machdep.c:
arm/arm/machdep.c:
i386/i386/machdep.c:
mips/mips/machdep.c:
pc98/pc98/machdep.c:
powerpc/aim/machdep.c:
powerpc/booke/machdep.c:
sparc64/sparc64/machdep.c:
 - Use the newly introduced db_fetch_ksymtab in order to set ksymtab,
   kstrtab and ksymtab_size.
2014-09-25 08:28:10 +00:00
Nathan Whitehorn
6be7abb3ff We should have an isync after switching MSR[SF] in bootstrap.
Submitted by:	Mark Millard
MFC after:	3 days
2014-09-23 04:13:21 +00:00
Konstantin Belousov
39ffa8c138 Change pmap_enter(9) interface to take flags parameter and superpage
mapping size (currently unused).  The flags includes the fault access
bits, wired flag as PMAP_ENTER_WIRED, and a new flag
PMAP_ENTER_NOSLEEP to indicate that pmap should not sleep.

For powerpc aim both 32 and 64 bit, fix implementation to ensure that
the requested mapping is created when PMAP_ENTER_NOSLEEP is not
specified, in particular, wait for the available memory required to
proceed.

In collaboration with:	alc
Tested by:	nwhitehorn (ppc aim32 and booke)
Sponsored by:	The FreeBSD Foundation and EMC / Isilon Storage Division
MFC after:	2 weeks
2014-08-08 17:12:03 +00:00
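
A sketch of a call under the reworked interface (flag spellings per the
description above; the final superpage-size argument is currently unused):

    error = pmap_enter(pmap, va, m, VM_PROT_READ | VM_PROT_WRITE,
        VM_PROT_WRITE | PMAP_ENTER_WIRED | PMAP_ENTER_NOSLEEP, 0);
    if (error != KERN_SUCCESS) {
        /* Only possible with PMAP_ENTER_NOSLEEP: retry in a context
         * where sleeping is permitted. */
    }
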
Justin Hibbits
6bff82d03d Set the si_code appropriately for exception-caused signals.
LLDB checks the si_code, and aborts if a code isn't known.

MFC after:	2 weeks
Relnotes:	yes
2014-08-08 06:22:32 +00:00
Alan Cox
a695d9b25b Retire pmap_change_wiring(). We have never used it to wire virtual pages.
We continue to use pmap_enter() for that.  For unwiring virtual pages, we
now use pmap_unwire(), which unwires a range of virtual addresses instead
of a single virtual page.

Sponsored by:	EMC / Isilon Storage Division
2014-08-03 20:40:51 +00:00
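
The range-based replacement, as a sketch:

    void pmap_unwire(pmap_t pmap, vm_offset_t sva, vm_offset_t eva);

    /* One call unwires every mapping in [start, end), where the old
     * interface took one pmap_change_wiring() call per page. */
    pmap_unwire(map->pmap, entry->start, entry->end);
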
Alan Cox
081b8e203b Simplify the selection of the pvo_head and pvo allocation zone in
moea_enter_locked() and moea64_enter().

Eliminate an unused variable from moea64_enter().
2014-08-01 17:09:50 +00:00
Alan Cox
eb2af3e758 Retire PVO_EXECUTABLE. It's neither used nor set correctly. 2014-08-01 04:53:35 +00:00
Alan Cox
add0359068 Correct a long-standing problem in moea{,64}_pvo_enter() that was revealed
by the combination of r268591 and r269134: When we attempt to add the
wired attribute to an existing mapping, moea{,64}_pvo_enter() do nothing.
(They only set the wired attribute on newly created mappings.)

Tested by:	andreast
2014-08-01 01:48:41 +00:00
Alan Cox
974524373d Correct a defect in r268591. In the implementation of the new function
pmap_unwire(), the call to MOEA64_PVO_TO_PTE() must be performed before
any changes are made to the PVO.  Otherwise, MOEA64_PVO_TO_PTE() will
panic.

Reported by:	andreast
2014-07-31 16:17:30 +00:00
Nathan Whitehorn
34a856a200 Allow mappings of memory not previously direct-mapped by the kernel when
calling mmap on /dev/mem and add a handler for the possible userland
machine checks that may result. Remove some pointless and wrong copy/paste
that has been in here for a decade as well.

This results in a /dev/mem with identical semantics to the x86 version.

MFC after:	1 week
2014-07-19 15:11:58 +00:00
Mark Johnston
291624fdf6 Invoke the DTrace trap handler before calling trap() on amd64. This matches
the upstream implementation and helps ensure that a trap induced by tracing
fbt::trap:entry is handled without recursively generating another trap.

This makes it possible to run most (but not all) of the DTrace tests under
common/safety/ without triggering a kernel panic.

Submitted by:	Anton Rang <anton.rang@isilon.com> (original version)
Phabric:	D95
2014-07-14 04:38:17 +00:00
Alan Cox
f26bcf99e0 Eliminate an unused variable. Refresh two comments. 2014-07-13 17:52:07 +00:00
Alan Cox
a844c68fc2 Implement pmap_unwire(). See r268327 for the motivation behind this change. 2014-07-13 16:27:57 +00:00
Alan Cox
bfc30490a7 Correct the accounting code for wired mappings. The wrong field of the PVO
entry was being tested.  We were incrementing and decrementing the pmap's
wired mapping count based on whether the physical page being mapped or
unmapped was cache coherent, not whether it was a wired mapping.

Reviewed by:	nwhitehorn
2014-07-10 20:55:38 +00:00
Mark Johnston
f2789bd5c7 Commit the rest of the changes that were intended to be part of r266826.
X-MFC-with:	r266826
2014-05-29 01:42:22 +00:00
Justin Hibbits
46da9e44cc oea64 uses 4k pages, too.
MFC after:	1 week
X-MFC-with:	r266116
2014-05-15 15:17:44 +00:00
Justin Hibbits
8e244752cb A page mask size is 12 bits, not 11.
MFC after:	1 week
2014-05-15 04:18:06 +00:00
Bryan Drewery
44f1c91610 Rename global cnt to vm_cnt to avoid shadowing.
To reduce the diff struct pcu.cnt field was not renamed, so
PCPU_OP(cnt.field) is still used. pc_cnt and pcpu are also used in
kvm(3) and vmstat(8). The goal was to not affect externally used KPI.

Bump __FreeBSD_version_ in case some out-of-tree module/code relies on the
the global cnt variable.

Exp-run revealed no ports using it directly.

No objection from:	arch@
Sponsored by:	EMC / Isilon Storage Division
2014-03-22 10:26:09 +00:00
Ed Maste
0fcefb433d Update NetBSD Foundation copyrights to 2-clause BSD
The NetBSD Foundation states "Third parties are encouraged to change the
license on any files which have a 4-clause license contributed to the
NetBSD Foundation to a 2-clause license."

This change removes clauses 3 and 4 from copyright / license blocks that
list The NetBSD Foundation as the only copyright holder.

Sponsored by:	The FreeBSD Foundation
2014-03-18 01:40:25 +00:00
Justin Hibbits
e1c161e74c Unbreak non-SMP builds. This was broken by r259284. Also, reorganize the
code introduced in that revision a bit.

Reviewed by:	nwhitehorn
MFC after:	3 weeks
2014-01-31 03:55:34 +00:00
Justin Hibbits
fe938c0835 Use a loop of dcbz, instead of calling bzero() to zero a page. This matches
what is done in mmu_oea64.c.

MFC after:	1 month
2014-01-29 05:58:08 +00:00
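
A sketch of such a loop (the cache-line size is assumed to have been
discovered elsewhere, e.g. from the CPU information):

    static void
    zero_page_dcbz(vm_offset_t va, u_int cacheline_size)
    {
        vm_offset_t off;

        /* dcbz zeroes an entire data cache block per instruction. */
        for (off = 0; off < PAGE_SIZE; off += cacheline_size)
            __asm __volatile("dcbz 0,%0" :: "r"(va + off));
    }
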
Justin Hibbits
ac01bc33c9 Save r3 before using it for the trap check, else we end up saving the new r3,
containing the trap instruction encoding (0x7c810808), and restoring it back
with the frame on return.  This caused it to panic on my ppc32 machine, but
somehow my ppc64 machine overlooked it, because I was using such a simple
dtrace probe.

X-MFC-with:	r259245
MFC after:	2 weeks
2013-12-15 18:07:25 +00:00
Justin Hibbits
4702d987cd Add PMU-based CPU frequency scaling. This method is used on most Titanium
PowerBooks.

MFC after:	1 month
2013-12-13 02:37:35 +00:00
Justin Hibbits
8d7b300516 FBT now works fully on PowerPC.
MFC after:	2 weeks
2013-12-12 04:12:19 +00:00
Nathan Whitehorn
3df046183e The kernel stack guard pages are only below the stack pointer, not above.
Prevent erroneous detection of stack overflows on legitimate faults on the
page after this thread's stack.

MFC after:	3 days
2013-12-01 17:28:28 +00:00
Nathan Whitehorn
a0d0d6d88b badaddr() is used only in the grackle PCI driver, so move its definition
there. Clean up a spurious setfault() declaration as well.
2013-11-27 22:01:09 +00:00
Attilio Rao
54366c0bd7 - For kernels compiled only with KDTRACE_HOOKS and without any lock debugging
option, unbreak the lock tracing release semantic by embedding
  calls to LOCKSTAT_PROFILE_RELEASE_LOCK() directly in the inlined
  version of the releasing functions for mutex, rwlock and sxlock.
  Failing to do so skips the lockstat_probe_func invocation for
  unlocking.
- As part of the LOCKSTAT support being inlined in mutex operations, for
  kernels compiled without lock debugging options, potentially every
  consumer must be compiled including opt_kdtrace.h.
  Fix this by moving KDTRACE_HOOKS into opt_global.h and remove the
  dependency by opt_kdtrace.h for all files, as now only KDTRACE_FRAMES
  is linked there and it is only used as a compile-time stub [0].

[0] immediately shows a new bug: the DTrace-derived debug support
in sfxge is broken and was never really tested.  As it was not
correctly including opt_kdtrace.h before, it was never enabled, so it
was kept broken for a while.  Fix this by using a protection stub,
leaving sfxge driver authors the responsibility for fixing it
appropriately [1].

Sponsored by:	EMC / Isilon storage division
Discussed with:	rstone
[0] Reported by:	rstone
[1] Discussed with:	philip
2013-11-25 07:38:45 +00:00
Andreas Tobler
f367ffdecc Save and restore the trap vectors when doing OF calls on pSeries machines.
It turned out that on pSeries machines the call into OF modified the trap
vectors and this made further behaviour unpredictable.

With this commit I'm now able to boot multi user on a network booted
environment on my IntelliStation 285. This is a POWER5+ machine.

Discussed with:		nwhitehorn
MFC after:	1 week
2013-11-23 18:58:17 +00:00
Justin Hibbits
c27f33b56c Remove stale comment. The PID provider is handled elsewhere already. 2013-11-21 06:54:28 +00:00
Nathan Whitehorn
fe875ee040 Do not assume a value for #address-cells when parsing the OF translations
map. This allows the kernel to get farther with OpenBIOS on 64-bit CPUs.
2013-11-17 18:03:03 +00:00
Nathan Whitehorn
e537388b84 Unify handling of illegal instruction faults between AIM and Book-E. This
allows FPU emulation on AIM as well as providing support for the mfpvr
and lwsync instructions from userland on e500 cores. lwsync, in particular,
is required for many C++ programs to work correctly.

MFC after:	1 week
2013-11-17 15:12:03 +00:00
Justin Hibbits
1eb04b44aa Fix copy+paste-o, OEA64 uses LPTE, not PTE.
X-MFC with:	r257941
2013-11-14 07:41:52 +00:00
Nathan Whitehorn
e39c26a950 Use the same implementation of copyinout.c for both AIM and Book-E. This
fixes some bugs in both implementations related to validity checks on
mapping bounds.
2013-11-11 23:37:16 +00:00
Nathan Whitehorn
bdac436008 Follow up r223485, which made AIM use the ABI thread pointer instead of
PCPU fields for curthread, by doing the same to Book-E. This closes
some potential races switching between CPUs. As a side effect, it turns out
the AIM and Book-E swtch.S implementations were the same to within a few
registers, so move that to powerpc/powerpc.

MFC after: 3 months
2013-11-11 17:37:50 +00:00
Justin Hibbits
a470e71336 Add the necessary bits for dumps on ppc64.
MFC after:	2 weeks
2013-11-11 03:17:38 +00:00
Mark Johnston
57170f49f2 Remove references to an unused fasttrap probe hook, and remove the
corresponding x86 trap type. Userland DTrace probes are currently handled
by the other fasttrap hooks (dtrace_pid_probe_ptr and
dtrace_return_probe_ptr).

Discussed with:	rpaulo
2013-10-31 02:35:00 +00:00
Nathan Whitehorn
10d06e67fd Add some extra sanity checking and checks to printf format specifiers. 2013-10-26 18:19:36 +00:00
Nathan Whitehorn
33724f17d2 Interrelated improvements to early boot mappings:
- Remove explicit requirement that the SOC registers be found except as an
  optimization (although the MPC85XX LAW drivers still require they be found
  externally, which should change).
- Remove magic CCSRBAR_VA value.
- Allow bus_machdep.c's early-boot code to handle non 1:1 mappings and
  systems not in real-mode or global 1:1 maps in early boot.
- Allow pmap_mapdev() on Book-E to reissue previous addresses if the
  area is already mapped. Additionally have it check all mappings, not
  just the CCSR area.

This allows the console on e500 systems to actually work on systems where
the boot loader was not kind enough to set up a 1:1 mapping before starting
the kernel.
2013-10-26 18:18:14 +00:00
Nathan Whitehorn
258dbffe6e Clean up missed header references. 2013-10-26 17:54:31 +00:00
Nathan Whitehorn
e4cf0633b8 Since the PS3 port was committed, the AIM nexus device works perfectly fine
on all PowerPC platforms, whether or not they have Open Firmware. Remove
some more duplication and have there be only one nexus driver.
2013-10-20 18:40:55 +00:00
Nathan Whitehorn
228f09b3ef Replace the two almost-exactly-identical AIM and Book-E clock.c
implementations with a single one after the application of a very small
amount of #ifdef.
2013-10-20 16:37:03 +00:00
Nathan Whitehorn
1cfdc97153 Unify the AIM and Book-E vm_machdep.c implementations, which previously
differed only with respect to the AIM version not following style(9) and
some additional features for 64-bit systems and machines with direct maps
in the AIM implementation that are no-ops on Book-E (at least for now).
2013-10-20 16:14:03 +00:00
Justin Hibbits
6141b794bf Fix the Wii build, and remove an extraneous critical_enter(). 2013-10-16 04:11:42 +00:00
Justin Hibbits
30b318b92f Add fasttrap for PowerPC. This is the last piece of the dtrace/ppc puzzle.
It's incomplete (it doesn't contain full instruction emulation), but it should be
sufficient for most cases.

MFC after:	1 month
2013-10-15 15:00:29 +00:00
Justin Hibbits
b2da17ea54 Move the PMC handling to the first level interrupt handler where it belongs.
Also add the pmc_hook use, to handle callchain tracing.

MFC after:	1 week
2013-10-15 14:52:44 +00:00
Gleb Smirnoff
255c1caae3 - Create kern.ipc.sendfile namespace, and put the new "readahead" OID
there as "kern.ipc.sendfile.readahead".
- Push all nsfbuf related tunables into MD code. Don't move them
  to new namespace in favor of POLA.

Reviewed by:	scottl
Approved by:	re (gjb)
2013-09-22 13:36:52 +00:00
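
After this change the knob lives at its new name, e.g. (value illustrative):

    sysctl kern.ipc.sendfile.readahead=1048576
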
Alan Cox
deb179bb4c The pmap function pmap_clear_reference() is no longer used. Remove it.
pmap_clear_reference() has had exactly one caller in the kernel for
several years, more precisely, since FreeBSD 8.  Now, that call no
longer exists.

Approved by:	re (kib)
Sponsored by:	EMC / Isilon Storage Division
2013-09-20 04:30:18 +00:00
Nathan Whitehorn
1330c354c5 Change VM object lock assertion to match locking higher in the call
chain. This repairs a panic observed during pageout on some 64-bit
PowerPC systems.

Submitted by:	grehan
Approved by:	re (kib)
MFC after:	2 weeks
Revisit after:	10.0
2013-09-13 01:12:45 +00:00
Nathan Whitehorn
c84bb047d4 Raise artificial limits on number of CPUs and number of interrupts.
Approved by:	re (kib)
2013-09-09 12:52:34 +00:00
Nathan Whitehorn
c5915fdc44 Add POWER CPUs to the kernel's knowledge. This does not imply we currently
actually run on any machines with POWER CPUs but avoids closing that door
unnecessarily.

Approved by:	re (kib)
2013-09-09 12:51:24 +00:00
Nathan Whitehorn
a5715964b1 Align stacks of kernel threads correctly at 16-byte boundaries rather than
making sure they are all misaligned at +8 bytes. This fixes clang builds
of powerpc64 kernels (aside from a required increase in KSTACK_PAGES which
will come later).

This commit was made from FreeBSD/powerpc64 with a clang-built kernel.

MFC after:	2 weeks
2013-09-05 23:00:24 +00:00
Justin Hibbits
44045369a6 Enable PMC interrupt handling, and fix a DTrace trap handling bug. 2013-09-03 00:42:15 +00:00
Konstantin Belousov
e68c64f0ba Revert r254501. Instead, reuse the type stability of the struct pmap
which is the part of struct vmspace, allocated from UMA_ZONE_NOFREE
zone.  Initialize the pmap lock in the vmspace zone init function, and
remove pmap lock initialization and destruction from pmap_pinit() and
pmap_release().

Suggested and reviewed by:	alc (previous version)
Tested by:	pho
Sponsored by:	The FreeBSD Foundation
2013-08-22 18:12:24 +00:00
Attilio Rao
c7aebda8a1 The soft and hard busy mechanisms rely on the vm object lock to work.
Unify the 2 concepts into a real, minimal, sxlock where the shared
acquisition represents the soft busy and the exclusive acquisition
represents the hard busy.
The old VPO_WANTED mechanism becomes the hard path for this new lock
and it becomes per-page rather than per-object.
The vm_object lock becomes an interlock for this functionality:
it can be held in either read or write mode.
However, if the vm_object lock is held in read mode while acquiring
or releasing the busy state, the thread owner cannot make any
assumption on the busy state unless it is also busying it.

Also:
- Add a new flag to directly share busy pages while vm_page_alloc
  and vm_page_grab are being executed.  This will be very helpful
  once these functions happen under a read object lock.
- Move the swapping sleep into its own per-object flag

The KPI is heavily changed; this is why the version is bumped.
It is very likely that some VM ports users will need to change
their own code.

Sponsored by:	EMC / Isilon storage division
Discussed with:	alc
Reviewed by:	jeff, kib
Tested by:	gavin, bapt (older version)
Tested by:	pho, scottl
2013-08-09 11:11:11 +00:00
Jeff Roberson
5df87b21d3 Replace kernel virtual address space allocation with vmem. This provides
transparent layering and better fragmentation.

 - Normalize functions that allocate memory to use kmem_*
 - Those that allocate address space are named kva_*
 - Those that operate on maps are named kmap_*
 - Implement recursive allocation handling for kmem_arena in vmem.

Reviewed by:	alc
Tested by:	pho
Sponsored by:	EMC / Isilon Storage Division
2013-08-07 06:21:20 +00:00
Justin Hibbits
a3b2cab451 Remove an unnecessary panic. The PVO's PTE entry and the PTEG's PTE entry may
not match, if the PVO's PTE is invalid.
2013-08-06 02:58:16 +00:00
Justin Hibbits
804d1cc1b6 Evict pages from the PTEG when it's full and trying to insert a new PTE,
rather than panicking.

Reviewed by:	nwhitehorn
MFC after:	3 weeks
2013-08-06 01:01:15 +00:00
Andrey V. Elsukov
05d1f5bce0 Introduce new structure sfstat for collecting sendfile's statistics
and remove the corresponding fields from struct mbstat. Use PCPU counters
and the SFSTAT_INC() macro to update these statistics.

Discussed with:	glebius
2013-07-15 06:16:57 +00:00
Nathan Whitehorn
d9675cb0af Fix check: bitwise AND has only one &.
MFC after:	1 week
2013-07-12 15:56:30 +00:00
Attilio Rao
9af6d512f5 o Relax locking assertions for vm_page_find_least()
o Relax locking assertions for pmap_enter_object() and add them also
  to architectures that currently don't have any
o Introduce VM_OBJECT_LOCK_DOWNGRADE() which is basically a downgrade
  operation on the per-object rwlock
o Use all the mechanisms above to make vm_map_pmap_enter() work
  most of the time with only read locks.

Sponsored by:	EMC / Isilon storage division
Reviewed by:	alc
2013-05-21 20:38:19 +00:00
Alan Cox
658f180b3b Relax the object locking assertion in pmap_enter_locked().
Reviewed by:	attilio
Sponsored by:	EMC / Isilon Storage Division
2013-05-17 18:59:00 +00:00
Justin Hibbits
70c29e3930 Remove a comment that shouldn't have gone in.
X-MFC-with:	r249864
2013-04-26 05:18:18 +00:00
Justin Hibbits
afd9cb6c6d Introduce kernel coredumps to ppc32 AIM. Leeched from the booke code.
MFC after:	2 weeks
2013-04-25 00:39:43 +00:00
Justin Hibbits
d3d5100b03 Print out DSISR in a fatal DSI trap.
2013-04-05 04:53:43 +00:00
Konstantin Belousov
ee75e7de7b Implement the concept of the unmapped VMIO buffers, i.e. buffers which
do not map the b_pages pages into buffer_map KVA.  The use of the
unmapped buffers eliminate the need to perform TLB shootdown for
mapping on the buffer creation and reuse, greatly reducing the amount
of IPIs for shootdown on big-SMP machines and eliminating up to 25-30%
of the system time on i/o intensive workloads.

The unmapped buffer should be explicitly requested by the GB_UNMAPPED
flag by the consumer.  For unmapped buffer, no KVA reservation is
performed at all. The consumer might request unmapped buffer which
does have a KVA reserve, to manually map it without recursing into
buffer cache and blocking, with the GB_KVAALLOC flag.

When the mapped buffer is requested and unmapped buffer already
exists, the cache performs an upgrade, possibly reusing the KVA
reservation.

Unmapped buffer is translated into unmapped bio in g_vfs_strategy().
Unmapped bio carry a pointer to the vm_page_t array, offset and length
instead of the data pointer.  The provider which processes the bio
should explicitly specify a readiness to accept unmapped bio,
otherwise g_down geom thread performs the transient upgrade of the bio
request by mapping the pages into the new bio_transient_map KVA
submap.

The bio_transient_map submap claims up to 10% of the buffer map, and
the total buffer_map + bio_transient_map KVA usage stays the
same. Still, it could be manually tuned by kern.bio_transient_maxcnt
tunable, in the units of the transient mappings.  Eventually, the
bio_transient_map could be removed after all geom classes and drivers
can accept unmapped i/o requests.

Unmapped support can be turned off by the vfs.unmapped_buf_allowed
tunable, disabling which makes the buffer (or cluster) creation
requests to ignore GB_UNMAPPED and GB_KVAALLOC flags.  Unmapped
buffers are only enabled by default on the architectures where
pmap_copy_page() was implemented and tested.

In the rework, filesystem metadata is not the subject to maxbufspace
limit anymore. Since the metadata buffers are always mapped, the
buffers still have to fit into the buffer map, which provides a
reasonable (but practically unreachable) upper bound on it. The
non-metadata buffer allocations, both mapped and unmapped, is
accounted against maxbufspace, as before. Effectively, this means that
the maxbufspace is forced on mapped and unmapped buffers separately.
The pre-patch bufspace limiting code did not work, because
buffer_map fragmentation does not allow the limit to be reached.

By Jeff Roberson request, the getnewbuf() function was split into
smaller single-purpose functions.

Sponsored by:	The FreeBSD Foundation
Discussed with:	jeff (previous version)
Tested by:	pho, scottl (previous version), jhb, bf
MFC after:	2 weeks
2013-03-19 14:13:12 +00:00
Justin Hibbits
80a5635c8b Add FBT for PowerPC DTrace. Also, clean up the DTrace assembly code,
much of which is not necessary for PowerPC.

The FBT module can likely be factored into 3 separate files: common,
intel, and powerpc, rather than duplicating most of the code between
the x86 and PowerPC flavors.

All DTrace modules for PowerPC will be MFC'd together once Fasttrap is
completed.
2013-03-18 05:30:18 +00:00
Konstantin Belousov
e8a4a618cf Add pmap function pmap_copy_pages(), which copies the content of the
pages around, taking array of vm_page_t both for source and
destination.  Starting offsets and total transfer size are specified.

The function implements optimal algorithm for copying using the
platform-specific optimizations.  For instance, on the architectures
were the direct map is available, no transient mappings are created,
for i386 the per-cpu ephemeral page frame is used.  The code was
typically borrowed from the pmap_copy_page() for the same
architecture.

Only i386/amd64, powerpc aim and arm/arm-v6 implementations were
tested at the time of commit. High-level code, not committed yet to
the tree, ensures that the use of the function is only allowed after
explicit enablement.

For sparc64, the existing code has known issues and a stab is added
instead, to allow the kernel linking.

Sponsored by:	The FreeBSD Foundation
Tested by:	pho (i386, amd64), scottl (amd64), ian (arm and arm-v6)
MFC after:	2 weeks
2013-03-14 20:18:12 +00:00
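
The interface described above, as a sketch:

    /* Copy xfersize bytes starting at a_offset within the pages ma[]
     * to b_offset within the pages mb[]; offsets may be unaligned and
     * the copy may cross page boundaries within each array. */
    void pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset,
        vm_page_t mb[], vm_offset_t b_offset, int xfersize);
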
Attilio Rao
89f6b8632c Switch the vm_object mutex to be a rwlock. This will enable in the
future further optimizations where the vm_object lock will be held
in read mode most of the time the page cache resident pool of pages
are accessed for reading purposes.

The change is mostly mechanical but a few notes are reported:
* The KPI changes as follow:
  - VM_OBJECT_LOCK() -> VM_OBJECT_WLOCK()
  - VM_OBJECT_TRYLOCK() -> VM_OBJECT_TRYWLOCK()
  - VM_OBJECT_UNLOCK() -> VM_OBJECT_WUNLOCK()
  - VM_OBJECT_LOCK_ASSERT(MA_OWNED) -> VM_OBJECT_ASSERT_WLOCKED()
    (in order to avoid visibility of implementation details)
  - The read-mode operations are added:
    VM_OBJECT_RLOCK(), VM_OBJECT_TRYRLOCK(), VM_OBJECT_RUNLOCK(),
    VM_OBJECT_ASSERT_RLOCKED(), VM_OBJECT_ASSERT_LOCKED()
* The vm/vm_pager.h namespace pollution avoidance (forcing requiring
  sys/mutex.h in consumers directly to cater its inlining functions
  using VM_OBJECT_LOCK()) imposes that all the vm/vm_pager.h
  consumers now must include also sys/rwlock.h.
* zfs requires a quite convoluted fix to include FreeBSD rwlocks into
  the compat layer because the name clash between FreeBSD and solaris
  versions must be avoided.
  At this purpose zfs redefines the vm_object locking functions
  directly, isolating the FreeBSD components in specific compat stubs.

The KPI ends up heavily broken by this commit.  Third-party ports must
be updated accordingly (I can think off-hand of VirtualBox, for example).

Sponsored by:	EMC / Isilon storage division
Reviewed by:	jeff
Reviewed by:	pjd (ZFS specific review)
Discussed with:	alc
Tested by:	pho
2013-03-09 02:32:23 +00:00
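
The consumer-side change is mechanical, e.g.:

    /* Before: the object lock was a mutex. */
    VM_OBJECT_LOCK(object);
    m = vm_page_lookup(object, pindex);
    VM_OBJECT_UNLOCK(object);

    /* After: a rwlock; writers use the W variants, and read-mostly
     * paths can move to VM_OBJECT_RLOCK()/VM_OBJECT_RUNLOCK(). */
    VM_OBJECT_WLOCK(object);
    m = vm_page_lookup(object, pindex);
    VM_OBJECT_WUNLOCK(object);
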
Alexander Motin
fdc5dd2d2f MFcalloutng:
Switch eventtimers(9) from using struct bintime to sbintime_t.
Even before this, not a single driver really supported the full dynamic range of
struct bintime even in theory, to say nothing of its practical inexpediency.
This change legitimizes the status quo and cleans up the code.
2013-02-28 13:46:03 +00:00
Attilio Rao
dc1558d1cd Merge from vmobj-rwlock:
VM_OBJECT_LOCKED() macro is only used to implement a custom version
of lock assertions right now (which likely spread out thanks to
copy and paste).
Remove it and implement actual assertions.

Sponsored by:	EMC / Isilon storage division
Reviewed by:	alc
Tested by:	pho
2013-02-27 18:12:13 +00:00
Attilio Rao
590f9303e5 Merge from vmobj-rwlock branch:
Remove unused inclusion of vm/vm_pager.h and vm/vnode_pager.h.

Sponsored by:	EMC / Isilon storage division
Tested by:	pho
Reviewed by:	alc
2013-02-26 01:00:11 +00:00
Adrian Chadd
aef8ef5168 Setup BAT0 and BAT1 on the Wii.
This is the missing piece for FreeBSD/Wii, but there's still a lot of
work ahead. We have to reset the MMU in locore before continuing
the boot process because we don't know how the boot loaders might
have setup the BATs. We also disable the PCI BAT because there's no PCI
bus on the Wii.

Thanks to Nathan Whitehorn and Peter Grenhan for their help.

Submitted by:	Margarida Gouveia
2012-11-21 08:04:21 +00:00
Konstantin Belousov
b32ecf44bc Flip the semantic of M_NOWAIT to only require the allocation to not
sleep, and perform the page allocations with VM_ALLOC_SYSTEM
class. Previously, the allocation was also allowed to completely drain
the reserve of the free pages, being translated to VM_ALLOC_INTERRUPT
request class for vm_page_alloc() and similar functions.

Allow the caller of malloc* to request the 'deep drain' semantic by
providing M_USE_RESERVE flag, now translated to VM_ALLOC_INTERRUPT
class. Previously, it resulted in less aggressive VM_ALLOC_SYSTEM
allocation class.

Centralize the translation of the M_* malloc(9) flags in the single
inline function malloc2vm_flags().

Discussion started by:	"Sears, Steven" <Steven.Sears@netapp.com>
Reviewed by:	alc, mdf (previous version)
Tested by:	pho (previous version)
MFC after:	2 weeks
2012-11-14 20:01:40 +00:00
Justin Hibbits
c757049235 Implement DTrace for PowerPC. This includes both 32-bit and 64-bit.
There is one known issue:  Some probes will display an error message along the
lines of:  "Invalid address (0)"

I tested this with both a simple dtrace probe and dtruss on a few different
binaries on 32-bit. I only compile-tested 64-bit and did not run it, but I
don't expect problems without the modules loaded. Volunteers are welcome.

MFC after:	1 month
2012-11-07 23:45:09 +00:00
Attilio Rao
cfedf924d3 Rework the known rwlocks to benefit from staying on their own
cache line, using struct rwlock_padalign instead of manual frobbing.
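
A sketch of the declaration change (lock name illustrative):

    /* Before: manual padding to keep the lock on its own cache line. */
    struct {
            struct rwlock   lock;
            char            pad[CACHE_LINE_SIZE - sizeof(struct rwlock)];
    } hot_lock;

    /* After: the padded type carries the alignment itself. */
    struct rwlock_padalign hot_lock;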

Reviewed by:	alc, jimharris
2012-11-03 23:03:14 +00:00
Alan Cox
e4b8a2fc5a Eliminate a stale comment. It describes another use case for the pmap in
Mach that doesn't exist in FreeBSD.
2012-09-28 05:30:59 +00:00
Attilio Rao
324e57150d userret() already checks for td_locks when INVARIANTS is enabled, so
there is no need to check whether Giant is still held after it.

Reviewed by:	kib
MFC after:	1 week
2012-09-08 18:27:11 +00:00
Rui Paulo
800bd92190 Unbreak tinderbox. 2012-08-25 17:15:33 +00:00
Rui Paulo
794bad6548 Set mdp only under #ifdef WII. 2012-08-25 00:47:55 +00:00
Adrian Chadd
2467c62fc6 On Nintendo Wii CPUs, the mdp value will be garbage. Set it to NULL
so as to not confuse things.

Submitted by:	Margarida Gouveia
2012-08-21 06:34:21 +00:00
Alan Cox
8d9e6d9f93 Avoid recursion on the pvh global lock in the AIM OEA pmap.
Correct the return type of the pmap_ts_referenced() implementations.

Reported by:	jhibbits [1]
Tested by:	andreast
2012-07-10 22:10:21 +00:00
Alan Cox
3653f5cbcb Replace all uses of the vm page queues lock by a r/w lock that is private
to this pmap.

Tested by:	andreast, jhibbits
2012-07-06 02:18:49 +00:00
Rui Paulo
0381f4905f The `end' symbol doesn't match the end of the kernel image because it's
relative to the start address (unless the start address is 0, which is
not the case).
This is currently not a problem because all powerpc architectures are
using loader(8), which passes metadata to the kernel including the
correct `endkernel' address. If we don't use loader(8), registers 4
and 5 will hold the size of the kernel ELF file, not its end address.
We fix that simply by adding `kernel_text' to `end' to compute
`endkernel'.
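
A rough C rendering of the fix (the actual change lives in the early
startup path, and the symbol handling is simplified):

    extern uint8_t kernel_text[], end[];

    /* Without loader(8) metadata, `end' is start-relative; rebase it. */
    endkernel = (vm_offset_t)kernel_text + (vm_offset_t)end;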

Discussed with:	nathanw
2012-06-29 01:55:20 +00:00
Rafal Jaworowski
d7c8c7fdfb Fix physical address type to vm_paddr_t also for powerpc64. 2012-05-25 18:17:26 +00:00
Rafal Jaworowski
20b7961267 Fix physical address type to vm_paddr_t. 2012-05-24 21:13:24 +00:00
Nathan Whitehorn
ccc4a5c761 Replace the list of PVOs owned by each PMAP with an RB tree. This simplifies
range operations like pmap_remove() and pmap_protect() as well as allowing
simple operations like pmap_extract() not to involve any global state.
This substantially reduces the coverage of the global table lock and
improves concurrency.
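
A sketch with the tree(3) macros (field and function names here are
assumptions, not the committed ones):

    RB_HEAD(pvo_tree, pvo_entry);

    struct pvo_entry {
            RB_ENTRY(pvo_entry) pvo_plink;  /* per-pmap tree linkage */
            vm_offset_t     pvo_vaddr;
    };

    static int
    pvo_vaddr_cmp(struct pvo_entry *a, struct pvo_entry *b)
    {
            return ((a->pvo_vaddr > b->pvo_vaddr) -
                (a->pvo_vaddr < b->pvo_vaddr));
    }
    RB_GENERATE(pvo_tree, pvo_entry, pvo_plink, pvo_vaddr_cmp);

Ranged operations can then seek with RB_NFIND() to the first entry at or
above the start address and walk forward with RB_NEXT(), touching no
global state.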
2012-05-20 14:33:28 +00:00
Nathan Whitehorn
bc96dccc69 Fix final bugs in memory barriers on PowerPC:
- Use isync/lwsync unconditionally for acquire/release. Use of isync
  guarantees a complete memory barrier, which is important for serialization
  of bus space accesses with mutexes on multi-processor systems.
- Go back to using sync as the I/O memory barrier, which solves the same
  problem as above with respect to mutex release using lwsync, while not
  penalizing non-I/O operations like a return to sync on the atomic release
  operations would.
- Place an acquisition barrier around thread lock acquisition in
  cpu_switchin().
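
Sketched as inline-asm barriers (macro names are illustrative, not
necessarily those used in atomic.h):

    /* Acquire: isync after the lock-word load keeps later accesses
       from being performed early. */
    #define __ATOMIC_ACQ()  __asm __volatile("isync" : : : "memory")
    /* Release: lwsync orders prior loads/stores before the unlock. */
    #define __ATOMIC_REL()  __asm __volatile("lwsync" : : : "memory")
    /* I/O barrier: full sync, also covering non-cacheable accesses. */
    #define powerpc_sync()  __asm __volatile("sync" : : : "memory")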
2012-05-04 16:00:22 +00:00
Nathan Whitehorn
284ea61312 Fix build on 32-bit systems. 2012-04-28 14:42:49 +00:00
Nathan Whitehorn
50e13823c8 After switching mutexes to use lwsync, they no longer provide sufficient
guarantees on acquire for the tlbie mutex. Conversely, the TLB invalidation
sequence provides guarantees that do not need to be redundantly applied on
release. Roll a small custom lock that is just right. Simultaneously,
convert the SLB tree changes back to lwsync, as changing them to sync
was a misdiagnosis of the tlbie barrier problem this commit actually fixes.
2012-04-28 00:12:23 +00:00
Nathan Whitehorn
8387bb0c78 Revert r234581 for this file. The lockless SLB tree code does in fact need
a heavyweight sync instead of a lightweight sync to function properly.
Thanks to mdf for the clarification.
2012-04-24 13:36:41 +00:00
Nathan Whitehorn
6f26a88999 Use lwsync to provide memory barriers on systems that support it instead
of sync (lwsync is an alternate encoding of sync on systems that do not
support it, providing graceful fallback). This provides more than an order
of magnitude reduction in the time required to acquire or release a mutex.

MFC after:	2 months
2012-04-22 19:00:51 +00:00
Nathan Whitehorn
0b852c03eb Avoid a lock order reversal in pmap_extract_and_hold() from relocking
the page. This PMAP requires an additional lock besides the PMAP lock
in pmap_extract_and_hold(), which vm_page_pa_tryrelock() did not release.

Suggested by:	kib
MFC after:	4 days
2012-04-22 17:58:30 +00:00
Nathan Whitehorn
c13aac3896 Make sure all pending operations have completed on the existing thread
before (potentially) migrating it to a different CPU.

MFC after:	5 days
2012-04-20 23:01:36 +00:00
Nathan Whitehorn
e3c2930d36 We don't need kcopy() in any of the remaining places it is used, so
remove it.

MFC after:	2 weeks
2012-04-11 22:23:50 +00:00
Nathan Whitehorn
b6aeb1ab97 Only manipulate the PGA_EXECUTABLE flag on managed pages. This is a proxy
for whether the page is physical. On dense phys mem systems (32-bit),
VM_PHYS_TO_PAGE will not return NULL for device memory pages when device
memory lies above physical memory, even though no vm_page is allocated.
Attempting to use the returned page could then cause either memory
corruption or a page fault.
2012-04-11 21:56:55 +00:00
Nathan Whitehorn
805bee55eb Fix error in r233949. Synchronizing icaches on uncacheable pages turns out
not to be a good idea, and of course the PV entry list for a page is never
empty after the page has been mapped.
2012-04-11 20:28:05 +00:00
Nathan Whitehorn
b7d0d1fabf Execute an initial ptesync if and only if the PTE is actually being
invalidated, as opposed to a ref/changed bit update.
2012-04-06 22:33:13 +00:00
Nathan Whitehorn
348bc07000 Substantially reduce the scope of the locks held in pmap_enter(), which
improves concurrency slightly.
2012-04-06 18:18:48 +00:00
Nathan Whitehorn
57bd5cce62 Reduce the frequency that the PowerPC/AIM pmaps invalidate instruction
caches, by invalidating kernel icaches only when needed and not flushing
user caches for shared pages.

Suggested by:	kib
MFC after:	2 weeks
2012-04-06 16:03:38 +00:00
Nathan Whitehorn
7e55df27cb More PMAP performance improvements: skip 256 MB segments entirely if they
are not mapped during ranged operations, and reduce the scope of the
tlbie lock to just the actual tlbie instruction instead of the entire
sequence. There are a few more optimization possibilities here as well.
2012-03-28 17:25:29 +00:00
Nathan Whitehorn
a3e9e259b3 Make sure to call vm_page_dirty() before the pmap lock is released to
prevent a race where another process could conclude the page was clean.

Submitted by:	alc
2012-03-27 01:26:00 +00:00
Nathan Whitehorn
5afcb4c91e More PMAP concurrency improvements: replace the table lock and (almost) all
uses of the page queues mutex with a new rwlock that protects the page
table and the PV lists. This reduces system time during a parallel
buildworld by 35%.

Reviewed by:	alc
2012-03-27 01:24:18 +00:00
Nathan Whitehorn
e71dfa7b84 More PMAP performance improvements: on powerpc64, when TLBIE can be run
with exceptions enabled, leave them enabled and use a regular mutex to
guard TLB invalidations instead of a spinlock.
2012-03-25 06:01:34 +00:00
Nathan Whitehorn
d456d3e31f Only call vm_page_dirty() on pages that are writable in order not to
confuse the VM.
2012-03-24 22:32:19 +00:00
Nathan Whitehorn
8e7c7ea2ea Following suggestions from alc, skip wired mappings in pmap_remove_pages()
and remove moea64_attr_*() in favor of direct calls to vm_page_dirty()
and friends.
2012-03-24 19:59:14 +00:00
Nathan Whitehorn
07b638a98e Remove acquisition of VM page queues lock from pmap_protect(). Any actual
manipulation of the pvo_vlink and pvo_olink entries is already protected
by the table lock, so most remaining instances of the acquisition of the
page queues lock can likely be replaced with the table lock, or removed
if the table lock is already held.

Reviewed by:	alc
2012-03-18 13:22:42 +00:00
Nathan Whitehorn
cd907a68aa Implement pmap_remove_pages(). This will be added later to the 32-bit MMU
module.

Suggested by:	alc
2012-03-15 22:50:48 +00:00
Nathan Whitehorn
246e44956e Improve the algorithm for deciding whether to loop through all
process pages or look them up individually in pmap_remove(), and apply the
same logic in the other ranged operation (pmap_protect()). This speeds up
make installworld by a factor of 2 on powerpc64.
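
The decision is a simple cost comparison; a sketch with assumed names:

    /*
     * If the pmap holds fewer resident pages than the number of
     * lookups the range would need, walk the pmap's PVO list and
     * filter by address; otherwise look up each page directly.
     */
    if (pm->pm_stats.resident_count < atop(eva - sva)) {
            /* iterate PVOs, skipping those outside [sva, eva) */
    } else {
            /* look up the PVO for each page in [sva, eva) */
    }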

MFC after:	1 week
2012-03-15 19:36:52 +00:00
Nathan Whitehorn
cbfa304088 Use LIST_FOREACH_SAFE() instead of LIST_FOREACH() in pmap_remove(), since
the point of this loop is to remove elements. This worked by accident before.
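
For reference, the _SAFE variant keeps a lookahead pointer so the current
element may be unlinked (removal helper name illustrative):

    struct pvo_entry *pvo, *tpvo;

    LIST_FOREACH_SAFE(pvo, &pvo_head, pvo_vlink, tpvo) {
            /* tpvo already holds the next element, so this is safe. */
            moea_pvo_remove(pvo, -1);
    }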

MFC after:	2 days
2012-03-14 20:19:49 +00:00
Andreas Tobler
179e996c9f Revert the _NOPROF entries on cpu_throw, cpu_switch and savectx. They can be
profiled too now.

MFC after:	2 weeks
2012-02-05 15:59:18 +00:00
Konstantin Belousov
75ce221fa1 Fix build for the case of powerpc64 kernel without COMPAT_FREEBSD32.
MFC after:	2 months
2012-01-30 19:31:17 +00:00
Konstantin Belousov
62c625fdd2 Finally, try to enable nxstacks on amd64 and powerpc64 for both
the 64-bit and 32-bit ABIs. Also try to enable nxstacks for PAE/i386 when
supported, and on some variants of powerpc32.

MFC after:	2 months (if ever)
2012-01-30 07:56:00 +00:00
Andreas Tobler
9eab2f146a This commit adds profiling support for powerpc64. Now we can do application
profiling and kernel profiling. To enable kernel profiling one has to build
kgmon(8). I will enable the build once I have managed to build and test powerpc
(32-bit) kernels with profiling support.

- add a powerpc64 PROF_PROLOGUE for _mcount.
- add macros to avoid adding the PROF_PROLOGUE in certain assembly entries.
- apply these macros where needed.
- add size information to the MCOUNT function.

MFC after:	3 weeks, together with r230291
2012-01-20 22:34:19 +00:00
Nathan Whitehorn
ae09ab8f63 Rework SLB trap handling so that double faults into an SLB trap
handler are possible, and double faults within an SLB trap handler are not.
The result is that it is possible to take an SLB fault at any time, on any
address, for any reason, at any point in the kernel.

This lets us do two important things. First, it removes the (soft) 16 GB RAM
ceiling on PPC64 as well as any architectural limitations on KVA space.
Second, it lets the kernel tolerate poorly designed hypervisors that
have a tendency to fail to restore the SLB properly after a hypervisor
context switch.

MFC after:	6 weeks
2012-01-15 00:08:14 +00:00
Justin Hibbits
7b25dcca76 Implement hwpmc counting PMC support for PowerPC G4+ (MPC745x/MPC744x).
Sampling support is in progress.

Approved by:	nwhitehorn (mentor)
MFC after:	9.0-RELEASE
2011-12-24 19:34:52 +00:00
Nathan Whitehorn
e347e23bfe Allow this to work on embedded systems without Open Firmware by making
the lack of a /chosen node non-fatal, and manually removing memory in use by the
kernel from the physical memory map.

Submitted by:	rpaulo
2011-12-16 23:46:05 +00:00
Nathan Whitehorn
b059c637fb Zero BSS on start, in case the ELF loader that started the kernel did not
do this for us. This can happen on some embedded systems.

Submitted by:	rpaulo
2011-12-16 23:40:56 +00:00
Alan Cox
3b03ca3bbe Eliminate vestiges of page coloring. 2011-12-15 05:07:16 +00:00
Nathan Whitehorn
598d99ddee Keep track of PVO entries in each pmap, which allows much faster
pmap_remove() for large sparse requests. This can prevent pmap_remove()
operations on 64-bit process destruction or swapout that would take
several hundred times the lifetime of the universe to complete. This
behavior is largely indistinguishable from a hang.
2011-12-11 17:19:48 +00:00
Marius Strobl
4b7ec27007 - There's no need to override the default device methods with the
  defaults themselves. Interestingly, these have actually been the defaults
  for quite some time (bus_generic_driver_added(9) since r52045 and
  bus_generic_print_child(9) since r52045), yet even recently added device
  drivers do this unnecessarily.
  Discussed with: jhb, marcel
- While at it, use DEVMETHOD_END.
  Discussed with: jhb
- Also while at it, use __FBSDID.
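
For reference, DEVMETHOD_END terminates a method table in place of a bare
{ 0, 0 } sentinel (driver names hypothetical):

    static device_method_t foo_methods[] = {
            DEVMETHOD(device_probe,         foo_probe),
            DEVMETHOD(device_attach,        foo_attach),
            /* bus_generic_* defaults are inherited, not listed. */
            DEVMETHOD_END
    };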
2011-11-22 21:28:20 +00:00
Nathan Whitehorn
a897298940 Use a global __pure2 function instead of a global register variable
for curthread, like on x86 and sparc64. This makes the kernel somewhat
friendlier to clang, which does not support global register variables.
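
A sketch of the shape of the replacement, assuming r13 as the dedicated
thread register (the actual register differs between the ABIs):

    static __inline __pure2 struct thread *
    __curthread(void)
    {
            struct thread *td;

            /* __pure2 lets the compiler fold repeated reads. */
            __asm("mr %0,13" : "=r"(td));
            return (td);
    }
    #define curthread       (__curthread())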
2011-11-17 15:49:42 +00:00
Nathan Whitehorn
46e93cbbc5 Add an extra invariant here which was useful on 64-bit CPUs. 2011-11-17 15:48:12 +00:00
Alan Cox
fbd80bd047 Refactor the code that performs physically contiguous memory allocation,
yielding a new public interface, vm_page_alloc_contig().  This new function
addresses some of the limitations of the current interfaces, contigmalloc()
and kmem_alloc_contig().  For example, the physically contiguous memory that
is allocated with those interfaces can only be allocated to the kernel vm
object and must be mapped into the kernel virtual address space.  It also
provides functionality that vm_phys_alloc_contig() doesn't, such as wiring
the returned pages.  Moreover, unlike that function, it respects the low
water marks on the paging queues and wakes up the page daemon when
necessary.  That said, at present, this new function can't be applied to all
types of vm objects.  However, that restriction will be eliminated in the
coming weeks.

From a design standpoint, this change also addresses an inconsistency
between vm_phys_alloc_contig() and the other vm_phys_alloc*() functions.
Specifically, vm_phys_alloc_contig() manipulated vm_page fields that other
functions in vm/vm_phys.c didn't.  Moreover, vm_phys_alloc_contig() knew
about vnodes and reservations.  Now, vm_page_alloc_contig() is responsible
for these things.
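
An illustrative call (see vm_page.h for the authoritative prototype):

    m = vm_page_alloc_contig(object, pindex,
        VM_ALLOC_NORMAL | VM_ALLOC_WIRED, npages,
        low, high, alignment, boundary, VM_MEMATTR_DEFAULT);
    if (m == NULL) {
            /* low memory: wait for the page daemon or fail */
    }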

Reviewed by:	kib
Discussed with:	jhb
2011-11-16 16:46:09 +00:00