824 Commits

Author SHA1 Message Date
jhibbits
d106d865ac Use a loop of dcbz instead of calling bzero() to zero a page. This matches
what is done in mmu_oea64.c.

MFC after:	1 month
2014-01-29 05:58:08 +00:00
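A minimal sketch of the dcbz approach described above, in kernel C (the helper name and the 32-byte cache-line constant are assumptions for illustration; the real code in mmu_oea64.c differs):

    #include <sys/param.h>          /* PAGE_SIZE, vm_offset_t */

    #define CACHELINESIZE   32      /* assumed line size for this sketch */

    /*
     * Zero one page with dcbz; each dcbz clears a full cache line without
     * reading it from memory, avoiding a bzero() call per page.
     */
    static void
    zero_page_dcbz(vm_offset_t va)
    {
            vm_offset_t off;

            for (off = 0; off < PAGE_SIZE; off += CACHELINESIZE)
                    __asm __volatile("dcbz 0,%0" : : "r"(va + off) : "memory");
    }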
jhibbits
72b0bd154f Save r3 before using it for the trap check, else we end up saving the new r3,
containing the trap instruction encoding (0x7c810808), and restoring it
with the frame on return.  This caused a panic on my ppc32 machine, but
my ppc64 machine somehow got away with it, probably because I was using such
a simple dtrace probe.

X-MFC-with:	r259245
MFC after:	2 weeks
2013-12-15 18:07:25 +00:00
jhibbits
7a0c976f7e Add PMU-based CPU frequency scaling. This method is used on most Titanium
PowerBooks.

MFC after:	1 month
2013-12-13 02:37:35 +00:00
jhibbits
2ff1272b66 FBT now works fully on PowerPC.
MFC after:	2 weeks
2013-12-12 04:12:19 +00:00
nwhitehorn
95789e5736 The kernel stack guard pages are only below the stack pointer, not above.
Prevent erroneous detection of stack overflows when legitimate faults occur on
the page just past this thread's stack.

MFC after:	3 days
2013-12-01 17:28:28 +00:00
nwhitehorn
b777ab82ba badaddr() is used only in the grackle PCI driver, so move its definition
there. Clean up a spurious setfault() declaration as well.
2013-11-27 22:01:09 +00:00
attilio
7ee4e910ce - For kernels compiled only with KDTRACE_HOOKS and without any lock debugging
  option, unbreak the lock tracing release semantics by embedding
  calls to LOCKSTAT_PROFILE_RELEASE_LOCK() directly in the inlined
  version of the releasing functions for mutex, rwlock and sxlock.
  Failing to do so skips the lockstat_probe_func invocation on
  unlock.
- Because part of the LOCKSTAT support is inlined into the mutex operations,
  on kernels compiled without lock debugging options potentially every
  consumer must be compiled including opt_kdtrace.h.
  Fix this by moving KDTRACE_HOOKS into opt_global.h and removing the
  dependency on opt_kdtrace.h for all files; now only KDTRACE_FRAMES
  remains there, and it is only used as a compile-time stub [0].

[0] immediately exposed a new bug: the DTrace-derived debug support
in sfxge is broken and was never really tested.  Since it did not
include opt_kdtrace.h correctly before, it was never enabled and so
stayed broken for a while.  Fix this by using a protection stub,
leaving sfxge driver authors the responsibility of fixing it
appropriately [1].

Sponsored by:	EMC / Isilon storage division
Discussed with:	rstone
[0] Reported by:	rstone
[1] Discussed with:	philip
2013-11-25 07:38:45 +00:00
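A heavily simplified sketch of the inline-release pattern this change restores (illustrative only; the real macro in sys/sys/mutex.h and the probe identifier differ in detail, and _mtx_unlock_sleep_sketch() stands in for the contested hard path):

    /* Illustrative, not the actual sys/sys/mutex.h code. */
    #define __mtx_unlock_sketch(mp, tid) do {                               \
            if (atomic_cmpset_rel_ptr(&(mp)->mtx_lock, (tid), MTX_UNOWNED)) \
                    /* uncontested: hard path is skipped, so probe here */  \
                    LOCKSTAT_PROFILE_RELEASE_LOCK(LS_MTX_UNLOCK_RELEASE, mp); \
            else                                                            \
                    _mtx_unlock_sleep_sketch(mp);  /* contested: hard path */ \
    } while (0)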
andreast
c2f1851691 Save and restore the trap vectors when doing OF calls on pSeries machines.
It turned out that on pSeries machines the call into OF modified the trap
vectors and this made further behaviour unpredictable.

With this commit I'm now able to boot multi user on a network booted
environment on my IntelliStation 285. This is a POWER5+ machine.

Discussed with:		nwhitehorn
MFC after:	1 week
2013-11-23 18:58:17 +00:00
jhibbits
bb668ab52a Remove stale comment. The PID provider is handled elsewhere already. 2013-11-21 06:54:28 +00:00
nwhitehorn
4db5557688 Do not assume a value for #address-cells when parsing the OF translations
map. This allows the kernel to get farther with OpenBIOS on 64-bit CPUs.
2013-11-17 18:03:03 +00:00
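A hedged sketch of the lookup this implies (helper name and fallback are assumptions; the real translation parser differs):

    #include <sys/param.h>
    #include <dev/ofw/openfirm.h>

    /* How many cells encode an address in the translations under 'node'. */
    static int
    of_address_cells(phandle_t node)
    {
            pcell_t cells;

            if (OF_getprop(node, "#address-cells", &cells, sizeof(cells)) <
                (ssize_t)sizeof(cells))
                    cells = 1;      /* conservative fallback for this sketch */
            return ((int)cells);
    }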
nwhitehorn
f754a8eebe Unify handling of illegal instruction faults between AIM and Book-E. This
allows FPU emulation on AIM as well as providing support for the mfpvr
and lwsync instructions from userland on e500 cores. lwsync, in particular,
is required for many C++ programs to work correctly.

MFC after:	1 week
2013-11-17 15:12:03 +00:00
jhibbits
874c517764 Fix copy+paste-o, OEA64 uses LPTE, not PTE.
X-MFC with:	r257941
2013-11-14 07:41:52 +00:00
nwhitehorn
b136bd67d8 Use the same implementation of copyinout.c for both AIM and Book-E. This
fixes some bugs in both implementations related to validity checks on
mapping bounds.
2013-11-11 23:37:16 +00:00
nwhitehorn
a1b4155638 Follow up on r223485, which made AIM use the ABI thread pointer instead of
PCPU fields for curthread, by doing the same for Book-E. This closes
some potential races when switching between CPUs. As a side effect, it turns
out the AIM and Book-E swtch.S implementations were identical to within a few
registers, so move the common version to powerpc/powerpc.

MFC after: 3 months
2013-11-11 17:37:50 +00:00
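For context, the ABI-thread-pointer approach amounts to keeping curthread in a fixed general-purpose register instead of dereferencing PCPU on every access; a hedged sketch (function name illustrative, r13 per the 64-bit ELF ABI, r2 on 32-bit):

    #include <sys/param.h>
    #include <sys/proc.h>

    /* Illustrative sketch, not the actual machine/pcpu.h definition. */
    static __inline struct thread *
    curthread_sketch(void)
    {
            struct thread *td;

            __asm __volatile("mr %0,13" : "=r"(td));    /* r13 holds curthread */
            return (td);
    }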
jhibbits
95e5132b64 Add the necessary bits for dumps on ppc64.
MFC after:	2 weeks
2013-11-11 03:17:38 +00:00
markj
a5fb1fbfd8 Remove references to an unused fasttrap probe hook, and remove the
corresponding x86 trap type. Userland DTrace probes are currently handled
by the other fasttrap hooks (dtrace_pid_probe_ptr and
dtrace_return_probe_ptr).

Discussed with:	rpaulo
2013-10-31 02:35:00 +00:00
nwhitehorn
00c1fadd81 Add some extra sanity checking and checks to printf format specifiers. 2013-10-26 18:19:36 +00:00
nwhitehorn
26c2b7ea5e Interrelated improvements to early boot mappings:
- Remove explicit requirement that the SOC registers be found except as an
  optimization (although the MPC85XX LAW drivers still require they be found
  externally, which should change).
- Remove magic CCSRBAR_VA value.
- Allow bus_machdep.c's early-boot code to handle non-1:1 mappings and
  systems that are not running in real mode or under a global 1:1 map
  during early boot.
- Allow pmap_mapdev() on Book-E to reissue previous addresses if the
  area is already mapped. Additionally have it check all mappings, not
  just the CCSR area.

This allows the console on e500 systems to actually work on systems where
the boot loader was not kind enough to set up a 1:1 mapping before starting
the kernel.
2013-10-26 18:18:14 +00:00
nwhitehorn
285be8a709 Clean up missed header references. 2013-10-26 17:54:31 +00:00
nwhitehorn
d0ef44bd3a Since the PS3 port was committed, the AIM nexus device works perfectly fine
on all PowerPC platforms, whether or not they have Open Firmware. Remove
some more duplication and have there be only one nexus driver.
2013-10-20 18:40:55 +00:00
nwhitehorn
abd243270e Replace the two almost-exactly-identical AIM and Book-E clock.c
implementations with a single one after the application of a very small
amount of #ifdef.
2013-10-20 16:37:03 +00:00
nwhitehorn
41c4967a72 Unify the AIM and Book-E vm_machdep.c implementations, which previously
differed only in that the AIM version did not follow style(9) and had some
additional features for 64-bit systems and machines with direct maps that
are no-ops on Book-E (at least for now).
2013-10-20 16:14:03 +00:00
jhibbits
99ca70f361 Fix the Wii build, and remove an extraneous critical_enter(). 2013-10-16 04:11:42 +00:00
jhibbits
0b9629ab6a Add fasttrap for PowerPC. This is the last piece of the dtrace/ppc puzzle.
It is incomplete, in that it does not contain full instruction emulation, but
it should be sufficient for most cases.

MFC after:	1 month
2013-10-15 15:00:29 +00:00
jhibbits
3b8af4143e Move the PMC handling to the first level interrupt handler where it belongs.
Also add the pmc_hook call to handle callchain tracing.

MFC after:	1 week
2013-10-15 14:52:44 +00:00
glebius
12fb397079 - Create the kern.ipc.sendfile namespace, and put the new "readahead" OID
there as "kern.ipc.sendfile.readahead".
- Push all nsfbuf-related tunables into MD code. Don't move them
  to the new namespace, in favor of POLA.

Reviewed by:	scottl
Approved by:	re (gjb)
2013-09-22 13:36:52 +00:00
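A hedged sketch of how the new namespace is declared (the variable, default and description are illustrative; only the OID names come from the commit message):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static int sfreadahead = 1;             /* illustrative default */

    SYSCTL_NODE(_kern_ipc, OID_AUTO, sendfile, CTLFLAG_RW, 0,
        "sendfile(2) tunables");
    SYSCTL_INT(_kern_ipc_sendfile, OID_AUTO, readahead, CTLFLAG_RW,
        &sfreadahead, 0, "Number of read-ahead blocks for sendfile(2)");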
alc
88a4d0f31a The pmap function pmap_clear_reference() is no longer used. Remove it.
pmap_clear_reference() has had exactly one caller in the kernel for
several years, more precisely, since FreeBSD 8.  Now, that call no
longer exists.

Approved by:	re (kib)
Sponsored by:	EMC / Isilon Storage Division
2013-09-20 04:30:18 +00:00
nwhitehorn
a19fbd832b Change VM object lock assertion to match locking higher in the call
chain. This repairs a panic observed during pageout on some 64-bit
PowerPC systems.

Submitted by:	grehan
Approved by:	re (kib)
MFC after:	2 weeks
Revisit after:	10.0
2013-09-13 01:12:45 +00:00
nwhitehorn
b2fa807b0e Raise artificial limits on number of CPUs and number of interrupts.
Approved by:	re (kib)
2013-09-09 12:52:34 +00:00
nwhitehorn
d54687609c Add POWER CPUs to the kernel's knowledge. This does not imply we currently
actually run on any machines with POWER CPUs, but it avoids closing that door
unnecessarily.

Approved by:	re (kib)
2013-09-09 12:51:24 +00:00
nwhitehorn
4a4705a68f Align stacks of kernel threads correctly at 16-byte boundaries rather than
making sure they are all misaligned at +8 bytes. This fixes clang builds
of powerpc64 kernels (aside from a required increase in KSTACK_PAGES which
will come later).

This commit was made from FreeBSD/powerpc64 running a clang-built kernel.

MFC after:	2 weeks
2013-09-05 23:00:24 +00:00
jhibbits
67d6bf7992 Enable PMC interrupt handling, and fix a DTrace trap handling bug. 2013-09-03 00:42:15 +00:00
kib
05a9dff802 Revert r254501. Instead, reuse the type stability of the struct pmap
which is the part of struct vmspace, allocated from UMA_ZONE_NOFREE
zone.  Initialize the pmap lock in the vmspace zone init function, and
remove pmap lock initialization and destruction from pmap_pinit() and
pmap_release().

Suggested and reviewed by:	alc (previous version)
Tested by:	pho
Sponsored by:	The FreeBSD Foundation
2013-08-22 18:12:24 +00:00
attilio
16c7563cf4 The soft and hard busy mechanisms rely on the vm object lock to work.
Unify the two concepts into a real, minimal sxlock where shared
acquisition represents the soft busy and exclusive acquisition
represents the hard busy.
The old VPO_WANTED mechanism becomes the hard path for this new lock
and it becomes per-page rather than per-object.
The vm_object lock becomes an interlock for this functionality:
it can be held in either read or write mode.
However, if the vm_object lock is held in read mode while acquiring
or releasing the busy state, the thread owner cannot make any
assumption about the busy state unless it is also busying the page.

Also:
- Add a new flag to directly share busy pages while vm_page_alloc
  and vm_page_grab are being executed.  This will be very helpful
  once these functions happen under a read object lock.
- Move the swapping sleep into its own per-object flag.

The KPI is heavily changed, which is why the version is bumped.
It is very likely that some VM ports and users will need to change
their own code.

Sponsored by:	EMC / Isilon storage division
Discussed with:	alc
Reviewed by:	jeff, kib
Tested by:	gavin, bapt (older version)
Tested by:	pho, scottl
2013-08-09 11:11:11 +00:00
jeff
de4ecca213 Replace kernel virtual address space allocation with vmem. This provides
transparent layering and better fragmentation behavior.

 - Normalize functions that allocate memory to use kmem_*
 - Those that allocate address space are named kva_*
 - Those that operate on maps are named kmap_*
 - Implement recursive allocation handling for kmem_arena in vmem.

Reviewed by:	alc
Tested by:	pho
Sponsored by:	EMC / Isilon Storage Division
2013-08-07 06:21:20 +00:00
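A hedged usage sketch of the renamed KPI (the wrapper function is illustrative): kva_* hands back raw address space, while kmem_* returns memory that is already backed.

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_extern.h>

    /* Illustrative: reserve one page of KVA with no backing pages. */
    static void
    kva_window_example(void)
    {
            vm_offset_t kva;

            kva = kva_alloc(PAGE_SIZE);     /* address space only, no pages */
            if (kva == 0)
                    return;
            /* ... back it with pmap_qenter() or similar, use it, then: */
            kva_free(kva, PAGE_SIZE);
    }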
jhibbits
e280c38ab5 Remove an unnecessary panic. The PVO's PTE entry and the PTEG's PTE entry may
not match if the PVO's PTE is invalid.
2013-08-06 02:58:16 +00:00
jhibbits
271fe456c4 Evict pages from the PTEG when it's full and trying to insert a new PTE,
rather than panicking.

Reviewed by:	nwhitehorn
MFC after:	3 weeks
2013-08-06 01:01:15 +00:00
ae
6f8e41d6cb Introduce a new structure, sfstat, for collecting sendfile's statistics
and remove the corresponding fields from struct mbstat. Use PCPU counters
and the SFSTAT_INC() macro to update these statistics.

Discussed with:	glebius
2013-07-15 06:16:57 +00:00
nwhitehorn
bb3d14f834 Fix check: bitwise and has only one &.
MFC after:	1 week
2013-07-12 15:56:30 +00:00
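The one-character class of bug being fixed, in miniature (the flag name is illustrative):

    #define PTE_VALID_EXAMPLE       0x0001  /* illustrative flag */

    static int
    pte_is_valid(unsigned int flags)
    {
            /*
             * return (flags && PTE_VALID_EXAMPLE);  -- wrong: logical AND,
             * nonzero whenever any flag at all is set.
             */
            return ((flags & PTE_VALID_EXAMPLE) != 0);  /* tests just that bit */
    }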
attilio
fdf82ef9cf o Relax locking assertions for vm_page_find_least()
o Relax locking assertions for pmap_enter_object() and add them also
  to architectures that currently don't have any
o Introduce VM_OBJECT_LOCK_DOWNGRADE() which is basically a downgrade
  operation on the per-object rwlock
o Use all the mechanisms above to make vm_map_pmap_enter() work
  most of the time with only read locks.

Sponsored by:	EMC / Isilon storage division
Reviewed by:	alc
2013-05-21 20:38:19 +00:00
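The downgrade macro is, in all likelihood, a thin wrapper over the per-object rwlock; a hedged sketch (the field name is assumed):

    /* Illustrative; the authoritative definition is in vm/vm_object.h. */
    #define VM_OBJECT_LOCK_DOWNGRADE(object)        \
            rw_downgrade(&(object)->lock)

Typical use: take the write lock, finish the write-side work, then downgrade so readers can proceed while the caller keeps a consistent read view.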
alc
b55ba81a92 Relax the object locking assertion in pmap_enter_locked().
Reviewed by:	attilio
Sponsored by:	EMC / Isilon Storage Division
2013-05-17 18:59:00 +00:00
jhibbits
94794a6904 Remove a comment that shouldn't have gone in.
X-MFC-with:	r249864
2013-04-26 05:18:18 +00:00
jhibbits
58ae34c700 Introduce kernel coredumps to ppc32 AIM. Leeched from the booke code.
MFC after:	2 weeks
2013-04-25 00:39:43 +00:00
jhibbits
6de9f1bf18 Print out DSISR in a fatal DSI trap.
Sponsored by:
2013-04-05 04:53:43 +00:00
kib
7c26a038f9 Implement the concept of the unmapped VMIO buffers, i.e. buffers which
do not map the b_pages pages into buffer_map KVA.  The use of
unmapped buffers eliminates the need to perform TLB shootdowns for
mapping on buffer creation and reuse, greatly reducing the number
of shootdown IPIs on big-SMP machines and eliminating up to 25-30%
of the system time on I/O-intensive workloads.

An unmapped buffer must be explicitly requested by the consumer with
the GB_UNMAPPED flag.  For an unmapped buffer, no KVA reservation is
performed at all.  With the GB_KVAALLOC flag, the consumer may request
an unmapped buffer that does have a KVA reserve, so it can be mapped
manually without recursing into the buffer cache and blocking.

When a mapped buffer is requested and an unmapped buffer already
exists, the cache performs an upgrade, possibly reusing the KVA
reservation.

An unmapped buffer is translated into an unmapped bio in g_vfs_strategy().
Unmapped bios carry a pointer to the vm_page_t array, an offset and a length
instead of the data pointer.  The provider which processes the bio
should explicitly declare its readiness to accept unmapped bios;
otherwise the g_down geom thread performs a transient upgrade of the bio
request by mapping the pages into the new bio_transient_map KVA
submap.

The bio_transient_map submap claims up to 10% of the buffer map, and
the total buffer_map + bio_transient_map KVA usage stays the
same.  Still, it can be tuned manually with the kern.bio_transient_maxcnt
tunable, in units of transient mappings.  Eventually, the
bio_transient_map could be removed once all geom classes and drivers
can accept unmapped I/O requests.

Unmapped support can be turned off with the vfs.unmapped_buf_allowed
tunable; disabling it makes buffer (or cluster) creation
requests ignore the GB_UNMAPPED and GB_KVAALLOC flags.  Unmapped
buffers are only enabled by default on the architectures where
pmap_copy_page() has been implemented and tested.

In the rework, filesystem metadata is no longer subject to the maxbufspace
limit.  Since metadata buffers are always mapped, they
still have to fit into the buffer map, which provides a
reasonable (but practically unreachable) upper bound on them.  The
non-metadata buffer allocations, both mapped and unmapped, are
accounted against maxbufspace, as before.  Effectively, this means that
maxbufspace is enforced on mapped and unmapped buffers separately.
The pre-patch bufspace-limiting code did not work, because
buffer_map fragmentation does not allow the limit to be reached.

At Jeff Roberson's request, the getnewbuf() function was split into
smaller single-purpose functions.

Sponsored by:	The FreeBSD Foundation
Discussed with:	jeff (previous version)
Tested by:	pho, scottl (previous version), jhb, bf
MFC after:	2 weeks
2013-03-19 14:13:12 +00:00
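A hedged consumer-side sketch of the GB_UNMAPPED opt-in described above (block parameters and error handling elided; the flag and buf fields are era-appropriate but should be checked against the tree):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/buf.h>

    /* Illustrative: ask for a buffer without a KVA mapping. */
    static struct buf *
    get_unmapped_buf(struct vnode *vp, daddr_t lblkno, int size)
    {
            struct buf *bp;

            bp = getblk(vp, lblkno, size, 0, 0, GB_UNMAPPED);
            if (bp != NULL && (bp->b_flags & B_UNMAPPED) != 0) {
                    /* Operate on bp->b_pages[]; b_data is not mapped. */
            }
            return (bp);
    }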
jhibbits
7b62f31cdf Add FBT for PowerPC DTrace. Also, clean up the DTrace assembly code,
much of which is not necessary for PowerPC.

The FBT module can likely be factored into 3 separate files: common,
intel, and powerpc, rather than duplicating most of the code between
the x86 and PowerPC flavors.

All DTrace modules for PowerPC will be MFC'd together once Fasttrap is
completed.
2013-03-18 05:30:18 +00:00
kib
63efc821c3 Add the pmap function pmap_copy_pages(), which copies page contents
around, taking arrays of vm_page_t for both source and
destination.  Starting offsets and the total transfer size are specified.

The function implements an optimal copying algorithm using
platform-specific optimizations.  For instance, on architectures
where the direct map is available, no transient mappings are created;
for i386 the per-cpu ephemeral page frame is used.  The code was
typically borrowed from pmap_copy_page() for the same
architecture.

Only the i386/amd64, powerpc aim and arm/arm-v6 implementations were
tested at the time of commit.  High-level code, not yet committed to
the tree, ensures that use of the function is only allowed after
explicit enablement.

For sparc64, the existing code has known issues and a stub is added
instead, to allow the kernel to link.

Sponsored by:	The FreeBSD Foundation
Tested by:	pho (i386, amd64), scottl (amd64), ian (arm and arm-v6)
MFC after:	2 weeks
2013-03-14 20:18:12 +00:00
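The interface as described above, sketched as a prototype (consult vm/pmap.h for the authoritative declaration):

    #include <vm/vm.h>

    /*
     * Copy xfersize bytes from the pages in ma[], starting at byte offset
     * a_offset, into the pages in mb[], starting at byte offset b_offset.
     */
    void    pmap_copy_pages(vm_page_t ma[], vm_offset_t a_offset,
                vm_page_t mb[], vm_offset_t b_offset, int xfersize);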
attilio
e98f58faf6 MFC 2013-03-02 14:48:41 +00:00
mav
6cf7cc6e4d MFcalloutng:
Switch eventtimers(9) from using struct bintime to sbintime_t.
Even before this, not a single driver really supported the full dynamic range
of struct bintime even in theory, to say nothing of its practical inexpediency.
This change legitimizes the status quo and cleans up the code.
2013-02-28 13:46:03 +00:00
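For reference, sbintime_t is a 64-bit fixed-point time (32 integer bits, 32 fraction bits), so event-timer math becomes plain integer arithmetic; a small hedged sketch using the sys/time.h helpers:

    #include <sys/time.h>

    /* Illustrative: express one millisecond both ways. */
    static struct bintime
    one_ms_as_bintime(void)
    {
            sbintime_t sbt = SBT_1MS;   /* seconds live in the high 32 bits */

            return (sbttobt(sbt));      /* convert where bintime is still used */
    }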
attilio
8d28f94790 Merge from vmobj-rwlock:
The VM_OBJECT_LOCKED() macro is only used to implement a custom version
of lock assertions right now (which likely spread thanks to
copy and paste).
Remove it and implement actual assertions.

Sponsored by:	EMC / Isilon storage division
Reviewed by:	alc
Tested by:	pho
2013-02-27 18:12:13 +00:00
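The replacement pattern in outline (the assertion macro name is from the later rwlock-based vm_object API; at the time of this commit the mutex-era VM_OBJECT_LOCK_ASSERT() spelling may have been used instead):

    #include <vm/vm.h>
    #include <vm/vm_object.h>

    static void
    object_assert_example(vm_object_t object)
    {
            /*
             * Old, home-grown check:
             *      KASSERT(VM_OBJECT_LOCKED(object), ("object not locked"));
             * New, real assertion:
             */
            VM_OBJECT_ASSERT_WLOCKED(object);
    }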