Commit Graph

634 Commits

Author SHA1 Message Date
dchagin
1e124ec538 Add a macro to test the sv_flags of any process. Change some places to test
the flags instead of explicitly comparing against the addresses of known
sysentvec structures.

MFC after:	1 month
2011-01-26 20:03:58 +00:00
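
A minimal sketch of the flag test described in the commit above. The macro
bodies mirror what sys/sysent.h provides, but consult the header for the
committed definitions and flag values; proc_is_ilp32() is only an
illustration.

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/sysent.h>

#ifndef SV_PROC_FLAG
#define	SV_PROC_FLAG(p, x)	((p)->p_sysent->sv_flags & (x))
#define	SV_CURPROC_FLAG(x)	SV_PROC_FLAG(curproc, x)
#endif

static int
proc_is_ilp32(struct proc *p)
{
	/*
	 * Test a property bit instead of comparing p->p_sysent by address
	 * against, e.g., &ia32_freebsd_sysvec or &elf_linux_sysvec.
	 */
	return (SV_PROC_FLAG(p, SV_ILP32) != 0);
}
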
pluknet
5f536fc1d3 Make MSGBUF_SIZE kernel option a loader tunable kern.msgbufsize.
Submitted by:	perryh pluto.rain.com (previous version)
Reviewed by:	jhb
Approved by:	kib (mentor)
Tested by:	universe
2011-01-21 10:26:26 +00:00
kib
ef4e87bddc For architectures that do not use a direct map and require a real KVA page
for sf buf allocation, use wakeup() instead of wakeup_one() to notify sf
buffer waiters about a freed buffer.

sf_buf_alloc() calls msleep(PCATCH) when the SFB_CATCH flag was given,
and on simultaneous wakeup and signal delivery, msleep() returns
EINTR/ERESTART even though the thread was the one selected by wakeup_one().
As a result, we lose a wakeup, and some other waiter will not be woken up.

Reported and tested by:	az
Reviewed by:	alc, jhb
MFC after:	1 week
2011-01-18 21:57:02 +00:00
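
A simplified sketch (not the actual sf_buf code) of the lost-wakeup scenario
and fix described in the commit above; free_bufs, buf_lock, and the sleep
priority are placeholders for this illustration.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>

static struct mtx buf_lock;	/* placeholder lock, initialized elsewhere */
static int free_bufs;		/* placeholder free-buffer count */

static int
wait_for_buf(int catch)
{
	int error = 0;

	mtx_lock(&buf_lock);
	while (free_bufs == 0) {
		error = msleep(&free_bufs, &buf_lock,
		    PVM | (catch ? PCATCH : 0), "sfbufa", 0);
		/*
		 * With wakeup_one(), the single thread chosen for wakeup can
		 * still return EINTR/ERESTART here because a signal arrived
		 * at the same time.  It then leaves without taking the
		 * buffer, and no other waiter was notified: the wakeup is
		 * lost.  Waking all waiters in release_buf() avoids this.
		 */
		if (error != 0)
			break;
	}
	if (error == 0)
		free_bufs--;
	mtx_unlock(&buf_lock);
	return (error);
}

static void
release_buf(void)
{
	mtx_lock(&buf_lock);
	free_bufs++;
	wakeup(&free_bufs);	/* wake all waiters, not just one */
	mtx_unlock(&buf_lock);
}
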
andreast
3e024e1110 Remove unused variables. Spotted by a cppcheck
(devel/cppcheck, http://sourceforge.net/projects/cppcheck) run.

Approved by: nwhitehorn (mentor)
2011-01-15 19:16:05 +00:00
nwhitehorn
612c23ffbb Fix handling of NX pages on capable CPUs. Thanks to kib for prodding me
in the right direction.
2011-01-13 04:37:48 +00:00
andreast
b3910b1de1 Remove unused variables. Spotted by a cppcheck
(devel/cppcheck, http://sourceforge.net/projects/cppcheck) run.

Approved by: nwhitehorn (mentor)
2011-01-06 20:19:01 +00:00
nwhitehorn
df478abcd2 Only keep track of PTE validity statistics for pages not locked in the
table. The 'locked' attribute is used to circumvent the regular page table
locking for some special pages, with the result that including locked pages
here causes races when updating the stats.
2010-12-28 17:02:15 +00:00
nwhitehorn
708cd7af18 Garbage-collect unused variable. 2010-12-19 16:07:53 +00:00
nwhitehorn
b636c3db31 Add some isync()s related to the 64-bit MMU scratch page to avoid race
conditions on its invalidation.
2010-12-11 20:29:52 +00:00
nwhitehorn
0ff0fd520d Add an abstraction layer to the 64-bit AIM MMU's page table manipulation
logic to support modifying the page table through a hypervisor. This
uses KOBJ inheritance to provide subclasses of the base 64-bit AIM MMU
class with additional methods for page table manipulation.

Many thanks to Peter Grehan for suggesting this design and implementing
the MMU KOBJ inheritance mechanism.
2010-12-04 02:42:52 +00:00
dim
fb307d7d1d After some off-list discussion, revert a number of changes to the
DPCPU_DEFINE and VNET_DEFINE macros, as these cause problems for various
people working on the affected files.  A better long-term solution is
still being considered.  This reversal may give some modules empty
set_pcpu or set_vnet sections, but these are harmless.

Changes reverted:

------------------------------------------------------------------------
r215318 | dim | 2010-11-14 21:40:55 +0100 (Sun, 14 Nov 2010) | 4 lines

Instead of unconditionally emitting .globl's for the __start_set_xxx and
__stop_set_xxx symbols, only emit them when the set_vnet or set_pcpu
sections are actually defined.

------------------------------------------------------------------------
r215317 | dim | 2010-11-14 21:38:11 +0100 (Sun, 14 Nov 2010) | 3 lines

Apply the STATIC_VNET_DEFINE and STATIC_DPCPU_DEFINE macros throughout
the tree.

------------------------------------------------------------------------
r215316 | dim | 2010-11-14 21:23:02 +0100 (Sun, 14 Nov 2010) | 2 lines

Add macros to define static instances of VNET_DEFINE and DPCPU_DEFINE.
2010-11-22 19:32:54 +00:00
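
For reference, the non-static forms of the macros involved, as they exist in
sys/pcpu.h and net/vnet.h; the variable names below are hypothetical.

#include <sys/param.h>
#include <sys/pcpu.h>
#include <net/vnet.h>

/* Per-CPU storage: one instance of the variable for every CPU. */
DPCPU_DEFINE(int, example_pcpu_counter);

/* Per-vnet storage: one instance per virtual network stack under VIMAGE. */
VNET_DEFINE(int, example_vnet_flag);
#define	V_example_vnet_flag	VNET(example_vnet_flag)

Reads then typically go through DPCPU_GET(example_pcpu_counter) and the V_
alias, respectively; the reverted STATIC_* variants only added static linkage
around the same definitions.
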
dim
fda4020a88 Apply the STATIC_VNET_DEFINE and STATIC_DPCPU_DEFINE macros throughout
the tree.
2010-11-14 20:38:11 +00:00
nwhitehorn
2666f52107 Partially revert r215182. There appears to be a silicon bug on the 970
that causes AP bringup to fail if some of the Cell HID-register code
is anywhere in the instruction stream. Pending a better solution, cache
performance on SMP Cell systems running without a hypervisor will be
suboptimal.
2010-11-12 20:26:34 +00:00
nwhitehorn
a1ec11b11a Add CPU support code for the IBM Cell Broadband Engine. 2010-11-12 15:20:10 +00:00
nwhitehorn
358939cf28 Remove use of a separate ofw_pmap on 32-bit CPUs. Many Open Firmware
mappings need to end up in the kernel anyway since the kernel begins
executing in OF context. Separating them adds needless complexity,
especially since the powerpc64 and mmu_oea64 code gave up on it a long
time ago.

As a side effect, the PPC ofw_machdep code is no longer AIM-specific,
so move it to powerpc/ofw.
2010-11-12 05:12:38 +00:00
nwhitehorn
81f6e90db2 Remove or conditionalize some hypervisor-unfriendly instruction sequences. 2010-11-12 04:22:00 +00:00
nwhitehorn
b5495356b2 Add some platform KOBJ extensions and continue integrating PowerPC
hypervisor infrastructure support:
- Fix coexistence of multiple platform modules in the same kernel
- Allow platform modules to provide an SMP topology
- PowerPC hypervisors limit the amount of memory accessible in real mode.
  Allow the platform modules to specify the maximum real-mode address,
  and modify the bits of the kernel that need to allocate
  real-mode-accessible buffers to respect this limit.
2010-11-12 04:18:19 +00:00
nwhitehorn
3d8843cae1 Fix an error in r215067. An existing /chosen/mmu node with a missing
translations property just means we shouldn't add any translations, not that we should
panic.
2010-11-12 04:13:48 +00:00
nwhitehorn
b60b3b7753 Centralize CPU idle routines into powerpc/cpu.c and use the same
cpu_idle_hook mechanism that x86 uses for overriding the idle routine.
This is required to support idling the CPU under PowerPC hypervisors.
2010-11-12 03:43:22 +00:00
raj
c5bcf55d35 Fix typo in the comment. 2010-11-11 13:46:28 +00:00
nwhitehorn
c4343a6448 Add support for the IMISS, DLMISS, and DSMISS traps required to run
FreeBSD on a G2 core.

PR:		powerpc/111296
Submitted by:	Andrew Turner
2010-11-11 02:40:00 +00:00
nwhitehorn
023e6598a8 Make AIM early-boot code function correctly without Open Firmware. 2010-11-09 23:53:47 +00:00
jhb
45c0759920 Adjust the order of operations in spinlock_enter() and spinlock_exit() to
work properly with single-stepping in a kernel debugger.  Specifically,
these routines have always disabled interrupts before increasing the nesting
count and restored the prior state of interrupts after decreasing the nesting
count to avoid problems with a nested interrupt not disabling interrupts
when acquiring a spin lock.  However, trap interrupts for single-stepping
can still occur even when interrupts are disabled.  Now the state of
interrupts is not saved in the thread until after interrupts have been
disabled and the nesting count has been increased.  Similarly, the saved
state from the thread cannot be read once the nesting count has been
decreased to zero.  To fix this, use temporary variables to store interrupt
state and shuffle it between the thread's MD area and the appropriate
registers.

In cooperation with:	bde
MFC after:     1 month
2010-11-05 13:42:58 +00:00
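
A sketch of the resulting ordering, modeled on the MD spinlock routines; the
md_spinlock_count and md_saved_msr field names are illustrative, and the real
code is per-architecture.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <machine/cpufunc.h>

void
spinlock_enter(void)
{
	struct thread *td;
	register_t msr;

	td = curthread;
	if (td->td_md.md_spinlock_count == 0) {
		msr = intr_disable();		 /* 1: disable interrupts   */
		td->td_md.md_spinlock_count = 1; /* 2: raise nesting count  */
		td->td_md.md_saved_msr = msr;	 /* 3: only now store in td */
	} else
		td->td_md.md_spinlock_count++;
	critical_enter();
}

void
spinlock_exit(void)
{
	struct thread *td;
	register_t msr;

	td = curthread;
	critical_exit();
	msr = td->td_md.md_saved_msr;	/* read while the count is still > 0 */
	td->td_md.md_spinlock_count--;
	if (td->td_md.md_spinlock_count == 0)
		intr_restore(msr);
}
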
nwhitehorn
d608c982a1 Fix two mistakes on 32-bit systems. The slbmte code in syscall() is 64-bit
only, and should be protected with an ifdef, and the no-execute bit in
32-bit set_user_sr() should be set before the comparison, not after, or
it will never match.
2010-11-03 16:21:47 +00:00
nwhitehorn
8857600893 Clean up the user segment handling code a little more. Now that
set_user_sr() itself caches the user segment VSID, there is no need for
cpu_switch() to do it again. This change also unifies the 32 and 64-bit
code paths for kernel faults on user pages and remaps the user SLB slot
on 64-bit systems when taking a syscall to avoid some unnecessary segment
exception traps.
2010-11-03 15:15:48 +00:00
alc
0fbc128e60 Implement pmap_is_prefaultable().
Reviewed by:	nwhitehorn
2010-11-01 02:22:48 +00:00
nwhitehorn
c108d430e6 Add a security nit to recent copyin/out changes: map the user segment
no-execute in case of exploitable kernel bugs.

MFC after:	1 week
2010-10-31 23:04:15 +00:00
nwhitehorn
bd1e7d8ed8 Next-to-leading-order perturbation of synchronization operations for
switching the user segment register. All races should now be closed and
only a minimum of pipeline flushes should be required to close them.
2010-10-31 22:55:51 +00:00
nwhitehorn
eb3515b958 Add some missing parentheses so that moea_bat_mapped() actually works.
Submitted by:	alc
MFC after:	3 days
2010-10-31 15:07:09 +00:00
nwhitehorn
ecfb41d217 Restructure the way the copyin/copyout segment is stored to prevent a
concurrency bug. Since all SLB/SR entries were invalidated during an
exception, a decrementer exception could cause the user segment to be
invalidated during a copyin()/copyout() without a thread switch that
would cause it to be restored from the PCB, potentially causing the
operation to continue on invalid memory. This is now handled by explicit
restoration of segment 12 from the PCB on 32-bit systems and a check in
the Data Segment Exception handler on 64-bit.

While here, cause copyin()/copyout() to check whether the requested
user segment is already installed, saving some pipeline flushes, and
fix the synchronization primitives around the mtsr and slbmte
instructions to prevent accessing stale segments.

MFC after:	2 weeks
2010-10-30 23:07:30 +00:00
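
An illustrative 32-bit sketch of the "skip reloading an already-installed
user segment" optimization from the commit above; cached_user_vsid,
va_to_vsid(), isync(), and install_user_segment() are stand-ins declared
here only for the sketch, not the committed names.

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/pmap.h>

/* Stand-in prototypes for this sketch. */
register_t va_to_vsid(pmap_t pm, vm_offset_t va);
void	isync(void);
void	install_user_segment(register_t vsid);

static register_t cached_user_vsid;	/* stand-in for per-thread PCB state */

static void
set_user_segment(pmap_t pm, const void *uaddr)
{
	register_t vsid;

	vsid = va_to_vsid(pm, (vm_offset_t)uaddr);
	if (cached_user_vsid == vsid)
		return;		/* segment already installed: skip the reload */

	cached_user_vsid = vsid;
	isync();			/* order against earlier user accesses */
	install_user_segment(vsid);	/* mtsr (32-bit) or slbmte (64-bit),
					   followed by another isync */
}
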
nwhitehorn
306dfd834d Handle vector assist traps without a kernel panic, by setting denormalized
values to zero. A correct solution would involve emulating vector
operations on denormalized values, but this has little effect on accuracy
and is much less complicated for now.

MFC after:	2 weeks
2010-10-05 18:08:07 +00:00
nwhitehorn
09583f5780 Follow exactly the steps in the architecture manual for correctly invalidating
TLB entries instead of trying to cut corners.
2010-10-04 16:07:48 +00:00
nwhitehorn
92b2ae5c12 Fix pmap_page_set_memattr() behavior in the presence of fictitious pages
by just caching the mode for later use by pmap_enter(), following amd64.
While here, correct some mismerges from mmu_oea64 -> mmu_oea and clean
up some dead code found while fixing the fictitious page behavior.
2010-10-01 18:59:30 +00:00
nwhitehorn
d3610bff0a Add support for memory attributes (pmap_mapdev_attr() and friends) on
PowerPC/AIM. This is currently stubbed out on Book-E, since I have no
idea how to implement it there.
2010-09-30 18:14:12 +00:00
nwhitehorn
7f0b02b79c Split the SLB mirror cache into two kinds of object, one for kernel maps
which are similar to the previous ones, and one for user maps, which
are arrays of pointers into the SLB tree. This change makes user SLB
updates atomic, closing a window for memory corruption. While here,
rearrange the allocation functions to make context switches faster.
2010-09-16 03:46:17 +00:00
nwhitehorn
489c1437aa Replace the SLB backing store splay tree used on 64-bit PowerPC AIM
hardware with a lockless sparse tree design. This marginally improves
the performance of PMAP and allows copyin()/copyout() to run without
acquiring locks when used on wired mappings.

Submitted by:	mdf
2010-09-16 00:22:25 +00:00
grehan
bd5391ac7c Introduce inheritance into the PowerPC MMU kobj interface.
include/mmuvar.h - Change the MMU_DEF macro to also create the class
definition as well as define the DATA_SET. Add a macro, MMU_DEF_INHERIT,
which has an extra parameter specifying the MMU class to inherit methods
from. Update the comments at the start of the header file to describe the
new macros.

booke/pmap.c
aim/mmu_oea.c
aim/mmu_oea64.c - Collapse mmu_def_t declaration into updated MMU_DEF macro

The MMU_DEF_INHERIT macro will be used in the PS3 MMU implementation to
allow it to inherit the stock powerpc64 MMU methods.

Reviewed by:	nwhitehorn
2010-09-15 00:17:52 +00:00
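
Schematic use of the two macros described above. The argument lists here are
assumptions for illustration only; see powerpc/include/mmuvar.h for the
authoritative definitions, and note that the method tables are defined
elsewhere in the respective pmap implementations.

#include <machine/mmuvar.h>

/* Base class: MMU_DEF now creates the kobj class definition and the
 * DATA_SET entry in one step. */
MMU_DEF(oea64_mmu, MMU_TYPE_G5, moea64_methods, 0);

/* Derived class: identical, plus a final argument naming the class whose
 * methods are inherited -- e.g. a PS3 hypervisor MMU reusing the stock
 * powerpc64 methods and overriding only page-table manipulation. */
MMU_DEF_INHERIT(ps3_mmu, "mmu_ps3", mps3_methods, 0, oea64_mmu);
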
grehan
93d9948943 Resurrect PSIM support by moving the cacheline size-detection warning
printf outside of the MMU-disabled region. A call into OpenFirmware
with the MMU off resulted in an internal PSIM assert.
2010-09-14 03:18:11 +00:00
mav
eb4931dc6c Refactor timer management code, giving priority to one-shot operation mode.
The main goal of this is to generate timer interrupts only when there is
some work to do. When the CPU is busy, interrupts are generated at the full
rate of hz + stathz to fulfill scheduler and timekeeping requirements. But
when the CPU is idle, only the minimum set of interrupts (down to 8
interrupts per second per CPU now) needed to handle scheduled callouts is
executed. This allows a significant increase in idle CPU sleep time,
increasing the effect of static power-saving technologies. It should also
reduce host CPU load on virtualized systems when the guest system is idle.

There is a set of tunables, also available as writable sysctls, that
control the event timer subsystem behavior:
  kern.eventtimer.timer - chooses the event timer hardware to use. On x86
there are up to 4 different kinds of timers. Depending on whether the
chosen timer is per-CPU, the behavior of the other options differs
slightly.
  kern.eventtimer.periodic - chooses between periodic and one-shot
operation mode. In periodic mode, the current timer hardware is taken as
the only source of time for time events. This mode is quite similar to the
previous kernel behavior. One-shot mode instead uses the currently selected
time counter hardware to schedule all needed events one by one and programs
the timer to generate an interrupt at exactly the specified time. The
default value depends on the capabilities of the chosen timer, but one-shot
mode is preferred unless another mode is forced by the user or hardware.
  kern.eventtimer.singlemul - in periodic mode, specifies how many times
higher the timer frequency should be, so as not to strictly alias the
hardclock() and statclock() events. Default values are 2 and 4, but they
can be reduced to 1 if extra interrupts are unwanted.
  kern.eventtimer.idletick - makes each CPU receive every timer interrupt
independently of whether it is busy or not. By default this option is
disabled. If the chosen timer is per-CPU and runs in periodic mode, this
option has no effect - all interrupts are generated anyway.

Since this patch modifies cpu_idle() on some platforms, I have also
refactored the x86 one. It now makes use of the MONITOR/MWAIT instructions
(if supported) under high sleep/wakeup rates, as a fast alternative to the
other methods. This allows the SMP scheduler to wake up sleeping CPUs much
faster without using IPIs, significantly increasing performance on some
highly task-switching loads.

Tested by:	many (on i386, amd64, sparc64 and powerpc)
H/W donated by:	Gheorghe Ardelean
Sponsored by:	iXsystems, Inc.
2010-09-13 07:25:35 +00:00
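
A rough sketch of the MONITOR/MWAIT idle idea mentioned in the commit above:
the idle CPU arms a monitor on a per-CPU word, so another CPU can wake it
with a plain store instead of an IPI. The state variable and hint values are
simplified stand-ins for this sketch.

#include <sys/param.h>
#include <machine/cpufunc.h>	/* cpu_monitor(), cpu_mwait() on x86 */

#define	STATE_RUNNING	0
#define	STATE_MWAIT	1

static volatile int idle_state;		/* would be per-CPU in real code */

static void
idle_with_mwait(void)
{
	idle_state = STATE_MWAIT;
	cpu_monitor(__DEVOLATILE(const void *, &idle_state), 0, 0);
	if (idle_state == STATE_MWAIT)	/* nobody stored to it in between */
		cpu_mwait(0, 0);	/* hint 0 ~ shallow C1 sleep */
	idle_state = STATE_RUNNING;
}

static void
wake_idle_cpu(void)
{
	/* A store to the monitored word terminates the mwait; no IPI. */
	idle_state = STATE_RUNNING;
}
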
mav
f9956f69fb Update PowerPC event timer code to use new event timers infrastructure.
Reviewed by:	nwhitehorn
Tested by:	andreast
H/W donated by:	Gheorghe Ardelean
2010-09-11 04:45:51 +00:00
avg
c9fe8ad7f0 bus_add_child: change type of order parameter to u_int
This reflects the actual type used to store and compare child device orders.
The change was mostly done via a Coccinelle (soon to be devel/coccinelle)
semantic patch.
Verified with LINT+modules kernel builds.

Followup to:	r212213
MFC after:	10 days
2010-09-10 11:19:03 +00:00
nwhitehorn
a03821b388 Reorder the statistics tracking and table lock acquisitions already in place
to avoid race conditions when updating the PVO statistics.
2010-09-09 16:06:55 +00:00
nwhitehorn
04e78f792b Fix a printf specifier on 64-bit systems. 2010-09-08 19:28:43 +00:00
nwhitehorn
e956a1ec84 Fix a typo in the original import of this code from NetBSD that caused the
wrong element of the VSID bitmap array to be examined after a collision,
leading to reallocation of in-use VSIDs under some circumstances, with
attendant memory corruption. Also add an assert to check for this kind of
problem in the future.

MFC after:	4 days
2010-09-08 16:58:06 +00:00
nwhitehorn
6e924b9074 Fix an error made in r209975 related to context ID allocation for 64-bit
PowerPC CPUs running a 32-bit kernel. This bug could cause in-use VSIDs
to be allocated again to another process, causing memory space overlaps
and corruption.

Reported by:	linimon
2010-09-07 23:31:48 +00:00
nwhitehorn
4701f590b3 Fix the same race condition on 32-bit AIM CPUs that was fixed for 64-bit
ones in r211967 involving VSID allocation.
2010-09-06 23:07:58 +00:00
mav
90db0224ff Make nexus report the name and compat fields as pnpinfo for devices on the
first level of the hierarchy, the same as is done on deeper levels.
2010-09-05 19:57:24 +00:00
nwhitehorn
655a96888d Restructure how reset and poweroff are handled on PowerPC systems, since
the existing code was very platform specific, and broken for SMP systems
trying to reboot from KDB.

- Add a new PLATFORM_RESET() method to the platform KOBJ interface, and
  migrate existing reset functions into platform modules.
- Modify the OF_reboot() routine to submit the request by hand to avoid
  the IPIs involved in the regular openfirmware() routine. This fixes
  reboot from KDB on SMP machines.
- Move non-KDB reset and poweroff functions on the Powermac platform
  into the relevant power control drivers (cuda, pmu, smu), instead of
  using them through the Open Firmware backdoor.
- Rename platform_chrp to platform_powermac since it has become
  increasingly Powermac specific. When we gain support for IBM systems,
  we will grow a new platform_chrp.
2010-08-31 15:27:46 +00:00
nwhitehorn
97e3e36708 Remove some code made obsolete by the powerpc64 import. 2010-08-31 15:22:09 +00:00
nwhitehorn
7bec697fa3 Missed one place the SLB lock should be held in r211967. 2010-08-31 02:07:13 +00:00