133 Commits

Author SHA1 Message Date
Nathan Whitehorn
a107d8aac9 Change the arguments of exec_setregs() so that it receives a pointer
to the image_params struct instead of several members of that struct
individually. This makes it easier to expand its arguments in the future
without touching all platforms.

Reviewed by:	jhb
2010-03-25 14:24:00 +00:00
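A minimal sketch of the interface change described above; the exact prototypes are assumptions rather than text copied from the commit:

    /* Before: selected image_params members passed individually. */
    void exec_setregs(struct thread *td, u_long entry, u_long stack,
        u_long ps_strings);

    /* After: the whole struct, so new fields need no per-platform churn. */
    void exec_setregs(struct thread *td, struct image_params *imgp,
        u_long stack);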
Nathan Whitehorn
ab73970649 Reduce KVA pressure on OEA64 systems running in bridge mode by mapping
UMA segments at their physical addresses instead of into KVA. This emulates
the direct mapping behavior of OEA32 in an ad-hoc way. To make this work
properly required sharing the entire kernel PMAP with Open Firmware, so
ofw_pmap is transformed into a stub on 64-bit CPUs.

Also implement some more tweaks to get more mileage out of our limited
amount of KVA, principally by extending KVA into segment 16 until the
beginning of the first OFW mapping.

Reported by:	linimon
2010-02-20 16:23:29 +00:00
Nathan Whitehorn
9fbbaac0bb dcbz interprets r0 as a literal zero in its first operand, not in its second.
This worked before by accident.

MFC after:	1 week
2009-12-03 20:55:09 +00:00
Nathan Whitehorn
227f66048e Add a CPU features framework on PowerPC and simplify CPU setup a little
more. This provides three new sysctls to user space:
hw.cpu_features - A bitmask of available CPU features
hw.floatingpoint - Whether or not there is hardware FP support
hw.altivec - Whether or not Altivec is available

PR:		powerpc/139154
MFC after:	10 days
2009-11-28 17:33:19 +00:00
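As a usage illustration, a small userland program (hypothetical, not part of the commit) could query the new sysctls with sysctlbyname(3):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        int altivec;
        size_t len = sizeof(altivec);

        /* hw.altivec: non-zero when Altivec is available */
        if (sysctlbyname("hw.altivec", &altivec, &len, NULL, 0) == 0)
            printf("Altivec available: %d\n", altivec);
        return (0);
    }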
Nathan Whitehorn
d8cd25d022 Turn off Altivec data-stream prefetching before going into power-save
mode on those CPUs that need it.
2009-10-29 14:22:09 +00:00
Konstantin Belousov
d6e029adbe In r197963, a race in which a thread was selected for signal delivery
while in kernel mode and later changed its signal mask to block the
signal was fixed for sigprocmask(2) and pthread_exit(3). The same race
exists for the sigreturn(2), setcontext(2) and swapcontext(2) syscalls.

Use kern_sigprocmask() instead of direct manipulation of td_sigmask to
reschedule newly blocked signals, closing the race.

Reviewed by:	davidxu
Tested by:	pho
MFC after:	1 month
2009-10-27 10:47:58 +00:00
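A hedged sketch of the pattern described above, as it might look in a sigreturn-style path (the real diff may differ in locking and flags):

    /* Before (racy): newly blocked signals are not rescheduled. */
    td->td_sigmask = ucp->uc_sigmask;
    SIG_CANTMASK(td->td_sigmask);

    /* After: kern_sigprocmask() reschedules newly blocked signals. */
    kern_sigprocmask(td, SIG_SETMASK, &ucp->uc_sigmask, NULL, 0);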
Nathan Whitehorn
dab90f68ed Add some more paranoia to setting HID registers, and update the AIM
clock routines to work better with SMP. This makes SMP work fully and
stably on an Xserve G5.

Obtained from:	Book-E (clock bits)
2009-10-23 21:36:33 +00:00
Peter Grehan
f61afb4498 Get the gdb/psim emulator functioning again.
aim/machdep.c:
  - the RI status register bit needs to be set when doing the mtmsrd 64-bit
    instruction test
  - psim doesn't implement the dcbz instruction, so the run-time cacheline
    test fails. Set the cacheline size to 32 to avoid infinite loops in
    future calls to __syncicache()

aim/platform_chrp.c:
  - if, after iterating through the children of / looking for a node with a
    name property of "cpus", one still isn't found, just search directly
    for '/cpus'.
  - psim doesn't put a "reg" property on its cpu nodes, so assume 0,
    since it is uniprocessor-only at this point

powerpc/openpic.c:
  - the number of CPUs reported is 1 too many on psim's openpic

Reviewed by:	nwhitehorn
MFC after:	1 week (openpic part)
2009-06-10 12:47:54 +00:00
Nathan Whitehorn
9eb9db93da Introduce support for cpufreq on PowerPC with the dynamic frequency
switching capabilities of the MPC7447A and MPC7448.
2009-05-31 09:01:23 +00:00
Marcel Moolenaar
dbb95048da Add cpu_flush_dcache() for use after non-DMA based I/O so that a
possible future I-cache coherency operation can succeed. On ARM
for example the L1 cache can be (is) virtually mapped, which
means that any I/O that uses temporary mappings will not see the
I-cache made coherent. On ia64 a similar behaviour has been
observed. By flushing the D-cache, execution of binaries backed
by md(4) and/or NFS works reliably.
For Book-E (powerpc), execution over NFS exhibits SIGILL once in
a while as well, though cpu_flush_dcache() hasn't been implemented
yet.

Doing an explicit D-cache flush as part of the non-DMA based I/O
read operation eliminates the need to do it as part of the
I-cache coherency operation itself and as such avoids pessimizing
the DMA-based I/O read operations, for which the D-cache is already
flushed/invalidated. It also allows future optimizations whereby
the bcopy() followed by the D-cache flush can be integrated into a
single operation, which could be implemented using on-chip DMA
engines, bypassing the D-cache altogether.
2009-05-18 18:37:18 +00:00
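A sketch of the intended call site in a non-DMA read path such as md(4) or NFS (assumed usage, not the literal patch):

    /* Copy file data into the destination buffer, then flush the D-cache
     * so that a later I-cache coherency operation sees the new bytes. */
    bcopy(src, dst, len);
    cpu_flush_dcache(dst, len);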
Nathan Whitehorn
b40ce02a2f Factor out platform dependent things unrelated to device drivers into a
new platform module. These are probed in early boot, and have the
responsibility of determining the layout of physical memory, determining
the CPU timebase frequency, and handling the zoo of SMP mechanisms
found on PowerPC.

Reviewed by:	marcel, raj
Book-E parts by: raj
2009-05-14 00:34:26 +00:00
Rafal Jaworowski
6a5f0fd39d Zero PCB during early AIM PowerPC init.
When memory is not zeroed by firmware, an uninitialized PCB can have bogus
contents, which appear as a saved onfault condition, an Altivec context to
restore, etc., and lead to corruption/crashes. This commit fixes such issues.

Submitted by:	Michal Mazur arg ! semihalf dot com
Tested by:	Andreas Tobler andreast-list ! fgznet dot ch
2009-04-24 08:57:54 +00:00
Nathan Whitehorn
8cf9d6cd7e Rework the way we get the cacheline size. Instead of having a table of
CPUs known to use 128 byte cache lines and defaulting to 32, use the dcbz
instruction to measure it. Also make dcbz behave the way you would
expect on PPC 970.
2009-04-12 03:03:55 +00:00
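A minimal sketch of the measurement idea (not the literal commit code): clear one cache block with dcbz and count how many bytes were zeroed. Note that the first operand is r0, which dcbz reads as a literal zero, matching the fix noted in the 9fbbaac0bb entry above.

    static u_int
    measure_cacheline_size(void)
    {
        static char buf[256] __aligned(256);    /* generously sized/aligned */
        u_int i, size;

        memset(buf, 0xff, sizeof(buf));
        __asm __volatile("dcbz 0,%0" :: "r"(buf) : "memory");
        for (i = 0, size = 0; i < sizeof(buf); i++)
            if (buf[i] == 0)
                size++;
        return (size);
    }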
Nathan Whitehorn
029c6e958c Fix the build when KDB is disabled. The second instance of rfi in
trap_subr.S that is patched at runtime to rfid on 64-bit systems
is inside KDB-specific code, so don't patch it without KDB.
2009-04-05 21:52:13 +00:00
Nathan Whitehorn
1c96bdd146 Add support for 64-bit PowerPC CPUs operating in the 64-bit bridge mode
provided, for example, on the PowerPC 970 (G5), as well as on related CPUs
like the POWER3 and POWER4.

This also adds support for various built-in hardware found on Apple G5
hardware (e.g. the IBM CPC925 northbridge).

Reviewed by:    grehan
2009-04-04 00:22:44 +00:00
Nathan Whitehorn
1ac37bcb77 Add Altivec support for supported CPUs. This is derived from the FPU support
code, and also reduces the size of trapcode to fit inside a 32-byte handler
slot.

Reviewed by:	grehan
MFC after:	2 weeks
2009-02-20 17:48:40 +00:00
Nathan Whitehorn
91416fb268 Modularize the Open Firmware client interface to allow run-time switching
of OFW access semantics, in order to allow future support for real-mode
OF access and flattened device trees. OF client interface modules are
implemented using KOBJ, in a similar way to the PPC PMAP modules.

Because we need Open Firmware to be available before mutexes can be used on
sparc64, changes are also included to allow KOBJ to be used very early in
the boot process by only using the mutex once we know it has been initialized.

Reviewed by:    marius, grehan
2008-12-20 00:33:10 +00:00
Nathan Whitehorn
4c01c0b965 Allow the cacheline size on PowerPC to be set at runtime. This is essential for
supporting 64-bit CPUs, which often have 128-byte cache lines instead of the
standard 32.
2008-09-24 00:28:46 +00:00
Marcel Moolenaar
20c5910af7 Call powerpc_sync() instead of using an asm statement. 2008-08-30 18:39:29 +00:00
Alan Cox
d1fdd63483 The VM system no longer uses setPQL2(). Remove it and its helpers. 2008-05-23 04:03:54 +00:00
Marcel Moolenaar
12640815f8 MFp4: SMP support 2008-04-27 22:33:43 +00:00
Jeff Roberson
6c47aaae12 - Add an integer argument to idle to indicate how likely we are to wake
from idle over the next tick.
 - Add a new MD routine, cpu_wake_idle() to wakeup idle threads who are
   suspended in cpu specific states.  This function can fail and cause the
   scheduler to fall back to another mechanism (ipi).
 - Implement support for mwait in cpu_idle() on i386/amd64 machines that
   support it.  mwait is a higher performance way to synchronize cpus
   as compared to hlt & ipis.
 - Allow selecting the idle routine by name via sysctl machdep.idle.  This
   replaces machdep.cpu_idle_hlt.  Only idle routines supported by the
   current machine are permitted.

Sponsored by:	Nokia
2008-04-25 05:18:50 +00:00
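A hedged sketch of the selection-by-name idea; the helper names below are assumptions, not code from the commit:

    void cpu_idle_hlt(int busy);    /* hlt-based idling (assumed name) */
    void cpu_idle_mwait(int busy);  /* mwait-based idling (assumed name) */

    /* The machdep.idle sysctl handler repoints this at the routine whose
     * name was written, rejecting names the current machine can't support. */
    static void (*cpu_idle_fn)(int) = cpu_idle_hlt;

    void
    cpu_idle(int busy)
    {
        /* "busy" hints how likely we are to wake over the next tick. */
        cpu_idle_fn(busy);
    }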
Poul-Henning Kamp
9b4a8ab7ba Now that all platforms use genclock, shuffle things around slightly
for better structure.

Much of this is related to <sys/clock.h>, which should really have
been called <sys/calendar.h>, but unless and until we need the name,
the repocopy can wait.

In general the kernel does not know about minutes, hours, days,
timezones, daylight savings time, leap-years and such.  All that
is theoretically a matter for userland only.

Parts of the kernel code do, however, care: badly designed filesystems
store timestamps in local time and RTC chips almost universally
track time in a YY-MM-DD HH:MM:SS format, and sometimes in the local
timezone instead of UTC.  For this we have <sys/clock.h>.

<sys/time.h> on the other hand, deals with time_t, timeval, timespec
and so on.  These know only seconds and fractions thereof.

Move inittodr() and resettodr() prototypes to <sys/time.h>.
Retain the names as it is one of the few surviving PDP/VAX references.

Move startrtclock() to <machine/clock.h> on relevant platforms; it
is an MD call between machdep.c/clock.c.  Remove references to it
elsewhere.

Remove a lot of unnecessary <sys/clock.h> includes.

Move the machdep.disable_rtc_set sysctl to subr_rtc.c where it belongs.
XXX: should be kern.disable_rtc_set really, it's not MD.
2008-04-22 19:38:30 +00:00
Marcel Moolenaar
014ffa990d Allocate a stack (with optional guard pages) for thread0 and
switch to it before calling mi_startup().
2008-04-16 23:28:12 +00:00
Robert Watson
237fdd787b In keeping with style(9)'s recommendations on macros, use a ';'
after each SYSINIT() macro invocation.  This makes a number of
lightweight C parsers much happier with the FreeBSD kernel
source, including cflow's prcc and lxr.

MFC after:	1 month
Discussed with:	imp, rink
2008-03-16 10:58:09 +00:00
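For illustration, the recommended form now looks like this (example names are hypothetical):

    static void
    foo_init(void *arg)
    {
        /* one-time initialization */
    }
    SYSINIT(foo, SI_SUB_DRIVERS, SI_ORDER_ANY, foo_init, NULL);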
Marcel Moolenaar
8a109fa3d8 For AIM, have cpu_idle() set MSR_POW when the powerpc_pow_enabled
variable is set. On my Mac Mini this puts the CPU in NAP mode when
the kernel is idle and, any technical or environmental reasons
aside, means I don't have to listen to the fan all day :-)
2008-03-07 22:27:06 +00:00
Rafal Jaworowski
786e4a1b04 Unify and generalize PowerPC headers, adjust AIM code accordingly.
Rework of this area is a prerequisite for importing e500 support (and
other PowerPC core variations in the future). Mainly, the following
headers are refactored so that we can cover the low-level differences between
various machines within the PowerPC architecture:

  <machine/pcpu.h>
  <machine/pcb.h>
  <machine/kdb.h>
  <machine/hid.h>
  <machine/frame.h>

Areas which use the above are adjusted and cleaned up.

Credits for this rework go to marcel@

Approved by:	cognet (mentor)
MFp4:		e500
2008-03-02 17:05:57 +00:00
Marcel Moolenaar
b0c2bc946d Remove SMP left-overs from NetBSD. 2008-02-12 20:55:51 +00:00
Robert Watson
3de213cc00 Add a new 'why' argument to kdb_enter(), and a set of constants to use
for that argument.  This will allow DDB to detect the broad category of
reason why the debugger has been entered, which it can use for the
purposes of deciding which DDB script to run.

Assign approximate why values to all current consumers of the
kdb_enter() interface.
2007-12-25 17:52:02 +00:00
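Illustrative call sites with the new first argument; the specific constants are assumed from the KDB_WHY_* namespace the commit introduces:

    kdb_enter(KDB_WHY_PANIC, "panic");              /* entered from panic() */
    kdb_enter(KDB_WHY_BREAK, "Break to debugger");  /* console break sequence */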
Peter Grehan
2058844493 Split decr_init() into two, with the section that reads the timebase
frequency from OpenFirmware moved out and into a routine that is called
from cpu_startup().

This allows correct reporting of the CPU clockspeed when printing out
CPU information at boot time.

Reported by:	numerous
Reviewed by:	marcel
MFC after:	1 day
2007-11-13 15:47:55 +00:00
Konstantin Belousov
89b57fcf01 Fix for the panic("vm_thread_new: kstack allocation failed") and
silent NULL pointer dereference in the i386 and sparc64 pmap_pinit()
when the kmem_alloc_nofault() failed to allocate address space. Both
functions now return an error instead of panicking or dereferencing NULL.

As a consequence, vmspace_exec() and vmspace_unshare() return an errno
int. A struct vmspace arg was added to vm_forkproc() to avoid dealing
with failed allocation when most of the fork1() job is already done.

The kernel stack for the thread is now set up in thread_alloc(),
which itself may return NULL. Also, allocation of the first process
thread is performed in fork1() to properly deal with stack
allocation failure. proc_linkup() is separated into proc_linkup(),
called from fork1(), and proc_linkup0(), which is used to set up the
kernel process (formerly known as swapper).

In collaboration with:	Peter Holm
Reviewed by:	jhb
2007-11-05 11:36:16 +00:00
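A hedged sketch of the new failure handling in a fork path (shape assumed, not the literal diff):

    struct thread *newtd;

    newtd = thread_alloc();
    if (newtd == NULL)
        return (ENOMEM);    /* back out instead of panicking on kstack setup */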
Attilio Rao
2feb50bf7d Revert VMCNT_* operations introduction.
Probably, a general approach is not the best solution here, so we should
solve the sched_lock protection problems separately.

Requested by: alc
Approved by: jeff (mentor)
2007-05-31 22:52:15 +00:00
Marcel Moolenaar
82c663b4fe Don't initialize the decrementer before initclocks() is called.
Use cpu_initclocks() for that, as it ensures that relevant locks
have been initialized.
2007-05-27 21:05:35 +00:00
Jeff Roberson
222d01951f - define and use VMCNT_{GET,SET,ADD,SUB,PTR} macros for manipulating
vmcnts.  This can be used to abstract away pcpu details, but it also changes
   all counters to use atomics now.  This means sched_lock is no longer
   responsible for protecting counts in the switch routines.

Contributed by:		Attilio Rao <attilio@FreeBSD.org>
2007-05-18 07:10:50 +00:00
Kevin Lo
e82e4cb1b8 Remove the cast to caddr_t for sfp; it's not needed.
Reviewed by: marcel
2007-02-12 08:59:33 +00:00
Marcel Moolenaar
e4da8bea2c Propagate the CPU model to the hw.model sysctl. 2007-01-14 21:45:05 +00:00
Julian Elischer
ad1e7d285a Threading cleanup.. part 2 of several.
Make part of John Birrell's KSE patch permanent..
Specifically, remove:
Any reference to the ksegrp structure. This feature was
never fully utilised and made things overly complicated.
All code in the scheduler that tried to make threaded programs
fair to unthreaded programs.  Libpthread processes will already
do this to some extent and libthr processes already disable it.

Also:
Since this makes such a big change to the scheduler(s), take the opportunity
to rename some structures and elements that had to be moved anyhow.
This makes the code a lot more readable.

The ULE scheduler compiles again but I have no idea if it works.

The 4bsd scheduler still requires a little cleaning and some functions that now do
ALMOST nothing will go away, but I thought I'd do that as a separate commit.

Tested by David Xu, and Dan Eischen using libthr and libpthread.
2006-12-06 06:34:57 +00:00
John Birrell
8460a577a4 Make KSE a kernel option, turned on by default in all GENERIC
kernel configs except sun4v (which doesn't process signals properly
with KSE).

Reviewed by:	davidxu@
2006-10-26 21:42:22 +00:00
Maxim Sobolev
75e73d8796 Use the proper trap code for EXC_ALI traps. This fixes SIGBUS during
unaligned 64-bit loads/stores.

MFC after:	2 weeks
2006-08-03 22:44:46 +00:00
Poul-Henning Kamp
c40da00ca3 Since DELAY() was moved, most <machine/clock.h> #includes have been
unnecessary.
2006-05-16 14:37:58 +00:00
Peter Grehan
162138c989 Set the siginfo si_addr field, and also the mysterious 3rd parameter
to old-style signals, to be the DAR register for DSI miss exceptions.
This gives the address of the access rather than the instruction
address. The behaviour is now the same as on i386.

Found by:  libsigsegv tests
2006-01-07 01:55:12 +00:00
Alexander Leidinger
ef39c05baa MI changes:
- provide an interface (macros) to the page coloring part of the VM system;
   this allows trying different coloring algorithms without the need to
   touch every file [1]
 - make the page queue tuning values readable: sysctl vm.stats.pagequeue
 - autotuning of the page coloring values based upon the cache size instead
   of options in the kernel config (disabling of the page coloring as a
   kernel option is still possible)

MD changes:
 - detection of the cache size: only IA32 and AMD64 (untested) contain
   cache size detection code; every other arch just comes with a dummy
   function (this results in the use of default values like it was the
   case without the autotuning of the page coloring)
 - print some more info on Intel CPUs (like we do on AMD and Transmeta
   CPUs)

Note to AMD owners (IA32 and AMD64): please run "sysctl vm.stats.pagequeue"
and report if the cache* values are zero (= bug in the cache detection code)
or not.

Based upon work by:	Chad David <davidc@acns.ab.ca> [1]
Reviewed by:		alc, arch (in 2004)
Discussed with:		alc, Chad David, arch (in 2004)
2005-12-31 14:39:20 +00:00
Peter Grehan
f9c702db84 Insert a layer of indirection to the pmap code, using a kobj for
the interface. This allows run-time selection of MMU code, based
on CPU-type detection, or tunable-overrides when testing new code.

Pre-requisite for G5 support.

conf/files.powerpc
  - remove pmap.c
  - add mmu_if.h, mmu_oea.c, pmap_dispatch.c

powerpc/include/mmuvar.h
  - definitions for MMU implementations

powerpc/include/pmap.h
  - remove pmap_pte_spill declaration
  - add pmap_mmu_install declaration
  - size the phys_avail array
  - pmap_bootstrapped is now global-scope

powerpc/powerpc/machdep.c
  - call kobj_machdep_init early in the boot sequence to allow
    kobj usage prior to SI_SUB_LOCK
  - install the OEA pmap code. This will be moved to CPU-specific
    init code in the future.

powerpc/powerpc/mmu_if.m
  - Kobj MMU interface definitions

powerpc/powerpc/pmap_dispatch.c
  - central dispatch for pmap calls
  - contains the global mmu kobj and the routine to locate the
    mmu implementation and init the kobj
2005-11-08 06:48:08 +00:00
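A hedged sketch of the dispatch layer: each pmap entry point simply forwards to the installed MMU kobj through the MMU_*() methods generated from mmu_if.m (prototype details assumed from the era's pmap interface):

    void
    pmap_enter(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot,
        boolean_t wired)
    {
        MMU_ENTER(mmu_obj, pmap, va, m, prot, wired);
    }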
David Xu
9104847f21 1. Change the prototypes of trapsignal and sendsig to use ksiginfo_t *; most
changes in MD code are trivial. Before this change, trapsignal and
   sendsig used discrete parameters; now they use member fields of the
   ksiginfo_t structure. For sendsig, this change allows us to pass the
   POSIX realtime signal value to user code.

2. Remove cpu_thread_siginfo; it is no longer needed because we now always
   generate ksiginfo_t data and feed it to libpthread.

3. Add p_sigqueue to proc structure to hold shared signals which were
   blocked by all threads in the proc.

4. Add td_sigqueue to thread structure to hold all signals delivered to
   thread.

5. i386 and amd64 now return the POSIX-standard si_code; other arches will
   be fixed.

6. In this sigqueue implementation, the pending signal set is kept as before;
   an extra siginfo list holds additional siginfo_t data for signals.
   Kernel code still uses psignal(), which behaves as before and won't fail
   even under memory pressure; the only exception is that when deleting a
   signal, we should call sigqueue_delete to remove the signal from the
   sigqueue rather than using SIGDELSET. Currently no kernel code delivers
   a signal with additional data, so the kernel should be as stable as
   before. A ksiginfo can carry more information; for example, a signal can
   be allowed to be delivered but its siginfo data thrown away if memory is
   not enough. SIGKILL and SIGSTOP have a fast path in sigqueue_add because
   they can not be caught or masked.
   The sigqueue() syscall allows user code to queue a signal to a target
   process; if the resource is unavailable, EAGAIN will be returned as the
   specification says.
   Just before a thread exits, signal queue memory will be freed by
   sigqueue_flush.
   Currently, all signals are allowed to be queued, not only realtime signals.

Earlier patch reviewed by: jhb, deischen
Tested on: i386, amd64
2005-10-14 12:43:47 +00:00
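A sketch of the converted MD usage (typical shape, assumed rather than quoted; trap_addr is a stand-in for the faulting address):

    ksiginfo_t ksi;

    ksiginfo_init_trap(&ksi);
    ksi.ksi_signo = SIGSEGV;
    ksi.ksi_code = SEGV_MAPERR;
    ksi.ksi_addr = (void *)trap_addr;
    trapsignal(td, &ksi);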
Peter Grehan
ebc2aa7496 Temporary band-aid to fix a hang when a process executes Altivec instructions.
trap_subr.S:  declare a stub for the altivec-unavailable trap
              that does an absolute jump to the vector-assist trap.
              This is due to the fact that the vec-unavail trap
              doesn't start at a 256-byte boundary, so the trick of
              masking the bottom 8 bits of the link register to identify
              the interrupt doesn't work; let the vec-assist
              case handle Altivec-disabled for the time being.

              Note that this will be fixed in the future with a much
              smaller vector code-stub (< 16 bytes) that will allow
              use of strange vector offsets that are also present in
              4xx processors, and also allow smaller differences in
              vector codepaths on the G5.

trap.c:       Treat altivec-unavailable/assist process traps as SIGILL.
              Not quite correct, since altivec-assist should really be a panic,
              but it is fine for the moment due to the above measure.

machdep.c:    Install the stub code for the altivec-unavailable trap, and
              the standard trap code at the altivec-assist.

Reported by:	Andreas Tobler <toa at pop agri ch>
MFC after:	3 days
2005-07-30 11:14:31 +00:00
John Baldwin
4d9ae2662a Change an instance of md_savecrit to md_saved_msr that I missed. 2005-04-08 14:26:55 +00:00
John Baldwin
c6a37e8413 Divorce critical sections from spinlocks. Critical sections as denoted by
critical_enter() and critical_exit() are now solely a mechanism for
deferring kernel preemptions.  They no longer have any effect on
interrupts.  This means that standalone critical sections are now very
cheap as they are simply unlocked integer increments and decrements for the
common case.

Spin mutexes now use a separate KPI implemented in MD code: spinlock_enter()
and spinlock_exit().  This KPI is responsible for providing whatever MD
guarantees are needed to ensure that a thread holding a spin lock won't
be preempted by any other code that will try to lock the same lock.  For
now all archs continue to block interrupts in a "spinlock section" as they
did formerly in all critical sections.  Note that I've also taken this
opportunity to push a few things into MD code rather than MI.  For example,
critical_fork_exit() no longer exists.  Instead, MD code ensures that new
threads have the correct state when they are created.  Also, we no longer
try to fixup the idlethreads for APs in MI code.  Instead, each arch sets
the initial curthread and adjusts the state of the idle thread it borrows
in order to perform the initial context switch.

This change is largely a big NOP, but the cleaner separation it provides
will allow for more efficient alternative locking schemes in other parts
of the kernel (bare critical sections rather than per-CPU spin mutexes
for per-CPU data for example).

Reviewed by:	grehan, cognet, arch@, others
Tested on:	i386, alpha, sparc64, powerpc, arm, possibly more
2005-04-04 21:53:56 +00:00
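A hedged sketch of the new KPI on PowerPC, using the md_saved_msr field renamed in the entry just above (details may differ from the committed code):

    void
    spinlock_enter(void)
    {
        struct thread *td;

        td = curthread;
        if (td->td_md.md_spinlock_count == 0)
            td->td_md.md_saved_msr = intr_disable();
        td->td_md.md_spinlock_count++;
        critical_enter();
    }

    void
    spinlock_exit(void)
    {
        struct thread *td;

        td = curthread;
        critical_exit();
        td->td_md.md_spinlock_count--;
        if (td->td_md.md_spinlock_count == 0)
            intr_restore(td->td_md.md_saved_msr);
    }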
Peter Grehan
e38509dc05 physmem is a much better indicator for 'real' memory on PPC than Maxmem
since there are often significant holes in the memory map due to the
kernel, loader and OFW data structures not being included: Maxmem is
the highest available address, so it can be misleading.
2005-03-07 01:52:24 +00:00
Peter Grehan
8c8cb52737 Catch up with "physical memory" sysctl change.
(MFi386: rev 1.608)
2005-03-01 07:59:24 +00:00
Peter Grehan
0dab19853f Catch the case where the idle loop is entered with interrupts disabled,
causing a hard hang.
2005-02-28 09:49:00 +00:00