Commit Graph

983 Commits

Marcel Moolenaar
d7f827116f Fix a source of instability specific to an EPC userland. We return
to userland with interrupts disabled until we restore PSR. However,
it has been observed that interrupts do actually happen before they
are enabled again. This is a bit surprising and I don't know yet
what's going on exactly. Nevertheless, the code was not crafted
carefully enough to allow interrupts to happen and we could
clobber the kernel stack of another thread when interrupts did
happen.

This is what happens: we restore the (memory) stack pointer (sp)
and the register stack base prior to restoring ar.k6 and ar.k7.
This is not a problem if interrupts don't happen between setting
sp/ar.bspstore and ar.k6/ar.k7. Alas, interrupts can happen.
Since sp/ar.bspstore already point to the userland stacks, we
need to switch to the kernel stack on interrupt. However, ar.k6
and ar.k7 had not been set yet, which means that we were switching
to some unrelated kstack and happily clobbered the trapframe
present there if the thread to which that kstack belonged was
in kernel mode, or we could have our own trapframe clobbered
if that other thread later entered the kernel. Nasty either way.

We now carefully restore ar.k6 prior to restoring ar.bspstore and
likewise for ar.k7 and sp. All we need is the guarantee that an
interrupt does not clobber ar.k6 or ar.k7 before we're back in
userland. That has been achieved by restoring ar.k6/ar.k7
unconditionally (see exception.s).

While here, remove the disabling of interrupts on EPC entry. It
was added as a way to "resolve" the crashes until it was understood
what was going on. I think I achieved the latter, so we can remove
the patch. Note that setting up a trapframe with interrupts
enabled has its own share of corner cases, but it's better to
properly fix those than to keep a mostly wrong patch around
because we're afraid to remove it...

Approved by: re@ (blanket)
2003-05-24 22:53:10 +00:00
Marcel Moolenaar
a7b90d80fc Be more careful how we restore interrupts. Don't rewrite most of the
PSR only to achieve setting PSR.i back to its previous value. It
makes it impossible to change any of the 30+ other unrelated bits
when done between intr_disable() and intr_restore(). That's bad.

Instead have intr_disable() return 1 when interrupts were previously
enabled and 0 otherwise and only enable interrupts in intr_restore()
when given a non-0 value.

This change specifically disallows using intr_restore() to disable
interrupts. The reason is simple: interrupts only need to be restored
after they are being disabled, which means that intr_restore() is
called with interrupts disabled and we only need to enable them if
they were previously enabled.

This change does not fix any bugs, other than that it bugged me...

Approved by: re@ (blanket)
2003-05-24 21:44:24 +00:00
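
A minimal C sketch of the new contract (not the committed ia64 code; the
raw PSR asm and the IA64_PSR_I constant are assumptions borrowed from the
ia64 headers):

    static __inline int
    intr_disable(void)
    {
            register_t psr;

            /* Read PSR, then clear PSR.i; report the previous state. */
            __asm __volatile("mov %0=psr;;" : "=r"(psr));
            __asm __volatile("rsm psr.i;;");
            return ((psr & IA64_PSR_I) ? 1 : 0);
    }

    static __inline void
    intr_restore(int ie)
    {
            /* Only ever (re)enable; never use this to disable interrupts. */
            if (ie)
                    __asm __volatile("ssm psr.i;; srlz.d;;");
    }

Typical use is "ie = intr_disable(); ... intr_restore(ie);"; the code in
between is now free to modify any of the other PSR bits.
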
Marcel Moolenaar
95f2dbba40 Consistently use the same metric to differentiate between kernel mode
and user mode. We need to take into account that the EPC syscall path
introduces a grey area in which one can argue either way, including a
third: neither.

We now use the region in which the IP address lies. Regions 5, 6 and 7
are kernel VA regions and if the IP lies in any of those regions we
assume we're in kernel mode. Hence, we can be in kernel mode even if
we're not on the kernel stack and/or have user privileges. There're
gremlins living in the twilight zone :-)

For the EPC syscall path this particularly means that the process
leaves user mode the moment it calls into the gateway page. This
makes the most sense because from a process' point of view the call
represents a request to the kernel for some service and that service
has been performed if the call returns. With the metric we picked,
this also means that we're back in user mode IFF the call returns.

Approved by: re@ (blanket)
2003-05-24 21:16:19 +00:00
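
A sketch of the metric, assuming the usual ia64 layout where the region
number is the top 3 bits of a 64-bit virtual address (the helper name is
made up for illustration):

    #include <stdint.h>

    static __inline int
    ia64_ip_in_kernel(uint64_t ip)
    {
            /* Regions 5, 6 and 7 are kernel VA regions. */
            return ((ip >> 61) >= 5);
    }
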
Marcel Moolenaar
fb4aa34f3b Unconditionally restore ar.k7 (memory stack) and ar.k6 (register stack)
when returning from an interrupt. Both registers are used on interrupt
to switch to the right kernel stack, but other than that they are not
used. This means we only have to make sure they contain proper values
while in user mode. As such, we conditionally restored these registers
based on whether we returned to userland or not. A nice property of
conditionally restoring ar.k6 and ar.k7 is that it introduces two
invariants: ar.k6 always points to the bottom of the kernel stack and
ar.k7 always points to the top of the kernel stack (immediately below
the PCB we have there).

However, the EPC syscall path introduces an irregularity: there's no
"thin red line" between user and kernel. There's a grey area that's a
couple of instructions wide. Any interruption in that grey area is
bound to see an inconsistent state. One such state is that we're in
kernel space for all practical purposes, but we still need to have
ar.k6 and ar.k7 restored as if we're in userland.

Thus: restore ar.k6 and ar.k7 unconditionally at the cost of losing
a valuable invariant. Both registers now hold the extent of the
usable portion of the kernel stack at any interrupt nesting level,
which when in userland means the bottom and the top of the kstack.
2003-05-24 20:51:55 +00:00
Marcel Moolenaar
d1d7df1905 Fix an alpha inheritance bug:
On alpha, PAL is involved in context management and after wiring
the CPU (in alpha_init()) a context switch was performed to tell
PAL about the context. This was bogusly brought over to ia64
where it introduced bugs, because we restored the context from
a mostly uninitialized PCB.

The cleanup consists of:
o  Remove the unused arguments from ia64_init().
o  Don't return from ia64_init(), but instead call mi_startup()
   directly. This reduces the amount of muckery in assembly and
   also allows for the next bullet:
o  Save our current context prior to calling mi_startup(). The
   reason for this is that many threads are created from thread0
   by cloning the PCB. By saving our context in the PCB, we have
   something sane to clone. It also ensures that a cloned thread
   that does not alter the context in any way will return to
   the saved context, where we're ready for the eventuality with
   a nice, user unfriendly panic().

The cleanup fixes at least the following bugs:
o  Entering mi_startup() with the RSE in enforced lazy mode.
o  Re-execution of ia64_init() in certain "lab" conditions.

While here, add proper unwind directives to __start() so that
the unwinder knows it has reached the bottom of the (call) stack.

Approved by: re@ (blanket)
2003-05-24 00:17:34 +00:00
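
A rough, simplified sketch of the flow described above. The signature and
the savectx() convention are assumptions; a setjmp-like savectx() that
returns 0 on save and nonzero when a cloned context resumes through it is
assumed purely for illustration:

    void
    ia64_init(void)
    {
            /* ... early machine-dependent setup elided in this sketch ... */

            /* Give thread0's PCB sane contents for later cloning. */
            if (savectx(thread0.td_pcb) == 0)
                    mi_startup();           /* does not return */

            /* A clone that returned through the saved context lands here. */
            panic("cloned context returned into ia64_init()");
    }
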
Marcel Moolenaar
ca125f9c17 Fix a (new) source of instability:
When interrupting a kernel context, we don't need to switch stacks
(memory nor register). As such, we were also not restoring the
register stack pointer (ar.bspstore). This, however, fails to be
valid in one situation: when we interrupt a register stack switch as
is being done in restorectx(). The problem is that restorectx()
needs to have ar.bsp == ar.bspstore before it can assign the new
value to ar.bspstore. This is achieved by doing a loadrs prior to
assigning to ar.bspstore. If we take an interrupt in between the
loadrs and the assignment and we don't make sure we restore the
ar.bspstore prior to returning from the interrupt, we switch
stacks with possibly non-zero dirty registers, which means that
the new frame pointer (ar.bsp) will be invalid.

So, instead of jumping over the restoration of the register frame
pointer and related registers, we conditionalize it based on whether
we return to kernel context or user context. A future performance
tweak is possible by only restoring ar.bspstore when returning to
kernel mode *and* when the RSE is in enforced lazy mode. One cannot
assume ar.bsp == ar.bspstore if the RSE is not in enforced lazy mode
anyway.

While here (well, not quite) don't unconditionally assign to
ar.bspstore in exception_save. Only do that when we actually switch
stacks. It can only harm us to do it unconditionally.

Approved by: re@ (blanket)
2003-05-23 23:55:31 +00:00
Marcel Moolenaar
42b919d4a6 In swapctx(), put the RSE in enforced lazy mode before we flush the
register stack. There's nothing really wrong with flushing before
putting the RSE in enforced lazy mode, provided you don't depend on
ar.bspstore being equal to ar.bsp when the RSE has been put in
enforced lazy mode. The small window between the flush and setting
the RSE mode may be sufficient to have the RSE eagerly increase the
dirty region (and hence cause ar.bspstore != ar.bsp) or have an
interrupt that may even get the laziest RSE to do something.

Anyway: we don't depend on ar.bspstore being equal to ar.bsp, so
nothing was or is broken. But the code was non-intuitive and easily
confusing, which makes it a source of future bugs.

Note: the advantage of not depending on ar.bspstore is that there's
some resilience against an interrupted flushrs. Clobbering is limited
to stacked register contents only; the RSE addresses themselves are
not clobbered.

Approved: re@ (blanket)
2003-05-23 23:16:43 +00:00
Marcel Moolenaar
bfaccb767c o Fix a definite bogon: the dirty bit fault, instruction access
   fault and data access fault handlers install the PTE in question
   into the VHPT table. However, a post-increment was missing and we
   wrote the raw PTE data into the pagesize/access key field.
   This leaves a corrupt VHPT entry.
o  While here, remove the explicit cache purge. Insertion into
   the translation cache implicitly purges any overlapping entries.
o  Make sure there's a cycle break between the itc and the rfi.
o  Whitespace fixes.
2003-05-20 06:57:20 +00:00
Marcel Moolenaar
14d2ae56c7 Rename the "IA64 ITC" counter to "ITC" counter. We don't call the
"TSC" counter on i386 "I386 TSC".

Approved by: re@ (blanket)
2003-05-20 06:51:20 +00:00
Marcel Moolenaar
9b9ce577d4 Prevent corruption of the VHPT collision chain by protecting it with
a mutex. The only volatile chain operations are insertion and deletion
but since updating an existing PTE also updates the VHPT entry itself,
and we have the VHPT mutex in both other cases, we also lock when we
update an existing PTE even though no chain operation is involved.
Note that we perform the insertion and deletion carefully enough that
we don't need to lock traversals. If we need to lock traversals, we
also need to lock from the exception handler, which we can't without
creating a trapframe.

We're now able to withstand a -j8 buildworld. More work is needed to
withstand Murphy fields. In other words: we still have a bogon...

Approved by: re@ (blanket)
2003-05-20 02:52:41 +00:00
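
A sketch of the locking rule; the mutex and helper names are assumptions:

    static void
    pmap_insert_vhpt_locked(struct ia64_lpte *pte, vm_offset_t va)
    {
            /* Insertion/deletion rewrite the collision chain: serialize. */
            mtx_lock(&pmap_vhptmutex);
            pmap_enter_vhpt(pte, va);
            mtx_unlock(&pmap_vhptmutex);
    }

Updates of an existing PTE take the same lock because they rewrite the
VHPT entry itself; traversals (and the exception handler) stay lock-free.
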
Alexander Kabaev
980ded9a7d sys/sys/limits.h:
- Fix visibility test for LONG_BIT and WORD_BIT.  `#if defined(__FOO_VISIBLE)'
   is always wrong because __FOO_VISIBLE is always defined (to 0 for
   invisibility).

sys/<arch>/include/limits.h
sys/<arch>/include/_limits.h:

 - Style fixes.

Submitted by:	bde
Reviewed by:	bsdmike
Approved by:	re (scottl)
2003-05-19 20:29:07 +00:00
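
The broken and the fixed idiom, using LONG_BIT as the example and assuming
__XSI_VISIBLE is the relevant visibility knob:

    /* Wrong: __XSI_VISIBLE is always defined (to 0 for "invisible"). */
    #if defined(__XSI_VISIBLE)
    #define LONG_BIT        __LONG_BIT
    #endif

    /* Right: test the value, not whether the macro is defined. */
    #if __XSI_VISIBLE
    #define LONG_BIT        __LONG_BIT
    #endif
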
Marcel Moolenaar
b8c4149cff Turn pmap_install_pte() into a critical section. We better not get
interrupted while writing into the VHPT table. While here, make sure
memory accesses are properly ordered. Tag invalidation must happen
first so that the hardware VHPT walker will not be able to match
this entry while we're updating it, and we have to make sure the
new tag gets written only after the PTE is completely updated.

Approved by: re (blanket)
2003-05-19 08:02:36 +00:00
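
A sketch of the ordering described above; the entry layout, the invalid-tag
encoding and the ia64_mf()/ia64_ttag() accessors are assumptions:

    critical_enter();                   /* don't get interrupted mid-update */
    vhpte->tag = 1UL << 63;             /* invalidate: the walker can't match */
    ia64_mf();                          /* invalidation before the PTE changes */
    vhpte->pte = new_pte;
    vhpte->itir = new_itir;
    ia64_mf();                          /* PTE fully written before the new tag */
    vhpte->tag = ia64_ttag(va);         /* publish the entry again */
    critical_exit();
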
Marcel Moolenaar
a75b99ea2d Unconditionally set pcb_current_pmap. WIP versions of the code
previously committed cleared pcb_current_pmap prior to changing
the region registers, but that was removed before committing.
Since we don't normally (at all?) pass a NULL pointer, the bug
was mostly harmless. Fix it while I'm here...

I'm here because we need to have data serialization after writing
to the region registers. Not doing so was likely the cause of the
hangs we were experiencing. General exceptions in cpu_switch may
also be caused by the lack of serialization.

Approved by: re (blanket)
2003-05-19 06:05:30 +00:00
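
The pattern being fixed, as a hedged sketch (IA64_RR_BASE(), ia64_set_rr()
and ia64_srlz_d() are assumed accessors along the lines of the ia64 headers):

    /* Program a region register, then serialize before any dependent use. */
    ia64_set_rr(IA64_RR_BASE(rgn), rr_value);
    ia64_srlz_d();
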
Marcel Moolenaar
dc0bde0f18 pmap_install() needs to be atomic WRT to context switching. Protect
switching user regions (region 0-4) with schedlock. Avoid unnecessary
recursion on schedlock by moving the core functionality to another
function (pmap_switch()) where we assert schedlock is held. Turn
pmap_install() into a wrapper that grabs schedlock. This minimizes
the number of callsites that need to be changed.
Since we already have schedlock in cpu_switch() and cpu_throw(),
have them call pmap_switch() directly. These were also the only two
calls to pmap_install() outside pmap.c, so make pmap_install() static
and remove its prototype from pmap.h.

Approved by: re (blanket)
2003-05-19 04:16:30 +00:00
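
A sketch of the split; the bookkeeping and the pmap_program_regions()
helper are illustrative only:

    pmap_t
    pmap_switch(pmap_t pm)
    {
            pmap_t prev;

            mtx_assert(&sched_lock, MA_OWNED);
            prev = PCPU_GET(current_pmap);          /* illustrative state */
            if (prev != pm) {
                    pmap_program_regions(pm);       /* write rr0-rr4 for pm */
                    PCPU_SET(current_pmap, pm);
            }
            return (prev);
    }

    static void
    pmap_install(pmap_t pm)
    {
            mtx_lock_spin(&sched_lock);
            pmap_switch(pm);
            mtx_unlock_spin(&sched_lock);
    }
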
Marcel Moolenaar
040c5b92bb Remove unused files. cpu_switch() and cpu_throw(), normally in swtch.s,
can be found in machdep.c.

Approved: re@
2003-05-17 04:55:04 +00:00
Marcel Moolenaar
f2c49dd248 Revamp of the syscall path, exception and context handling. The
prime objectives are:
o  Implement a syscall path based on the epc instruction (see
   sys/ia64/ia64/syscall.s).
o  Revisit the places where we need to save and restore registers
   and define those contexts in terms of the register sets (see
   sys/ia64/include/_regset.h).

Secondary objectives:
o  Remove the requirement to use contigmalloc for kernel stacks.
o  Better handling of the high FP registers for SMP systems.
o  Switch to the new cpu_switch() and cpu_throw() semantics.
o  Add a good unwinder to reconstruct contexts for the rare
   cases we need to (see sys/contrib/ia64/libuwx)

Many files are affected by this change. Functionally it boils
down to:
o  The EPC syscall doesn't preserve registers it does not need
   to preserve and places the arguments differently on the stack.
   This affects libc and truss.
o  The address of the kernel page directory (kptdir) had to
   be unstaticized for use by the nested TLB fault handler.
   The name has been changed to ia64_kptdir to avoid conflicts.
   The renaming affects libkvm.
o  The trapframe only contains the special registers and the
   scratch registers. For syscalls using the EPC syscall path
   no scratch registers are saved. This affects all places where
   the trapframe is accessed. Most notably the unaligned access
   handler, the signal delivery code and the debugger.
o  Context switching only partly saves the special registers
   and the preserved registers. This affects cpu_switch() and
   triggered the move to the new semantics, which additionally
   affects cpu_throw().
o  The high FP registers are either in the PCB or on some
   CPU. Context switching for them is done lazily. This affects
   trap().
o  The mcontext has room for all registers, but not all of them
   have to be defined in all cases. This mostly affects signal
   delivery code now. The *context syscalls are as of yet still
   unimplemented.

Many details went into the removal of the requirement to use
contigmalloc for kernel stacks. The details are mostly CPU
specific and limited to exception_save() and exception_restore().
The few places where we create, destroy or switch stacks were
mostly simplified by not having to construct physical addresses
and additionally saving the virtual addresses for later use.

Besides more efficient context saving and restoring, which of
course yields a noticeable speedup, this also fixes the dreaded
SMP bootup problem as a side-effect, the details of which are
still not fully understood.

This change includes all the necessary backward compatibility
code to have it handle older userland binaries that use the
break instruction for syscalls. Support for break-based syscalls
has been pessimized in favor of a clean implementation. Due to
the overall better performance of the kernel, this will still
be noticed as an improvement, if it's noticed at all.

Approved by: re@ (jhb)
2003-05-16 21:26:42 +00:00
Marcel Moolenaar
baf74b8876 o In pmap_install, don't prevent switching the pmap if we're
switching to kernel_pmap. The pmap is not special enough.
o  Clear the active bit on the pmap we're switching out.
o  Fix some nearby style(9) bugs.

Approved by: re@
2003-05-16 07:57:44 +00:00
Marcel Moolenaar
906f065725 Indent a comment. This makes 1.100.
Still approved by: re@ (blanket)
2003-05-16 07:05:08 +00:00
Marcel Moolenaar
164d4986fd Turn pmap_growkernel() into a critical section. While here, initialize
kernel_vm_end in pmap_bootstrap. Don't delay the initialization until
we need to grow the kernel VM space. This BTW happens twice before
we enter either single- or multi-user mode. Don't adjust kernel_vm_end
while growing based on whether the KPT contains a non-NULL entry. We
trust kernel_vm_end to be correct and we make sure it's still correct
after growing.
Define virtual_avail and virtual_end in terms of VM_MIN_KERNEL_ADDRESS
and VM_MAX_KERNEL_ADDRESS (resp). Don't hardcode region knowledge.
2003-05-16 07:03:15 +00:00
Marcel Moolenaar
8cc31ae5be Revamp the RID allocation code:
o  Limit the size of the region ID map to 64KB. This gives a bitmap
   that is large enough to keep track of 2^19 numbers. The minimal map
   size is 32KB. The reason we limit the map size is that processor
   models may have implemented a 24-bit region ID, which would give
   a 2MB bitmap while the maximum number of allocations is always
   less than PID_MAX*5, which is less than 2^19.
o  Allocate all region IDs up-front. Reserving slightly more RIDs
   than a process needs (3 for ia64 native and 1 for ia32) is
   preferable to the call to pmap_ensure_rid(), where RIDs are
   allocated on demand; on SMP systems the latter may lead to a
   race condition.
o  When allocating a region ID, don't use arc4random(). We're not
   interested in randomness or uniform distribution across the
   spectrum. We only need uniqueness. Random numbers may easily
   collide when the number of allocated RIDs is high, creating a
   possibly unbounded retry rate.
2003-05-16 06:40:40 +00:00
Marcel Moolenaar
75189cff08 Move the conditional definition of KSTACK_MAX_PAGES up ahead where
it's more visible.

Approved by: re@ (blanket)
2003-05-16 06:17:34 +00:00
Marcel Moolenaar
794518cd6d This file creates register sets based on the runtime specification.
The advantage of using register sets is that you don't focus on each
register separately, but instead introduce a level of abstraction.
This reduces the chance of errors, and also simplifies the code.
The register sets form the basis of everything register related.
The sets in this file are:

struct _special
contains all of the control related registers, such as instruction
pointer and stack pointer. It also contains interrupt specific registers
like the faulting address. The set is roughly split into 3 groups. The
first contains the registers that define a context or thread. This is
the only group the kernel needs to switch threads.  The second group
contains the registers needed, in addition to the first group, to switch
userland threads. This group contains the thread pointer and the FP control
register. The third group contains those registers we need for exception
handling; they are used on top of the first two groups.

struct _callee_saved, struct _callee_saved_fp
These sets contain the preserved registers, including the NaT after
spilling. The general registers (including branch registers) are
separated from the FP registers for ptrace(2).

struct _caller_saved, struct _caller_saved_fp
These sets contain the scratch registers based on SDM 2.1. This means that
both ar.csd and ar.ccd are included here, even though they contain ia32
segment register descriptions. We keep separate NaT bits for scratch and
preserved registers, because they are never saved/restored at the same
time.

struct _high_fp
The upper 96 FP registers that can be enabled/disabled separately on
the CPU from the lower 32 FP registers. Due to the size of this set,
we treat them specially, even though they are defined as scratch
registers.

2003-05-15 08:36:03 +00:00
Marcel Moolenaar
4bae872201 This file contains elementary context related functions used to
save and restore "sets" of registers in various places.
The restorectx and swapctx functions are used by cpu_switch()
and deal with the special registers, as well as the preserved
registers.
The *callee_saved* functions are used to save and restore the
preserved registers (integer and floating-point). They are
useful for signal delivery and ptrace support.
The save_high_fp and restore_high_fp functions are used to
"load" and "unload" the high FP registers to and from the CPU as
part of lazy context switching.
The ia32 specific context functions have been kept with the ia32
code.

Approved by: re@ (blanket)
2003-05-15 08:08:32 +00:00
Marcel Moolenaar
1d67adffd6 This file contains the code that implements the syscall path based
on the epc instruction. The epc instruction, given the permissions
of the page in which the epc is located, allows the privilege level
to be increased with little or no overhead. The previous privilege
level is recorded in the current frame marker and is restored by
a regular (function) return.
Since the epc instruction has to live in a page with non-standard
properties, we hardwire a "gateway" page in the address space. The
address of the gateway page is exported to userland in ar.k7. This
allows us to rewire the page without breaking the ABI.
The syscall stubs in libc are regular function calls that slightly
differ from the normal runtime. The difference is mostly to simplify
the stubs themselves by moving some of the logic to the kernel.
The libc stubs call into the gateway page (offset 0), from where the
kernel trampolines to the code that sets up a minimal trapframe and
arranges to execute from the kernel stack.
The way back is basically the same. The kernel returns to the gateway
page, whereby privilege is dropped, and jumps back to the syscall
stub.
Only the special registers are saved in the trapframe. None of the
scratch registers are preserved and since the kernel follows the
same runtime model, none of the preserved registers are saved.
Future enhancements can include the implementation of lightweight
syscalls, where kernel functions are performed without setting up
a trapframe. Good candidates are the *context syscalls for example.

Now that there's a gateway page from which code can be executed in
a non-privileged context, we also have the ideal place to put the
signal trampolines. By moving the signal trampolines from the user
stack to the gateway page, we open up the doors to unexecutable
stacks. The gateway page contains signal trampolines for both the
"legacy" break-based syscall code and the new and improved epc-
based syscall code.

Approved: re@ (blanket)
2003-05-15 07:51:22 +00:00
John Baldwin
90af4afacb - Merge struct procsig with struct sigacts.
- Move struct sigacts out of the u-area and malloc() it using the
  M_SUBPROC malloc bucket.
- Add a small sigacts_*() API for managing sigacts structures: sigacts_alloc(),
  sigacts_free(), sigacts_copy(), sigacts_share(), and sigacts_shared().
- Remove the p_sigignore, p_sigacts, and p_sigcatch macros.
- Add a mutex to struct sigacts that protects all the members of the struct.
- Add sigacts locking.
- Remove Giant from nosys(), kill(), killpg(), and kern_sigaction() now
  that sigacts is locked.
- Several in-kernel functions such as psignal(), tdsignal(), trapsignal(),
  and thread_stopped() are now MP safe.

Reviewed by:	arch@
Approved by:	re (rwatson)
2003-05-13 20:36:02 +00:00
Alexander Kabaev
0eda4c08a5 Style fixes.
Remove DBL_DIG, DBL_MIN, DBL_MAX and their FLT_ counterparts, they
were marked for deprecation ever since SUSv1 at least.
Only define ULLONG_MIN/MAX and LLONG_MAX if long long type is
supported.
Restore a lost comment in MI _limits.h file and remove it from
sys/limits.h where it does not belong.
2003-05-04 22:13:04 +00:00
Marcel Moolenaar
e126260569 Fix C99 victim: the accepted character '0 must now be typed as '0'. 2003-05-03 23:05:16 +00:00
Marcel Moolenaar
aea4a02702 Option KADB does not exist. It came from alpha, where it still exists. 2003-05-02 20:34:15 +00:00
Marcel Moolenaar
367165975d Kill MID_MACHINE, it's a.out specific; the only platform that supports
it is i386. All of the other platforms should remove it too.
	-- peter@
2003-04-30 23:16:33 +00:00
John Baldwin
d90e753aa8 Range check the syscall number before looking it up in the syscallnames[]
array.

Submitted by:	pho
2003-04-30 17:59:27 +00:00
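
A sketch of the guard (the surrounding printing context is made up;
SYS_MAXSYSCALL bounds the generated table):

    if (code >= 0 && code < SYS_MAXSYSCALL)
            printf("syscall %s", syscallnames[code]);
    else
            printf("syscall #%d (out of range)", code);
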
Alexander Kabaev
104a9b7e3e Deprecate machine/limits.h in favor of new sys/limits.h.
Change all in-tree consumers to include <sys/limits.h>

Discussed on:	standards@
Partially submitted by: Craig Rodrigues <rodrigc@attbi.com>
2003-04-29 13:36:06 +00:00
Marcel Moolenaar
c283dd9dad Revamp the newbus functions:
o  do not use the in* and out* functions. These functions are used by
   legacy drivers and thus must have ia32 compatible behaviour. Hence,
   they  need to have fences. Using these functions for newbus would
   then pessimize performance.
o  remove the conditional compilation of PIO and/or MEMIO support. It's
   a PITA without having any significant benefit. We always support them
   both. Since there are no I/O ports on ia64 (they are simulated by the
   chipset by translating memory mapped I/O to predefined uncacheable
   memory regions) the only difference between PIO and MEMIO is in the
   address calculation. There should be enough ILP that can be exploited
   here that making these computations compile-time conditional is not
   worth it. We now also don't use the read* and write* functions.
o  Add the missing *_8 variants. They were missing, although not missed.
   It's for completeness.
o  Do not add the fences that were present in the low-level support
   functions here. We're using uncacheable memory, which means that
   accesses are in program order. Change the barrier implementation
   to not only do a memory fence, but also an acceptance fence. This
   should more reliably synchronize drivers with the hardware. The
   memory fence enforces ordering, but does not imply visibility (i.e.
   the access has not necessarily happened yet). This is what the
   acceptance fence deals with.

cpufunc.h cleanup:
o  Remove the low-level memory mapped I/O support functions. They are
   not used. Keep the low-level I/O port access functions for legacy
   drivers and add fences to ensure ia32 compatibility.
o  Remove the syscons specific functions now that we have moved the
   proper definitions where they belong.
o  Replace the ia64_port_address() and ia64_memory_address() functions
   with macros. There's a bigger chance inline functions get inlined
   when there aren't function calls, and the calculations are simple
   enough to do with macros.

Replace the one reference to ia64_memory_address() in mp_machdep.c to
use the macro.
2003-04-29 09:50:03 +00:00
John Baldwin
7ff022c485 - Push down Giant into the sysarch() calls that still need Giant.
- Standardize on EINVAL rather than EOPNOTSUPP if the sysarch op value is
  invalid.
2003-04-25 20:04:02 +00:00
John Baldwin
d8ca78d02f Regen. 2003-04-25 15:59:44 +00:00
John Baldwin
9fb3809a3a Oops, the thr_* and jail_attach() syscall entries should be NOPROTO rather
than STD.
2003-04-25 15:59:18 +00:00
Daniel Eischen
1328e1c4be Add an argument to get_mcontext() which specifies whether the
syscall return values should be cleared.  The system calls
getcontext() and swapcontext() want to return 0 on success
but these contexts can be switched to at a later time so
the return values need to be cleared in the saved register
sets.  Other callers of get_mcontext() would normally want
the context without clearing the return values.

Remove the i386-specific context saving from the KSE code.
get_mcontext() is not i386-specific any more.

Fix a bad pointer in the alpha get_mcontext() code.  The
context was being bcopy()'d from &td->tf_frame, but tf_frame
is itself a pointer, so the thread was being copied instead.
Spotted by jake.

Glanced at by:  jake
Reviewed by:    bde (months ago)
2003-04-25 01:50:30 +00:00
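
A sketch of the two call styles; the flag name follows the later
sys/ucontext.h spelling and should be treated as an assumption here:

    /* getcontext()/swapcontext(): zero the saved syscall return values. */
    get_mcontext(td, &ucp->uc_mcontext, GET_MC_CLEAR_RET);

    /* Other callers keep the register values exactly as they were. */
    get_mcontext(td, &ucp->uc_mcontext, 0);
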
John Baldwin
9bc65d35f2 Regen. 2003-04-24 20:50:57 +00:00
John Baldwin
d46b3412dc Fix the thr_create() entry by adding a trailing \. Also, sync up the
MP safe flag for thr_* with the main table.
2003-04-24 20:49:46 +00:00
Alexander Kabaev
6fd839f9c7 Add a new sys/limits.h file which in turn depends on machine/_limits.h
to get actual constant values. This is in preparation for machine/limits.h
retirement.

Discussed on:	standards@
Submitted by:	Craig Rodrigues <rodrigc@attbi.com>  (*)
Modified by:	kan
2003-04-23 21:41:59 +00:00
John Baldwin
fe8cdcae87 - Replace inline implementations of sigprocmask() with calls to
kern_sigprocmask() in the various binary compatibility emulators.
- Replace calls to sigsuspend(), sigaltstack(), sigaction(), and
  sigprocmask() that used the stackgap with calls to the corresponding
  kern_sig*() functions instead without using the stackgap.
2003-04-22 18:23:49 +00:00
David Xu
5b70587b8a Remove single threading detection code; this code really should be
replaced by thread_user_enter(), but currently we don't want to enable
this in trap().
2003-04-22 03:17:41 +00:00
Marcel Moolenaar
148eac48f1 Don't use the tpa instruction to implement pmap_kextract. The tpa
instruction requires that a translation is present in the TC. This
may trigger a TLB miss and a subsequent call to vm_fault().
This implementation is deliberately non-inline for debugging and
profiling purposes. Partial or full inlining should eventually be
done.

Valuable insights by: jake
2003-04-22 01:48:43 +00:00
Hidetoshi Shimokawa
092cd06fcd Add FireWire drivers to GENERIC. 2003-04-21 16:44:05 +00:00
John Baldwin
889a6b5845 Use the proc lock to protect p_singlethread and a P_WEXIT test. This
fixes a couple of potential KSE panics on non-i386 arch's that weren't
holding the proc lock when calling thread_exit().
2003-04-18 20:20:00 +00:00
Marcel Moolenaar
146324b0d2 Add the EHCI host controller. 2003-04-16 01:29:08 +00:00
Maxime Henrion
7a648f56cf I deserve a big pointy hat for having missed all those references
to bus_dmasync_op_t in my last commit.
2003-04-10 23:50:06 +00:00
Maxime Henrion
141bacb048 Change the operation parameter of bus_dmamap_sync() from an
enum to an int and redefine the BUS_DMASYNC_* constants as
flags.  This allows us to specify several operations in one
call to bus_dmamap_sync() as in NetBSD.
2003-04-10 23:03:33 +00:00
Mike Barcroft
fd7a8150fb o In struct prison, add an allprison linked list of prisons (protected
by allprison_mtx), a unique prison/jail identifier field, two path
  fields (pr_path for reporting and pr_root vnode instance) to store
  the chroot() point of each jail.
o Add jail_attach(2) to allow a process to bind to an existing jail.
o Add change_root() to perform the chroot operation on a specified
  vnode.
o Generalize change_dir() to accept a vnode, and move namei() calls
  to callers of change_dir().
o Add a new sysctl (security.jail.list) which is a group of
  struct xprison instances that represent a snapshot of active jails.

Reviewed by:	rwatson, tjr
2003-04-09 02:55:18 +00:00
Dag-Erling Smørgrav
fe58453891 Introduce an M_ASSERTPKTHDR() macro which performs the very common task
of asserting that an mbuf has a packet header.  Use it instead of hand-
rolled versions wherever applicable.

Submitted by:	Hiten Pandya <hiten@unixdaemons.com>
2003-04-08 14:25:47 +00:00
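
Usage, next to the hand-rolled form it replaces (the KASSERT text is a
sketch, not the exact macro body):

    M_ASSERTPKTHDR(m);
    /* roughly equivalent to: */
    KASSERT(m != NULL && (m->m_flags & M_PKTHDR),
        ("%s: no mbuf packet header", __func__));
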
Marcel Moolenaar
302d51fcb2 Remove COMPAT_FREEBSD4. It's impossible because FreeBSD 4 does not
run on ia64 at all.
2003-04-08 08:32:00 +00:00
Marcel Moolenaar
ca13bfade5 Remove the 32KB VHPT section from the kernel image. We don't really
use it because we allocate a VHPT based on the size of the physical
memory and even if the allocated VHPT is 32KB, we don't use the in-
image section for it. Since the VHPT must be naturally aligned, we
save 48K on average (due to alignment).
Consequently, we start off with the VHPT disabled (it is assumed
the VHPT is disabled because the EFI loader runs without memory
address translation and thus has no need to setup the VHPT). It's
probably a good idea to explicitly disable the VHPT if we make the
use of the VHPT optional.
2003-04-06 21:31:26 +00:00
Marcel Moolenaar
9fac9065b5 Also set the access bit in the PTE when we get a data dirty bit fault.
This avoids an immediate access bit fault after we've serviced the
dirty bit fault in case the access bit was unset. This typically happens
for newly allocated memory that's being zeroed and is thus very common.
2003-04-06 05:55:36 +00:00
Marcel Moolenaar
2206cb596f Include <geom/geom_disk.h> and stop including <sys/disk.h>. The
former gives us 'struct disk'.
2003-04-05 21:14:05 +00:00
Dag-Erling Smørgrav
9f45b2da8f Define ovbcopy() as a macro which expands to the equivalent bcopy() call,
to take care of the KAME IPv6 code which needs ovbcopy() because NetBSD's
bcopy() doesn't handle overlap like ours.

Remove all implementations of ovbcopy().

Previously, bzero was a function pointer on i386, to save a jmp to
bzero_vector.  Get rid of this microoptimization as it only confuses
things, adds machine-dependent code to an MD header, and doesn't really
save all that much.

This commit does not add my pagezero() / pagecopy() code.
2003-04-04 17:29:55 +00:00
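
The replacement is a one-line macro, since FreeBSD's bcopy() already
handles overlapping regions:

    #define ovbcopy(f, t, l)        bcopy((f), (t), (l))
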
Poul-Henning Kamp
891619a66d Use bioq_flush() to drain a bio queue with a specific error code.
Retain the mistake of not updating the devstat API for now.

Spell bioq_disksort() consistently with the remaining bioq_*().

#include <geom/geom_disk.h> where this is more appropriate.
2003-04-01 15:06:26 +00:00
Jeff Roberson
a0704f9de9 - Add thr and umtx system calls. 2003-04-01 01:15:56 +00:00
Jeff Roberson
b8db34d280 - Define a new md function 'casuptr'. This atomically compares and sets
a pointer that is in user space.  It will be used as the basic primitive
   for a kernel supported user space lock implementation.
 - Implement this function in x86's support.s
 - Provide stubs that return -1 in all other architectures.  Implementations
   will follow along shortly.

Reviewed by:	jake
2003-04-01 00:18:55 +00:00
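
The shape of the new MD primitive and of the temporary stubs (treat the
exact types as an assumption):

    /* Atomically compare-and-set a pointer-sized word in user space. */
    intptr_t casuptr(intptr_t *p, intptr_t old, intptr_t new);

    /* Stub for architectures without an implementation yet. */
    intptr_t
    casuptr(intptr_t *p, intptr_t old, intptr_t new)
    {
            return (-1);
    }
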
Jeff Roberson
a9b34138dc - Add a placeholder for sigwait 2003-03-31 23:36:40 +00:00
Jeff Roberson
4093529dee - Move p->p_sigmask to td->td_sigmask. Signal masks will be per thread with
a follow on commit to kern_sig.c
 - signotify() now operates on a thread since unmasked pending signals are
   stored in the thread.
 - PS_NEEDSIGCHK moves to TDF_NEEDSIGCHK.
2003-03-31 22:49:17 +00:00
Jeff Roberson
1bf4700bff - Change trapsignal() to accept a thread and not a proc.
- Change all consumers to pass in a thread.

Right now this does not cause any functional changes but it will be important
later when signals can be delivered to specific threads.
2003-03-31 22:02:38 +00:00
Jeff Roberson
772e5d8d88 - Use sigexit() instead of twiddling the signal mask, catch, ignore, and
action bits to allow SIGILL to work as expected.  This brings this file in
   line with other architectures.
2003-03-31 21:40:47 +00:00
David Schultz
8ee63f6eae Correct LDBL_* constants based on values from i386. 2003-03-27 20:38:22 +00:00
Jake Burkholder
227f9a1c58 - Add vm_paddr_t, a physical address type. This is required for systems
where physical addresses are larger than virtual addresses, such as i386s
  with PAE.
- Use this to represent physical addresses in the MI vm system and in the
  i386 pmap code.  This also changes the paddr parameter to d_mmap_t.
- Fix printf formats to handle physical addresses >4G in the i386 memory
  detection code, and due to kvtop returning vm_paddr_t instead of u_long.

Note that this is a name change only; vm_paddr_t is still the same as
vm_offset_t on all currently supported platforms.

Sponsored by:	DARPA, Network Associates Laboratories
Discussed with:	re, phk (cdevsw change)
2003-03-25 00:07:06 +00:00
Ruslan Ermilov
ab0f83bd03 Remove bitrot associated with `maxusers'.
Submitted by:	bde
2003-03-22 14:18:23 +00:00
Maxime Henrion
fd1b2ab0c9 Use atomic operations to increment and decrement the refcount
in busdma tags.  There are currently no tags shared across
different drivers so this isn't needed at the moment, but it
will be required when we'll have a proper newbus method to get
the parent busdma tag.
2003-03-20 19:45:26 +00:00
Jake Burkholder
5501d40bb9 Made the prototypes for pmap_kenter and pmap_kremove MD. These functions
are machine dependent because they are not required to update the tlb when
mappings are added or removed, and doing so is machine dependent.
In addition, an implementation may require that pages mapped with pmap_kenter
have a backing vm_page_t, which is not necessarily true of all physical
pages, and so may choose to pass the vm_page_t to pmap_kenter instead of the
physical address in order to make this requirement clear.
2003-03-16 04:16:03 +00:00
Maxime Henrion
40b63da2a9 Bah, get it right this time and add sys/lock.h before sys/mutex.h. 2003-03-14 13:30:31 +00:00
Maxime Henrion
7541142438 Oops, add missing includes. Pass me the pointy hat.
Reported by:	jake
2003-03-14 00:04:37 +00:00
Maxime Henrion
c0796d1cb4 Grab Giant around calls to contigmalloc() and contigfree() so
that drivers converted to be MP safe don't have to deal with it.
2003-03-13 17:18:48 +00:00
Maxime Henrion
ea458bbcdb Memory allocated with contigmalloc() should be freed with
contigfree(), not with free().
2003-03-13 17:10:54 +00:00
Marcel Moolenaar
e9523c31c5 Fix two rounds of breakages and cleanup. Remove the sccdebug sysctl
while I'm here and garbage collect dead code (ssc_clone). Define
d_maxsize as DFLTPHYS for now because that's what it will be if we
don't define it.
2003-03-10 01:58:31 +00:00
Poul-Henning Kamp
60794e0478 Centralize the devstat handling for all GEOM disk device drivers
in geom_disk.c.

As a side effect this makes a lot of #include <sys/devicestat.h>
lines not needed and some biofinish() calls can be reduced to
biodone() again.
2003-03-08 08:01:31 +00:00
Marcel Moolenaar
cafd6dbd76 Fix threaded applications on ia64 that are linked dynamically. We did
not save (restore) the global pointer (GP) in the jmpbuf in setjmp
(longjmp) because it's not needed in general. GP is considered a
scratch register at callsites and hence is always restored after a
call (when it's possible that the call resolves to a symbol in a
different loadmodule; otherwise GP does not have to be saved and
restored at all), including calls to setjmp/longjmp. There's just
one problem with this now that we use setjmp/longjmp for context
switching: A new context must have GP defined properly for the
thread's entry point. This means that we need to put GP in the
jmpbuf and consequently that we have to restore it in longjmp.
This automatically requires us to save it as well.

When setjmp/longjmp isn't used for context switching, this can be
reverted again.
2003-03-05 04:39:24 +00:00
Marcel Moolenaar
a402169a8e ABI breaker: Move the J_SIGMASK field in the jmpbuf before
the J_SIG0 field. While here, rename J_SIG0 to J_SIGSET and
remove J_SIG1. The main reason for this change is that the
128-bit sigset_t is now aligned on a 16-byte boundary, which
allows us to use 16-byte atomic loads and stores on CPUs that
support it. The removal of J_SIG1 is done to avoid confusion:
it is never accessed and should not be. Renaming J_SIG0 to
J_SIGSET is the icing on the cake that's better done now than
later.
2003-03-05 03:30:54 +00:00
John Baldwin
263067951a Replace calls to WITNESS_SLEEP() and witness_list() with equivalent calls
to WITNESS_WARN().
2003-03-04 21:03:05 +00:00
Poul-Henning Kamp
7ac40f5f59 Gigacommit to improve device-driver source compatibility between
branches:

Initialize struct cdevsw using C99 sparse initialization and remove
all initializations to default values.

This patch is automatically generated and has been tested by compiling
LINT with all the fields in struct cdevsw in reverse order on alpha,
sparc64 and i386.

Approved by:    re(scottl)
2003-03-03 12:15:54 +00:00
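
The new flavor, for a hypothetical "foo" driver (field subset chosen for
illustration; unnamed fields now simply take their defaults):

    static struct cdevsw foo_cdevsw = {
            .d_open =       foo_open,
            .d_close =      foo_close,
            .d_read =       foo_read,
            .d_write =      foo_write,
            .d_name =       "foo",
    };
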
Alan Cox
72c3aad7e8 MFi386 revision 1.88
Remove some long unused declarations.
2003-03-01 10:02:11 +00:00
David Xu
1d56863dd2 Needn't kse.h 2003-02-27 03:16:35 +00:00
Julian Elischer
ac2e415327 Change the process flags P_KSES to be P_THREADED.
This is just a cosmetic change but I've been meaning to do it for about a year.
2003-02-27 02:05:19 +00:00
Maxime Henrion
f6c912dd0c Correctly set BUS_SPACE_MAXSIZE in all the busdma backends.
It was bogusly set to 64 * 1024 or 128 * 1024 because it was
bogusly reused in the BUS_DMAMAP_NSEGS definition.
2003-02-26 02:16:06 +00:00
Maxime Henrion
07159f9c56 Cleanup of the d_mmap_t interface.
- Get rid of the useless atop() / pmap_phys_address() detour.  The
  device mmap handlers must now give back the physical address
  without atop()'ing it.
- Don't borrow the physical address of the mapping in the returned
  int.  Now we properly pass a vm_offset_t * and expect it to be
  filled by the mmap handler when the mapping was successful.  The
  mmap handler must now return 0 when successful, any other value
  is considered as an error.  Previously, returning -1 was the only
  way to fail.  This change thus accidentally fixes some devices
  which were bogusly returning errno constants which would have been
  considered as addresses by the device pager.
- Garbage collect the poorly named pmap_phys_address() now that it's
  no longer used.
- Convert all the d_mmap_t consumers to the new API.

I'm still not sure whether we need a __FreeBSD_version bump for this,
since we didn't guarantee API/ABI stability until 5.1-RELEASE.

Discussed with:		alc, phk, jake
Reviewed by:		peter
Compile-tested on:	LINT (i386), GENERIC (alpha and sparc64)
Runtime-tested on:	i386
2003-02-25 03:21:22 +00:00
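
A sketch of a converted handler under the new contract (the driver name
and memory window are hypothetical):

    static int
    foo_mmap(dev_t dev, vm_offset_t offset, vm_offset_t *paddr, int nprot)
    {
            if (offset >= FOO_MEM_SIZE)
                    return (EINVAL);        /* errno, not -1, on failure */
            *paddr = FOO_MEM_BASE + offset; /* physical address, no atop() */
            return (0);                     /* 0 means success */
    }
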
Poul-Henning Kamp
263444cfbf Change the console interface to pass a "struct consdev *" instead of a
dev_t to the method functions.

The dev_t can still be found at struct consdev *->cn_dev.

Add a void *cn_arg element to struct consdev which the drivers can use
for retrieving their softc.
2003-02-20 20:54:45 +00:00
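
A sketch of how a console method can now reach its softc (all names are
hypothetical):

    static void
    foo_cnputc(struct consdev *cp, int c)
    {
            struct foo_softc *sc;

            sc = cp->cn_arg;        /* stored by the driver when it set up cp */
            foo_hw_putc(sc, c);
    }
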
Warner Losh
a163d034fa Back out M_* changes, per decision of the TRB.
Approved by: trb
2003-02-19 05:47:46 +00:00
Julian Elischer
3931c9a231 Fix missed patch in last commit 2003-02-17 10:21:32 +00:00
Julian Elischer
4a338afd7a Move a bunch of flags from the KSE to the thread.
I was in two minds as to where to put them in the first place...
I should have listened to the other mind.

Submitted by:	 parts by davidxu@
Reviewed by:	jeff@ mini@
2003-02-17 09:55:10 +00:00
Marcel Moolenaar
d1d78df69b Define _ALIGNBYTES to be 15. This should have been done right away. 2003-02-17 09:53:29 +00:00
Marcel Moolenaar
a2ca37c83e Print two new processor features:
o  Spontaneous deferral (A feature required by dutch railways :-)
o  16-byte atomic operations (ld, st, cmpxchg)
2003-02-17 08:17:26 +00:00
Jeff Roberson
5215b1872f - Split the struct kse into struct upcall and struct kse. struct kse will
soon be visible only to schedulers.  This greatly simplifies much of the
   KSE code.

Submitted by:	davidxu
2003-02-17 05:14:26 +00:00
Jeff Roberson
e4625663c9 - Move ke_sticks, ke_iticks, ke_uticks, ke_uu, ke_su, and ke_iu back into
the proc.  These counters are only examined through calcru.

Submitted by:	davidxu
Tested on:	x86, alpha, UP/SMP
2003-02-17 02:19:58 +00:00
Poul-Henning Kamp
f341ca9891 Remove #include <sys/dkstat.h> 2003-02-16 14:13:23 +00:00
Marcel Moolenaar
89c1ecfade Fix misuse of Maxmem in the calculation of the VHPT size. Maxmem
is already in pages, so we should not convert from bytes to pages.
The result of this bug was bad scaling of the VHPT relative to the
available memory.

Submitted by: Arun Sharma <arun@sharma-home.net>
2003-02-15 20:58:32 +00:00
David E. O'Brien
36dc5b9427 Fix the style of the SCHED_4BSD commit. 2003-02-13 22:24:44 +00:00
Alan Cox
5b0a1f3af2 MFi386
Remove kptobj.  Instead, use VM_ALLOC_NOOBJ.
2003-02-13 07:03:44 +00:00
Mike Barcroft
8cf5ed5125 Implement fpclassify():
o Add a MD header private to libc called _fpmath.h; this header
  contains bitfield layouts of MD floating-point types.
o Add a MI header private to libc called fpmath.h; this header
  contains bitfield layouts of MI floating-point types.
o Add private libc variables to lib/libc/$arch/gen/infinity.c for
  storing NaN values.
o Add __double_t and __float_t to <machine/_types.h>, and provide
  double_t and float_t typedefs in <math.h>.
o Add some C99 manifest constants (FP_ILOGB0, FP_ILOGBNAN, HUGE_VALF,
  HUGE_VALL, INFINITY, NAN, and return values for fpclassify()) to
  <math.h> and others (FLT_EVAL_METHOD, DECIMAL_DIG) to <float.h> via
  <machine/float.h>.
o Add C99 macro fpclassify() which calls __fpclassify{d,f,l}() based
  on the size of its argument.  __fpclassifyl() is never called on
  alpha because (sizeof(long double) == sizeof(double)), which is good
  since __fpclassifyl() can't deal with such a small `long double'.

This was developed by David Schultz and myself with input from bde and
fenner.

PR:		23103
Submitted by:	David Schultz <dschultz@uclink.Berkeley.EDU>
		(significant portions)
Reviewed by:	bde, fenner (earlier versions)
2003-02-08 20:37:55 +00:00
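
The dispatching macro is essentially a sizeof() switch, roughly as it
appears in <math.h>:

    #define fpclassify(x) \
        ((sizeof(x) == sizeof(float)) ? __fpclassifyf(x) \
        : (sizeof(x) == sizeof(double)) ? __fpclassifyd(x) \
        : __fpclassifyl(x))
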
Hartmut Brandt
e557905435 Fix a problem in bus_dmamap_load_{mbuf,uio} when the first mbuf or the first
uio segment is empty. In this case no DMA segment is created by
bus_dmamap_load_buffer, but the calling routine clears the first flag.
Under certain combinations of addresses of the first and second mbuf/uio
buffer this leads to corrupted DMA segment descriptors. This was already
fixed by tmm in sparc64/sparc64/iommu.c.

PR:		kern/47733
Reviewed by:	sam
Approved by:	jake (mentor)
2003-02-04 16:30:27 +00:00
Jake Burkholder
238dd3209a Split statclock into statclock and profclock, and made the method for driving
statclock based on profhz when profiling is enabled MD, since most platforms
don't use this anyway.  This removes the need for statclock_process, whose
only purpose was to subdivide profhz, and gets the profiling clock running
outside of sched_lock on platforms that implement suswintr.
Also changed the interface for starting and stopping the profiling clock to
do just that, instead of changing the rate of statclock, since they can now
be separate.

Reviewed by:	jhb, tmm
Tested on:	i386, sparc64
2003-02-03 17:53:15 +00:00
Marcel Moolenaar
871a64fdf0 Don't use the 'c' partition for mounting root. A disklabel is very
likely not present under the simulator. If multiple partitions are
present on the virtual disk, then the 'a' partition would be the
most logical choice. Nowadays partitions are GPT based, which would
make the assumption of a disklabel even more questionable. Given
all the possible scenarios, assuming a raw "device" seems best.
2003-02-03 01:10:01 +00:00
Alfred Perlstein
8deebb0160 Consolidate MIN/MAX macros into one place (param.h).
Submitted by: Hiten Pandya <hiten@unixdaemons.com>
2003-02-02 13:17:30 +00:00
Poul-Henning Kamp
ecc575bfaa We don't need sscopen() and sscclose().
Register sscstrategy directly, instead of using a cdevsw{} for the purpose.

Tested by:	marcel
2003-02-02 10:22:34 +00:00
Marcel Moolenaar
0d50208281 Export IA32 from opt_ia32.h to assembly so that we can eliminate
saving and restoring ia32 specific registers when switching
context and ia32 support has not been compiled-in. The primary
reason for this change is that one of the ia32 registers (ar.fcr)
is wrongly marked as invalid by the simulator. Now that we avoid
using the register when possible, usability is improved. The
secondary reason is that it saves us 7 loads and stores.

Note that the PCB will continue to have room for these registers,
irrespective of the IA32 option. There are no benefits that make
it worthwhile.
2003-02-02 09:07:15 +00:00