Commit Graph

964 Commits

David Xu
0e2a4d3aeb Rename P_THREADED to P_SA. P_SA means a process is using scheduler
activations.
2003-06-15 00:31:24 +00:00
Alan Cox
49a2507bd1 Migrate the thread stack management functions from the machine-dependent
to the machine-independent parts of the VM.  At the same time, this
introduces vm object locking for the non-i386 platforms.

Two details:

1. KSTACK_GUARD has been removed in favor of KSTACK_GUARD_PAGES.  The
different machine-dependent implementations used various combinations
of KSTACK_GUARD and KSTACK_GUARD_PAGES.  To disable the guard pages,
set KSTACK_GUARD_PAGES to 0 (see the configuration sketch below).

2. Remove the (unnecessary) clearing of PG_ZERO in vm_thread_new.  In
5.x (but not 4.x), PG_ZERO can only be set if VM_ALLOC_ZERO is passed
to vm_page_alloc() or vm_page_grab().
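
A minimal configuration sketch, assuming KSTACK_GUARD_PAGES can be
overridden from a kernel configuration file:

    options 	KSTACK_GUARD_PAGES=0	# disable kernel stack guard pages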
2003-06-14 23:23:55 +00:00
Alan Cox
89f4fca265 Move the *_new_altkstack() and *_dispose_altkstack() functions out of the
various pmap implementations into the machine-independent vm.  They were
all identical.
2003-06-14 06:20:25 +00:00
Marcel Moolenaar
222a7e518c Remove kernel event tracing. The overhead is significant when running
under ski.
2003-06-14 00:01:24 +00:00
Marcel Moolenaar
58f2d986a6 Make sure pcpu->pc_pcb is pointing to a 16-byte aligned address. The
PCB contains FP registers, whose alignment must be 16 bytes at least.
Since the PCB pointed to by pc_pcb is immediately after the PCPU
itself, round up the size of the PCPU to a multiple of 16 bytes. The
PCPU is page aligned.

This fixes a misalignment trap caused by stopping a CPU in an SMP
kernel, such as is done when entering the debugger.

Reported by: Alan Robinson <alan.robinson@fujitsu-siemens.com>
2003-06-12 00:15:18 +00:00
Peter Wemm
77e2a274d0 GC unused cpu_wait() function 2003-06-11 05:20:33 +00:00
Juli Mallett
d196a10856 Note that scbus is required for SCSI, not just "required" in general.
Submitted by:	Edward Kaplan (tmbg37 on IRC)
Reviewed by:	rwatson (in principle)
2003-06-08 02:03:02 +00:00
Marcel Moolenaar
6f2071769f pmap_find_vhpt() has been observed to return a NULL pointer when
the caller assumes this cannot happen and performs an
indirection without checking the return value. Add KASSERTs to
force a kernel with INVARIANTS to panic. This is a short-term
measure; the pmap code is scheduled to be overhauled.
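
A sketch of the guarded pattern; the variable and field names are
hypothetical:

    struct ia64_lpte *pte;

    pte = pmap_find_vhpt(va);
    KASSERT(pte != NULL, ("pmap_find_vhpt: unmapped va %#lx", va));
    pte->pte_p = 0;		/* the unchecked indirection callers do */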
2003-06-07 04:17:39 +00:00
Marcel Moolenaar
f09b81f8be If we get a fault in the gateway page, which would happen if we try
to deliver a signal and the RSE backing store has been exhausted or
the backing store pointer has been clobbered, we need to make sure
we call userret() and do_ast() when we exit from trap(). Not adjusting
the local variable 'user' in this case prevents the faulty process
from being terminated, and we end up repeating the fault indefinitely.

Faulty process provided by: bento
2003-06-07 04:10:07 +00:00
Marcel Moolenaar
0785ee125b Use TRAPF_USERMODE() to replace an equivalent check in trap(). While
here, amend the related comment.
2003-06-06 23:44:05 +00:00
Marcel Moolenaar
eaa7bda4a5 Have TRAPF_USERMODE() take into account that the gateway page is not
always kernel space. It should be treated as user space when run with
user privileges (which is the case for the signal trampolines). This
fixes its only use in a KASSERT in subr_trap.c.
2003-06-06 23:27:18 +00:00
Marcel Moolenaar
3f52f44add Fix the dreaded double counting that was present on alpha as well and
got fixed two weeks after the ia64 version was copied from the alpha
version (see rev 1.32 of sys/alpha/alpha/mem.c). As such, we were
missing the same continue as on alpha.

While here, add a default case for the device minor switch and do
some general style(9) cleanups.

WARNING: this file still has bugs. When reading from region 6 or
region 7, we don't validate the physical address. One can trivially
cause a machine check by trying to read from address 0xFFFFFFFFFFFFFFF0
or something that uses the unimplemented physical address bits.

Reported by: Alan Robinson <alan.robinson@fujitsu-siemens.com>
2003-06-04 21:56:10 +00:00
Marcel Moolenaar
11e0f8e16d Change the second (and last) argument of cpu_set_upcall(). Previously
we were passing in a void* representing the PCB of the parent thread.
Now we pass a pointer to the parent thread itself.
The prime reason for this change is to allow cpu_set_upcall() to copy
(parts of) the trapframe instead of having it done in MI code in each
caller of cpu_set_upcall(). Copying the trapframe cannot always be
done with a simple bcopy(), nor is it always optimal that way. On
ia64 specifically the trapframe contains information that is specific
to an entry into the kernel and can only be used by the corresponding
exit from the kernel. A trapframe copied verbatim from another frame
is in most cases useless without some additional normalization.
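
A sketch of the prototype change described above:

    /* before: the second argument was the parent's PCB */
    void	cpu_set_upcall(struct thread *td, void *pcb);

    /* after: pass the parent thread itself, so MD code can copy
       and normalize (parts of) the trapframe */
    void	cpu_set_upcall(struct thread *td, struct thread *td0);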

Note that this change removes the assignment to td->td_frame in some
implementations of cpu_set_upcall(). The assignment is redundant.
A previous call to cpu_thread_setup() already did the exact same
assignment. An added benefit of removing the redundant assignment is
that we can now change td_pcb without nasty side-effects.

This change officially marks the ability on ia64 for 1:1 threading.

Not tested on: amd64, powerpc
Compile & boot tested on: alpha, sparc64
Functionally tested on: i386, ia64
2003-06-04 21:13:21 +00:00
Marcel Moolenaar
0fa2b83829 Improve set_mcontext:
o  Don't copy psr verbatim from the user supplied context. Only allow
   userland to change the processor settings that are part of the user
   mask.
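
A sketch of the intent; PSR_USERMASK and the field names are
hypothetical stand-ins:

    /* keep the kernel's psr bits; accept only user-mask bits */
    tf->tf_special.psr &= ~PSR_USERMASK;
    tf->tf_special.psr |= mc->mc_special.psr & PSR_USERMASK;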
2003-06-01 23:22:56 +00:00
Marcel Moolenaar
86f4f6f7b8 Improve on cpu_set_upcall:
o  Use pcb and tf for the new pcb and the new trapframe and use pcb0
   for the old (current) pcb. The mix of pcb, pcb2 and tf was slightly
   confusing.
o  Don't define td->td_frame here. It has already been set previously
   by cpu_thread_setup. Add a KASSERT to make sure pcb and tf are both
   non-NULL.
o  Make sure the number of dirty registers is 0 for the new thread.
   There are no user registers on the backing store because we haven't
   entered userland yet.
2003-06-01 23:19:21 +00:00
Marcel Moolenaar
798c9e50b0 Implement cpu_thread_setup(). This is mostly the same as on i386,
except that the trapframe has its size recorded in it, which we
also set here. We need this for proper thread setup.

Pointed out by: mtm
2003-06-01 08:29:43 +00:00
Marcel Moolenaar
5e7019bf32 Now that we have the signal trampolines in the gateway page and the
gateway page is considered kernel space, we can panic when we should
only SIGSEGV. Hence, add the additional constraint that for page
faults we also require running with kernel privileges. The gateway
page is the only kernel code running with user privileges, so this
is a correct way to exclude the gateway page from kernel land.

We do not currently exclude the gateway page for other faults as it
is not always the right way to do it. Further tuning will happen on
a case-by-case basis.
2003-05-31 21:21:35 +00:00
Marcel Moolenaar
480a728dee Implement cpu_set_upcall(). Required by libthr and used by
thr_create(2). This implementation is so far only compile tested.
But since this is also the last of the functions required to
support libthr, we're now functionally complete (for some weird
definition of functionally; and complete). Runtime testing can
commence.
2003-05-31 21:14:25 +00:00
Marcel Moolenaar
c01da18e22 Implement set_mcontext() and get_mcontext(). Just as for sendsig() and
sigreturn(), we cheat and assume the preserved registers are still
on-chip and unmodified. This is actually the case, but more by accident
than by design. We need to use unwinding eventually or explicitly
compile the kernel in a way that the compiler steers clear of using
the preserved registers completely.
2003-05-31 21:07:08 +00:00
Marcel Moolenaar
8ef6f226da Make the regset pointers const pointers for the context restore functions.
This works better with set_mcontext() and is more precise in general.
2003-05-31 21:02:18 +00:00
Marcel Moolenaar
3893a77138 Some ia32-related fine-tuning for the EPC syscall path:
o  The SDM states that flushing the RSE in the cycle prior to the
   call to ia32 code yields the best performance. We don't really
   care too much about performance here, but we do the same anyway.
   I'm being paranoid and conservative here.
o  Only initialize the ia32 state registers, not the registers used
   as scratch by the ia32 engine. This saves a couple of loads from
   the trapframe, but also helps debugging: we don't clobber useful
   debugging data (engineering hints :-)
o  Make sure all general registers constituting ia32 state have been
   initialized. If there's nothing useful to be loaded from the trapframe,
   clear the register. This avoids accidentally leaking NaT bits.
o  Make sure we set ar.k6 prior to clobbering ar.bspstore and also
   set ar.k7 prior to setting sp. This fixes a race seen for ia64
   native code as well (and previously fixed too).
2003-05-31 20:57:26 +00:00
Marcel Moolenaar
62931aa266 Make sure we have all the dirty registers in user frames on the
backing store before we discard them. It is possible that we
enter the kernel (due to an execve in this case) with a lot of
dirty user registers and that the RSE has only partially spilled
them (to make room for new frames). We cannot move the backing
store pointer down (to discard user registers) when not all of
the user registers are on the backing store.
So, we flush the register stack IFF this happens. Unconditionally
doing the flush is too costly, because the condition in which we
need to flush is very rare.

This change appears to fix the SIGSEGV that sometimes happens for
newly executed processes and so far also appears to fix the last
of the corruption. It is possible, although not likely, that this
change prevents some other bug from happening, even though it is
itself not a fix. Hence the uncertainty. We'll know in a couple
of months I guess :-)
2003-05-31 20:42:35 +00:00
Hiten Pandya
b77c32a07e Rename BUS_DMAMEM_NOSYNC to BUS_DMA_COHERENT.
The current name is confusing, because it indicates to
the client that a bus_dmamap_sync() operation is not
necessary when the flag is specified, which is wrong.

The main purpose of this flag is to hint to the underlying
architecture that DMA memory should be mapped in a coherent
way, but the architecture can ignore it.  If the
architecture does support coherent mapping of memory, then
it makes bus_dmamap_sync() calls cheap.

This flag is the same as the one in NetBSD's Bus DMA.
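
A sketch of a client using the renamed flag; the softc fields are
hypothetical:

    error = bus_dmamem_alloc(sc->sc_dmat, &sc->sc_vaddr,
        BUS_DMA_NOWAIT | BUS_DMA_COHERENT, &sc->sc_dmamap);
    if (error != 0)
        return (error);
    /* bus_dmamap_sync() is still required; it is merely cheap
       on architectures that honor the coherency hint */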

Reviewed by: gibbs, scottl, des (implicitly)
Approved by: re@ (jhb)
2003-05-30 20:40:33 +00:00
Marcel Moolenaar
3a8c4f9f9c Move the sysctls of the misalignment handler to where they belong
and use OID_AUTO instead of fixed IDs.

Approved by: re@ (blanket)
2003-05-29 06:30:36 +00:00
Marcel Moolenaar
12cd60b726 Fix what I think is a cut-n-paste bug: use OID_AUTO for the
print_usertrap sysctl instead of CPU_UNALIGNED_PRINT. The
latter is used already.

Approved by: re@ (blanket)
2003-05-29 05:09:15 +00:00
Marcel Moolenaar
81d77e2eed A flushrs must be the first instruction in an instruction group.
Approved by: re@ (blanket)
2003-05-27 07:10:58 +00:00
Scott Long
7e71df9339 Bring back bus_dmasync_op_t. It is now a typedef to an int, though the
BUS_DMASYNC_ definitions remain as before.  This does not change the ABI,
and reverts the API to be a bit more compatible and flexible.  This has
survived a full 'make universe'.
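
A sketch of the flexibility this buys back: with an int typedef the
BUS_DMASYNC_ values can be combined again:

    typedef int bus_dmasync_op_t;	/* was an enum */

    bus_dmamap_sync(tag, map, BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);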

Approved by:	re (bmah)
2003-05-27 04:59:59 +00:00
Marcel Moolenaar
1093ceb088 Have the unwinder allocate memory with M_NOWAIT. The unwinder is
used by DDB and we cannot know in advance whether it's safe to
sleep. It often enough isn't. We may want to pre-allocate space
to cover the most common cases without having to use malloc at
all, but that requires some analysis. We leave that for later.
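
A sketch of the allocation pattern; the malloc type and the error
code are hypothetical:

    buf = malloc(len, M_UNWIND, M_NOWAIT);	/* may run under DDB */
    if (buf == NULL)
        return (UWX_ERR_NOMEM);		/* fail rather than sleep */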

Approved by: re@ (blanket)
2003-05-27 01:15:16 +00:00
Marcel Moolenaar
a47e5d473b Fix fu{byte|word*} and su{byte|word*}:
o  If the address was not within user space we jumped to fusufault
   where we would clear pcb_onfault and return 0. There are two
   bugs here:
   1. We never got to the point where we assigned the address of
      pcb_onfault to r15, which means that we would clobber some
      random memory location, including I/O space or ROM.
   2. We're supposed to return -1 on error.
o  Make sure we have proper memory ordering for setting pcb_onfault,
   doing the memory access to user space and clearing pcb_onfault.
   For the fu* family of functions this means that we need a mf
   instruction, because we don't have acquire semantics on stores
   and release semantics on loads (hence st;ld cannot be ordered
   without intermediate mf).

While here, implement casuptr() so that we are a (small) step
closer to supporting libthr and deobfuscate the non-implementation
of {f|s}uswintr.
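
A sketch of the casuptr() contract that libthr would rely on:

    intptr_t old;

    /* compare-and-set a pointer in user space; returns the value
       found, or -1 if the user access faulted */
    old = casuptr((intptr_t *)uaddr, (intptr_t)expect, (intptr_t)new);
    if (old == -1)
        return (EFAULT);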

Approved by: re@ (blanket)
2003-05-27 01:00:12 +00:00
Marcel Moolenaar
941a057663 Revision 1.99 of this file changed the allocation request from
VM_ALLOC_INTERRUPT to VM_ALLOC_SYSTEM. There was no mention of
this in the commit log, as it was considered harmless. Guess what:
it does harm. WITNESS showed that we cannot safely grab the
page queue lock in vm_page_alloc() in all cases as we may have
to sleep on it. Revert the request to VM_ALLOC_INTERRUPT to
circumvent this. We panic if vm_page_alloc() returns NULL. I'm not
entirely happy about this, but we have bigger fish to fry.
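
A sketch of the reverted request; the exact flag combination and
panic message are assumptions:

    pg = vm_page_alloc(NULL, pi, VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ);
    if (pg == NULL)
        panic("vm_page_alloc() returned NULL");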

Approved by: re@ (blanket)
2003-05-26 22:54:18 +00:00
Marcel Moolenaar
dc0545462e Now that we define user mode as any IP address that isn't in the
kernel's VA regions, we cannot limit the use of break-based
syscalls to user mode only. The signal trampolines are in the
gateway page, which is mapped into the process address space in
region 5 and thus is kernel space.

We don't special case the gateway page here. Allow break-based
syscalls from anywhere in the kernel VA space.

Approved by: re@ (blanket)
2003-05-25 01:01:28 +00:00
Marcel Moolenaar
d7f827116f Fix a source of instability specific to an EPC userland. We return
to userland with interrupts disabled until we restore PSR. However,
it has been observed that interrupts do actually happen before they
are enabled again. This is a bit surprising and I don't know yet
what's going on exactly. Nevertheless, the code was not crafted
carefully enough to allow interrupts to happen and we could
clobber the kernel stack of another thread when interrupts did
happen.

This is what happens: we restore the (memory) stack pointer (sp)
and the register stack base prior to restoring ar.k6 and ar.k7.
This is not a problem if interrupts don't happen between setting
sp/ar.bspstore and ar.k6/ar.k7. Alas, interrupts can happen.
Since sp/ar.bspstore already point to the userland stacks, we
need to switch to the kernel stack in interrupt. However, ar.k6
and ar.k7 have not been set, which means that we were switching
to some unrelated kstack and happily clobbered the trapframe
present there if the thread to which the kstack belonged was
in kernel mode; otherwise we could have our own trapframe clobbered
when that other thread entered the kernel. Nasty either way.

We now carefully restore ar.k6 prior to restoring ar.bspstore and
likewise for ar.k7 and sp. All we need is the guarantee that an
interrupt does not clobber ar.k6 or ar.k7 before we're back in
userland. That has been achieved by restoring ar.k6/ar.k7
unconditionally (see exception.s)

While here, remove the disabling of interrupts on EPC entry. It
was added as a way to "resolve" the crashes until it was understood
what was going on. I think I achieved the latter, so we can remove
the patch. Note that setting up a trapframe with interrupts
enabled has its own share of corner cases, but it's better to
properly fix those than to keep a mostly wrong patch around
because we're afraid to remove it...

Approved by: re@ (blanket)
2003-05-24 22:53:10 +00:00
Marcel Moolenaar
a7b90d80fc Be more careful how we restore interrupts. Don't rewrite most of the
PSR only to achieve setting PSR.i back to its previous value. It
makes it impossible to change any of the 30+ other unrelated bits
when done between intr_disable() and intr_restore(). That's bad.

Instead have intr_disable() return 1 when interrupts were previously
enabled and 0 otherwise and only enable interrupts in intr_restore()
when given a non-0 value.

This change specifically disallows using intr_restore() to disable
interrupts. The reason is simple: interrupts only need to be restored
after they are being disabled, which means that intr_restore() is
called with interrupts disabled and we only need to enable them if
they were previously enabled.

This change does not fix any bugs, other than that it bugged me...
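
A sketch of the resulting idiom:

    int s;

    s = intr_disable();		/* 1 iff interrupts were enabled */
    /* ... code that must not be interrupted ... */
    intr_restore(s);		/* re-enables only when s is non-zero */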

Approved by: re@ (blanket)
2003-05-24 21:44:24 +00:00
Marcel Moolenaar
95f2dbba40 Consistently use the same metric to differentiate between kernel mode
and user mode. We need to take into account that the EPC syscall path
introduces a grey area in which one can argue either way, including a
third: neither.

We now use the region in which the IP address lies. Regions 5, 6 and 7
are kernel VA regions and if the IP lies in any of those regions we
assume we're in kernel mode. Hence, we can be in kernel mode even if
we're not on the kernel stack and/or have user privileges. There're
gremlins living in the twilight zone :-)

For the EPC syscall path this particularly means that the process
leaves user mode the moment it calls into the gateway page. This
makes the most sense because from a process' point of view the call
represents a request to the kernel for some service and that service
has been performed if the call returns. With the metric we picked,
this also means that we're back in user mode IFF the call returns.
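
A sketch of the metric with hypothetical macro names; the region
number is the top 3 bits of a virtual address:

    #define	IA64_VA_REGION(va)	((u_int64_t)(va) >> 61)

    /* kernel mode iff the IP lies in region 5, 6 or 7 */
    #define	KERNELMODE(ip)		(IA64_VA_REGION(ip) >= 5)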

Approved by: re@ (blanket)
2003-05-24 21:16:19 +00:00
Marcel Moolenaar
fb4aa34f3b Unconditionally restore ar.k7 (memory stack) and ar.k6 (register stack)
when returning from an interrupt. Both registers are used on interrupt
to switch to the right kernel stack, but other than that they are not
used. This means we only have to make sure they contain proper values
while in user mode. As such, we conditionally restored these registers
based on whether we returned to userland or not. A nice property of
conditionally restoring ar.k6 and ar.k7 is that it introduces two
invariants: ar.k6 always points to the bottom of the kernel stack and
ar.k7 always points to the top of the kernel stack (immediately below
the PCB we have there).

However, the EPC syscall path introduces an irregularity: there's no
"thin red line" between user and kernel. There's a grey area that's a
couple of instructions wide. Any interruption in that grey area is
bound to see an inconsistent state. One such state is that we're in
kernel space for all practical purposes, but we still need to have
ar.k6 and ar.k7 restored as if we're in userland.

Thus: restore ar.k6 and ar.k7 unconditionally at the cost of losing
a valuable invariant. Both registers now hold the extent of the
usable portion of the kernel stack at any interrupt nesting level,
which when in userland means the bottom and the top of the kstack.
2003-05-24 20:51:55 +00:00
Marcel Moolenaar
d1d7df1905 Fix an alpha inheritance bug:
On alpha, PAL is involved in context management and after wiring
the CPU (in alpha_init()) a context switch was performed to tell
PAL about the context. This was bogusly brought over to ia64
where it introduced bugs, because we restored the context from
a mostly uninitialized PCB.

The cleanup constitutes:
o  Remove the unused arguments from ia64_init().
o  Don't return from ia64_init(), but instead call mi_startup()
   directly. This reduces the amount of muckery in assembly and
   also allows for the next bullet:
o  Save our current context prior to calling mi_startup(). The
   reason for this is that many threads are created from thread0
   by cloning the PCB. By saving our context in the PCB, we have
   something sane to clone. It also ensures that a cloned thread
   that does not alter the context in any way will return to
   the saved context, where we're ready for the eventuality with
   a nice, user-unfriendly panic().

The cleanup fixes at least the following bugs:
o  Entering mi_startup() with the RSE in enforced lazy mode.
o  Re-execution of ia64_init() in certain "lab" conditions.

While here, add proper unwind directives to __start() so that
the unwinder knows it has reached the bottom of the (call) stack.

Approved by: re@ (blanket)
2003-05-24 00:17:34 +00:00
Marcel Moolenaar
ca125f9c17 Fix a (new) source of instability:
When interrupting a kernel context, we don't need to switch stacks
(memory nor register). As such, we were also not restoring the
register stack pointer (ar.bspstore). This, however, fails to be
valid in 1 situation: when we interrupt a register stack switch as
is being done in restorectx(). The problem is that restorectx()
needs to have ar.bsp == ar.bspstore before it can assign the new
value to ar.bspstore. This is achieved by doing a loadrs prior to
assigning to ar.bspstore. If we take an interrupt in between the
loadrs and the assignment and we don't make sure we restore the
ar.bspstore prior to returning from the interrupt, we switch
stacks with possibly non-zero dirty registers, which means that
the new frame pointer (ar.bsp) will be invalid.

So, instead of jumping over the restoration of the register frame
pointer and related registers, we conditionalize it based on whether
we return to kernel context or user context. A future performance
tweak is possible by only restoring ar.bspstore when returning to
kernel mode *and* when the RSE is in enforced lazy mode. One cannot
assume ar.bsp == ar.bspstore if the RSE is not in enforced lazy mode
anyway.

While here (well, not quite) don't unconditionally assign to
ar.bspstore in exception_save. Only do that when we actually switch
stacks. It can only harm us to do it unconditionally.

Approved by: re@ (blanket)
2003-05-23 23:55:31 +00:00
Marcel Moolenaar
42b919d4a6 In swapctx(), put the RSE in enforced lazy mode before we flush the
register stack. There's nothing really wrong with flushing before
putting the RSE in enforced lazy mode, provided you don't depend on
ar.bspstore being equal to ar.bsp when the RSE has been put in
enforced lazy mode. The small window between the flush and setting
the RSE may be sufficient to have the RSE eagerly increase the dirty
region (and hence cause ar.bspstore != ar.bsp) or have an interrupt
that may even get the laziest RSE to do something.

Anyway: we don't depend on ar.bspstore being equal to ar.bsp, so
nothing was and is broken. But the code was non-intuitive and
easily confusing. This is a source of future bugs.

Note: the advantage of not depending on ar.bspstore is that there's
some resilience against an interrupted flushrs. Clobbering is limited
to stacked register contents only, not to RSE address clobbering.

Approved: re@ (blanket)
2003-05-23 23:16:43 +00:00
Marcel Moolenaar
bfaccb767c o  Fix a definite bogon: the dirty bit fault, instruction access
   fault and data access fault install the PTE in question into
   the VHPT table. However, a post-increment was missing and we
   wrote the raw PTE data into the pagesize/access key field.
   This leaves a corrupt VHPT entry.
o  While here, remove the explicit cache purge. Insertion into
   the translation cache implicitly purges any overlapping entries.
o  Make sure there's a cycle break between the itc and the rfi.
o  Whitespace fixes.
2003-05-20 06:57:20 +00:00
Marcel Moolenaar
14d2ae56c7 Rename the "IA64 ITC" counter to "ITC" counter. We don't call the
"TSC" counter on i386 "I386 TSC".

Approved by: re@ (blanket)
2003-05-20 06:51:20 +00:00
Marcel Moolenaar
9b9ce577d4 Prevent corruption of the VHPT collision chain by protecting it with
a mutex. The only volatile chain operations are insertion and deletion
but since updating an existing PTE also updates the VHPT entry itself,
and we have the VHPT mutex in both other cases, we also lock when we
update an existing PTE even though no chain operation is involved.
Note that we perform the insertion and deletion carefully enough that
we don't need to lock traversals. If we need to lock traversals, we
also need to lock from the exception handler, which we can't without
creating a trapframe.

We're now able to withstand a -j8 buildworld. More work is needed to
withstand Murphy fields. In other words: we still have a bogon...

Approved by: re@ (blanket)
2003-05-20 02:52:41 +00:00
Alexander Kabaev
980ded9a7d sys/sys/limits.h:
- Fix visibility test for LONG_BIT and WORD_BIT.  `#if defined(__FOO_VISIBLE)'
   is always wrong because __FOO_VISIBLE is always defined (to 0 for
   invisibility); see the sketch below.

sys/<arch>/include/limits.h
sys/<arch>/include/_limits.h:

 - Style fixes.
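
The sketch referenced above, using the generic __FOO_VISIBLE spelling:

    /* wrong: always taken, because __FOO_VISIBLE is always defined
       (it is defined to 0 to mean invisible) */
    #if defined(__FOO_VISIBLE)
    #define	LONG_BIT	__LONG_BIT
    #endif

    /* right: test the value instead */
    #if __FOO_VISIBLE
    #define	LONG_BIT	__LONG_BIT
    #endif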

Submitted by:	bde
Reviewed by:	bsdmike
Approved by:	re (scottl)
2003-05-19 20:29:07 +00:00
Marcel Moolenaar
b8c4149cff Turn pmap_install_pte() into a critical section. We better not get
interrupted while writing into the VHPT table. While here, make sure
memory accesses are properly ordered. Tag invalidation must happen
first so that the hardware VHPT walker will not be able to match
this entry while we're updating it, and we have to make sure the
new tag gets written only after the PTE is completely updated.
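
A sketch of the ordering; the field names are hypothetical:

    critical_enter();			/* no interrupts mid-update */
    vhpte->pte_tag = 1UL << 63;		/* invalidate the tag first */
    ia64_mf();				/* tag off before the update */
    vhpte->pte_pte = pte->pte_pte;
    ia64_mf();				/* update before the new tag */
    vhpte->pte_tag = tag;		/* publish only when complete */
    critical_exit();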

Approved by: re (blanket)
2003-05-19 08:02:36 +00:00
Marcel Moolenaar
a75b99ea2d Unconditionally set pcb_current_pmap. WIP versions of the code
previously committed cleared pcb_current_pmap prior to changing
the region registers, but that was removed before committing.
Since we don't normally (at all?) pass a NULL pointer, the bug
was mostly harmless. Fix it while I'm here...

I'm here because we need to have data serialization after writing
to the region registers. Not doing so was likely the cause of the
hangs we were experiencing. General exceptions in cpu_switch may
also be caused by the lack of serialization.
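
A sketch of the fix, assuming ia64_set_rr() and ia64_srlz_d()
intrinsics:

    ia64_set_rr(IA64_RR_BASE(i), rr[i]);
    ia64_srlz_d();		/* data serialization after the RR write */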

Approved by: re (blanket)
2003-05-19 06:05:30 +00:00
Marcel Moolenaar
dc0bde0f18 pmap_install() needs to be atomic WRT context switching. Protect
switching user regions (region 0-4) with schedlock. Avoid unnecessary
recursion on schedlock by moving the core functionality to another
function (pmap_switch()) where we assert schedlock is held. Turn
pmap_install() into a wrapper that grabs schedlock. This minimizes
the number of callsites that need to be changed.
Since we already have schedlock in cpu_switch() and cpu_throw(),
have them call pmap_switch() directly. These were also the only two
calls to pmap_install() outside pmap.c, so make pmap_install() static
and remove its prototype from pmap.h
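
A sketch of the resulting split:

    void
    pmap_switch(pmap_t pmap)
    {
        mtx_assert(&sched_lock, MA_OWNED);
        /* ... program the user region registers for pmap ... */
    }

    static void
    pmap_install(pmap_t pmap)
    {
        mtx_lock_spin(&sched_lock);
        pmap_switch(pmap);
        mtx_unlock_spin(&sched_lock);
    }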

Approved by: re (blanket)
2003-05-19 04:16:30 +00:00
Marcel Moolenaar
040c5b92bb Remove unused files. cpu_switch() and cpu_throw(), normally in swtch.s,
can be found in machdep.c.

Approved: re@
2003-05-17 04:55:04 +00:00
Marcel Moolenaar
f2c49dd248 Revamp of the syscall path, exception and context handling. The
prime objectives are:
o  Implement a syscall path based on the epc instruction (see
   sys/ia64/ia64/syscall.s).
o  Revisit the places where we need to save and restore registers
   and define those contexts in terms of the register sets (see
   sys/ia64/include/_regset.h).

Secondary objectives:
o  Remove the requirement to use contigmalloc for kernel stacks.
o  Better handling of the high FP registers for SMP systems.
o  Switch to the new cpu_switch() and cpu_throw() semantics.
o  Add a good unwinder to reconstruct contexts for the rare
   cases we need to (see sys/contrib/ia64/libuwx)

Many files are affected by this change. Functionally it boils
down to:
o  The EPC syscall doesn't preserve registers it does not need
   to preserve and places the arguments differently on the stack.
   This affects libc and truss.
o  The address of the kernel page directory (kptdir) had to
   be unstaticized for use by the nested TLB fault handler.
   The name has been changed to ia64_kptdir to avoid conflicts.
   The renaming affects libkvm.
o  The trapframe only contains the special registers and the
   scratch registers. For syscalls using the EPC syscall path
   no scratch registers are saved. This affects all places where
   the trapframe is accessed. Most notably the unaligned access
   handler, the signal delivery code and the debugger.
o  Context switching only partly saves the special registers
   and the preserved registers. This affects cpu_switch() and
   triggered the move to the new semantics, which additionally
   affects cpu_throw().
o  The high FP registers are either in the PCB or on some
   CPU. Context switching for them is done lazily. This affects
   trap().
o  The mcontext has room for all registers, but not all of them
   have to be defined in all cases. This mostly affects signal
   delivery code now. The *context syscalls are as of yet still
   unimplemented.

Many details went into the removal of the requirement to use
contigmalloc for kernel stacks. The details are mostly CPU
specific and limited to exception_save() and exception_restore().
The few places where we create, destroy or switch stacks were
mostly simplified by not having to construct physical addresses
and additionally saving the virtual addresses for later use.

Besides more efficient context saving and restoring, which of
course yields a noticeable speedup, this also fixes the dreaded
SMP bootup problem as a side-effect. The details of which are
still not fully understood.

This change includes all the necessary backward compatibility
code to have it handle older userland binaries that use the
break instruction for syscalls. Support for break-based syscalls
has been pessimized in favor of a clean implementation. Due to
the overall better performance of the kernel, this will still
be noticed as an improvement, if it's noticed at all.

Approved by: re@ (jhb)
2003-05-16 21:26:42 +00:00
Marcel Moolenaar
baf74b8876 o  In pmap_install, don't prevent switching the pmap if we're
   switching to kernel_pmap. The pmap is not special enough.
o  Clear the active bit on the pmap we're switching out.
o  Fix some nearby style(9) bugs.

Approved by: re@
2003-05-16 07:57:44 +00:00
Marcel Moolenaar
906f065725 Indent a comment. This makes 1.100.
Still approved by: re@ (blanket)
2003-05-16 07:05:08 +00:00
Marcel Moolenaar
164d4986fd Turn pmap_growkernel() into a critical section. While here, initialize
kernel_vm_end in pmap_bootstrap. Don't delay the initialization until
we need to grow the kernel VM space. This BTW happens twice before
we enter either single- or multi-user mode. Don't adjust kernel_vm_end
while growing based on whether the KPT contains a non-NULL entry. We
trust kernel_vm_end to be correct and we make sure it's still correct
after growing.
Define virtual_avail and virtual_end in terms of VM_MIN_KERNEL_ADDRESS
and VM_MAX_KERNEL_ADDRESS (resp). Don't hardcode region knowledge.
2003-05-16 07:03:15 +00:00