Commit Graph

1244 Commits

Arun Sharma
2d24da614a The existing code fails some corner cases. Replace it with
ia64_bsp_adjust(), which has been tested to work in all cases for
arbitrary (bsp, nslots) combinations.

reviewed by: marcel@
2004-08-16 22:09:58 +00:00
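
The arithmetic such a helper has to get right is worth spelling out: the
RSE stores a NaT collection in every 64th doubleword of the backingstore
(whenever bits 8:3 of the address are all ones), so stepping nslots
register slots must skip those words. A minimal sketch in C, not the
committed code:

    #include <stdint.h>

    static uint64_t *
    bsp_skip_slots(uint64_t *bsp, long nslots)
    {
            long slot = ((uintptr_t)bsp >> 3) & 0x3f; /* slot in group */
            long delta = slot + nslots;

            if (nslots < 0)
                    delta -= 0x3e;  /* round the division toward -inf */
            return (bsp + nslots + delta / 0x3f); /* skip NaT words */
    }
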
Marcel Moolenaar
344bbdbd54 As I said: the previous commit was untested... Remove an #endif which
should have ceased to exist when its corresponding #if was removed.
2004-08-16 19:05:08 +00:00
Marcel Moolenaar
97752b2cbd Catch up with the drive-by renaming of IA32 to COMPAT_IA32. It must
have been rush hour...

While here, move COMPAT_IA32 from opt_global.h to opt_compat.h like on
amd64. Consequently, it's unsafe to use the option in pcb.h. We now
unconditionally have the ia32 specific registers in the PCB.

This commit is untested.
2004-08-16 18:54:23 +00:00
Arun Sharma
646c6dd2c0 ITC.{i,d} instructions use format M41, not M42.
reviewed by: marcel@
2004-08-16 18:41:24 +00:00
Marcel Moolenaar
c66fdb617d Allocate memory in the unwinder with M_NOWAIT. We may need to provide
backtraces with locks held.
2004-08-14 05:00:37 +00:00
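
For context, a hedged sketch of the allocation pattern (the malloc type
here is illustrative): with M_NOWAIT, malloc(9) never sleeps, so it is
legal while mutexes are held, but it may return NULL, which the unwinder
must tolerate.

    void *buf;

    buf = malloc(len, M_TEMP, M_NOWAIT); /* won't sleep under a lock */
    if (buf == NULL)
            return;         /* drop the backtrace rather than sleep */
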
Marcel Moolenaar
3a00930042 In set_regs(), flush the dirty registers onto the backingstore before
we update the registers. That way we don't have any dirty registers to
worry about and also know that bsp=bspstore, which makes updating the
RSE-related registers predictable.
This is not the end of it. We need more validity checks, but for now
this allows us to complete the gdb testsuite without crashing the
kernel.
2004-08-11 05:29:13 +00:00
Marcel Moolenaar
4da47b2fec Add __elfN(dump_thread). This function is called from __elfN(coredump)
to allow dumping per-thread machine-specific notes. On ia64 we use this
function to flush the dirty registers onto the backingstore before we
write out the PRSTATUS notes.

Tested on: alpha, amd64, i386, ia64 & sparc64
Not tested on: arm, powerpc
2004-08-11 02:35:06 +00:00
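
The shape of the hook, as a sketch (the exact signature is assumed here,
not taken from the commit):

    void
    __elfN(dump_thread)(struct thread *td, void *dst, size_t *off)
    {
            /* ia64: flush td's dirty stacked registers to the
             * backingstore so the PRSTATUS note that follows is
             * coherent; most other architectures can stub this out. */
    }
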
Marcel Moolenaar
b4b7c60d70 Better preserve the original protection for the mappings we maintain.
The hardware always gives read access for privilege level 0, which
means that we cannot use the hardware access rights and privilege
level in the PTE to test whether there's a change in protection.  So,
we save the original vm_prot_t in the PTE as well.
Add pmap_pte_prot() to set the proper access rights and privilege
level on the PTE given a pmap and the requested protection.

The above allows us to compare the protection in pmap_extract_and_hold()
which was missing. While in pmap_extract_and_hold(), add pmap locking.

While here, clean up most (i.e. all but one) of the PTE macros we inherited
from alpha. They were either unused, used inconsistently, badly named,
or simply not beneficial. We save the wired and managed state of
the PTE in distinct (bit) fields.

While in pte.h, s/u_int64_t/uint64_t/g

pmap locking obtained from: alc@
feedback & review by: alc@
2004-08-09 20:44:41 +00:00
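
An illustration of the resulting PTE bookkeeping (illustrative
bitfields, not the real struct ia64_lpte layout): because the CPU grants
privilege level 0 read access no matter what, the saved vm_prot_t, not
AR/PL, is what protection comparisons use.

    struct lpte_sw_bits {
            uint64_t ar      : 3;  /* hardware access rights */
            uint64_t pl      : 2;  /* hardware privilege level */
            uint64_t prot    : 3;  /* saved VM_PROT_{READ,WRITE,EXECUTE} */
            uint64_t wired   : 1;  /* software-only: wired mapping */
            uint64_t managed : 1;  /* software-only: managed page */
    };
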
Marcel Moolenaar
47a86e3cd4 Implement single stepping when we leave the kernel through the EPC syscall
path. The basic problem is that we cannot set the single stepping flag
directly, because we don't leave the kernel via an interrupt return. So,
we need another way to set the single stepping flag.
The way we do this is by enabling the lower-privilege transfer trap, which
gets raised when we drop the privilege level. However, since we're still
running in kernel space (sec), we're not yet done. We clear the lower-
privilege transfer trap, enable the taken-branch trap and continue exiting
the kernel until we branch into user space.
Given the current code, there's a total of two traps this way before
we can raise SIGTRAP.
2004-08-08 00:28:07 +00:00
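
A sketch of that trap dance (the PSR bit macros exist on ia64; the
helper and handler shown here are hypothetical):

    static void
    epc_single_step_arm(struct trapframe *tf)
    {
            /* psr.ss can't be set directly: the EPC path never rfi's. */
            tf->tf_special.psr |= IA64_PSR_LP; /* trap on privilege drop */
    }

    /* In the trap handler, in pseudocode:
     *   lower-privilege transfer trap: clear psr.lp, set psr.tb;
     *   taken-branch trap from user mode: deliver SIGTRAP.
     * That accounts for the two traps mentioned above. */
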
Marcel Moolenaar
6aa84a056a Slightly move labels around to make sure we call ast() on our way out
after a fork(2) in fork_trampoline(). By moving the epc_syscall_return
label immediately before the call to do_ast() in epc_syscall(), we not
only achieve that but also handle the detour through exception_return
when the frame corresponds to an asynchronous kernel entry. Hence, we
simplified fork_trampoline() as a side-effect.
2004-08-07 21:55:15 +00:00
Marcel Moolenaar
7d9a8b1cd5 De-inline gdb_cpu_signal() because we need to convert the trap vectors
related to breakpoints and single stepping into SIGTRAP so gdb(1) knows
why the remote target has stopped. In particular, gdb(1) needs to know
if the reason is something of its own doing.
2004-08-07 21:40:52 +00:00
Arun Sharma
d7cf64c9a1 Use a 256MB TR instead of a 64MB TR to make sure that the kernel
text/data are covered on the APs. This enables the kernel to boot on
a 4-way Intel Itanium 2 platform. This has the secondary effect of
keeping the TRs identical on the BP and the APs.

reviewed by: marcel@
2004-08-04 20:09:41 +00:00
Mark Murray
d23a262fc5 Making a loadable null.ko for /dev/(null|zero) proved rather
unpopular, so remove this (mis)feature.

Encouragement provided by:	jhb (and others)
2004-08-03 19:24:54 +00:00
Maxime Henrion
9f1b87f106 Instead of calling ia32_pause() conditionally on __i386__ or __amd64__
being defined, define and use a new MD macro, cpu_spinwait().  It only
expands to something on i386 and amd64, so the compiled code should be
identical.

Name of the macro found by:	jhb
Reviewed by:	jhb
2004-08-03 18:44:27 +00:00
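
The shape of the macro, sketched (the real definitions live in each
architecture's machine/cpu.h):

    #if defined(__i386__) || defined(__amd64__)
    #define cpu_spinwait()  ia32_pause()    /* emits the PAUSE hint */
    #else
    #define cpu_spinwait()                  /* expands to nothing */
    #endif

    /* Typical use in a spin loop: */
    while (atomic_load_acq_int(&flag) == 0)
            cpu_spinwait();
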
Marcel Moolenaar
78b9765bee Fix 2 typos in previous commit: both s/strct/struct/ 2004-08-02 18:37:55 +00:00
Marcel Moolenaar
cb1d0dd340 Add the mem and null devices now that they are optional. 2004-08-02 17:53:06 +00:00
Marcel Moolenaar
6d28b03026 Sort the miscellaneous devices to restore ordering after the insertion
of the mem and null devices.
2004-08-02 17:50:39 +00:00
Mark Murray
a5ed4a0ad5 Remove extraneous ';'. 2004-08-01 18:51:44 +00:00
Mark Murray
8ab2f5ecc5 Break out the MI part of the /dev/[k]mem and /dev/io drivers into
their own directory and module, leaving the MD parts in the MD
area (the MD parts _are_ part of the modules). /dev/mem and /dev/io
are now loadable modules, thus taking us one step further towards
a kernel created entirely out of modules. Of course, there is nothing
preventing the kernel from having these statically compiled.
2004-08-01 11:40:54 +00:00
Alan Cox
350fb8ae6a - Add pmap locking to ia64's pmap_enter() and pmap_enter_quick(). (This
brings ia64 to parity with alpha, amd64, and i386 in this area.)
 - Prevent a race in pmap_find_pte(): If pmap_find_pte() sleeps in
   uma_zalloc(), another thread could allocate a pte at the same address.
   Instead, sleep at a higher level and retry the lookup before retrying
   the allocation.

Reviewed and tested by:	marcel@
2004-07-30 20:25:12 +00:00
Marcel Moolenaar
f95c91bcee Fix -O builds with gcc 3.4 by defining ffs as __builtin_ffs instead of
creating an inline function that just calls __builtin_ffs.
2004-07-30 07:56:53 +00:00
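
The change, in essence (a sketch):

    /* before: a wrapper that -O builds with gcc 3.4 choke on */
    static __inline int ffs(int mask) { return (__builtin_ffs(mask)); }

    /* after: let the preprocessor substitute the builtin directly */
    #define ffs(x)  __builtin_ffs(x)
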
Poul-Henning Kamp
0658bb8ef8 Move a relic to its correct location(s): Put nfs diskless initialization
calls with the code they call.  (Yet another example of mindless copy&paste).
2004-07-28 21:54:57 +00:00
Robert Watson
1a8cfbc450 Pass a thread argument into cpu_critical_{enter,exit}() rather than
dereference curthread.  It is called only from critical_{enter,exit}(),
which already dereferences curthread.  This doesn't seem to affect SMP
performance in my benchmarks, but improves MySQL transaction throughput
by about 1% on UP on my Xeon.

Head nodding:	jhb, bmilekic
2004-07-27 16:41:01 +00:00
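
Roughly the caller-side shape after this change (a sketch, not the
committed code):

    void
    critical_enter(void)
    {
            struct thread *td = curthread;  /* already in hand here... */

            if (td->td_critnest == 0)
                    cpu_critical_enter(td); /* ...so pass it down */
            td->td_critnest++;
    }
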
Marcel Moolenaar
6dd19a884b Work around a gcc code generation bug with function descriptor
references (target/16559). This fixes SMP configurations.

Obtained from: arun@
2004-07-25 07:07:09 +00:00
Alan Cox
e4242deba7 In pmap_mincore() create a private copy of the pte for use after the pmap
lock is released.
2004-07-22 02:05:46 +00:00
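
The pattern, sketched:

    PMAP_LOCK(pmap);
    pte = *ptep;            /* snapshot the entry by value */
    PMAP_UNLOCK(pmap);
    /* derive the mincore(2) flags from 'pte'; never dereference
     * ptep again once the lock has been released */
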
Alan Cox
756e6d1939 Additional pmap locking
Tested by: marcel@
2004-07-21 07:01:48 +00:00
Marcel Moolenaar
fd32d93b97 Unify db_stack_trace_cmd(). All it did was look up the thread given
the thread ID and call db_trace_thread().
Since arm has all the logic in db_stack_trace_cmd(), rename the
new DB_COMMAND function to db_stack_trace to avoid conflicts on
arm.
While here, have db_stack_trace parse its own arguments so that
we can use a more natural radix for IDs. If the ID is not a thread
ID, or more precisely, when no thread exists with that ID, check
whether a process with that ID exists and return its first thread.
This makes it easier to print stack traces from the ps output.

requested by: rwatson@
tested on: amd64, i386, ia64
2004-07-21 05:07:09 +00:00
David Schultz
479f8d2214 Make FLT_ROUNDS correctly reflect the dynamic rounding mode. 2004-07-19 08:17:25 +00:00
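
The user-visible effect, as a small C99 program (FLT_ROUNDS values per
the standard: 1 is to-nearest, 2 is toward +infinity):

    #include <fenv.h>
    #include <float.h>
    #include <stdio.h>

    int
    main(void)
    {
            printf("%d\n", FLT_ROUNDS);     /* 1: default mode */
            fesetround(FE_UPWARD);
            printf("%d\n", FLT_ROUNDS);     /* 2: now tracks the change */
            return (0);
    }
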
Alan Cox
4a5be3f70a Add partial pmap locking.
Tested by: marcel@
2004-07-19 05:39:49 +00:00
Alan Cox
6fe30ff3f2 Remove unused fields from the pmap. 2004-07-16 03:42:45 +00:00
Poul-Henning Kamp
672c05d49c Preparatory commit for the tty cleanups that will follow in the near
future:

rename ttyopen() -> tty_open() and ttyclose() -> tty_close().

We need the ttyopen() and ttyclose() names for the new generic cdevsw
functions for tty devices in order to have consistent naming.
2004-07-15 20:47:41 +00:00
Alan Cox
3d2e54c317 Push down the acquisition and release of the page queues lock into
pmap_protect() and pmap_remove().  In general, they require the lock in
order to modify a page's pv list or flags.  In some cases, however,
pmap_protect() can avoid acquiring the lock.
2004-07-15 18:00:43 +00:00
Alan Cox
72826d0f4a A loop in pmap_remove() should use TAILQ_FOREACH_SAFE(), not
TAILQ_FOREACH(), because the loop deletes elements from the list.

Reviewed by:	marcel@
2004-07-15 03:20:00 +00:00
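
Why the _SAFE variant matters, sketched (the list head and field names
here are illustrative): the plain iterator fetches the next pointer from
the element after the body has run, a use-after-free once the body
removes and frees it; the _SAFE form caches the successor first.

    pv_entry_t pv, npv;

    TAILQ_FOREACH_SAFE(pv, &pmap->pm_pvlist, pv_plist, npv) {
            TAILQ_REMOVE(&pmap->pm_pvlist, pv, pv_plist);
            free_pv_entry(pv);      /* safe: successor already saved */
    }
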
David Xu
53dbf30349 Add ptrace_clear_single_step(); alpha has had it for years. The function
will be used by ptrace to clear a thread's single-step state.
2004-07-13 07:22:56 +00:00
Alan Cox
440382a953 Simplify pmap_protect(). 2004-07-13 06:54:23 +00:00
Alan Cox
ce8da3091f Push down the acquisition and release of the page queues lock into
pmap_remove_pages().  (The implementation of pmap_remove_pages() is
optional.  If pmap_remove_pages() is unimplemented, the acquisition and
release of the page queues lock is unnecessary.)

Remove spl calls from the alpha, arm, and ia64 pmap_remove_pages().
2004-07-13 02:49:22 +00:00
Marcel Moolenaar
8bcb1e9e84 Add options KDB and GDB. KDB takes on the role that DDB used
to fill. Both DDB and GDB specify which KDB backends to include.
2004-07-11 03:20:09 +00:00
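
A kernel config fragment showing how the options now combine (option
names from this and the surrounding commits; the comments are
editorial):

    options KDB             # the debugger framework
    options DDB             # include the DDB backend
    options GDB             # include the GDB backend
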
Marcel Moolenaar
1ca618fcaa Remove the now unused GDB stubs. See src/sys/gdb/* for the new KDB
backend.
2004-07-11 01:47:26 +00:00
Marcel Moolenaar
37224cd3fc Mega update for the KDB framework: turn DDB into a KDB backend.
Most of the changes are a direct result of adding thread awareness.
Typically, DDB_REGS is gone. All registers are taken from the
trapframe and backtraces use the PCB based contexts. DDB_REGS was
defined to be a trapframe on all platforms anyway.
Thread awareness introduces the following new commands:
	thread X	switch to thread X (where X is the TID),
	show threads	list all threads.

The backtrace code has been made more flexible so that one can
create backtraces for any thread by giving the thread ID as an
argument to trace.

With this change, ia64 has support for breakpoints.
2004-07-10 23:47:20 +00:00
Marcel Moolenaar
6d33366c74 Update for the KDB framework:
o  ksym_start and ksym_end changed type to vm_offset_t.
o  Make debugging support conditional upon KDB instead of DDB.
o  Call kdb_enter() instead of breakpoint().
o  Remove implementation of Debugger().
o  Call kdb_trap() according to the new world order.

unwinder:
o  s/db_active/kdb_active/g
o  Various s/ddb/kdb/g
o  Add support for unwinding from the PCB as well as the trapframe.
   Abuse a spare field in the special register set to flag whether
   the PCB was actually constructed from a trapframe so that we can
   make the necessary adjustments.

md_var.h:
o   Add RSE convenience macros.
o   Add ia64_bsp_adjust() to add or subtract from BSP while taking
    NaT collections into account.
2004-07-10 22:59:30 +00:00
Marcel Moolenaar
5a39cbaf69 Implement makectx(). The makectx() function is used by KDB to create
a PCB from a trapframe for purposes of unwinding the stack. The PCB
is used as the thread context, and all but the thread that entered the
debugger have a valid PCB.
This function can also be used to create a context for the threads
running on the CPUs that have been stopped when the debugger got
entered. This, however, is not done at the time of this commit.
2004-07-10 19:56:00 +00:00
Marcel Moolenaar
cbc174356c Introduce the KDB debugger frontend. The frontend provides a framework
in which multiple (presumably different) debugger backends can be
configured and which provides basic services to those backends.
Besides providing services to backends, it also serves as the single
point of contact for any and all code that wants to make use of the
debugger functions, such as entering the debugger or handling of the
alternate break sequence. For this purpose, the frontend has been
made non-optional.
All debugger requests are forwarded or handed over to the current
backend, if applicable. Selection of the current backend is done by
the debug.kdb.current sysctl. A list of configured backends can be
obtained with the debug.kdb.available sysctl. One can enter the
debugger by writing to the debug.kdb.enter sysctl.
2004-07-10 18:40:12 +00:00
Marcel Moolenaar
72d44f31a6 Introduce the GDB debugger backend for the new KDB framework. The
backend improves over the old GDB support in the following ways:
o  Unified implementation with minimal MD code.
o  A simple interface for devices to register themselves as debug
   ports, ala consoles.
o  Compression by using run-length encoding.
o  Implements GDB threading support.
2004-07-10 17:47:22 +00:00
Brian Somers
0ac4013324 Change the following environment variables to kernel options:
    bootp -> BOOTP
    bootp.nfsroot -> BOOTP_NFSROOT
    bootp.nfsv3 -> BOOTP_NFSV3
    bootp.compat -> BOOTP_COMPAT
    bootp.wired_to -> BOOTP_WIRED_TO

- i.e. back out the previous commit.  It's already possible to
pxeboot(8) with a GENERIC kernel.

Pointed out by: dwmalone
2004-07-08 22:35:36 +00:00
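
After the back-out, the static equivalent of the loader.conf example
below goes in the kernel config file again, e.g.:

    options BOOTP
    options BOOTP_NFSROOT
    options BOOTP_NFSV3
    options BOOTP_WIRED_TO=bge1
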
Marcel Moolenaar
1e3e78d239 MFamd64 (1.275):
Reduce the scope of the Giant lock being held for non-mpsafe syscalls.
There was way too much code being covered.
2004-07-08 21:08:07 +00:00
Marcel Moolenaar
469df33664 Better handle the break instruction trap. The runtime specification
has outlined which break numbers are software interrupts, debugger
breakpoints and ABI specific breaks. We mostly treated all break
numbers we didn't care about as debugger breakpoints.
2004-07-08 16:30:42 +00:00
Brian Somers
59e1ebc9b5 Change the following kernel options to environment variables:
    BOOTP -> bootp
    BOOTP_NFSROOT -> bootp.nfsroot
    BOOTP_NFSV3 -> bootp.nfsv3
    BOOTP_COMPAT -> bootp.compat
    BOOTP_WIRED_TO -> bootp.wired_to

This lets you PXE boot with a GENERIC kernel by putting this sort of thing
in loader.conf:

    bootp="YES"
    bootp.nfsroot="YES"
    bootp.nfsv3="YES"
    bootp.wired_to="bge1"

or even by setting the variables manually from the OK prompt.
2004-07-08 13:40:33 +00:00
Alan Cox
8fe61389a7 - Correct pmap_extract()'s return type. It should be vm_paddr_t, not
vm_offset_t.
 - Convert pmap_extract() to the ANSI style of declaration.
2004-07-05 23:18:48 +00:00
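
The declaration change in miniature (a sketch of both styles):

    /*
     * before (K&R style; vm_offset_t can truncate physical addresses
     * wider than the virtual address space):
     *
     *     vm_offset_t
     *     pmap_extract(pmap, va)
     *             pmap_t pmap;
     *             vm_offset_t va;
     */
    vm_paddr_t pmap_extract(pmap_t pmap, vm_offset_t va);  /* after */
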
John Baldwin
0c0b25ae91 Implement preemption of kernel threads natively in the scheduler rather
than as one-off hacks in various other parts of the kernel:
- Add a function maybe_preempt() that is called from sched_add() to
  determine if a thread about to be added to a run queue should be
  preempted to directly.  If it is not safe to preempt or if the new
  thread does not have a high enough priority, then the function returns
  false and sched_add() adds the thread to the run queue.  If the thread
  should be preempted to but the current thread is in a nested critical
  section, then the flag TDF_OWEPREEMPT is set and the thread is added
  to the run queue.  Otherwise, mi_switch() is called immediately and the
  thread is never added to the run queue since it is switched to directly.
  When exiting an outermost critical section, if TDF_OWEPREEMPT is set,
  then clear it and call mi_switch() to perform the deferred preemption.
- Remove explicit preemption from ithread_schedule() as calling
  setrunqueue() now does all the correct work.  This also removes the
  do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the architecture
  supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a
  chance to run if the architecture supports native preemption since
  the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for
  architectures that support native preemption as it is unnecessary.
- Native preemption is enabled on the same archs that supported ithread
  preemption, namely alpha, i386, and amd64.

This change should largely be a NOP for the default case as committed
except that we will do fewer context switches in a few cases and will
avoid the run queues completely when preempting.

Approved by:	scottl (with his re@ hat)
2004-07-02 20:21:44 +00:00
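
A condensed sketch of the decision path described above
(preemption_is_safe() is a hypothetical stand-in for the safety checks;
TDF_OWEPREEMPT and mi_switch() are from the commit; locking elided):

    int
    maybe_preempt(struct thread *td)
    {
            struct thread *ctd = curthread;

            if (!preemption_is_safe() ||
                td->td_priority >= ctd->td_priority) /* lower is higher */
                    return (0);     /* sched_add() queues td as usual */
            if (ctd->td_critnest > 1) {
                    ctd->td_flags |= TDF_OWEPREEMPT; /* defer the switch */
                    return (0);
            }
            mi_switch(SW_INVOL, td); /* direct switch; no run queue */
            return (1);
    }
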
Marcel Moolenaar
3d8f0528e5 Unbreak build: define __RMAN_RESOURCE_VISIBLE
See also src/sys/sys/rman.h rev. 1.21.
2004-06-30 23:55:14 +00:00
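
The usual consumer-side pattern (a sketch): rman.h rev. 1.21 made
struct resource opaque, so code that still reaches into its internals
must opt back in before the include.

    #define __RMAN_RESOURCE_VISIBLE
    #include <sys/rman.h>
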