Commit Graph

1060 Commits

Marcel Moolenaar
4da47b2fec Add __elfN(dump_thread). This function is called from __elfN(coredump)
to allow dumping per-thread machine-specific notes. On ia64 we use this
function to flush the dirty registers onto the backing store before we
write out the PRSTATUS notes.

Tested on: alpha, amd64, i386, ia64 & sparc64
Not tested on: arm, powerpc
2004-08-11 02:35:06 +00:00
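
A minimal sketch of the hook this adds, with the prototype inferred from the
description above (most architectures can leave the body empty):

    /* MD hook called from __elfN(coredump) once per thread. */
    void
    __elfN(dump_thread)(struct thread *td, void *dst, size_t *off)
    {
            /* On ia64 (per the commit above): flush the dirty
             * registers to the backing store here, so the PRSTATUS
             * notes written afterwards see consistent state. */
    }
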
Alan Cox
2c0680e659 Add pmap locking to many of the functions.
Implement the protection check required by the pmap_extract_and_hold()
specification.

Remove the acquisition and release of Giant from pmap_extract_and_hold() and
pmap_protect().

Many thanks to Ken Smith for resolving a sparc64-specific initialization
problem in my original patch.

Tested by: kensmith@
2004-08-10 20:53:26 +00:00
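
A hedged sketch of the pmap_extract_and_hold() contract described above;
pmap_extract_locked() is a hypothetical helper standing in for the MD
page-table walk and protection check:

    vm_page_t
    pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot)
    {
            vm_paddr_t pa;
            vm_page_t m;

            m = NULL;
            vm_page_lock_queues();
            PMAP_LOCK(pmap);        /* per-pmap lock, not Giant */
            /* hypothetical: returns 0 unless the mapping permits prot */
            pa = pmap_extract_locked(pmap, va, prot);
            if (pa != 0) {
                    m = PHYS_TO_VM_PAGE(pa);
                    vm_page_hold(m);        /* keep the page around */
            }
            PMAP_UNLOCK(pmap);
            vm_page_unlock_queues();
            return (m);
    }
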
Alan Cox
684a62b7bf - Push down the acquisition and release of Giant into pmap_enter_quick()
on those architectures without pmap locking.
 - Eliminate the acquisition and release of Giant in vm_map_pmap_enter().
2004-08-04 22:03:16 +00:00
Mark Murray
d23a262fc5 Making a loadable null.ko for /dev/(null|zero) proved rather
unpopular, so remove this (mis)feature.

Encouragement provided by:	jhb (and others)
2004-08-03 19:24:54 +00:00
Maxime Henrion
9f1b87f106 Instead of calling ia32_pause() conditionally on __i386__ or __amd64__
being defined, define and use a new MD macro, cpu_spinwait().  It only
expands to something on i386 and amd64, so the compiled code should be
identical.

Name of the macro found by:	jhb
Reviewed by:	jhb
2004-08-03 18:44:27 +00:00
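
A plausible sketch of the macro described above; in the tree each
<machine/cpufunc.h> carries its own definition, collapsed here into one
#ifdef for brevity:

    #if defined(__i386__) || defined(__amd64__)
    static __inline void
    ia32_pause(void)
    {
            __asm __volatile("pause");      /* spin-wait hint */
    }
    #define cpu_spinwait()  ia32_pause()
    #else
    #define cpu_spinwait()                  /* expands to nothing */
    #endif

    /* MI spin loops can then drop their #ifdefs entirely: */
    while (!atomic_cmpset_acq_int(&lk->lk_lock, 0, 1))
            cpu_spinwait();
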
Mark Murray
8ab2f5ecc5 Break out the MI part of the /dev/[k]mem and /dev/io drivers into
their own directory and module, leaving the MD parts in the MD
area (the MD parts _are_ part of the modules). /dev/mem and /dev/io
are now loadable modules, thus taking us one step further towards
a kernel created entirely out of modules. Of course, there is nothing
preventing the kernel from having these statically compiled.
2004-08-01 11:40:54 +00:00
Alan Cox
9bb0e06861 - Push down the acquisition and release of Giant into pmap_protect() on
those architectures without pmap locking.
 - Eliminate the acquisition and release of Giant from vm_map_protect().

(Translation: mprotect(2) runs to completion without touching Giant on
alpha, amd64, i386 and ia64.)
2004-07-30 20:38:30 +00:00
Robert Watson
1a8cfbc450 Pass a thread argument into cpu_critical_{enter,exit}() rather than
dereference curthread.  It is called only from critical_{enter,exit}(),
which already dereferences curthread.  This doesn't seem to affect SMP
performance in my benchmarks, but improves MySQL transaction throughput
by about 1% on UP on my Xeon.

Head nodding:	jhb, bmilekic
2004-07-27 16:41:01 +00:00
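
A sketch of the calling convention, assuming the 2004-era shape of
critical_enter(); the point is that curthread is dereferenced once, in the
caller:

    void
    critical_enter(void)
    {
            struct thread *td;

            td = curthread;                 /* the only dereference */
            if (td->td_critnest == 0)
                    cpu_critical_enter(td); /* was: cpu_critical_enter() */
            td->td_critnest++;
    }
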
Alan Cox
bfdf81ac47 Use kmem_alloc_nofault() rather than kmem_alloc_pageable() for allocating
KVA for explicitly managed mappings, i.e., mappings created with
pmap_qenter().
2004-07-23 06:49:49 +00:00
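
A usage sketch of the distinction, with the page array and count assumed to
exist in the caller:

    vm_offset_t va;

    /* Reserve KVA only: no backing objects, so a stray fault in this
     * range panics instead of silently materializing a page. */
    va = kmem_alloc_nofault(kernel_map, npages * PAGE_SIZE);
    if (va == 0)
            return (ENOMEM);
    pmap_qenter(va, pages, npages);         /* explicitly managed mapping */
    /* ... use the mapping ... */
    pmap_qremove(va, npages);               /* tear down before freeing KVA */
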
Marcel Moolenaar
fd32d93b97 Unify db_stack_trace_cmd(). All it did was look up the thread given
the thread ID and call db_trace_thread().
Since arm has all the logic in db_stack_trace_cmd(), rename the
new DB_COMMAND function to db_stack_trace to avoid conflicts on
arm.
While here, have db_stack_trace parse its own arguments so that
we can use a more natural radix for IDs. If the ID is not a thread
ID, or more precisely when no thread exists with the ID, check whether
a process with that ID exists and return its first thread.
This makes it easier to print stack traces from the ps output.

requested by: rwatson@
tested on: amd64, i386, ia64
2004-07-21 05:07:09 +00:00
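
A sketch of the lookup order described above; argument parsing and locking
are elided, and kdb_thr_lookup() stands in for the TID lookup:

    static void
    db_stack_trace(db_expr_t tid, boolean_t hastid, db_expr_t count,
        char *modif)
    {
            struct thread *td;
            struct proc *p;

            td = kdb_thr_lookup(tid);       /* first, treat tid as a TID */
            if (td == NULL) {
                    p = pfind(tid);         /* else, maybe it is a PID */
                    if (p != NULL)
                            td = FIRST_THREAD_IN_PROC(p);
            }
            if (td != NULL)
                    db_trace_thread(td, count);
    }
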
Maxim Konovalov
aa355a2679 In -CURRENT, pseudo devices are not statically assigned at compile time;
remove a stale comment.

PR:		kern/62285
2004-07-18 09:03:12 +00:00
Alan Cox
3d2e54c317 Push down the acquisition and release of the page queues lock into
pmap_protect() and pmap_remove().  In general, they require the lock in
order to modify a page's pv list or flags.  In some cases, however,
pmap_protect() can avoid acquiring the lock.
2004-07-15 18:00:43 +00:00
David Xu
53dbf30349 Add ptrace_clear_single_step(); alpha has had it for years. The function
will be used by ptrace to clear a thread's single-step state.
2004-07-13 07:22:56 +00:00
Marcel Moolenaar
e8d5eed9c1 The SC_DISABLE_DDBKEY option has been renamed to SC_DISABLE_KDBKEY. 2004-07-11 03:21:24 +00:00
Marcel Moolenaar
8bcb1e9e84 Add options KDB and GDB. KDB takes on the function of what DDB used
to be. Both DDB and GDB specify which KDB backends to include.
2004-07-11 03:20:09 +00:00
Marcel Moolenaar
37224cd3fc Mega update for the KDB framework: turn DDB into a KDB backend.
Most of the changes are a direct result of adding thread awareness.
Typically, DDB_REGS is gone. All registers are taken from the
trapframe and backtraces use the PCB based contexts. DDB_REGS was
defined to be a trapframe on all platforms anyway.
Thread awareness introduces the following new commands:
	thread X	switch to thread X (where X is the TID),
	show threads	list all threads.

The backtrace code has been made more flexible so that one can
create backtraces for any thread by giving the thread ID as an
argument to trace.

With this change, ia64 has support for breakpoints.
2004-07-10 23:47:20 +00:00
Marcel Moolenaar
a5a3d76272 Update for the KDB framework:
o  Make debugging code conditional upon KDB instead of DDB.
o  Call kdb_enter() instead of Debugger().
o  Remove implementation of Debugger().
o  Check kdb_active instead of db_active.
o  Call kdb_trap() according to the new world order.
2004-07-10 23:10:07 +00:00
Marcel Moolenaar
eda20064f9 Update for the KDB framework:
o  Call kdb_enter() instead of Debugger().
2004-07-10 23:06:41 +00:00
Marcel Moolenaar
2aaf890c2f Remove obsolete prototype of kdb_trap(). 2004-07-10 23:05:38 +00:00
Marcel Moolenaar
5a39cbaf69 Implement makectx(). The makectx() function is used by KDB to create
a PCB from a trapframe for purposes of unwinding the stack. The PCB
is used as the thread context and all but the thread that entered the
debugger have a valid PCB.
This function can also be used to create a context for the threads
running on the CPUs that were stopped when the debugger was
entered. This, however, is not done at the time of this commit.
2004-07-10 19:56:00 +00:00
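
A sketch of the idea; the field names are illustrative, not any particular
architecture's PCB layout:

    /* Capture enough CPU state from the trapframe that the unwinder
     * can treat this thread like any other switched-out thread. */
    void
    makectx(struct trapframe *tf, struct pcb *pcb)
    {
            pcb->pcb_pc = tf->tf_pc;        /* where to start unwinding */
            pcb->pcb_sp = tf->tf_sp;        /* and on which stack */
    }
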
Marcel Moolenaar
cbc174356c Introduce the KDB debugger frontend. The frontend provides a framework
in which multiple (presumably different) debugger backends can be
configured and which provides basic services to those backends.
Besides providing services to backends, it also serves as the single
point of contact for any and all code that wants to make use of the
debugger functions, such as entering the debugger or handling of the
alternate break sequence. For this purpose, the frontend has been
made non-optional.
All debugger requests are forwarded or handed over to the current
backend, if applicable. Selection of the current backend is done by
the debug.kdb.current sysctl. A list of configured backends can be
obtained with the debug.kdb.available sysctl. One can enter the
debugger by writing to the debug.kdb.enter sysctl.
2004-07-10 18:40:12 +00:00
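
A userland sketch of driving those sysctls (root assumed, error handling
elided; the last write drops the machine into the debugger):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
            char buf[128];
            size_t len = sizeof(buf);

            /* list the configured backends */
            sysctlbyname("debug.kdb.available", buf, &len, NULL, 0);
            printf("backends: %.*s\n", (int)len, buf);

            /* select a backend, then enter the debugger */
            sysctlbyname("debug.kdb.current", NULL, NULL, "ddb", 3);
            sysctlbyname("debug.kdb.enter", NULL, NULL, "1", 1);
            return (0);
    }
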
Marcel Moolenaar
72d44f31a6 Introduce the GDB debugger backend for the new KDB framework. The
backend improves over the old GDB support in the following ways:
o  Unified implementation with minimal MD code.
o  A simple interface for devices to register themselves as debug
   ports, ala consoles.
o  Compression by using run-length encoding.
o  Implements GDB threading support.
2004-07-10 17:47:22 +00:00
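
A simplified sketch of the remote protocol's run-length encoding mentioned
above; gdb_tx_char() is an assumed emitter, and a real encoder must also
skip the protocol-special count bytes ('$', '#', '+', '-'):

    /* "c*N" repeats c a further (N - 29) times, so "0* " expands to
     * "0000" (' ' is 32; 32 - 29 = 3 extra zeroes).  N stays printable. */
    static void
    gdb_tx_run(char c, int n)       /* transmit c, n times */
    {
            int rep;

            while (n > 0) {
                    if (n >= 4) {   /* short runs are cheaper verbatim */
                            rep = (n - 1 > 97) ? 97 : n - 1;
                            gdb_tx_char(c);
                            gdb_tx_char('*');
                            gdb_tx_char((char)(rep + 29));
                            n -= rep + 1;
                    } else {
                            gdb_tx_char(c);
                            n--;
                    }
            }
    }
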
Marius Strobl
64e6e863ac - Add missing <sys/module.h>. [1]
- Remove unused includes.
- Sort includes.

Reported by:	Pyun YongHyeon <yongari@kt-is.co.kr> [1]
2004-07-09 23:12:22 +00:00
Warner Losh
48375ebec9 These don't need __RMAN_RESOURCE_VISIBLE now that rman is visible. 2004-07-03 20:56:16 +00:00
Warner Losh
c82338ca27 Really remove __RMAN_RESOURCE_VISIBLE 2004-07-03 20:49:00 +00:00
Warner Losh
8acf75a06d Use the rman_* functions in preference to reaching into struct resource.
Remove __RMAN_RESOURCE_VISIBLE after compilation confirms it is now not
needed.
2004-07-03 20:48:01 +00:00
John Baldwin
0c0b25ae91 Implement preemption of kernel threads natively in the scheduler rather
than as one-off hacks in various other parts of the kernel:
- Add a function maybe_preempt() that is called from sched_add() to
  determine if a thread about to be added to a run queue should be
  preempted to directly.  If it is not safe to preempt or if the new
  thread does not have a high enough priority, then the function returns
  false and sched_add() adds the thread to the run queue.  If the thread
  should be preempted to but the current thread is in a nested critical
  section, then the flag TDF_OWEPREEMPT is set and the thread is added
  to the run queue.  Otherwise, mi_switch() is called immediately and the
  thread is never added to the run queue since it is switched to directly.
  When exiting an outermost critical section, if TDF_OWEPREEMPT is set,
  then clear it and call mi_switch() to perform the deferred preemption.
- Remove explicit preemption from ithread_schedule() as calling
  setrunqueue() now does all the correct work.  This also removes the
  do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the architecture
  supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a
  chance to run if the architecture supports native preemption since
  the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for
  architectures that support native preemption as it is unnecessary.
- Native preemption is enabled on the same archs that supported ithread
  preemption, namely alpha, i386, and amd64.

This change should largely be a NOP for the default case as committed
except that we will do fewer context switches in a few cases and will
avoid the run queues completely when preempting.

Approved by:	scottl (with his re@ hat)
2004-07-02 20:21:44 +00:00
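
A condensed sketch of the decision path described above; the flag and the
two-argument mi_switch() are per the commit text, and the remaining safety
checks are elided:

    int
    maybe_preempt(struct thread *td)
    {
            struct thread *ctd = curthread;

            /* Lower numeric priority is better; not better: just queue. */
            if (td->td_priority >= ctd->td_priority)
                    return (0);
            if (ctd->td_critnest > 1) {
                    /* Nested critical section: defer the preemption. */
                    ctd->td_flags |= TDF_OWEPREEMPT;
                    return (0);
            }
            mi_switch(SW_INVOL, td);        /* switch to td directly */
            return (1);
    }

When this returns 1, sched_add() skips the run queue entirely, which is
where the "avoid the run queues completely when preempting" note above
comes from.
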
Marius Strobl
4eae91a8f7 These need __RMAN_RESOURCE_VISIBLE, too. 2004-06-30 23:21:07 +00:00
Scott Long
770fffe05b Retire BUS_DMAMAP_NSEGS for sparc64 2004-06-28 04:04:43 +00:00
Scott Long
8e0bfc6b32 Switch sparc64 busdma to use a dynamically allocated segment list rather
than a stack-limited list.  This removes the artificial limit on s/g list
size.
2004-06-28 03:49:13 +00:00
David E. O'Brien
a82b25f9b2 Better OFW console support on Sun Ultra2 machines.
Ultra2 users may want to set OFWCONS_POLL_HZ to a value of '20'.
I have left the default value at '4', as higher values can consume more
CPU than is acceptable, and we don't yet have a consensus on
the optimal value.

Submitted by:	Pyun YongHyeon <yongari@kt-is.co.kr>
2004-06-24 02:57:11 +00:00
Bruce Evans
4c5f10a672 Backed out previous commit. Blind substitution of dev_t by `struct cdev *'
was just wrong here because the dev_t's are user dev_t's.
2004-06-20 03:52:50 +00:00
Poul-Henning Kamp
89c9c53da0 Do the dreaded s/dev_t/struct cdev */
Bump __FreeBSD_version accordingly.
2004-06-16 09:47:26 +00:00
Scott Long
c08701fd03 Add esp to the sparc64 GENERIC 2004-06-10 05:24:34 +00:00
Scott Long
c31d0cf77b Port the NetBSD esp(4) driver. This only includes the sbus front-end, so
its primary use is for the FEPS/FAS366 SCSI found in Sun Ultra 1e and 2
machines.  Once the pci front-end is ported, this driver can replace the
amd(4) driver.

The code as-is is fairly stable.  I've disabled tagged-queueing until I can
figure out a corruption bug related to it.  I'm importing it now so that
people with these machines can (finally) stop netbooting and report bugs
before 5.3.
2004-06-10 05:11:39 +00:00
Poul-Henning Kamp
9a6dc4b647 Remove filename+line number from panic messages. 2004-06-06 21:26:49 +00:00
Poul-Henning Kamp
6b2f1cf005 Add missing <sys/module.h> #includes 2004-06-04 11:52:25 +00:00
Tim J. Robbins
cc05397ffc Remove checks for curthread == NULL - it can't happen. 2004-06-03 10:22:47 +00:00
Poul-Henning Kamp
fd360128ff Add missing <sys/module.h> instances which were shadowed by the nested
include in <sys/kernel.h>
2004-06-03 05:58:30 +00:00
Tim J. Robbins
fa2a4d0595 Move TDF_DEADLKTREAT into td_pflags (and rename it accordingly) to avoid
having to acquire sched_lock when manipulating it in lockmgr(), uiomove(),
and uiomove_fromphys().

Reviewed by:	jhb
2004-06-03 01:47:37 +00:00
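
A before/after sketch of the locking change, assuming the renamed flag is
TDP_DEADLKTREAT per the parenthetical above:

    /* before: td_flags is shared state, guarded by sched_lock */
    mtx_lock_spin(&sched_lock);
    td->td_flags |= TDF_DEADLKTREAT;
    mtx_unlock_spin(&sched_lock);

    /* after: td_pflags is only ever touched by the owning thread,
     * so a plain read-modify-write is safe */
    td->td_pflags |= TDP_DEADLKTREAT;
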
Bosko Milekic
099a0e588c Bring in mbuma to replace mballoc.
mbuma is an Mbuf & Cluster allocator built on top of a number of
extensions to the UMA framework, all included herein.

Extensions to UMA worth noting:
  - Better layering between slab <-> zone caches; introduce
    Keg structure, which splits the slab cache away from the
    zone structure and allows multiple zones to be stacked
    on top of a single Keg (single type of slab cache);
    perhaps we should look into defining a subset API on
    top of the Keg for special use by malloc(9),
    for example.
  - UMA_ZONE_REFCNT zones can now be added, and reference
    counters automagically allocated for them within the end
    of the associated slab structures.  uma_find_refcnt()
    does a kextract to fetch the slab struct reference from
    the underlying page, and looks up the corresponding refcnt.

mbuma things worth noting:
  - integrates mbuf & cluster allocations with extended UMA
    and provides caches for commonly-allocated items; defines
    several zones (two primary, one secondary) and two kegs.
  - change up certain code paths that always used to do:
    m_get() + m_clget() to instead just use m_getcl() and
    try to take advantage of the newly defined secondary
    Packet zone.
  - netstat(1) and systat(1) quickly hacked up to do basic
    stat reporting but additional stats work needs to be
    done once some other details within UMA have been taken
    care of and it becomes clearer how stats will work
    within the modified framework.

From the user perspective, one implication is that the
NMBCLUSTERS compile-time option is no longer used.  The
maximum number of clusters is still capped off according
to maxusers, but it can be made unlimited by setting
the kern.ipc.nmbclusters boot-time tunable to zero.
Work should be done to write an appropriate sysctl
handler allowing dynamic tuning of kern.ipc.nmbclusters
at runtime.

Additional things worth noting/known issues (READ):
   - One report of 'ips' (ServeRAID) driver acting really
     slow in conjunction with mbuma.  Need more data.
     The latest report is that ips sucks equally with
     and without mbuma.
   - Giant leak in NFS code sometimes occurs, can't
     reproduce but currently analyzing; brueffer is
     able to reproduce but THIS IS NOT an mbuma-specific
     problem and currently occurs even WITHOUT mbuma.
   - Issues in network locking: there is at least one
     code path in the rip code where one or more locks
     are acquired and we end up in m_prepend() with
     M_WAITOK, which causes WITNESS to whine from within
     UMA.  Current temporary solution: force all UMA
     allocations to be M_NOWAIT from within UMA for now
     to avoid deadlocks unless WITNESS is defined and we
     can determine with certainty that we're not holding
     any locks when we're M_WAITOK.
   - I've seen at least one weird socketbuffer empty-but-
     mbuf-still-attached panic.  I don't believe this
     to be related to mbuma but please keep your eyes
     open, turn on debugging, and capture crash dumps.

This change removes more code than it adds.

A paper is available detailing the change and considering
various performance issues; it was presented at BSDCan2004:
http://www.unixdaemons.com/~bmilekic/netbuf_bmilekic.pdf
Please read the paper for Future Work and implementation
details, as well as credits.

Testing and Debugging:
    rwatson,
    brueffer,
    Ketrien I. Saihr-Kesenchedra,
    ...
Reviewed by: Lots of people (for different parts)
2004-05-31 21:46:06 +00:00
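
A sketch of the code-path change called out above, using the era's
M_DONTWAIT spelling:

    struct mbuf *m;

    /* before: two allocator trips, mbuf first, then cluster */
    MGETHDR(m, M_DONTWAIT, MT_DATA);
    if (m != NULL) {
            MCLGET(m, M_DONTWAIT);
            if ((m->m_flags & M_EXT) == 0) {
                    m_freem(m);
                    m = NULL;
            }
    }

    /* after: one allocation from the secondary Packet zone */
    m = m_getcl(M_DONTWAIT, MT_DATA, M_PKTHDR);
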
Thomas Moestl
65e29c4822 Retire cpu_sched_exit(); it is not used any more. 2004-05-26 12:09:39 +00:00
Thomas Moestl
3e519a2cf4 Move the per-CPU vmspace pointer fixup that is required before a
struct vmspace is freed from cpu_sched_exit() to pmap_release().

This has the advantage of being able to rely on MI code to decide
when a free should occur, instead of having to inspect the reference
count ourselves.

At the same time, turn the per-CPU vmspace pointer into a pmap pointer,
so that pmap_release() can deal with pmaps exclusively.

Reviewed (and embarrassing bug spotted) by: jake
2004-05-26 12:06:52 +00:00
Marius Strobl
c9407be9ec Use unsigned types for the arguments of the atomic(9) operations,
as described in the man page and as done on all other architectures.

OK'ed by:	tmm
2004-05-22 00:52:16 +00:00
Marius Strobl
980284e38f Switch from BSD-style u_intXX_t to ISO C99 uintXX_t. 2004-05-22 00:47:26 +00:00
Thomas Moestl
048ec99ef2 In cpu_sched_exit(), we must check vm_refcnt against 0, not 1, since
exit1() decrements the reference count before calling this function.
2004-05-20 18:41:07 +00:00
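
The refcount arithmetic in a sketch; the cleanup call is a hypothetical
stand-in for the per-CPU pointer fixup:

    /* exit1() has already dropped its reference, so "no remaining
     * users" is a count of zero by the time we run: */
    if (vm->vm_refcnt == 0)         /* was erroneously: == 1 */
            fixup_pcpu_vmspace(vm); /* hypothetical */
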
Bruce Evans
b2321e7cdb Moved most of the "MI" definitions and declarations from <machine/profile.h>
to <sys/gmon.h>.  Cleaned them up a little by not attempting to ifdef
for the incomplete and out-of-date support for GUPROF in userland, as in
the sparc64 version.
2004-05-19 15:41:26 +00:00
Stefan Farfeleder
b1aa0ba527 <stdint.h> should define WINT_M{AX,IN} independently of whether WCHAR_MIN is
defined.  Otherwise, first including <wchar.h> and then <stdint.h> leads to no
WINT_M{AX,IN} at all.

PR:		64956
Approved by:	das (mentor)
2004-05-18 16:04:57 +00:00
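
A sketch of the guard fix, with the limits illustrative for a 32-bit
wint_t:

    /* before (broken): WINT_* hid behind WCHAR_MIN's guard, so a
     * prior <wchar.h> inclusion, which defines WCHAR_MIN, suppressed
     * the WINT_* definitions entirely */
    #ifndef WCHAR_MIN
    #define WCHAR_MIN       (-0x7fffffff - 1)
    #define WCHAR_MAX       0x7fffffff
    #define WINT_MIN        (-0x7fffffff - 1)
    #define WINT_MAX        0x7fffffff
    #endif

    /* after: each pair gets its own guard */
    #ifndef WCHAR_MIN
    #define WCHAR_MIN       (-0x7fffffff - 1)
    #define WCHAR_MAX       0x7fffffff
    #endif
    #ifndef WINT_MIN
    #define WINT_MIN        (-0x7fffffff - 1)
    #define WINT_MAX        0x7fffffff
    #endif
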
Peter Wemm
31f1cfb7e9 Oops, I left a duplicate 'relocbase' declaration.
Submitted by:  Koop Mast <kwm@rainbow-runner.nl>
2004-05-17 22:26:17 +00:00
Peter Wemm
e8855d4f97 Make a small revision to the API between the elf linker core and the
elf_reloc() backends for two reasons.  First, to support the possibility
of there being two elf linkers in the kernel (e.g., amd64), and second, to
pass the relocbase explicitly (for relocating .o format kld files).
2004-05-16 20:00:28 +00:00
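
A sketch of the interface change, with the parameter list assumed from the
description:

    /* before: the relocation base was implied by the linker file */
    int elf_reloc(linker_file_t lf, const void *data, int type);

    /* after: relocbase is explicit, so .o-format kld files can be
     * relocated and a second elf linker (e.g. amd64's) can share
     * the MI core */
    int elf_reloc(linker_file_t lf, Elf_Addr relocbase, const void *data,
        int type);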