Commit Graph

4207 Commits

peter
a6185d4b05 Sync with i386 - set rbp reg to 0 for upcalls as a frame marker, not that
it is guaranteed to be used in userland though.
2004-08-16 22:57:13 +00:00
peter
9f0b195d9f Sync with i386 - trace syscall entry/exit times, and a cosmetic fix. 2004-08-16 22:56:20 +00:00
peter
89798174d7 Sync with i386 - fix bounds check in lapic_create() 2004-08-16 22:55:32 +00:00
peter
42ec62e265 Sync with i386 - pass resource requests up to parent 2004-08-16 22:54:50 +00:00
peter
03a8108c60 Sync with i386 - s/cpu_swtch/cpu_switch/ 2004-08-16 22:53:29 +00:00
peter
b543ac08f9 Sync with i386 - don't count needed bounce pages if loading a buffer that
was created with bus_dmamem_alloc()
2004-08-16 22:53:03 +00:00
peter
3948d5b6c5 Sync with i386 - cosmetic fixes 2004-08-16 22:52:02 +00:00
peter
4fb7f8b389 Catch up with i386 - remove lots of no longer used symbolic constants 2004-08-16 22:51:44 +00:00
peter
2800cddd5f Sync with i386 2004-08-16 22:51:13 +00:00
obrien
9dc119c4b6 Complete 'IA32' -> 'COMPAT_IA32' change for the Linuxulator32. 2004-08-16 12:51:33 +00:00
tjr
72f8d44703 Un-comment LINPROCFS. 2004-08-16 12:39:27 +00:00
obrien
cdbbacbd49 I missed an 'IA32' in the documentation. 2004-08-16 11:15:46 +00:00
obrien
6cd572e133 I'm not sure what tjr envisioned for turning on FreeBSD/i386 rt support,
but make it COMPAT_IA32 for now.
Fix the 'DEBUG' argument code to unbreak the amd64 LINT build.
2004-08-16 11:09:59 +00:00
obrien
9d0b1233c4 Fix the 'DEBUG' argument code to unbreak the amd64 LINT build. 2004-08-16 10:54:25 +00:00
tjr
5847463612 Regen. 2004-08-16 08:07:06 +00:00
tjr
8a2532c456 Add preliminary support for running 32-bit Linux binaries on amd64, enabled
with the COMPAT_LINUX32 option. This is largely based on the i386 MD Linux
emulation bits, but also builds on the 32-bit FreeBSD and generic IA-32
binary emulation work.

Some of this is still a little rough around the edges, and will need to be
revisited before 32-bit and 64-bit Linux emulation support can coexist in
the same kernel.
2004-08-16 07:55:06 +00:00
rwatson
6f9a1cc98b Preemptive anti-footshooting: cause a #error if MP_WATCHDOG is compiled
with SCHED_ULE.
2004-08-15 20:32:40 +00:00
rwatson
abf6ea2973 Add an "options MP_WATCHDOG" to i386. This option allows one of the
logical CPUs on a system to be used as a dedicated watchdog to cause a
drop to the debugger and/or generate an NMI to the boot processor if
the kernel ceases to respond.  A sysctl enables the watchdog running
out of the processor's idle thread; a callout is launched to reset a
timer in the watchdog.  If the callout fails to reset the timer for ten
seconds, the watchdog will fire.  The sysctl allows you to select which
CPU will run the watchdog.

A sample "debug.leak_schedlock" is included, which causes a sysctl to
spin holding sched_lock in order to trigger the watchdog.  On my Xeons,
the watchdog is able to detect this failure mode and break into the
debugger, which cannot otherwise be done without an NMI button.

This option does not currently work with sched_ule due to ule's push
notion of scheduling, similar to machdep.hlt_logical_cpus failing to
work with that scheduler.

At face value, this might seem somewhat inefficient, but there are a
lot of dual-processor Xeons with HTT around, so using one as a watchdog
for testing is not as inefficient as one might fear.
2004-08-15 18:02:09 +00:00
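As an aside for readers unfamiliar with the pattern, the mechanism described above (a callout periodically refreshing a timer, and a watchdog that fires if the timer goes unrefreshed for ten seconds) can be sketched in plain userland C. Everything below is a hypothetical illustration with made-up names, not the MP_WATCHDOG code itself:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static atomic_long last_heartbeat;          /* stand-in for the watchdog timer */

    static void *heartbeat_thread(void *arg)    /* stand-in for the callout */
    {
        (void)arg;
        for (;;) {
            atomic_store(&last_heartbeat, (long)time(NULL));
            sleep(1);
        }
    }

    static void *watchdog_thread(void *arg)     /* runs on the dedicated CPU */
    {
        (void)arg;
        for (;;) {
            if (time(NULL) - atomic_load(&last_heartbeat) >= 10) {
                /* The kernel version drops to the debugger or raises an NMI. */
                fprintf(stderr, "watchdog: no heartbeat for 10 seconds\n");
            }
            sleep(1);
        }
    }

    int main(void)
    {
        pthread_t hb, wd;

        atomic_store(&last_heartbeat, (long)time(NULL));
        pthread_create(&hb, NULL, heartbeat_thread, NULL);
        pthread_create(&wd, NULL, watchdog_thread, NULL);
        pthread_join(wd, NULL);
        return 0;
    }
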
ambrisko
30b952654a Fix the memory scaling bug when basemem was converted to Kbytes from
bytes for AMD64.  Otherwise the AP will be started at 640K which
won't work.  Bug found on a Xeon 64bit system.
2004-08-13 22:30:55 +00:00
davidxu
54683a131e Mark end of frames. 2004-08-11 23:23:05 +00:00
marcel
fbbaea5f90 Add __elfN(dump_thread). This function is called from __elfN(coredump)
to allow dumping per-thread machine specific notes. On ia64 we use this
function to flush the dirty registers onto the backingstore before we
write out the PRSTATUS notes.

Tested on: alpha, amd64, i386, ia64 & sparc64
Not tested on: arm, powerpc
2004-08-11 02:35:06 +00:00
davidxu
8c3963846d As the AMD64 Architecture manual, volume 1, chapter 3.1.2, says, the high
32 bits of %rflags are reserved; they can be written with anything but always
read as zero. We should simulate this in set_regs() as if we were
reading/writing the real hardware %rflags register.
2004-08-10 12:15:27 +00:00
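For illustration only, the hardware behavior being simulated boils down to masking on write; a hypothetical helper (not the actual set_regs() code) might look like:

    #include <stdint.h>

    /*
     * The upper 32 bits of %rflags are reserved: writes may carry anything,
     * but reads always return zero there.  Simulating a write therefore
     * keeps only the low 32 bits.
     */
    static inline uint64_t
    simulated_rflags_after_write(uint64_t requested)
    {
        return (requested & 0xffffffffULL);
    }
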
davidxu
9ec724bfc3 In syscall, always make a copy of the parameters from the trapframe. This
is because some syscalls using set_mcontext can sneakily change the
parameters, and when those syscalls later reference the parameters, they
would wrongly use the register values in mcontext_t.

Approved by: peter
2004-08-09 23:57:59 +00:00
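The idea, shown below with made-up types and names (not the real amd64 trap code), is to snapshot the arguments before dispatching, so that a handler which rewrites the frame via set_mcontext cannot change the arguments out from under itself:

    #include <stdint.h>
    #include <string.h>

    #define MAXARGS 8

    struct fake_frame {                 /* hypothetical stand-in for a trapframe */
        uint64_t args[MAXARGS];
    };

    typedef int (*handler_t)(uint64_t *args);

    static int
    dispatch(struct fake_frame *frame, handler_t handler, int narg)
    {
        uint64_t args[MAXARGS];

        /* Copy first: even if the handler rewrites the frame, args stay valid. */
        memcpy(args, frame->args, (size_t)narg * sizeof(args[0]));
        return (handler(args));
    }
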
alc
1ac3389eba With the advent of pmap locking it makes sense for pmap_copy() to be less
forgiving about inconsistencies in the source pmap.  Also, remove a newline
character terminating a nearby panic string.
2004-08-08 00:31:58 +00:00
scottl
ec3b17da1d Move the definition of M_MEMDESC to a non-optional file. This allows
kernel configurations without the 'mem' device to compile. 2004-08-07 06:21:37 +00:00
2004-08-07 06:21:37 +00:00
alc
584ed250ae Eliminate a variable that became unused in the i386 to amd64 conversion. 2004-08-07 04:31:26 +00:00
markm
b665823aa4 MFi386: Fix mem device. Grrr. 2004-08-06 07:22:36 +00:00
markm
cce2e351df MFi386: sort out the mem device. Grrrr. 2004-08-06 07:20:32 +00:00
markm
124b176fef Fix module builds for i386 and amd64. 2004-08-04 18:30:31 +00:00
alc
97dc3be9d1 Post-locking clean up/simplification, particularly, the elimination of
vm_page_sleep_if_busy() and the page table page's busy flag as a
synchronization mechanism on page table pages.

Also, relocate the inline pmap_unwire_pte_hold() so that it can be used
to shorten _pmap_unwire_pte_hold() on alpha and amd64.  This places
pmap_unwire_pte_hold() next to a comment that more accurately describes
it than _pmap_unwire_pte_hold().
2004-08-04 18:04:44 +00:00
markm
f516045149 Making a loadable null.ko for /dev/(null|zero) proved rather
unpopular, so remove this (mis)feature.

Encouragement provided by:	jhb (and others)
2004-08-03 19:24:54 +00:00
mux
35780dc21a Instead of calling ia32_pause() conditionally on __i386__ or __amd64__
being defined, define and use a new MD macro, cpu_spinwait().  It only
expands to something on i386 and amd64, so the compiled code should be
identical.

Name of the macro found by:	jhb
Reviewed by:	jhb
2004-08-03 18:44:27 +00:00
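A rough sketch of such a machine-dependent macro (the exact FreeBSD definitions may differ) could be:

    /* Expands to the PAUSE hint on x86-class CPUs and to nothing elsewhere. */
    #if defined(__i386__) || defined(__amd64__)
    #define cpu_spinwait()  __asm __volatile("pause")
    #else
    #define cpu_spinwait()  /* nothing */
    #endif
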
dfr
8354a9598d Add style(9) foolishness. 2004-08-03 08:21:48 +00:00
markm
f2823af1c4 Diff reduction WRT i386 version. 2004-08-02 20:36:47 +00:00
dfr
090a3e72f1 Add definitions for TLS relocations. 2004-08-02 19:12:17 +00:00
obrien
2a58f76a64 Fix the build by providing 'PHYS_TO_DMAP' and 'M_MEMDESC'. 2004-08-02 02:07:20 +00:00
markm
b8bae5430c Add the I/O device for those architectures that have it. 2004-08-01 19:37:34 +00:00
scottl
30cf65f35d Turn off PREEMPTION by default while it gets debugged. It's been causing
4 weeks of problems including deadlocks and instant panics.  Note that the
real bugs are likely in the scheduler.
2004-08-01 14:31:45 +00:00
markm
a6c822020d Break out the MI part of the /dev/[k]mem and /dev/io drivers into
their own directory and module, leaving the MD parts in the MD
area (the MD parts _are_ part of the modules). /dev/mem and /dev/io
are now loadable modules, thus taking us one step further towards
a kernel created entirely out of modules. Of course, there is nothing
preventing the kernel from having these statically compiled.
2004-08-01 11:40:54 +00:00
davidxu
3720a53426 Turn on PCB_FULLCTX for set_mcontext; functions like kse_switchin
need to fully restore an asynchronous context that did not come
from a fast syscall.
2004-07-31 14:02:29 +00:00
alc
add997e1ff Add pmap locking to pmap_object_init_pt(). 2004-07-31 06:42:05 +00:00
ps
1be1b43db4 MFia64:
Fix -O builds with gcc 3.4 by defining ffs as __builtin_ffs instead
of creating an inline function that just calls __builtin_ffs.
2004-07-30 16:44:29 +00:00
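In other words (a paraphrase, not the exact header text), the change is roughly from an inline wrapper to a direct macro:

    /* Before: an inline function that merely forwards to the builtin. */
    static __inline int
    old_ffs(int mask)
    {
        return (__builtin_ffs(mask));
    }

    /* After: let ffs() expand directly to GCC's builtin. */
    #define ffs(mask)   __builtin_ffs(mask)
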
alc
ff80a7c7f2 Advance the state of pmap locking on alpha, amd64, and i386.
- Enable recursion on the page queues lock.  This allows calls to
   vm_page_alloc(VM_ALLOC_NORMAL) and UMA's obj_alloc() with the page
   queues lock held.  Such calls are made to allocate page table pages
   and pv entries.
 - The previous change enables a partial reversion of vm/vm_page.c
   revision 1.216, i.e., the call to vm_page_alloc() by vm_page_cowfault()
   now specifies VM_ALLOC_NORMAL rather than VM_ALLOC_INTERRUPT.
 - Add partial locking to pmap_copy().  (As a side-effect, pmap_copy()
   should now be faster on i386 SMP because it no longer generates IPIs
   for TLB shootdown on the other processors.)
 - Complete the locking of pmap_enter() and pmap_enter_quick().  (As of now,
   all changes to a user-level pmap on alpha, amd64, and i386 are performed
   with appropriate locking.)
2004-07-29 18:56:31 +00:00
kan
c1bceab802 Use the newly added __used attribute to keep a static function symbol from
being eliminated.
2004-07-29 18:02:28 +00:00
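A minimal example of the attribute in question (illustrative only): a static function referenced solely from inline assembly would normally be discarded, and marking it __used keeps its symbol around.

    #define __used  __attribute__((__used__))

    /* Kept in the object file even though no C code calls it. */
    static void __used
    asm_only_helper(void)
    {
    }
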
phk
98d8f3741c Move a relic to its correct location(s): Put nfs diskless initialization
calls with the code they call.  (Yet another example of mindless copy&paste).
2004-07-28 21:54:57 +00:00
rwatson
4ab080249a Pass a thread argument into cpu_critical_{enter,exit}() rather than
dereferencing curthread.  It is called only from critical_{enter,exit}(),
which already dereferences curthread.  This doesn't seem to affect SMP
performance in my benchmarks, but improves MySQL transaction throughput
by about 1% on UP on my Xeon.

Head nodding:	jhb, bmilekic
2004-07-27 16:41:01 +00:00
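Shown schematically below with stand-in types (this is not the real MD code), the change is purely about who fetches the thread pointer:

    struct thread {
        int td_dummy;               /* contents irrelevant for this sketch */
    };

    static struct thread thread0;
    static struct thread *curthread = &thread0;

    /* Old shape: the MD routine dereferences curthread a second time. */
    static void
    cpu_critical_enter_old(void)
    {
        struct thread *td = curthread;

        (void)td;                   /* ... MD work on td ... */
    }

    /* New shape: the caller passes in the thread it already looked up. */
    static void
    cpu_critical_enter_new(struct thread *td)
    {
        (void)td;                   /* ... same MD work, one less dereference ... */
    }
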
imp
5a3e9fbaf3 Remove ahb, aha, ie, le and wl devices. They are all ISA/EISA only.
I went ahead and left in the ISA cards that also have pccard
attachments.  There's no way that these devices could attach.

OK'd by: peter
2004-07-22 22:29:45 +00:00
imp
0973e758b4 There is no pcic device on amd64. OLDCARD isn't supported, and
NEWCARD will call it something different, and there are no ISA add-in
devices.
2004-07-22 22:28:34 +00:00
marcel
f77d7b9449 Unify db_stack_trace_cmd(). All it did was look up the thread given
the thread ID and call db_trace_thread().
Since arm has all the logic in db_stack_trace_cmd(), rename the
new DB_COMMAND function to db_stack_trace to avoid conflicts on
arm.
While here, have db_stack_trace parse its own arguments so that
we can use a more natural radix for IDs. If the ID is not a thread
ID, or more precisely when no thread exists with that ID, check whether
there's a process with that ID and return its first thread.
This makes it easier to print stack traces from the ps output.

requested by: rwatson@
tested on: amd64, i386, ia64
2004-07-21 05:07:09 +00:00
alc
55d33c6b7d Remove the allpmaps list. It's unused.
Reviewed by: peter@
2004-07-20 02:40:56 +00:00
jhb
d4ea20b7e1 As a temporary hack, turn off deferred preemptions that are the result of
a fast interrupt handler doing an swi_sched().  This fixed the lockups I
saw on my laptop when using xmms in KDE and on rwatson's MySQL benchmarks
on SMP.  This will eventually be removed and/or modified when I figure out
what the root cause is and fix that.
2004-07-19 16:37:47 +00:00
das
86c293bf54 Make FLT_ROUNDS correctly reflect the dynamic rounding mode. 2004-07-19 08:17:25 +00:00
scottl
7ad60316cd Enable ADAPTIVE_MUTEXES by default by changing the sense of the option to
NO_ADAPTIVE_MUTEXES.  This option has been enabled by default on amd64 for
quite some time, and has been extensively tested on i386 and sparc64.  It
shows measurable performance gains in many circumstances, and few negative
effects.  It would be nice in the future if adaptive mutexes actually went
to sleep after a certain amount of spinning, but that will require quite a
bit more testing.
2004-07-18 15:59:03 +00:00
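The adaptive idea itself, stripped of all kernel detail, is: spin while the lock owner is running on another CPU (it will probably release soon), otherwise block. A toy, non-atomic sketch with made-up names follows:

    #include <sched.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct toy_mutex {
        void *owner;                /* NULL when free */
        bool  owner_on_cpu;         /* is the owner currently executing? */
    };

    static void
    cpu_relax(void)
    {
    #if defined(__i386__) || defined(__amd64__)
        __asm __volatile("pause");
    #endif
    }

    static bool
    try_acquire(struct toy_mutex *m, void *self)
    {
        /* A real lock would use an atomic compare-and-swap here. */
        if (m->owner == NULL) {
            m->owner = self;
            return true;
        }
        return false;
    }

    static void
    toy_adaptive_lock(struct toy_mutex *m, void *self)
    {
        while (!try_acquire(m, self)) {
            if (m->owner_on_cpu)
                cpu_relax();        /* owner is running: spin briefly */
            else
                sched_yield();      /* owner off-CPU: stand-in for sleeping */
        }
    }
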
maxim
531ff1b33d In -CURRENT pseudo devices are not statically assigned at compile time;
remove a stale comment.

PR:		kern/62285
2004-07-18 09:03:12 +00:00
ps
8f0acfb821 Fix the build. pcm is no more. 2004-07-16 21:48:30 +00:00
alc
123cfa6b64 Push down the acquisition and release of the page queues lock into
pmap_protect() and pmap_remove().  In general, they require the lock in
order to modify a page's pv list or flags.  In some cases, however,
pmap_protect() can avoid acquiring the lock.
2004-07-15 18:00:43 +00:00
peter
363f0d38d5 Like on i386, eliminate pv_ptem (which was suggested by alc). This
reduces the size of the pv_entry structure a small but significant amount.

This is implemented a little differently because it isn't so cheap to get
the physical address of the page table page on amd64: instead of it
being directly accessible from the top-level page directory, it is now
two additional tree levels down.  However, in almost all cases, we
recently had the physical address of the page table page a short while
before we needed it, but it slipped through our fingers.  This patch
saves it for when we do need it.  Also, for the one case where we do not
have the ptp paddr, we are always running in curproc context and so we
can do a vtopte-like trick.  I've implemented vtopde() for this purpose.

There is still a CYA entry in pmap_unuse_pt() that needs to be removed.  I
think it can be removed now but I forgot to test with it gone.
2004-07-14 07:13:35 +00:00
davidxu
0a29616acf Add ptrace_clear_single_step(), alpha already has it for years, the function
will be used by ptrace to clear a thread's single step state.
2004-07-13 07:22:56 +00:00
alc
ea94acfaa6 Push down the acquisition and release of the page queues lock into
pmap_remove_pages().  (The implementation of pmap_remove_pages() is
optional.  If pmap_remove_pages() is unimplemented, the acquisition and
release of the page queues lock is unnecessary.)

Remove spl calls from the alpha, arm, and ia64 pmap_remove_pages().
2004-07-13 02:49:22 +00:00
marcel
62f362bd8f MFi386: rev 1.213 -- fix DELAY while the debugger is active.
This also fixes the (runtime) breakage introduced in the previous
commit that was the result of a botched merge. This hasn't even
been compile-tested...
2004-07-11 18:07:55 +00:00
marcel
f18ce8babb Add options KDB and GDB. KDB takes on the function of what DDB used
to be. Both DDB and GDB specify which KDB backends to include.
2004-07-11 03:20:09 +00:00
marcel
1a65a2060a Remove the now unused GDB stubs. See src/sys/gdb/* for the new KDB
backend.
2004-07-11 01:47:26 +00:00
marcel
aae5483213 Mega update for the KDB framework: turn DDB into a KDB backend.
Most of the changes are a direct result of adding thread awareness.
Typically, DDB_REGS is gone. All registers are taken from the
trapframe and backtraces use the PCB based contexts. DDB_REGS was
defined to be a trapframe on all platforms anyway.
Thread awareness introduces the following new commands:
	thread X	switch to thread X (where X is the TID),
	show threads	list all threads.

The backtrace code has been made more flexible so that one can
create backtraces for any thread by giving the thread ID as an
argument to trace.

With this change, ia64 has support for breakpoints.
2004-07-10 23:47:20 +00:00
marcel
1339cad36a MFi386: don't fake the time counter when the debugger is active.
This breaks the fundamental property of DELAY(). Instead, avoid
grabbing clock_lock when kdb_active is non-zero.
2004-07-10 22:42:22 +00:00
marcel
5d30d6766f Remove obsolete prototype of kdb_trap(). 2004-07-10 22:39:56 +00:00
marcel
a2239c0d3c Update for the KDB framework:
o  Make debugging support conditional upon KDB instead of DDB.
o  Remove implementation of Debugger().
o  Don't make setjump() and longjump() conditional upon DDB.
o  s/ddb_on_nmi/kdb_on_nmi/g
o  Call kdb_reenter() when kdb_active is non-zero. Call kdb_trap()
   otherwise.
2004-07-10 22:39:17 +00:00
marcel
006d404cf7 Implement makectx(). The makectx() function is used by KDB to create
a PCB from a trapframe for purposes of unwinding the stack. The PCB
is used as the thread context, and all but the thread that entered the
debugger have a valid PCB.
This function can also be used to create a context for the threads
running on the CPUs that have been stopped when the debugger got
entered. This however is not done at the time of this commit.
2004-07-10 19:56:00 +00:00
marcel
60b53542e9 Introduce the KDB debugger frontend. The frontend provides a framework
in which multiple (presumably different) debugger backends can be
configured and which provides basic services to those backends.
Besides providing services to backends, it also serves as the single
point of contact for any and all code that wants to make use of the
debugger functions, such as entering the debugger or handling of the
alternate break sequence. For this purpose, the frontend has been
made non-optional.
All debugger requests are forwarded or handed over to the current
backend, if applicable. Selection of the current backend is done by
the debug.kdb.current sysctl. A list of configured backends can be
obtained with the debug.kdb.available sysctl. One can enter the
debugger by writing to the debug.kdb.enter sysctl.
2004-07-10 18:40:12 +00:00
marcel
6e0dcca8a9 Introduce the GDB debugger backend for the new KDB framework. The
backend improves over the old GDB support in the following ways:
o  Unified implementation with minimal MD code.
o  A simple interface for devices to register themselves as debug
   ports, ala consoles.
o  Compression by using run-length encoding.
o  Implements GDB threading support.
2004-07-10 17:47:22 +00:00
brian
aae31dbf32 Change the following environment variables to kernel options:
bootp -> BOOTP
    bootp.nfsroot -> BOOTP_NFSROOT
    bootp.nfsv3 -> BOOTP_NFSV3
    bootp.compat -> BOOTP_COMPAT
    bootp.wired_to -> BOOTP_WIRED_TO

- i.e. back out the previous commit.  It's already possible to
pxeboot(8) with a GENERIC kernel.

Pointed out by: dwmalone
2004-07-08 22:35:36 +00:00
brian
2821a50eaa Change the following kernel options to environment variables:
BOOTP -> bootp
    BOOTP_NFSROOT -> bootp.nfsroot
    BOOTP_NFSV3 -> bootp.nfsv3
    BOOTP_COMPAT -> bootp.compat
    BOOTP_WIRED_TO -> bootp.wired_to

This lets you PXE boot with a GENERIC kernel by putting this sort of thing
in loader.conf:

    bootp="YES"
    bootp.nfsroot="YES"
    bootp.nfsv3="YES"
    bootp.wired_to="bge1"

or even setting the variables manually from the OK prompt.
2004-07-08 13:40:33 +00:00
peter
e3e493024d MFi386: various io apic cleanups 2004-07-08 01:42:49 +00:00
peter
dd3c90cb13 MFi386: use rman access methods instead of groping around inside
struct resource
2004-07-08 01:34:24 +00:00
peter
fc114e00d8 MFi386: whitespace nit fix (spare blank line) 2004-07-08 01:32:25 +00:00
peter
e5ce867c01 MFi386: fix up CR0 settings 2004-07-08 01:31:13 +00:00
peter
bb56fe721b MFi386: 1.57: transparently respect alignment/boundary tags 2004-07-08 01:28:33 +00:00
alc
d7d59aa241 Simplify the control flow in pmap_extract(), enabling the elimination of a
PMAP_UNLOCK() call.
2004-07-07 16:47:58 +00:00
alc
fda3c26a4c White space and style changes only. 2004-07-07 02:23:46 +00:00
alc
88e9582a49 Style changes to pmap_extract(). 2004-07-06 02:33:11 +00:00
jhb
696704716d Implement preemption of kernel threads natively in the scheduler rather
than as one-off hacks in various other parts of the kernel:
- Add a function maybe_preempt() that is called from sched_add() to
  determine if a thread about to be added to a run queue should be
  preempted to directly.  If it is not safe to preempt or if the new
  thread does not have a high enough priority, then the function returns
  false and sched_add() adds the thread to the run queue.  If the thread
  should be preempted to but the current thread is in a nested critical
  section, then the flag TDF_OWEPREEMPT is set and the thread is added
  to the run queue.  Otherwise, mi_switch() is called immediately and the
  thread is never added to the run queue since it is switched to directly.
  When exiting an outermost critical section, if TDF_OWEPREEMPT is set,
  then clear it and call mi_switch() to perform the deferred preemption.
- Remove explicit preemption from ithread_schedule() as calling
  setrunqueue() now does all the correct work.  This also removes the
  do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the architecture
  supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a
  chance to run if the architecture supports native preemption since
  the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for
  architectures that support native preemption as it is unnecessary.
- Native preemption is enabled on the same archs that supported ithread
  preemption, namely alpha, i386, and amd64.

This change should largely be a NOP for the default case as committed
except that we will do fewer context switches in a few cases and will
avoid the run queues completely when preempting.

Approved by:	scottl (with his re@ hat)
2004-07-02 20:21:44 +00:00
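To make the first bullet concrete, here is a schematic of the decision flow with stand-in names and stub helpers; the real maybe_preempt() operates on struct thread and calls mi_switch(), which this sketch does not attempt to reproduce:

    #include <stdbool.h>

    struct toy_thread {
        int pri;                    /* lower value means higher priority */
        int critnest;               /* critical-section nesting level */
        int flags;
    };

    #define FLAG_OWEPREEMPT 0x1     /* stand-in for TDF_OWEPREEMPT */

    static struct toy_thread me;    /* stand-in for curthread */

    static bool
    safe_to_preempt(void)
    {
        return true;                /* stub; the real check is scheduler-specific */
    }

    static void
    switch_to(struct toy_thread *td)
    {
        (void)td;                   /* stand-in for mi_switch() */
    }

    /* Returns true if newtd was switched to directly (and so never queued). */
    static bool
    maybe_preempt_sketch(struct toy_thread *newtd)
    {
        if (!safe_to_preempt() || newtd->pri >= me.pri)
            return false;           /* caller adds newtd to the run queue */

        if (me.critnest > 0) {
            /* Inside a critical section: defer, note the owed preemption. */
            me.flags |= FLAG_OWEPREEMPT;
            return false;
        }

        switch_to(newtd);           /* preempt immediately, bypassing the queue */
        return true;
    }
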
imp
d698489e91 We need to make resources visible here as well. 2004-06-30 19:24:26 +00:00
njl
53a70792e9 Add machdep quirks functions. On i386, this disables acpi on systems with
BIOS dates earlier than Jan 1, 1999.  Add prototypes and quirks flags.
2004-06-30 04:42:29 +00:00
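As a toy illustration of the cutoff described above (field names invented, not the real quirk table), the i386 check amounts to a simple date comparison:

    #include <stdbool.h>

    struct bios_date {              /* hypothetical stand-in */
        int year, month, day;
    };

    /* ACPI is disabled when the BIOS predates Jan 1, 1999. */
    static bool
    bios_too_old_for_acpi(const struct bios_date *d)
    {
        return d->year < 1999;
    }
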
jhb
8371bbfa53 Fetch the actual acpi0 device_t and use device_is_attached() to see if
it's alive rather than trying to fetch its softc pointer via its devclass.

Glanced at by:	imp, njl
2004-06-23 17:59:01 +00:00
alc
a7339dd630 Implement the protection check required by the pmap_extract_and_hold()
specification.  This enables the elimination of Giant from that function.
2004-06-23 04:37:14 +00:00
alc
1450b79b4f - Simplify pmap_remove_pages(), eliminating unnecessary indirection.
- Simplify the locking of pmap_is_modified() by converting control flow to
   data flow.
2004-06-20 20:57:06 +00:00
alc
d711478366 Add pmap locking to pmap_is_prefaultable(). 2004-06-20 06:11:00 +00:00
bde
da4e7c693b Backed out previous commit. Blind substitution of dev_t by `struct cdev *'
was just wrong here because the dev_t's are user dev_t's.
2004-06-20 03:52:50 +00:00
alc
72c4b0bd2f Remove unused pt_entry_ts. Remove an unneeded semicolon. 2004-06-19 19:09:08 +00:00
bde
e041a584a6 Include <sys/_lock.h>'s prerequisite <sys/queue.h> before including the
former, not after.

Don't hide this bug by including <sys/queue.h> in <sys/_lock.h>.
2004-06-19 14:58:35 +00:00
peter
8c286596fc Try harder to give new processes a clean initial fpu state. fpu_cleanstate
wasn't actually clean; it was saving the xmm registers as left over by the
bios.  fninit() doesn't clear those.

In fpudna(), instead of doing a fninit() and forgetting to load the initial
mxcsr, do a full fxrstor(&fpu_cleanstate).  Otherwise we hand over whatever
random values are left in the xmm registers by the last user.

I'm not certain of whether this is excessive paranoia or not, but there was
an outright bug in neglecting to set the mxcsr value that caused awk to
SIGFPE in some cases.  Especially for Tim Robbins. :-)

i386 probably should do something about the mxcsr settings too.

Found by:  tjr
2004-06-18 04:01:54 +00:00
njl
bbd8cdb07b Revert last change. If acpi is loaded or compiled into the kernel, its
devclass will be present even if the driver was disabled by a hint.  Using
device_get_softc() provides the right info even if it's overkill.

Explained by:	jhb
2004-06-17 17:27:37 +00:00
alc
4c2e464ea2 Do not preset PG_BUSY on VM_ALLOC_NOOBJ pages. Such pages are not
accessible through an object.  Thus, PG_BUSY serves no purpose.
2004-06-17 06:16:58 +00:00
phk
dfd1f7fd50 Do the dreaded s/dev_t/struct cdev */
Bump __FreeBSD_version accordingly.
2004-06-16 09:47:26 +00:00
alc
315177e2ec Add some lock assertions. Lock a small part of pmap_enter(). 2004-06-16 07:51:19 +00:00
alc
d82b0bb246 Correct an error in the implementation of pmap_is_prefaultable(). When I
introduced this function in revision 1.441, I inverted one of the
comparisons.
2004-06-16 03:11:24 +00:00
alc
72b65ac70d Remove a stale comment. 2004-06-15 19:28:40 +00:00
alc
1ccccce78d Add pmap locking to pmap_extract(), pmap_mincore(), and pmap_remove(). 2004-06-15 07:41:44 +00:00
njl
8d89806526 We only need the devclass_find() result, not the softc. 2004-06-15 02:12:12 +00:00
alc
c14fd5dbce Introduce pmap locking to many of the pmap functions. There is more to
come later.
2004-06-14 01:17:50 +00:00
obrien
a57bdc7ec6 The majority of FreeBSD/amd64 machines are SMP, so use ADAPTIVE_MUTEXES
by default to improve performance.
2004-06-13 23:03:57 +00:00