Commit Graph

336 Commits

mike
24420c8af5 Remove the hack for segsz_t from <sys/types.h>; use the normal
_BSD_FOO_T_ method for defining segsz_t.
2002-04-10 15:58:13 +00:00
mike
7fb662578d Add manifest constants: _LITTLE_ENDIAN, _BIG_ENDIAN, _PDP_ENDIAN, and
_BYTE_ORDER.  These are far more useful than their non-underscored
equivalents as these can be used in restricted namespace environments.
Mark the non-underscored variants as deprecated.
2002-04-10 14:39:14 +00:00
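A minimal sketch of why the underscored names matter: they can be tested in
headers or sources that must stay namespace-clean, assuming <machine/endian.h>
provides the constants as described in the commit above.

    #include <machine/endian.h>	/* _BYTE_ORDER, _LITTLE_ENDIAN, _BIG_ENDIAN */

    #if _BYTE_ORDER == _LITTLE_ENDIAN
    #define	EXAMPLE_NATIVE_LE	1	/* hypothetical local macro */
    #elif _BYTE_ORDER == _BIG_ENDIAN
    #define	EXAMPLE_NATIVE_LE	0
    #else
    #error "unsupported byte order"
    #endif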
phk
77e3582887 GC various bits and pieces of USERCONFIG from all over the place. 2002-04-09 11:18:46 +00:00
phk
3234f33800 GC the "dumplo" variable, which is no longer used.
A lot of sys/*/*/machdep.c seems not to be.
2002-04-07 21:01:37 +00:00
jhb
9d3d63fcbc - Move the MI mutexes sched_lock and Giant from being declared in the
various machdep.c's to being declared in kern_mutex.c.
- Add a new function mutex_init() used to perform early initialization
  needed for mutexes such as setting up thread0's contested lock list
  and initializing MI mutexes.  Change the various MD startup routines
  to call this function instead of duplicating all the code themselves.

Tested on:	alpha, i386
2002-04-02 22:19:16 +00:00
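A rough sketch of the shape of this change; the mtx_init() flags and the
thread0 field name are assumptions, the point being that each MD startup path
now makes one call instead of open-coding the setup.

    /* kern/kern_mutex.c (sketch) */
    struct mtx	sched_lock;
    struct mtx	Giant;

    void
    mutex_init(void)
    {
            /* Early setup needed before any mutex can be locked. */
            LIST_INIT(&thread0.td_contested);	/* field name assumed */
            mtx_init(&sched_lock, "sched lock", MTX_SPIN | MTX_RECURSE);
            mtx_init(&Giant, "Giant", MTX_DEF | MTX_RECURSE);
    }

    /* <arch>/<arch>/machdep.c (sketch): replaces the duplicated code. */
            mutex_init();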
dillon
3ad295d416 Stage-2 commit of the critical*() code. This re-inlines cpu_critical_enter()
and cpu_critical_exit() and moves associated critical prototypes into their
own header file, <arch>/<arch>/critical.h, which is only included by the
three MI source files that need it.

Back out and re-apply improperly committed syntactical cleanups made to files
that were still under active development.  Back out improperly committed program
structure changes that moved localized declarations to the top of two
procedures.  Partially re-apply one of the program structure changes to
move 'mask' into an intermediate block rather than in three separate
sub-blocks to make the code more readable.  Re-integrate bug fixes that Jake
made to the sparc64 code.

Note: In general, developers should not gratuitously move declarations out
of sub-blocks.  They are where they are for reasons of structure, grouping,
readability, compiler-localizability, and to avoid developer-introduced bugs
similar to several found in recent years in the VFS and VM code.

Reviewed by:	jake
2002-04-01 23:51:23 +00:00
phk
87273d930a Centralize the "bootdev" and "dumpdev" variables. They are still pretty
bogus all things considered, but at least now they don't camouflage as
being MD variables.
2002-03-31 07:15:28 +00:00
alc
34c6c68a2d Use the MI vm_map_growstack() instead of the MD grow_stack() in trap(). Remove
the MD grow_stack().
2002-03-30 20:44:31 +00:00
jeff
dff418f166 Add a new mtx_init option "MTX_DUPOK" which allows duplicate acquires of locks
with this flag.  Remove the dup_list and dup_ok code from subr_witness.  Now
we just check for the flag instead of doing string compares.

Also, switch the process lock, process group lock, and uma per cpu locks over
to this interface.  The original mechanism did not work well for uma because
per cpu lock names are unique to each zone.

Approved by:	jhb
2002-03-27 09:23:41 +00:00
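A sketch of the new flag in use; the lock shown and the exact mtx_init()
argument list of the time are assumptions.

    /*
     * With MTX_DUPOK, witness permits acquiring a second lock of the same
     * class while this one is held, instead of consulting a string-compared
     * dup list.
     */
    mtx_init(&pgrp->pg_mtx, "process group", MTX_DEF | MTX_DUPOK);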
dillon
dc5aafeb94 Compromise for critical*()/cpu_critical*() recommit. Cleanup the interrupt
disablement assumptions in kern_fork.c by adding another API call,
cpu_critical_fork_exit().  Cleanup the td_savecrit field by moving it
from MI to MD.  Temporarily move cpu_critical*() from <arch>/include/cpufunc.h
to <arch>/<arch>/critical.c (stage-2 will clean this up).

Implement interrupt deferral for i386 that allows interrupts to remain
enabled inside critical sections.  This also fixes an IPI interlock bug,
and requires uses of icu_lock to be enclosed in a true interrupt disablement.

This is the stage-1 commit.  Stage-2 will occur after stage-1 has stabilized,
and will move cpu_critical*() into its own header file(s) + other things.
This commit may break non-i386 architectures in trivial ways.  This should
be temporary.

Reviewed by:	core
Approved by:	core
2002-03-27 05:39:23 +00:00
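Consumers are unchanged by this; a sketch of the usual pattern, with the new
i386 behaviour noted in the comment.

    critical_enter();
    /*
     * Code here must not be re-entered by an interrupt handler.  On i386,
     * after this commit, interrupts are left enabled; any that fire are
     * recorded and replayed once critical_exit() leaves the section.
     */
    critical_exit();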
obrien
8842976cdd Guard against redefining __gnuc_va_list. 2002-03-24 11:25:46 +00:00
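The guard presumably follows the usual GCC convention; a sketch, with the
underlying typedef purely illustrative.

    #ifndef __GNUC_VA_LIST
    #define	__GNUC_VA_LIST
    typedef char *__gnuc_va_list;	/* illustrative i386-style definition */
    #endif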
obrien
4c2f517045 ASM versions of __FBSDID. 2002-03-23 02:01:27 +00:00
benno
d0f7d01438 Collect all functions for copying to and from userspace into the one file.
This allows me to reimplement [sf]u{byte,word} as separate functions and not
as calls to copy{in,out}.
2002-03-21 23:45:59 +00:00
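For context, the wrapper form being replaced looks roughly like the sketch
below; the standalone versions instead set up their own fault handling
(standard FreeBSD signatures assumed).

    int
    fubyte(const void *uaddr)
    {
            u_char c;

            /* Old approach: lean on copyin() for the fault handling. */
            if (copyin(uaddr, &c, sizeof(c)) != 0)
                    return (-1);
            return (c);
    }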
benno
6932067f77 - Make all inlines for manipulating supervisor-level registers accept/return
register_t values.
- Implement an inline for isync.
2002-03-21 13:07:31 +00:00
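A sketch of the sort of inlines this describes, built on the standard PowerPC
instructions; the exact header they live in is an assumption.

    static __inline void
    isync(void)
    {

            __asm __volatile ("isync" : : : "memory");
    }

    static __inline register_t
    mfmsr(void)
    {
            register_t value;

            __asm __volatile ("mfmsr %0" : "=r"(value));
            return (value);
    }

    static __inline void
    mtmsr(register_t value)
    {

            __asm __volatile ("mtmsr %0" : : "r"(value));
    }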
benno
d99dec6eef GC some unused, bogus interrupt functions and replace them with proper
implementations of intr_disable and intr_restore.
2002-03-21 12:04:58 +00:00
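A plausible shape for the replacements, assuming they gate the MSR's
external-interrupt enable bit (PSL_EE) using inlines like the ones above.

    static __inline register_t
    intr_disable(void)
    {
            register_t msr;

            msr = mfmsr();
            mtmsr(msr & ~PSL_EE);	/* mask external interrupts */
            return (msr);
    }

    static __inline void
    intr_restore(register_t msr)
    {

            mtmsr(msr);		/* restore the saved enable state */
    }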
jeff
ec342524a2 Remove references to vm_zone.h and switch over to the new uma API. 2002-03-21 01:11:31 +00:00
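The flavour of such a switch; the old zinit() arguments and the uma_zcreate()
prototype of the period are recalled from memory and may differ in detail.

    /* Before (vm_zone): */
    foo_zone = zinit("FOO", sizeof(struct foo), 0, 0, 1);
    fp = zalloc(foo_zone);
    zfree(foo_zone, fp);

    /* After (uma): */
    foo_zone = uma_zcreate("FOO", sizeof(struct foo),
        NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
    fp = uma_zalloc(foo_zone, M_WAITOK);
    uma_zfree(foo_zone, fp);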
alfred
f1b2b9896d Remove __P.
Reviewed by: benno
2002-03-20 23:17:50 +00:00
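The mechanical shape of a __P removal, for reference (prototype is
hypothetical).

    /* Before (K&R-compatible form): */
    void	foo __P((int, char *));

    /* After: */
    void	foo(int, char *);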
jhb
715dfdbcbe Change the way we ensure td_ucred is NULL if DIAGNOSTIC is defined.
Instead of caching the ucred reference, just go ahead and eat the
decrement and increment of the refcount.  Now that Giant is pushed down
into crfree(), we no longer have to get Giant in the common case.  In the
case when we are actually free'ing the ucred, we would normally free it on
the next kernel entry, so the cost there is not new, just in a different
place.  This also removes td_cache_ucred from struct thread.  This is
still only done #ifdef DIAGNOSTIC.

Tested on:	i386, alpha
2002-03-20 21:09:09 +00:00
benno
95b805ab3b Increment pmap_pvo_count in the right place. 2002-03-20 05:25:33 +00:00
jeff
2923687da3 This is the first part of the new kernel memory allocator. This replaces
malloc(9) and vm_zone with a slab like allocator.

Reviewed by:	arch@
2002-03-19 09:11:49 +00:00
benno
d2e1ce9477 Changes and fixes in preparation for UMA:
- Bootstrap pvo entries are now allocated by stealing pages.
- Just return if we're pmap_enter'ing a mapping that's already there.  Don't
  remove it and re-enter it.
2002-03-17 23:58:12 +00:00
benno
1a9cabc484 Lowercase all of the trap names. 2002-03-17 23:55:11 +00:00
benno
15a0342057 Clean up and fix up copyin and copyout. 2002-03-17 23:54:55 +00:00
des
a032109782 Move the definition of PT_[GS]ET{,DB,FP}REGS from the MD ptrace.h to the
MI ptrace.h, since all platforms define them.  Keep the MD ptrace.h around
for FIX_SSTEP (which is currently only needed on Alpha).
2002-03-16 00:25:53 +00:00
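With the request codes in the MI header, a consumer looks the same on every
platform; a small userland sketch with minimal error handling.

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <sys/wait.h>
    #include <machine/reg.h>
    #include <err.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(int argc, char **argv)
    {
            struct reg regs;
            pid_t pid;

            if (argc != 2)
                    errx(1, "usage: getregs pid");
            pid = (pid_t)strtol(argv[1], NULL, 10);
            if (ptrace(PT_ATTACH, pid, NULL, 0) == -1)
                    err(1, "PT_ATTACH");
            waitpid(pid, NULL, 0);
            if (ptrace(PT_GETREGS, pid, (caddr_t)&regs, 0) == -1)
                    err(1, "PT_GETREGS");
            printf("fetched registers for pid %d\n", (int)pid);
            ptrace(PT_DETACH, pid, NULL, 0);
            return (0);
    }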
benno
a16c1fce78 Correct a typo. (* that should've been &) 2002-03-11 07:09:42 +00:00
mike
b8cc0d1207 o Don't require long long support in bswap64() functions.
o In i386's <machine/endian.h>, macros have some advantages over
  inlines, so change some inlines to macros.
o In i386's <machine/endian.h>, ungarbage collect word_swap_int()
  (previously __uint16_swap_uint32); it has some uses on i386's with
  PDP endianness.

Submitted by:	bde

o Move a comment up in <machine/endian.h> that was accidentally moved
  down a few revisions ago.
o Reenable userland's use of optimized inline-asm versions of
  byteorder(3) functions.
o Fix ordering of prototypes vs. redefinition of byteorder(3)
  functions, so that the non-GCC (libc asm) case has proper
  prototypes.
o Add proper prototypes for byteorder(3) functions in <sys/param.h>.
o Prevent redundant duplicate prototypes by making use of the
  _BYTEORDER_PROTOTYPED define.
o Move the bswap16(), bswap32(), bswap64() C functions into MD space
  for platforms in which asm versions don't exist.  This significantly
  reduces the complexity of some things at the cost of duplicate code.

Reviewed by:	bde
2002-03-09 21:02:16 +00:00
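One way to meet the first point above: a bswap64() that only does 32-bit
arithmetic, swapping the halves through a union.  Shown as portable userland
C; the in-tree code differs in detail.

    #include <stdint.h>

    static uint32_t
    swap32(uint32_t x)
    {
            return ((x >> 24) | ((x >> 8) & 0xff00) |
                ((x << 8) & 0xff0000) | (x << 24));
    }

    static uint64_t
    swap64(uint64_t x)
    {
            union {
                    uint64_t q;
                    uint32_t l[2];
            } in, out;

            in.q = x;
            /* Swap each half, then exchange the halves: no 64-bit shifts. */
            out.l[0] = swap32(in.l[1]);
            out.l[1] = swap32(in.l[0]);
            return (out.q);
    }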
benno
65393002bd Install the DSI and ISI trap handlers at their appropriate locations. 2002-03-07 12:22:44 +00:00
benno
5784524774 Copy the "implementation" of pmap_prefault from sparc64. 2002-03-07 12:22:08 +00:00
benno
2ee5d124d0 Move tunable initialisation so it can get access to physmem. 2002-03-07 10:15:17 +00:00
benno
a4f95150fb Calculate physmem. 2002-03-07 10:09:24 +00:00
arr
ed36876e15 - Move a comment from being on the same line as a #ifdef to the line
following it.  This should have gone in the previous commit, but
  I misviewed Bruce's patch.

Requested by: bde
2002-02-28 21:52:08 +00:00
benno
e2a5a67db8 cpu_switch now works, for kthreads at least. 2002-02-28 12:06:49 +00:00
benno
9ed9d1965d Various cleanups. 2002-02-28 12:00:24 +00:00
benno
63f5bcde8c - Prevent the decrementer interrupt handler from nesting.
- Catch some more cases of PSL_EE and PSL_RI getting out of sync.
2002-02-28 11:57:47 +00:00
benno
2f6cdd2140 - Modify pmap_activate so it only marks the pmap as active.
- Add a pmap_deactivate function.
2002-02-28 11:55:44 +00:00
benno
77973ba896 GC an unused variable in cpu_fork(). 2002-02-28 08:48:58 +00:00
arr
0aaddb66e9 - Fix panic() message and a couple style nits that snuck in from the
recent diagnostics commit (rev. 1.84).
2002-02-28 08:28:14 +00:00
benno
a0268a0622 Make fork work, at least for kthreads. Switching still has some issues. 2002-02-28 03:24:07 +00:00
benno
6c392f40ba - Rearrange the sequence of events in powerpc_init() somewhat.
- Catch another instance of PSL_EE being cleared without PSL_RI.
2002-02-28 03:15:49 +00:00
benno
bd730b6871 - When enabling/disabling interrupts, set/clear both PSL_EE and PSL_RI, not
just PSL_EE.
- Make cpu_critical_enter/exit independent of save_intr/restore_intr.
2002-02-28 03:07:48 +00:00
benno
821d20af41 Add a missing (. 2002-02-28 03:04:33 +00:00
benno
7c729fe961 Implement the following functions:
- pmap_remove
	- pmap_kremove
	- pmap_qremove
2002-02-28 02:54:16 +00:00
benno
9ca8c0b6f6 Remove most of the usage of critical_enter/exit.
I put these in to match the use of spl*() in the NetBSD code I was basing this
on, but they appear to cause problems.

I'm doing this in a separate commit so as to be able to refer back if locking
becomes an issue at a later stage.
2002-02-28 02:45:10 +00:00
silby
230f96f3ce Fix a horribly suboptimal algorithm in the vm_daemon.
In order to determine what to page out, the vm_daemon checks
reference bits on all pages belonging to all processes.  Unfortunately,
the algorithm used reacted badly with shared pages; each shared page
would be checked once per process sharing it; this caused an O(N^2)
growth of TLB invalidations.  The algorithm has been changed so that
each page will be checked only 16 times.

Prior to this change, a fork/sleepbomb of 1300 processes could cause
the vm_daemon to take over 60 seconds to complete, effectively
freezing the system for that time period.  With this change
in place, the vm_daemon completes in less than a second.  Any system
with hundreds of processes sharing pages should benefit from this change.

Note that the vm_daemon is only run when the system is under extreme
memory pressure.  It is likely that many people with loaded systems saw
no symptoms of this problem until they reached the point where swapping
began.

Special thanks go to dillon, peter, and Chuck Cranor, who helped me
get up to speed with vm internals.

PR:		33542, 20393
Reviewed by:	dillon
MFC after:	1 week
2002-02-27 18:03:02 +00:00
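To put rough numbers on it: with the 1300-process sleepbomb above, a page
shared by all of them was previously examined (and its TLB entries potentially
shot down) about 1300 times per vm_daemon pass, once per sharing process; with
the cap it is examined at most 16 times, no matter how many processes map it.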
tmm
3ed05b7b89 Add the following functions/macros to support byte order conversions and
device drivers for bus systems with an endianness other than the CPU's (using
interfaces compatible to NetBSD):

- bswap16() and bswap32(). These have optimized implementations on some
  architectures; for those that don't, there exist generic implementations.
- macros to convert from a certain byte order to host byte order and vice
  versa, using a naming scheme like le16toh(), htole16().
  These are implemented using the bswap functions.
- stream bus space access functions, which do not perform a byte order
  conversion (while the normal access functions would if the bus endianness
  differs from the CPU endianness).

htons(), htonl(), ntohs() and ntohl() are implemented using the new
functions above for kernel usage. None of the above interfaces is currently
exported to user land.

Make use of the new functions in a few places where local implementations
of the same functionality existed.

Reviewed by:	mike, bde
Tested on alpha by:	mike
2002-02-27 17:16:18 +00:00
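A sketch of how the conversion macros reduce to the swap functions, using the
underscored byte-order constants; the 32- and 64-bit variants follow the same
pattern.

    #if _BYTE_ORDER == _LITTLE_ENDIAN
    #define	htole16(x)	((uint16_t)(x))	/* host order already matches */
    #define	le16toh(x)	((uint16_t)(x))
    #define	htobe16(x)	bswap16((x))
    #define	be16toh(x)	bswap16((x))
    #else	/* _BYTE_ORDER == _BIG_ENDIAN */
    #define	htole16(x)	bswap16((x))
    #define	le16toh(x)	bswap16((x))
    #define	htobe16(x)	((uint16_t)(x))
    #define	be16toh(x)	((uint16_t)(x))
    #endif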
benno
5cad5d55f8 Add makeoptions NO_WERROR=true so that we can build. =) 2002-02-26 09:55:17 +00:00
benno
d4d68be2fe Make atomic_cmpset_32 correctly return 0 on failure. 2002-02-24 23:31:49 +00:00
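The contract that fix restores, as seen from a typical retry loop (variables
are illustrative).

    /*
     * atomic_cmpset_32(p, exp, new) returns non-zero only if *p held 'exp'
     * and was replaced by 'new'; a return of 0 means the update failed and
     * the caller must retry.
     */
    volatile uint32_t *counterp;	/* points at the shared counter */
    uint32_t old;

    do {
            old = *counterp;
    } while (atomic_cmpset_32(counterp, old, old + 1) == 0);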
benno
b2f2771fbb Don't call critical_enter()/critical_exit() around calls to pmap_pvo_enter()
as it does its own handling of critical sections.
2002-02-23 05:55:51 +00:00
julian
53eb1d9219 Add some DIAGNOSTIC code.
While in userland, keep the thread's ucred reference in a shadow
field so that the usual place to store it is NULL.
If DIAGNOSTIC is not set, the thread ucred is kept valid until the next
kernel entry, at which time it is checked against the process cred
and possibly corrected. Produces a BIG speedup in
kernels with INVARIANTS set. (A previous commit corrected it
for the non INVARIANTS case already)

Reviewed by:	dillon@freebsd.org
2002-02-22 23:58:22 +00:00
julian
cb1f971d38 Add change to the PPC to keep it in step with i386 and MI code
Pointy hat this direction please...
2002-02-19 03:27:08 +00:00