Commit Graph

432 Commits

Author SHA1 Message Date
David E. O'Brien
1792335469 style(9) the structure definitions. 2001-09-05 01:36:46 +00:00
Peter Wemm
b53f9c45f9 Nuke #if 0'ed "setredzone()" stub. We never used it, and probably
never will.  I've implemented an optional redzone as part of the KSE
upage breakup.
2001-09-04 08:36:46 +00:00
Doug Rabson
093a61588e Add a working version of setjmp/longjmp.
Obtained from: Intel's EFI toolkit.
2001-09-03 13:54:50 +00:00
Peter Wemm
05a806502b Since we're cross compiling from x86, ignore the x86 CPUTYPE by default. 2001-09-03 07:58:32 +00:00
Peter Wemm
d73178aea8 Don't conflict with sysctl debug.mddebug 2001-09-03 04:49:19 +00:00
Peter Wemm
02de199140 Sync with i386 / alpha. Whitespace unindent / style prep for kse. 2001-09-02 10:07:09 +00:00
Peter Wemm
494e7e3923 Merge from i386: various cleanups including moving the map calculations
to MI code.  This gets ia64 to compile again.
2001-09-02 07:47:47 +00:00
Peter Wemm
5a4b540da0 Same as i386/i386/pmap.c: clean up some style. This is irrelevant since
it is inside #if 0'ed code, but it would be a shame if this stuff got
cut/pasted elsewhere.
2001-08-31 06:25:28 +00:00
Mike Barcroft
03516cfeb0 o Remove some GCCisms in src/powerpc/include/endian.h.
o Unify <machine/endian.h>'s across all architectures.
o Make bswapXX() functions use a different spelling of u_int16_t and
  friends to reduce namespace pollution.  The bswapXX() functions
  don't actually exist, but we'll probably import these at some
  point.  At least one driver (if_de) depends on bswapXX() for big
  endian cases.
o Deprecate byteorder(3) prototypes from <sys/types.h>, these are
  now prototyped indirectly in <arpa/inet.h>.
o Deprecate in_addr_t and in_port_t typedefs in <sys/types.h>, these
  are now typedef'd in <arpa/inet.h>.
o Change byteorder(3) prototypes to use standards compliant uint32_t
  (spelled __uint32_t to reduce namespace pollution).
o Document new preferred headers and standards compliance.

Discussed with:	bde
PR:		29946
Reviewed by:	bmilekic
2001-08-30 00:04:19 +00:00
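
A rough illustration of the namespace point above: spelling the argument and
return types with the double-underscore forms keeps <machine/endian.h> from
dragging u_int16_t and friends into the user's namespace. This is a sketch
under that assumption, not the committed header text:

static __inline __uint16_t
__bswap16(__uint16_t _x)
{
	/* swap the two bytes of a 16 bit quantity */
	return ((_x >> 8) | ((_x << 8) & 0xff00));
}

static __inline __uint32_t
__bswap32(__uint32_t _x)
{
	/* reverse the four bytes of a 32 bit quantity */
	return ((_x >> 24) | ((_x >> 8) & 0xff00) |
	    ((_x << 8) & 0xff0000) | ((_x << 24) & 0xff000000));
}
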
Peter Wemm
76cb0cadf1 Enable hardwiring of things like tunables from embedded environments
that do not start from loader(8).
2001-08-27 05:11:53 +00:00
Peter Wemm
547a9e66fd vm_page_zero_idle() is no longer MD. 2001-08-25 04:54:25 +00:00
Peter Wemm
95a6562598 Strip out some #if's for old implementations of global data pointers. 2001-08-21 22:14:13 +00:00
Peter Wemm
e8ebc08f80 Make COMPAT_43 optional again. XXX we need COMPAT_FBSD3 etc for this
stuff.
2001-08-21 02:32:59 +00:00
David E. O'Brien
589278dbae style(9) and make consistent across platforms 2001-08-16 09:29:35 +00:00
Andrey A. Chernov
2a54e09dff OFF_T -> OFF (more standard style) 2001-08-15 19:50:59 +00:00
Andrey A. Chernov
a6314641a4 Add OFF_T_MAX/OFF_T_MIN 2001-08-15 19:25:08 +00:00
David E. O'Brien
059d1e91d8 Style changes to commonize the various platforms. 2001-08-15 04:02:41 +00:00
John Baldwin
688ebe120c - Close races with signals and other AST's being triggered while we are in
the process of exiting the kernel.  The ast() function now loops as long
  as the PS_ASTPENDING or PS_NEEDRESCHED flags are set.  It returns with
  preemption disabled so that any further AST's that arrive via an
  interrupt will be delayed until the low-level MD code returns to user
  mode.
- Use u_int's to store the tick counts for profiling purposes so that we
  do not need sched_lock just to read p_sticks.  This also closes a
  problem where the call to addupc_task() could screw up the arithmetic
  due to non-atomic reads of p_sticks.
- Axe need_proftick(), aston(), astoff(), astpending(), need_resched(),
  clear_resched(), and resched_wanted() in favor of direct bit operations
  on p_sflag.
- Fix up locking with sched_lock some.  In addupc_intr(), use sched_lock
  to ensure pr_addr and pr_ticks are updated atomically with setting
  PS_OWEUPC.  In ast() we clear pr_ticks atomically with clearing
  PS_OWEUPC.  We also do not grab the lock just to test a flag.
- Simplify the handling of Giant in ast() slightly.

Reviewed by:	bde (mostly)
2001-08-10 22:53:32 +00:00
Peter Wemm
2aca0c28d3 Zap 'ptrace(PT_READ_U, ...)' and 'ptrace(PT_WRITE_U, ...)' since they
are a really nasty interface that should have been killed long ago
when 'ptrace(PT_[SG]ETREGS' etc came along.  The entity that they
operate on (struct user) will not be around much longer since it
is part-per-process and part-per-thread in a post-KSE world.

gdb does not actually use this except for the obscure 'info udot'
command which does a hexdump of as much of the child's 'struct user'
as it can get.  It carries its own #defines so it doesn't break
compiles.
2001-08-08 05:25:15 +00:00
John Baldwin
5ebe32c611 Grab Giant around page faults. ia64 boots again in the simulator now. 2001-08-07 17:31:42 +00:00
Doug Rabson
2f6fec30cf Make this compile again. 2001-08-06 12:52:55 +00:00
Doug Rabson
da572e14d4 Remove usage of nonexistent vm_mtx. 2001-08-06 12:52:17 +00:00
John Baldwin
4ce050bf39 GC some obsolete alpha code. 2001-07-31 14:35:36 +00:00
Jake Burkholder
7e5102989e Use a machine dependent type, Elf_Hashelt, for the elements of the elf
dynamic symbol table buckets and chains.  The sparc64 toolchain uses 32
bit .hash entries, unlike other 64 bit architectures (alpha), which use
64 bit entries.

Discussed with: dfr, jdp
2001-07-31 03:46:39 +00:00
Peter Wemm
0b27d7104f Make PMAP_SHPGPERPROC tunable. One shouldn't need to recompile a kernel
for this, since it is easy to run into with large systems with lots of
shared mmap space.

Obtained from:	yahoo
2001-07-27 01:08:59 +00:00
Peter Wemm
bd40659f85 Call the early tunable setup functions as soon as kern_envp is available.
Some things depend on hz being set not long after this.
2001-07-26 23:06:44 +00:00
Bosko Milekic
49f854f926 - Do not handle the per-CPU containers in mbuf code as though the cpuids
were indices in a dense array. The cpuids are a sparse set and treat
  them as such, setting up containers only for CPUs activated during
  mb_init().

- Fix netstat(1) and systat(1) to treat the per-CPU stats area as a sparse
  map, in accordance with the above.

This allows us to properly boot with certain CPUs deactivated. However, if
we later decide to re-activate said CPUs, we will barf until we decide to
implement CPU spinon/spinoff callback hooks to allow for said CPUs' per-CPU
containers to get configured on their activation.

Reported by: mjacob
Partially (sys/ diffs) Submitted by: mjacob
2001-07-26 18:47:46 +00:00
Brian S. Dean
17bbfb5897 Add 'hwatch' and 'dhwatch' ddb commands analogous to 'watch' and
'dwatch'.  The new commands install hardware watchpoints if supported
by the architecture and if there are enough registers to cover the
desired memory area.

No objection by: audit@, hackers@

MFC after: 2 weeks
2001-07-11 03:15:25 +00:00
Julian Elischer
0b1ae8097d A set of changes to reduce the number of include files the kernel
takes from /usr/include. I cannot check them on alpha.. (will try beast)

Briefly looked at by: Warner Losh <imp@harmony.village.org>
2001-07-08 04:56:07 +00:00
Matthew Dillon
7197571105 Move vm_page_zero_idle() from machine-dependent sections to a
machine-independent source file, vm/vm_zeroidle.c.  It was exactly the
same for all platforms and updating them all was getting annoying.
2001-07-05 01:32:42 +00:00
Matthew Dillon
6d03d577a5 Reorg vm_page.c into vm_page.c, vm_pageq.c, and vm_contig.c (for contigmalloc).
Also removed some spl's and added some VM mutexes, but they are not actually
used yet, so this commit does not really make any operational changes
to the system.

vm_page.c relates to vm_page_t manipulation, including high level deactivation,
activation, etc...  vm_pageq.c relates to finding free pages and acquiring
exclusive access to a page queue (exclusivity part not yet implemented).
And the world still builds... :-)
2001-07-04 23:27:09 +00:00
Matthew Dillon
0cddd8f023 With Alfred's permission, remove vm_mtx in favor of a fine-grained approach
(this commit is just the first stage).  Also add various GIANT_ macros to
formalize the removal of Giant, making it easy to test in a more piecemeal
fashion. These macros will allow us to test fine-grained locks to a degree
before removing Giant, and also after, and to remove Giant in a piecemeal
fashion via sysctl's on those subsystems which the authors believe can
operate without Giant.
2001-07-04 16:20:28 +00:00
John Baldwin
d2a5bcc3d3 Allow Giant to be recursed when a process terminates. 2001-07-03 05:09:48 +00:00
Brooks Davis
53dab5fe7b gif(4) and stf(4) modernization:
- Remove gif dependencies from stf.
 - Make gif and stf into modules
 - Make gif cloneable.

PR:		kern/27983
Reviewed by:	ru, ume
Obtained from:	NetBSD
MFC after:	1 week
2001-07-02 21:02:09 +00:00
Warner Losh
1f94b9005c Don't need the .keep_me files. Obrien and I committed past each other.
Add 0-9 to the list of possible kernel names at matsushita-san's
suggestion.

Submitted by: Makoto MATSUSHITA-san <matusita@jp.FreeBSD.org>
2001-07-01 23:35:44 +00:00
David E. O'Brien
5e6ded4212 Ensure sys/${MACHINE}/compile/FOO exists
Reviewed by: arch, imp, peter, and the USENIX terminal room secret kernel cabal
2001-06-30 15:16:29 +00:00
Warner Losh
e1d0d8a941 Really do proper keepme files in the compile directories. Use
.cvsignore file for [A-Za-z]* to keep these directories around rather
than waste a file on .keepme.  This should also make people's built
trees play nice with CVS.

Idea for .cvsignore: peter (although I suggested the regexp)
Pointed out by: Makoto MATSUSHITA-san <matusita@jp.FreeBSD.org>
Llama's costuming by: Fernando Llamas
2001-06-30 14:38:32 +00:00
David E. O'Brien
791eca5f7b Ensure sys/${MACHINE}/compile/FOO exists
Reviewed by: arch, imp, peter and
  the USENIX terminal room secret kernel cabal
2001-06-30 07:12:34 +00:00
Warner Losh
1b0a8621e6 Repo copy i8237.h to dev/ic so we can get rid of some of the final vestiges
of includes of i386 files from non-i386 ports.
2001-06-30 05:29:11 +00:00
John Baldwin
7aa7260e4a Move ast() and userret() to sys/kern/subr_trap.c now that they are MI. 2001-06-29 19:51:37 +00:00
John Baldwin
6be523bca7 Add a new MI pointer to the process' trapframe p_frame instead of using
various differently named pointers buried under p_md.

Reviewed by:	jake (in principle)
2001-06-29 11:10:41 +00:00
John Baldwin
8b2cea330f Catch up to mbuf allocator changes from last September so this compiles
again.
2001-06-27 14:57:17 +00:00
John Baldwin
b49a082a33 Make this compile again. Broken since June 1. 2001-06-27 14:46:44 +00:00
John Baldwin
5ae10ded6e Fix cut-n-paste brain-o.
Pointy-hat to:	me
2001-06-25 16:38:09 +00:00
John Baldwin
06c836bbca - Grab the proc lock around CURSIG and postsig(). Don't release the proc
lock until after grabbing the sched_lock to avoid CURSIG racing with
  psignal.
- Don't grab Giant for addupc_task() as it isn't needed.

Reported by:	tegge (signal race), bde (addupc_task a while back)
2001-06-22 23:05:11 +00:00
Peter Wemm
e634a8c804 oops. prepare_usermode() died in August 2000 in the MI and x86 code.
Issue raised by:	scottl
2001-06-15 09:59:27 +00:00
David E. O'Brien
f9b58b41a3 Fix style of defines. 2001-06-09 05:21:17 +00:00
Joerg Wunsch
e774b25111 Nuke the various poorly maintained copies of ioctl_fd.h. The file is
not machine-dependent, thus it has been moved out (repo-copied) into
<sys/fdcio.h>.
2001-06-06 06:15:03 +00:00
John Baldwin
262c9f8a3b Don't hold sched_lock across addupc_task().
Reported by:	David Taylor <davidt@yadt.co.uk>
Submitted by:	bde
2001-06-06 00:57:24 +00:00
Poul-Henning Kamp
0d31cbfab7 Properly wrap mtx_intr_enable() macro in "do $bla while (0)" 2001-06-02 08:17:42 +00:00
Thomas Moestl
d279178df7 Clean up the code exporting interrupt statistics via sysctl a bit:
- move the sysctl code to kern_intr.c
- do not use INTRCNT_COUNT, but rather eintrcnt - intrcnt to determine
  the length of the intrcnt array
- move the declarations of intrnames, eintrnames, intrcnt and eintrcnt
  from machine-dependent include files to sys/interrupt.h
- remove the hw.nintr sysctl, it is not needed.
- fix various style bugs

Requested by:	bde
Reviewed by:	bde (some time ago)
2001-06-01 13:23:28 +00:00
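
The eintrcnt - intrcnt trick mentioned above works because the two symbols
bracket the counter array, so its size in bytes is simply their distance. A
minimal sketch of a sysctl handler built on that idea (illustrative only;
see kern_intr.c for the real code):

#include <sys/param.h>
#include <sys/sysctl.h>

extern u_long intrcnt[], eintrcnt[];	/* bracket the MD counter array */

static int
sysctl_intrcnt(SYSCTL_HANDLER_ARGS)
{
	/* the array length is the distance between the two symbols */
	return (sysctl_handle_opaque(oidp, intrcnt,
	    (char *)eintrcnt - (char *)intrcnt, req));
}
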
John Baldwin
f8584ee1e1 Catch up to the axeing of MFS and fix the ia64 build.
Forgotten by:	a Danish axe-wielder
2001-05-30 23:06:14 +00:00
John Baldwin
c9dc25fa44 - Catch up to the VM mutex changes.
- Sort includes in a few places.
2001-05-30 00:03:13 +00:00
Poul-Henning Kamp
888a8e3567 Remove MFS options from all example kernel configs. 2001-05-29 18:49:06 +00:00
Ruslan Ermilov
99d300a1ec - FDESC, FIFO, NULL, PORTAL, PROC, UMAP and UNION file
systems were repo-copied from sys/miscfs to sys/fs.

- Renamed the following file systems and their modules:
  fdesc -> fdescfs, portal -> portalfs, union -> unionfs.

- Renamed corresponding kernel options:
  FDESC -> FDESCFS, PORTAL -> PORTALFS, UNION -> UNIONFS.

- Install header files for the above file systems.

- Removed bogus -I${.CURDIR}/../../sys CFLAGS from userland
  Makefiles.
2001-05-23 09:42:29 +00:00
David E. O'Brien
ef76752043 Style changes -- revert ordering to mostly two revs ago.
Embellish some comments, fix tab'ing.

Requested by:	bde
2001-05-18 01:40:40 +00:00
John Baldwin
7a08bae6ec - Move the setting of bootverbose to a MI SI_SUB_TUNABLES SYSINIT.
- Attach a writable sysctl to bootverbose (debug.bootverbose) so it can be
  toggled after boot.
- Move the printf of the version string to a SI_SUB_COPYRIGHT SYSINIT just
  after the display of the copyright message instead of doing it by hand in
  three MD places.
2001-05-17 22:28:46 +00:00
David E. O'Brien
0dfc89c188 Consistently define the rune types.
Follow NetBSD's lead and add a _BSD_MBSTATE_T_ type.
2001-05-16 22:32:44 +00:00
David E. O'Brien
1123bf8862 Move the int typedefs to the top so they can be used in defining other types.
Ensure every platform has __offsetof.
Make multiple inclusion detection consistent with other
  <platform>/include/*.h files.
2001-05-16 22:21:43 +00:00
John Baldwin
f0ba29575d Lock the procfs functions for doing a single step and reading/writing
registers better.  Hold sched_lock not only for checking the flag but
also while performing the actual operation to ensure the process doesn't
get swapped out by another CPU while the operation is being performed.
2001-05-16 00:47:27 +00:00
John Baldwin
dec54ac5b5 "Sir, the deorbit burn completed succesfully."
RIP {sys/machine}/ipl.h.
2001-05-15 23:30:37 +00:00
John Baldwin
8bd57f8fc2 Remove unneeded includes of sys/ipl.h and machine/ipl.h. 2001-05-15 23:22:29 +00:00
Poul-Henning Kamp
ab9f3b292e Convert DEVFS from an "opt-in" to an "opt-out" option.
If for some reason DEVFS is undesired, the "NODEVFS" option is
needed now.

Pending any significant issues, DEVFS will be made mandatory in
-current on july 1st so that we can start reaping the full
benefits of having it.
2001-05-13 20:52:40 +00:00
John Baldwin
1efb92b7ca Simplify the vm fault trap handling code a bit by using if-else instead of
duplicating code in the then case and then using a goto to jump around
the else case.
2001-05-11 23:50:08 +00:00
John Baldwin
ba228f6d96 - Split out the support for per-CPU data from the SMP code. UP kernels
have per-CPU data and gdb on the i386 at least needs access to it.
- Clean up includes in kern_idle.c and subr_smp.c.

Reviewed by:	jake
2001-05-10 17:45:49 +00:00
John Baldwin
28a24ae515 Add include of sys/mutex.h and resort include of sys/lock.h. 2001-05-09 16:56:48 +00:00
John Baldwin
d90b453427 Add needed sys/lock.h include. 2001-05-09 16:55:59 +00:00
Poul-Henning Kamp
a468031ce8 Actually biofinish(struct bio *, struct devstat *, int error) is more general
than the bioerror().

Most of this patch is generated by scripts.
2001-05-06 20:00:03 +00:00
John Baldwin
6caa8a1501 Overhaul of the SMP code. Several portions of the SMP kernel support have
been made machine independent and various other adjustments have been made
to support Alpha SMP.

- It splits the per-process portions of hardclock() and statclock() off
  into hardclock_process() and statclock_process() respectively.  hardclock()
  and statclock() call the *_process() functions for the current process so
  that UP systems will run as before.  For SMP systems, it is simply necessary
  to ensure that all other processors execute the *_process() functions when the
  main clock functions are triggered on one CPU by an interrupt.  For the alpha
  4100, clock interrupts are delivered in a staggered broadcast fashion, so
  we simply call hardclock/statclock on the boot CPU and call the *_process()
  functions on the secondaries.  For x86, we call statclock and hardclock as
  usual and then call forward_hardclock/statclock in the MD code to send an IPI
  to cause the AP's to execute forward_hardclock/statclock which then call the
  *_process() functions.
- forward_signal() and forward_roundrobin() have been reworked to be MI and to
  involve less hackery.  Now the cpu doing the forward sets any flags, etc. and
  sends a very simple IPI_AST to the other cpu(s).  AST IPIs now just basically
  return so that they can execute ast() and don't bother with setting the
  astpending or needresched flags themselves.  This also removes the loop in
  forward_signal() as sched_lock closes the race condition that the loop worked
  around.
- need_resched(), resched_wanted() and clear_resched() have been changed to take
  a process to act on rather than assuming curproc so that they can be used to
  implement forward_roundrobin() as described above.
- Various other SMP variables have been moved to a MI subr_smp.c and a new
  header sys/smp.h declares MI SMP variables and API's.   The IPI API's from
  machine/ipl.h have moved to machine/smp.h which is included by sys/smp.h.
- The globaldata_register() and globaldata_find() functions as well as the
  SLIST of globaldata structures has become MI and moved into subr_smp.c.
  Also, the globaldata list is only available if SMP support is compiled in.

Reviewed by:	jake, peter
Looked over by:	eivind
2001-04-27 19:28:25 +00:00
Doug Rabson
a78af1ccdb When switching backing store during signal delivery, do the switch before
creating the register frame for calling the handler. Also discard that
frame before switching back to the old backing store after the handler
returns.
2001-04-24 15:57:16 +00:00
Doug Rabson
2c7122ff0f Align stack pointer and backing store pointer to 16 byte boundary when
delivering signals.
2001-04-24 15:55:47 +00:00
Doug Rabson
1eaf877f2e Don't trash the user's pr on syscalls. 2001-04-24 15:54:23 +00:00
Doug Rabson
2322ee63c2 Don't unwrap the function descriptor used as the callout argument to
fork_exit(). The MI version of fork_exit() needs a real function
descriptor, not a simple function pointer.
2001-04-19 12:35:47 +00:00
Doug Rabson
eed829bca9 Don't take the Giant mutex for clock interrupts. 2001-04-19 12:34:23 +00:00
Doug Rabson
ff37c2003c Don't panic when we try to modify the kernel pmap. 2001-04-18 15:08:37 +00:00
Doug Rabson
169915f56d Print an approximation of the function arguments in the stack trace. 2001-04-18 15:07:56 +00:00
Doug Rabson
072c3a5395 Implement a simple stack trace for DDB. This will have to be redone
if/when we change to a more modern toolchain.
2001-04-18 14:15:45 +00:00
Doug Rabson
dd85faa611 Record the right value for tf_ndirty for kernel interruptions so that
we can examine the interrupted register stack frame in DDB.
2001-04-18 14:10:43 +00:00
David E. O'Brien
c5e70d92ce Turn on kernel debugging support (DDB, INVARIANTS, INVARIANT_SUPPORT, WITNESS)
by default while SMPng is still being developed.

Submitted by:	jhb
2001-04-15 19:37:28 +00:00
John Baldwin
2fea957dc5 Rename the IPI API from smp_ipi_* to ipi_* since the smp_ prefix is just
"redundant noise" and to match the IPI constant namespace (IPI_*).

Requested by:	bde
2001-04-11 17:06:02 +00:00
David E. O'Brien
563dd74008 Reduce the emasculation of bounds_check_with_label() by one line, so we
propagate a bio error condition to the caller and above.
2001-03-29 20:26:12 +00:00
John Baldwin
1005a129e5 Convert the allproc and proctree locks from lockmgr locks to sx locks. 2001-03-28 11:52:56 +00:00
John Baldwin
192846463a Rework the witness code to work with sx locks as well as mutexes.
- Introduce lock classes and lock objects.  Each lock class specifies a
  name and set of flags (or properties) shared by all locks of a given
  type.  Currently there are three lock classes: spin mutexes, sleep
  mutexes, and sx locks.  A lock object specifies properties of an
  additional lock along with a lock name and all of the extra stuff needed
  to make witness work with a given lock.  This abstract lock stuff is
  defined in sys/lock.h.  The lockmgr constants, types, and prototypes have
  been moved to sys/lockmgr.h.  For temporary backwards compatibility,
  sys/lock.h includes sys/lockmgr.h.
- Replace proc->p_spinlocks with a per-CPU list, PCPU(spinlocks), of spin
  locks held.  By making this per-cpu, we do not have to jump through
  magic hoops to deal with sched_lock changing ownership during context
  switches.
- Replace proc->p_heldmtx, formerly a list of held sleep mutexes, with
  proc->p_sleeplocks, which is a list of held sleep locks including sleep
  mutexes and sx locks.
- Add helper macros for logging lock events via the KTR_LOCK KTR logging
  level so that the log messages are consistent.
- Add some new flags that can be passed to mtx_init():
  - MTX_NOWITNESS - specifies that this lock should be ignored by witness.
    This is used for the mutex that blocks a sx lock for example.
  - MTX_QUIET - this is not new, but you can pass this to mtx_init() now
    and no events will be logged for this lock, so that one doesn't have
    to change all the individual mtx_lock/unlock() operations.
- All lock objects maintain an initialized flag.  Use this flag to export
  a mtx_initialized() macro that can be safely called from drivers.  Also,
  we no longer walk the all_mtx list if MUTEX_DEBUG is defined as witness
  performs the corresponding checks using the initialized flag.
- The lock order reversal messages have been improved to output slightly
  more accurate file and line numbers.
2001-03-28 09:03:24 +00:00
John Baldwin
0006681fe6 Switch from save/disable/restore_intr() to critical_enter/exit(). 2001-03-28 03:06:10 +00:00
John Baldwin
b944b9033a Catch up to the mtx_saveintr -> mtx_savecrit change. 2001-03-28 02:46:21 +00:00
John Baldwin
6283b7d01b - Switch from using save/disable/restore_intr to using critical_enter/exit
and change the u_int mtx_saveintr member of struct mtx to a critical_t
  mtx_savecrit.
- On the alpha we no longer need a custom _get_spin_lock() macro to avoid
  an extra PAL call, so remove it.
- Partially fix using mutexes with WITNESS in modules.  Change all the
  _mtx_{un,}lock_{spin,}_flags() macros to accept explicit file and line
  parameters and rename them to use a prefix of two underscores.  Inside
  of kern_mutex.c, generate wrapper functions for
  _mtx_{un,}lock_{spin,}_flags() (only using a prefix of one underscore)
  that are called from modules.  The macros mtx_{un,}lock_{spin,}_flags()
  are mapped to the __mtx_* macros inside of the kernel to inline the
  usual case of mutex operations and map to the internal _mtx_* functions
  in the module case so that modules will use WITNESS and KTR logging if
  the kernel is compiled with support for it.
2001-03-28 02:40:47 +00:00
John Baldwin
034dc442ad - Add the new critical_t type used to save state inside of critical
sections.
- Add implementations of the critical_enter() and critical_exit() functions
  and remove restore_intr() and save_intr().
- Remove the somewhat bogus disable_intr() and enable_intr() functions on
  the alpha as the alpha actually uses a priority level and not simple bit
  flag on the CPU.
2001-03-28 02:31:54 +00:00
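
A minimal usage sketch of the interface described above, assuming the
save/restore form of this period (critical_enter() returns the saved state
as a critical_t and critical_exit() takes it back); the function and header
location below are illustrative, not part of the commit:

#include <sys/param.h>
#include <machine/cpufunc.h>		/* assumed home of the prototypes */

static void
example_critical_section(void)
{
	critical_t savecrit;

	savecrit = critical_enter();	/* save state, enter the section */
	/* ... touch state that must not be interrupted or preempted ... */
	critical_exit(savecrit);	/* restore the saved state */
}
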
Poul-Henning Kamp
f83880518b Send the remains (such as I have located) of "block major numbers" to
the bit-bucket.
2001-03-26 12:41:29 +00:00
Hajimu UMEMOTO
8b625cb701 Unbreak build on alpha.
- Move in_port_t to sys/types.h.
  - Nuke in_addr_t from each endian.h.

Reported by:	jhb
2001-03-24 15:17:27 +00:00
John Baldwin
5e115a7e04 - Define and use MAXCPU like the alpha and i386 instead of NCPUS.
- Sort the sys/mutex.h include in mp_machdep.c into a closer to correct
  location.
2001-03-24 06:22:57 +00:00
John Baldwin
197855a3ea Stick a prototype for handleclock() in machine/clock.h and include it
in interrupt.c to quiet a warning.
2001-03-24 06:20:48 +00:00
Thomas Moestl
368d2edce4 Export intrnames and intrcnt as sysctls (hw.nintr, hw.intrnames and
hw.intrcnt).

Approved by:	rwatson
2001-03-23 03:45:17 +00:00
Peter Wemm
6eb39ac8fc Use a generic implementation of the Fowler/Noll/Vo hash (FNV hash).
Make the name cache hash as well as the nfsnode hash use it.

As a special tweak, create an unsigned version of register_t.  This allows
us to use a special tweak for the 64 bit versions that significantly
speeds up the i386 version (ie: int64 XOR int64 is slower than int64
XOR int32).

The code layout is a little strange for the string function, but I was
able to get between 5 to 10% improvement over the original version I
started with. The layout affects gcc code generation choices and this way
was fastest on x86 and alpha.

Note that 'CPUTYPE=p3' etc makes a fair difference to this.  It is
around 45% faster with -march=pentiumpro on a p6 cpu.
2001-03-17 09:31:06 +00:00
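
For reference, an FNV-1 string hash along the lines described above looks
roughly like this (an illustrative sketch; the real implementation lives in
<sys/fnv_hash.h> and its layout differs, as the message notes):

#include <sys/types.h>

#define FNV_32_PRIME	((u_int32_t)0x01000193UL)
#define FNV1_32_INIT	((u_int32_t)2166136261UL)

static u_int32_t
fnv_32_str(const char *str, u_int32_t hval)
{
	const u_int8_t *s = (const u_int8_t *)str;

	while (*s != '\0') {
		hval *= FNV_32_PRIME;	/* multiply by the FNV prime ... */
		hval ^= *s++;		/* ... then xor in the next octet */
	}
	return (hval);
}

A caller seeds it with FNV1_32_INIT and masks the result down to the size of
the hash table (e.g. the name cache or nfsnode hash mentioned above).
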
Doug Rabson
7ea98b2be3 Allow the config file to specify a root filesystem filename. 2001-03-09 13:45:31 +00:00
Doug Rabson
60fe99457c Adjust a comment slightly. 2001-03-09 13:44:53 +00:00
John Baldwin
5db078a9be Fix mtx_legal2block. The only time that it is bad to block on a mutex is
if we hold a spin mutex, since we can trivially get into deadlocks if we
start switching out of processes that hold spinlocks.  Checking to see if
interrupts were disabled was a sort of cheap way of doing this since most
of the time interrupts were only disabled when holding a spin lock.  At
least on the i386.  To fix this properly, use a per-process counter
p_spinlocks that counts the number of spin locks currently held, and
instead of checking to see if interrupts are disabled in the witness code,
check to see if we hold any spin locks.  Since child processes always
start up with the sched lock magically held in fork_exit(), we initialize
p_spinlocks to 1 for child processes.  Note that proc0 doesn't go through
fork_exit(), so it starts with no spin locks held.

Consulting from:	cp
2001-03-09 07:24:17 +00:00
John Baldwin
136d8f42b9 Unrevert the pmap_map() changes. They weren't broken on x86.
Sense beaten into me by:	peter
2001-03-07 05:29:21 +00:00
John Baldwin
f227364a17 - Release Giant a bit earlier on syscall exit.
- Don't try to grab Giant before postsig() in userret() as it is no longer
  needed.
- Don't grab Giant before psignal() in ast() but get the proc lock instead.
2001-03-07 03:53:39 +00:00
John Baldwin
19eb87d22a Grab the process lock while calling psignal and before calling psignal. 2001-03-07 03:37:06 +00:00
John Baldwin
ff655691d8 Use the proc lock to protect p_pptr when waking up our parent in cpu_exit()
and remove the mpfixme() message that is now fixed.
2001-03-07 03:20:15 +00:00
John Baldwin
4a01ebd482 Back out the pmap_map() change for now, it isn't completely stable on the
i386.
2001-03-07 01:04:17 +00:00
John Baldwin
f90f3aadba Don't psignal() a process from forward_hardclock() but set the appropriate
pending flag in p_sflag instead.
2001-03-06 07:40:51 +00:00
John Baldwin
968950e5d1 - Rework pmap_map() to take advantage of direct-mapped segments on
supported architectures such as the alpha.  This allows us to save
  on kernel virtual address space, TLB entries, and (on the ia64) VHPT
  entries.  pmap_map() now modifies the passed in virtual address on
  architectures that do not support direct-mapped segments to point to
  the next available virtual address.  It also returns the actual
  address that the request was mapped to.
- On the IA64 don't use a special zone of PV entries needed for early
  calls to pmap_kenter() during pmap_init().  This gets us in trouble
  because we end up trying to use the zone allocator before it is
  initialized.  Instead, with the pmap_map() change, the number of needed
  PV entries is small enough that we can get by with a static pool that is
  used until pmap_init() is complete.

Submitted by:		dfr
Debugging help:		peter
Tested by:		me
2001-03-06 06:06:42 +00:00
Doug Rabson
3af0f1028b Fix a couple of typos which became obvious when I started to actually use
this on real hardware.
2001-03-04 23:30:31 +00:00
John Baldwin
505b544d52 sched_swi -> swi_sched 2001-02-24 19:09:37 +00:00
John Baldwin
f58a8b1a54 Don't include machine/mutex.h and relocate sys/mutex.h's include to be
closer to alphabetical order and identical to that of the alpha.
2001-02-24 19:09:16 +00:00
John Baldwin
7b4a30b689 Clockframes have a trapframe stored in a cf_tf member, not ct_tf. 2001-02-24 18:57:34 +00:00
John Baldwin
7182d6578d Whitespace nits. 2001-02-24 18:41:38 +00:00
John Baldwin
ad0a541a9e Pass in process to mark ast on to aston(). 2001-02-24 18:41:17 +00:00
John Baldwin
2f01e2eaf2 Axe pcb_schednest as it is no longer used. 2001-02-22 17:09:50 +00:00
John Baldwin
938f15c7c4 Rename switch_trampoline() to fork_trampoline() on the alpha and ia64.
Suggested by:	dfr
2001-02-22 16:56:53 +00:00
John Baldwin
6a7b2863ba Don't set the sched_lock nesting level for new processes as it is no
longer used.
2001-02-22 16:53:23 +00:00
John Baldwin
6efc0dc140 Catch comments up to child_return() -> fork_return() as well. 2001-02-22 16:49:36 +00:00
John Baldwin
2217ebb4df Synch up with the other architectures:
- Remove unneeded spl()'s around mi_switch() in userret().
- Don't hold sched_lock across addupc_task().
- Remove the MD function child_return() now that the MI function
  fork_return() is used instead.
- Use TRAPF_USERMODE() instead of dinking with the trapframe directly to
  check for ast's in kernel mode.
- Check astpending(curproc) and resched_wanted() in ast() and return if
  neither is true.
- Use astoff() rather than setting the non-existent per-cpu variable
  astpending to 0 to clear an ast.
2001-02-22 16:27:03 +00:00
John Baldwin
7def4c9ab5 Use the MI fork_return() fork trampoline callout function for child
processes instead of the MD child_return().
2001-02-22 16:05:48 +00:00
John Baldwin
29182967ef - Don't dink with sched_lock in cpu_switch() since mi_switch() does this
for us.
- Change the switch_trampoline() to call fork_exit() passing in the
  required arguments instead of calling the fork trampoline callout
  function directly.
Warning: this hasn't been tested.

Looked over by:	dfr
2001-02-22 16:05:09 +00:00
John Baldwin
5f78859e9a - Axe the now unused ASS_* assertions for interrupt status.
- Use ia64_get_psr() instead of save_intr() in mtx_legal2block().
2001-02-22 15:43:42 +00:00
John Baldwin
409b7113a1 Add an inline function to read the psr. 2001-02-22 15:39:58 +00:00
John Baldwin
7a473ad644 Add a mtx_intr_enable() macro. 2001-02-22 15:37:57 +00:00
John Baldwin
66b5727c4e Axe the astpending per-cpu variable. 2001-02-22 15:37:34 +00:00
John Baldwin
3e68dabf19 Add TRAPF_PC() and TRAPF_USERMODE() macros and redefine CLKF_PC() and
CLKF_USERMODE() in terms of them.
2001-02-22 15:35:04 +00:00
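
Schematically, the change described above is just a layering of macros: the
clock-frame forms delegate to the trap-frame forms through the cf_tf member.
The trap-frame field and helper below are placeholders, not the actual ia64
definitions:

#define	TRAPF_PC(tf)		((tf)->tf_pc)			/* placeholder field */
#define	TRAPF_USERMODE(tf)	(frame_is_usermode(tf))		/* hypothetical helper */

#define	CLKF_PC(cf)		TRAPF_PC(&(cf)->cf_tf)
#define	CLKF_USERMODE(cf)	TRAPF_USERMODE(&(cf)->cf_tf)
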
John Baldwin
de0e830e6d Catch up to new MI astpending and need_resched handling. 2001-02-22 13:29:22 +00:00
Hajimu UMEMOTO
e335205699 Correct an ordering problem corresponding to bde's fix to
i386/include/ansi.h.
2001-02-17 14:51:11 +00:00
Hajimu UMEMOTO
ad9fdc8f4d Correct 2nd argument of getnameinfo(3) to socklen_t.
Reviewed by:	itojun
2001-02-15 10:35:55 +00:00
Jake Burkholder
d5a08a6065 Implement a unified run queue and adjust priority levels accordingly.
- All processes go into the same array of queues, with different
  scheduling classes using different portions of the array.  This
  allows user processes to have their priorities propagated up into
  interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
  32.  We used to have 4 separate arrays of 32 queues each, so this
  may not be optimal.  The new run queue code was written with this
  in mind; changing the number of run queues only requires changing
  constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter.  This
  is intended to be used to create per-cpu run queues.  Implement
  wrappers for compatibility with the old interface which pass in
  the global run queue structure.
- Group the priority level, user priority, native priority (before
  propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
  symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
  This was used to detect when a process' priority had lowered and
  it should yield.  We now effectively yield on every interrupt.
- Activate propagate_priority().  It should now have the desired
  effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
  idle loop.  It interfered with propagate_priority() because
  the idle process needed to do a non-blocking acquire of Giant
  and then other processes would try to propagate their priority
  onto it.  The idle process should not do anything except idle.
  vm_page_zero_idle() will return in the form of an idle priority
  kernel thread which is woken up at appropriate times by the vm
  system.
- Update struct kinfo_proc to the new priority interface.  Deliberately
  change its size by adjusting the spare fields.  It remained the same
  size, but the layout has changed, so userland processes that use it
  would parse the data incorrectly.  The size constraint should really
  be changed to an arbitrary version number.  Also add a debug.sizeof
  sysctl node for struct kinfo_proc.
2001-02-12 00:20:08 +00:00
Mark Murray
d888fc4e73 RIP <machine/lock.h>.
Some things needed bits of <i386/include/lock.h> - cy.c now has its
own (only) copy of the COM_(UN)LOCK() macros, and IMASK_(UN)LOCK()
has been moved to <i386/include/apic.h> (AKA <machine/apic.h>).
Reviewed by:	jhb
2001-02-11 10:44:09 +00:00
John Baldwin
929604ec9b Move the initialization of the proc lock for proc0 very early into the MD
startup code.
2001-02-09 16:25:16 +00:00
John Baldwin
ec2e32b7fd Remove bogus #if 0'd code that dinked with the saved interrupt state in
sched_lock.
2001-02-09 14:50:52 +00:00
Bosko Milekic
9ed346bab0 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
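
An illustrative call-site translation for the rename described above (a
sketch only; Giant and sched_lock stand in for any MTX_DEF and MTX_SPIN
lock):

#include <sys/param.h>
#include <sys/mutex.h>

static void
example_locking(void)
{
	/* old: mtx_enter(&Giant, MTX_DEF); ... mtx_exit(&Giant, MTX_DEF); */
	mtx_lock(&Giant);
	/* ... Giant-protected work ... */
	mtx_unlock(&Giant);

	/* old: mtx_enter(&sched_lock, MTX_SPIN); ... mtx_exit(...); */
	mtx_lock_spin(&sched_lock);
	/* ... */
	mtx_unlock_spin(&sched_lock);

	/* MTX_QUIET and MTX_NOSWITCH now go through the _flags wrappers */
	mtx_lock_flags(&Giant, MTX_QUIET);
	mtx_unlock_flags(&Giant, MTX_QUIET);
}
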
Peter Wemm
bcf77694d1 Clean up some leftovers from the root mount cleanup that was done some
time ago.  FFS_ROOT and CD9660_ROOT are obsolete.
2001-02-04 15:35:10 +00:00
Peter Wemm
7aef6a1e88 All the world is not an i386. Merge rev 1.438 of i386/i386/machdep.c.
Make buffer_map a system map.
2001-02-04 07:00:47 +00:00
Peter Wemm
8ab109d131 Remove count for NSIO. The only places it was used were incorrect.
(alpha-gdbstub.c got sync'ed up a bit with the i386 version)
2001-01-31 10:54:45 +00:00
Doug Rabson
109ba4608b Flesh out EFI support somewhat. 2001-01-29 13:31:19 +00:00
Peter Wemm
03927d3c33 Send "#if NISA > 0" to the bit-bucket and replace it with an option.
These were compile-time "is the isa code present?" tests and not
'how many isa busses' tests.
2001-01-29 09:38:39 +00:00
Marcel Moolenaar
cd682c042d Add gd_witness_spin_check. 2001-01-28 08:06:50 +00:00
Marcel Moolenaar
079c9adfc4 Fix typo. 2001-01-28 08:05:55 +00:00
Marcel Moolenaar
136345c019 Improve kernel bootstrapping:
o  Use objdump instead of gensetdefs(1) to build the linker sets.
o  Allow overriding of nm and objdump in resp. genassym.sh and
   gensetdefs.pl for non-native toolchains.

Reviewed by: arch
Perl improvements: Jos Backus <josb@cncdsl.com>, benno
2001-01-28 06:39:56 +00:00
Doug Rabson
038f285f72 Initialise proc0.p_heldmtx and proc0.p_contested and call
mtx_enter(&Giant, MTX_DEF) after Giant is initialised.

Reviewed by: jhb
2001-01-26 17:52:34 +00:00
Doug Rabson
5c72bfc7ee Change cpuno to cpuid. 2001-01-24 17:12:37 +00:00
Doug Rabson
778d716fd3 Fix typo. 2001-01-24 17:11:33 +00:00
Jason Evans
1b367556b5 Convert all simplelocks to mutexes and remove the simplelock implementations. 2001-01-24 12:35:55 +00:00
John Baldwin
a8a3dd2bb5 - Proc locking.
- P_OWEUPC -> PS_OWEUPC.
2001-01-24 10:38:58 +00:00
John Baldwin
a467939095 - Proc locking.
- Update userret() to take a struct trapframe * as a second argument.
- Axe have_giant and use mtx_owned(&Giant) where appropriate.
2001-01-24 10:38:13 +00:00
John Baldwin
88541b72b7 - Proc locking.
- P_FOO -> PS_FOO.
2001-01-24 10:36:47 +00:00
John Baldwin
89db405712 - Proc locking.
- Bring across forwarded_statclock() fixes from i386 and alpha.
2001-01-24 10:36:21 +00:00
Dag-Erling Smørgrav
a3ea6d41b9 First step towards an MP-safe zone allocator:
- have zalloc() and zfree() always lock the vm_zone.
 - remove zalloci() and zfreei(), which are now redundant.

Reviewed by:	bmilekic, jasone
2001-01-21 22:23:11 +00:00
Jake Burkholder
a448b62ac9 Make intr_nesting_level per-process, rather than per-cpu. Setup
interrupt threads to run with it always >= 1, so that malloc can
detect M_WAITOK from "interrupt" context.  This is also necessary
in order to context switch from sched_ithd() directly.

Reviewed By:	peter
2001-01-21 19:25:07 +00:00
Jason Evans
d1c1b8413e Remove MUTEX_DECLARE() and MTX_COLD. Instead, postpone full mutex
initialization until after malloc() is safe to call, then iterate through
all mutexes and complete their initialization.

This change is necessary in order to avoid some circular bootstrapping
dependencies.
2001-01-21 07:52:20 +00:00
Peter Wemm
a496358e30 Remove the now-empty ipl_funcs.c file on all platforms. 2001-01-19 09:59:56 +00:00
Peter Wemm
198c5b0891 Remove the static splXXX functions and replace them by static __inline
stubs.  Remove the xxx_imask variables which have been all but gone for
a while.
2001-01-19 09:57:29 +00:00
Bosko Milekic
08812b3925 Implement MTX_RECURSE flag for mtx_init().
All calls to mtx_init() for mutexes that recurse must now include
the MTX_RECURSE bit in the flag argument variable. This change is in
preparation for an upcoming (further) mutex API cleanup.
The witness code will call panic() if a lock is found to recurse but
the MTX_RECURSE bit was not set during the lock's initialization.

The old MTX_RECURSE "state" bit (in mtx_lock) has been renamed to
MTX_RECURSED, which is more appropriate given its meaning.

The following locks have been made "recursive," thus far:
eventhandler, Giant, callout, sched_lock, possibly some others declared
in the architecture-specific code, all of the network card driver locks
in pci/, as well as some other locks in dev/ stuff that I've found to
be recursive.

Reviewed by: jhb
2001-01-19 01:59:14 +00:00
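
A minimal sketch of the new rule, assuming the three-argument mtx_init() of
this period (the lock and its name are made up for illustration):

#include <sys/param.h>
#include <sys/mutex.h>

static struct mtx example_mtx;

static void
example_init(void)
{
	/*
	 * A mutex that may be acquired recursively must say so up front;
	 * otherwise witness panics on the first recursive acquire.
	 */
	mtx_init(&example_mtx, "example lock", MTX_DEF | MTX_RECURSE);
}
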
Poul-Henning Kamp
02e5c5513c These files have been on deathrow for a couple of months, no appeal. 2001-01-16 10:01:56 +00:00
Mark Murray
b79ad7e642 Remove NOBLOCKRANDOM as a compile-time option. Instead, provide
exactly the same functionality via a sysctl, making this feature
a run-time option.

The default is 1(ON), which means that /dev/random device will
NOT block at startup.

setting kern.random.sys.seeded to 0(OFF) will cause /dev/random
to block until the next reseed, at which stage the sysctl
will be changed back to 1(ON).

While I'm here, clean up the sysctls, and make them dynamic.
Reviewed by:		des
Tested on Alpha by:	obrien
2001-01-14 17:50:15 +00:00
Jake Burkholder
7586909279 Remove unused per-cpu variables inside_intr and ss_eflags. 2001-01-12 07:47:54 +00:00
Jake Burkholder
df729d6f00 - Remove compatibility macros for accessing per-cpu variables.
__FreeBSD_version 500015 can be used to detect their disappearance.
- Move the symbols for SMP_prvspace and lapic from globals.s to
  locore.s.
- Remove globals.s with extreme prejudice.
2001-01-11 14:46:26 +00:00
Jake Burkholder
ef73ae4b0c Use PCPU_GET, PCPU_PTR and PCPU_SET to access all per-cpu variables
other than curproc. 2001-01-10 04:43:51 +00:00
2001-01-10 04:43:51 +00:00
David E. O'Brien
eaca6822a9 Put VCS ids in a consistent place and form. 2001-01-08 06:24:08 +00:00
David E. O'Brien
2590b31beb Remove seconds types we don't use that came in through the NetBSD heritage. 2001-01-08 06:17:11 +00:00
Jake Burkholder
f8761e53a7 Implement accessors for per-cpu variables which don't depend on the
symbols in globals.s.

	PCPU_GET(name) returns the value of the per-cpu variable
	PCPU_PTR(name) returns a pointer to the per-cpu variable
	PCPU_SET(name, val) sets the value of the per-cpu variable

In general these are not yet used, compatibility macros remain.

Unifdef SMP struct globaldata, this makes variables such as cpuid
available for UP as well.

Rebuilding modules is probably a good idea, but I believe old
modules will still work, as most of the old infrastructure
remains.
2001-01-06 19:55:42 +00:00
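
Usage of the three accessors is straightforward; a sketch, with the header
location assumed and the cpuid field taken from the message above:

#include <sys/param.h>
#include <sys/systm.h>
#include <machine/globaldata.h>		/* assumed home of the PCPU_* macros */

static void
example_pcpu(void)
{
	u_int id;

	id = PCPU_GET(cpuid);		/* read this CPU's id */
	printf("on cpu %u, field lives at %p\n", id, (void *)PCPU_PTR(cpuid));
	PCPU_SET(cpuid, id);		/* store it back (a no-op, for show) */
}
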
Doug Rabson
15c2c12d0d Don't include <stddef.h> for offsetof() - it's also defined in <sys/types.h> 2000-12-30 13:07:58 +00:00
Doug Rabson
48dd241687 Merge ALIGN changes from alpha code. 2000-12-30 13:06:01 +00:00
Doug Rabson
c8cf198085 Fix typo. 2000-12-30 13:04:20 +00:00
Jake Burkholder
98f03f9030 Protect proc.p_pptr and proc.p_children/p_sibling with the
proctree_lock.

linprocfs not locked pending response from informal maintainer.

Reviewed by:	jhb, -smp@
2000-12-23 19:43:10 +00:00
Marcel Moolenaar
ca4f131c60 Resolve RAW dependency violation between tbit and adds. 2000-12-20 05:12:41 +00:00
John Baldwin
5f38ca907c Remove the "machine dependent" KTR trace buffer ddb commands. The code was
exactly the same on all platforms.
2000-12-14 23:57:30 +00:00
David E. O'Brien
c81f693089 Sync with i386/GENERIC rev 1.294 removing "COMPAT_OLDPCI".
This fixed the broken kernel build on the Alpha.
2000-12-13 07:34:47 +00:00
Jake Burkholder
c0c2557090 - Change the allproc_lock to use a macro, ALLPROC_LOCK(how), instead
of explicit calls to lockmgr.  Also provides macros for the flags
  passed to specify shared, exclusive or release which map to the
  lockmgr flags.  This is so that the use of lockmgr can be easily
  replaced with optimized reader-writer locks.
- Add some locking that I missed the first time.
2000-12-13 00:17:05 +00:00
Jake Burkholder
92cf772d8d - Add code to detect if a system call returns with locks other than Giant
held and panic if so (conditional on witness).
- Change witness_list to return the number of locks held so this is easier.
- Add kern/syscalls.c to the kernel build if witness is defined so that the
  panic message can contain the name of the offending system call.
- Add assertions that Giant and sched_lock are not held when returning from
  a system call, which were missing for alpha and ia64.
2000-12-12 01:14:32 +00:00
Marcel Moolenaar
bc3768182a o Remove mcclock
o  s/alpha/ia64/g
2000-12-10 04:32:34 +00:00
David Malone
7cc0979fd6 Convert more malloc+bzero to malloc+M_ZERO.
Submitted by:	josh@zipperup.org
Submitted by:	Robert Drehmel <robd@gmx.net>
2000-12-08 21:51:06 +00:00
Jake Burkholder
1eb44f0270 Remove the last of the MD netisr code. It is now all MI. Remove
spending, which was unused now that all software interrupts have
their own thread.  Make the legacy schednetisr use an atomic op
for setting bits in the netisr mask.

Reviewed by:	jhb
2000-12-05 00:36:00 +00:00
Alfred Perlstein
82625cf321 remove unneeded sys/ucred.h includes 2000-11-30 18:52:32 +00:00
Marcel Moolenaar
d034d459da Don't use p->p_sigstk.ss_flags to keep state of whether the
process is on the alternate stack or not. For compatibility
with sigstack(2) state is being updated if such is needed.

We now determine whether the process is on the alternate
stack by looking at its stack pointer. This allows a process
to siglongjmp from a signal handler on the alternate stack
to the place of the sigsetjmp on the normal stack. When
maintaining state, this would have invalidated the state
information and caused a subsequent signal to be delivered
on the normal stack instead of the alternate stack.

PR: 22286
2000-11-30 05:23:49 +00:00
Jonathan Lemon
9528be94e8 Add 'mpsafe' parameter to callout_init() in MD bits.
Reminded by:  jake
2000-11-26 13:52:17 +00:00
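
A usage sketch of the widened interface (the callout and handler here are
invented; only the callout_init()/callout_reset() calls reflect the API):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/callout.h>

static struct callout example_ch;

static void
example_fired(void *arg)
{
	printf("example callout fired\n");
}

static void
example_arm(void)
{
	callout_init(&example_ch, 0);	/* 0: handler still runs with Giant */
	callout_reset(&example_ch, hz, example_fired, NULL);
}
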
Jake Burkholder
553629ebc9 Protect the following with a lockmgr lock:
allproc
	zombproc
	pidhashtbl
	proc.p_list
	proc.p_hash
	nextpid

Reviewed by:	jhb
Obtained from:	BSD/OS and netbsd
2000-11-22 07:42:04 +00:00
Mark Murray
5855006767 Add a consistent API to a feature that most modern CPUs have; a fast
counter register in-CPU.

This is to be used as a fast "timer", where linearity is more important
than time, and multiple lines in the linearity caused by multiple CPUs
in an SMP machine is not a problem.

This adds no code whatsoever to the FreeBSD kernel until it is actually
used, and then as a single-instruction inline routine (except for the
80386 and 80486, where it is some more inline code around nanotime(9)).

Reviewed by:	bde, kris, jhb
2000-11-21 19:55:21 +00:00
Poul-Henning Kamp
d7450ce6d5 Make programs which still #include <machine/{mouse,console}.h> fail
at compiletime, with an explanatory error message.  Previously they
would only get a warning.

These files will be finally removed 2001-01-15
2000-11-20 22:00:25 +00:00
Jake Burkholder
fa2fbc3dac - Protect the callout wheel with a separate spin mutex, callout_lock.
- Use the mutex in hardclock to ensure no races between it and
  softclock.
- Make softclock be INTR_MPSAFE and provide a flag,
  CALLOUT_MPSAFE, which specifies that a callout handler does not
  need giant.  There is still no way to set this flag when
  registering a callout.

Reviewed by:	-smp@, jlemon
2000-11-19 06:02:32 +00:00
Jake Burkholder
7da6f97772 - Split the run queue and sleep queue linkage, so that a process
may block on a mutex while on the sleep queue without corrupting
it.
- Move dropping of Giant to after the acquire of sched_lock.

Tested by:	John Hay <jhay@icomtek.csir.co.za>
		jhb
2000-11-17 18:09:18 +00:00
John Baldwin
20cdcc5b73 Don't release and acquire Giant in mi_switch(). Instead, release and
acquire Giant as needed in functions that call mi_switch().  The releases
need to be done outside of the sched_lock to avoid potential deadlocks
from trying to acquire Giant while interrupts are disabled.

Submitted by:	witness
2000-11-16 02:16:44 +00:00
John Baldwin
7e4b7c97de Don't perform an mi_switch() when we release Giant during cpu_exit(). We
are about to call cpu_switch() anyways.

Found by:	witness
2000-11-15 19:44:38 +00:00
Marcel Moolenaar
806d7daafe Make MINSIGSTKSZ machine dependent, and have the sigaltstack
syscall compare against a variable sv_minsigstksz in struct
sysentvec as to properly take the size of the machine- and
ABI dependent struct sigframe into account.

The SVR4 and iBCS2 modules continue to have a minsigstksz of
8192 to preserve behavior. The real values (if different) are
not known at this time. Other ABI modules use the real
values.

The native MINSIGSTKSZ is now defined as follows:

Arch		MINSIGSTKSZ
----		-----------
alpha		    4096
i386		    2048
ia64		   12288

Reviewed by: mjacob
Suggested by: bde
2000-11-09 08:25:48 +00:00
Poul-Henning Kamp
46aa3347cb Convert all users of fldoff() to offsetof(). fldoff() is bad
because it only takes a struct tag which makes it impossible to
use unions, typedefs etc.

Define __offsetof() in <machine/ansi.h>

Define offsetof() in terms of __offsetof() in <stddef.h> and <sys/types.h>

Remove myriad of local offsetof() definitions.

Remove includes of <stddef.h> in kernel code.

NB: Kernelcode should *never* include from /usr/include !

Make <sys/queue.h> include <machine/ansi.h> to avoid polluting the API.

Deprecate <struct.h> with a warning.  The warning turns into an error on
01-12-2000 and the file gets removed entirely on 01-01-2001.

Partial reviews by:   various.
Significant brucifications by:  bde
2000-10-27 11:45:49 +00:00
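
The layering described above amounts to something like the following; the
__offsetof() body is the conventional pointer-arithmetic form, shown as an
assumption rather than the exact <machine/ansi.h> text:

/* <machine/ansi.h> */
#define	__offsetof(type, field)	((size_t)(&((type *)0)->field))

/* <stddef.h> and <sys/types.h> then both simply do: */
#define	offsetof(type, field)	__offsetof(type, field)

/* replacing the old struct-tag-only fldoff(), e.g.: */
struct example { int a; char b; long c; };
static const size_t off_c = offsetof(struct example, c);
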
Mark Murray
5f3431b5ad As the blocking model seems to be troublesome for many, disable
it for now with an option.

This option is already deprecated, and will be removed when the
entropy-harvesting code is fast enough to warrant it.
2000-10-27 06:06:04 +00:00
Doug Rabson
b2950f19aa Minor build fixes. 2000-10-26 16:23:18 +00:00
John Baldwin
8088699f79 - Overhaul the software interrupt code to use interrupt threads for each
type of software interrupt.  Roughly, what used to be a bit in spending
  now maps to a swi thread.  Each thread can have multiple handlers, just
  like a hardware interrupt thread.
- Instead of using a bitmask of pending interrupts, we schedule the specific
  software interrupt thread to run, so spending, NSWI, and the shandlers
  array are no longer needed.  We can now have an arbitrary number of
  software interrupt threads.  When you register a software interrupt
  thread via sinthand_add(), you get back a struct intrhand that you pass
  to sched_swi() when you wish to schedule your swi thread to run.
- Convert the name of 'struct intrec' to 'struct intrhand' as it is a bit
  more intuitive.  Also, prefix all the members of struct intrhand with
  'ih_'.
- Make swi_net() a MI function since there is now no point in it being
  MD.

Submitted by:	cp
2000-10-25 05:19:40 +00:00
John Baldwin
f6835fffeb Implement atomic_{set,clear,add,subtract}_{acq_,rel_,}_ptr() 2000-10-25 00:16:38 +00:00
Doug Rabson
ce97a5fc4e * Various fixes to breakage introduced by the atomic and mutex reorgs.
* Fixes to the signal delivery code. Not quite right yet.

I would have preferred to wait until I have signal delivery actually
working but the current kernel in CVS doesn't build.
2000-10-24 19:54:38 +00:00
David E. O'Brien
2d26708326 Adjust comments
Submitted by:	bde

Add ISO C99's long long type limits.
Reviewed by:	bde
2000-10-24 10:49:56 +00:00
Matt Jacob
fc11653f20 CURTHD is now defined in globals.h 2000-10-23 18:39:30 +00:00
John Baldwin
bd4635599d Define the mtx_legal2block() macro used in the witness code that managed
to get lost during the MI mutex conversion.

Reported by:    Steve Kargl <sgk@troutmask.apl.washington.edu>
2000-10-20 22:44:06 +00:00
John Baldwin
bce7f05af8 - machine/mutex.h -> sys/mutex.h
- Catch up to the MI mutex structure due to saveflags,saveipl,savepsr
  becoming saveintr.
2000-10-20 07:38:44 +00:00
John Baldwin
557b927eca - machine/mutex.h -> sys/mutex.h
- Use MUTEX_DECLARE() and MTX_COLD for Giant and sched_lock.
2000-10-20 07:32:48 +00:00
John Baldwin
36412d79b4 - Make the mutex code almost completely machine independent. This greatly
reduces the maintenance load for the mutex code.  The only MD portions
  of the mutex code are in machine/mutex.h now, which include the assembly
  macros for handling mutexes as well as optionally overriding the mutex
  micro-operations.  For example, we use optimized micro-ops on the x86
  platform #ifndef I386_CPU.
- Change the behavior of the SMP_DEBUG kernel option.  In the new code,
  mtx_assert() only depends on INVARIANTS, allowing other kernel developers
  to have working mutex assertions without having to include all of the
  mutex debugging code.  The SMP_DEBUG kernel option has been renamed to
  MUTEX_DEBUG and now just controls extra mutex debugging code.
- Abolish the ugly mtx_f hack.  Instead, we dynamically allocate
  separate mtx_debug structures on the fly in mtx_init, except for mutexes
  that are initiated very early in the boot process.   These mutexes
  are declared using a special MUTEX_DECLARE() macro, and use a new
  flag MTX_COLD when calling mtx_init.  This is still somewhat hackish,
  but it is less evil than the mtx_f filler struct, and the mtx struct is
  now the same size with and without mutex debugging code.
- Add some micro-micro-operation macros for doing the actual atomic
  operations on the mutex mtx_lock field to make it easier for other archs
  to override/optimize mutex ops if needed.  These new tiny ops also clean
  up the code in some places by replacing long atomic operation function
  calls that spanned 2-3 lines with a short 1-line macro call.
- Don't call mi_switch() from mtx_enter_hard() when we block while trying
  to obtain a sleep mutex.  Calling mi_switch() would bogusly release
  Giant before switching to the next process.  Instead, inline most of the
  code from mi_switch() in the mtx_enter_hard() function.  Note that when
  we finally kill Giant we can back this out and go back to calling
  mi_switch().
2000-10-20 07:26:37 +00:00
John Baldwin
ccbdd9ee59 - Expand the set of atomic operations to optionally include memory barriers
in most of the atomic operations.  Now for these operations, you can
  use the normal atomic operation, you can use the operation with a read
  barrier, or you can use the operation with a write barrier.  The function
  names follow the same semantics used in the ia64 instruction set.  An
  atomic operation with a read barrier has the extra suffix 'acq', due to
  it having "acquire" semantics.  An atomic operation with a write barrier
  has the extra suffix 'rel'.  These suffixes are inserted between the
  name of the operation to perform and the typename.  For example, the
  atomic_add_int() function now has 3 variants:
  - atomic_add_int() - this is the same as the previous function
  - atomic_add_acq_int() - this function combines the add operation with a
    read memory barrier
  - atomic_add_rel_int() - this function combines the add operation with a
    write memory barrier
- Add 'ptr' to the list of types that we can perform atomic operations
  on.  This allows one to do atomic operations on uintptr_t's.  This is
  useful in the mutex code, for example, because the actual mutex lock is
  a pointer.
- Add two new operations for doing loads and stores with memory barriers.
  The new load operations use a read barrier before the load, and the
  new store operations use a write barrier after the store.  For example,
  atomic_load_acq_int() will atomically load an integer as well as
  enforcing a read barrier.
2000-10-20 07:00:48 +00:00
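
A small usage sketch of the variants named above (the call sites are invented
for illustration):

#include <sys/types.h>
#include <machine/atomic.h>

static volatile u_int refs;
static volatile u_int flag;

static void
example_atomic(void)
{
	u_int v;

	atomic_add_int(&refs, 1);		/* plain atomic add */
	atomic_add_acq_int(&refs, 1);		/* add with acquire (read) barrier */
	atomic_add_rel_int(&refs, 1);		/* add with release (write) barrier */

	v = atomic_load_acq_int(&flag);		/* load with acquire semantics */
	atomic_store_rel_int(&flag, v + 1);	/* store with release semantics */
}
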
John Baldwin
3f4809dd0d Axe the barrier_{read,write,rw}() helper functions as this method of
doing memory barriers doesn't really scale well for the ia64.  Also,
memory barriers are more a property of the CPU than bus space.

Requested by:	dfr
2000-10-20 06:45:48 +00:00
Doug Rabson
4e94026a72 Don't force bootverbose anymore. 2000-10-19 20:39:48 +00:00
Doug Rabson
45b56a5505 Decrease the number of ticks between clock interrupts by a factor of ten
to place more pressure on the exception handling code.
2000-10-19 20:37:28 +00:00
Doug Rabson
2a4f0b6fd4 * Disable interrupts when restoring a trapframe.
* Make sure we reset ar.k6 (used to hold the kernel stack pointer) when
  we are returning to user mode after a syscall.
2000-10-19 20:36:31 +00:00
John Baldwin
25f3f7c530 Add in a simple API for memory barriers to machine/bus.h:
- barrier_read() enforces a memory read barrier
- barrier_write() enforces a memory write barrier
- barrier_rw() enforces a memory read/write barrier
2000-10-18 10:30:12 +00:00
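
A small sketch of how these helpers were intended to be used around bus
space accesses (the device registers and the function are made up for
illustration; note that commit 3f4809dd0d further up later removes this
interface in favour of the acq/rel atomic variants):

    #include <sys/types.h>
    #include <machine/bus.h>

    #define REG_STATUS      0x00            /* hypothetical device registers */
    #define REG_CMD         0x04

    static u_int32_t
    issue_command(bus_space_tag_t t, bus_space_handle_t h, u_int32_t cmd)
    {
            u_int32_t status;

            status = bus_space_read_4(t, h, REG_STATUS);
            barrier_read();         /* status read completes before later reads */

            bus_space_write_4(t, h, REG_CMD, cmd);
            barrier_write();        /* command write is pushed out before return */

            return (status);
    }
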
Paul Saab
c794ceb56a Implement write combining for crashdumps. This is useful when
write caching is disabled on both SCSI and IDE disks, where large
memory dumps could take up to an hour to complete.

Taking an i386 scsi based system with 512MB of ram and timing (in
seconds) how long it took to complete a dump, the following results
were obtained:

Before:				After:
	WCE           TIME		WCE           TIME
	------------------		------------------
	1	141.820972		1	 15.600111
	0	797.265072		0	 65.480465

Obtained from:	Yahoo!
Reviewed by:	peter
2000-10-17 10:05:49 +00:00
Doug Rabson
e296c9c9f7 In pmap_remove_pv(), only manipulate the page's list if the pv is
managed.
2000-10-16 17:06:32 +00:00
Doug Rabson
85a25ab3d5 Do a full exception_restore after an execve syscall to ensure that the
new program gets the right values for its arguments etc.
2000-10-16 17:03:51 +00:00
Doug Rabson
4f8d61b521 Clear the register stack frame before using loadrs to invalidate the
stacked registers.
2000-10-16 16:59:32 +00:00
Doug Rabson
48ac0f7c79 Clear ar.pfs for the child process in cpu_fork - switch_trampoline
doesn't want a stack frame.
2000-10-16 16:55:59 +00:00
Doug Rabson
9c4e05709a Track changes to trapframe. 2000-10-16 09:18:05 +00:00
Doug Rabson
7fcf71bc40 * Correct some of my misunderstandings about how best to switch to the
kernel backing store.
* Implement syscalls via break instructions.
* Fix backing store copying in cpu_fork() so that the child gets the right
  register values.

This thing is actually starting to work now. This set of changes takes me
up to the second execve (the one which runs the first shell). Next stop
single-user mode :-).
2000-10-16 08:54:40 +00:00
Doug Rabson
2e8a70354f Use the right mask for extracting sof from cr.ifs. 2000-10-16 08:47:56 +00:00
Doug Rabson
eded8b5cfa Remember to re-initialise cr.itm on clock interrupts so that we get more
than just one tick.
2000-10-16 08:46:57 +00:00
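
The fix amounts to re-arming the interval timer match register on every
interrupt; a rough sketch of the idea (the accessor names and the reload
variable are assumptions for illustration, not the actual ia64 code):

    #include <sys/types.h>

    static u_int64_t itm_reload;    /* ITC cycles per clock tick (assumed) */

    static void
    clock_interrupt(void)
    {
            /* cr.itm only matches once; it must be moved ahead of cr.itc on
             * each interrupt, or no further ticks are ever delivered. */
            ia64_set_itm(ia64_get_itc() + itm_reload);

            /* ... usual hardclock() processing ... */
    }
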
Doug Rabson
ecfff41c65 Merge a fix from the alpha port - put softintr in the right place in the
table.
2000-10-16 08:45:45 +00:00
Doug Rabson
a3f05798c1 Give names to app registers and control registers. Fix a typo in the
handling of mov-from-branch-register instructions.
2000-10-16 08:44:34 +00:00
Poul-Henning Kamp
398bc678aa Move DELAY() from <machine/clock.h> to <sys/systm.h> 2000-10-15 09:51:49 +00:00
Doug Rabson
61e1efff8a Implement a rudimentary interrupt handling system which should be good
enough for clock interrupts in SKI.
2000-10-12 17:47:01 +00:00
Doug Rabson
d84feb93c4 Turn off a debugging printf. 2000-10-12 17:46:12 +00:00
Doug Rabson
5596cfd09a * Fix exception handling so that it actually works. We can now handle
exceptions from both kernel and user mode.
* Fix context switching so that we can switch back to a proc which we
  switched away from (we were saving the state in the wrong place).
* Implement lazy switching of the high-fp state. This needs to be looked
  at again for SMP to cope with the case of a process migrating from one
  processor to another while it has the high-fp state.
* Make setregs() work properly. I still think this should be called
  cpu_exec() or something.
* Various other minor fixes.

With this lot, we can execve() /sbin/init and we get all the way up to its
first syscall. At that point, we stop because syscall handling is not done
yet.
2000-10-12 14:36:39 +00:00
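
The lazy high-fp switching mentioned above defers saving and restoring
f32-f127 until a process actually touches them; schematically it looks
something like this (all of the names below are illustrative, not the real
ia64 code):

    #include <sys/param.h>
    #include <sys/proc.h>

    /* Hypothetical helpers that spill/load f32-f127 to/from the PCB. */
    extern void save_highfp(struct proc *p);
    extern void restore_highfp(struct proc *p);

    static struct proc *highfp_owner;   /* proc whose high-fp state is live */

    /* Called from the "disabled fp-register" fault taken on the first
     * high-fp access after a context switch. */
    static void
    highfp_switch(struct proc *p)
    {
            if (highfp_owner != NULL && highfp_owner != p)
                    save_highfp(highfp_owner);      /* spill the old owner */
            restore_highfp(p);                      /* load the new owner  */
            highfp_owner = p;
            /* ...re-enable high-fp access for p and retry the instruction... */
    }

The SMP caveat in the commit is exactly the owner pointer: if p migrates to
another CPU while its high-fp state is still live here, that state has to be
flushed or fetched across processors first.
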
Doug Rabson
8034cac98a Fix this so that it can cope with transfers to/from regions which are not
physically contiguous.
2000-10-12 14:29:24 +00:00
Doug Rabson
c332b0bc57 * Allocate kernel stacks with contigmalloc() to make exception handling
safe - we can't afford to take a TLB trap when we are writing a
  trapframe. Possibly revisit this later.
* Various fixes to pmap_enter() so that it actually works properly.
2000-10-12 14:28:05 +00:00
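
The point of contigmalloc() here is that a wired, physically contiguous
stack can never itself take a TLB fault while the trap code is storing a
frame onto it.  Very roughly (this assumes the usual contigmalloc()
argument order of size, type, flags, low, high, alignment, boundary; the
size and malloc type below are placeholders):

    #include <sys/param.h>
    #include <sys/malloc.h>

    static void *
    alloc_kstack(void)
    {
            /* Physically contiguous and page aligned, so spilling the
             * trapframe onto it cannot itself fault. */
            return (contigmalloc(UPAGES * PAGE_SIZE, M_TEMP, M_WAITOK,
                0ul, ~0ul, PAGE_SIZE, 0ul));
    }
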
Doug Rabson
e571ae24fe Some minor fixes and simplifications. 2000-10-12 14:25:09 +00:00
Doug Rabson
ec502c357d * Add rudimentary DDB support (no kgdb, no backtrace, no single step).
* Track recent changes to SWI code.
* Allocate RIDs for pmaps (untested).
* Implement assembler version of cpu_switch - it's cleaner that way.
2000-10-10 14:57:10 +00:00
Poul-Henning Kamp
f6b5c74c35 Initiate deorbit burn sequence for <machine/mouse.h>.
Replace all in-tree uses with <sys/mouse.h>, which was repo-copied a few
moments ago from src/sys/i386/include/mouse.h by peter.
This is also the appropriate fix for exo-tree sources.

Put warnings in <machine/mouse.h> to discourage use.
November 15th 2000 the warnings will be converted to errors.
January 15th 2001 the <machine/mouse.h> files will be removed.
2000-10-09 08:08:36 +00:00
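
For exo-tree sources the fix is the same one-line include change that was
made in-tree, along the lines of:

    #include <sys/mouse.h>          /* was: #include <machine/mouse.h> */
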
Poul-Henning Kamp
00d25f512c Initiate deorbit burn sequence for <machine/console.h>.
Replace all in-tree uses with the necessary subset of <sys/{fb,kb,cons}io.h>.
This is also the appropriate fix for exo-tree sources.

Put warnings in <machine/console.h> to discourage use.
November 15th 2000 the warnings will be converted to errors.
January 15th 2001 the <machine/console.h> files will be removed.
2000-10-08 21:34:00 +00:00
Bosko Milekic
ec222a71d9 Clean up the comment in machine/param.h regarding mbuf-related sizes, and get rid
of MCLOFSET, which does not appear to be used anywhere anymore, and if it is,
it probably shouldn't be.
2000-10-08 03:52:27 +00:00
Bruce Evans
cc46dff67f Work around a bug by adding struct tags. gcc-2.95 apparently gets the
check in the [basic.link] section of the C++ standard wrong.  gcc-2.7.2.3
apparently doesn't do the check, so the bug doesn't affect RELENG_3.

PR:		16170, 21427
Submitted by:	Max Khon <fjoe@lark.websci.ru> (i386 version)
Discussed with:	jdp
2000-10-06 11:53:32 +00:00
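
The workaround simply gives the struct a tag, so that g++ 2.95 sees a named
type where its reading of [basic.link] demands one; schematically (the type
below is made up for illustration):

    /* Before (rejected by g++ 2.95): an untagged struct in a typedef
     * that C++ code also includes.
     *
     *     typedef struct { long regs[32]; } ctx_t;
     */

    /* After: give the struct a tag; C consumers are unaffected, and the
     * C++ front end no longer rejects the header. */
    typedef struct ctx { long regs[32]; } ctx_t;
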
Jason Evans
5704011c0d Reduce userland namespace pollution.
#include <proc.h>, since curproc is needed.
2000-10-06 08:11:11 +00:00
Doug Rabson
8d9761debf Next round of fixes to the ia64 code. This includes simulated clock and
disk drivers along with a load of fixes to context switching, fork
handling and a load of other stuff I can't remember now. This takes us as
far as start_init() before it dies. I guess now I will have to finish off
the VM system and syscall handling :-).
2000-10-04 17:53:03 +00:00
Doug Rabson
431f3cb41d Next round of ia64 work, including fixes to context switching,
implementing cpu_fork(), copy*str(), bcopy(), copy{in,out}(). With these
changes, my test kernel reaches the mountroot prompt.
2000-09-30 17:48:44 +00:00
Doug Rabson
8fb26af111 Ansify and fix warnings. 2000-09-29 16:53:39 +00:00
Doug Rabson
254d92cca6 Implement dirty and access bit exceptions. 2000-09-29 16:52:50 +00:00
Doug Rabson
3e90246a82 Bodge the simplelocks in a way which works for UP. 2000-09-29 16:51:33 +00:00
Doug Rabson
b7da7f63eb Use write-back instead of write-combining for region 7. 2000-09-29 15:41:43 +00:00
Doug Rabson
75f10fcd78 Add a few more files for the ia64 port. 2000-09-29 13:48:14 +00:00
Doug Rabson
1ebcad5720 This is the first snapshot of the FreeBSD/ia64 kernel. This kernel will
not work on any real hardware (or fully work on any simulator). Much more
needs to happen before this is actually functional, but it's nice to see
the FreeBSD copyright message appear in the ia64 simulator.
2000-09-29 13:46:07 +00:00