Commit Graph

791 Commits

Peter Wemm
db17c6fc07 Tidy up some loose ends.
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
  (this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace level)
2002-04-29 07:43:16 +00:00
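A minimal sketch of the pattern this entry describes, using only the names from the message (the real declarations live in the MD pmap headers): the kernel pmap becomes an address ld can resolve at link time instead of a value fetched through pmap_kernel().

	extern struct pmap kernel_pmap_store;		/* statically allocated kernel pmap */
	#define	kernel_pmap	(&kernel_pmap_store)	/* constant address, resolved by ld */

	/* callers now write: */
	pmap_t pm = kernel_pmap;			/* was: pm = pmap_kernel(); */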
Mark Murray
db8f2e326c Stylify (mainly line up macro EOL-continuation \'s), and add a dummy
alternative for lint.
2002-04-21 10:49:00 +00:00
Tor Egge
de8218bb7b Fix typo in adjusted panic message.
Submitted by:	cokane
2002-04-17 22:41:58 +00:00
Tor Egge
50c15af4da Update io_apic_ints array properly when revoking an irq mapping.
Adjust panic message.

Submitted by:	David Xu <bsddiy@yahoo.com>
2002-04-17 18:27:10 +00:00
David Malone
a983fdfe4c Move do_cpuid into the correct place in this file and make
the indentation more like the other multi-line assembly in
this file.

Someone who understands gcc constraints could update the
constraints for do_cpuid.
2002-04-10 21:18:46 +00:00
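For context, a hedged sketch of a constrained do_cpuid() in this style of inline assembly; the constraints actually committed later may differ, but the idea is to tell gcc that cpuid consumes %eax and produces %eax-%edx.

	static __inline void
	do_cpuid(u_int ax, u_int *p)
	{
		__asm __volatile("cpuid"
		    : "=a" (p[0]), "=b" (p[1]), "=c" (p[2]), "=d" (p[3])
		    : "0" (ax));
	}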
Poul-Henning Kamp
2ce7d7a033 GC various bits and pieces of USERCONFIG from all over the place. 2002-04-09 11:18:46 +00:00
John Baldwin
6008862bc2 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
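An illustrative pair of calls under the convention described above (foo_mtx, sc and dev are hypothetical); most subsystems pass NULL for the type name, while network drivers share one via MTX_NETWORK_LOCK:

	mtx_init(&foo_mtx, "foo", NULL, MTX_DEF);
	mtx_init(&sc->sc_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF);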
Matthew Dillon
182da8209d Stage-2 commit of the critical*() code. This re-inlines cpu_critical_enter()
and cpu_critical_exit() and moves associated critical prototypes into their
own header file, <arch>/<arch>/critical.h, which is only included by the
three MI source files that need it.

Backout and re-apply improperly committed syntactical cleanups made to files
that were still under active development.  Backout improperly committed program
structure changes that moved localized declarations to the top of two
procedures.  Partially re-apply one of the program structure changes to
move 'mask' into an intermediate block rather than in three separate
sub-blocks to make the code more readable.  Re-integrate bug fixes that Jake
made to the sparc64 code.

Note: In general, developers should not gratuitously move declarations out
of sub-blocks.  They are where they are for reasons of structure, grouping,
readability, compiler-localizability, and to avoid developer-introduced bugs
similar to several found in recent years in the VFS and VM code.

Reviewed by:	jake
2002-04-01 23:51:23 +00:00
John Baldwin
e6c838590e GC #if 0'd assembly mutex micro operations. If someone wants to bring
these back later they can get them from the attic.  Also, GC some stale
macros to acquire and release sleep mutexes in assembly.
2002-03-28 15:14:23 +00:00
Matthew Dillon
d74ac6819b Compromise for critical*()/cpu_critical*() recommit. Cleanup the interrupt
disablement assumptions in kern_fork.c by adding another API call,
cpu_critical_fork_exit().  Cleanup the td_savecrit field by moving it
from MI to MD.  Temporarily move cpu_critical*() from <arch>/include/cpufunc.h
to <arch>/<arch>/critical.c (stage-2 will clean this up).

Implement interrupt deferral for i386 that allows interrupts to remain
enabled inside critical sections.  This also fixes an IPI interlock bug,
and requires uses of icu_lock to be enclosed in a true interrupt disablement.

This is the stage-1 commit.  Stage-2 will occur after stage-1 has stabilized,
and will move cpu_critical*() into its own header file(s) + other things.
This commit may break non-i386 architectures in trivial ways.  This should
be temporary.

Reviewed by:	core
Approved by:	core
2002-03-27 05:39:23 +00:00
Bruce Evans
809dbbc99b Fixed some style bugs in the removal of __P(()). The main ones were
not removing tabs before "__P((", and not outdenting continuation lines
to preserve non-KNF lining up of code with parentheses.  Switch to KNF
formatting and/or rewrap the whole prototype in some cases.
2002-03-23 15:09:35 +00:00
Bruce Evans
fa9c948c1c Fixed some style bugs in the removal of __P(()). The main ones were
not removing tabs before "__P((", and not outdenting continuation lines
to preserve non-KNF lining up of code with parentheses.  Switch to KNF
formatting and/or rewrap the whole prototype in some cases.
2002-03-23 14:27:06 +00:00
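For readers who have not met the macro: __P(()) wrapped prototype argument lists so pre-ANSI compilers could drop them, and removing it looks roughly like this (foo and its arguments are hypothetical):

	/* before */
	int	foo __P((struct bar *bp, int flags, char *name));
	/* after */
	int	foo(struct bar *bp, int flags, char *name);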
David E. O'Brien
439a4003ab ASM versions of __FBSDID. 2002-03-23 02:01:27 +00:00
Warner Losh
ba74981e71 Fix abuses of cpu_critical_{enter,exit} by converting to
intr_{disable,restore} as well as providing an implementation of
intr_{disable,restore}.

Reviewed by: jake, rwatson, jhb
2002-03-21 06:19:08 +00:00
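The save/restore idiom the conversion introduces looks like the sketch below; the exact saved-state type varied by architecture at the time, so treat it as illustrative.

	register_t s;

	s = intr_disable();	/* hard-disable interrupts, remembering the prior state */
	/* ... short section that genuinely needs interrupts off ... */
	intr_restore(s);	/* put the interrupt state back the way it was */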
Warner Losh
e7b110dcf7 Fix minor style(9) violation in de__Ping 2002-03-20 19:04:56 +00:00
Alfred Perlstein
15fe306743 Remove __P. 2002-03-20 08:56:31 +00:00
Alfred Perlstein
b63dc6ad47 Remove __P. 2002-03-20 05:48:58 +00:00
Dag-Erling Smørgrav
a2e0658045 Move the definition of PT_[GS]ET{,DB,FP}REGS from the MD ptrace.h to the
MI ptrace.h, since all platforms define them.  Keep the MD ptrace.h around
for FIX_SSTEP (which is currently only needed on Alpha).
2002-03-16 00:25:53 +00:00
Jake Burkholder
752dff3d9c Add needed includes of machine/smp.h, remove nested include in sys/smp.h
so that inlines in machine/smp.h can use variables declared in sys/smp.h.
2002-03-07 04:43:51 +00:00
Jeff Roberson
88c99cfbc8 Add a new variable mp_maxid. This is used so that per-cpu data structures may
be allocated as arrays indexed by the cpu id.  Previously the only reliable
way to know the max cpu id was through MAXCPU. mp_ncpus isn't useful here
because cpu ids may be sparsely mapped, although x86 and alpha do not do this.

Also, call cpu_mp_probe much earlier so the max cpu id is known before the VM
starts up.  This is intended to help support per cpu queues for the new
allocator, but may be useful elsewhere.

Reviewed by:	jake
Approved by:	jake
2002-03-05 10:01:46 +00:00
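A hedged sketch of the intended use (foo_pcpu and the malloc type are hypothetical): size per-CPU arrays by the largest possible CPU id rather than MAXCPU, and index them with the running CPU's id.

	static struct foo_pcpu *foo_pcpu;

	/* once mp_maxid is known (after cpu_mp_probe) */
	foo_pcpu = malloc((mp_maxid + 1) * sizeof(*foo_pcpu), M_DEVBUF,
	    M_WAITOK | M_ZERO);

	/* fast path: this CPU's slot */
	struct foo_pcpu *fp = &foo_pcpu[PCPU_GET(cpuid)];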
Mark Murray
f5d9a10b94 Make it a bit clearer where this file is to be used and where it
should not be. (Comments only)

Inspired by:	bde
2002-02-28 18:26:30 +00:00
Bosko Milekic
71acb2477f Make MPLOCKED work again in asm files and stringify it explicitly
where necessary.

Reviewed by: jake
2002-02-28 06:17:05 +00:00
John Baldwin
62da23f1e2 Back out part of KSE/M2 that snuck in under the radar: changing the
prototype of bzero() on the i386 to have a volatile first argument.

Requested by:	bde, jake
2002-02-27 22:12:29 +00:00
Thomas Moestl
90ce56c287 Add the following functions/macros to support byte order conversions and
device drivers for bus systems with endiannesses other than the CPU's (using
interfaces compatible with NetBSD):

- bswap16() and bswap32(). These have optimized implementations on some
  architectures; for those that don't, there exist generic implementations.
- macros to convert from a certain byte order to host byte order and vice
  versa, using a naming scheme like le16toh(), htole16().
  These are implemented using the bswap functions.
- stream bus space access functions, which do not perform a byte order
  conversion (while the normal access functions would if the bus endianness
  differs from the CPU endianness).

htons(), htonl(), ntohs() and ntohl() are implemented using the new
functions above for kernel usage. None of the above interfaces is currently
exported to user land.

Make use of the new functions in a few places where local implementations
of the same functionality existed.

Reviewed by:	mike, bde
Tested on alpha by:	mike
2002-02-27 17:16:18 +00:00
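A short usage sketch of the interfaces named above (values arbitrary): bswap*() always swaps, while the htole*/le*toh and htobe*/be*toh macros compile to either a no-op or a swap depending on the host byte order.

	uint32_t swapped = bswap32(0x11223344);	/* always 0x44332211 */
	uint16_t le      = htole16(0x1234);	/* no-op on a little-endian host */
	uint16_t host    = be16toh(0xabcd);	/* swap on a little-endian host */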
Peter Wemm
d1693e1701 Back out all the pmap related stuff I've touched over the last few days.
There is some unresolved badness that has been eluding me, particularly
affecting uniprocessor kernels.  Turning off PG_G helped (which is a bad
sign) but didn't solve it entirely.  Userland programs still crashed.
2002-02-27 09:51:33 +00:00
Matthew Dillon
181df8c9d4 revert last commit temporarily due to whining on the lists. 2002-02-26 20:33:41 +00:00
Matthew Dillon
f96ad4c223 STAGE-1 of 3 commit - allow (but do not require) interrupts to remain
enabled in critical sections and streamline critical_enter() and
critical_exit().

This commit allows an architecture to leave interrupts enabled inside
critical sections if it so wishes.  Architectures that do not wish to do
this are not affected by this change.

This commit implements the feature for the I386 architecture and provides
a sysctl, debug.critical_mode, which defaults to 1 (use the feature).  For
now you can turn the sysctl on and off at any time in order to test the
architectural changes or track down bugs.

This commit is just the first stage.  Some areas of the code, specifically
the MACHINE_CRITICAL_ENTER #ifdef'd code, are strictly temporary and will
be cleaned up in the STAGE-2 commit when the critical_*() functions are
moved entirely into MD files.

The following changes have been made:

	* critical_enter() and critical_exit() for I386 now simply increment
	  and decrement curthread->td_critnest.  They no longer disable
	  hard interrupts.  When critical_exit() decrements the counter to
	  0 it effectively calls a routine to deal with whatever interrupts
	  were deferred during the time the code was operating in a critical
	  section.

	  Other architectures are unaffected.

	* fork_exit() has been conditionalized to remove MD assumptions for
	  the new code.  Old code will still use the old MD assumptions
	  in regards to hard interrupt disablement.  In STAGE-2 this will
	  be turned into a subroutine call into MD code rather than hardcoded
	  in MI code.

	  The new code places the burden of entering the critical section
	  in the trampoline code where it belongs.

	* I386: interrupts are now enabled while we are in a critical section.
	  The interrupt vector code has been adjusted to deal with the fact.
	  If it detects that we are in a critical section it currently defers
	  the interrupt by adding the appropriate bit to an interrupt mask.

	* In order to accomplish the deferral, icu_lock is required.  This
	  is i386-specific.  Thus icu_lock can only be obtained by mainline
	  i386 code while interrupts are hard disabled.  This change has been
	  made.

	* Because interrupts may or may not be hard disabled during a
	  context switch, cpu_switch() can no longer simply assume that
	  PSL_I will be in a consistent state.  Therefore, it now saves and
	  restores eflags.

	* FAST INTERRUPT PROVISION.  Fast interrupts are currently deferred.
	  The intention is to eventually allow them to operate either while
	  we are in a critical section or, if we are able to restrict the
	  use of sched_lock, while we are not holding the sched_lock.

	* ICU and APIC vector assembly for I386 cleaned up.  The ICU code
	  has been cleaned up to match the APIC code in regards to format
	  and macro availability.  Additionally, the code has been adjusted
	  to deal with deferred interrupts.

	* Deferred interrupts use a per-cpu boolean int_pending, and
	  masks ipending, spending, and fpending.  Being per-cpu variables
	  it is not currently necessary to use locked bus cycles when modifying them.

	  Note that the same mechanism will enable preemption to be
	  incorporated as a true software interrupt without having to
	  further hack up the critical nesting code.

	* Note: the old critical_enter() code in kern/kern_switch.c is
	  currently #ifdef to be compatible with both the old and new
	  methodology.  In STAGE-2 it will be moved entirely to MD code.

Performance issues:

	One of the purposes of this commit is to enhance critical section
	performance, specifically to greatly reduce bus overhead to allow
	the critical section code to be used to protect per-cpu caches.
	These caches, such as Jeff's slab allocator work, can potentially
	operate very quickly making the effective savings of the new
	critical section code's performance very significant.

	The second purpose of this commit is to allow architectures to
	enable certain interrupts while in a critical section.  Specifically,
	the intention is to eventually allow certain FAST interrupts to
	operate rather than defer.

	The third purpose of this commit is to begin to clean up the
	critical_enter()/critical_exit()/cpu_critical_enter()/
	cpu_critical_exit() API which currently has serious cross pollution
	in MI code (in fork_exit() and ast() for example).

	The fourth purpose of this commit is to provide a framework that
	allows kernel-preempting software interrupts to be implemented
	cleanly.  This is currently used for two forwarded interrupts in I386.
	Other architectures will have the choice of using this infrastructure
	or building the functionality directly into critical_enter()/
	critical_exit().

	Finally, this commit is designed to greatly improve the flexibility
	of various architectures to manage critical section handling,
	software interrupts, preemption, and other highly integrated
	architecture-specific details.
2002-02-26 17:06:21 +00:00
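The deferral decision described above lives in the i386 vector assembly; in rough C pseudocode (only the names mentioned in the entry are real, the rest is guessed) each vector effectively does:

	if (curthread->td_critnest != 0) {
		/* in a critical section: record the interrupt and defer it */
		PCPU_SET(int_pending, 1);
		*PCPU_PTR(ipending) |= 1 << irq;	/* per-CPU, so no locked bus cycle */
		return;					/* replayed when critical_exit() drops to 0 */
	}
	/* otherwise dispatch the handler as usual */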
Peter Wemm
6bd95d70db Work-in-progress commit syncing up pmap cleanups that I have been working
on for a while:
- fine grained TLB shootdown for SMP on i386
- ranged TLB shootdowns.. eg: specify a range of pages to shoot down with
  a single IPI, since the IPI is very expensive.  Adjust some callers
  that used to trigger this inside tight loops to do a ranged shootdown
  at the end instead.
- PG_G support for SMP on i386 (options ENABLE_PG_G)
- defer PG_G activation till after we decide what we are going to do with
  PSE and the 4MB pages at the start of the kernel.  This should solve
  some rumored strangeness about stale PG_G entries getting stuck
  underneath the 4MB pages.
- add some instrumentation for the fine TLB shootdown
- convert some asm instruction wrappers from functions to inlines.  gcc
  seems to do a fair bit better with this.
- [temporarily!] pessimize the tlb shootdown IPI handlers.  I will fix
  this again shortly.

This has been working fairly well for me for a while, but I have tweaked
it again prior to commit since my last major testing round.  The only
outstanding problem that I know of is PG_G related, which is why there
is an option for it (not on by default for SMP).  I have seen buildworld
speedups of a few percent (as much as 4 or 5% in one case) but I have
*not* accurately measured this - I am a bit sceptical of these numbers.
2002-02-25 23:49:51 +00:00
Peter Wemm
963131fe0a Tidy up some warnings 2002-02-25 21:42:23 +00:00
Poul-Henning Kamp
1cbb9c3b03 Convert p->p_runtime and PCPU(switchtime) to bintime format. 2002-02-22 13:32:01 +00:00
Peter Wemm
6a3e90ef2f Some more tidy-up of stray "unsigned" variables instead of p[dt]_entry_t
etc.
2002-02-20 01:05:57 +00:00
Yoshihiro Takahashi
2630782c21 Add stubs for bus_space_unmap() and bus_space_free(). They are needed to
release a bus_space_handle allocated by bus_space_subregion().
2002-02-18 13:43:19 +00:00
Daniel Eischen
0270d57aef Use struct __ucontext in prototypes and associated functions instead of
ucontext_t.  Forward declare struct __ucontext in <sys/signal.h> and
remove reliance on <sys/ucontext.h> being included.

While I'm here, also hide osigcontext types from userland; suggested
by bde.

Namespace pollution noticed by: Kevin Day <toasty@shell.dragondata.com>
2002-02-17 17:40:34 +00:00
Yoshihiro Takahashi
37be85a7b0 Correct typo. 2002-02-17 14:16:17 +00:00
Yoshihiro Takahashi
9f67ccb7e5 Move the bus_space_subregion function from the puc driver to the bus_space
stuff.

Reviewed by:	jhay
2002-02-17 09:41:23 +00:00
David Malone
34221a4505 Move do_cpuid() from identcpu.c into cpufunc.h. 2002-02-12 21:06:48 +00:00
Bruce Evans
d2f22d707a Garbage-collect the "LOCORE" version of MPLOCKED. 2002-02-11 03:41:59 +00:00
John Baldwin
289da3dd99 Apparently during the KSE M2 commit bzero() on the i386 was changed so that
its first parameter was volatile.  Catch i486_bzero() and i586_bzero()'s
prototypes up to this to quiet warnings.
2002-02-08 19:16:47 +00:00
Mark Murray
9a29e3250d Make the style a little bit more consistent by removing parameter
names from some prototypes. (Other prototypes here already have
these removed).
2002-02-03 11:21:22 +00:00
Bruce Evans
92fd4795fa Finish revs. 1.23 and 1.24 so that MCOUNT_ENTER actually compiles
for SMP in the plain profiling case.  It seems to work too.

This error was not detected by LINT because LINT only compiles the
GUPROF profiling case, which is is a superset of the plain profiling
case for !SMP but which is so broken for SMP that the buggy code is
not compiled.
2002-01-31 13:49:55 +00:00
Peter Wemm
58815d1196 Avoid __func__ string concatenation 2002-01-18 04:41:23 +00:00
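The reason for avoiding it: __func__ is a predefined identifier (a static char array), not a string literal, so it cannot be pasted onto adjacent literals the way __FILE__ can; the portable form passes it as an argument.

	/* not portable: __func__ is not a string literal, so this does not compile */
	/* printf(__func__ ": bogus argument\n"); */

	/* portable form */
	printf("%s: bogus argument\n", __func__);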
Bruce Evans
e744f30933 Changed the type of pcb_flags from u_char to u_int and adjusted things.
This removes the only atomic operation on a char type in the entire
kernel.
2002-01-17 17:49:23 +00:00
Peter Wemm
77c99f362a Ensure that we set all the %cr0 bits to a known state for the APs before
they make it through to userland.  This should fix the p5-smp problem
without affecting the other cpus (eg: cyrix, see initcpu.c and the special
cache handling for these cpu types).
2002-01-16 00:44:29 +00:00
Daniel Eischen
2e615b482d Use a spare slot in the machine context for a flags word to indicate
whether the machine context is valid and whether the FPU state is
valid (saved).

Mark the machine context valid before copying it out when sending a
signal.

Approved by:	-arch
2002-01-10 02:32:30 +00:00
Peter Wemm
ead8168ac0 Convert a bunch of 1 << PCPU_GET(cpuid) to PCPU_GET(cpumask). 2002-01-05 09:41:37 +00:00
John Baldwin
98f9879242 Introduce a standard name for the lock protecting an interrupt controller
and its associated state variables: icu_lock with the name "icu".  This
renames the imen_mtx for x86 SMP, but also uses the lock to protect
access to the 8259 PIC on x86 UP.  This also adds an appropriate lock to
the various Alpha chipsets which fixes problems with Alpha SMP machines
dropping interrupts with an SMP kernel.
2001-12-20 23:48:31 +00:00
John Baldwin
3f9a462fb9 Various assembly fixes mostly in the form of using the "+" modifier for
output operands to mark them as both input and output rather than listing
operands twice.

Reviewed by:	bde
2001-12-18 08:54:39 +00:00
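An illustrative before/after of that change (the instruction and variables are made up):

	/* before: x listed twice, as output 0 and again as the matching input "0" */
	__asm __volatile("addl %2,%0" : "=r" (x) : "0" (x), "g" (y));

	/* after: a single read-write operand */
	__asm __volatile("addl %1,%0" : "+r" (x) : "g" (y));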
John Baldwin
e4e991e117 Allow the ATOMIC_ASM() macro to pass in the constraints on the V parameter
since the char versions need to use either ax, bx, cx, or dx.

Submitted by:	Peter Jeremy (mostly)
Recommended by:	bde
2001-12-18 08:51:34 +00:00
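The underlying issue, shown with a hand-expanded example rather than the real ATOMIC_ASM macro: on i386 only %eax-%edx have 8-bit sub-registers, so a byte-wide operand needs the "q" constraint instead of the general "r".

	static __inline void
	atomic_set_char(volatile u_char *p, u_char v)
	{
		__asm __volatile("lock; orb %1,%0"
		    : "+m" (*p)
		    : "q" (v));		/* "q": one of %al, %bl, %cl, %dl */
	}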
John Baldwin
7e1f6dfe9d Modify the critical section API as follows:
- The MD functions critical_enter/exit are renamed to start with a cpu_
  prefix.
- MI wrapper functions critical_enter/exit maintain a per-thread nesting
  count and a per-thread critical section saved state set when entering
  a critical section while at nesting level 0 and restored when exiting
  to nesting level 0.  This moves the saved state out of spin mutexes so
  that interlocking spin mutexes works properly.
- Most low-level MD code that used critical_enter/exit now use
  cpu_critical_enter/exit.  MI code such as device drivers and spin
  mutexes use the MI wrappers.  Note that since the MI wrappers store
  the state in the current thread, they do not have any return values or
  arguments.
- mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
  assigned to curthread->td_savecrit during fork_exit().

Tested on:	i386, alpha
2001-12-18 00:27:18 +00:00
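A structural sketch of the MI wrappers described above (the committed code differs in detail; assertions are omitted here):

	void
	critical_enter(void)
	{
		struct thread *td = curthread;

		if (td->td_critnest == 0)
			td->td_savecrit = cpu_critical_enter();	/* MD: save state, enter */
		td->td_critnest++;
	}

	void
	critical_exit(void)
	{
		struct thread *td = curthread;

		if (td->td_critnest == 1) {
			td->td_critnest = 0;
			cpu_critical_exit(td->td_savecrit);	/* MD: restore saved state */
		} else
			td->td_critnest--;
	}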
John Baldwin
1ecf0d56c8 Small cleanups to the SMP code:
- Axe inlvtlb_ok as it was completely redundant with smp_active.
- Remove references to non-existent variable and non-existent file
  in i386/include/smp.h.
- Don't perform initializations local to each CPU while holding the
  ap boot lock on i386 while an AP bootstraps itself.
- Reorganize the AP startup code some to unify the latter half of the
  functions to bring an AP up.  Eventually this might be broken out into
  a MI function in subr_smp.c.
2001-12-17 23:14:35 +00:00