Commit Graph

3280 Commits

Alfred Perlstein
4f492bfab5 use __packed. 2002-09-23 18:54:32 +00:00
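A minimal illustration of the __packed annotation adopted in the commit above; the structure is hypothetical, and __packed comes from <sys/cdefs.h>, where it expands to __attribute__((__packed__)) under gcc.

#include <sys/cdefs.h>
#include <sys/types.h>

/*
 * Hypothetical on-wire record: __packed removes the padding the compiler
 * would otherwise insert between members.
 */
struct example_wire_hdr {
	u_int8_t	ew_type;	/* 1 byte */
	u_int32_t	ew_seq;		/* would start at offset 4 without __packed */
} __packed;				/* sizeof() == 5 instead of 8 */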
Poul-Henning Kamp
bc8c3c3e37 Fix a 3 year old oversight: Remove the #ifdef/#endif pair now that there
is nothing between them anymore.

Spotted by:	peter.
2002-09-21 07:59:06 +00:00
Mitsuru IWASAKI
076ef4620b Restore status register A of RTC at resume time.
This should fix the 'too many RTC interrupts and statclock seems
broken after resume' problem.

MFC after:	1 week
2002-09-18 07:34:04 +00:00
Jonathan Mini
30abe507c0 Add kernel support needed for the KSE-aware libpthread:
	- Maintain FPU state across signals.
	- Save and restore FPU state properly in ucontext_t's.

Reviewed by:	bde, deischen, julian
Approved by:	-arch
2002-09-16 19:25:59 +00:00
Peter Wemm
f7749f924c Automatically enable CPU_ENABLE_SSE (detect and enable SSE instructions)
if compiling with I686_CPU as a target.  CPU_DISABLE_SSE will prevent
this from happening and will guarantee the code is not compiled in.

I am still not happy with this, but gcc is now generating code that uses
these instructions if you set CPUTYPE to p3/p4 or athlon-4/mp/xp or higher.
2002-09-07 07:02:12 +00:00
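A hedged sketch of the option logic the commit above describes; the option names come from the commit text, but the exact placement and form of the check is an assumption.

/* Illustrative only: auto-enable SSE support for 686-class kernel configs. */
#if defined(I686_CPU) && !defined(CPU_DISABLE_SSE) && !defined(CPU_ENABLE_SSE)
#define	CPU_ENABLE_SSE
#endif

#ifdef CPU_ENABLE_SSE
/* ... SSE detection and enable code would be compiled in here ... */
#endif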
Philippe Charnier
93b0017f88 Replace various spellings with FALLTHROUGH, which is lint()able 2002-08-25 13:23:09 +00:00
Peter Wemm
8b2624e85c Ok, somebody please shoot me. The asm I wrote for the ranged IPI shootdown
was wrong.  It only ever invalidated one page due to me getting the loop
terminator wrong.  This explains the DISABLE_PG_G effect on SMP.
2002-08-23 21:45:59 +00:00
Robert Watson
82d9ad331a Add additional range checks for copyout targets.
Submitted by:	Silvio Cesare <silvio@qualys.com>
2002-08-09 05:50:32 +00:00
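The commit above adds range checks before copyout(); the sketch below shows the general shape of such a check, with a hypothetical kernel buffer and function name.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>

static int
example_copyout(void *uaddr, size_t ulen)
{
	static const char kbuf[64] = "example kernel data";

	/* reject a user-supplied length that would read past the buffer */
	if (ulen > sizeof(kbuf))
		return (EINVAL);
	return (copyout(kbuf, uaddr, ulen));
}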
Warner Losh
8b5cc27046 Fix more abuse of __FreeBSD__ to detect version. 2002-07-21 05:34:14 +00:00
Peter Wemm
cd71fd08cc Stop abusing NPCI for code that doesn't even work. Emit a warning. 2002-07-21 05:25:49 +00:00
Bruce Evans
3c9d896571 Quick fix for high resolution kernel profiling on i386's. Use
-finstrument-functions instead of -mprofiler-epilogue.  The former
works essentially the same as the latter but has a higher overhead
(about 22 more bytes per function for passing unused args to the
profiling functions).

Removed all traces of the IDENT Makefile variable, which had been
reduced to just a place for holding profiling's contribution to CFLAGS
(the IDENT that gives the kernel identity was renamed to KERN_IDENT).
2002-07-13 22:28:34 +00:00
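For context on the -finstrument-functions switch mentioned above: gcc emits calls to the two hooks below around every function, and the two (here unused) arguments account for the extra per-function overhead the commit cites. This shows the standard gcc mechanism, not the kernel's actual profiling glue.

void	__cyg_profile_func_enter(void *this_fn, void *call_site)
	    __attribute__((no_instrument_function));
void	__cyg_profile_func_exit(void *this_fn, void *call_site)
	    __attribute__((no_instrument_function));

void
__cyg_profile_func_enter(void *this_fn, void *call_site)
{
	/* a profiler records the (callee, call site) pair on function entry */
	(void)this_fn;
	(void)call_site;
}

void
__cyg_profile_func_exit(void *this_fn, void *call_site)
{
	/* matching record on function exit */
	(void)this_fn;
	(void)call_site;
}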
Peter Wemm
f1b665c8fe Revive backed out pmap related changes from Feb 2002. The highlights are:
- It actually works this time, honest!
- Fine grained TLB shootdowns for SMP on i386.  IPI's are very expensive,
  so try and optimize things where possible.
- Introduce ranged shootdowns that can be done as a single IPI.
- PG_G support for i386
- Specific-cpu targeted shootdowns.  For example, there is no sense in
  globally purging the TLB cache for where we are stealing a page from
  the local unshared process on the local cpu.  Use pm_active to track
  this.
- Add some instrumentation for the tlb shootdown code.
- Rip out SMP code from <machine/cpufunc.h>
- Try and fix some very bogus PG_G and PG_PS interactions that were bad
  enough to cause vm86 bios calls to break.  vm86 depended on our existing
  bugs and this was the cause of the VESA panics last time.
- Fix the silly one-line error that caused the 'panic: bad pte' last time.
- Fix a couple of other silly one-line errors that should have caused more
  pain than they did.

Some more work is needed:
- pmap_{zero,copy}_page[_idle].  These can be done without IPI's if we
  have a hook in cpu_switch.
- The IPI handlers need some cleanup.  I have a bogus %ds load that can
  be avoided.
- APTD handling is rather bogus and appears to be a large source of
  global TLB IPI shootdowns for no really good reason.

I see speedups of between 1.5% and ~4% on buildworlds in a while 1 loop.
I expect to see a bigger difference when there is significant pageout
activity or the system otherwise has memory shortages.

I have backed out a few optimizations that I had been using over the last
few days in order to be a little more conservative.  I'll revisit these
again over the next few days as the dust settles.

New option:  DISABLE_PG_G - In case I missed something.
2002-07-12 07:56:11 +00:00
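A hedged sketch of the "ranged shootdown" idea from the commit above: invalidate just the affected span page by page instead of flushing the whole TLB, and let a single IPI carry the range to the other CPUs. The function name and the absence of the IPI plumbing are simplifications.

#include <sys/param.h>
#include <sys/types.h>

static __inline void
example_invlpg_range(vm_offset_t sva, vm_offset_t eva)
{
	vm_offset_t va;

	/* local invalidation only; remote CPUs would get (sva, eva) via one IPI */
	for (va = sva; va < eva; va += PAGE_SIZE)
		__asm __volatile("invlpg %0" : : "m" (*(char *)va) : "memory");
}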
Peter Wemm
da035a22eb Bah, move the invltlb counter to C code and hook a debug sysctl onto it. 2002-07-11 08:31:10 +00:00
Peter Wemm
dedf64505a s/NCPU/MAXCPU/ to try and get this to compile. 2002-07-11 08:24:33 +00:00
Julian Elischer
d50fe601d4 This file has been included en masse into i386/i386/exception.s 2002-07-10 21:07:47 +00:00
Mike Barcroft
a6519e64cc Move the type definition of ointhand2_t from i386/include/types.h to
i386/isa/isa_device.h.  This is a more appropriate location and
helps restrict <machine/types.h> to only types that exist on all
platforms.
2002-07-09 01:16:18 +00:00
Peter Wemm
9ecb46bf87 The clock is already allocated as 'fast' - no need to try and intercept a
'slow' interrupt registration and convert it into 'fast'.
2002-07-08 09:12:22 +00:00
Peter Wemm
160554fbf4 Remove a couple of __P() stragglers. 2002-06-29 02:32:34 +00:00
Mark Peek
5e3939b59b Clock frequencies reported by sysctl should be unsigned values. Discovered
when machdep.tsc_freq returned a negative number on a 2.2GHz Xeon.

Submitted by:	Brian Harrison <bharrison@ironport.com>
Reviewed by:	phk
MFC after:	1 week
2002-06-22 16:30:18 +00:00
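To illustrate the bug class fixed above: 2.2 GHz (2,200,000,000 Hz) overflows a signed 32-bit int, so a signed sysctl prints it as a negative number, while an unsigned export reports it correctly. A hedged sketch with an illustrative variable name:

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

/* 2.2 GHz fits in u_int but not in int */
static u_int example_tsc_freq = 2200000000u;

SYSCTL_UINT(_machdep, OID_AUTO, example_tsc_freq, CTLFLAG_RD,
    &example_tsc_freq, 0, "example TSC frequency, Hz (unsigned)");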
Jens Schweikhardt
21dc7d4f57 Fix typo in the BSD copyright: s/withough/without/
Spotted and suggested by:	des
MFC after:	3 weeks
2002-06-02 20:05:59 +00:00
Robert Watson
6be2f8829a Off-by-128 error in the cuam* device node numbers. 2002-05-20 05:12:56 +00:00
Robert Watson
12f2edc7d5 Bump the rc driver a little bit closer to the 21st century: use
make_dev() to create device nodes for each of the serial port channels
(ttym%d and cuam%d respectively, as borrowed from MAKEDEV).  This allows
the rc driver to work in 5.0.  I've tested it with only one card, but
will try sticking in a second card tomorrow and see what happens.
2002-05-20 05:04:41 +00:00
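A hedged sketch of the make_dev() calls described above; the cdevsw, ownership, and permissions are illustrative, and the 128 minor offset for the callout node follows the MAKEDEV convention referenced in the commit (and in the off-by-128 fix just above it).

#include <sys/param.h>
#include <sys/conf.h>

static void
example_rc_make_nodes(struct cdevsw *csw, int chan)
{
	/* dial-in node */
	(void)make_dev(csw, chan, UID_ROOT, GID_WHEEL, 0600, "ttym%d", chan);
	/* callout node, minor offset by 128 */
	(void)make_dev(csw, chan | 128, UID_UUCP, GID_DIALER, 0660, "cuam%d", chan);
}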
Poul-Henning Kamp
22bd43ccda Move a few ancient minor-number definitions for tapedrives to the
only driver which uses them.  Remove the rest.
2002-05-14 06:57:02 +00:00
Bruce Evans
f318190a01 Fixed checking for VM86 mode in doreti which I broke in rev.1.30. Only
the case of VM86 calls from the kernel was broken, so this bug was not
a security hole.

PR:		36710
Submitted by:	David Xu <davidx@viasoft.com.cn> (version for RELENG_4)
MFC after:	3 days
2002-05-05 03:19:48 +00:00
Poul-Henning Kamp
2266fe776e Don't export timecounter structures under debug. with sysctl; they
contain no truly interesting data anymore.
2002-04-30 19:34:31 +00:00
Peter Wemm
db17c6fc07 Tidy up some loose ends.
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
  (this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace)
2002-04-29 07:43:16 +00:00
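A hedged sketch of the pmap_kernel() cleanup described above: instead of an indirection the linker cannot resolve, kernel_pmap becomes a constant address of the statically allocated pmap (placement in a header is assumed).

/* the statically allocated kernel pmap object */
extern struct pmap	kernel_pmap_store;

/* usable anywhere pmap_kernel() used to be called; resolvable at link time */
#define	kernel_pmap	(&kernel_pmap_store)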
Poul-Henning Kamp
7e2d76ff05 Remove the tc_update() function. Any frequency change to the
timecounter will be used starting at the next second, which is
good enough for sysctl purposes.  If better adjustment is needed
the NTP PLL should be used.
2002-04-26 10:06:26 +00:00
Poul-Henning Kamp
2ce7d7a033 GC various bits and pieces of USERCONFIG from all over the place. 2002-04-09 11:18:46 +00:00
Yoshihiro Takahashi
181593adec Move ICU_* defines into icu.h. 2002-04-06 08:25:05 +00:00
John Baldwin
6008862bc2 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
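An example of the mtx_init() calling convention after the change above: the second argument names the lock, the third is the lock type (usually NULL; a shared class name such as MTX_NETWORK_LOCK is passed by network driver locks).

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

static struct mtx example_mtx;

static void
example_lock_init(void)
{
	/* most callers pass NULL for the type */
	mtx_init(&example_mtx, "example lock", NULL, MTX_DEF);
	/* a network driver would pass MTX_NETWORK_LOCK as the third argument */
}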
Poul-Henning Kamp
408ab1b875 Retire the bogus ioctl DIOCGPART in toto.
Once again we can notice that badly thought out hacks ferment and infect
far more code than initially expected.

Sponsored by:	DARPA and NAI Labs.
2002-04-02 11:52:13 +00:00
Matthew Dillon
182da8209d Stage-2 commit of the critical*() code. This re-inlines cpu_critical_enter()
and cpu_critical_exit() and moves associated critical prototypes into their
own header file, <arch>/<arch>/critical.h, which is only included by the
three MI source files that need it.

Back out and re-apply improperly committed syntactical cleanups made to files
that were still under active development.  Back out improperly committed program
structure changes that moved localized declarations to the top of two
procedures.  Partially re-apply one of the program structure changes to
move 'mask' into an intermediate block rather than in three separate
sub-blocks to make the code more readable.  Re-integrate bug fixes that Jake
made to the sparc64 code.

Note: In general, developers should not gratuitously move declarations out
of sub-blocks.  They are where they are for reasons of structure, grouping,
readability, compiler-localizability, and to avoid developer-introduced bugs
similar to several found in recent years in the VFS and VM code.

Reviewed by:	jake
2002-04-01 23:51:23 +00:00
John Baldwin
44731cab3b Change the suser() API to take advantage of td_ucred as well as do a
general cleanup of the API.  The entire API now consists of two functions
similar to the pre-KSE API.  The suser() function takes a thread pointer
as its only argument.  The td_ucred member of this thread must be valid
so the only valid thread pointers are curthread and a few kernel threads
such as thread0.  The suser_cred() function takes a pointer to a struct
ucred as its first argument and an integer flag as its second argument.
The flag is currently only used for the PRISON_ROOT flag.

Discussed on:	smp@
2002-04-01 21:31:13 +00:00
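A hedged sketch of the API described above. The prototypes follow the commit text; the caller below is illustrative.

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/systm.h>

static int
example_priv_check(struct thread *td)
{
	/* td must be curthread (or a kernel thread such as thread0) */
	if (suser(td) == 0)
		return (0);
	/* or check an explicit credential, letting jailed root pass */
	return (suser_cred(td->td_ucred, PRISON_ROOT));
}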
Jake Burkholder
d0ce9a7e07 Remove abuse of intr_disable/restore in MI code by moving the loop in ast()
back into the calling MD code.  The MD code must ensure no races between
checking the astpending flag and returning to usermode.

Submitted by:	peter (ia64 bits)
Tested on:	alpha (peter, jeff), i386, ia64 (peter), sparc64
2002-03-29 16:35:26 +00:00
Matthew Dillon
93e70a5f37 Tab-out the backslashes in icu_vector.s to make it more readable and to
match it up with apic_vector.s.
2002-03-27 05:43:11 +00:00
Matthew Dillon
d74ac6819b Compromise for critical*()/cpu_critical*() recommit. Cleanup the interrupt
disablement assumptions in kern_fork.c by adding another API call,
cpu_critical_fork_exit().  Cleanup the td_savecrit field by moving it
from MI to MD.  Temporarily move cpu_critical*() from <arch>/include/cpufunc.h
to <arch>/<arch>/critical.c (stage-2 will clean this up).

Implement interrupt deferral for i386 that allows interrupts to remain
enabled inside critical sections.  This also fixes an IPI interlock bug,
and requires uses of icu_lock to be enclosed in a true interrupt disablement.

This is the stage-1 commit.  Stage-2 will occur after stage-1 has stabilized,
and will move cpu_critical*() into its own header file(s) + other things.
This commit may break non-i386 architectures in trivial ways.  This should
be temporary.

Reviewed by:	core
Approved by:	core
2002-03-27 05:39:23 +00:00
Nicolas Souchu
ea4122d2bf Fix bktr and pcf compilation with LINT 2002-03-25 21:22:35 +00:00
Will Andrews
05f920205e Minor changes:
[1] Support the Sony VAIO Jogdial in moused(8).
[2] Modify spic(4) to support additional Sony VAIO models.

Submitted by:	[1] Juriy Goloveshkin <j@gu.ru>,
		[2] Akira Funahashi <funa@funa.org>
Tested by:	cjh, jim, Jerry A! <jerry@thehutt.org>
Approved by:	nsayer
MFC after:	2 weeks
2002-03-24 03:07:07 +00:00
Bruce Evans
ea1499bf8f Fixed some style bugs in the removal of __P(()). The main ones were
not removing tabs before "__P((", and not outdenting continuation lines
to preserve non-KNF lining up of code with parentheses.  Switch to KNF
formatting and/or rewrap the whole prototype in some cases.
2002-03-23 16:01:49 +00:00
Warner Losh
ba74981e71 Fix abuses of cpu_critical_{enter,exit} by converting to
intr_{disable,restore} as well as providing an implementation of
intr_{disable,restore}.

Reviewed by: jake, rwatson, jhb
2002-03-21 06:19:08 +00:00
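A short example of the intr_{disable,restore} pattern the commit above converts to, for code that genuinely needs hard interrupts off rather than a critical section:

#include <sys/param.h>
#include <machine/cpufunc.h>

static void
example_hard_disable(void)
{
	register_t saveintr;

	saveintr = intr_disable();	/* save and clear the interrupt flag */
	/* ... code that must run with hard interrupts disabled ... */
	intr_restore(saveintr);		/* restore the saved state */
}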
Alfred Perlstein
89c9a48352 Remove __P. 2002-03-20 07:51:46 +00:00
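For reference, the __P() removal noted above (and the style follow-up a few entries earlier in this listing) turns K&R-compatible prototypes into plain ANSI ones; the function and structure names below are illustrative.

#include <sys/cdefs.h>
#include <sys/types.h>

struct example_softc;

/* old form, K&R-compatible via the __P() macro: */
int	example_ioctl __P((struct example_softc *, u_long, caddr_t));

/* equivalent plain ANSI prototype after the removal: */
int	example_ioctl(struct example_softc *, u_long, caddr_t);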
Alfred Perlstein
85f190e4d1 Fixes to make select/poll mpsafe.
Problem:
  selwakeup required calling pfind which would cause lock order
  reversals with the allproc_lock and the per-process filedesc lock.
Solution:
  Instead of recording the pid of the select()'ing process into the
  selinfo structure, actually record a pointer to the thread.  To
  avoid dereferencing a bad address all the selinfo structures that
  are in use by a thread are kept in a list hung off the thread
  (protected by sellock).  When a selwakeup occurs the selinfo is
  removed from that thread's list; it is also removed on the way out
  of select or poll, where the thread traverses its list and removes
  all the selinfos from it.

Problem:
  Previously the PROC_LOCK was used to provide the mutual exclusion
  needed to ensure proper locking, this couldn't work because there
  was a single condvar used for select and poll and condvars can
  only be used with a single mutex.
Solution:
  Introduce a global mutex 'sellock' which is used to provide mutual
  exclusion when recording events to wait on as well as performing
  notification when an event occurs.

Interesting note:
  schedlock is required to manipulate the per-thread TDF_SELECT
  flag; however, if given its own field it would not need schedlock.
  Also, because TDF_SELECT is only manipulated under sellock, one
  doesn't actually use schedlock for synchronization, only to protect
  against corruption.

Proc locks are no longer used in select/poll.

Portions contributed by: davidc
2002-03-14 01:32:30 +00:00
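A hedged sketch of the scheme described above; the structure, field, and function names are illustrative rather than the committed ones, and the condvar wakeup is elided.

#include <sys/param.h>
#include <sys/queue.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>

TAILQ_HEAD(example_selq, example_selinfo);

struct example_selinfo {
	struct thread		*si_thread;	/* selecting thread, or NULL */
	struct example_selq	*si_queue;	/* the thread's list we are on */
	TAILQ_ENTRY(example_selinfo) si_link;
};

static struct mtx example_sellock;	/* global; initialized elsewhere with mtx_init() */

static void
example_selrecord(struct thread *td, struct example_selq *tdq,
    struct example_selinfo *sip)
{
	mtx_lock(&example_sellock);
	sip->si_thread = td;		/* a thread pointer, not a pid for pfind() */
	sip->si_queue = tdq;
	TAILQ_INSERT_TAIL(tdq, sip, si_link);	/* hung off the thread */
	mtx_unlock(&example_sellock);
}

static void
example_selwakeup(struct example_selinfo *sip)
{
	mtx_lock(&example_sellock);
	if (sip->si_thread != NULL) {
		TAILQ_REMOVE(sip->si_queue, sip, si_link);
		sip->si_thread = NULL;
		/* wake the selecting thread via the select condvar here */
	}
	mtx_unlock(&example_sellock);
}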
Hellmuth Michaelis
29c063831d make pcvt compile again without "options XSERVER".
PR: 35577
2002-03-08 19:06:46 +00:00
Peter Wemm
4945f5ec47 Fix warning (const lost in assignment), harmless in this case. 2002-02-28 03:13:47 +00:00
John Baldwin
a854ed9893 Simple p_ucred -> td_ucred changes to start using the per-thread ucred
reference.
2002-02-27 18:32:23 +00:00
Peter Wemm
d1693e1701 Back out all the pmap related stuff I've touched over the last few days.
There is some unresolved badness that has been eluding me, particularly
affecting uniprocessor kernels.  Turning off PG_G helped (which is a bad
sign) but didn't solve it entirely.  Userland programs still crashed.
2002-02-27 09:51:33 +00:00
Matthew Dillon
181df8c9d4 revert last commit temporarily due to whining on the lists. 2002-02-26 20:33:41 +00:00
Matthew Dillon
f96ad4c223 STAGE-1 of 3 commit - allow (but do not require) interrupts to remain
enabled in critical sections and streamline critical_enter() and
critical_exit().

This commit allows an architecture to leave interrupts enabled inside
critical sections if it so wishes.  Architectures that do not wish to do
this are not effected by this change.

This commit implements the feature for the I386 architecture and provides
a sysctl, debug.critical_mode, which defaults to 1 (use the feature).  For
now you can turn the sysctl on and off at any time in order to test the
architectural changes or track down bugs.

This commit is just the first stage.  Some areas of the code, specifically
the MACHINE_CRITICAL_ENTER #ifdef'd code, are strictly temporary and will
be cleaned up in the STAGE-2 commit when the critical_*() functions are
moved entirely into MD files.

The following changes have been made:

	* critical_enter() and critical_exit() for I386 now simply increment
	  and decrement curthread->td_critnest.  They no longer disable
	  hard interrupts.  When critical_exit() decrements the counter to
	  0 it effectively calls a routine to deal with whatever interrupts
	  were deferred during the time the code was operating in a critical
	  section.

	  Other architectures are unaffected.

	* fork_exit() has been conditionalized to remove MD assumptions for
	  the new code.  Old code will still use the old MD assumptions
	  in regards to hard interrupt disablement.  In STAGE-2 this will
	  be turned into a subroutine call into MD code rather than hardcoded
	  in MI code.

	  The new code places the burden of entering the critical section
	  in the trampoline code where it belongs.

	* I386: interrupts are now enabled while we are in a critical section.
	  The interrupt vector code has been adjusted to deal with the fact.
	  If it detects that we are in a critical section it currently defers
	  the interrupt by adding the appropriate bit to an interrupt mask.

	* In order to accomplish the deferral, icu_lock is required.  This
	  is i386-specific.  Thus icu_lock can only be obtained by mainline
	  i386 code while interrupts are hard disabled.  This change has been
	  made.

	* Because interrupts may or may not be hard disabled during a
	  context switch, cpu_switch() can no longer simply assume that
	  PSL_I will be in a consistent state.  Therefore, it now saves and
	  restores eflags.

	* FAST INTERRUPT PROVISION.  Fast interrupts are currently deferred.
	  The intention is to eventually allow them to operate either while
	  we are in a critical section or, if we are able to restrict the
	  use of sched_lock, while we are not holding the sched_lock.

	* ICU and APIC vector assembly for I386 cleaned up.  The ICU code
	  has been cleaned up to match the APIC code in regards to format
	  and macro availability.  Additionally, the code has been adjusted
	  to deal with deferred interrupts.

	* Deferred interrupts use a per-cpu boolean int_pending, and
	  masks ipending, spending, and fpending.  Being per-cpu variables,
	  it is not currently necessary to use locked bus cycles when modifying them.

	  Note that the same mechanism will enable preemption to be
	  incorporated as a true software interrupt without having to
	  further hack up the critical nesting code.

	* Note: the old critical_enter() code in kern/kern_switch.c is
	  currently #ifdef to be compatible with both the old and new
	  methodology.  In STAGE-2 it will be moved entirely to MD code.

Performance issues:

	One of the purposes of this commit is to enhance critical section
	performance, specifically to greatly reduce bus overhead to allow
	the critical section code to be used to protect per-cpu caches.
	These caches, such as Jeff's slab allocator work, can potentially
	operate very quickly making the effective savings of the new
	critical section code's performance very significant.

	The second purpose of this commit is to allow architectures to
	enable certain interrupts while in a critical section.  Specifically,
	the intention is to eventually allow certain FAST interrupts to
	operate rather than defer.

	The third purpose of this commit is to begin to clean up the
	critical_enter()/critical_exit()/cpu_critical_enter()/
	cpu_critical_exit() API which currently has serious cross pollution
	in MI code (in fork_exit() and ast() for example).

	The fourth purpose of this commit is to provide a framework that
	allows kernel-preempting software interrupts to be implemented
	cleanly.  This is currently used for two forward interrupts in I386.
	Other architectures will have the choice of using this infrastructure
	or building the functionality directly into critical_enter()/
	critical_exit().

	Finally, this commit is designed to greatly improve the flexibility
	of various architectures to manage critical section handling,
	software interrupts, preemption, and other highly integrated
	architecture-specific details.
2002-02-26 17:06:21 +00:00
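A hedged paraphrase of the i386 behaviour described above: critical_enter()/critical_exit() only adjust td_critnest, and interrupts deferred while the count was non-zero are replayed when the outermost section ends. The unpend helper name, the per-cpu flag name, and the exact exit-path ordering are illustrative, not the committed code.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/pcpu.h>

void	example_unpend(void);	/* stands in for the deferred-interrupt replay routine */

void
critical_enter(void)
{
	curthread->td_critnest++;	/* no hard interrupt disable any more */
}

void
critical_exit(void)
{
	struct thread *td = curthread;

	if (td->td_critnest == 1 && PCPU_GET(int_pending))
		example_unpend();	/* replay whatever was deferred meanwhile */
	td->td_critnest--;
}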
Bruce Evans
fbd7573929 Initialize a variable bogusly to avoid a gcc bug that causes a spurious
warning.
2002-02-26 17:04:29 +00:00
Peter Wemm
6bd95d70db Work-in-progress commit syncing up pmap cleanups that I have been working
on for a while:
- fine grained TLB shootdown for SMP on i386
- ranged TLB shootdowns.. eg: specify a range of pages to shoot down with
  a single IPI, since the IPI is very expensive.  Adjust some callers
  that used to trigger this inside tight loops to do a ranged shootdown
  at the end instead.
- PG_G support for SMP on i386 (options ENABLE_PG_G)
- defer PG_G activation till after we decide what we are going to do with
  PSE and the 4MB pages at the start of the kernel.  This should solve
  some rumored strangeness about stale PG_G entries getting stuck
  underneath the 4MB pages.
- add some instrumentation for the fine TLB shootdown
- convert some asm instruction wrappers from functions to inlines.  gcc
  seems to do a fair bit better with this.
- [temporarily!] pessimize the tlb shootdown IPI handlers.  I will fix
  this again shortly.

This has been working fairly well for me for a while, but I have tweaked
it again prior to commit since my last major testing round.  The only
outstanding problem that I know of is PG_G related, which is why there
is an option for it (not on by default for SMP).  I have seen world
speedups of a few percent (as much as 4 or 5% in one case) but I have
*not* accurately measured this - I am a bit sceptical of these numbers.
2002-02-25 23:49:51 +00:00
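As an example of the "asm instruction wrappers from functions to inlines" item above, a full TLB flush expressed in the static inline style used by <machine/cpufunc.h>; a sketch, not necessarily the committed form.

#include <sys/cdefs.h>
#include <sys/types.h>

static __inline void
example_invltlb(void)
{
	u_int temp;

	/* reloading %cr3 flushes all non-global TLB entries */
	__asm __volatile("movl %%cr3, %0; movl %0, %%cr3" : "=r" (temp)
	    : : "memory");
}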