152 Commits

Author SHA1 Message Date
ed
a0f5dad6a9 Simplify in/out functions (for i386 and AMD64).
Remove a hack to generate more efficient code for port numbers below
0x100, which has been obsolete for at least ten years, because GCC has
an asm constraint to specify that.
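
A minimal sketch of what the simplified inlines can look like, using GCC's
"N" constraint (8-bit I/O port constant) together with "d" (%dx); this is an
illustration, not necessarily the committed code:

static __inline u_char
inb(u_int port)
{
	u_char	data;

	__asm __volatile("inb %w1, %0" : "=a" (data) : "Nd" (port));
	return (data);
}

static __inline void
outb(u_int port, u_char data)
{
	__asm __volatile("outb %0, %w1" : : "a" (data), "Nd" (port));
}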

Submitted by:	Christoph Mallon <christoph.mallon@gmx.de>
2009-04-11 14:01:01 +00:00
ed
0d9b7a69e5 Don't explicitly force ecx to be used for MSR_FSBASE/MSR_GSBASE.
Because the "c" input constraint is used, the compiler will already place
the MSR_FSBASE/MSR_GSBASE constants in ecx. Using __asm("ecx") makes
LLVM crash. Even though this is also an LLVM bug, we'd better remove the
unnecessary GCCism as well.
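
For illustration, the usual pattern that makes an explicit register binding
unnecessary looks roughly like this (a sketch of an rdmsr-style inline, not
the exact code touched here):

static __inline uint64_t
rdmsr(u_int msr)
{
	uint32_t low, high;

	/* The "c" constraint already forces the MSR number into %ecx. */
	__asm __volatile("rdmsr" : "=a" (low), "=d" (high) : "c" (msr));
	return (low | ((uint64_t)high << 32));
}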

Submitted by:	Christoph Mallon <christoph.mallon@gmx.de>
2009-04-07 19:31:36 +00:00
obrien
7a153194ec Change some movl's to mov's. Newer GAS no longer accepts 'movl' instructions
for moving between a segment register and a 32-bit memory location.
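
Roughly, the change is of this shape (hypothetical example; the affected
inlines differ in detail):

static __inline void
load_fs(u_int sel)
{
	/*
	 * Was: "movl %0,%%fs" -- newer GAS rejects the explicit 32-bit
	 * suffix for moves involving a segment register.
	 */
	__asm __volatile("mov %0,%%fs" : : "rm" (sel));
}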

Looked at by:	jhb
2009-01-31 11:37:21 +00:00
stas
a782fc10fe - Add cpuctl(4) pseudo-device driver to provide access to some low-level
features of CPUs like reading/writing machine-specific registers,
  retrieving cpuid data, and updating microcode.
- Add cpucontrol(8) utility, that provides userland access to
  the features of cpuctl(4).
- Add subsequent manpages.

The cpuctl(4) device operates as follows. The pseudo-device node cpuctlX
is created for each cpu present in the system. The pseudo-device minor
number corresponds to the cpu number in the system. The cpuctl(4) pseudo-
device allows a number of ioctls to be performed, namely RDMSR/WRMSR/CPUID
and UPDATE. The first pair allows the caller to read/write machine-specific
registers of the corresponding CPU. cpuid data can be retrieved using
the CPUID call, and microcode updates are applied via UPDATE.

The permissions are enforced based on the pseudo-device file permissions.
RDMSR/CPUID will be allowed when the caller has read access to the device
node, while WRMSR/UPDATE will be granted only when the node is opened
for writing. There are also a number of priv(9) checks.

The cpucontrol(8) utility is intended to provide userland access to
the cpuctl(4) device features. The utility also allows one to apply
cpu microcode updates.

Currently only Intel and AMD cpus are supported and have been tested.
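
A rough userland usage sketch for the RDMSR ioctl described above; the ioctl
name and argument structure are assumed to be the ones declared in
<sys/cpuctl.h>:

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/cpuctl.h>

#include <fcntl.h>
#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
	cpuctl_msr_args_t args = { .msr = 0x8b };	/* e.g. microcode revision MSR */
	int fd;

	/* Read access to the node is enough for RDMSR/CPUID. */
	fd = open("/dev/cpuctl0", O_RDONLY);
	if (fd == -1 || ioctl(fd, CPUCTL_RDMSR, &args) == -1) {
		perror("cpuctl0");
		return (1);
	}
	printf("MSR 0x8b = %#" PRIx64 "\n", args.data);
	return (0);
}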

Approved by:	kib
Reviewed by:	rpaulo, cokane, Peter Jeremy
MFC after:	1 month
2008-08-08 16:26:53 +00:00
jeff
b82673ba40 - Add inlines for the monitor and mwait instructions.
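
For reference, inlines of roughly this shape (a sketch; names and argument
order assumed):

static __inline void
cpu_monitor(const void *addr, u_long extensions, u_int hints)
{
	/* %rax = linear address to arm, %ecx = extensions, %edx = hints */
	__asm __volatile("monitor"
	    : : "a" (addr), "c" (extensions), "d" (hints));
}

static __inline void
cpu_mwait(u_long extensions, u_int hints)
{
	/* %eax = hints, %ecx = extensions */
	__asm __volatile("mwait" : : "a" (hints), "c" (extensions));
}
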
Sponsored by:	Nokia
2008-04-18 05:47:56 +00:00
nectar
9ee23cec03 Add a knob for disabling/enabling HTT, "machdep.hyperthreading_allowed".
Default off due to information disclosure on multi-user systems.

Submitted by:	cperciva
Reviewed by:	jhb
2005-05-13 00:10:56 +00:00
peter
aae81bd5b3 Remove diffs to i386 version that came in via the compiler support ifdefs.
This changes things like whitespace, inconsistent use of #ifndef vs
#if !defined(), different macro argument orders, mismatched comments, etc.
2005-03-11 22:16:09 +00:00
joerg
c85a3e95f7 netchild's mega-patch to isolate compiler dependencies into a central
place.

This moves the dependency on GCC's and other compiler's features into
the central sys/cdefs.h file, while the individual source files can
then refer to #ifdef __COMPILER_FEATURE_FOO where they by now used to
refer to #if __GNUC__ > 3.1415 && __BARC__ <= 42.

By now, GCC and ICC (the Intel compiler) have been actively tested on
IA32 platforms by netchild.  Extension to other compilers is supposed
to be possible, of course.

Submitted by:	netchild
Reviewed by:	various developers on arch@, some time ago
2005-03-02 21:33:29 +00:00
ps
1be1b43db4 MFia64:
Fix -O builds with gcc 3.4 by defining ffs as __builtin_ffs instead
of creating an inline function that just calls __builtin_ffs.
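
In other words, something along these lines (sketch):

/*
 * Let the compiler expand ffs() directly instead of wrapping the builtin
 * in an inline function it may refuse to inline at -O.
 */
#define	ffs(x)	__builtin_ffs(x)
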
2004-07-30 16:44:29 +00:00
peter
b035268c4a MFi386: move rss() from db_interface.c to cpufunc.h 2004-04-07 00:41:05 +00:00
imp
ceab5f79a3 Remove advertising clause from University of California Regent's license,
per letter dated July 22, 1999 and email from Peter Wemm.

Approved by: core, peter
2004-04-05 23:55:14 +00:00
bde
7bc1e8430f Don't implement anything in the ffs family in <machine/cpufunc.h>
in the non-_KERNEL case.  This "fixes" applications that include
this "kernel-only" header and also include <strings.h> (or get
<strings.h> via the default _BSD_VISIBLE pollution in <string.h>).
In C++ there was a fatal error: the declaration specifies C linkage
but the implementation gives C++ linkage.  In C there was only a
static/extern mismatch if the headers were included in a certain order,
and a partially redundant declaration for all include orders;
gcc emits incomplete or wrong diagnostics for these, but only for
compiling with -Wsystem-headers and certain other warning options, so
the problem was usually not seen for C.

Ports breakage reported by:	kris
2004-03-11 13:38:54 +00:00
peter
d2aca56c04 MFi386: re-sort non-gcc function prototypes, trim includes 2004-03-08 00:24:15 +00:00
le
d23ba57557 Fix syntax errors and wrong function prototypes in several MD header
files when using non-GNUC compilers.

PR:             kern/58515
Submitted by:   Stefan Farfeleder <stefan@fafoe.narf.at>
Approved by:    grog (mentor), obrien
2004-03-05 09:19:59 +00:00
peter
9ba1ee132d Re-add debug register functions 2004-01-28 23:53:04 +00:00
peter
955dcc87c1 Add 64 bit bsf*/ffs* routines. Have the ffs() inline use gcc's builtin
because it uses the better cmove instructions to avoid branches.
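
A sketch of the 64-bit flavour (illustrative, not necessarily the exact
committed code):

static __inline u_long
bsfq(u_long mask)
{
	u_long	result;

	__asm __volatile("bsfq %1,%0" : "=r" (result) : "rm" (mask));
	return (result);
}

/* ffs()/ffsl() map to the builtins, which can use cmov instead of a branch. */
#define	ffsl(x)	__builtin_ffsl(x)
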
2003-12-06 23:22:43 +00:00
peter
2edc2f597b Update the graffiti. 2003-11-08 04:39:22 +00:00
peter
12d7e4bee6 Collect the nastiness for preserving the kernel MSR_GSBASE around the
load_gs() calls into a single place that is less likely to go wrong.

Eliminate the per-process context switching of MSR_GSBASE, because it
should be constant for a single cpu.  Instead, save/restore it during
the loading of the new %gs selector for the new process.
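
The pattern being centralized looks roughly like this (a sketch assuming
rdmsr()/wrmsr() and intr_disable()/intr_restore() helpers):

static __inline void
load_gs(u_int sel)
{
	uint64_t base;
	register_t flags;

	flags = intr_disable();		/* keep the stale-GSBASE window atomic */
	base = rdmsr(MSR_GSBASE);	/* save the kernel's %gs base */
	__asm __volatile("mov %0,%%gs" : : "rm" (sel));	/* clobbers GSBASE */
	wrmsr(MSR_GSBASE, base);	/* restore the per-cpu kernel value */
	intr_restore(flags);
}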

Approved by:	re (amd64/* blanket)
2003-05-15 00:23:40 +00:00
peter
770abdbb9c Add BASIC i386 binary support for the amd64 kernel. This is largely
stolen from the ia64/ia32 code (indeed there was a repocopy), but I've
redone the MD parts and added and fixed a few essential syscalls.  It
is sufficient to run i386 binaries like /bin/ls, /usr/bin/id (dynamic)
and p4.  The ia64 code has not implemented signal delivery, so I had
to do that.

Before you say it, yes, this does need to go in a common place.  But
we're in a freeze at the moment and I didn't want to risk breaking ia64.
I will sort this out after the freeze so that the common code is in a
common place.

On the AMD64 side, this required adding segment selector context switch
support and some other support infrastructure.  The %fs/%gs etc code
is hairy because loading %gs will clobber the kernel's current MSR_GSBASE
setting.  The segment selectors are not used by the kernel, so they're only
changed at context switch time or when changing modes.  This still needs
to be optimized.

Approved by:	re (amd64/* blanket)
2003-05-14 04:10:49 +00:00
peter
45949ccde1 Commit MD parts of a loosely functional AMD64 port. This is based on
a heavily stripped down FreeBSD/i386 (brutally stripped down actually) to
attempt to get a stable base to start from.  There is a lot missing still.
Worth noting:
- The kernel runs at 1GB in order to cheat with the pmap code.  pmap uses
  a variation of the PAE code in order to avoid having to worry about 4
  levels of page tables yet.
- It boots in 64 bit "long mode" with a tiny trampoline embedded in the
  i386 loader.  This simplifies locore.s greatly.
- There are still quite a few fragments of i386-specific code that have
  not been translated yet, and some that I cheated and wrote dumb C
  versions of (bcopy etc).
- It has both int 0x80 for syscalls (but using registers for argument
  passing, as is native on the amd64 ABI), and the 'syscall' instruction
  for syscalls.  int 0x80 preserves all registers, 'syscall' does not.
- I have tried to minimize looking at the NetBSD code, except in a couple
  of places (eg: to find which register they use to replace the trashed
  %rcx register in the syscall instruction).  As a result, there is not a
  lot of similarity.  I did look at NetBSD a few times while debugging to
  get some ideas about what I might have done wrong in my first attempt.
2003-05-01 01:05:25 +00:00
davidxu
f781b4eab2 Backout my last commit.
Requested by: bde
2003-04-20 01:35:21 +00:00
davidxu
6223a95348 Don't return garbage in high 16 bits. 2003-04-19 02:40:39 +00:00
peter
e8f9ac26c8 Create inlines for ltr(sel), lldt(sel), lidt(addr) rather than
functions that have one instruction.
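
The inlines are essentially one instruction each, e.g. (sketch):

static __inline void
ltr(u_short sel)
{
	__asm __volatile("ltr %0" : : "r" (sel));
}

static __inline void
lldt(u_short sel)
{
	__asm __volatile("lldt %0" : : "r" (sel));
}

static __inline void
lidt(struct region_descriptor *addr)
{
	__asm __volatile("lidt (%0)" : : "r" (addr));
}
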
2002-09-22 04:45:21 +00:00
markm
fed5dfb239 Provide an inline function for the (GNUC) assembler "hlt" instruction. 2002-09-21 18:26:53 +00:00
peter
4a13c91393 Move SWTCH_OPTIM_STATS related code out of cpufunc.h. (This sort of stat
gathering is not an x86 cpu feature)
2002-07-21 05:22:16 +00:00
markm
ba39a3d3df Cast to prevent "signed/unsigned comparison" warnings. 2002-07-15 13:27:43 +00:00
peter
4d88d6566a Revive backed out pmap related changes from Feb 2002. The highlights are:
- It actually works this time, honest!
- Fine grained TLB shootdowns for SMP on i386.  IPI's are very expensive,
  so try and optimize things where possible.
- Introduce ranged shootdowns that can be done as a single IPI.
- PG_G support for i386
- Specific-cpu targeted shootdowns.  For example, there is no sense in
  globally purging the TLB cache when we are stealing a page from
  the local unshared process on the local cpu.  Use pm_active to track
  this.
- Add some instrumentation for the tlb shootdown code.
- Rip out SMP code from <machine/cpufunc.h>
- Try and fix some very bogus PG_G and PG_PS interactions that were bad
  enough to cause vm86 bios calls to break.  vm86 depended on our existing
  bugs and this was the cause of the VESA panics last time.
- Fix the silly one-line error that caused the 'panic: bad pte' last time.
- Fix a couple of other silly one-line errors that should have caused more
  pain than they did.

Some more work is needed:
- pmap_{zero,copy}_page[_idle].  These can be done without IPI's if we
  have a hook in cpu_switch.
- The IPI handlers need some cleanup.  I have a bogus %ds load that can
  be avoided.
- APTD handling is rather bogus and appears to be a large source of
  global TLB IPI shootdowns for no really good reason.

I see speedups of between 1.5% and ~4% on buildworlds in a while 1 loop.
I expect to see a bigger difference when there is significant pageout
activity or the system otherwise has memory shortages.

I have backed out a few optimizations that I had been using over the last
few days in order to be a little more conservative.  I'll revisit these
again over the next few days as the dust settles.

New option:  DISABLE_PG_G - In case I missed something.
2002-07-12 07:56:11 +00:00
jhb
096c0249dc Rename pause() to ia32_pause() so it doesn't conflict with the pause()
function defined in <unistd.h>.  I didn't #ifdef _KERNEL it because the
mutex implementation in libpthread will probably need this.
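
The renamed inline is trivially small (sketch):

static __inline void
ia32_pause(void)
{
	__asm __volatile("pause");
}
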
2002-05-22 20:32:39 +00:00
jhb
06e51352f6 Debug registers aren't selectors, so use saner names for the variables in
the inline functions for reading and writing the debug registers.
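
For illustration, accessors of roughly this shape (i386-era sketch; names
assumed):

static __inline u_int
rdr0(void)
{
	u_int	data;

	__asm __volatile("movl %%dr0,%0" : "=r" (data));
	return (data);
}

static __inline void
load_dr0(u_int dr0)
{
	__asm __volatile("movl %0,%%dr0" : : "r" (dr0));
}
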
2002-05-22 13:29:18 +00:00
jhb
fd3295715e - Sort the pause() inline into the appropriate location.
- Add many missing prototypes to the non-GCC section.
2002-05-22 13:27:05 +00:00
jhb
2f66cc911b Rename cpu_pause() to pause(). Originally I was going to make this an
MI API with empty cpu_pause() functions on other arch's, but this
functionality is definitely unique to IA-32, so I decided to leave it
as i386-only and wrap it in #ifdef's.  I should have dropped the cpu_
prefix when I made that decision.

Requested by:	bde
2002-05-22 13:19:22 +00:00
jhb
1a9e2f75a7 Add an inline function cpu_pause() for the IA32 'pause' instruction. 2002-05-21 20:21:53 +00:00
dwmalone
a1416e3bd8 Move do_cpuid into the correct place in this file and make
the indentation more like the other multi-line assembly in
this file.

Someone who understands gcc constraints could update the
constraints for do_cpuid.
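
For reference, a constraint-annotated version could look like this (a sketch,
not necessarily what was committed):

static __inline void
do_cpuid(u_int ax, u_int *p)
{
	__asm __volatile("cpuid"
	    : "=a" (p[0]), "=b" (p[1]), "=c" (p[2]), "=d" (p[3])
	    : "0" (ax));
}
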
2002-04-10 21:18:46 +00:00
dillon
3ad295d416 Stage-2 commit of the critical*() code. This re-inlines cpu_critical_enter()
and cpu_critical_exit() and moves associated critical prototypes into their
own header file, <arch>/<arch>/critical.h, which is only included by the
three MI source files that need it.

Back out and re-apply improperly committed syntactical cleanups made to files
that were still under active development.  Back out improperly committed program
structure changes that moved localized declarations to the top of two
procedures.  Partially re-apply one of the program structure changes to
move 'mask' into an intermediate block rather than in three separate
sub-blocks to make the code more readable.  Re-integrate bug fixes that Jake
made to the sparc64 code.

Note: In general, developers should not gratuitously move declarations out
of sub-blocks.  They are where they are for reasons of structure, grouping,
readability, compiler-localizability, and to avoid developer-introduced bugs
similar to several found in recent years in the VFS and VM code.

Reviewed by:	jake
2002-04-01 23:51:23 +00:00
dillon
dc5aafeb94 Compromise for critical*()/cpu_critical*() recommit. Cleanup the interrupt
disablement assumptions in kern_fork.c by adding another API call,
cpu_critical_fork_exit().  Cleanup the td_savecrit field by moving it
from MI to MD.  Temporarily move cpu_critical*() from <arch>/include/cpufunc.h
to <arch>/<arch>/critical.c (stage-2 will clean this up).

Implement interrupt deferral for i386 that allows interrupts to remain
enabled inside critical sections.  This also fixes an IPI interlock bug,
and requires uses of icu_lock to be enclosed in a true interrupt disablement.

This is the stage-1 commit.  Stage-2 will occur after stage-1 has stabilized,
and will move cpu_critical*() into its own header file(s) + other things.
This commit may break non-i386 architectures in trivial ways.  This should
be temporary.

Reviewed by:	core
Approved by:	core
2002-03-27 05:39:23 +00:00
bde
73b1dab77a Fixed some style bugs in the removal of __P(()). The main ones were
not removing tabs before "__P((", and not outdenting continuation lines
to preserve non-KNF lining up of code with parentheses.  Switch to KNF
formatting and/or rewrap the whole prototype in some cases.
2002-03-23 15:09:35 +00:00
imp
a85d713abc Fix abuses of cpu_critical_{enter,exit} by converting to
intr_{disable,restore} as well as providing an implementation of
intr_{disable,restore}.
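
An i386-style implementation of the pair is only a few lines (sketch,
assuming read_eflags()/write_eflags() and disable_intr() helpers):

static __inline register_t
intr_disable(void)
{
	register_t eflags;

	eflags = read_eflags();
	disable_intr();			/* cli */
	return (eflags);
}

static __inline void
intr_restore(register_t eflags)
{
	write_eflags(eflags);		/* re-enables interrupts iff PSL_I was set */
}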

Reviewed by: jake, rwatson, jhb
2002-03-21 06:19:08 +00:00
imp
5281a9d636 Fix minor style(9) violation in de__Ping 2002-03-20 19:04:56 +00:00
alfred
61bf661405 Remove __P. 2002-03-20 05:48:58 +00:00
markm
875d7f4ce6 Make it a bit clearer where this file is to be used and where it
should not be. (Comments only)

Inspired by:	bde
2002-02-28 18:26:30 +00:00
peter
f2dee2e96f Back out all the pmap related stuff I've touched over the last few days.
There is some unresolved badness that has been eluding me, particularly
affecting uniprocessor kernels.  Turning off PG_G helped (which is a bad
sign) but didn't solve it entirely.  Userland programs still crashed.
2002-02-27 09:51:33 +00:00
dillon
996781f17a revert last commit temporarily due to whining on the lists. 2002-02-26 20:33:41 +00:00
dillon
57b097e18c STAGE-1 of 3 commit - allow (but do not require) interrupts to remain
enabled in critical sections and streamline critical_enter() and
critical_exit().

This commit allows an architecture to leave interrupts enabled inside
critical sections if it so wishes.  Architectures that do not wish to do
this are not affected by this change.

This commit implements the feature for the I386 architecture and provides
a sysctl, debug.critical_mode, which defaults to 1 (use the feature).  For
now you can turn the sysctl on and off at any time in order to test the
architectural changes or track down bugs.

This commit is just the first stage.  Some areas of the code, specifically
the MACHINE_CRITICAL_ENTER #ifdef'd code, are strictly temporary and will
be cleaned up in the STAGE-2 commit when the critical_*() functions are
moved entirely into MD files.

The following changes have been made:

	* critical_enter() and critical_exit() for I386 now simply increment
	  and decrement curthread->td_critnest.  They no longer disable
	  hard interrupts.  When critical_exit() decrements the counter to
	  0 it effectively calls a routine to deal with whatever interrupts
	  were deferred during the time the code was operating in a critical
	  section.

	  Other architectures are unaffected.

	* fork_exit() has been conditionalized to remove MD assumptions for
	  the new code.  Old code will still use the old MD assumptions
	  in regards to hard interrupt disablement.  In STAGE-2 this will
	  be turned into a subroutine call into MD code rather than hardcoded
	  in MI code.

	  The new code places the burden of entering the critical section
	  in the trampoline code where it belongs.

	* I386: interrupts are now enabled while we are in a critical section.
	  The interrupt vector code has been adjusted to deal with the fact.
	  If it detects that we are in a critical section it currently defers
	  the interrupt by adding the appropriate bit to an interrupt mask.

	* In order to accomplish the deferral, icu_lock is required.  This
	  is i386-specific.  Thus icu_lock can only be obtained by mainline
	  i386 code while interrupts are hard disabled.  This change has been
	  made.

	* Because interrupts may or may not be hard disabled during a
	  context switch, cpu_switch() can no longer simply assume that
	  PSL_I will be in a consistent state.  Therefore, it now saves and
	  restores eflags.

	* FAST INTERRUPT PROVISION.  Fast interrupts are currently deferred.
	  The intention is to eventually allow them to operate either while
	  we are in a critical section or, if we are able to restrict the
	  use of sched_lock, while we are not holding the sched_lock.

	* ICU and APIC vector assembly for I386 cleaned up.  The ICU code
	  has been cleaned up to match the APIC code in regards to format
	  and macro availability.  Additionally, the code has been adjusted
	  to deal with deferred interrupts.

	* Deferred interrupts use a per-cpu boolean int_pending, and
	  masks ipending, spending, and fpending.  Being per-cpu variables,
	  it is not currently necessary to use locked bus cycles when modifying them.

	  Note that the same mechanism will enable preemption to be
	  incorporated as a true software interrupt without having to
	  further hack up the critical nesting code.

	* Note: the old critical_enter() code in kern/kern_switch.c is
	  currently #ifdef to be compatible with both the old and new
	  methodology.  In STAGE-2 it will be moved entirely to MD code.

Performance issues:

	One of the purposes of this commit is to enhance critical section
	performance, specifically to greatly reduce bus overhead to allow
	the critical section code to be used to protect per-cpu caches.
	These caches, such as Jeff's slab allocator work, can potentially
	operate very quickly making the effective savings of the new
	critical section code's performance very significant.

	The second purpose of this commit is to allow architectures to
	enable certain interrupts while in a critical section.  Specifically,
	the intention is to eventually allow certain FAST interrupts to
	  operate rather than defer.

	The third purpose of this commit is to begin to clean up the
	critical_enter()/critical_exit()/cpu_critical_enter()/
	cpu_critical_exit() API which currently has serious cross pollution
	in MI code (in fork_exit() and ast() for example).

	The fourth purpose of this commit is to provide a framework that
	allows kernel-preempting software interrupts to be implemented
	cleanly.  This is currently used for two forward interrupts in I386.
	Other architectures will have the choice of using this infrastructure
	or building the functionality directly into critical_enter()/
	critical_exit().

	Finally, this commit is designed to greatly improve the flexibility
	of various architectures to manage critical section handling,
	software interrupts, preemption, and other highly integrated
	architecture-specific details.
2002-02-26 17:06:21 +00:00
peter
748d0e1167 Work-in-progress commit syncing up pmap cleanups that I have been working
on for a while:
- fine grained TLB shootdown for SMP on i386
- ranged TLB shootdowns.. eg: specify a range of pages to shoot down with
  a single IPI, since the IPI is very expensive.  Adjust some callers
  that used to trigger this inside tight loops to do a ranged shootdown
  at the end instead.
- PG_G support for SMP on i386 (options ENABLE_PG_G)
- defer PG_G activation till after we decide what we are going to do with
  PSE and the 4MB pages at the start of the kernel.  This should solve
  some rumored strangeness about stale PG_G entries getting stuck
  underneath the 4MB pages.
- add some instrumentation for the fine TLB shootdown
- convert some asm instruction wrappers from functions to inlines.  gcc
  seems to do a fair bit better with this.
- [temporarily!] pessimize the tlb shootdown IPI handlers.  I will fix
  this again shortly.

This has been working fairly well for me for a while, but I have tweaked
it again prior to commit since my last major testing round.  The only
outstanding problem that I know of is PG_G related, which is why there
is an option for it (not on by default for SMP).  I have seen world
speedups of a few percent (as much as 4 or 5% in one case) but I have
*not* accurately measured this - I am a bit sceptical of these numbers.
2002-02-25 23:49:51 +00:00
dwmalone
d48667d57b Move do_cpuid() from identcpu.c into cpufunc.h. 2002-02-12 21:06:48 +00:00
jhb
5d7a6cb932 Various assembly fixes mostly in the form of using the "+" modifier for
output operands to mark them as both input and output rather than listing
operands twice.
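
Illustrative before/after (hypothetical operand):

static __inline u_int
bump(u_int val)
{
	/* Old style: the same variable listed as output and matching input. */
	/* __asm __volatile("incl %0" : "=r" (val) : "0" (val)); */

	/* New style: a single read-write ("+") operand. */
	__asm __volatile("incl %0" : "+r" (val));
	return (val);
}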

Reviewed by:	bde
2001-12-18 08:54:39 +00:00
jhb
a3b98398cb Modify the critical section API as follows:
- The MD functions critical_enter/exit are renamed to start with a cpu_
  prefix.
- MI wrapper functions critical_enter/exit maintain a per-thread nesting
  count and a per-thread critical section saved state set when entering
  a critical section while at nesting level 0 and restored when exiting
  to nesting level 0.  This moves the saved state out of spin mutexes so
  that interlocking spin mutexes works properly (see the sketch after this list).
- Most low-level MD code that used critical_enter/exit now use
  cpu_critical_enter/exit.  MI code such as device drivers and spin
  mutexes use the MI wrappers.  Note that since the MI wrappers store
  the state in the current thread, they do not have any return values or
  arguments.
- mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
  assigned to curthread->td_savecrit during fork_exit().
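
A minimal sketch of the MI wrappers described in the second item above
(field names as described; not the exact committed code):

void
critical_enter(void)
{
	struct thread *td = curthread;

	if (td->td_critnest == 0)
		td->td_savecrit = cpu_critical_enter();
	td->td_critnest++;
}

void
critical_exit(void)
{
	struct thread *td = curthread;

	if (td->td_critnest == 1)
		cpu_critical_exit(td->td_savecrit);
	td->td_critnest--;
}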

Tested on:	i386, alpha
2001-12-18 00:27:18 +00:00
bsd
47d3082050 Provide access to the IA32 hardware debug registers from the ddb
kernel debugger.  Proper use of these registers allows setting
hardware watchpoints for use in kernel debugging.

MFC after: 2 weeks
2001-06-28 02:08:13 +00:00
imp
87e8621c52 Back out 1.103. It wasn't approved by the owner of the file and
introduced style bugs.

Submitted by: bde
2001-04-18 20:57:43 +00:00
imp
3feb61668d De __P() while I'm here. Done as a separate commit since it is just
stylistic.

# Yes, this breaks K&R, but this file already used so many gcc extensions
# keeping K&R support seemed too anachronistic for me.

Didn't fix the bug where functions that can only be used in the kernel
are exported to userland.
2001-04-03 18:50:55 +00:00