Commit Graph

21 Commits

Author SHA1 Message Date
John Baldwin
d9610574a2 Add extra constraints to tell the compiler that the memory being modified
in the arm __swp() and sparc64 casa() and casax() functions is actually
used as an input and an output, not just the value of the register that
points to the memory location.  This was the underlying source of the
mbuf refcount problems on sparc64 a while back.  For arm this should be
a no-op, because __swp() already has a constraint clobbering all memory,
which can probably be removed now.

Reviewed by:	alc, cognet
MFC after:	1 week
2005-07-27 20:01:45 +00:00
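
A minimal sketch of the kind of constraint change this commit describes, for a
casa()-style wrapper (sparc64-only GCC inline asm; the macro name and exact form
are illustrative, not the verbatim cpufunc.h text):

    #include <stdint.h>

    /*
     * casa compares the word at [rs1] (in address space 'asi') with rs2
     * and, if equal, swaps it with rd; rd always receives the old memory
     * value.  The "+m" operand is the point of the commit: it tells the
     * compiler that the memory itself is read and written, not merely
     * the register holding its address.
     */
    #define casa_sketch(rs1, rs2, rd, asi) ({                           \
            uint32_t __rd = (uint32_t)(rd);                             \
            __asm __volatile("casa [%2] %3, %4, %0"                     \
                : "+r" (__rd), "+m" (*(rs1))                            \
                : "r" (rs1), "n" (asi), "r" (rs2));                     \
            __rd;                                                       \
    })

Without memory operands the compiler only sees the pointer value being used, so
it is free to cache or reorder accesses to the word itself, which is the kind of
miscompilation behind the mbuf refcount problems mentioned above.
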
Marius Strobl
7bed9b320b - Add a workaround for a bug in BlackBird CPUs (said to be part of the
SpitFire erratum #54) which can cause writes to the TICK_CMPR register
  to fail. This seems to fix the dying clocks problem reported by jhb@
  and kris@. [1]
- In tick_start() don't reset the tick counter of the boot processor to
  zero. It's initially reset in _start(), and afterwards, but _before_
  tick_start() is called on the BSP, the APs synchronise with the tick
  counter of the BSP in mp_startup(). Resetting the tick counter of the
  BSP in tick_start() probably also was the cause of problems seen when
  using the CPU tick counter as timecounter on SMP machines.
  Not resetting the tick counter of the BSP in tick_start() keeps the
  tick counters and tick interrupts of the BSP and the APs pretty much
  in sync, as they are supposed to be. This also means there's no longer
  a real reason to have separate tick_start() and tick_start_ap() so
  merge them and zap tick_start_ap(). This is also a first step in
  simplifying the interface to the tick counters in preparation to use
  alternate clock hardware where available.
- Switch to the algorithm used on FreeBSD/ia64 for updating the tick
  interrupt register, which compensates for the clock drift caused by
  varying delays between when the tick interrupts actually trigger and
  when they are serviced. Not compensating for this drift mainly hurts
  interactive performance, especially when using WITNESS. [2]
  For further information about the algorithm also see the commit log
  of sys/ia64/ia64/interrupt.c rev. 1.38; a sketch of the idea follows
  this entry.
  On sparc64 the sysctls for monitoring the behaviour of the tick
  interrupts are machdep.tick.adjust_edges, machdep.tick.adjust_excess,
  machdep.tick.adjust_missed and machdep.tick.adjust_ticks.
- In tick_init() just use tick_stop() for stopping the tick interrupts
  until a proper handler is set up later. This also stops the system
  tick interrupt on USIII systems earlier.
- In tick_start() sanity-check HZ against a rough upper limit.
- Some minor changes, e.g. use FBSDID, remove unused headers, etc.

Info obtained from:	Linux [1]
Ok'ed by:		marcel [2]
Additional testing by:	kris (earlier version of the workaround), jhb
X-MFC after:		3 days [1]
2005-04-16 14:57:38 +00:00
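
A hedged sketch of the drift-compensation idea referenced in that entry
(illustrative names and simplified logic, not the actual sys/sparc64/sparc64/tick.c
or ia64 code):

    #include <stdint.h>

    static uint64_t tick_increment;     /* counter ticks per hardclock period */

    /*
     * Compute the next compare value from the compare value that just
     * fired and the current counter reading.  Basing the new deadline on
     * the old compare value rather than on "now" keeps the long-term
     * interrupt rate exact regardless of service latency; if latency ate
     * one or more whole periods, skip forward so the new deadline is
     * still in the future (roughly the events the machdep.tick.adjust_*
     * sysctls count).
     */
    uint64_t
    tick_next_compare(uint64_t compare, uint64_t now)
    {
            uint64_t next, missed;

            next = compare + tick_increment;
            if ((int64_t)(next - now) <= 0) {
                    missed = (now - next) / tick_increment + 1;
                    next += missed * tick_increment;
            }
            return (next);
    }
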
Marius Strobl
c066bca62d Fix a style(9) bug in the stxa_sync() macro (DO NOT use function calls
in initializers).
2005-04-16 14:47:50 +00:00
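
For reference, the style(9) rule in question, shown as a tiny illustrative
example (hypothetical code, not the stxa_sync() macro itself):

    #include <time.h>

    void
    example(void)
    {
            /*
             * style(9) violation: a function call in the initializer:
             *         time_t now = time(NULL);
             * Preferred: declare first, then assign.
             */
            time_t now;

            now = time(NULL);
            (void)now;
    }
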
Marius Strobl
980284e38f Switch from BSD-style u_intXX_t to ISO C99 uintXX_t. 2004-05-22 00:47:26 +00:00
John-Mark Gurney
dffca5a624 add support for peeking at pci busses on UltraSparc systems. This prevents
data access errors when trying to read/write to non-existent PCI devices.

fix the psycho bridge to use peek for probing devices.  This no longer
fakes it if the OFW node doesn't exist (and the reg == 0).

Reviewed by:	jake, tmm
2003-06-22 01:26:08 +00:00
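
The peek machinery catches the data access error instead of letting it take the
machine down.  As a rough userland analogue only (the real kernel code catches
the trap directly rather than using signals; names here are illustrative):

    #include <setjmp.h>
    #include <signal.h>
    #include <stdint.h>

    static sigjmp_buf peek_env;

    static void
    peek_fault(int sig)
    {
            siglongjmp(peek_env, 1);
    }

    /* Read *addr; return 0 on success, -1 if the access faulted. */
    int
    peek32(volatile uint32_t *addr, uint32_t *valp)
    {
            struct sigaction sa, osegv, obus;
            int error;

            error = -1;
            sa.sa_handler = peek_fault;
            sa.sa_flags = 0;
            sigemptyset(&sa.sa_mask);
            sigaction(SIGSEGV, &sa, &osegv);
            sigaction(SIGBUS, &sa, &obus);
            if (sigsetjmp(peek_env, 1) == 0) {
                    *valp = *addr;          /* may fault on a missing device */
                    error = 0;
            }
            sigaction(SIGSEGV, &osegv, NULL);
            sigaction(SIGBUS, &obus, NULL);
            return (error);
    }
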
Jake Burkholder
92fed30a07 Use the vis block copy/zero functions for pmap_copy_page and pmap_zero_page.
These are called through function pointers so that different implementations
can be provided for cheetah, where the block load instructions may or may
not be a win, and so they can be disabled with the machdep.use_vis tunable.
In terms of raw bandwidth the integer versions are faster, but not allocating
lines in the L2 cache for useless data gives a measurable improvement in user
time for the benchmarks I tested (mostly buildworld with -j8).

As far as I can tell the instructions used are implemented on everything
back to UltraSPARC I, so there should not be a problem with different CPU
types.
2003-04-06 17:05:26 +00:00
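
A hedged sketch of the function-pointer dispatch described above (names and the
8K page size are illustrative; the real routines are VIS assembly, not memcpy()):

    #include <string.h>

    #define PAGE_SIZE_SKETCH        8192

    static void
    page_copy_integer(void *dst, const void *src)
    {
            memcpy(dst, src, PAGE_SIZE_SKETCH);     /* plain integer loads/stores */
    }

    static void
    page_copy_vis(void *dst, const void *src)
    {
            /*
             * Stand-in for the 64-byte VIS block copy, which avoids
             * allocating L2 cache lines for data that won't be reused.
             */
            memcpy(dst, src, PAGE_SIZE_SKETCH);
    }

    /* Selected once, e.g. from a machdep.use_vis-style tunable. */
    static void (*page_copy_impl)(void *, const void *) = page_copy_integer;

    void
    select_page_copy(int use_vis)
    {
            page_copy_impl = use_vis ? page_copy_vis : page_copy_integer;
    }

    void
    pmap_copy_page_sketch(void *dst, const void *src)
    {
            page_copy_impl(dst, src);
    }
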
Jake Burkholder
6412c65cf0 Add optimized block copy and zero functions using vis instructions, which
can do 64 bytes at a time and don't allocate lines in the L2 cache.  These
assume that everything is 64 byte aligned, and that there's more than 128
bytes of data (best for whole pages).  The block load and store instructions
don't follow normal memory ordering rules and require either a memory barrier
or a move between registers before the data can actually be used.  This
implementation correctly shuffles around 3 out of the 4 sets of registers
in order to avoid memory barriers except for the last 2 blocks.
2003-04-03 18:43:40 +00:00
Matthew Dillon
182da8209d Stage-2 commit of the critical*() code. This re-inlines cpu_critical_enter()
and cpu_critical_exit() and moves associated critical prototypes into their
own header file, <arch>/<arch>/critical.h, which is only included by the
three MI source files that need it.

Back out and re-apply improperly committed syntactical cleanups made to files
that were still under active development.  Back out improperly committed program
structure changes that moved localized declarations to the top of two
procedures.  Partially re-apply one of the program structure changes to
move 'mask' into an intermediate block rather than in three separate
sub-blocks to make the code more readable.  Re-integrate bug fixes that Jake
made to the sparc64 code.

Note: In general, developers should not gratuitously move declarations out
of sub-blocks.  They are where they are for reasons of structure, grouping,
readability, compiler-localizability, and to avoid developer-introduced bugs
similar to several found in recent years in the VFS and VM code.

Reviewed by:	jake
2002-04-01 23:51:23 +00:00
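
An illustrative example of the kind of localized declaration the note argues for
leaving in place (hypothetical code, not from the commit):

    void
    handler_sketch(int pending)
    {
            if (pending != 0) {
                    int mask;       /* only meaningful inside this block */

                    mask = pending & ~1;
                    (void)mask;
            }
    }
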
Matthew Dillon
d74ac6819b Compromise for critical*()/cpu_critical*() recommit. Cleanup the interrupt
disablement assumptions in kern_fork.c by adding another API call,
cpu_critical_fork_exit().  Cleanup the td_savecrit field by moving it
from MI to MD.  Temporarily move cpu_critical*() from <arch>/include/cpufunc.h
to <arch>/<arch>/critical.c (stage-2 will clean this up).

Implement interrupt deferral for i386 that allows interrupts to remain
enabled inside critical sections.  This also fixes an IPI interlock bug,
and requires uses of icu_lock to be enclosed in a true interrupt disablement.

This is the stage-1 commit.  Stage-2 will occur after stage-1 has stabilized,
and will move cpu_critical*() into its own header file(s) + other things.
This commit may break non-i386 architectures in trivial ways.  This should
be temporary.

Reviewed by:	core
Approved by:	core
2002-03-27 05:39:23 +00:00
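
A hedged sketch of the i386 interrupt-deferral idea mentioned above (illustrative
names and a single pending word; not the actual stage-1 code):

    #include <stdint.h>

    static volatile int      in_critical;   /* inside a critical section? */
    static volatile uint32_t ipending;      /* interrupts seen while inside */

    void
    cpu_critical_enter_sketch(void)
    {
            in_critical = 1;                /* interrupts stay enabled */
    }

    /* Called from the low-level interrupt stub; returns 1 if deferred. */
    int
    maybe_defer_sketch(int irq)
    {
            if (in_critical) {
                    ipending |= 1u << irq;  /* run it at critical exit */
                    return (1);
            }
            return (0);
    }

    void
    cpu_critical_exit_sketch(void (*run_handler)(int irq))
    {
            uint32_t pend;
            int irq;

            in_critical = 0;
            while ((pend = ipending) != 0) {
                    ipending &= ~pend;
                    for (irq = 0; irq < 32; irq++)
                            if (pend & (1u << irq))
                                    run_handler(irq);
            }
    }
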
Warner Losh
03742795eb intr_disable returns register_t 2002-03-21 06:21:32 +00:00
Thomas Moestl
9fb2b0d55e Add a few new functions/macros: intr_disable() and intr_restore() to
disable interrupts completely, and stxa_sync(), which performs a store
immediately followed by a membar #Sync with interrupts disabled (this
is needed for writes to diagnostic registers).
2002-02-13 15:40:05 +00:00
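
A sketch of what an stxa_sync()-style macro looks like in terms of the new
primitives (sparc64 inline asm; assumes the kernel's register_t and the
intr_disable()/intr_restore() added here; illustrative, not the verbatim header
text):

    #define stxa_sync_sketch(va, asi, val) do {                         \
            register_t __s;                                             \
            __s = intr_disable();  /* no trap between store and #Sync */ \
            __asm __volatile("stxa %0, [%1] %2; membar #Sync"           \
                : : "r" (val), "r" (va), "n" (asi) : "memory");         \
            intr_restore(__s);                                          \
    } while (0)
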
Jake Burkholder
c0e12e9356 Add a mov() macro, which is used in conjunction with the register defines
for setting reserved global registers from C.
2002-01-08 04:36:01 +00:00
Jake Burkholder
f664fe9075 Add "memory" to the clobber list for membars.
Submitted by:	tmm
2001-12-29 06:51:50 +00:00
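
Illustrative form of such a macro (not necessarily the exact header text).  The
"memory" clobber also makes the membar a compiler barrier, so GCC can neither
cache memory values in registers across it nor move memory accesses around it:

    #define membar_sync_sketch()                                        \
            __asm __volatile("membar #Sync" : : : "memory")
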
John Baldwin
7e1f6dfe9d Modify the critical section API as follows:
- The MD functions critical_enter/exit are renamed to start with a cpu_
  prefix.
- MI wrapper functions critical_enter/exit maintain a per-thread nesting
  count and a per-thread critical section saved state set when entering
  a critical section while at nesting level 0 and restored when exiting
  to nesting level 0.  This moves the saved state out of spin mutexes so
  that interlocking spin mutexes works properly.
- Most low-level MD code that used critical_enter/exit now use
  cpu_critical_enter/exit.  MI code such as device drivers and spin
  mutexes use the MI wrappers.  Note that since the MI wrappers store
  the state in the current thread, they do not have any return values or
  arguments.
- mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
  assigned to curthread->td_savecrit during fork_exit().

Tested on:	i386, alpha
2001-12-18 00:27:18 +00:00
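
A hedged sketch of the MI wrapper logic described above (field and function names
are illustrative; the real wrappers operate on curthread and take no arguments):

    /* MD primitives, now spelled with the cpu_ prefix (prototypes only). */
    unsigned long   cpu_critical_enter_sketch(void);
    void            cpu_critical_exit_sketch(unsigned long savecrit);

    struct thread_sketch {
            int             td_critnest;    /* nesting count */
            unsigned long   td_savecrit;    /* MD state saved at level 0 */
    };

    void
    critical_enter_sketch(struct thread_sketch *td)
    {
            if (td->td_critnest == 0)
                    td->td_savecrit = cpu_critical_enter_sketch();
            td->td_critnest++;
    }

    void
    critical_exit_sketch(struct thread_sketch *td)
    {
            if (td->td_critnest == 1)
                    cpu_critical_exit_sketch(td->td_savecrit);
            td->td_critnest--;
    }
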
Jake Burkholder
4d5e57d87d 1. Implement ascopyto() and ascopyfrom() for copying to an alternate address
space from kernel space and from an alternate address space to kernel
   space.
2. Remove the unused and unprototyped physcopy() and physzero() and replace
   with the more versatile ascopy() and aszero(), inspired by the above.
   These can be used to copy and zero physical pages of memory without mapping
   them into kernel space first.
3. Use magic numbers for the offsets in the jmpbuf structure like other
   platforms.
4. Use SET.

Submitted by: 	tmm (1, 4)
2001-11-18 02:47:26 +00:00
Thomas Moestl
11dd9255e9 Add bus_space and busdma support for sparc64. 2001-11-09 20:05:53 +00:00
Jake Burkholder
275e4fcf0a Add a flushw() macro. 2001-09-03 22:13:53 +00:00
Jake Burkholder
f70bfa97c0 Spell ta 1 correctly as ta %xcc, 1. Use %pil for critical enter/exit
instead of pstate.ie.  Note that popc is not implemented in hardware
on certain ultras, so we can't use it for inline ffs (suck).
2001-08-18 18:07:37 +00:00
David E. O'Brien
73a4930297 The author isn't a [UC] Regents. Correct the copyright language. 2001-08-09 02:09:34 +00:00
Jake Burkholder
89bf8575ee Flesh out the sparc64 port considerably. This contains:
- mostly complete kernel pmap support, and userland pmap support which
  has been tested but is currently turned off
- low level assembly language trap, context switching and support code
- fully implemented atomic.h and supporting cpufunc.h
- some support for kernel debugging with ddb
- various header tweaks and filling out of machine dependent structures
2001-07-31 06:05:05 +00:00
Jake Burkholder
98bb5304e1 Add skeleton machine dependent headers and C files for a port of FreeBSD
to a new architecture.  This is the base of the sparc64 port, but contains
limited machine dependent code, and can be used as a base for other ports.  Included
are:
- standard machine dependent headers, tweaked for a 64 bit, big endian
  architecture, including empty versions of all the machine dependent
  structures
- a machine independent atomic.h, which can be used until a port has
  support for interrupts and the operations really need to be atomic
- stub versions of all the machine dependent functions, which panic
  when called and print out the name of the function that needs to
  be implemented.  Functions which are normally in assembly files are
  not included, but this should reduce the number of different undefined
  references on the first few compiles from hundreds to 5 or 6.
Given minimal startup code and console support it should be trivial to
make this compile and run the first few sysinits on almost any architecture.

Requested by:   alfred, imp, jhb
2001-07-31 05:45:16 +00:00
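
Illustrative shape of the panicking stubs described above (kernel context; the
function name is hypothetical), so the first boot attempt immediately names the
next function that needs a real implementation:

    void
    cpu_something_stub(void)
    {

            panic("%s: not implemented", __func__);
    }
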