Commit Graph

381 Commits

Author SHA1 Message Date
jake
87daf4df50 Fix ifdef LOCORE protection. 2002-03-13 06:04:36 +00:00
jake
31419a58a4 Add a DEBUGGER_ON_POWERFAIL option. This makes the power button on ultra 10s
work like an NMI button.
2002-03-13 05:58:45 +00:00
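
A sketch of what such an option typically guards follows; the handler and its name are illustrative assumptions, not the actual sparc64 code.

#include <sys/param.h>
#include <sys/systm.h>

/*
 * Hypothetical power-fail interrupt handler: with DEBUGGER_ON_POWERFAIL
 * set, pressing the power button drops into the kernel debugger instead
 * of being handled normally.
 */
static void
powerfail_intr_example(void *arg)
{
#ifdef DEBUGGER_ON_POWERFAIL
        Debugger("powerfail");
#else
        /* normal power-fail handling (e.g. initiate a shutdown) */
#endif
}
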
jake
b767cbaa7a Fix braino. 2002-03-13 05:54:00 +00:00
jake
97430a03ac Add support for starting and stopping cpus with ipis.
Stop the other cpus when shutting down or entering the debugger.

Submitted by:	tmm
2002-03-13 04:59:01 +00:00
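
The debugger/shutdown half of this follows the generic MI pattern below; the sketch uses the stop_cpus()/restart_cpus() interface from <sys/smp.h>, and the surrounding function and mask arithmetic are illustrative assumptions.

#include <sys/param.h>
#include <sys/pcpu.h>
#include <sys/smp.h>

/*
 * Illustrative only: ipi the other cpus to stop before entering the
 * debugger on this one, then let them continue afterwards.
 */
static void
enter_debugger_example(void)
{
        stop_cpus(all_cpus & ~PCPU_GET(cpumask));
        /* ... run the debugger on this cpu ... */
        restart_cpus(stopped_cpus);
}
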
jake
89751e20c2 Use intr_disable/intr_restore instead of doing it manually.
Submitted by:	tmm
2002-03-13 04:43:45 +00:00
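
The idiom being adopted is the usual save/disable/restore pair from <machine/cpufunc.h>; a minimal sketch, with the critical section body left as a placeholder.

#include <sys/types.h>
#include <machine/cpufunc.h>

static void
no_interrupts_example(void)
{
        register_t s;

        s = intr_disable();     /* save the current state and disable interrupts */
        /* ... code that must run with interrupts disabled ... */
        intr_restore(s);        /* restore the saved interrupt state */
}
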
jake
2b8f2f82cf Add support for driving the clocks on secondary cpus.
Submitted by:	tmm
2002-03-13 04:38:33 +00:00
jake
ddffcffd5c Fix a bug where the wrong number of windows was copied for a failed fill
on return to user mode.  We may not have frame pointers set up for more
than one on return from exec.
2002-03-13 04:02:27 +00:00
jake
bfa19e6a1a White space. 2002-03-13 03:55:28 +00:00
jake
df6db29bae Make IPI_WAIT use a bit mask of the cpus that a pmap is active on and only
wait for those cpus, instead of waiting for all of them via a count.  Oops.
Make the pointer to the mask that the primary cpu spins on volatile, so
gcc doesn't optimize out an important load.  Oops again.
Activate tlb shootdown ipi synchronization now that it works.  We have
all involved cpus wait until all the others are done.  This may not be
necessary; it is mostly for sanity.
Make the trigger-level interrupt ipi handler work.

Submitted by:	tmm
2002-03-13 03:43:00 +00:00
jake
6ee80641df Add an ATOMIC_CLEAR_INT macro.
Submitted by:	tmm
2002-03-13 03:28:47 +00:00
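
ATOMIC_CLEAR_INT is an assembler-level macro; its C-level analogue is roughly the atomic_clear_int(9) operation, sketched here with illustrative names.

#include <sys/types.h>
#include <machine/atomic.h>

static volatile u_int example_flags;

/* Atomically clear the given bits in a word, e.g. an ipi-pending flag. */
static void
clear_flag_example(u_int bits)
{
        atomic_clear_int(&example_flags, bits);
}
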
tmm
aababfffd5 Fix the type of some constants, and make some macros safer by casting
the argument.
2002-03-11 03:04:28 +00:00
tmm
4667cf8132 Add convenience macros to extract the cc0 and cc1 fields from format 2 and 3
instructions.
2002-03-11 03:03:35 +00:00
jake
5f2da45bc7 Increase VM_KMEM_SIZE to 16 megs from 12. Define VM_KMEM_SIZE_SCALE so that
the number of physical pages per KVA page allocated scales properly with
memory size.  This fixes problems with kmem_map being too small.

Noticed by:	mike, wollman
Submitted by:	tmm
2002-03-09 23:35:50 +00:00
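
Roughly how the two options interact when the MI code sizes kmem_map; a simplified sketch in which the scale value and names are assumptions.

/*
 * Sketch: kmem gets at least VM_KMEM_SIZE, and on machines with enough
 * memory it grows as one KVA page per VM_KMEM_SIZE_SCALE physical pages.
 */
#define EX_VM_KMEM_SIZE         (16 * 1024 * 1024)      /* 16 meg floor */
#define EX_VM_KMEM_SIZE_SCALE   3                       /* assumed value */

static unsigned long
kmem_size_example(unsigned long physpages, unsigned long page_size)
{
        unsigned long size = EX_VM_KMEM_SIZE;

        if (physpages / EX_VM_KMEM_SIZE_SCALE * page_size > size)
                size = physpages / EX_VM_KMEM_SIZE_SCALE * page_size;
        return (size);
}
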
tmm
a55210aac5 Add a driver for the mem and kmem devices, based on the i386 version. 2002-03-09 22:33:16 +00:00
tmm
bea936f66b Set the interrupt map type accordingly if we need to fall back to using
the PCI bus interrupt map.
2002-03-09 22:02:02 +00:00
tmm
a3bd8f39b9 Fix a warning by adding a missing include. 2002-03-09 22:00:30 +00:00
mike
b8cc0d1207 o Don't require long long support in bswap64() functions.
o In i386's <machine/endian.h>, macros have some advantages over
  inlines, so change some inlines to macros.
o In i386's <machine/endian.h>, ungarbage collect word_swap_int()
  (previously __uint16_swap_uint32); it has some uses on i386s with
  PDP endianness.

Submitted by:	bde

o Move a comment up in <machine/endian.h> that was accidentally moved
  down a few revisions ago.
o Reenable userland's use of optimized inline-asm versions of
  byteorder(3) functions.
o Fix ordering of prototypes vs. redefinition of byteorder(3)
  functions, so that the non-GCC (libc asm) case has proper
  prototypes.
o Add proper prototypes for byteorder(3) functions in <sys/param.h>.
o Prevent duplicate prototypes by making use of the
  _BYTEORDER_PROTOTYPED define.
o Move the bswap16(), bswap32(), bswap64() C functions into MD space
  for platforms in which asm versions don't exist.  This significantly
  reduces the complexity of some things at the cost of duplicate code.

Reviewed by:	bde
2002-03-09 21:02:16 +00:00
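
The C fallbacks mentioned in the last item are simple bit shuffles; a generic sketch, not the exact libc/sys code.

#include <sys/types.h>

static __inline u_int16_t
bswap16_example(u_int16_t x)
{
        return ((x >> 8) | (x << 8));
}

static __inline u_int32_t
bswap32_example(u_int32_t x)
{
        return (((x >> 24) & 0xff) | ((x >> 8) & 0xff00) |
            ((x << 8) & 0xff0000) | ((x << 24) & 0xff000000U));
}
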
jake
33d439a7d0 Implement delivery of tlb shootdown ipis. This is currently more fine-grained
than the other implementations; we have complete control over the tlb, so we
only demap specific pages.  We take advantage of the ranged tlb flush api
to send one ipi for a range of pages, and due to the pm_active optimization
we rarely send ipis for demaps from user pmaps.

Remove now unused routines to load the tlb; this is only done once outside
of the tlb fault handlers.
Minor cleanups to the smp startup code.

This boots multi user with both cpus active on a dual ultra 60 and on a
dual ultra 2.
2002-03-07 06:01:40 +00:00
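
The pm_active optimization boils down to a check like the one below before any ipi goes out; apart from pm_active and pc_cpumask, the names are placeholders.

#include <sys/param.h>
#include <sys/pcpu.h>

/*
 * Sketch: only cpus that currently have the pmap active need to see the
 * demap; if no other cpu does, the ipi is skipped and only the local tlb
 * is flushed.
 */
static void
tlb_range_demap_example(u_int pm_active, vm_offset_t start, vm_offset_t end)
{
        u_int others;

        others = pm_active & ~PCPU_GET(cpumask);
        /* ... demap [start, end) in the local tlb ... */
        if (others != 0) {
                /* send one ranged-demap ipi to 'others' and wait for them */
        }
}
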
jake
951cf2831e Modify the tlb demap API to take a pmap instead of a tlb context number.
Due to allocating tlb contexts on the fly, we only ever need to demap the
primary context; non-primary contexts have already been implicitly flushed
by context switching.  All we really need to know is whether it's a kernel demap
or not, and it's easier just to compare against kernel_pmap, which is a
constant.
2002-03-07 05:25:15 +00:00
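
In other words, the demap paths only need a test of this shape; in the sketch below the two context constants are placeholder names.

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/pmap.h>

#define EX_CTX_KERNEL   0       /* placeholder for the kernel context */
#define EX_CTX_PRIMARY  1       /* placeholder for the current primary context */

/*
 * Sketch: with contexts handed out on the fly, the only distinction that
 * matters when demapping is kernel context vs. the primary (user) context.
 */
static __inline int
tlb_demap_context_example(pmap_t pm)
{
        return (pm == kernel_pmap ? EX_CTX_KERNEL : EX_CTX_PRIMARY);
}
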
jake
04926795be Implement kthread context stealing. This is a bit of a misnomer because
the context is not actually stolen, as it would be for i386.  Instead of
deactivating a user vmspace immediately when switching out, and recycling
its tlb context, wait until the next context switch to a different user
vmspace.  In this way we can switch from a user process to any number of
kernel threads and back to the same user process again, without losing any
of its mappings in the tlb that would not already have been knocked out by the automatic
replacement algorithm.  This is not expected to have a measurable performance
improvement on the machines we currently run on, but it sounds cool and makes
the sparc64 port SMPng buzzword compliant.
2002-03-07 05:15:43 +00:00
jake
4adfe1f199 Add support for starting secondary cpus in kernel, as opposed to relying
on the loader to do it.  Improve smp startup code to be less racy and to
defer certain things until the right time.  This almost boots single user
on my dual ultra 60; it is still very fragile:

SMP: AP CPU #1 Launched!
Enter full pathname of shell or RETURN for /bin/sh:
# ls
Debugger("trapsig")
Stopped at      Debugger+0x1c:  ta              %xcc, 1
db> heh
No such command
db>
2002-03-04 07:12:36 +00:00
jake
c87ee2427d Dig the information about which tlb slots were used to map the kernel out
of the metadata passed by the loader.
2002-03-04 07:07:10 +00:00
jake
8322761809 Allocate tlb contexts on the fly in cpu_switch, instead of statically 1-to-1
with pmaps.  When the context numbers wrap around we flush all user mappings
from the tlb.  This makes use of the array indexed by cpuid to allow a pmap
to have a different context number on a different cpu.  If the context numbers
are then divided evenly among cpus such that none are shared, we can avoid
sending tlb shootdown ipis in an smp system for non-shared pmaps.  This also
removes a limit of 8192 processes (pmaps) that could be active at any given
time due to running out of tlb contexts.

Inspired by:		the brown book
Crucial bugfix from:	tmm
2002-03-04 05:20:29 +00:00
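
A sketch of the allocation scheme described above; the names and flush hook are illustrative, and the 8192 limit is the one mentioned in the message.

#include <sys/types.h>

/*
 * Sketch: hand out context numbers sequentially; when the hardware range
 * wraps, flush all user mappings from the tlb and start over.  Context 0
 * stays reserved for the kernel, and each cpu keeps its own counter, so a
 * pmap may have a different context on each cpu.
 */
#define EX_CTX_MAX      8192

static u_int ex_next_context = 1;       /* per-cpu in the real code */

static u_int
context_alloc_example(void)
{
        if (ex_next_context == EX_CTX_MAX) {
                /* flush all user mappings from the tlb here */
                ex_next_context = 1;
        }
        return (ex_next_context++);
}
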
jake
dd2207f5cd Fix obscure problems with vfork where part of the parent's stack could be
clobbered by the child.  This is more complicated than usual because the
window that could get clobbered is pushed in kernel mode, so a lot of
registers would have to be saved in other registers in userland and we
don't have enough.  What we do have is space in the pcb to temporarily
store user windows that were spilled in kernel mode, but could not be
immediately stored to the user stack.  So we copy in the parent's topmost
window and save it in the pcb, and arrange for it to be copied back out
when the child is done frobbing the stack.

Reviewed by:	tmm
2002-03-04 05:07:22 +00:00
jake
f682fc3b87 We don't need KTR_COMPILE in assym.s; it's already in opt_global.h.  Add
assyms for more ktr trace classes.
2002-03-01 16:22:06 +00:00
jake
4c11a624cf Use a better trace class for ktr traces in the tlb fault handlers, which are
rather loud.
2002-03-01 16:17:50 +00:00
arr
ed36876e15 - Move a comment from being on the same line as an #ifdef to the line
following it.  This should have gone in the previous commit, but I
  misviewed Bruce's patch.

Requested by: bde
2002-02-28 21:52:08 +00:00
arr
0aaddb66e9 - Fix panic() message and a couple style nits that snuck in from the
recent diagnostics commit (rev. 1.84).
2002-02-28 08:28:14 +00:00
silby
c58cf9d742 Fix a minor swap leak.
Previously, the UPAGES/KSTACK area of a process or thread would leak memory
when a previously swapped-out process was terminated.  Luckily, the
leak was only 12K/proc, so it was unlikely to be a major problem unless you
had an undersized swap partition.

Submitted by:	dillon
Reviewed by:	silby
MFC after:	1 week
2002-02-28 07:41:12 +00:00
silby
230f96f3ce Fix a horribly suboptimal algorithm in the vm_daemon.
In order to determine what to page out, the vm_daemon checks
reference bits on all pages belonging to all processes.  Unfortunately,
the algorithm used reacted badly with shared pages; each shared page
would be checked once per process sharing it; this caused an O(N^2)
growth of tlb invalidations.  The algorithm has been changed so that
each page will be checked only 16 times.

Prior to this change, a fork/sleepbomb of 1300 processes could cause
the vm_daemon to take over 60 seconds to complete, effectively
freezing the system for that time period.  With this change
in place, the vm_daemon completes in less than a second.  Any system
with hundreds of processes sharing pages should benefit from this change.

Note that the vm_daemon is only run when the system is under extreme
memory pressure.  It is likely that many people with loaded systems saw
no symptoms of this problem until they reached the point where swapping
began.

Special thanks go to dillon, peter, and Chuck Cranor, who helped me
get up to speed with vm internals.

PR:		33542, 20393
Reviewed by:	dillon
MFC after:	1 week
2002-02-27 18:03:02 +00:00
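
Conceptually the change bounds the per-page work, along the lines of this sketch; the structure and names are illustrative, not the actual vm_pageout code.

#include <sys/types.h>

#define EX_MAX_REF_CHECKS       16      /* per-page cap from the message */

struct ex_page {
        int     ref_checks;     /* times this page's reference bit was polled */
};

/*
 * Sketch: a shared page is sampled at most EX_MAX_REF_CHECKS times per
 * vm_daemon pass, instead of once per process that maps it.
 */
static int
should_check_page_example(struct ex_page *p)
{
        if (p->ref_checks >= EX_MAX_REF_CHECKS)
                return (0);
        p->ref_checks++;
        return (1);
}
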
tmm
3ed05b7b89 Add the following functions/macros to support byte order conversions and
device drivers for bus systems with endiannesses other than the CPU's (using
interfaces compatible with NetBSD's):

- bswap16() and bswap32(). These have optimized implementations on some
  architectures; for those that don't, generic implementations exist.
- macros to convert from a certain byte order to host byte order and vice
  versa, using a naming scheme like le16toh(), htole16().
  These are implemented using the bswap functions.
- stream bus space access functions, which do not perform a byte order
  conversion (while the normal access functions would if the bus endianness
  differs from the CPU endianness).

htons(), htonl(), ntohs() and ntohl() are implemented using the new
functions above for kernel usage. None of the above interfaces is currently
exported to user land.

Make use of the new functions in a few places where local implementations
of the same functionality existed.

Reviewed by:	mike, bde
Tested on alpha by:	mike
2002-02-27 17:16:18 +00:00
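
A minimal usage sketch for the new conversion macros; the function is illustrative, and the header location is assumed to be <machine/endian.h> at this point (kernel-only).

#include <sys/types.h>
#include <machine/endian.h>     /* le16toh(), htole16(), ... */

/*
 * Sketch: convert a little-endian on-the-wire value to host byte order.
 * On a big-endian host this expands to a bswap16(); on a little-endian
 * host it is a no-op.
 */
static u_int16_t
wire_to_host16_example(u_int16_t le_value)
{
        return (le16toh(le_value));
}
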
jake
0f3fdcbf9d Minimal testing has shown that a 4-page tsb is a nice sweet spot for current
workloads.  It tapers off after that, as gcc's working set generally just fits.

compiling bin/csh:

TSB_PAGES = 2
	213.33 real        77.59 user       110.01 sys
TSB_PAGES = 4
	116.43 real        75.78 user        19.16 sys
TSB_PAGES = 8
	119.27 real        76.38 user        18.12 sys

Testing by:	tmm
2002-02-27 06:18:02 +00:00
jake
e4a45ab17b Parameterize the number of pages to allocate for the per-cpu area on
PCPU_PAGES.
2002-02-27 06:08:13 +00:00
jake
c584d5961c Make cpu_identify take the value of the ver register and cpuid as arguments
so we can print nice things about non-current cpus.
2002-02-27 06:05:50 +00:00
jake
b81cd84d30 Minor cleanup. 2002-02-27 00:31:31 +00:00
jake
0f47dc5152 Wrap long lines. 2002-02-27 00:28:35 +00:00
jake
e3f3464752 Use pcpu.pc_cpumask instead of computing 1 << cpuid. 2002-02-27 00:27:05 +00:00
jake
aec950ed91 Add a macro for the shift of an integer (1 << shift == sizeof(int)).  Move the
pointer define to live alongside it.  For kicks, assert at compile time that they
are correct.  Use these instead of magic numbers.
2002-02-27 00:21:04 +00:00
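
The compile-time check is the standard CTASSERT() pattern; a sketch with placeholder names.

#include <sys/param.h>
#include <sys/systm.h>          /* CTASSERT() */

#define EX_INT_SHIFT    2       /* 1 << 2 == sizeof(int) */
#define EX_PTR_SHIFT    3       /* 1 << 3 == sizeof(void *) on sparc64 */

CTASSERT((1 << EX_INT_SHIFT) == sizeof(int));
CTASSERT((1 << EX_PTR_SHIFT) == sizeof(void *));
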
jake
cc951d5968 Wrap long lines. 2002-02-27 00:03:01 +00:00
obrien
8449ab85ee Define basic macros required by GDB. 2002-02-26 21:49:46 +00:00
jake
52438c9de8 Apparently gcc 3.1 is now using deprecated v8 instructions in v9 code
because they are faster in certain cases.  Therefore we need to save
and restore the v8 %y register around traps in kernel mode as well as
traps in user mode.

Tested by:	obrien, tmm
2002-02-26 17:09:24 +00:00
jake
8319be1fd2 Convert pmap.pm_context to an array of contexts indexed by cpuid. This
doesn't make sense for SMP right now, but it is a means to an end.
2002-02-26 06:57:30 +00:00
jake
865ab80de2 Put back a call to pmap_context_destroy that was accidentally removed
in the previous commit.

Spotted by:	tmm
2002-02-26 06:39:38 +00:00
jake
84d0ef9268 Allow the user tsb to span multiple pages. Make the default 2 pages for now
until we do some testing to see what's best.  This gives a massive reduction
in system time for processes with a relatively large working set.  The size
of the tsb directly affects the rss size that a user process can keep mapped.
When it starts to get full, replacements occur and the process takes a lot of
soft vm faults.  Increasing the default from 1 page to 2 gives the following
before and after numbers for compiling vfs_bio.c:

before:
       14.27 real         6.56 user         5.69 sys
after:
        8.57 real         6.11 user         1.62 sys

This should make self-hosted builds more tolerable.
2002-02-26 02:37:43 +00:00
jake
2a0b1812b2 Remove code to lock the user tsb into the tlb. We can handle faults on it
now, as we do for normal wired kernel memory.
2002-02-25 22:58:41 +00:00
obrien
86fc68cfdd I was able to boot this kernel using the latest WIP kernel sources.
I don't believe anyone is quite using the sparc64 kernel sources in CVS
yet -- things just aren't quite ready (but almost).  So this commit should
be OK to make.
2002-02-25 22:13:44 +00:00
jake
11e9d44ed7 Implement a nested window state. This avoids attempting to spill a user
window to the user stack while in a nested kernel trap.  We do this for
entry to the kernel from user mode, but if we get an interrupt in kernel
mode while there are still user windows in the cpu, and we attempt to spill
to the user stack, we may take too many nested traps and overflow the trap
stack, causing a red state exception.  This is needed by upcoming changes
to allow the user tsb to not be locked in the tlb.

Reviewed by:	tmm
2002-02-25 18:37:17 +00:00
jake
7eea55cfea Modify the tte format to not include the tlb context number and to store the
virtual page number in a much more convenient way; all in one piece.  This
greatly simplifies the comparison for a matching tte, and allows the fault
handlers to be much simpler due to not having to load weird masks.
Rewrite the tlb fault handlers to account for the new format.  These are also
written to allow faults on the user tsb inside of the fault handlers; the
kernel fault handler must be aware of this and not clobber the other's
registers.  The faults do not yet occur due to other support that is needed
(and still under my desk).

Bug fixes from:	tmm
2002-02-25 04:56:50 +00:00
obrien
ac195ce76f Sync with the Alpha's GENERIC configuration.
Most of the contents are commented out as they are as-yet untested.
However, I wanted the contents to match our other arches, so that when
people make changes to {i386,alpha,ia64}, they will also make the same
changes here.
2002-02-24 18:49:38 +00:00
jake
0238fc54e2 Make use of the ranged tlb demap operations wherever possible.  Use
pmap_qenter and pmap_qremove in preference to pmap_kenter/pmap_kremove.
The former maps in multiple pages at a time, and so can do a ranged
flush.  Don't assume that pmap_kenter and pmap_kremove will flush the tlb,
even though they still do.  They will not once the MI code is updated to use
pmap_qenter and pmap_qremove.
2002-02-23 22:18:15 +00:00
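
A sketch of the preferred batched interface; the wrapper function is illustrative.

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/pmap.h>

/*
 * Sketch: map 'count' pages with one call so the pmap layer can issue a
 * single ranged tlb flush, rather than one pmap_kenter() (and potentially
 * one flush) per page.
 */
static void
map_pages_example(vm_offset_t va, vm_page_t *pages, int count)
{
        pmap_qenter(va, pages, count);
        /* ... use the mapping ... */
        pmap_qremove(va, count);
}
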