Commit Graph

349 Commits

Author SHA1 Message Date
Peter Grehan
2184ddd1f7 In pmap_page_protect, clear the vm page's PG_WRITEABLE flag if
downgrading to read-only. Found by triggering the KASSERT in
vm_pageout_flush().
2004-08-05 12:44:12 +00:00
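
A minimal sketch of the fix this entry describes, assuming the 2004-era vm_page_flag_clear() interface; the surrounding walk over the page's mappings in pmap_page_protect() is omitted:

    /*
     * Once every mapping of the page has been downgraded to read-only,
     * drop PG_WRITEABLE so the KASSERT in vm_pageout_flush() holds.
     */
    if ((prot & VM_PROT_WRITE) == 0)
            vm_page_flag_clear(m, PG_WRITEABLE);
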
Alan Cox
684a62b7bf - Push down the acquisition and release of Giant into pmap_enter_quick()
on those architectures without pmap locking.
 - Eliminate the acquisition and release of Giant in vm_map_pmap_enter().
2004-08-04 22:03:16 +00:00
Peter Grehan
016927054b Kernel traps were not being passed to trap_fatal in some
circumstances.

Spotted by:  gallatin
2004-08-02 02:37:29 +00:00
Alan Cox
9bb0e06861 - Push down the acquisition and release of Giant into pmap_protect() on
those architectures without pmap locking.
 - Eliminate the acquisition and release of Giant from vm_map_protect().

(Translation: mprotect(2) runs to completion without touching Giant on
alpha, amd64, i386 and ia64.)
2004-07-30 20:38:30 +00:00
Suleiman Souhlal
009a0e433b Implement MD parts of ptrace.
Approved by:	grehan (mentor)
2004-07-29 13:34:50 +00:00
Peter Grehan
1f1b01c0a5 Save DAR/DSISR in DDB regsave area when stack overflow detected. It's
hard to work out where the problem was without these.
2004-07-27 03:46:34 +00:00
Peter Grehan
5dd954f3de Improve boot-time debugging with DDB by extracting the ksym start/end
values from the loader.
2004-07-27 03:41:34 +00:00
Alan Cox
ab50a26230 Implement the protection check required by the pmap_extract_and_hold()
specification.

Reviewed and tested by:	grehan@
2004-07-26 18:10:10 +00:00
Peter Grehan
2a1c4385c3 Detect kernel stack excursion into guard pages. Drop into KDB
with a wired stack if this is found.

Mostly obtained from:  NetBSD
2004-07-23 05:33:24 +00:00
Peter Grehan
a76b77653c Bring KDB stack size into line with thread stack size (4 pages). 2004-07-23 05:31:14 +00:00
Peter Grehan
c06a377abc Allow DSI exceptions to invoke DDB. 2004-07-23 05:27:17 +00:00
Peter Grehan
bddfaa895c Update the callframe structure to leave space for the frame pointer
and saved link register as per the ABI call sequence. Update code
that uses this (fork_trampoline, etc.) to use the correct genassym'd
offsets.

 This fixes the 'invalid LR' message when backtracing kernel
threads in DDB.
2004-07-22 01:28:51 +00:00
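
For reference, the 32-bit PowerPC ABI reserves the first two words of every stack frame for the back chain (frame pointer) and the callee's LR save word, which is the space the callframe structure now leaves. A hedged sketch with illustrative field names (not the exact genassym'd symbols):

    struct callframe {
            register_t      cf_frame;       /* back chain: caller's stack pointer */
            register_t      cf_lr;          /* callee's saved link register */
            /* function pointer and argument words used by
             * fork_trampoline() and friends would follow here */
    };
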
Peter Grehan
1c4ba0be13 Properly obey PPC context synchronization rules when modifying
the address translation bits of the MSR. This fixes the boot-time
panic reported by Drew Gallatin.
2004-07-20 02:22:36 +00:00
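
The rule being obeyed: the IR/DR (address translation) bits written by mtmsr are only guaranteed to take effect after a context-synchronizing instruction, so the write must be followed by an isync. A hedged sketch of the pattern; the helper name is illustrative:

    static __inline void
    mtmsr_sync(register_t msr)
    {
            /* isync forces the new translation state to be in effect
             * before the next instruction fetch or data access. */
            __asm __volatile("mtmsr %0; isync" : : "r"(msr));
    }
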
Alan Cox
3d2e54c317 Push down the acquisition and release of the page queues lock into
pmap_protect() and pmap_remove().  In general, they require the lock in
order to modify a page's pv list or flags.  In some cases, however,
pmap_protect() can avoid acquiring the lock.
2004-07-15 18:00:43 +00:00
David Xu
53dbf30349 Add ptrace_clear_single_step(); Alpha has had it for years. The function
will be used by ptrace to clear a thread's single-step state.
2004-07-13 07:22:56 +00:00
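
On PowerPC this typically amounts to clearing the single-step enable bit (MSR[SE], PSL_SE) in the thread's saved trapframe. A hedged sketch, illustrative rather than the committed code:

    int
    ptrace_clear_single_step(struct thread *td)
    {
            /* srr1 in the trapframe holds the MSR saved at trap time. */
            td->td_frame->srr1 &= ~PSL_SE;
            return (0);
    }
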
Peter Grehan
441e42eaf4 Rename low-level code ddb -> db. Use KDB instead of DDB.
Fix a bug in the stack frame setup where 8 bytes weren't being
saved for the callee's frame pointer and saved LR.
2004-07-12 22:32:08 +00:00
Peter Grehan
b188ee2269 Bring into line with the new KDB order. 2004-07-12 22:26:20 +00:00
Peter Grehan
20d21fe9ef - DDB -> KDB, with kdb routines
- ddb -> db for low-level trapcode
- implement makectx. I think it only matters that the stack is set up
  correctly.
- bring over ddb_trap_glue and rename to db_trap_glue
2004-07-12 22:25:09 +00:00
Peter Grehan
def828503c No need for ddb option. Never a need for ipkdb option. 2004-07-12 22:22:53 +00:00
Alan Cox
eba90ac75f pmap_remove_pages() must not remove wired mappings. Since
pmap_remove_pages() is an optimization, its implementation is optional.

Discussed with:	grehan
2004-07-12 04:40:26 +00:00
Peter Grehan
077a0fb8b6 - correctly set the return value for the copyin/out fault buffer to 1
  so setfault would return correctly when a page fault was invalid
  (e.g. a syscall with a bad parameter).

  Previously this caused an endless DSI loop, seen when running sendmail,
  which does a setlogin() call with a NULL pointer.

- introduce KTR_SYSC tracing. Expose the syscallnames[] array to
  make the tracing more readable.
2004-07-09 11:00:41 +00:00
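
For context, the copyin/out routines use a setjmp-style fault buffer: setfault() returns 0 on the initial call, and the DSI trap code unwinds back into it with the non-zero return value this commit fixes when a user access cannot be resolved. A hedged and heavily simplified sketch of the pattern (the real routine copies segment by segment):

    int
    copyin(const void *udaddr, void *kaddr, size_t len)
    {
            struct thread *td = curthread;
            faultbuf env;

            if (setfault(env)) {
                    /* Reached via the trap code after an unresolvable fault. */
                    td->td_pcb->pcb_onfault = NULL;
                    return (EFAULT);
            }
            bcopy(udaddr, kaddr, len);      /* may fault on a bad user address */
            td->td_pcb->pcb_onfault = NULL;
            return (0);
    }
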
Peter Grehan
5d64cf91fb G4 requires an isync after a 256Mb ibat/dbat update; G3 requires an
isync after each bat update. Otherwise, pmap_bootstrap causes
an ISI exception. Fallout from the loader BAT removal.
2004-07-08 12:47:36 +00:00
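
The constraint here is the same context-synchronization rule as for the MSR: a newly written BAT pair is not guaranteed to be in effect until an isync executes. A hedged sketch of the kind of sequence pmap_bootstrap needs; batu/batl stand in for the computed upper/lower BAT values and the BAT number is arbitrary:

    /* Install a 256Mb mapping in DBAT2, then synchronize before using it. */
    __asm __volatile("mtdbatu 2, %0; mtdbatl 2, %1; isync"
        : : "r"(batu), "r"(batl));
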
Peter Grehan
d0540ed535 - trailing white-space cleanup
- add call to thread_user_enter for P_SA processes before
  trap processing, as on all other arches
2004-07-06 11:46:56 +00:00
Alan Cox
56b093883a Correct pmap_extract()'s return type. It should be vm_paddr_t, not
vm_offset_t.
2004-07-05 23:08:27 +00:00
Peter Grehan
6cc1cdf47b Modify loop test when cycling through phys_avail array. It's possible
for an OpenFirmware implementation to have a single memory region
(hello PearPC).
2004-07-01 08:01:49 +00:00
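
phys_avail[] follows the usual FreeBSD layout: start/end pairs terminated by a zeroed pair, so the loop test must also work when only one pair is present. A hedged sketch of the conventional traversal:

    /* Even index = region start, odd index = region end. */
    for (i = 0; phys_avail[i + 1] != 0; i += 2) {
            vm_paddr_t start = phys_avail[i];
            vm_paddr_t end = phys_avail[i + 1];
            /* ... use the region ... */
    }
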
Peter Grehan
40cdee9dab Catch up to the now-required <sys/module.h> for PowerPC 2004-06-25 13:42:48 +00:00
Tim J. Robbins
cc05397ffc Remove checks for curthread == NULL - it can't happen. 2004-06-03 10:22:47 +00:00
Tim J. Robbins
fa2a4d0595 Move TDF_DEADLKTREAT into td_pflags (and rename it accordingly) to avoid
having to acquire sched_lock when manipulating it in lockmgr(), uiomove(),
and uiomove_fromphys().

Reviewed by:	jhb
2004-06-03 01:47:37 +00:00
Thomas Moestl
65e29c4822 Retire cpu_sched_exit(); it is not used any more. 2004-05-26 12:09:39 +00:00
Peter Grehan
e6bd8ae1e9 trap_pfault() shouldn't be acquiring Giant. Found to blow up
with MUTEX_PROFILING.

Submitted by:  Suleiman Souhlal <refugee@segfaulted.com>
2004-05-19 06:05:42 +00:00
Alan Cox
377a50503d MFamd64
Simplify the sf_buf implementation.  In short, make it a veneer
 over the direct virtual-to-physical mapping.
2004-04-18 08:10:04 +00:00
Alan Cox
1f51408ade Remove avail_end. It is not used. 2004-04-11 06:02:24 +00:00
Warner Losh
2fcbca0d85 Remove advertising clause from University of California Regent's
license, per letter dated July 22, 1999 and email from Peter Wemm,
Alan Cox and Robert Watson.

Approved by: core, peter, alc, rwatson
2004-04-07 05:00:01 +00:00
Alan Cox
c8607538c8 Remove avail_start on those platforms that no longer use it. (Only amd64
does anything with it beyond simple initialization.)
2004-04-05 04:08:00 +00:00
Alan Cox
bdb93eb248 Remove unused arguments from pmap_init(). 2004-04-05 00:37:50 +00:00
Alan Cox
121230a40d In some cases, sf_buf_alloc() should sleep with pri PCATCH; in others, it
should not.  Add a new parameter so that the caller can specify which is
the case.

Reported by:	dillon
2004-04-03 09:16:27 +00:00
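
A hedged usage sketch of the distinction; SFB_CATCH is quoted from memory as the flag spelling in <sys/sf_buf.h>:

    struct sf_buf *sf;

    /* Caller that can back out on a signal: interruptible sleep. */
    sf = sf_buf_alloc(m, SFB_CATCH);

    /* Caller that cannot be interrupted cleanly: plain sleep. */
    sf = sf_buf_alloc(m, 0);
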
Benno Rice
0c7e9074e6 Replace td2 with td on the assumption that this was a typo. This should at
least unbreak the build.

Pointy hat to: peter
Not tested either by: benno
2004-03-30 13:57:34 +00:00
Peter Wemm
5c89deaefc Finish tidying up a couple of leftovers from the KSTACK_PAGES stuff. Some
files still #included the opt_ file.  powerpc hadn't been updated yet.
2004-03-29 19:38:05 +00:00
Alan Cox
010b69bae2 Add an implementation of uiomove_fromphys() for PowerPC. This
implementation uses the direct virtual-to-physical mapping.

Discussed with:	grehan
2004-03-23 18:26:03 +00:00
Alan Cox
90ecfebd82 Refactor the existing machine-dependent sf_buf_free() into a machine-
dependent function by the same name and a machine-independent function,
sf_buf_mext().  Aside from the virtue of making more of the code machine-
independent, this change also makes the interface more logical.  Before,
sf_buf_free() did more than simply undo an sf_buf_alloc(); it also
unwired and if necessary freed the page.  That is now the purpose of
sf_buf_mext().  Thus, sf_buf_alloc() and sf_buf_free() can now be used
as a general-purpose ephemeral map cache.
2004-03-16 19:04:28 +00:00
Alan Cox
fcffa790e9 Retire pmap_pinit2(). Alpha was the last platform that used it. However,
ever since alpha/alpha/pmap.c revision 1.81 introduced the list allpmaps,
there has been no reason for having this function on Alpha.  Briefly,
when pmap_growkernel() relied upon the list of all processes to find and
update the various pmaps to reflect a growth in the kernel's valid
address space, pmap_pinit2() served to avoid a race between pmap
initialization and pmap_growkernel().  Specifically, pmap_pinit2() was
responsible for initializing the kernel portions of the pmap and
pmap_pinit2() was called after the process structure contained a pointer
to the new pmap for use by pmap_growkernel().  Thus, an update to the
kernel's address space might be applied to the new pmap unnecessarily,
but an update would never be lost.
2004-03-07 21:06:48 +00:00
Peter Grehan
4daf20b2f1 Increase kernel VA from 256Mb to 512Mb by shifting the segment used
for user copyinout down to 12, and keeping segments 13/14 for
kernel VA.

It would be nice to have more available, but segments lower than
this are reserved for either memory or 1:1 mapped device i/o,
and seg 15 is OpenFirmware ROM. Also, the effort to keep OpenFirmware
available for callbacks limits the use of VA-mapped segments.
Fortunately UMA_MD_SMALL_ALLOC takes away a lot of VM pressure.

Obtained from:  NetBSD
2004-03-02 06:49:21 +00:00
Peter Grehan
919cb3362f Kernel changes for libthr (and probably libpthread).
include/ucontext.h
 - remove trapframe and switch over to 'generic' description of machine
   state. Include version field to help with future modifications.
   Include floating point and altivec state, and hopefully align
   correctly

powerpc/copyinout.c
 - fill out casuptr() sync primitive, required by kern_umtx.c

powerpc/machdep.c
 - shifted proc0/thread0/pcpu setup to before cninit, since
   syscons -> make_dev -> devlock requires a valid curthread
 - implemented get_mcontext/set_mcontext
 - recast sendsig/sigreturn to use get/set_mcontext and new
   ucontext struct. floating point now saved
 - TODO: save/restore altivec state

powerpc/vm_machdep.c
 - implemented cpu_thread_setup/cpu_set_upcall/cpu_set_upcall_kse
 - eliminated trailing whitespace

Submitted by:  Suleiman Souhlal <refugee@segfaulted.com>, ucontext by grehan
2004-03-02 06:13:09 +00:00
Peter Grehan
49f397d0c3 Interrupt statistics; vmstat -i now works.
Submitted by:  Suleiman Souhlal <refugee@segfaulted.com>
Slightly modified by: grehan
Derived from:  i386
2004-02-11 13:18:31 +00:00
Peter Grehan
69a9f22118 - constify devinfo strings to eliminate compile warning
- remove trailing whitespace
2004-02-11 10:15:15 +00:00
Peter Grehan
3102ccf30c - fix compile warnings
- remove obsolete NetBSD-derived ADB conditionals
2004-02-11 08:07:19 +00:00
Peter Grehan
7c2779715c Cleaned up param.h:
- culled long-dead #define's
 - segment register defs moved to sr.h
 - NPMAPS moved to pmap.h
 - KERNBASE moved to vmparam.h
 - removed include of <machine/cpu.h> and fixed src files that
   relied on this.

Modifying segment register code no longer causes gcc rebuilds :-)
2004-02-11 07:27:34 +00:00
Peter Grehan
7d23b5b7db Add sysctl hw.uma_mdpages to track how many pages have been allocated
by UMA_MD_SMALL_ALLOC.
2004-02-11 04:42:48 +00:00
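
A hedged sketch of how such a read-only counter is usually exported; the variable name is illustrative:

    static u_int hw_uma_mdpages;    /* bumped by the MD uma_small_alloc() */

    SYSCTL_UINT(_hw, OID_AUTO, uma_mdpages, CTLFLAG_RD, &hw_uma_mdpages, 0,
        "Pages allocated by UMA_MD_SMALL_ALLOC");
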
Peter Grehan
0ee6dbd789 Remove pmap_pvo_allocf zone alloc function. It was a way of
using the direct-mapping of physmem to force PTE data structures
to be physically addressable so the interrupt-time real-mode
DSI trap handler could perform PTE spills. However, the memory
may have been > 256Mb, which would have caused a BAT spill and
double-interrupt.

The new trap code no longer handles PTE spills, so the requirement
that these pages be direct-mapped no longer applies. The irony is
UMA_MD_SMALL_ALLOC will return direct mappings for these structs :-)
2004-02-04 13:16:21 +00:00
Peter Grehan
112a8d7bdb Major overhaul of common trap code
- remove unused 601 and tlb exception code
 - remove interrupt-time PTE spill code. The pmap code
   will now take care of pinning kernel PTEs, and there
   are no longer issues about physical mapping of PTE
   data structures
 - All segment registers are switched on kernel entry/exit,
   allowing the kernel to have more virtual space and for
   user virtual space to extend to 4G.
 - The temporary register save area has been shifted from
   unused exception vector space to the per-cpu data area.
   This allows interrupts to be delivered to multiple CPUs
 - ISI traps no longer spill to BAT tables. It is assumed
   that all of kernel instruction memory is pinned.
 - shift from 'ldmw/stmw' instructions to individual register
   loads/stores when saving context. All PPC manuals indicate
   this should be much faster.
 - use '%r' for register names throughout.

TODO: need to test if DSI traps were the result of kernel stack
guard-page hits.

Reworked from:  NetBSD
2004-02-04 13:10:25 +00:00