Commit Graph

40 Commits

Author SHA1 Message Date
Peter Wemm
a136efe9b6 Collect all the (now equivalent) pmap_new_proc/pmap_dispose_proc/
pmap_swapin_proc/pmap_swapout_proc functions from the MD pmap code
and use a single equivalent MI version.  There are other cleanups
needed still.

While here, use the UMA zone hooks to keep a cache of preinitialized
proc structures handy, just like the thread system does.  This eliminates
one dependency on 'struct proc' being persistent even after being freed.
There are some comments about things that can be factored out into
ctor/dtor functions if it is worth it.  For now they mostly just gather
statistics to get a feel for how it is working.
2002-07-07 23:05:27 +00:00
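As a rough illustration of the UMA ctor/dtor hook pattern this message refers to, here is a minimal sketch; the hook bodies, the setup function, and the (modern-style) uma_zcreate()/ctor signatures are assumptions for illustration, not the committed code.

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <vm/uma.h>

    static uma_zone_t proc_zone;

    /* Runs each time a cached proc is handed out of the zone. */
    static int
    proc_ctor(void *mem, int size, void *arg, int flags)
    {
            /* per-allocation setup and statistics could live here */
            return (0);
    }

    /* Runs each time a proc goes back into the zone cache. */
    static void
    proc_dtor(void *mem, int size, void *arg)
    {
            /* per-free teardown and statistics could live here */
    }

    static void
    proc_zone_setup(void)
    {
            proc_zone = uma_zcreate("PROC", sizeof(struct proc),
                proc_ctor, proc_dtor, NULL, NULL, UMA_ALIGN_PTR, 0);
    }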
Peter Wemm
1f0b8b7582 Update for post-kse3 pmap kthread allocation changes 2002-07-07 22:56:31 +00:00
Benno Rice
8bbfa33a79 Add pmap_mapdev and pmap_unmapdev. 2002-06-29 09:45:59 +00:00
Benno Rice
0d29067503 - Initialise battable to cover I/O spaces.
- Statically size the bpvo entries to avoid conflicts between bpvo allocation
  and the vm allocator.
- Shift pmap_init2 code into pmap_init.
- Add UMA_ZONE_VM flag to uma_zcreate.

Submitted by:	Peter Grehan <peterg@ptree32.com.au>
2002-06-29 09:43:59 +00:00
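For reference, a hedged sketch of what passing UMA_ZONE_VM at zone creation looks like; the zone name, the wrapper function, and the assumption that struct pvo_entry from the powerpc pmap is in scope are all illustrative, not the committed code.

    #include <vm/uma.h>

    static uma_zone_t pvo_zone;

    static void
    pvo_zone_setup(void)
    {
            /*
             * UMA_ZONE_VM marks the zone as one used internally by the VM
             * system, which avoids allocation recursion between the zone
             * and the VM structures it backs.
             */
            pvo_zone = uma_zcreate("pvo entry", sizeof(struct pvo_entry),
                NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_VM);
    }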
Benno Rice
25e2288dd7 Implement pmap_copy and pmap_copy_page. 2002-05-28 07:38:55 +00:00
Benno Rice
31c82d0332 Get the correct memory regions from OpenFirmware. We were getting the
"available" ranges, not the "physical" ranges.  Clean up some of the
bootstrap code in the process.

Submitted by:	Peter Grehan <peterg@ptree32.com.au>
2002-05-27 11:18:12 +00:00
Benno Rice
0f92104c14 Implement the following functions:
- pmap_addr_hint
- pmap_change_wiring
- pmap_extract
- pmap_is_modified
2002-05-10 14:21:48 +00:00
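For context, the generic contracts of these pmap entry points in this era, sketched as prototypes (a reference sketch of the MI interface, not the committed powerpc bodies):

    /* Suggest a mapping address for an object (may simply return addr). */
    vm_offset_t pmap_addr_hint(vm_object_t obj, vm_offset_t addr, vm_size_t size);

    /* Update the wired status of an existing mapping. */
    void        pmap_change_wiring(pmap_t pm, vm_offset_t va, boolean_t wired);

    /* Return the physical address backing va in pm, or 0 if unmapped. */
    vm_offset_t pmap_extract(pmap_t pm, vm_offset_t va);

    /* Report whether the page has been modified via any mapping. */
    boolean_t   pmap_is_modified(vm_page_t m);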
Benno Rice
fafc736254 Improve our detection of an attempted duplicate entry. We may be trying to
change the page protection bits.
2002-05-10 06:27:08 +00:00
Benno Rice
8207b3627b 1. Better track the executable status of mappings.
2. Set a pcpu variable to the real address of the active pmap (used when
   exiting from traps).

Obtained from:	NetBSD (1)
2002-05-09 14:09:19 +00:00
Peter Wemm
db17c6fc07 Tidy up some loose ends.
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
  (this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace level)
2002-04-29 07:43:16 +00:00
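Roughly the header-side shape of the kernel_pmap change (a sketch of the idiom; the "before" declaration shown is an approximation, not the exact diff):

    /* Before: kernel_pmap was a pointer set up at runtime. */
    extern pmap_t kernel_pmap;

    /*
     * After: the storage is a global object and the name is a constant
     * address the linker resolves, so code using kernel_pmap no longer
     * has to load a pointer variable first.
     */
    extern struct pmap kernel_pmap_store;
    #define kernel_pmap (&kernel_pmap_store)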
Benno Rice
864bc5205b Correct a comment. 2002-04-16 12:15:17 +00:00
Benno Rice
e79f59e84c Implement the following functions:
- pmap_kextract
- pmap_object_init_pt
- pmap_protect
- pmap_remove_pages

I'm pretty sure pmap_remove_pages is at least somewhat bogus.
2002-04-16 12:13:10 +00:00
Benno Rice
27dbf9d5e8 Remove some dead code. 2002-04-16 12:10:04 +00:00
Benno Rice
d080d5fd7c Use mtsrin() instead of inline asm. 2002-04-16 12:07:41 +00:00
Benno Rice
a8aaf02c3c Change the value of PMAP_BOOTSTRAP so we don't stomp on the PTE index value. 2002-04-16 12:00:43 +00:00
Peter Wemm
1a87a0da66 Pass vm_page_t instead of physical addresses to pmap_zero_page[_area]()
and pmap_copy_page().  This gets rid of a couple more physical addresses
in upper layers, with the eventual aim of supporting PAE and dealing with
the physical addressing mostly within pmap.  (We will need either 64 bit
physical addresses or page indexes, possibly both depending on the
circumstances.  Leaving this to pmap itself gives more flexibility.)

Reviewed by:	jake
Tested on:	i386, ia64 and (I believe) sparc64. (my alpha was hosed)
2002-04-15 16:00:03 +00:00
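The interface change, sketched as before/after prototypes (types as they stood around this time; treat the exact spellings as an approximation):

    /* Before: upper layers resolved and passed raw physical addresses. */
    void pmap_zero_page(vm_offset_t pa);
    void pmap_zero_page_area(vm_offset_t pa, int off, int size);
    void pmap_copy_page(vm_offset_t src, vm_offset_t dst);

    /*
     * After: upper layers pass vm_page_t and pmap resolves the physical
     * address itself (e.g. via VM_PAGE_TO_PHYS(m)), keeping physical
     * addresses out of the machine-independent code.
     */
    void pmap_zero_page(vm_page_t m);
    void pmap_zero_page_area(vm_page_t m, int off, int size);
    void pmap_copy_page(vm_page_t src, vm_page_t dst);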
Benno Rice
52a3cde55d Turn some CTRs into CTR0s. 2002-04-15 12:11:18 +00:00
Jeff Roberson
378862a72d Remove references to vm_zone.h and switch over to the new uma API. 2002-03-21 01:11:31 +00:00
Benno Rice
21d7ec8915 Increment pmap_pvo_count in the right place. 2002-03-20 05:25:33 +00:00
Jeff Roberson
8355f576a9 This is the first part of the new kernel memory allocator. This replaces
malloc(9) and vm_zone with a slab-like allocator.

Reviewed by:	arch@
2002-03-19 09:11:49 +00:00
Benno Rice
49f8f7273b Changes and fixes in preparation for UMA:
- Bootstrap pvo entries are now allocated by stealing pages.
- Just return if we're pmap_enter'ing a mapping that's already there.  Don't
  remove it and re-enter it.
2002-03-17 23:58:12 +00:00
Benno Rice
8862232d7b Correct a typo. (* that should've been &) 2002-03-11 07:09:42 +00:00
Benno Rice
3e4409437f Copy the "implementation" of pmap_prefault from sparc64. 2002-03-07 12:22:08 +00:00
Benno Rice
d2c1f57685 Calculate physmem. 2002-03-07 10:09:24 +00:00
Benno Rice
ac6ba8bd4a - Modify pmap_activate so it only marks the pmap as active.
- Add a pmap_deactivate function.
2002-02-28 11:55:44 +00:00
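A minimal sketch of the split, written against the MD pmap.c (the pm_active bookkeeping shown here is an assumption about the powerpc pmap of the time, not the actual diff):

    void
    pmap_activate(struct thread *td)
    {
            pmap_t pm = vmspace_pmap(td->td_proc->p_vmspace);

            pm->pm_active++;        /* only mark the pmap as active */
    }

    void
    pmap_deactivate(struct thread *td)
    {
            pmap_t pm = vmspace_pmap(td->td_proc->p_vmspace);

            pm->pm_active--;        /* undo the bookkeeping on switch-out */
    }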
Benno Rice
88afb2a31b Implement the following functions:
- pmap_remove
- pmap_kremove
- pmap_qremove
2002-02-28 02:54:16 +00:00
Benno Rice
54eb8bbc14 Remove most of the usage of critical_enter/exit.
I put these in to match the use of spl*() in the NetBSD code I was basing this
on, but they appear to cause problems.

I'm doing this in a separate commit so as to be able to refer back if locking
becomes an issue at a later stage.
2002-02-28 02:45:10 +00:00
Mike Silbersack
7f3a40933b Fix a horribly suboptimal algorithm in the vm_daemon.
In order to determine what to page out, the vm_daemon checks
reference bits on all pages belonging to all processes.  Unfortunately,
the algorithm used interacted badly with shared pages: each shared page
would be checked once per process sharing it, causing O(N^2) growth in
TLB invalidations.  The algorithm has been changed so that
each page will be checked only 16 times.

Prior to this change, a fork/sleepbomb of 1300 processes could cause
the vm_daemon to take over 60 seconds to complete, effectively
freezing the system for that time period.  With this change
in place, the vm_daemon completes in less than a second.  Any system
with hundreds of processes sharing pages should benefit from this change.

Note that the vm_daemon is only run when the system is under extreme
memory pressure.  It is likely that many people with loaded systems saw
no symptoms of this problem until they reached the point where swapping
began.

Special thanks go to dillon, peter, and Chuck Cranor, who helped me
get up to speed with vm internals.

PR:		33542, 20393
Reviewed by:	dillon
MFC after:	1 week
2002-02-27 18:03:02 +00:00
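A hypothetical sketch of the capping idea; every name below is invented for illustration, and the real change lives in the vm_daemon/pageout code:

    #define MAX_REF_CHECKS  16      /* stop re-testing a shared page here */

    struct page_scan_state {
            int ref_checks;         /* reference-bit samples this scan */
    };

    /*
     * With a per-scan cap, a page shared by N processes is examined at
     * most MAX_REF_CHECKS times per daemon pass instead of N times, so
     * the TLB-invalidation cost no longer grows with the number of
     * processes sharing the page.
     */
    static int
    should_check_page(struct page_scan_state *ps)
    {
            if (ps->ref_checks >= MAX_REF_CHECKS)
                    return (0);
            ps->ref_checks++;
            return (1);
    }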
Benno Rice
f8e03c1093 Don't call critical_enter()/critical_exit() around calls to pmap_pvo_enter()
as it does its own handling of critical sections.
2002-02-23 05:55:51 +00:00
Benno Rice
5244eac968 Complete rework of the PowerPC pmap and a number of other bits in the early
boot sequence.

The new pmap.c is based on NetBSD's newer pmap.c (for the mpc6xx processors),
which is 70% faster than the older code that the original pmap.c was based
on.  It also follows the framework established by jake's initial
sparc64 pmap.c.

There is no change to how far the kernel gets (it makes it to the mountroot
prompt in psim) but the new pmap code is a lot cleaner.

Obtained from:	NetBSD (pmap code)
2002-02-14 01:39:11 +00:00
Mark Peek
94e0b85e76 Fix includes based on recent changes to lock.h, mutex.h and ktr.h. 2001-10-19 22:45:46 +00:00
Benno Rice
bdf71f568b Implement pmap_mapdev. 2001-10-14 08:38:16 +00:00
Mark Peek
d699b539c2 Add missing include file. 2001-09-20 15:32:56 +00:00
Mark Peek
06fdffd84d Use BATL/BATU macros instead of hardcoded hex constants. 2001-09-20 00:48:30 +00:00
Mark Peek
5fd2c51edb Update PowerPC MD code to compile and do initial bootstrap based on
recent changes (KSE and VM requiring physmem to be set up).

Reviewed by:	benno, jhb, julian
2001-09-20 00:47:17 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
Make the kernel aware that there are smaller units of scheduling than the
process (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
Peter Wemm
b8603f0e57 Similar to the changes in the i386/alpha/etc. pmap.c: converge on a
similar look and feel for pmap_new_proc(), with some cosmetic style changes. 2001-08-31 06:42:45 +00:00
2001-08-31 06:42:45 +00:00
Peter Wemm
0b27d7104f Make PMAP_SHPGPERPROC tunable. One shouldn't need to recompile a kernel
for this, since the limit is easy to run into on large systems with lots of
shared mmap space.

Obtained from:	yahoo
2001-07-27 01:08:59 +00:00
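A sketch of what making it tunable typically looks like in FreeBSD, using TUNABLE_INT_FETCH(); the default value, variable name, and fetch placement below are assumptions for illustration:

    #ifndef PMAP_SHPGPERPROC
    #define PMAP_SHPGPERPROC 200            /* compiled-in default */
    #endif

    static int shpgperproc = PMAP_SHPGPERPROC;

    static void
    pmap_fetch_tunables(void)               /* hypothetical placement */
    {
            /*
             * Allow a loader.conf setting to override the compiled-in
             * default without rebuilding the kernel.
             */
            TUNABLE_INT_FETCH("vm.pmap.shpgperproc", &shpgperproc);
    }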
Benno Rice
111c77dcef Fix comment breakage. 2001-06-27 12:20:48 +00:00
Benno Rice
f9bac91b18 Bring in NetBSD code used in the PowerPC port.
Reviewed by:	obrien, dfr
Obtained from:	NetBSD
2001-06-10 02:39:37 +00:00