Commit Graph

177 Commits

Peter Grehan
5d64cf91fb G4 requires isync after a 256Mb ibat/dbat update; G3 requires
isync after each bat update. Otherwise, pmap_bootstrap causes
an ISI exception. A fallout of the loader BAT removal.
2004-07-08 12:47:36 +00:00
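
A minimal sketch of the ordering rule described above, using the standard mtdbatu/mtdbatl mnemonics (the BAT index and the wrapper name are illustrative):

    /*
     * Program DBAT0, then isync before any access relies on the new
     * translation.  Per the message: G3 wants the isync after every
     * BAT update, G4 after each 256Mb ibat/dbat group.
     */
    static __inline void
    dbat0_write(u_int batu, u_int batl)
    {
        __asm __volatile("mtdbatl 0, %0" :: "r"(batl));
        __asm __volatile("mtdbatu 0, %0" :: "r"(batu));
        __asm __volatile("isync");
    }
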
Alan Cox
56b093883a Correct pmap_extract()'s return type. It should be vm_paddr_t, not
vm_offset_t.
2004-07-05 23:08:27 +00:00
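
The corrected declaration, as the message states (vm_paddr_t and vm_offset_t stop being interchangeable once physical addresses can outgrow the virtual word size, e.g. with PAE):

    /* Returns a physical address, so the return type is vm_paddr_t. */
    vm_paddr_t pmap_extract(pmap_t pmap, vm_offset_t va);
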
Peter Grehan
6cc1cdf47b Modify loop test when cycling through phys_avail array. It's possible
for an OpenFirmware implementation to have a single memory region
(hello PearPC).
2004-07-01 08:01:49 +00:00
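
A hedged sketch of the loop shape in question: phys_avail[] holds (start, end) pairs terminated by a zeroed pair, so the test must work when the very first region is also the last:

    /* Walk phys_avail[]; a single-region OpenFirmware (hello PearPC)
     * means the terminating zero pair follows immediately. */
    int i;

    for (i = 0; phys_avail[i + 1] != 0; i += 2) {
        vm_paddr_t start = phys_avail[i];
        vm_paddr_t end = phys_avail[i + 1];
        /* hand [start, end) to the VM system */
    }
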
Alan Cox
1f51408ade Remove avail_end. It is not used. 2004-04-11 06:02:24 +00:00
Alan Cox
c8607538c8 Remove avail_start on those platforms that no longer use it. (Only amd64
does anything with it beyond simple initialization.)
2004-04-05 04:08:00 +00:00
Alan Cox
bdb93eb248 Remove unused arguments from pmap_init(). 2004-04-05 00:37:50 +00:00
Alan Cox
fcffa790e9 Retire pmap_pinit2(). Alpha was the last platform that used it. However,
ever since alpha/alpha/pmap.c revision 1.81 introduced the list allpmaps,
there has been no reason for having this function on Alpha.  Briefly,
when pmap_growkernel() relied upon the list of all processes to find and
update the various pmaps to reflect a growth in the kernel's valid
address space, pmap_pinit2() served to avoid a race between pmap
initialization and pmap_growkernel().  Specifically, pmap_pinit2() was
responsible for initializing the kernel portions of the pmap and
pmap_pinit2() was called after the process structure contained a pointer
to the new pmap for use by pmap_growkernel().  Thus, an update to the
kernel's address space might be applied to the new pmap unnecessarily,
but an update would never be lost.
2004-03-07 21:06:48 +00:00
Peter Grehan
4daf20b2f1 Increase kernel VA from 256Mb to 512Mb by shifting the segment used
for user copyinout down to 12, and keeping segments 13/14 for
kernel VA.

It would be nice to have more available, but segments lower than
this are reserved for either memory or 1:1 mapped device i/o,
and seg 15 is OpenFirmware ROM. Also, the effort to keep OpenFirmware
available for callbacks limits the use of VA-mapped segments.
Fortunately UMA_MD_SMALL_ALLOC takes away a lot of VM pressure.

Obtained from:  NetBSD
2004-03-02 06:49:21 +00:00
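
The layout described, sketched as segment-register constants (the values follow from the 16 x 256Mb PowerPC segment map; the macro names are the conventional ones and assumed here):

    #define USER_SR     12    /* user copyin/copyout window */
    #define KERNEL_SR   13    /* kernel VA, first 256Mb     */
    #define KERNEL2_SR  14    /* kernel VA, second 256Mb    */
    /*
     * Segment 15 stays with the OpenFirmware ROM; segments below 12
     * are memory or 1:1 mapped device i/o.  Kernel VA is thus
     * 2 x 256Mb = 512Mb, starting at KERNEL_SR << ADDR_SR_SHFT.
     */
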
Peter Grehan
7c2779715c Cleaned up param.h:
- culled long-dead #define's
- segment register defs moved to sr.h
- NPMAPS moved to pmap.h
- KERNBASE moved to vmparam.h
- removed include of <machine/cpu.h> and fixed src files that
  relied on this.

Modifying segment register code no longer causes gcc rebuilds :-)
2004-02-11 07:27:34 +00:00
Peter Grehan
0ee6dbd789 Remove pmap_pvo_allocf zone alloc function. It was a way of
using the direct-mapping of physmem to force PTE data structures
to be physically addressable so the interrupt-time real-mode
DSI trap handler could perform PTE spills. However, the memory
may have been > 256Mb, which would have caused a BAT spill and
double-interrupt.

The new trap code no longer handles PTE spills, so the requirement
that these pages be direct-mapped no longer applies. The irony is
UMA_MD_SMALL_ALLOC will return direct mappings for these structs :-)
2004-02-04 13:16:21 +00:00
Peter Grehan
0efd0097cb When UMA_MD_SMALL_ALLOC is defined, pmap_kextract will be called
for direct-mapped addresses. Assume that any address less than KVA
is one of these and return it. Also assert that an address that is
KVA does have a valid mapping - callers of pmap_kextract don't check
the return value, since they assume that they have a valid virtual
address.
2004-01-29 00:45:41 +00:00
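
A sketch of the described behavior; the PVO lookup helper and PTE field names follow the pmap.c of that era and are assumptions here:

    vm_paddr_t
    pmap_kextract(vm_offset_t va)
    {
        struct pvo_entry *pvo;

        /* Below KVA: a UMA_MD_SMALL_ALLOC direct mapping. */
        if (va < VM_MIN_KERNEL_ADDRESS)
            return (va);
        pvo = pmap_pvo_find_va(kernel_pmap, va & ~ADDR_POFF, NULL);
        KASSERT(pvo != NULL, ("pmap_kextract: unmapped kva %#x", va));
        return ((pvo->pvo_pte.pte_lo & PTE_RPGN) | (va & ADDR_POFF));
    }
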
Peter Grehan
7b33c6ef7a Disable the per-vm_page PTE cache. This was not being invalidated
correctly, resulting in the dreaded "vm_pageout_flush: partially
invalid page" panic. The caching issue will be revisited in the
future, but opt for safety over performance in the meantime.

Tested by: gallatin
2003-12-16 03:55:57 +00:00
Andrew Gallatin
4f7daed045 pmap_query_bit() should return false if the bit is not set.
Reviewed by: grehan
2003-12-09 14:47:33 +00:00
Alan Cox
566526a957 Migrate pmap_prefault() into the machine-independent virtual memory layer.
A small helper function pmap_is_prefaultable() is added.  This function
encapsulates the few lines of pmap_prefault() that actually vary from
machine to machine.  Note: pmap_is_prefaultable() and pmap_mincore() have
much in common.  Going forward, it's worth considering their merger.
2003-10-03 22:46:53 +00:00
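
A hedged sketch of the helper's contract on powerpc: an address is prefaultable exactly when the pmap has nothing mapped there yet (the lookup helper named here is an assumption):

    boolean_t
    pmap_is_prefaultable(pmap_t pmap, vm_offset_t va)
    {
        /* Prefaulting only makes sense where no mapping exists. */
        return (pmap_pvo_find_va(pmap, va & ~ADDR_POFF, NULL) == NULL);
    }
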
Peter Grehan
84792e72d6 Soften assert in pmap_remove_all.
Introduce pmap_extract_and_hold.

Stolen from: sparc64
2003-09-22 11:59:05 +00:00
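
A sketch of the interface taken from sparc64, with the protection check against the PTE elided; the page-queue locking matches the era's vm_page_hold() rules:

    vm_page_t
    pmap_extract_and_hold(pmap_t pmap, vm_offset_t va, vm_prot_t prot)
    {
        vm_paddr_t pa;
        vm_page_t m = NULL;

        pa = pmap_extract(pmap, va);
        if (pa != 0) {
            m = PHYS_TO_VM_PAGE(pa);
            vm_page_lock_queues();
            vm_page_hold(m);    /* keep the page from being freed */
            vm_page_unlock_queues();
        }
        return (m);
    }
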
Alan Cox
e53f32ace5 Use kmem_alloc_nofault() rather than kmem_alloc_pageable() in pmap_mapdev().
See revision 1.140 of kern/sys_pipe.c for a detailed rationale.

Submitted by:	tegge
2003-08-02 19:26:09 +00:00
Bosko Milekic
b053bc8407 Make sure that when the PV ENTRY zone is created in pmap, that it's
created not only with UMA_ZONE_VM but also with UMA_ZONE_NOFREE.  In
the i386 case in particular, the pmap code would hook a special
page allocation routine that allocated from kernel_map and not kmem_map,
and so when/if the pageout daemon drained the zones, it could actually
push out slabs from the PV ENTRY zone but call UMA's default page_free,
which resulted in pages allocated from kernel_map being freed to
kmem_map; bad.  kmem_free() ignores the return value of the
vm_map_delete and just returns.  I'm not sure what the exact
repercussions could be, but it doesn't look good.

In the PAE case on i386, we also set-up a zone in pmap, so be
conservative for now and make that zone also ZONE_NOFREE and
ZONE_VM.  Do this for the pmap zones for the other archs too,
although in some cases it may not be entirely necessary.  We'd
rather be safe than sorry at this point.

Perhaps all UMA_ZONE_VM zones should by default be also
UMA_ZONE_NOFREE?

May fix some of silby's crashes on the PV ENTRY zone.
2003-07-31 03:39:51 +00:00
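
The creation pattern being mandated, using the i386 PV entry zone as the example (ctor/dtor/init/fini are all NULL for this zone):

    pvzone = uma_zcreate("PV ENTRY", sizeof(struct pv_entry),
        NULL, NULL, NULL, NULL, UMA_ALIGN_PTR,
        UMA_ZONE_VM | UMA_ZONE_NOFREE);  /* never drained, never freed */
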
Peter Wemm
ad7a226f9d Deal with 'options KSTACK_PAGES' being a global option. 2003-07-31 01:31:32 +00:00
Alan Cox
cdedf48666 Make pmap_pvo_allocf() callable without Giant. 2003-07-27 20:57:53 +00:00
Alan Cox
1f78f902a8 Background: pmap_object_init_pt() premaps the pages of an object in
order to avoid the overhead of later page faults.  In general, it
implements two cases: one for vnode-backed objects and one for
device-backed objects.  Only the device-backed case is really
machine-dependent, belonging in the pmap.

This commit moves the vnode-backed case into the (relatively) new
function vm_map_pmap_enter().  On amd64 and i386, this commit only
amounts to code rearrangement.  On alpha and ia64, the new machine
independent (MI) implementation of the vnode case is smaller and more
efficient than their pmap-based implementations.  (The MI
implementation takes advantage of the fact that objects in -CURRENT
are ordered collections of pages.)  On sparc64, pmap_object_init_pt()
hadn't (yet) been implemented.
2003-07-03 20:18:02 +00:00
Alan Cox
dca96f1adc - Export pmap_enter_quick() to the MI VM. This will permit the
implementation of a largely MI pmap_object_init_pt() for vnode-backed
   objects.  pmap_enter_quick() is implemented via pmap_enter() on sparc64
   and powerpc.
 - Correct a mismatch between pmap_object_init_pt()'s prototype and its
   various implementations.  (I plan to keep pmap_object_init_pt() as
   the MD hook for device-backed objects on i386 and amd64.)
 - Correct an error in ia64's pmap_enter_quick() and adjust its interface
   to match the other versions.

Discussed with:	marcel
2003-06-29 21:20:04 +00:00
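
On powerpc and sparc64 the "implemented via pmap_enter()" fallback is essentially a one-liner; a sketch:

    void
    pmap_enter_quick(pmap_t pmap, vm_offset_t va, vm_page_t m)
    {
        /* An unwired mapping entered through the ordinary path. */
        pmap_enter(pmap, va, m, VM_PROT_READ | VM_PROT_EXECUTE, FALSE);
    }
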
Alan Cox
49a2507bd1 Migrate the thread stack management functions from the machine-dependent
to the machine-independent parts of the VM.  At the same time, this
introduces vm object locking for the non-i386 platforms.

Two details:

1. KSTACK_GUARD has been removed in favor of KSTACK_GUARD_PAGES.  The
different machine-dependent implementations used various combinations
of KSTACK_GUARD and KSTACK_GUARD_PAGES.  To disable the guard page, set
KSTACK_GUARD_PAGES to 0.

2. Remove the (unnecessary) clearing of PG_ZERO in vm_thread_new.  In
5.x, (but not 4.x,) PG_ZERO can only be set if VM_ALLOC_ZERO is passed
to vm_page_alloc() or vm_page_grab().
2003-06-14 23:23:55 +00:00
Alan Cox
89f4fca265 Move the *_new_altkstack() and *_dispose_altkstack() functions out of the
various pmap implementations into the machine-independent vm.  They were
all identical.
2003-06-14 06:20:25 +00:00
David E. O'Brien
8368cf8f75 Use __FBSDID rather than rcsid[]. 2003-04-03 21:36:33 +00:00
Maxime Henrion
07159f9c56 Cleanup of the d_mmap_t interface.
- Get rid of the useless atop() / pmap_phys_address() detour.  The
  device mmap handlers must now give back the physical address
  without atop()'ing it.
- Don't borrow the physical address of the mapping in the returned
  int.  Now we properly pass a vm_offset_t * and expect it to be
  filled by the mmap handler when the mapping was successful.  The
  mmap handler must now return 0 when successful, any other value
  is considered as an error.  Previously, returning -1 was the only
  way to fail.  This change thus accidentally fixes some devices
  which were bogusly returning errno constants which would have been
  considered as addresses by the device pager.
- Garbage collect the poorly named pmap_phys_address() now that it's
  no longer used.
- Convert all the d_mmap_t consumers to the new API.

I'm still not sure whether we need a __FreeBSD_version bump for this,
since we didn't guarantee API/ABI stability until 5.1-RELEASE.

Discussed with:		alc, phk, jake
Reviewed by:		peter
Compile-tested on:	LINT (i386), GENERIC (alpha and sparc64)
Runtime-tested on:	i386
2003-02-25 03:21:22 +00:00
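
What a handler looks like under the revised contract (the device, base, and size names are hypothetical):

    static int
    foo_mmap(dev_t dev, vm_offset_t offset, vm_offset_t *paddr, int nprot)
    {
        if (offset >= FOO_MEM_SIZE)
            return (EINVAL);            /* a real errno, not -1  */
        *paddr = FOO_MEM_BASE + offset; /* physical, not atop()'d */
        return (0);                     /* 0 is the only success */
    }
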
Peter Grehan
03b6e02551 - add pmap_pagedaemon_waken variable
- remove dead code and fix warnings in pmap_zero_page/zero_page_area
- implement
    pmap_clear_reference
    pmap_ts_referenced
    pmap_page_exists_quick
    pmap_remove_all
- align pmap_qenter/qremove closer with i386 code
- fix vm_page locking in pmap_new_thread (from benno)
- add new parameter to pmap_clear_bit to return original
  pte value

Approved by:  benno
2003-02-01 02:56:48 +00:00
Benno Rice
b3c02b110a Back out some changes that snuck in with the last commit.
Pointy hat to:	benno
2003-01-27 04:32:10 +00:00
Benno Rice
622cfbd033 Flesh out bus_dmamap_sync. 2003-01-27 04:27:01 +00:00
Alan Cox
eea85e9bb6 Move pmap_collect() out of the machine-dependent code, rename it
to reflect its new location, and add page queue and flag locking.

Notes: (1) alpha, i386, and ia64 had identical implementations
of pmap_collect() in terms of machine-independent interfaces;
(2) sparc64 doesn't require it; (3) powerpc had it as a TODO.
2002-11-13 05:39:58 +00:00
Peter Grehan
24a4e806f2 - fix zero-sized stack alloc from previous commit; a default is now
selected a la sparc64
- KSEIII routines implemented (taken from i386/sparc64)

Approved by: Benno
2002-10-04 01:13:34 +00:00
Scott Long
316ec49abd Some kernel threads try to do significant work, and the default KSTACK_PAGES
doesn't give them enough stack to do much before blowing away the pcb.
This adds MI and MD code to allow the allocation of an alternate kstack
whose size can be specified when calling kthread_create.  Passing the
value 0 prevents the alternate kstack from being created.  Note that the
ia64 MD code is missing for now, and PowerPC was only partially written
due to the pmap.c being incomplete there.
Though this patch does not modify anything to make use of the alternate
kstack, acpi and usb are good candidates.

Reviewed by:	jake, peter, jhb
2002-10-02 07:44:29 +00:00
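
A hedged usage sketch (the thread function and the page count are hypothetical): the new pages argument sits just before the name format, and passing 0 keeps the default kstack:

    struct proc *p;
    int error;

    error = kthread_create(acpi_task_thread, NULL, &p, 0,
        4 /* alternate kstack pages; 0 = default stack */, "acpi_task");
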
Peter Grehan
32bc78460d - use BAT registers to map device space and physical memory
- remove test in pmap_activate that prevented vmspace sharing (v/rfork)
- always sync icache in pmap_enter until problems are sorted
- fix incorrect use of regions in pmap_kenter
- bring in pmap_release from NetBSD
- fix overwrite of bootstrap flag in pmap_pvo_enter

Approved by: benno
2002-09-19 04:36:20 +00:00
Alan Cox
6508a194aa o Retire pmap_pageable(). It's an advisory routine that none
of our platforms implements.
2002-08-25 04:20:05 +00:00
Alan Cox
7ffcf9ec77 o Don't set PG_MAPPED or PG_WRITEABLE when a page is mapped
using pmap_kenter() or pmap_qenter().
o Use VM_ALLOC_WIRED in pmap_new_thread().
2002-08-05 00:04:18 +00:00
Benno Rice
aa39961e37 Remove the statically allocated array that holds OpenFirmware memory mappings
during pmap_bootstrap.  Instead, temporarily help ourselves to some memory
from phys_avail since we won't need it post-bootstrap.
2002-07-18 12:43:08 +00:00
Benno Rice
3495845eec Add an implementation for pmap_zero_page_area. 2002-07-09 13:44:24 +00:00
Peter Wemm
a58b3a6878 Add a special page zero entry point intended to be called via the single
threaded VM pagezero kthread outside of Giant.  For some platforms, this
is really easy since it can just use the direct mapped region.  For others,
IPI sending is involved or there are other issues, so grab Giant when
needed.

We still have preemption issues to deal with, but Alan Cox has an
interesting suggestion on how to minimize the problem on x86.

Use Luigi's hack for preserving the (lack of) priority.

Turn the idle zeroing back on since it can now actually do something useful
outside of Giant in many cases.
2002-07-08 04:24:26 +00:00
Peter Wemm
a136efe9b6 Collect all the (now equivalent) pmap_new_proc/pmap_dispose_proc/
pmap_swapin_proc/pmap_swapout_proc functions from the MD pmap code
and use a single equivalent MI version.  There are other cleanups
needed still.

While here, use the UMA zone hooks to keep a cache of preinitialized
proc structures handy, just like the thread system does.  This eliminates
one dependency on 'struct proc' being persistent even after being freed.
There are some comments about things that can be factored out into
ctor/dtor functions if it is worth it.  For now they are mostly just
doing statistics to get a feel of how it is working.
2002-07-07 23:05:27 +00:00
Peter Wemm
1f0b8b7582 Update for post-kse3 pmap kthread allocation changes 2002-07-07 22:56:31 +00:00
Benno Rice
8bbfa33a79 Add pmap_mapdev and pmap_unmapdev. 2002-06-29 09:45:59 +00:00
Benno Rice
0d29067503 - Initialise battable to cover I/O spaces.
- Statically size the bpvo entries to avoid conflicts between bpvo allocation
  and the vm allocator.
- Shift pmap_init2 code into pmap_init.
- Add UMA_ZONE_VM flag to uma_zcreate.

Submitted by:	Peter Grehan <peterg@ptree32.com.au>
2002-06-29 09:43:59 +00:00
Benno Rice
25e2288dd7 Implement pmap_copy and pmap_copy_page. 2002-05-28 07:38:55 +00:00
Benno Rice
31c82d0332 Get the correct memory regions from OpenFirmware. We were getting the
"available" ranges, not the "physical" ranges.  Clean up some of the
bootstrap code in the process.

Submitted by:	Peter Grehan <peterg@ptree32.com.au>
2002-05-27 11:18:12 +00:00
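
A sketch of the distinction (buffer sizes arbitrary): the memory node's "reg" property lists the physical ranges, while "available" lists only what firmware has left unclaimed:

    struct mem_region phys[8], avail[8];
    phandle_t mem;

    mem = OF_finddevice("/memory");
    OF_getprop(mem, "reg", phys, sizeof(phys));          /* physical  */
    OF_getprop(mem, "available", avail, sizeof(avail));  /* available */
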
Benno Rice
0f92104c14 Implement the following functions:
- pmap_addr_hint
- pmap_change_wiring
- pmap_extract
- pmap_is_modified
2002-05-10 14:21:48 +00:00
Benno Rice
fafc736254 Improve our detection of an attempted duplicate entry. We may be trying to
change the page protection bits.
2002-05-10 06:27:08 +00:00
Benno Rice
8207b3627b 1. Better track the executable status of mappings.
2.  Set a pcpu variable to the real address of the active pmap (used when
    exiting from traps).

Obtained from:	NetBSD (1)
2002-05-09 14:09:19 +00:00
Peter Wemm
db17c6fc07 Tidy up some loose ends.
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
  (this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace)
2002-04-29 07:43:16 +00:00
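
The pattern behind the speedup: the kernel pmap becomes a link-time address rather than a pointer loaded at runtime, roughly:

    struct pmap kernel_pmap_store;            /* the object itself       */
    #define kernel_pmap (&kernel_pmap_store)  /* resolved by ld, no load */
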
Benno Rice
864bc5205b Correct a comment. 2002-04-16 12:15:17 +00:00
Benno Rice
e79f59e84c Implement the following functions:
- pmap_kextract
	- pmap_object_init_pt
	- pmap_protect
	- pmap_remove_pages

I'm pretty sure pmap_remove_pages is at least somewhat bogus.
2002-04-16 12:13:10 +00:00
Benno Rice
27dbf9d5e8 Remove some dead code. 2002-04-16 12:10:04 +00:00
Benno Rice
d080d5fd7c Use mtsrin() instead of inline asm. 2002-04-16 12:07:41 +00:00
Benno Rice
a8aaf02c3c Change the value of PMAP_BOOTSTRAP so we don't stomp on the PTE index value. 2002-04-16 12:00:43 +00:00
Peter Wemm
1a87a0da66 Pass vm_page_t instead of physical addresses to pmap_zero_page[_area]()
and pmap_copy_page().  This gets rid of a couple more physical addresses
in upper layers, with the eventual aim of supporting PAE and dealing with
the physical addressing mostly within pmap.  (We will need either 64 bit
physical addresses or page indexes, possibly both depending on the
circumstances.  Leaving this to pmap itself gives more flexibility.)

Reviewed by:	jake
Tested on:	i386, ia64 and (I believe) sparc64. (my alpha was hosed)
2002-04-15 16:00:03 +00:00
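
The interface after the change; the pmap derives whatever physical representation it needs from the page itself:

    void pmap_zero_page(vm_page_t m);
    void pmap_zero_page_area(vm_page_t m, int off, int size);
    void pmap_copy_page(vm_page_t src, vm_page_t dst);
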
Benno Rice
52a3cde55d Turn some CTR's into CTR0's. 2002-04-15 12:11:18 +00:00
Jeff Roberson
378862a72d Remove references to vm_zone.h and switch over to the new uma API. 2002-03-21 01:11:31 +00:00
Benno Rice
21d7ec8915 Increment pmap_pvo_count in the right place. 2002-03-20 05:25:33 +00:00
Jeff Roberson
8355f576a9 This is the first part of the new kernel memory allocator. This replaces
malloc(9) and vm_zone with a slab like allocator.

Reviewed by:	arch@
2002-03-19 09:11:49 +00:00
Benno Rice
49f8f7273b Changes and fixes in preparation for UMA:
- Bootstrap pvo entries are now allocated by stealing pages.
- Just return if we're pmap_enter'ing a mapping that's already there.  Don't
  remove it and re-enter it.
2002-03-17 23:58:12 +00:00
Benno Rice
8862232d7b Correct a typo. (* that should've been &) 2002-03-11 07:09:42 +00:00
Benno Rice
3e4409437f Copy the "implementation" of pmap_prefault from sparc64. 2002-03-07 12:22:08 +00:00
Benno Rice
d2c1f57685 Calculate physmem. 2002-03-07 10:09:24 +00:00
Benno Rice
ac6ba8bd4a - Modify pmap_activate so it only marks the pmap as active.
- Add a pmap_deactivate function.
2002-02-28 11:55:44 +00:00
Benno Rice
88afb2a31b Implement the following functions:
- pmap_remove
	- pmap_kremove
	- pmap_qremove
2002-02-28 02:54:16 +00:00
Benno Rice
54eb8bbc14 Remove most of the usage of critical_enter/exit.
I put these in to match the use of spl*() in the NetBSD code I was basing this
on, but it appears to cause problems.

I'm doing this in a separate commit so as to be able to refer back if locking
becomes an issue at a later stage.
2002-02-28 02:45:10 +00:00
Mike Silbersack
7f3a40933b Fix a horribly suboptimal algorithm in the vm_daemon.
In order to determine what to page out, the vm_daemon checks
reference bits on all pages belonging to all processes.  Unfortunately,
the algorithm used reacted badly with shared pages; each shared page
would be checked once per process sharing it; this caused an O(N^2)
growth of tlb invalidations.  The algorithm has been changed so that
each page will be checked only 16 times.

Prior to this change, a fork/sleepbomb of 1300 processes could cause
the vm_daemon to take over 60 seconds to complete, effectively
freezing the system for that time period.  With this change
in place, the vm_daemon completes in less than a second.  Any system
with hundreds of processes sharing pages should benefit from this change.

Note that the vm_daemon is only run when the system is under extreme
memory pressure.  It is likely that many people with loaded systems saw
no symptoms of this problem until they reached the point where swapping
began.

Special thanks go to dillon, peter, and Chuck Cranor, who helped me
get up to speed with vm internals.

PR:		33542, 20393
Reviewed by:	dillon
MFC after:	1 week
2002-02-27 18:03:02 +00:00
Benno Rice
f8e03c1093 Don't call critical_enter()/critical_exit() around calls to pmap_pvo_enter()
as it does its own handling of critical sections.
2002-02-23 05:55:51 +00:00
Benno Rice
5244eac968 Complete rework of the PowerPC pmap and a number of other bits in the early
boot sequence.

The new pmap.c is based on NetBSD's newer pmap.c (for the mpc6xx processors)
which is 70% faster than the older code that the original pmap.c was based
on.  It has also been based on the framework established by jake's initial
sparc64 pmap.c.

There is no change to how far the kernel gets (it makes it to the mountroot
prompt in psim) but the new pmap code is a lot cleaner.

Obtained from:	NetBSD (pmap code)
2002-02-14 01:39:11 +00:00
Mark Peek
94e0b85e76 Fix includes based on recent changes to lock.h, mutex.h and ktr.h. 2001-10-19 22:45:46 +00:00
Benno Rice
bdf71f568b Implement pmap_mapdev. 2001-10-14 08:38:16 +00:00
Mark Peek
d699b539c2 Add missing include file. 2001-09-20 15:32:56 +00:00
Mark Peek
06fdffd84d Use BATL/BATU macros instead of hardcoded hex constants. 2001-09-20 00:48:30 +00:00
Mark Peek
5fd2c51edb Update PowerPC MD code to compile and do initial bootstrap based on
recent changes (KSE and VM requiring physmem to be set up).

Reviewed by:	benno, jhb, julian
2001-09-20 00:47:17 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doosie!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
Peter Wemm
b8603f0e57 Similar to changes on i386/alpha/etc pmap.c; converge on a similar
look/feel on pmap_new_proc() with some cosmetic style changes.
2001-08-31 06:42:45 +00:00
Peter Wemm
0b27d7104f Make PMAP_SHPGPERPROC tunable. One shouldn't need to recompile a kernel
for this, since it is easy to run into with large systems with lots of
shared mmap space.

Obtained from:	yahoo
2001-07-27 01:08:59 +00:00
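
A sketch of the tunable pattern (the init hook name is hypothetical; the tunable string matches the convention):

    static int shpgperproc = PMAP_SHPGPERPROC;  /* compile-time default */

    static void
    pmap_pv_init(void)
    {
        /* loader.conf can override without a kernel rebuild. */
        TUNABLE_INT_FETCH("vm.pmap.shpgperproc", &shpgperproc);
        pv_entry_max = shpgperproc * maxproc + vm_page_array_size;
    }
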
Benno Rice
111c77dcef Fix comment breakage. 2001-06-27 12:20:48 +00:00
Benno Rice
f9bac91b18 Bring in NetBSD code used in the PowerPC port.
Reviewed by:	obrien, dfr
Obtained from:	NetBSD
2001-06-10 02:39:37 +00:00