Commit Graph

307 Commits

Author SHA1 Message Date
Marcel Moolenaar
b21a0008ba Introduce MAP_ENTRY_GROWS_DOWN and MAP_ENTRY_GROWS_UP to allow for
growable (stack) entries that not only grow down, but also grow up.
Have vm_map_growstack() take these flags into account when growing
an entry.

This is the first step in adding support for upward growable stacks.
It is a required feature on ia64 to support the register stack (or
rstack as I like to call it -- it also means reverse stack). We do
not currently create rstacks, so the upward growing is not exercised
and the change should be a functional no-op.

Reviewed by: alc
2003-08-30 21:25:23 +00:00
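
For illustration, a minimal sketch of a grow routine that honors both
direction flags; the flag values, entry layout, and helper below are
hypothetical stand-ins, not the committed vm_map_growstack():

#define MAP_ENTRY_GROWS_DOWN    0x1000  /* flag values illustrative only */
#define MAP_ENTRY_GROWS_UP      0x2000

struct map_entry {
        unsigned long start;            /* lowest mapped address */
        unsigned long end;              /* one past highest mapped address */
        int eflags;
};

static int
grow_entry(struct map_entry *e, unsigned long grow_amount)
{
        if (e->eflags & MAP_ENTRY_GROWS_DOWN)
                e->start -= grow_amount;        /* conventional stack */
        else if (e->eflags & MAP_ENTRY_GROWS_UP)
                e->end += grow_amount;          /* e.g. an ia64 rstack */
        else
                return (-1);                    /* not a growable entry */
        return (0);
}
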
Alan Cox
5402d8ec23 Remove GIANT_REQUIRED from vmspace_alloc(). 2003-08-13 19:23:51 +00:00
Bruce M Simpson
abd498aa71 Add the mlockall() and munlockall() system calls.
- All those diffs to syscalls.master for each architecture *are*
   necessary. This needed clarification; the stub code generation for
   mlockall() was disabled, which would prevent applications from
   linking to this API (suggested by mux)
 - Giant has been quashed. It is no longer held by the code, as
   the required locking has been pushed down within vm_map.c.
 - Callers must specify VM_MAP_WIRE_HOLESOK or VM_MAP_WIRE_NOHOLES
   to express their intention explicitly.
 - Inspected at the vmstat, top, and vm pager sysctl stats level;
   paging-in activity was verified to occur correctly with a test harness.
 - The RES size for a process may appear to be greater than its SIZE.
   This is believed to be due to mappings of the same shared library
   page being wired twice. Further exploration is needed.
 - Believed to back out of allocations and locks correctly
   (tested with WITNESS, MUTEX_PROFILING, INVARIANTS and DIAGNOSTIC).

PR:             kern/43426, standards/54223
Reviewed by:    jake, alc
Approved by:    jake (mentor)
MFC after:	2 weeks
2003-08-11 07:14:08 +00:00
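
For reference, the userland side of the new API is standard mlockall(2)
usage; the kernel-side wiring is what this commit implements:

#include <sys/mman.h>
#include <err.h>

int
main(void)
{
        /*
         * Wire every currently mapped page and every future mapping;
         * needs privilege and counts against the memorylocked limit.
         */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
                err(1, "mlockall");

        /* ... latency-sensitive work that must not page ... */

        if (munlockall() == -1)
                err(1, "munlockall");
        return (0);
}
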
Poul-Henning Kamp
ec38b344cb Move the implementation of vmspace_swap_count() (used only in
the "toss the largest process" emergency handling) from vm_map.c to
swap_pager.c.

The quantity calculated depends strongly on the internals of the
swap_pager and by moving it, we no longer need to expose the
internal metrics of the swap_pager to the world.
2003-07-18 10:47:58 +00:00
Alan Cox
1f78f902a8 Background: pmap_object_init_pt() premaps the pages of an object in
order to avoid the overhead of later page faults.  In general, it
implements two cases: one for vnode-backed objects and one for
device-backed objects.  Only the device-backed case is really
machine-dependent, belonging in the pmap.

This commit moves the vnode-backed case into the (relatively) new
function vm_map_pmap_enter().  On amd64 and i386, this commit only
amounts to code rearrangement.  On alpha and ia64, the new
machine-independent (MI) implementation of the vnode case is smaller and more
efficient than their pmap-based implementations.  (The MI
implementation takes advantage of the fact that objects in -CURRENT
are ordered collections of pages.)  On sparc64, pmap_object_init_pt()
hadn't (yet) been implemented.
2003-07-03 20:18:02 +00:00
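
A loose sketch of the MI idea (simplified, hypothetical types; not the
committed vm_map_pmap_enter()): because resident pages are kept ordered
by index, the vnode case can walk that collection once and premap each
resident page in the requested range:

struct page {
        unsigned long pindex;           /* page's offset within the object */
        struct page *next;              /* kept sorted by pindex */
};

static void
premap_range(struct page *head, unsigned long first, unsigned long last,
    void (*enter)(unsigned long))
{
        struct page *p;

        for (p = head; p != NULL && p->pindex <= last; p = p->next)
                if (p->pindex >= first)
                        enter(p->pindex);       /* premap; no later fault */
}
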
Alan Cox
8526ce9b64 Check the address provided to vm_map_stack() against the vm map's maximum,
returning an error if the address is too high.
2003-07-01 03:57:25 +00:00
Alan Cox
0551c08dee Introduce vm_map_pmap_enter(). Presently, this is a stub calling the MD
pmap_object_init_pt().
2003-06-29 23:32:55 +00:00
Alan Cox
23252eeabe Simple read-modify-write operations on a vm object's flags, ref_count, and
shadow_count can now rely on its mutex for synchronization.  Remove one use
of Giant from vm_map_insert().
2003-06-27 18:52:49 +00:00
Alan Cox
95018011e5 Remove a GIANT_REQUIRED on the kernel object that we no longer need. 2003-06-25 05:31:02 +00:00
David E. O'Brien
874651b13c Use __FBSDID(). 2003-06-11 23:50:51 +00:00
Alan Cox
d7fc221044 Pass the vm object to vm_object_collapse() with its lock held. 2003-06-07 02:29:17 +00:00
Alan Cox
4e73db5f40 Increase the scope of the vm_object lock in vm_map_delete(). 2003-04-30 19:18:09 +00:00
Alan Cox
8ba20a48bd Add vm_object locking to vmspace_swap_count(). 2003-04-30 00:43:17 +00:00
Alan Cox
155080d31e - Extend the scope of two existing vm_object locks to cover
swap_pager_freespace().
2003-04-26 05:30:56 +00:00
Alan Cox
b6e48e0372 - Acquire the vm_object's lock when performing vm_object_page_clean().
- Add a parameter to vm_pageout_flush() that tells vm_pageout_flush()
   whether its caller has locked the vm_object.  (This is a temporary
   measure to bootstrap vm_object locking.)
2003-04-24 04:31:25 +00:00
Alan Cox
1d284e00b5 - Update the vm_object locking in vm_map_insert(). 2003-04-20 21:56:40 +00:00
Alan Cox
7d040e3cc5 Update vm_object locking in vm_map_delete(). 2003-04-20 04:35:47 +00:00
Alan Cox
034b3d7a6f o Update locking around vm_object_page_remove() in vm_map_clean()
to use the new macros.
 o Remove unnecessary increment and decrement of the vm_object's
   reference count in vm_map_clean().
2003-04-19 01:43:32 +00:00
Alan Cox
e2479b4fc3 Lock some manipulations of the vm object's flags. 2003-04-13 20:22:02 +00:00
Poul-Henning Kamp
b4b138c27f Including <sys/stdint.h> is (almost?) universally done only to be
able to use %j in printfs, so put a nested include in <sys/systm.h>, where the
printf prototype lives, and save everybody else the trouble.
2003-03-18 08:45:25 +00:00
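
The %j conversions in question take an intmax_t/uintmax_t argument, so
one format works for any integer type once the argument is cast:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
        int64_t bytes = 5368709120;     /* wider than int or long on ILP32 */

        printf("%jd bytes (%#jx)\n", (intmax_t)bytes, (uintmax_t)bytes);
        return (0);
}
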
David Schultz
72d97679ff - When the VM daemon is out of swap space and looking for a
process to kill, don't block on a map lock while holding the
  process lock.  Instead, skip processes whose map locks are held
  and find something else to kill.
- Add vm_map_trylock_read() to support the above.

Reviewed by:	alc, mike (mentor)
2003-03-12 23:13:16 +00:00
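
The skip-on-contention pattern, sketched with POSIX rwlocks standing in
for the vm map lock (the process list and selection logic here are
hypothetical):

#include <pthread.h>

struct proc_like {
        pthread_rwlock_t map_lock;      /* stand-in for the vm map lock */
        struct proc_like *next;
};

static struct proc_like *
pick_victim(struct proc_like *head)
{
        struct proc_like *p;

        for (p = head; p != NULL; p = p->next) {
                /* Never block here: skip maps whose lock is held. */
                if (pthread_rwlock_tryrdlock(&p->map_lock) != 0)
                        continue;
                /* ... evaluate p as a candidate to kill ... */
                pthread_rwlock_unlock(&p->map_lock);
                return (p);
        }
        return (NULL);
}
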
Alan Cox
09c80124a3 Remove ENABLE_VFS_IOOPT. It is a long unfinished work-in-progress.
Discussed on:	arch@
2003-03-06 03:41:02 +00:00
Warner Losh
a163d034fa Back out M_* changes, per decision of the TRB.
Approved by: trb
2003-02-19 05:47:46 +00:00
Alan Cox
814f5c92d7 Remove the acquisition and release of Giant around pmap_growkernel().
It's unnecessary for two reasons: (1) Giant is at present already held in
such cases and (2) our various implementations of pmap_growkernel() look to
be MP safe.  (For example, for sparc64 the proof of (2) is trivial.)
2003-02-15 20:01:09 +00:00
Alan Cox
d923c5986e Add MTX_DUPOK to the initialization of system map locks. 2003-01-25 18:45:55 +00:00
Alfred Perlstein
44956c9863 Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0.
Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
2003-01-21 08:56:16 +00:00
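
After this change, a caller that may not sleep passes M_NOWAIT and must
handle failure; a kernel fragment of the post-commit idiom (not taken
from the diff itself):

#include <sys/param.h>
#include <sys/malloc.h>

static void *
alloc_nosleep(size_t size)
{
        void *buf;

        buf = malloc(size, M_TEMP, M_NOWAIT);   /* may return NULL */
        if (buf == NULL)
                return (NULL);                  /* caller must cope */
        return (buf);
}
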
Matthew Dillon
2d5c7e4506 Close the remaining user address mapping races for physical
I/O, CAM, and AIO.  Still TODO: streamline useracc() checks.

Reviewed by:	alc, tegge
MFC after:	7 days
2003-01-20 17:46:48 +00:00
Matthew Dillon
3db161e079 It is possible for an active aio to prevent shared memory from being
dereferenced when a process exits, because the aio keeps the vmspace
ref-count bumped.  Change shmexit() and shmexit_myhook() to take a vmspace instead
of a process and call it in vmspace_dofree().  This way if it is missed
in exit1()'s early-resource-free it will still be caught when the zombie is
reaped.

Also fix a potential race in shmexit_myhook() by NULLing out
vmspace->vm_shm prior to calling shm_delete_mapping() and free().

MFC after:	7 days
2003-01-13 23:04:32 +00:00
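
The shape of the race fix, as a simplified sketch (hypothetical type;
the real code clears vmspace->vm_shm before shm_delete_mapping() and
free()):

struct vmspace_like {
        void *vm_shm;                   /* SysV shared memory state */
};

static void
shm_exit_hook(struct vmspace_like *vm)
{
        void *shm = vm->vm_shm;

        if (shm == NULL)
                return;
        vm->vm_shm = NULL;      /* publish "gone" before tearing down */
        /* shm_delete_mapping(shm); free(shm); */
}
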
Alan Cox
a6864937e2 Lock the vm object when performing vm_object_clear_flag(). 2003-01-03 09:15:43 +00:00
Alan Cox
36daaecd04 Implement a variant locking scheme for vm maps: Access to system maps
is now synchronized by a mutex, whereas access to user maps is still
synchronized by a lockmgr()-based lock.  Why?  No single type of lock,
including sx locks, meets the requirements of both types of vm map.
Sometimes we sleep while holding the lock on a user map.  Thus, a
mutex isn't appropriate.  On the other hand, both lockmgr()-based
and sx locks release Giant when a thread/process blocks during
contention for a lock.  This could lead to a race condition in a legacy
driver (that relies on Giant for synchronization) if it attempts to
kmem_malloc() and fails to immediately obtain the lock.  Fortunately,
we never sleep while holding a system map lock.
2002-12-31 19:38:04 +00:00
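
A sketch of the variant scheme's shape (simplified; an sx lock stands in
here for the lockmgr()-based lock purely for illustration):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/sx.h>

struct vmmap_like {
        int system_map;         /* nonzero for system maps */
        struct mtx system_mtx;  /* system maps: mutex, never slept on */
        struct sx user_lock;    /* user maps: sleepable lock */
};

static void
map_lock(struct vmmap_like *map)
{
        if (map->system_map)
                mtx_lock(&map->system_mtx);     /* no sleeping while held */
        else
                sx_xlock(&map->user_lock);      /* may sleep while held */
}
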
Alan Cox
3a92e5d5e9 - Increment the vm_map's timestamp if _vm_map_trylock() succeeds.
- Introduce map_sleep_mtx and use it to replace Giant in
   vm_map_unlock_and_wait() and vm_map_wakeup().  (Original
   version by: tegge.)
2002-12-30 00:41:33 +00:00
Alan Cox
e3a9e1b2a8 - Remove vm_object_init2(). It is unused.
- Add a mtx_destroy() to vm_object_collapse().  (This allows a bzero()
   to migrate from _vm_object_allocate() to vm_object_zinit(), where it
   will be performed less often.)
2002-12-29 21:01:14 +00:00
Matthew Dillon
389d2b6e21 Fix a refcount race with the vmspace structure. In order to prevent
resource starvation we clean-up as much of the vmspace structure as we
can when the last process using it exits.  The rest of the structure
is cleaned up when it is reaped.  But since exit1() decrements the ref
count it is possible for a double-free to occur if someone else, such as
the process swapout code, references and then dereferences the structure.
Additionally, the final cleanup of the structure should not occur until
the last process referencing it is reaped.

This commit solves the problem by introducing a secondary reference count,
called 'vm_exitingcnt'.  The normal reference count is decremented on exit
and vm_exitingcnt is incremented.  vm_exitingcnt is decremented when the
process is reaped.  When both vm_exitingcnt and vm_refcnt are 0, the
structure is freed for real.

MFC after:	3 weeks
2002-12-15 18:50:04 +00:00
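
A minimal sketch of the two-counter lifetime rule described above (field
names follow the commit; locking and the actual teardown are elided):

struct vmspace_like {
        int vm_refcnt;          /* live references */
        int vm_exitingcnt;      /* exited but not yet reaped */
};

static void
on_exit1(struct vmspace_like *vm)
{
        vm->vm_refcnt--;
        vm->vm_exitingcnt++;            /* hold the shell until reap */
        if (vm->vm_refcnt == 0) {
                /* clean up the expensive parts early */
        }
}

static void
on_reap(struct vmspace_like *vm)
{
        vm->vm_exitingcnt--;
        if (vm->vm_refcnt == 0 && vm->vm_exitingcnt == 0) {
                /* now, and only now, free the structure for real */
        }
}
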
Alan Cox
5e83956af5 Perform vm_object_lock() and vm_object_unlock() around
vm_object_page_remove().
2002-12-15 07:16:51 +00:00
Alan Cox
bc105a6797 Hold the page queues lock when calling pmap_protect(); it updates fields
of the vm_page structure.  Make the style of the pmap_protect() calls
consistent.

Approved by:	re (blanket)
2002-12-01 18:57:56 +00:00
Alan Cox
85e03a7e1e Acquire and release the page queues lock around calls to pmap_protect()
because it updates flags within the vm page.

Approved by:	re (blanket)
2002-11-25 22:00:31 +00:00
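
The two preceding commits install the same bracketing, roughly this
kernel fragment (the local names are hypothetical):

vm_page_lock_queues();
pmap_protect(map->pmap, start, end, new_prot);
vm_page_unlock_queues();
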
Alan Cox
f6116791a2 Fix an error case in vm_map_wire(): during cleanup after a user
wire error, unwiring of an entry would fail if the entry was already system
wired.

Reported by:	tegge
2002-11-09 21:26:49 +00:00
Maxime Henrion
cd034a5be9 Correctly print vm_offset_t types. 2002-11-07 22:49:07 +00:00
Poul-Henning Kamp
af045176d1 Properly put macro args in ().
Spotted by:	FlexeLint.
2002-10-16 10:52:15 +00:00
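
The classic failure the parenthesization avoids:

#include <stdio.h>

#define DOUBLE_BAD(x)   (x + x)         /* unparenthesized argument */
#define DOUBLE_GOOD(x)  ((x) + (x))

int
main(void)
{
        /* DOUBLE_BAD(1 << 2) expands to (1 << 2 + 1 << 2) == 32. */
        printf("bad:  %d\n", DOUBLE_BAD(1 << 2));       /* 32, not 8 */
        printf("good: %d\n", DOUBLE_GOOD(1 << 2));      /* 8 */
        return (0);
}
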
Matthew N. Dodd
4a2eca23ca Modify vm_map_clean() (and thus the msync(2) system call) to support
invalidation of cached pages for objects of type OBJT_DEVICE.

Submitted by:	Christian Zander <zander@minion.de>
Approved by:	alc
2002-09-22 08:22:32 +00:00
Jake Burkholder
05ba50f522 Use the fields in the sysentvec and in the vm map header in place of the
constants VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS, USRSTACK and PS_STRINGS.
This is mainly so that they can be variable even for the native abi, based
on different machine types.  Get stack protections from the sysentvec too.
This makes it trivial to map the stack non-executable for certain abis, on
machines that support it.
2002-09-21 22:07:17 +00:00
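
The shape of the change, sketched (field names modeled on FreeBSD's
struct sysentvec but simplified; the real structure carries more):

struct abi_params {
        unsigned long sv_minuser;       /* lowest valid user address */
        unsigned long sv_maxuser;       /* highest valid user address */
        unsigned long sv_usrstack;      /* initial stack top */
        int sv_stackprot;               /* stack page protection */
};

static unsigned long
stack_top(const struct abi_params *abi)
{
        return (abi->sv_usrstack);      /* was the USRSTACK constant */
}
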
Alan Cox
4eaa117956 o Use vm_object_lock() in place of Giant when manipulating a vm object
in vm_map_insert().
2002-08-24 17:52:08 +00:00
Alan Cox
ef594d3186 o Merge vm_fault_wire() and vm_fault_user_wire() by adding a new parameter,
user_wire.
2002-07-24 19:47:56 +00:00
Peter Wemm
3ebc124838 Infrastructure tweaks to allow having both an Elf32 and an Elf64 executable
handler in the kernel at the same time.  Also, allow for the
exec_new_vmspace() code to build a different sized vmspace depending on
the executable environment.  This is a big help for execing i386 binaries
on ia64.   The ELF exec code grows the ability to map partial pages when
there is a page size difference, e.g., emulating 4K pages on 8K or 16K
hardware pages.

Flesh out the i386 emulation support for ia64.  At this point, the only
binary that I know of that fails is cvsup, because the cvsup runtime
tries to execute code in pages not marked executable.

Obtained from:  dfr (mostly, many tweaks from me).
2002-07-20 02:56:12 +00:00
Peter Wemm
9e7c1bce60 (VM_MAX_KERNEL_ADDRESS - KERNBASE) / PAGE_SIZE may not fit in an integer.
Use lmin(long, long), not min(u_int, u_int).  This is a problem here on
ia64, which has *way* more than 2^32 pages of KVA: 281474976710655 pages,
to be precise.
2002-07-18 10:28:00 +00:00
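
A userland demonstration of the truncation (assumes an LP64 system, as
on ia64; uimin/lmin_ are local analogues of the kernel's min() and
lmin()):

#include <stdio.h>

static unsigned int
uimin(unsigned int a, unsigned int b)
{
        return (a < b ? a : b);
}

static long
lmin_(long a, long b)
{
        return (a < b ? a : b);
}

int
main(void)
{
        long npages = 4294967296L;      /* 2^32: truncates to 0 as u_int */
        long limit = 1000000L;

        printf("u_int min: %u\n", uimin(npages, limit));  /* 0 -- wrong */
        printf("long min:  %ld\n", lmin_(npages, limit)); /* 1000000 */
        return (0);
}
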
Alan Cox
93bc4879e6 o Assert GIANT_REQUIRED on system maps in _vm_map_lock(),
_vm_map_lock_read(), and _vm_map_trylock().  Submitted by: tegge
 o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup().
   (This clears the way for exec_map accesses to move outside of Giant.
   The exec_map is not a system map.)
 o Remove some premature MPSAFE comments.

Reviewed by:	tegge
2002-07-12 23:20:06 +00:00
Alan Cox
9688f93163 o Add a "needs wakeup" flag to the vm_map for use by kmem_alloc_wait()
and kmem_free_wakeup().  Previously, kmem_free_wakeup() always
   called wakeup().  In general, no one was sleeping.
 o Export vm_map_unlock_and_wait() and vm_map_wakeup() from vm_map.c
   for use in vm_kern.c.
2002-07-11 02:39:24 +00:00
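
The flag's point, in a small sketch (names hypothetical): record that a
waiter exists, and only pay for wakeup() when one actually does:

#define MAP_NEEDS_WAKEUP        0x1

struct map_like {
        int flags;
};

static void
wait_for_space(struct map_like *map)
{
        map->flags |= MAP_NEEDS_WAKEUP;
        /* vm_map_unlock_and_wait(map, ...); */
}

static void
space_freed(struct map_like *map)
{
        if (map->flags & MAP_NEEDS_WAKEUP) {    /* usually false */
                map->flags &= ~MAP_NEEDS_WAKEUP;
                /* vm_map_wakeup(map); */
        }
}
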
Alan Cox
22a97b04de o Make the reservation of KVA space for kernel map entries a function
of the KVA space's size in addition to the amount of physical memory
   and reduce it by a factor of two.

Under the old formula, our reservation amounted to one kernel map entry
per virtual page in the KVA space on a 4GB i386.
2002-07-03 19:16:37 +00:00
Ian Dowse
23f09d50bb Avoid using the 64-bit vm_pindex_t in a few places where 64-bit
types are not required, as the overhead is unnecessary:

 o In the i386 pmap_protect(), `sindex' and `eindex' represent page
   indices within the 32-bit virtual address space.
 o In swp_pager_meta_build() and swp_pager_meta_ctl(), use a temporary
   variable to store the low few bits of a vm_pindex_t that gets used
   as an array index.
 o vm_uiomove() uses `osize' and `idx' for page offsets within a
   map entry.
 o In vm_object_split(), `idx' is a page offset within a map entry.
2002-06-26 20:32:51 +00:00
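
For instance, the swap-pager case reduces to masking the low bits once
into a narrow temporary (the type and size here are illustrative):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t pindex_t;      /* stands in for vm_pindex_t */

#define SWAP_META_PAGES 16      /* illustrative array size */

int
main(void)
{
        pindex_t pindex = 0x123456789abcULL;
        /* Only the low bits pick the slot; keep them in a cheap 32-bit temp. */
        uint32_t idx = pindex % SWAP_META_PAGES;

        printf("slot %u\n", (unsigned)idx);
        return (0);
}
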
Matthew Dillon
a69ac1740f Enforce RLIMIT_VMEM on growable mappings (aka the primary stack or any
MAP_STACK mapping).

Suggested by:	alc
2002-06-26 03:13:46 +00:00
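
A sketch of the check's shape, written against the userland
getrlimit(2) API for brevity (the kernel consults the process's
resource limits directly; RLIMIT_VMEM is the FreeBSD name):

#include <sys/types.h>
#include <sys/resource.h>

static int
grow_allowed(size_t map_size, size_t grow_amount)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_VMEM, &rl) != 0)
                return (0);
        /* refuse growth that would push total mapped size past the limit */
        return (map_size + grow_amount <= rl.rlim_cur);
}
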