Commit Graph

286 Commits

Alan Cox
09c80124a3 Remove ENABLE_VFS_IOOPT. It is a long unfinished work-in-progress.
Discussed on:	arch@
2003-03-06 03:41:02 +00:00
Warner Losh
a163d034fa Back out M_* changes, per decision of the TRB.
Approved by: trb
2003-02-19 05:47:46 +00:00
Alan Cox
814f5c92d7 Remove the acquisition and release of Giant around pmap_growkernel().
It's unnecessary for two reasons: (1) Giant is at present already held in
such cases and (2) our various implementations of pmap_growkernel() look to
be MP safe.  (For example, for sparc64 the proof of (2) is trivial.)
2003-02-15 20:01:09 +00:00
Alan Cox
d923c5986e Add MTX_DUPOK to the initialization of system map locks. 2003-01-25 18:45:55 +00:00
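
A minimal sketch of the flagged initialization (the field name follows later vm_map code and is illustrative). MTX_DUPOK suppresses WITNESS "duplicate lock of the same type" warnings, which matters here because one system map's lock can be taken while another is already held:

    mtx_init(&map->system_mtx, "system map", NULL, MTX_DEF | MTX_DUPOK);
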
Alfred Perlstein
44956c9863 Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0.
Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
2003-01-21 08:56:16 +00:00
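
Under the convention this change introduced (later backed out, per the TRB entry above), blocking callers pass 0 and non-blocking callers pass M_NOWAIT. A hedged sketch against the malloc(9) of that vintage:

    p = malloc(len, M_TEMP, M_NOWAIT);  /* returns NULL rather than sleep */
    if (p == NULL)
            return (ENOMEM);
    q = malloc(len, M_TEMP, 0);         /* 0 == "may sleep" in this scheme */
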
Matthew Dillon
2d5c7e4506 Close the remaining user address mapping races for physical
I/O, CAM, and AIO.  Still TODO: streamline useracc() checks.

Reviewed by:	alc, tegge
MFC after:	7 days
2003-01-20 17:46:48 +00:00
Matthew Dillon
3db161e079 It is possible for an active aio to prevent shared memory from being
dereferenced when a process exits due to the vmspace ref-count being
bumped.  Change shmexit() and shmexit_myhook() to take a vmspace instead
of a process and call it in vmspace_dofree().  This way if it is missed
in exit1()'s early-resource-free it will still be caught when the zombie is
reaped.

Also fix a potential race in shmexit_myhook() by NULLing out
vmspace->vm_shm prior to calling shm_delete_mapping() and free().

MFC after:	7 days
2003-01-13 23:04:32 +00:00
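
A condensed sketch of the NULL-before-free pattern from the second paragraph (the per-segment cleanup loop is elided; M_SHM is sysv_shm's malloc type):

    static void
    shmexit_myhook(struct vmspace *vm)
    {
            struct shmmap_state *base = vm->vm_shm;

            if (base != NULL) {
                    vm->vm_shm = NULL;      /* clear first: a racing caller
                                               now sees NULL and returns */
                    /* ... shm_delete_mapping() on each attached segment ... */
                    free(base, M_SHM);
            }
    }
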
Alan Cox
a6864937e2 Lock the vm object when performing vm_object_clear_flag(). 2003-01-03 09:15:43 +00:00
Alan Cox
36daaecd04 Implement a variant locking scheme for vm maps: Access to system maps
is now synchronized by a mutex, whereas access to user maps is still
synchronized by a lockmgr()-based lock.  Why?  No single type of lock,
including sx locks, meets the requirements of both types of vm map.
Sometimes we sleep while holding the lock on a user map.  Thus, a
mutex isn't appropriate.  On the other hand, both lockmgr()-based
and sx locks release Giant when a thread/process blocks during
contention for a lock.  This could lead to a race condition in a legacy
driver (that relies on Giant for synchronization) if it attempts to
kmem_malloc() and fails to immediately obtain the lock.  Fortunately,
we never sleep while holding a system map lock.
2002-12-31 19:38:04 +00:00
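
The dispatch this describes, roughly (a simplified sketch; the real lock routines also pass file/line for WITNESS and maintain the map timestamp):

    void
    _vm_map_lock(vm_map_t map)
    {
            if (map->system_map)
                    mtx_lock(&map->system_mtx);   /* never slept on while held */
            else
                    lockmgr(&map->lock, LK_EXCLUSIVE, NULL, curthread);
    }
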
Alan Cox
3a92e5d5e9 - Increment the vm_map's timestamp if _vm_map_trylock() succeeds.
- Introduce map_sleep_mtx and use it to replace Giant in
   vm_map_unlock_and_wait() and vm_map_wakeup().  (Original
   version by: tegge.)
2002-12-30 00:41:33 +00:00
Alan Cox
e3a9e1b2a8 - Remove vm_object_init2(). It is unused.
- Add a mtx_destroy() to vm_object_collapse().  (This allows a bzero()
   to migrate from _vm_object_allocate() to vm_object_zinit(), where it
   will be performed less often.)
2002-12-29 21:01:14 +00:00
Matthew Dillon
389d2b6e21 Fix a refcount race with the vmspace structure. In order to prevent
resource starvation we clean up as much of the vmspace structure as we
can when the last process using it exits.  The rest of the structure
is cleaned up when it is reaped.  But since exit1() decrements the ref
count it is possible for a double-free to occur if someone else, such as
the process swapout code, references and then dereferences the structure.
Additionally, the final cleanup of the structure should not occur until
the last process referencing it is reaped.

This commit solves the problem by introducing a secondary reference count,
called 'vm_exitingcnt'.  The normal reference count is decremented on exit
and vm_exitingcnt is incremented.  vm_exitingcnt is decremented when the
process is reaped.  When both vm_exitingcnt and vm_refcnt are 0, the
structure is freed for real.

MFC after:	3 weeks
2002-12-15 18:50:04 +00:00
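
The two-counter hand-off, condensed from the description above (locking elided):

    /* exit1(): move the exiting process's reference to the zombie. */
    vm->vm_exitingcnt++;
    vm->vm_refcnt--;        /* early resource free may run; the final free may not */

    /* wait(): reaping drops the last piece; free only when both reach 0. */
    if (--vm->vm_exitingcnt == 0 && vm->vm_refcnt == 0)
            vmspace_dofree(vm);
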
Alan Cox
5e83956af5 Perform vm_object_lock() and vm_object_unlock() around
vm_object_page_remove().
2002-12-15 07:16:51 +00:00
Alan Cox
bc105a6797 Hold the page queues lock when calling pmap_protect(); it updates fields
of the vm_page structure.  Make the style of the pmap_protect() calls
consistent.

Approved by:	re (blanket)
2002-12-01 18:57:56 +00:00
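
The pattern in question, with arguments elided (vm_page_lock_queues() was the global page-queues lock API of the era):

    vm_page_lock_queues();
    pmap_protect(map->pmap, start, end, new_prot);
    vm_page_unlock_queues();
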
Alan Cox
85e03a7e1e Acquire and release the page queues lock around calls to pmap_protect()
because it updates flags within the vm page.

Approved by:	re (blanket)
2002-11-25 22:00:31 +00:00
Alan Cox
f6116791a2 Fix an error case in vm_map_wire(): the cleanup unwiring of an entry
after a user-wire error fails when the entry is already system wired.

Reported by:	tegge
2002-11-09 21:26:49 +00:00
Maxime Henrion
cd034a5be9 Correctly print vm_offset_t types. 2002-11-07 22:49:07 +00:00
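
vm_offset_t is 32 bits on i386 but 64 bits on alpha, sparc64, and ia64, so no fixed-width conversion specifier is right everywhere. The era-typical fix (a general C idiom, not necessarily this commit's exact diff) is a cast to a machine-word type:

    printf("entry at %#lx\n", (u_long)offset);  /* u_long is word-sized on
                                                   all supported platforms */
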
Poul-Henning Kamp
af045176d1 Properly put macro args in ().
Spotted by:	FlexeLint.
2002-10-16 10:52:15 +00:00
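
The classic failure mode, for reference (a generic example, not FlexeLint's actual finding):

    #define TWICE_BAD(x)    x * 2           /* argument not parenthesized */
    #define TWICE(x)        ((x) * 2)

    int a = TWICE_BAD(1 + 2);       /* expands to 1 + 2 * 2 == 5: wrong */
    int b = TWICE(1 + 2);           /* expands to ((1 + 2) * 2) == 6 */
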
Matthew N. Dodd
4a2eca23ca Modify vm_map_clean() (and thus the msync(2) system call) to support
invalidation of cached pages for objects of type OBJT_DEVICE.

Submitted by:	Christian Zander <zander@minion.de>
Approved by:	alc
2002-09-22 08:22:32 +00:00
Jake Burkholder
05ba50f522 Use the fields in the sysentvec and in the vm map header in place of the
constants VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS, USRSTACK and PS_STRINGS.
This is mainly so that they can be variable even for the native abi, based
on different machine types.  Get stack protections from the sysentvec too.
This makes it trivial to map the stack non-executable for certain abis, on
machines that support it.
2002-09-21 22:07:17 +00:00
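
Illustrative use of the per-ABI fields (the field names follow struct sysentvec; the call sites here are assumptions):

    struct sysentvec *sv = p->p_sysent;

    stack_top  = sv->sv_usrstack;    /* formerly the USRSTACK constant */
    arginfo    = sv->sv_psstrings;   /* formerly PS_STRINGS */
    stack_prot = sv->sv_stackprot;   /* per-ABI; may omit VM_PROT_EXECUTE */
    newvm      = vmspace_alloc(sv->sv_minuser, sv->sv_maxuser);
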
Alan Cox
4eaa117956 o Use vm_object_lock() in place of Giant when manipulating a vm object
in vm_map_insert().
2002-08-24 17:52:08 +00:00
Alan Cox
ef594d3186 o Merge vm_fault_wire() and vm_fault_user_wire() by adding a new parameter,
user_wire.
2002-07-24 19:47:56 +00:00
Peter Wemm
3ebc124838 Infrastructure tweaks to allow having both an Elf32 and an Elf64 executable
handler in the kernel at the same time.  Also, allow for the
exec_new_vmspace() code to build a different sized vmspace depending on
the executable environment.  This is a big help for execing i386 binaries
on ia64.   The ELF exec code grows the ability to map partial pages when
there is a page size difference, eg: emulating 4K pages on 8K or 16K
hardware pages.

Flesh out the i386 emulation support for ia64.  At this point, the only
binary that I know of that fails is cvsup, because the cvsup runtime
tries to execute code in pages not marked executable.

Obtained from:  dfr (mostly, many tweaks from me).
2002-07-20 02:56:12 +00:00
Peter Wemm
9e7c1bce60 (VM_MAX_KERNEL_ADDRESS - KERNBASE) / PAGE_SIZE may not fit in an integer.
Use lmin(long, long), not min(u_int, u_int).  This is a problem here on
ia64 which has *way* more than 2^32 pages of KVA.  281474976710655 pages
to be precise.
2002-07-18 10:28:00 +00:00
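
The difference, spelled out with libkern's min() and lmin() (variable names are illustrative):

    count = min(nkpages, limit);    /* u_int arithmetic: silently truncates
                                       anything above 2^32 - 1 */
    count = lmin(nkpages, limit);   /* long arithmetic: holds an ia64-sized
                                       page count */
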
Alan Cox
93bc4879e6 o Assert GIANT_REQUIRED on system maps in _vm_map_lock(),
_vm_map_lock_read(), and _vm_map_trylock().  Submitted by: tegge
 o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup().
   (This clears the way for exec_map accesses to move outside of Giant.
   The exec_map is not a system map.)
 o Remove some premature MPSAFE comments.

Reviewed by:	tegge
2002-07-12 23:20:06 +00:00
Alan Cox
9688f93163 o Add a "needs wakeup" flag to the vm_map for use by kmem_alloc_wait()
and kmem_free_wakeup().  Previously, kmem_free_wakeup() always
   called wakeup().  In general, no one was sleeping.
 o Export vm_map_unlock_and_wait() and vm_map_wakeup() from vm_map.c
   for use in vm_kern.c.
2002-07-11 02:39:24 +00:00
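
A sketch of the flag's use, condensed from the message (function bodies elided):

    /* kmem_alloc_wait(), before sleeping for address space: */
    map->needs_wakeup = TRUE;
    vm_map_unlock_and_wait(map, FALSE);

    /* kmem_free_wakeup(), after releasing space: */
    if (map->needs_wakeup) {        /* previously an unconditional wakeup() */
            map->needs_wakeup = FALSE;
            vm_map_wakeup(map);
    }
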
Alan Cox
22a97b04de o Make the reservation of KVA space for kernel map entries a function
of the KVA space's size in addition to the amount of physical memory
   and reduce it by a factor of two.

Under the old formula, our reservation amounted to one kernel map entry
per virtual page in the KVA space on a 4GB i386.
2002-07-03 19:16:37 +00:00
Ian Dowse
23f09d50bb Avoid using the 64-bit vm_pindex_t in a few places where 64-bit
types are not required, as the overhead is unnecessary:

 o In the i386 pmap_protect(), `sindex' and `eindex' represent page
   indices within the 32-bit virtual address space.
 o In swp_pager_meta_build() and swp_pager_meta_ctl(), use a temporary
   variable to store the low few bits of a vm_pindex_t that gets used
   as an array index.
 o vm_uiomove() uses `osize' and `idx' for page offsets within a
   map entry.
 o In vm_object_split(), `idx' is a page offset within a map entry.
2002-06-26 20:32:51 +00:00
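
A hypothetical rendering of the swap-pager case from the second bullet: only the low bits ever index the array, so one narrowing assignment keeps the loop in 32-bit arithmetic:

    int i = pindex & SWAP_META_MASK;     /* vm_pindex_t narrowed once */
    swblock->swb_pages[i] = blk;         /* int indexing from here on */
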
Matthew Dillon
a69ac1740f Enforce RLIMIT_VMEM on growable mappings (aka the primary stack or any
MAP_STACK mapping).

Suggested by:	alc
2002-06-26 03:13:46 +00:00
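
The check this implies in the stack-grow path, as a sketch (the exact expression is an assumption):

    /* vm_map_growstack(): refuse growth past the process VM limit. */
    if (map->size + grow_amount > p->p_rlimit[RLIMIT_VMEM].rlim_cur)
            return (KERN_NO_SPACE);
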
Alan Cox
409748276e o In vm_map_insert(), replace GIANT_REQUIRED by the acquisition and
release of Giant around the direct manipulation of the vm_object and
   the optional call to pmap_object_init_pt().
 o In vm_map_findspace(), remove GIANT_REQUIRED.  Instead, acquire and
   release Giant around the occasional call to pmap_growkernel().
 o In vm_map_find(), remove GIANT_REQUIRED.
2002-06-22 17:47:12 +00:00
Alan Cox
27168693db o Remove GIANT_REQUIRED from vm_map_stack(). 2002-06-21 06:03:47 +00:00
Alan Cox
00e1854a1f o Replace GIANT_REQUIRED in vm_object_coalesce() by the acquisition and
release of Giant.
 o Reduce the scope of GIANT_REQUIRED in vm_map_insert().

These changes will enable us to remove the acquisition and release
of Giant from obreak().
2002-06-19 06:02:03 +00:00
Alan Cox
515630b12f o Remove LK_CANRECURSE from the vm_map lock. 2002-06-18 18:31:35 +00:00
Jeff Roberson
18aa2de5a7 - Introduce the new M_NOVM option which tells uma to only check the currently
allocated slabs and bucket caches for free items.  It will not go ask the vm
  for pages.  This differs from M_NOWAIT in that it not only doesn't block, it
  doesn't even ask.

- Add a new zcreate option ZONE_VM that sets the BUCKETCACHE zflag.  This
  tells uma that it should only allocate buckets out of the bucket cache, and
  not from the VM.  It does this by using the M_NOVM option to zalloc when
  getting a new bucket.  This is so that the VM doesn't recursively enter
  itself while trying to allocate buckets for vm_map_entry zones.  If there
  are already allocated buckets when we get here we'll still use them but
  otherwise we'll skip it.

- Use the ZONE_VM flag on vm map entries and pv entries on x86.
2002-06-17 22:02:41 +00:00
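
Both pieces as a hedged sketch (the zcreate flag is spelled UMA_ZONE_VM in later trees; treat the exact names as approximate):

    /* A zone whose allocations must never recurse into the VM: */
    zone = uma_zcreate("MAP ENTRY", sizeof(struct vm_map_entry),
        NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_VM);

    /* M_NOVM: hand out only already-cached items; never ask for pages. */
    entry = uma_zalloc(zone, M_NOVM);
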
Alan Cox
b49ecb86d0 o Acquire and release Giant in vm_map_wakeup() to prevent
a lost wakeup().

Reviewed by:	tegge
2002-06-17 13:27:40 +00:00
Alan Cox
1d7cf06c8c o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and
vm_map_user_pageable().
 o Remove vm_map_pageable() and vm_map_user_pageable().
 o Remove vm_map_clear_recursive() and vm_map_set_recursive().  (They were
   only used by vm_map_pageable() and vm_map_user_pageable().)

Reviewed by:	tegge
2002-06-14 18:21:01 +00:00
Alan Cox
d46e7d6bee o Acquire and release Giant in vm_map_unlock_and_wait().
Submitted by:	tegge
2002-06-12 08:15:52 +00:00
Alan Cox
28c58286ef o Properly handle a failure by vm_fault_wire() or vm_fault_user_wire()
in vm_map_wire().
 o Make two white-space changes in vm_map_wire().

Reviewed by:	tegge
2002-06-11 19:13:59 +00:00
Alan Cox
73b2bace26 o Teach vm_map_delete() to respect the "in-transition" flag
on a vm_map_entry by sleeping until the flag is cleared.

Submitted by:	tegge
2002-06-11 05:24:22 +00:00
Alan Cox
2b4a2c272d o In vm_map_entry_create(), call uma_zalloc() with M_NOWAIT on system maps.
Submitted by: tegge
 o Eliminate the "!mapentzone" check from vm_map_entry_create() and
   vm_map_entry_dispose().  Reviewed by: tegge
 o Fix white-space usage in vm_map_entry_create().
2002-06-10 06:11:45 +00:00
Alan Cox
12d7cc840f o Add vm_map_wire() for wiring contiguous regions of either kernel
or user vm_maps.  This implementation has two key benefits when compared
   to vm_map_{user_,}pageable(): (1) it avoids a race condition through
   the use of "in-transition" vm_map entries and (2) it eliminates lock
   recursion on the vm_map.

Note: there is still an error case that requires clean up.

Reviewed by:	tegge
2002-06-09 20:25:18 +00:00
Alan Cox
b2f3846aef o Simplify vm_map_unwire() by merging the second and third passes
over the caller-specified region.
2002-06-08 19:00:40 +00:00
Alan Cox
e27e17b711 o Remove an unnecessary call to vm_map_wakeup() from vm_map_unwire().
o Add a stub for vm_map_wire().

Note: the description of the previous commit had an error.  The in-
transition flag actually blocks the deallocation of a vm_map_entry by
vm_map_delete() and vm_map_simplify_entry().
2002-06-08 07:32:38 +00:00
Alan Cox
acd9a301ec o Add vm_map_unwire() for unwiring contiguous regions of either kernel
or user vm_maps.  In accordance with the standards for munlock(2),
   and in contrast to vm_map_user_pageable(), this implementation does not
   allow holes in the specified region.  This implementation uses the
   "in transition" flag described below.
 o Introduce a new flag, "in transition," to the vm_map_entry.
   Eventually, vm_map_delete() and vm_map_simplify_entry() will respect
   this flag by deallocating in-transition vm_map_entrys, allowing
   the vm_map lock to be safely released in vm_map_unwire() and (the
   forthcoming) vm_map_wire().
 o Modify vm_map_simplify_entry() to respect the in-transition flag.

In collaboration with:	tegge
2002-06-07 18:34:23 +00:00
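
A condensed sketch of the protocol (flag spellings follow vm_map.h; the wiring work itself is elided):

    entry->eflags |= MAP_ENTRY_IN_TRANSITION;
    vm_map_unlock(map);             /* safe: the marked entry will not be
                                       deleted or merged meanwhile */
    /* ... wire or unwire the entry's pages without the map lock ... */
    vm_map_lock(map);
    entry->eflags &= ~MAP_ENTRY_IN_TRANSITION;
    if (entry->eflags & MAP_ENTRY_NEEDS_WAKEUP) {
            entry->eflags &= ~MAP_ENTRY_NEEDS_WAKEUP;
            vm_map_wakeup(map);     /* wake threads waiting on this entry */
    }
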
Alan Cox
c5aaa06ded o Migrate vm_map_split() from vm_map.c to vm_object.c, renaming it
to vm_object_split().  Its interface should still be changed
   to resemble vm_object_shadow().
2002-06-02 23:54:09 +00:00
Alan Cox
0d78c0dce2 o Style fixes to vm_map_split(), including the elimination of one variable
declaration that shadows another.

Note: This function should really be vm_object_split(), not vm_map_split().

Reviewed by:	md5
2002-06-02 19:32:05 +00:00
Alan Cox
61c075b67f o Remove GIANT_REQUIRED from vm_map_zfini(), vm_map_zinit(),
vm_map_create(), and vm_map_submap().
 o Make further use of a local variable in vm_map_entry_splay()
   that caches a reference to one of a vm_map_entry's children.
   (This reduces code size somewhat.)
 o Revert a part of revision 1.66, deinlining vmspace_pmap().
   (This function is MPSAFE.)
2002-06-01 22:41:43 +00:00
Alan Cox
794316a866 o Revert a part of revision 1.66; contrary to what that commit message says,
deinlining vm_map_entry_behavior() and vm_map_entry_set_behavior()
   actually increases the kernel's size.
 o Make vm_map_entry_set_behavior() static and add a comment describing
   its purpose.
 o Remove an unnecessary initialization statement from vm_map_entry_splay().
2002-06-01 16:59:30 +00:00
Alan Cox
9917e01041 Further work on pushing Giant out of the vm_map layer and down
into the vm_object layer:
 o Acquire and release Giant in vm_object_shadow() and
   vm_object_page_remove().
 o Remove the GIANT_REQUIRED assertion preceding vm_map_delete()'s call
   to vm_object_page_remove().
 o Remove the acquisition and release of Giant around vm_map_lookup()'s
   call to vm_object_shadow().
2002-05-31 03:48:55 +00:00
Alan Cox
4b9fdc2bce o Acquire and release Giant around pmap operations in vm_fault_unwire()
and vm_map_delete().  Assert GIANT_REQUIRED in vm_map_delete()
   only if operating on the kernel_object or the kmem_object.
 o Remove GIANT_REQUIRED from vm_map_remove().
 o Remove the acquisition and release of Giant from munmap().
2002-05-26 04:54:56 +00:00