Commit Graph

1207 Commits

Author SHA1 Message Date
alc
c3f3773d48 o Properly handle a failure by vm_fault_wire() or vm_fault_user_wire()
in vm_map_wire().
 o Make two white-space changes in vm_map_wire().

Reviewed by:	tegge
2002-06-11 19:13:59 +00:00
alc
d9429c7e6e o Teach vm_map_delete() to respect the "in-transition" flag
on a vm_map_entry by sleeping until the flag is cleared.

Submitted by:	tegge
2002-06-11 05:24:22 +00:00
alc
319e2fb2b8 o In vm_map_entry_create(), call uma_zalloc() with M_NOWAIT on system maps.
Submitted by: tegge
 o Eliminate the "!mapentzone" check from vm_map_entry_create() and
   vm_map_entry_dispose().  Reviewed by: tegge
 o Fix white-space usage in vm_map_entry_create().
2002-06-10 06:11:45 +00:00
iedowse
02040b5ae2 Correct the logic for determining whether the per-CPU locks need
to be destroyed. This fixes a problem where destroying a UMA zone
would fail to destroy all zone mutexes.

Reviewed by:	jeff
2002-06-10 03:25:23 +00:00
alc
e780ef4712 o Add vm_map_wire() for wiring contiguous regions of either kernel
or user vm_maps.  This implementation has two key benefits when compared
   to vm_map_{user_,}pageable(): (1) it avoids a race condition through
   the use of "in-transition" vm_map entries and (2) it eliminates lock
   recursion on the vm_map.

Note: there is still an error case that requires clean up.

Reviewed by:	tegge
2002-06-09 20:25:18 +00:00
alc
564b5ce457 o Simplify vm_map_unwire() by merging the second and third passes
over the caller-specified region.
2002-06-08 19:00:40 +00:00
alc
75c335726c o Remove an unnecessary call to vm_map_wakeup() from vm_map_unwire().
o Add a stub for vm_map_wire().

Note: the description of the previous commit had an error.  The in-
transition flag actually blocks the deallocation of a vm_map_entry by
vm_map_delete() and vm_map_simplify_entry().
2002-06-08 07:32:38 +00:00
alc
ee9168748e o Add vm_map_unwire() for unwiring contiguous regions of either kernel
or user vm_maps.  In accordance with the standards for munlock(2),
   and in contrast to vm_map_user_pageable(), this implementation does not
   allow holes in the specified region.  This implementation uses the
   "in transition" flag described below.
 o Introduce a new flag, "in transition," to the vm_map_entry.
   Eventually, vm_map_delete() and vm_map_simplify_entry() will respect
   this flag by deallocating in-transition vm_map_entries, allowing
   the vm_map lock to be safely released in vm_map_unwire() and (the
   forthcoming) vm_map_wire().
 o Modify vm_map_simplify_entry() to respect the in-transition flag.

In collaboration with:	tegge
2002-06-07 18:34:23 +00:00
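The "in transition" protocol introduced in the commit above amounts to: mark the entries being operated on, drop the map lock while the slow work runs, then reacquire the lock, clear the flag, and wake anyone who blocked on it; a thread that encounters a marked entry sleeps until the flag clears. Below is a minimal userland sketch of that hand-off using a pthread mutex and condition variable; the structure and function names are illustrative, not the kernel's vm_map code.

    #include <pthread.h>
    #include <stdbool.h>

    struct map_entry {
        bool in_transition;     /* set while another thread is wiring/unwiring */
        bool needs_wakeup;      /* a waiter went to sleep on this entry */
        /* ... address range, wiring count, etc. ... */
    };

    struct map {
        pthread_mutex_t lock;   /* stands in for the vm_map lock */
        pthread_cond_t  cv;
    };

    /* Wire/unwire side: mark the entry, release the map lock around the slow
     * work, then reacquire it, clear the flag, and wake any sleepers. */
    static void
    entry_transition(struct map *m, struct map_entry *e,
        void (*slow_work)(struct map_entry *))
    {
        pthread_mutex_lock(&m->lock);
        e->in_transition = true;
        pthread_mutex_unlock(&m->lock);

        slow_work(e);                   /* e.g. fault in and wire the pages */

        pthread_mutex_lock(&m->lock);
        e->in_transition = false;
        if (e->needs_wakeup) {
            e->needs_wakeup = false;
            pthread_cond_broadcast(&m->cv);
        }
        pthread_mutex_unlock(&m->lock);
    }

    /* Delete/simplify side: an in-transition entry must not be touched; sleep
     * until the flag is cleared, then proceed with the map lock held. */
    static void
    entry_wait_and_modify(struct map *m, struct map_entry *e)
    {
        pthread_mutex_lock(&m->lock);
        while (e->in_transition) {
            e->needs_wakeup = true;
            pthread_cond_wait(&m->cv, &m->lock);
        }
        /* ... safe to deallocate or simplify the entry here ... */
        pthread_mutex_unlock(&m->lock);
    }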
alfred
63b8dfa515 fix typo in _SYS_SYSPROTO_H_ case: s/mlockall_args/munlockall_args
Submitted by: Mark Santcroos <marks@ripe.net>
2002-06-06 18:51:14 +00:00
jeff
d9ab0c8dbc Add a comment describing a resource leak that occurs during a failure case
in obj_alloc.
2002-06-03 22:59:19 +00:00
alc
d75c291207 o Migrate vm_map_split() from vm_map.c to vm_object.c, renaming it
to vm_object_split().  Its interface should still be changed
   to resemble vm_object_shadow().
2002-06-02 23:54:09 +00:00
alc
02c58f2a49 o Style fixes to vm_map_split(), including the elimination of one variable
declaration that shadows another.

Note: This function should really be vm_object_split(), not vm_map_split().

Reviewed by:	md5
2002-06-02 19:32:05 +00:00
alc
6c9e59ad00 o Condition vm_object_pmap_copy_1()'s compilation on the kernel
option ENABLE_VFS_IOOPT.  Unless this option is in effect,
   vm_object_pmap_copy_1() is not used.
2002-06-02 06:31:41 +00:00
alc
2abbbe7b8a o Remove GIANT_REQUIRED from vm_map_zfini(), vm_map_zinit(),
vm_map_create(), and vm_map_submap().
 o Make further use of a local variable in vm_map_entry_splay()
   that caches a reference to one of a vm_map_entry's children.
   (This reduces code size somewhat.)
 o Revert a part of revision 1.66, deinlining vmspace_pmap().
   (This function is MPSAFE.)
2002-06-01 22:41:43 +00:00
alc
b55171a0a3 o Revert a part of revision 1.66; contrary to what that commit message says,
deinlining vm_map_entry_behavior() and vm_map_entry_set_behavior()
   actually increases the kernel's size.
 o Make vm_map_entry_set_behavior() static and add a comment describing
   its purpose.
 o Remove an unnecessary initialization statement from vm_map_entry_splay().
2002-06-01 16:59:30 +00:00
des
ca02bcf7fc Export nswapdev through sysctl(8).
Sponsored by:	DARPA, NAI Labs
2002-05-31 08:17:58 +00:00
alc
0b8e31ba85 Further work on pushing Giant out of the vm_map layer and down
into the vm_object layer:
 o Acquire and release Giant in vm_object_shadow() and
   vm_object_page_remove().
 o Remove the GIANT_REQUIRED assertion preceding vm_map_delete()'s call
   to vm_object_page_remove().
 o Remove the acquisition and release of Giant around vm_map_lookup()'s
   call to vm_object_shadow().
2002-05-31 03:48:55 +00:00
alfred
26c9c27f03 Check for defined(__i386__) instead of just defined(i386), since the compiler
will be updated to define only __i386__ for ANSI cleanliness.
2002-05-30 07:32:58 +00:00
peter
929921851c The kernel printf does not have %i. 2002-05-29 08:25:13 +00:00
alc
06c3939cfb o Remove unused #defines. 2002-05-27 22:10:28 +00:00
alc
642723e24c o Acquire and release Giant around pmap operations in vm_fault_unwire()
and vm_map_delete().  Assert GIANT_REQUIRED in vm_map_delete()
   only if operating on the kernel_object or the kmem_object.
 o Remove GIANT_REQUIRED from vm_map_remove().
 o Remove the acquisition and release of Giant from munmap().
2002-05-26 04:54:56 +00:00
alc
480071f11a o Replace the vm_map's hint by the root of a splay tree. By design,
the last accessed datum is moved to the root of the splay tree.
   Therefore, on lookups in which the hint resulted in O(1) access,
   the splay tree still achieves O(1) access.  In contrast, on lookups
   in which the hint failed miserably, the splay tree achieves amortized
   logarithmic complexity, resulting in dramatic improvements on vm_maps
   with a large number of entries.  For example, the execution time
   for replaying an access log from www.cs.rice.edu against the thttpd
   web server was reduced by 23.5% due to the large number of files
   simultaneously mmap()ed by this server.  (The machine in question has
   enough memory to cache most of this workload.)

   Nothing comes for free: At present, I see a 0.2% slowdown on "buildworld"
   due to the overhead of maintaining the splay tree.  I believe that
   some or all of this can be eliminated through optimizations
   to the code.

Developed in collaboration with: Juan E Navarro <jnavarro@cs.rice.edu>
Reviewed by:	jeff
2002-05-24 01:33:24 +00:00
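The payoff described above comes from the defining property of a splay tree: every lookup rotates the accessed node to the root, so repeating the last lookup (the old hint's best case) stays O(1), while arbitrary lookups are amortized O(log n). Below is a minimal, integer-keyed, userland top-down splay to illustrate the rotate-to-root idea; it is a generic sketch, not the kernel's vm_map_entry_splay(), which operates on address ranges within the vm_map.

    #include <stddef.h>

    struct node {
        int key;
        struct node *left, *right;
    };

    /* Classic top-down splay: after this returns, the node with 'key' (or the
     * last node visited while searching for it) is the root. */
    static struct node *
    splay(struct node *root, int key)
    {
        struct node n = { 0, NULL, NULL }, *l, *r, *y;

        if (root == NULL)
            return NULL;
        l = r = &n;
        for (;;) {
            if (key < root->key) {
                if (root->left == NULL)
                    break;
                if (key < root->left->key) {    /* zig-zig: rotate right */
                    y = root->left;
                    root->left = y->right;
                    y->right = root;
                    root = y;
                    if (root->left == NULL)
                        break;
                }
                r->left = root;                 /* link right */
                r = root;
                root = root->left;
            } else if (key > root->key) {
                if (root->right == NULL)
                    break;
                if (key > root->right->key) {   /* zag-zag: rotate left */
                    y = root->right;
                    root->right = y->left;
                    y->left = root;
                    root = y;
                    if (root->right == NULL)
                        break;
                }
                l->right = root;                /* link left */
                l = root;
                root = root->right;
            } else
                break;
        }
        l->right = root->left;                  /* reassemble the three trees */
        r->left = root->right;
        root->left = n.right;
        root->right = n.left;
        return root;
    }

    /* Lookup splays the key to the root, so an immediately repeated lookup of
     * the same key (the "hint hit" case) touches only the root. */
    static struct node *
    lookup(struct node **rootp, int key)
    {
        *rootp = splay(*rootp, key);
        return (*rootp != NULL && (*rootp)->key == key) ? *rootp : NULL;
    }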
alc
53ca2106e4 o Make contigmalloc1() static. 2002-05-22 01:01:37 +00:00
jhb
d53ecb9f84 In uma_zalloc_arg(), if we are performing an M_WAITOK allocation, ensure
that td_intr_nesting_level is 0 (like malloc() does).  Since malloc() calls
uma, we can probably remove the check in malloc() for this now.  Also,
perform an extra witness check in that case to make sure we don't hold
any locks when performing an M_WAITOK allocation.
2002-05-20 17:54:48 +00:00
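A sleepable (M_WAITOK) allocation is never legal from a context that cannot sleep, which is what the td_intr_nesting_level check above enforces. Here is a rough userland analogue of that guard, with a thread-local counter standing in for td_intr_nesting_level and a plain assert standing in for the kernel's assertion and witness machinery; none of these names are UMA's.

    #include <assert.h>
    #include <stdlib.h>

    #define M_NOWAIT 0x0001     /* may fail, never sleeps */
    #define M_WAITOK 0x0002     /* may sleep until memory is available */

    /* Greater than zero while executing in an "interrupt-like"
     * (non-sleepable) context. */
    static _Thread_local int intr_nesting_level;

    static void *
    zone_alloc(size_t size, int flags)
    {
        /* An M_WAITOK allocation may sleep; catch callers that request it
         * from a context where sleeping is forbidden. */
        if (flags & M_WAITOK)
            assert(intr_nesting_level == 0);
        return malloc(size);
    }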
alc
cad592a881 o Eliminate the acquisition and release of Giant from minherit(2).
(vm_map_inherit() no longer requires Giant to be held.)
2002-05-18 18:59:00 +00:00
alc
b4282fb943 o Remove GIANT_REQUIRED from vm_map_madvise(). Instead, acquire and
release Giant around vm_map_madvise()'s call to pmap_object_init_pt().
 o Replace GIANT_REQUIRED in vm_object_madvise() with the acquisition
   and release of Giant.
 o Remove the acquisition and release of Giant from madvise().
2002-05-18 07:48:06 +00:00
alc
3438549125 o Remove the acquisition and release of Giant from mprotect(). 2002-05-18 03:58:16 +00:00
trhodes
28d42899b7 More s/file system/filesystem/g 2002-05-16 21:28:32 +00:00
phk
8536ea3cdb Make daddr_t and u_daddr_t 64 bits wide.
Retire daddr64_t and use daddr_t instead.

Sponsored by:	DARPA & NAI Labs.
2002-05-14 11:09:43 +00:00
jeff
7b96796a72 Don't call the uz free function while the zone lock is held. This can lead
to lock order reversals.  uma_reclaim now builds a list of freeable slabs and
then unlocks the zones to do all of the frees.
2002-05-13 05:08:18 +00:00
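The fix described above is the usual pattern for avoiding a lock order reversal: collect the work onto a private list while the lock is held, drop the lock, and only then call the routine that may itself acquire other locks. A minimal sketch of that shape, with illustrative structure names rather than UMA's:

    #include <pthread.h>
    #include <stdlib.h>

    struct slab {
        struct slab *next;
        void        *mem;
        int          nfree;     /* items currently free in this slab */
        int          nitems;    /* total items in this slab */
    };

    struct zone {
        pthread_mutex_t  lock;
        struct slab     *slabs;
        void           (*freef)(void *);    /* may take other locks */
    };

    static void
    zone_reclaim(struct zone *z)
    {
        struct slab *freeable = NULL, *s, **prev;

        /* Pass 1: with the zone lock held, unlink fully free slabs onto a
         * private list, but do not free anything yet. */
        pthread_mutex_lock(&z->lock);
        prev = &z->slabs;
        while ((s = *prev) != NULL) {
            if (s->nfree == s->nitems) {
                *prev = s->next;
                s->next = freeable;
                freeable = s;
            } else {
                prev = &s->next;
            }
        }
        pthread_mutex_unlock(&z->lock);

        /* Pass 2: with the zone lock dropped, call the free routine, which is
         * now free to acquire whatever locks it needs. */
        while ((s = freeable) != NULL) {
            freeable = s->next;
            z->freef(s->mem);
            free(s);
        }
    }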
jeff
9020efbab0 Remove the hash_free() lock order reversal. This could have happened for
several reasons before.  Fixing it involved restructuring the generic hash
code to require calling code to handle locking, unlocking, and freeing hashes
on error conditions.
2002-05-13 04:39:28 +00:00
alc
ceb3fe2c04 o Remove GIANT_REQUIRED and an excessive number of blank lines
from vm_map_inherit().  (minherit() need not acquire Giant
   anymore.)
2002-05-12 18:42:05 +00:00
alc
e7bbea38d9 o Acquire and release Giant in vm_object_reference() and
vm_object_deallocate(), replacing the assertion GIANT_REQUIRED.
 o Remove GIANT_REQUIRED from vm_map_protect() and vm_map_simplify_entry().
 o Acquire and release Giant around vm_map_protect()'s call to pmap_protect().

Altogether, these changes eliminate the need for mprotect() to acquire
and release Giant.
2002-05-12 05:22:56 +00:00
alc
94ec8e207f o Header files shouldn't depend on options: Provide prototypes
for uiomoveco(), uioread(), and vm_uiomove() regardless
   of whether ENABLE_VFS_IOOPT is defined or not.

Submitted by:	bde
2002-05-06 06:20:04 +00:00
alc
d60a525036 o Condition the compilation and use of vm_freeze_copyopts()
on ENABLE_VFS_IOOPT.
2002-05-06 05:45:57 +00:00
alc
33ff01f29d o Some improvements to the page coloring of vm objects, particularly,
for shadow objects.

Submitted by:	bde
2002-05-06 03:34:17 +00:00
alc
c7465abaf7 o Move vm_freeze_copyopts() from vm_map.{c,h} to vm_object.{c,h}. It's plainly
an operation on a vm_object and belongs in the latter place.
2002-05-06 00:12:47 +00:00
alc
c5483b3129 o Condition the compilation of uiomoveco() and vm_uiomove()
on ENABLE_VFS_IOOPT.
 o Add a comment to the effect that this code is experimental
   support for zero-copy I/O.
2002-05-05 22:42:40 +00:00
phk
5020d62430 Expand the one-line function pbreassignbuf() in the only place it is or could
be used.
2002-05-05 20:37:08 +00:00
alc
7e1b68b6e0 o Remove GIANT_REQUIRED from vm_map_lookup() and vm_map_lookup_done().
o Acquire and release Giant around vm_map_lookup()'s call
   to vm_object_shadow().
2002-05-05 05:36:28 +00:00
jeff
926e98b719 Use pages instead of uz_maxpages, which has not been initialized yet, when
creating the vm_object.  This was broken after the code was rearranged to
grab giant itself.

Spotted by:     alc
2002-05-04 21:49:29 +00:00
alc
c281c83bd5 o Make _vm_object_allocate() and vm_object_allocate() callable
without holding Giant.
 o Begin documenting the trivial cases of the locking protocol
   on vm_object.
2002-05-04 20:23:48 +00:00
alc
d44b3a12b3 o Remove GIANT_REQUIRED from vm_map_lookup_entry() and
vm_map_check_protection().
 o Call vm_map_check_protection() without Giant held in munmap().
2002-05-04 02:07:36 +00:00
alc
e8eb438f94 o Change the implementation of vm_map locking to use exclusive locks
exclusively.  The interface still, however, distinguishes
   between a shared lock and an exclusive lock.
2002-05-02 17:32:27 +00:00
jeff
6bfc4bdd96 Hide a pointer to the malloc_type bucket at the end of the freed memory. If
this memory is modified after it has been freed, we can now report its
previous owner.
2002-05-02 09:07:04 +00:00
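The trick described above is to leave an owner pointer in the tail of a freed buffer so that, if a later consistency check finds the junk fill pattern disturbed, the previous owner can still be named. An illustrative userland sketch of that idea (the real code stashes a malloc_type pointer; the names and pattern byte here are made up):

    #include <stdio.h>
    #include <string.h>

    #define JUNK_BYTE 0xdd

    struct owner {
        const char *name;       /* stands in for the malloc_type */
    };

    /* On free: fill the buffer with a junk pattern and hide the owner pointer
     * in the last sizeof(void *) bytes. */
    static void
    debug_free_mark(void *buf, size_t size, const struct owner *who)
    {
        memset(buf, JUNK_BYTE, size);
        memcpy((char *)buf + size - sizeof(who), &who, sizeof(who));
    }

    /* On reuse: if the junk pattern was disturbed, recover the owner pointer
     * from the tail and report who last owned the memory. */
    static void
    debug_free_check(const void *buf, size_t size)
    {
        const unsigned char *p = buf;
        const struct owner *who;
        size_t i;

        memcpy(&who, (const char *)buf + size - sizeof(who), sizeof(who));
        for (i = 0; i < size - sizeof(who); i++) {
            if (p[i] != JUNK_BYTE) {
                printf("modified after free at offset %zu, last owner: %s\n",
                    i, who->name);
                return;
            }
        }
    }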
jeff
fb737bd04b Move around the dbg code a bit so it's always under a lock. This stops a
weird potential race if we were preempted right as we were doing the dbg
checks.
2002-05-02 09:05:36 +00:00
arr
146b277b08 - Changed the size element of uma_zctor_args to be size_t instead of int.
- Changed uma_zcreate to accept the size argument as a size_t instead of
  int.

Approved by:	jeff
2002-05-02 07:36:30 +00:00
jeff
f7f01600de malloc/free(9) no longer require Giant. Use the malloc_mtx to protect the
mallochash.  Mallochash is going to go away as soon as I introduce the
kfree/kmalloc api and partially overhaul the malloc wrapper.  This can't happen
until all users of the malloc api that expect memory to be aligned on the size
of the allocation are fixed.
2002-05-02 07:22:19 +00:00
alc
c1bf294b84 o Remove dead and lockmgr()-specific debugging code. 2002-05-02 02:32:09 +00:00
jeff
b152d5fbb5 Remove the temporary alignment check in free().
Implement the following checks on freed memory in the bucket path:
	- Slab membership
	- Alignment
	- Duplicate free

This previously was only done if we skipped the buckets.  This code will slow
down INVARIANTS a bit, but it is SMP safe.  The checks were moved out of the
normal path and into hooks supplied in uma_dbg.
2002-05-02 02:08:48 +00:00
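The three checks listed above are cheap to express once the slab an item belongs to is known: the item must fall inside the slab, must start on an item boundary, and must not already be marked free. A compact sketch using a per-slab bitmap; these names are illustrative, not uma_dbg's:

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define ITEMS_PER_SLAB 64

    struct dbg_slab {
        char     *base;         /* start of the slab's item area */
        size_t    item_size;
        uint64_t  free_bitmap;  /* bit i set => item i has already been freed */
    };

    static void
    dbg_check_free(struct dbg_slab *s, void *item)
    {
        size_t off = (size_t)((char *)item - s->base);
        size_t idx;

        /* Slab membership: the item must lie within this slab. */
        assert(off < s->item_size * ITEMS_PER_SLAB);

        /* Alignment: the item must start exactly on an item boundary. */
        assert(off % s->item_size == 0);

        /* Duplicate free: the item must not already be marked free. */
        idx = off / s->item_size;
        assert((s->free_bitmap & ((uint64_t)1 << idx)) == 0);
        s->free_bitmap |= (uint64_t)1 << idx;
    }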