Commit Graph

1186 Commits

alc
480071f11a o Replace the vm_map's hint by the root of a splay tree. By design,
the last accessed datum is moved to the root of the splay tree.
   Therefore, on lookups in which the hint resulted in O(1) access,
   the splay tree still achieves O(1) access.  In contrast, on lookups
   in which the hint failed miserably, the splay tree achieves amortized
   logarithmic complexity, resulting in dramatic improvements on vm_maps
   with a large number of entries.  For example, the execution time
   for replaying an access log from www.cs.rice.edu against the thttpd
   web server was reduced by 23.5% due to the large number of files
   simultaneously mmap()ed by this server.  (The machine in question has
   enough memory to cache most of this workload.)

   Nothing comes for free: At present, I see a 0.2% slowdown on "buildworld"
   due to the overhead of maintaining the splay tree.  I believe that
   some or all of this can be eliminated through optimizations
   to the code.

Developed in collaboration with: Juan E Navarro <jnavarro@cs.rice.edu>
Reviewed by:	jeff
2002-05-24 01:33:24 +00:00
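
The core of the splay idea can be sketched in a few lines. The following is an illustrative stand-in, not the committed vm_map code: it uses a simplified entry type and plain move-to-root rotations rather than the full zig-zig/zig-zag splay (which is what actually yields the amortized logarithmic bound), but it shows why a repeated hit on the same entry costs O(1).

    /*
     * Illustrative sketch (assumed names): splay the entry covering
     * "addr" toward the root so that the most recently used entry is
     * found immediately on the next lookup.
     */
    struct map_entry {
            unsigned long start, end;       /* [start, end) range */
            struct map_entry *left, *right;
    };

    static struct map_entry *
    rotate_right(struct map_entry *y)
    {
            struct map_entry *x = y->left;

            y->left = x->right;
            x->right = y;
            return (x);
    }

    static struct map_entry *
    rotate_left(struct map_entry *x)
    {
            struct map_entry *y = x->right;

            x->right = y->left;
            y->left = x;
            return (y);
    }

    /* Move the entry covering addr to the root; O(1) if it already is. */
    static struct map_entry *
    map_splay(struct map_entry *root, unsigned long addr)
    {
            if (root == NULL || (addr >= root->start && addr < root->end))
                    return (root);
            if (addr < root->start) {
                    if (root->left == NULL)
                            return (root);
                    root->left = map_splay(root->left, addr);
                    return (rotate_right(root));
            }
            if (root->right == NULL)
                    return (root);
            root->right = map_splay(root->right, addr);
            return (rotate_left(root));
    }
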
alc
53ca2106e4 o Make contigmalloc1() static. 2002-05-22 01:01:37 +00:00
jhb
d53ecb9f84 In uma_zalloc_arg(), if we are performing an M_WAITOK allocation, ensure
that td_intr_nesting_level is 0 (like malloc() does).  Since malloc() calls
uma we can probably remove the check in malloc() for this now.  Also,
perform an extra witness check in that case to make sure we don't hold
any locks when performing an M_WAITOK allocation.
2002-05-20 17:54:48 +00:00
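
The shape of the check being described is roughly the following sketch; the KASSERT message is invented here, and WITNESS_WARN() stands in for whatever witness interface the commit actually used.

    /* Sketch of the described M_WAITOK sanity checks (not verbatim). */
    if (flags & M_WAITOK) {
            /* Sleeping allocations are illegal in interrupt context. */
            KASSERT(curthread->td_intr_nesting_level == 0,
                ("uma_zalloc_arg: M_WAITOK allocation from interrupt"));
            /* Warn if any non-sleepable locks are held. */
            WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
                "uma_zalloc_arg: M_WAITOK allocation may sleep");
    }
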
alc
cad592a881 o Eliminate the acquisition and release of Giant from minherit(2).
(vm_map_inherit() no longer requires Giant to be held.)
2002-05-18 18:59:00 +00:00
alc
b4282fb943 o Remove GIANT_REQUIRED from vm_map_madvise(). Instead, acquire and
release Giant around vm_map_madvise()'s call to pmap_object_init_pt().
 o Replace GIANT_REQUIRED in vm_object_madvise() with the acquisition
   and release of Giant.
 o Remove the acquisition and release of Giant from madvise().
2002-05-18 07:48:06 +00:00
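
The resulting pattern is simply to bracket the one pmap call that still needs Giant; the arguments below are illustrative, not the exact committed call.

    /* Sketch: take Giant only around the call that still requires it. */
    mtx_lock(&Giant);
    pmap_object_init_pt(map->pmap, start, object, pindex, size, flags);
    mtx_unlock(&Giant);
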
alc
3438549125 o Remove the acquisition and release of Giant from mprotect(). 2002-05-18 03:58:16 +00:00
trhodes
28d42899b7 More s/file system/filesystem/g 2002-05-16 21:28:32 +00:00
phk
8536ea3cdb Make daddr_t and u_daddr_t 64 bits wide.
Retire daddr64_t and use daddr_t instead.

Sponsored by:	DARPA & NAI Labs.
2002-05-14 11:09:43 +00:00
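
In other words, the disk-address types become 64-bit; a sketch of the resulting typedefs (the exact underlying type names are assumptions):

    typedef int64_t         daddr_t;        /* disk address, now 64 bits */
    typedef uint64_t        u_daddr_t;      /* unsigned disk address */
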
jeff
7b96796a72 Don't call the uz free function while the zone lock is held. This can lead
to lock order reversals.  uma_reclaim now builds a list of freeable slabs and
then unlocks the zones to do all of the frees.
2002-05-13 05:08:18 +00:00
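
The deferred-free pattern described looks roughly like this sketch; zone_pop_free_slab(), slab_release(), and the us_link field are stand-in names, not the committed code.

    /* Sketch: collect freeable slabs under the lock, free them after. */
    LIST_HEAD(, uma_slab) freeslabs = LIST_HEAD_INITIALIZER(freeslabs);
    struct uma_slab *slab;

    ZONE_LOCK(zone);
    while ((slab = zone_pop_free_slab(zone)) != NULL)
            LIST_INSERT_HEAD(&freeslabs, slab, us_link);
    ZONE_UNLOCK(zone);

    /* The zone's free routine can take other locks; call it unlocked. */
    while ((slab = LIST_FIRST(&freeslabs)) != NULL) {
            LIST_REMOVE(slab, us_link);
            slab_release(zone, slab);
    }
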
jeff
9020efbab0 Remove the hash_free() lock order reversal. This could have happened for
several reasons before.  Fixing it involved restructuring the generic hash
code to require calling code to handle locking, unlocking, and freeing hashes
on error conditions.
2002-05-13 04:39:28 +00:00
alc
ceb3fe2c04 o Remove GIANT_REQUIRED and an excessive number of blank lines
from vm_map_inherit().  (minherit() need not acquire Giant
   anymore.)
2002-05-12 18:42:05 +00:00
alc
e7bbea38d9 o Acquire and release Giant in vm_object_reference() and
vm_object_deallocate(), replacing the assertion GIANT_REQUIRED.
 o Remove GIANT_REQUIRED from vm_map_protect() and vm_map_simplify_entry().
 o Acquire and release Giant around vm_map_protect()'s call to pmap_protect().

Altogether, these changes eliminate the need for mprotect() to acquire
and release Giant.
2002-05-12 05:22:56 +00:00
alc
94ec8e207f o Header files shouldn't depend on options: Provide prototypes
for uiomoveco(), uioread(), and vm_uiomove() regardless
   of whether ENABLE_VFS_IOOPT is defined or not.

Submitted by:	bde
2002-05-06 06:20:04 +00:00
alc
d60a525036 o Condition the compilation and use of vm_freeze_copyopts()
on ENABLE_VFS_IOOPT.
2002-05-06 05:45:57 +00:00
alc
33ff01f29d o Some improvements to the page coloring of vm objects, particularly
for shadow objects.

Submitted by:	bde
2002-05-06 03:34:17 +00:00
alc
c7465abaf7 o Move vm_freeze_copyopts() from vm_map.{c,h} to vm_object.{c,h}. It's plainly
an operation on a vm_object and belongs in the latter place.
2002-05-06 00:12:47 +00:00
alc
c5483b3129 o Condition the compilation of uiomoveco() and vm_uiomove()
on ENABLE_VFS_IOOPT.
 o Add a comment to the effect that this code is experimental
   support for zero-copy I/O.
2002-05-05 22:42:40 +00:00
phk
5020d62430 Expand the one-line function pbreassignbuf() in the only place it is or could
be used.
2002-05-05 20:37:08 +00:00
alc
7e1b68b6e0 o Remove GIANT_REQUIRED from vm_map_lookup() and vm_map_lookup_done().
o Acquire and release Giant around vm_map_lookup()'s call
   to vm_object_shadow().
2002-05-05 05:36:28 +00:00
jeff
926e98b719 Use pages instead of uz_maxpages, which has not been initialized yet, when
creating the vm_object.  This was broken after the code was rearranged to
grab Giant itself.

Spotted by:     alc
2002-05-04 21:49:29 +00:00
alc
c281c83bd5 o Make _vm_object_allocate() and vm_object_allocate() callable
without holding Giant.
 o Begin documenting the trivial cases of the locking protocol
   on vm_object.
2002-05-04 20:23:48 +00:00
alc
d44b3a12b3 o Remove GIANT_REQUIRED from vm_map_lookup_entry() and
vm_map_check_protection().
 o Call vm_map_check_protection() without Giant held in munmap().
2002-05-04 02:07:36 +00:00
alc
e8eb438f94 o Change the implementation of vm_map locking to use exclusive locks
exclusively.  The interface still, however, distinguishes
   between a shared lock and an exclusive lock.
2002-05-02 17:32:27 +00:00
jeff
6bfc4bdd96 Hide a pointer to the malloc_type bucket at the end of the freed memory. If
this memory is modified after it has been freed, we can now report its
previous owner.
2002-05-02 09:07:04 +00:00
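
The trick, sketched here with assumed names, is to borrow the tail of the block once it is free:

    /*
     * Sketch (assumed names): on free, stash the owning malloc_type at
     * the end of the block.  If a later consistency check finds the
     * block corrupted, the stashed pointer names its previous owner.
     */
    static void
    stash_owner(void *mem, size_t size, struct malloc_type *type)
    {
            struct malloc_type **mtpp;

            mtpp = (struct malloc_type **)((char *)mem + size -
                sizeof(*mtpp));
            *mtpp = type;
    }
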
jeff
fb737bd04b Move around the dbg code a bit so it's always under a lock. This stops a
weird potential race if we were preempted right as we were doing the dbg
checks.
2002-05-02 09:05:36 +00:00
arr
146b277b08 - Changed the size element of uma_zctor_args to be size_t instead of int.
- Changed uma_zcreate to accept the size argument as a size_t instead of
  int.

Approved by:	jeff
2002-05-02 07:36:30 +00:00
jeff
f7f01600de malloc/free(9) no longer require Giant. Use the malloc_mtx to protect the
mallochash.  Mallochash is going to go away as soon as I introduce the
kfree/kmalloc API and partially overhaul the malloc wrapper.  This can't happen
until all users of the malloc API that expect memory to be aligned on the size
of the allocation are fixed.
2002-05-02 07:22:19 +00:00
alc
c1bf294b84 o Remove dead and lockmgr()-specific debugging code. 2002-05-02 02:32:09 +00:00
jeff
b152d5fbb5 Remove the temporary alignment check in free().
Implement the following checks on freed memory in the bucket path:
	- Slab membership
	- Alignment
	- Duplicate free

This previously was only done if we skipped the buckets.  This code will slow
down INVARIANTS a bit, but it is SMP safe.  The checks were moved out of the
normal path and into hooks supplied in uma_dbg.
2002-05-02 02:08:48 +00:00
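
The three checks listed translate into something like this sketch; the field and helper names here are assumptions, not the committed code.

    /* Sketch of the three free-path checks (assumed names). */
    uintptr_t off = (uintptr_t)item - (uintptr_t)slab->us_data;

    /* Slab membership: the item must lie inside this slab. */
    if (off >= UMA_SLAB_SIZE)
            panic("free: %p is not in its slab", item);
    /* Alignment: the item must sit on an item-size boundary. */
    if (off % zone->uz_size != 0)
            panic("free: %p is misaligned", item);
    /* Duplicate free: the slab must not already record it as free. */
    if (slab_item_is_free(slab, off / zone->uz_size))   /* stand-in */
            panic("free: duplicate free of %p", item);
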
alc
0e84366ae7 o Convert the vm_page buckets mutex to a spin lock. (This resolves
an issue on the Alpha platform found by jeff@.)
 o Simplify vm_page_lookup().

Reviewed by:	jhb
2002-04-30 21:24:47 +00:00
jeff
21868731b0 Add a new UMA debugging facility. This will overwrite freed memory with
0xdeadc0de and then check for it just before memory is handed off as part
of a new request.  This will catch any post-free/pre-alloc modification of
memory, as well as introduce errors for anything that tries to dereference
it as a pointer.

This code takes the form of special init, fini, ctor and dtor routines that
are specifically used by malloc.  It is in a separate file because additional
debugging aids will want to live here as well.
2002-04-30 07:54:25 +00:00
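
A minimal version of such a trash dtor/ctor pair might look like the sketch below (simplified signatures; the real routines also take UMA's extra arguments). Note that 0xdeadc0de also faults if dereferenced as a pointer on most platforms.

    #define UMA_TRASH       0xdeadc0de

    /* On free: poison the item so any later write is detectable. */
    static void
    trash_dtor(void *mem, int size)
    {
            uint32_t *p = mem;

            for (; size >= (int)sizeof(*p); size -= sizeof(*p))
                    *p++ = UMA_TRASH;
    }

    /* On alloc: verify the poison survived; panic on modification. */
    static void
    trash_ctor(void *mem, int size)
    {
            uint32_t *p = mem;

            for (; size >= (int)sizeof(*p); size -= sizeof(*p), p++)
                    if (*p != UMA_TRASH)
                            panic("trash_ctor: %p modified after free", p);
    }
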
jeff
06a56984b5 Move the implementation of M_ZERO into UMA so that it can be passed to
uma_zalloc and friends.  Remove this functionality from the malloc wrapper.

Document this change in uma.h and adjust variable names in uma_core.
2002-04-30 04:26:34 +00:00
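
Inside the allocator the flag reduces to a sketch like this; uma_zalloc_internal() is a stand-in name for whatever internal allocation path the zone takes.

    /* Sketch: honor M_ZERO in UMA itself rather than in malloc(). */
    item = uma_zalloc_internal(zone, udata, flags);     /* stand-in */
    if (item != NULL && (flags & M_ZERO))
            bzero(item, zone->uz_size);
    return (item);
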
alc
4d466c829c o Revert vm_fault1() to its original name vm_fault(), eliminating the wrapper
that took its place for the purposes of acquiring and releasing Giant.
2002-04-30 03:44:34 +00:00
jeff
c52db1bfdf Add a new zone flag UMA_ZONE_MTXCLASS. This puts the zone in its own
mutex class.  Currently this is only used for kmapentzone because kmapents
are potentially allocated when freeing memory.  This is not dangerous
though because no other allocations will be done while holding the
kmapentzone lock.
2002-04-29 23:45:41 +00:00
peter
c0e3147cc6 Tidy up some loose ends.
i386/ia64/alpha - catch up to sparc64/ppc:
- replace pmap_kernel() with refs to kernel_pmap
- change kernel_pmap pointer to (&kernel_pmap_store)
  (this is a speedup since ld can set these at compile/link time)
all platforms (as suggested by jake):
- gc unused pmap_reference
- gc unused pmap_destroy
- gc unused struct pmap.pm_count
(we never used pm_count - we track address space sharing at the vmspace)
2002-04-29 07:43:16 +00:00
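
The speedup comes from turning the kernel pmap into a link-time constant, roughly as follows (a sketch, not the exact committed declarations):

    /* The storage is a global, so its address is a link-time constant:
     * every use of kernel_pmap becomes an address ld fills in, rather
     * than a run-time load of a pointer variable. */
    struct pmap kernel_pmap_store;
    #define kernel_pmap     (&kernel_pmap_store)
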
alc
d61adfe678 Document three synchronization issues in vm_fault(). 2002-04-29 05:23:01 +00:00
alc
50524fd0df Pass the caller's file name and line number to the vm_map locking functions. 2002-04-28 23:12:52 +00:00
alc
dc5b6882d3 o Introduce and use vm_map_trylock() to replace several direct uses
of lockmgr().
 o Add missing synchronization to vmspace_swap_count(): Obtain a read lock
   on the vm_map before traversing it.
2002-04-28 06:07:54 +00:00
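
A trylock wrapper in this style is a one-liner over lockmgr(); the sketch below assumes the 2002-era lockmgr() signature and the map's lock field name.

    /* Sketch: fail immediately instead of sleeping on contention. */
    int
    vm_map_trylock(vm_map_t map)
    {
            return (lockmgr(&map->lock, LK_EXCLUSIVE | LK_NOWAIT,
                NULL, curthread) == 0);
    }
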
peter
0feba1c376 We do not necessarily need to map/unmap pages to zero parts of them.
On systems where physical memory is also direct mapped (alpha, sparc,
ia64, etc.), this is slightly harmful.
2002-04-28 00:15:48 +00:00
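
On such machines a partial-page zero can go straight through the direct map; in the sketch below, PHYS_TO_DM() is a stand-in for the platform's direct-map macro (for example ALPHA_PHYS_TO_K0SEG on alpha).

    /* Sketch: zero part of a page without a transient kernel mapping. */
    void
    pmap_zero_page_area(vm_page_t m, int off, int size)
    {
            bzero((char *)PHYS_TO_DM(VM_PAGE_TO_PHYS(m)) + off, size);
    }
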
alc
d8a0954909 o Begin documenting the (existing) locking protocol on the vm_map
in the same style as sys/proc.h.
 o Undo the de-inlining of several trivial, MPSAFE methods on the vm_map.
   (Contrary to the commit message for vm_map.h revision 1.66 and vm_map.c
   revision 1.206, de-inlining these methods increased the kernel's size.)
2002-04-27 22:01:37 +00:00
alc
796c529836 o Control access to the vm_page_buckets with a mutex.
o Fix some style(9) bugs.
2002-04-26 22:44:15 +00:00
arr
99bf18775e - Fix a round down bogon in uma_zone_set_max().
Submitted by: jeff@
2002-04-25 06:24:40 +00:00
alc
80bca30405 Reintroduce locking on accesses to vm_object_list. 2002-04-20 07:23:22 +00:00
alc
bafd353ece o Move the acquisition of Giant from vm_fault() to the point
after initialization in vm_fault1().
 o Fix some style problems in vm_fault1().
2002-04-19 04:20:31 +00:00
alc
492a7d8a7b Add a comment documenting a race condition in vm_fault(): Specifically, a
modification is made to the vm_map while only a read lock is held.
2002-04-18 03:55:50 +00:00
alc
50de418295 o Call vm_map_growstack() from vm_fault() if vm_map_lookup() has failed
due to conditions that suggest the possible need for stack growth.
   This has two beneficial effects: (1) we can
   now remove calls to vm_map_growstack() from the MD trap handlers and (2)
   simple page faults are faster because we no longer unnecessarily perform
   vm_map_growstack() on every page fault.
 o Remove vm_map_growstack() from the i386's trap_pfault().
 o Remove the acquisition and release of Giant from i386's trap_pfault().
   (vm_fault() still acquires it.)
2002-04-18 03:28:27 +00:00
peter
7af4726714 Do not free the vmspace until p->p_vmspace is set to null. Otherwise
statclock can access it in the tail end of statclock_process() at an
unfortunate time.  This bit me several times on an SMP alpha (UP2000)
and the problem went away with this change.  I'm not sure why it doesn't
break x86 as well.  Maybe it's because the clocks are much faster
on alpha (HZ=1024 by default).
2002-04-17 05:26:42 +00:00
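
The fix is an ordering change in the exit path, roughly as sketched here:

    /* Sketch: clear p_vmspace before freeing, so a statclock tick
     * that samples p->p_vmspace sees NULL rather than freed memory. */
    struct vmspace *vm;

    vm = p->p_vmspace;
    p->p_vmspace = NULL;
    vmspace_free(vm);
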
alc
49bc4331bd Remove an unused option, VM_FAULT_HOLD, to vm_fault(). 2002-04-17 02:23:57 +00:00
peter
3d8c7d4cab Pass vm_page_t instead of physical addresses to pmap_zero_page[_area]()
and pmap_copy_page().  This gets rid of a couple more physical addresses
in upper layers, with the eventual aim of supporting PAE and dealing with
the physical addressing mostly within pmap.  (We will need either 64 bit
physical addresses or page indexes, possibly both depending on the
circumstances.  Leaving this to pmap itself gives more flexibility.)

Reviewed by:	jake
Tested on:	i386, ia64 and (I believe) sparc64. (my alpha was hosed)
2002-04-15 16:00:03 +00:00
jeff
2d333700d6 Fix a witness warning when expanding a hash table. We were allocating the new
hash while holding the lock on a zone.  Fix this by doing the allocation
separately from the actual hash expansion.

The lock is dropped before the allocation and reacquired before the expansion.
The expansion code checks to see if we lost the race and frees the new hash
if we do.  We really never will lose this race because the hash expansion is
single threaded via the timeout mechanism.
2002-04-14 13:47:10 +00:00
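
The allocate-outside-the-lock pattern described reduces to a sketch like the following; hash_alloc(), hash_free(), hash_expand(), and the uz_hashsize field are stand-in names.

    /* Sketch: drop the zone lock across the (sleepable) allocation. */
    ZONE_UNLOCK(zone);
    newhash = hash_alloc(newsize);          /* may sleep; no zone lock */
    ZONE_LOCK(zone);
    if (zone->uz_hashsize >= newsize) {
            /* Lost the (theoretical) race; discard our copy. */
            ZONE_UNLOCK(zone);
            hash_free(newhash);
            ZONE_LOCK(zone);
    } else
            hash_expand(zone, newhash);     /* swap in the larger hash */
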