Commit Graph

260 Commits

Author SHA1 Message Date
Alan Cox
22a97b04de o Make the reservation of KVA space for kernel map entries a function
of the KVA space's size in addition to the amount of physical memory
   and reduce it by a factor of two.

Under the old formula, our reservation amounted to one kernel map entry
per virtual page in the KVA space on a 4GB i386.
2002-07-03 19:16:37 +00:00
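A hedged illustration of the kind of sizing change described above: bound the kernel-map-entry reservation by both the KVA size and physical memory, then halve it. The function and constants below are hypothetical, not the formula this commit actually installed.

```c
#include <stddef.h>

/*
 * Hypothetical sizing policy (not the commit's actual formula): take the
 * smaller of the KVA size and physical memory, in pages, and halve it,
 * so a 4GB i386 no longer reserves one map entry per virtual page of KVA.
 */
size_t
kmapent_reservation(size_t kva_pages, size_t phys_pages)
{
	size_t limit = kva_pages < phys_pages ? kva_pages : phys_pages;

	return (limit / 2);
}
```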
Ian Dowse
23f09d50bb Avoid using the 64-bit vm_pindex_t in a few places where 64-bit
types are not required, as the overhead is unnecessary:

 o In the i386 pmap_protect(), `sindex' and `eindex' represent page
   indices within the 32-bit virtual address space.
 o In swp_pager_meta_build() and swp_pager_meta_ctl(), use a temporary
   variable to store the low few bits of a vm_pindex_t that gets used
   as an array index.
 o vm_uiomove() uses `osize' and `idx' for page offsets within a
   map entry.
 o In vm_object_split(), `idx' is a page offset within a map entry.
2002-06-26 20:32:51 +00:00
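A minimal userland sketch of the swap-metadata bullet above: when only the low few bits of a 64-bit vm_pindex_t select a slot, copying them into a narrower temporary keeps the indexing in cheap 32-bit arithmetic on i386. The struct and macro names are illustrative, not the kernel's.

```c
#include <stdint.h>
#include <stdio.h>

#define SWAP_META_PAGES 16u                 /* slots per metadata block (illustrative) */
#define SWAP_META_MASK  (SWAP_META_PAGES - 1)

struct swblock {
	int64_t slots[SWAP_META_PAGES];
};

/* Only the low bits of the 64-bit page index pick a slot in the block. */
int64_t
swap_meta_lookup(const struct swblock *swap, uint64_t pindex)
{
	unsigned int idx = pindex & SWAP_META_MASK;	/* 32-bit temporary */

	return (swap->slots[idx]);
}

int
main(void)
{
	struct swblock sb = { .slots = { [3] = 42 } };

	printf("%lld\n", (long long)swap_meta_lookup(&sb, 0x100000003ULL));
	return (0);
}
```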
Matthew Dillon
a69ac1740f Enforce RLIMIT_VMEM on growable mappings (aka the primary stack or any
MAP_STACK mapping).

Suggested by:	alc
2002-06-26 03:13:46 +00:00
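Loosely, enforcing RLIMIT_VMEM on a growable mapping means refusing a stack grow that would push the total mapped size past the process's VM limit. A hypothetical sketch; the field and function names are stand-ins, not the vm_map_growstack() code.

```c
#include <stdbool.h>
#include <stddef.h>

struct map_stub {
	size_t size;		/* bytes currently mapped by the process */
};

/* Hypothetical check: allow the grow only if it stays under the VMEM limit. */
bool
grow_allowed(const struct map_stub *map, size_t grow_amount, size_t vmem_limit)
{
	return (map->size + grow_amount <= vmem_limit);
}
```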
Alan Cox
409748276e o In vm_map_insert(), replace GIANT_REQUIRED by the acquisition and
release of Giant around the direct manipulation of the vm_object and
   the optional call to pmap_object_init_pt().
 o In vm_map_findspace(), remove GIANT_REQUIRED.  Instead, acquire and
   release Giant around the occasional call to pmap_growkernel().
 o In vm_map_find(), remove GIANT_REQUIRED.
2002-06-22 17:47:12 +00:00
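The recurring pattern in this and the following Giant commits is to replace a caller-must-hold assertion with a short acquire/release bracketing only the code that still needs the lock, so callers stop taking the big lock themselves. A generic sketch with a pthread mutex standing in for Giant; the helper names are illustrative.

```c
#include <pthread.h>

static pthread_mutex_t giant = PTHREAD_MUTEX_INITIALIZER;

static void insert_entry(void) { /* no longer needs the big lock */ }
static void object_init_pt(void) { /* still expects the big lock */ }

/*
 * Before: the whole function asserted that the caller held "giant".
 * After: only the call that needs it is bracketed, so callers are
 * relieved of acquiring and releasing the big lock.
 */
void
map_insert(void)
{
	insert_entry();

	pthread_mutex_lock(&giant);
	object_init_pt();
	pthread_mutex_unlock(&giant);
}
```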
Alan Cox
27168693db o Remove GIANT_REQUIRED from vm_map_stack(). 2002-06-21 06:03:47 +00:00
Alan Cox
00e1854a1f o Replace GIANT_REQUIRED in vm_object_coalesce() by the acquisition and
release of Giant.
 o Reduce the scope of GIANT_REQUIRED in vm_map_insert().

These changes will enable us to remove the acquisition and release
of Giant from obreak().
2002-06-19 06:02:03 +00:00
Alan Cox
515630b12f o Remove LK_CANRECURSE from the vm_map lock. 2002-06-18 18:31:35 +00:00
Jeff Roberson
18aa2de5a7 - Introduce the new M_NOVM option which tells uma to only check the currently
allocated slabs and bucket caches for free items.  It will not go ask the vm
  for pages.  This differs from M_NOWAIT in that it not only doesn't block, it
  doesn't even ask.

- Add a new zcreate option, ZONE_VM, that sets the BUCKETCACHE zflag.  This
  tells uma that it should only allocate buckets out of the bucket cache, and
  not from the VM.  It does this by using the M_NOVM option to zalloc when
  getting a new bucket.  This is so that the VM doesn't recursively enter
  itself while trying to allocate buckets for vm_map_entry zones.  If there
  are already allocated buckets when we get here, we'll still use them, but
  otherwise we'll skip it.

- Use the ZONE_VM flag on vm map entries and pv entries on x86.
2002-06-17 22:02:41 +00:00
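A toy model of the M_NOVM behavior described above: a flag that lets an allocation be satisfied only from already-filled caches and fail outright rather than call down into the page allocator, which is what prevents the VM from re-entering itself. The names are illustrative, not the UMA API.

```c
#include <stddef.h>
#include <stdlib.h>

#define ALLOC_NOVM 0x1		/* illustrative: never fall back to the backing allocator */

struct cache {
	void **items;
	size_t nitems;
};

static void *
backing_alloc(void)
{
	return (malloc(64));	/* stand-in for asking the VM for fresh pages */
}

void *
cache_alloc(struct cache *c, int flags)
{
	if (c->nitems > 0)
		return (c->items[--c->nitems]);
	if (flags & ALLOC_NOVM)
		return (NULL);	/* cache empty: fail instead of recursing into the VM */
	return (backing_alloc());
}
```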
Alan Cox
b49ecb86d0 o Acquire and release Giant in vm_map_wakeup() to prevent
a lost wakeup().

Reviewed by:	tegge
2002-06-17 13:27:40 +00:00
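The lost wakeup being guarded against is the classic one: if the sleeper tests its condition and blocks without holding the same lock the waker uses, the wakeup can fire in the window between test and sleep and be missed. A generic pthread sketch of the safe ordering; it is an analogy, not the vm_map/Giant code.

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
static bool done;

static void *
waiter(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (!done)			/* test and sleep under the same lock */
		pthread_cond_wait(&cv, &lock);
	pthread_mutex_unlock(&lock);
	return (NULL);
}

static void
waker(void)
{
	pthread_mutex_lock(&lock);	/* holding the lock closes the race window */
	done = true;
	pthread_cond_signal(&cv);
	pthread_mutex_unlock(&lock);
}

int
main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, waiter, NULL);
	waker();
	pthread_join(t, NULL);
	return (0);
}
```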
Alan Cox
1d7cf06c8c o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and
vm_map_user_pageable().
 o Remove vm_map_pageable() and vm_map_user_pageable().
 o Remove vm_map_clear_recursive() and vm_map_set_recursive().  (They were
   only used by vm_map_pageable() and vm_map_user_pageable().)

Reviewed by:	tegge
2002-06-14 18:21:01 +00:00
Alan Cox
d46e7d6bee o Acquire and release Giant in vm_map_unlock_and_wait().
Submitted by:	tegge
2002-06-12 08:15:52 +00:00
Alan Cox
28c58286ef o Properly handle a failure by vm_fault_wire() or vm_fault_user_wire()
in vm_map_wire().
 o Make two white-space changes in vm_map_wire().

Reviewed by:	tegge
2002-06-11 19:13:59 +00:00
Alan Cox
73b2bace26 o Teach vm_map_delete() to respect the "in-transition" flag
on a vm_map_entry by sleeping until the flag is cleared.

Submitted by:	tegge
2002-06-11 05:24:22 +00:00
Alan Cox
2b4a2c272d o In vm_map_entry_create(), call uma_zalloc() with M_NOWAIT on system maps.
Submitted by: tegge
 o Eliminate the "!mapentzone" check from vm_map_entry_create() and
   vm_map_entry_dispose().  Reviewed by: tegge
 o Fix white-space usage in vm_map_entry_create().
2002-06-10 06:11:45 +00:00
Alan Cox
12d7cc840f o Add vm_map_wire() for wiring contiguous regions of either kernel
or user vm_maps.  This implementation has two key benefits when compared
   to vm_map_{user_,}pageable(): (1) it avoids a race condition through
   the use of "in-transition" vm_map entries and (2) it eliminates lock
   recursion on the vm_map.

Note: there is still an error case that requires clean up.

Reviewed by:	tegge
2002-06-09 20:25:18 +00:00
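Schematically, the in-transition mechanism works like this: mark the entries covering the range, drop the map lock while the pages are faulted in and wired, then retake the lock, clear the flag, and wake any thread that blocked on those entries. The sketch below uses stand-in types and helpers, not the kernel's structures.

```c
#include <stdbool.h>

struct entry {
	bool in_transition;	/* blocks deallocation/clipping by other threads */
	bool needs_wakeup;	/* set by a thread that slept on this entry */
};

/* Trivial stand-ins so the sketch is self-contained. */
static void map_lock(void) { }
static void map_unlock(void) { }
static void wakeup_waiters(struct entry *e) { (void)e; }
static int  fault_wire_range(struct entry *e) { (void)e; return (0); }

int
wire_entry(struct entry *e)
{
	int error;

	map_lock();
	e->in_transition = true;	/* others must wait instead of modifying us */
	map_unlock();

	error = fault_wire_range(e);	/* may sleep; the map lock is not held */

	map_lock();
	e->in_transition = false;
	if (e->needs_wakeup) {
		e->needs_wakeup = false;
		wakeup_waiters(e);	/* release threads sleeping on the entry */
	}
	map_unlock();
	return (error);
}
```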
Alan Cox
b2f3846aef o Simplify vm_map_unwire() by merging the second and third passes
over the caller-specified region.
2002-06-08 19:00:40 +00:00
Alan Cox
e27e17b711 o Remove an unnecessary call to vm_map_wakeup() from vm_map_unwire().
o Add a stub for vm_map_wire().

Note: the description of the previous commit had an error.  The in-
transition flag actually blocks the deallocation of a vm_map_entry by
vm_map_delete() and vm_map_simplify_entry().
2002-06-08 07:32:38 +00:00
Alan Cox
acd9a301ec o Add vm_map_unwire() for unwiring contiguous regions of either kernel
or user vm_maps.  In accordance with the standards for munlock(2),
   and in contrast to vm_map_user_pageable(), this implementation does not
   allow holes in the specified region.  This implementation uses the
   "in transition" flag described below.
 o Introduce a new flag, "in transition," to the vm_map_entry.
   Eventually, vm_map_delete() and vm_map_simplify_entry() will respect
   this flag by deallocating in-transition vm_map_entrys, allowing
   the vm_map lock to be safely released in vm_map_unwire() and (the
   forthcoming) vm_map_wire().
 o Modify vm_map_simplify_entry() to respect the in-transition flag.

In collaboration with:	tegge
2002-06-07 18:34:23 +00:00
Alan Cox
c5aaa06ded o Migrate vm_map_split() from vm_map.c to vm_object.c, renaming it
to vm_object_split().  Its interface should still be changed
   to resemble vm_object_shadow().
2002-06-02 23:54:09 +00:00
Alan Cox
0d78c0dce2 o Style fixes to vm_map_split(), including the elimination of one variable
declaration that shadows another.

Note: This function should really be vm_object_split(), not vm_map_split().

Reviewed by:	md5
2002-06-02 19:32:05 +00:00
Alan Cox
61c075b67f o Remove GIANT_REQUIRED from vm_map_zfini(), vm_map_zinit(),
vm_map_create(), and vm_map_submap().
 o Make further use of a local variable in vm_map_entry_splay()
   that caches a reference to one of a vm_map_entry's children.
   (This reduces code size somewhat.)
 o Revert a part of revision 1.66, deinlining vmspace_pmap().
   (This function is MPSAFE.)
2002-06-01 22:41:43 +00:00
Alan Cox
794316a866 o Revert a part of revision 1.66; contrary to what that commit message says,
deinlining vm_map_entry_behavior() and vm_map_entry_set_behavior()
   actually increases the kernel's size.
 o Make vm_map_entry_set_behavior() static and add a comment describing
   its purpose.
 o Remove an unnecessary initialization statement from vm_map_entry_splay().
2002-06-01 16:59:30 +00:00
Alan Cox
9917e01041 Further work on pushing Giant out of the vm_map layer and down
into the vm_object layer:
 o Acquire and release Giant in vm_object_shadow() and
   vm_object_page_remove().
 o Remove the GIANT_REQUIRED assertion preceding vm_map_delete()'s call
   to vm_object_page_remove().
 o Remove the acquisition and release of Giant around vm_map_lookup()'s
   call to vm_object_shadow().
2002-05-31 03:48:55 +00:00
Alan Cox
4b9fdc2bce o Acquire and release Giant around pmap operations in vm_fault_unwire()
and vm_map_delete().  Assert GIANT_REQUIRED in vm_map_delete()
   only if operating on the kernel_object or the kmem_object.
 o Remove GIANT_REQUIRED from vm_map_remove().
 o Remove the acquisition and release of Giant from munmap().
2002-05-26 04:54:56 +00:00
Alan Cox
4e94f40222 o Replace the vm_map's hint by the root of a splay tree. By design,
the last accessed datum is moved to the root of the splay tree.
   Therefore, on lookups in which the hint resulted in O(1) access,
   the splay tree still achieves O(1) access.  In contrast, on lookups
   in which the hint failed miserably, the splay tree achieves amortized
   logarithmic complexity, resulting in dramatic improvements on vm_maps
   with a large number of entries.  For example, the execution time
   for replaying an access log from www.cs.rice.edu against the thttpd
   web server was reduced by 23.5% due to the large number of files
   simultaneously mmap()ed by this server.  (The machine in question has
   enough memory to cache most of this workload.)

   Nothing comes for free: At present, I see a 0.2% slowdown on "buildworld"
   due to the overhead of maintaining the splay tree.  I believe that
   some or all of this can be eliminated through optimizations
   to the code.

Developed in collaboration with: Juan E Navarro <jnavarro@cs.rice.edu>
Reviewed by:	jeff
2002-05-24 01:33:24 +00:00
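For reference, a compact recursive splay on integer keys shows the property the commit relies on: each lookup rotates the accessed node (or the last node on the search path) to the root, so repeating the last lookup is O(1) while arbitrary lookups remain amortized logarithmic. This is a generic sketch, not vm_map_entry_splay().

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
	int key;
	struct node *left, *right;
};

static struct node *
new_node(int key)
{
	struct node *n = malloc(sizeof(*n));

	n->key = key;
	n->left = n->right = NULL;
	return (n);
}

static struct node *
rotate_right(struct node *x)
{
	struct node *y = x->left;

	x->left = y->right;
	y->right = x;
	return (y);
}

static struct node *
rotate_left(struct node *x)
{
	struct node *y = x->right;

	x->right = y->left;
	y->left = x;
	return (y);
}

/*
 * Splay `key` (or the last node on its search path) to the root and
 * return the new root.  Repeating the last lookup then touches only
 * the root, which is why a splay tree subsumes the old "hint".
 */
static struct node *
splay(struct node *root, int key)
{
	if (root == NULL || root->key == key)
		return (root);

	if (key < root->key) {
		if (root->left == NULL)
			return (root);
		if (key < root->left->key) {			/* zig-zig */
			root->left->left = splay(root->left->left, key);
			root = rotate_right(root);
		} else if (key > root->left->key) {		/* zig-zag */
			root->left->right = splay(root->left->right, key);
			if (root->left->right != NULL)
				root->left = rotate_left(root->left);
		}
		return (root->left == NULL ? root : rotate_right(root));
	} else {
		if (root->right == NULL)
			return (root);
		if (key > root->right->key) {			/* zig-zig */
			root->right->right = splay(root->right->right, key);
			root = rotate_left(root);
		} else if (key < root->right->key) {		/* zig-zag */
			root->right->left = splay(root->right->left, key);
			if (root->right->left != NULL)
				root->right = rotate_right(root->right);
		}
		return (root->right == NULL ? root : rotate_left(root));
	}
}

int
main(void)
{
	struct node *root = new_node(40);

	root->left = new_node(20);
	root->right = new_node(60);
	root->left->left = new_node(10);
	root->left->right = new_node(30);

	root = splay(root, 30);		/* the accessed entry becomes the root */
	printf("root after lookup: %d\n", root->key);	/* prints 30 */
	return (0);
}
```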
Alan Cox
094f6d2694 o Remove GIANT_REQUIRED from vm_map_madvise(). Instead, acquire and
release Giant around vm_map_madvise()'s call to pmap_object_init_pt().
 o Replace GIANT_REQUIRED in vm_object_madvise() with the acquisition
   and release of Giant.
 o Remove the acquisition and release of Giant from madvise().
2002-05-18 07:48:06 +00:00
Alan Cox
a47335fdb4 o Remove GIANT_REQUIRED and an excessive number of blank lines
from vm_map_inherit().  (minherit() need not acquire Giant
   anymore.)
2002-05-12 18:42:05 +00:00
Alan Cox
47c3ccc467 o Acquire and release Giant in vm_object_reference() and
vm_object_deallocate(), replacing the assertion GIANT_REQUIRED.
 o Remove GIANT_REQUIRED from vm_map_protect() and vm_map_simplify_entry().
 o Acquire and release Giant around vm_map_protect()'s call to pmap_protect().

Altogether, these changes eliminate the need for mprotect() to acquire
and release Giant.
2002-05-12 05:22:56 +00:00
Alan Cox
e86256c1f4 o Move vm_freeze_copyopts() from vm_map.{c,h} to vm_object.{c,h}. It's plainly
an operation on a vm_object and belongs in the latter place.
2002-05-06 00:12:47 +00:00
Alan Cox
c50fe92b8d o Condition the compilation of uiomoveco() and vm_uiomove()
on ENABLE_VFS_IOOPT.
 o Add a comment to the effect that this code is experimental
   support for zero-copy I/O.
2002-05-05 22:42:40 +00:00
Alan Cox
15fdd586e3 o Remove GIANT_REQUIRED from vm_map_lookup() and vm_map_lookup_done().
o Acquire and release Giant around vm_map_lookup()'s call
   to vm_object_shadow().
2002-05-05 05:36:28 +00:00
Alan Cox
8c5c5d049f o Remove GIANT_REQUIRED from vm_map_lookup_entry() and
vm_map_check_protection().
 o Call vm_map_check_protection() without Giant held in munmap().
2002-05-04 02:07:36 +00:00
Alan Cox
bc91c5107a o Change the implementation of vm_map locking to use exclusive locks
exclusively.  The interface still, however, distinguishes
   between a shared lock and an exclusive lock.
2002-05-02 17:32:27 +00:00
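A sketch of "exclusive locks exclusively" behind an unchanged interface: the read and write entry points are both implemented with the same exclusive lock, keeping callers source-compatible while simplifying the lock itself. A pthread mutex stands in here; these are not the kernel's lock functions.

```c
#include <pthread.h>

struct map_lock {
	pthread_mutex_t mtx;
};

/* The interface still distinguishes shared from exclusive... */
void
map_lock_read(struct map_lock *l)
{
	pthread_mutex_lock(&l->mtx);	/* ...but both take the same exclusive lock. */
}

void
map_lock_write(struct map_lock *l)
{
	pthread_mutex_lock(&l->mtx);
}

void
map_unlock(struct map_lock *l)
{
	pthread_mutex_unlock(&l->mtx);
}
```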
Alan Cox
569687d02f o Remove dead and lockmgr()-specific debugging code. 2002-05-02 02:32:09 +00:00
Jeff Roberson
28bc44195c Add a new zone flag UMA_ZONE_MTXCLASS. This puts the zone in its own
mutex class.  Currently this is only used for kmapentzone because kmapents
are potentially allocated when freeing memory.  This is not dangerous
though because no other allocations will be done while holding the
kmapentzone lock.
2002-04-29 23:45:41 +00:00
Alan Cox
780b1c0997 Pass the caller's file name and line number to the vm_map locking functions. 2002-04-28 23:12:52 +00:00
Alan Cox
d974f03c69 o Introduce and use vm_map_trylock() to replace several direct uses
of lockmgr().
 o Add missing synchronization to vmspace_swap_count(): Obtain a read lock
   on the vm_map before traversing it.
2002-04-28 06:07:54 +00:00
Alan Cox
089b073345 o Begin documenting the (existing) locking protocol on the vm_map
in the same style as sys/proc.h.
 o Undo the de-inlining of several trivial, MPSAFE methods on the vm_map.
   (Contrary to the commit message for vm_map.h revision 1.66 and vm_map.c
   revision 1.206, de-inlining these methods increased the kernel's size.)
2002-04-27 22:01:37 +00:00
Peter Wemm
334f706177 Do not free the vmspace until p->p_vmspace is set to null. Otherwise
statclock can access it in the tail end of statclock_process() at an
unfortunate time.  This bit me several times on an SMP alpha (UP2000)
and the problem went away with this change.  I'm not sure why it doesn't
break x86 as well.  Maybe it's because the clocks are much faster
on alpha (HZ=1024 by default).
2002-04-17 05:26:42 +00:00
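The point of the ordering is that an interrupt-time consumer such as statclock follows p->p_vmspace asynchronously, so the pointer must be cleared before the vmspace is torn down; a racing reader then sees NULL instead of freed memory. A schematic sketch with stand-in types; the real code also depends on interrupt and memory-ordering discipline not shown here.

```c
#include <stdlib.h>

struct vmspace_stub { int placeholder; };
struct proc_stub { struct vmspace_stub *p_vmspace; };

void
proc_exit_vmspace(struct proc_stub *p)
{
	struct vmspace_stub *vm = p->p_vmspace;

	p->p_vmspace = NULL;	/* async readers (e.g. statclock) now see NULL... */
	free(vm);		/* ...only then is the vmspace actually released */
}
```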
Peter Wemm
1a87a0da66 Pass vm_page_t instead of physical addresses to pmap_zero_page[_area]()
and pmap_copy_page().  This gets rid of a couple more physical addresses
in upper layers, with the eventual aim of supporting PAE and dealing with
the physical addressing mostly within pmap.  (We will need either 64 bit
physical addresses or page indexes, possibly both depending on the
circumstances.  Leaving this to pmap itself gives more flexibility.)

Reviewed by:	jake
Tested on:	i386, ia64 and (I believe) sparc64. (my alpha was hosed)
2002-04-15 16:00:03 +00:00
Jeff Roberson
670d17b5c0 Remove references to vm_zone.h and switch over to the new uma API. 2002-03-20 04:02:59 +00:00
Jeff Roberson
9eb6e51923 Quiet a warning introduced by UMA. This only occurs on machines where
vm_size_t != unsigned long.

Reviewed by:	phk
2002-03-19 11:49:10 +00:00
Jeff Roberson
8355f576a9 This is the first part of the new kernel memory allocator. This replaces
malloc(9) and vm_zone with a slab-like allocator.

Reviewed by:	arch@
2002-03-19 09:11:49 +00:00
Brian Feldman
25adb370be Back out the modification of vm_map locks from lockmgr to sx locks. The
best path forward now is likely to change the lockmgr locks to simple
sleep mutexes, then see if any extra contention it generates is greater
than the removed overhead of managing local locking state information,
cost of extra calls into lockmgr, etc.

Additionally, making the vm_map lock a mutex and respecting it properly
will put us much closer to not needing Giant magic in vm.
2002-03-18 15:08:09 +00:00
Alan Cox
2f6c16e1e8 Acquire a read lock on the map inside of vm_map_check_protection() rather
than expecting the caller to do so.  This (1) eliminates duplicated code in
kernacc() and useracc() and (2) fixes missing synchronization in munmap().
2002-03-17 03:19:31 +00:00
Brian Feldman
0e0af8ecda Rename SI_SUB_MUTEX to SI_SUB_MTX_POOL to make the name at all accurate.
While doing this, move it earlier in the sysinit boot process so that the
VM system can use it.

After that, the system is now able to use sx locks instead of lockmgr
locks in the VM system.  To accomplish this, some of the more
questionable uses of the locks (such as testing whether they are
owned or not, as well as allowing shared+exclusive recursion) are
removed, and simpler logic throughout is used so locks should also be
easier to understand.

This has been tested on my laptop for months, and has not shown any
problems on SMP systems, either, so appears quite safe.  One more
user of lockmgr down, many more to go :)
2002-03-13 23:48:08 +00:00
Eivind Eklund
a128794977 - Remove a number of extra newlines that do not belong here according to
style(9)
- Minor space adjustment in cases where we have "( ", " )", if(), return(),
  while(), for(), etc.
- Add /* SYMBOL */ after a few #endifs.

Reviewed by:	alc
2002-03-10 21:52:48 +00:00
Matthew Dillon
8c5dffe8ca Fix a bug in the vm_map_clean() procedure. msync()ing an area of memory
that has just been mapped MAP_ANON|MAP_NOSYNC and has not yet been accessed
will panic the machine.

MFC after:	1 day
2002-03-07 03:54:56 +00:00
Alfred Perlstein
582ec34cd8 Fix a race with freeing vmspaces at process exit when vmspaces are
shared.

Also introduce vm_endcopy instead of using pointer tricks when
initializing new vmspaces.

The race occurred because of how the reference was utilized:
  test vmspace reference,
  possibly block,
  decrement reference

When sharing a vmspace between multiple processes, it was possible
for two processes exiting at the same time to each test the reference
count, possibly block, and then neither one free the vmspace because
they wouldn't see the other's update.

Submitted by: green
2002-02-05 21:23:05 +00:00
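One standard way to close this kind of window (not necessarily the exact fix this commit applied) is to make the decrement itself the decision point: every exiting sharer atomically drops the reference, and only the caller that takes the count to zero frees the vmspace, so there is no separate test/block/decrement sequence to race. A minimal C11 sketch with stand-in types:

```c
#include <stdatomic.h>
#include <stdlib.h>

struct vmspace_stub {
	atomic_int refcnt;	/* one reference per sharing process */
};

void
vmspace_release(struct vmspace_stub *vm)
{
	/*
	 * atomic_fetch_sub returns the old value, so exactly one caller
	 * observes 1 and performs the free; two processes exiting at the
	 * same time can no longer both miss the other's update.
	 */
	if (atomic_fetch_sub(&vm->refcnt, 1) == 1)
		free(vm);
}
```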
Matthew Dillon
e302698320 Don't let pmap_object_init_pt() exhaust all available free pages
(allocating pv entries w/ zalloci) when called in a loop due to
an madvise().  It is possible to completely exhaust the free page list and
cause a system panic when an expected allocation fails.
2001-10-31 03:06:33 +00:00