Commit Graph

1887 Commits

Author SHA1 Message Date
alc
e2aed2d5cb - Unmanage pages allocated by contigmalloc1(). (There is no point in
having PV entries for these pages.)
 - Remove splvm() and splx() calls.
2004-01-10 21:17:53 +00:00
alc
d587f759ff Unmanage pages allocated by kmem_alloc(). (There is no point in having PV
entries for these pages.)
2004-01-10 00:22:33 +00:00
alc
2def05907f - Enable recursive acquisition of the mutex synchronizing access to the
free pages queue.  This is presently needed by contigmalloc1().
 - Move a sanity check against attempted double allocation of two pages
   to the same vm object offset from vm_page_alloc() to vm_page_insert().
   This provides better protection because double allocation could occur
   through a direct call to vm_page_insert(), such as that by
   vm_page_rename().
 - Modify contigmalloc1() to hold the mutex synchronizing access to the
   free pages queue while it scans vm_page_array in search of free pages.
 - Correct a potential leak of pages by contigmalloc1() that I introduced
   in revision 1.20: We must convert all cache queue pages to free pages
   before we begin removing free pages from the free queue.  Otherwise,
   if we have to restart the scan because we are unable to acquire the
   vm object lock that is necessary to convert a cache queue page to a
   free page, we leak those free pages already removed from the free queue.
2004-01-08 20:48:26 +00:00
alc
06ffc68c44 Don't bother clearing PG_ZERO in contigmalloc1(), kmem_alloc(), or
kmem_malloc().  It serves no purpose.
2004-01-06 20:52:55 +00:00
alc
6903aac917 Simplify the various pager allocation routines by computing the desired
object size once and assigning that value to a local variable.
2004-01-04 20:55:15 +00:00
alc
9139218fef Eliminate the acquisition and release of Giant from vnode_pager_alloc().
The vm object and vnode locking should suffice.

Discussed with:	jeff
2004-01-04 03:18:24 +00:00
alc
5fd01400f6 Reduce the scope of Giant in swap_pager_alloc(). 2004-01-03 20:02:17 +00:00
alc
1b6c0226e5 Revision 1.74 of vm_meter.c ("Avoid lock-order reversal") makes the release
and subsequent reacquisition of the same vm object lock in
vm_object_collapse() unnecessary.
2004-01-02 19:57:45 +00:00
alc
8ef757f9bb Avoid lock-order reversal between the vm object list mutex and the vm
object mutex.
2004-01-02 19:38:25 +00:00
alc
bcbe34faa7 - Increase the scope of the kmem_object's lock in kmem_malloc(). Add a
comment explaining why a further increase is not possible.
2004-01-01 19:48:56 +00:00
alc
a61dca9e40 In vm_page_lookup() check the root of the vm object's splay tree for the
desired page before calling vm_page_splay().
2003-12-31 19:02:01 +00:00
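
A minimal C sketch of the fast path this commit adds, with invented names (page_lookup() and page_search() stand in for vm_page_lookup() and vm_page_splay(); the splay step is stubbed as a plain binary-tree search):

    #include <stddef.h>
    #include <stdio.h>

    struct page {
        struct page *left, *right;
        unsigned long pindex;    /* key: the page's offset within its object */
    };

    /* Slow-path stand-in: a plain search where the kernel would splay. */
    static struct page *
    page_search(struct page *root, unsigned long pindex)
    {
        while (root != NULL && root->pindex != pindex)
            root = (pindex < root->pindex) ? root->left : root->right;
        return (root);
    }

    /* The fast path from the commit: test the root before doing any work. */
    static struct page *
    page_lookup(struct page *root, unsigned long pindex)
    {
        if (root != NULL && root->pindex == pindex)
            return (root);
        return (page_search(root, pindex));
    }

    int
    main(void)
    {
        struct page leaf = { NULL, NULL, 7 };
        struct page root = { &leaf, NULL, 9 };

        printf("root hit:     %p\n", (void *)page_lookup(&root, 9));
        printf("miss, search: %p\n", (void *)page_lookup(&root, 7));
        return (0);
    }
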
alc
f56285bf79 Simplify vm_page_grab(): Don't bother with the generation check. If the
vm object hasn't changed, the desired page will be at or near the root
of the vm object's splay tree, making vm_page_lookup() cheap.  (The only
lock required for vm_page_lookup() is already held.)  If, however, the
vm object has changed and retry was requested, eliminating the generation
check also eliminates a pointless acquisition and release of the page
queues lock.
2003-12-31 01:44:45 +00:00
alc
5b67ae5069 - Modify vm_object_split() to expect a locked vm object on entry and
return with a locked vm object on exit.  Remove GIANT_REQUIRED.
 - Eliminate some unnecessary local variables from vm_object_split().
2003-12-30 22:28:36 +00:00
alc
383fec21f2 Remove swap_pager_un_object_list; it is unused. 2003-12-29 04:21:44 +00:00
alc
e354cb03da Remove GIANT_REQUIRED from kmem_suballoc(). 2003-12-28 00:10:48 +00:00
alc
2693f5356a - Reduce Giant's scope in vm_fault().
- Use vm_object_reference_locked() instead of vm_object_reference()
   in vm_fault().
2003-12-26 23:33:37 +00:00
alc
92ccc1caad Minor correction to revision 1.258: Use the proc pointer that is passed to
vm_map_growstack() in the RLIMIT_VMEM check rather than curthread.
2003-12-26 21:54:45 +00:00
alc
ce44c14b4d - Create an unmapped guard page to trap access to vm_page_array[-1].
This guard page would have trapped the problems with the MFC of the PAE
   support to RELENG_4 at an earlier point in the sequence of events.

Submitted by:	tegge
2003-12-22 02:04:08 +00:00
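
The guard-page trick is easy to demonstrate from userland. A minimal sketch, not the kernel change itself: reserve a PROT_NONE page immediately before an array, so a stray array[-1] access faults loudly instead of silently corrupting memory.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int
    main(void)
    {
        size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
        char *base = mmap(NULL, 2 * pgsz, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        int *array;

        if (base == MAP_FAILED)
            return (1);
        /* Make the first page inaccessible; the array starts on the
         * second.  Any touch of array[-1] now traps. */
        if (mprotect(base, pgsz, PROT_NONE) != 0)
            return (1);
        array = (int *)(base + pgsz);
        array[0] = 42;            /* fine */
        printf("array[0] = %d\n", array[0]);
        /* array[-1] = 7; */      /* would deliver SIGSEGV */
        return (0);
    }
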
alc
532ae03841 - Significantly reduce the number of preallocated pv entries in
pmap_init().  Such a large preallocation is unnecessary and wastes
   nearly eight megabytes of kernel virtual address space per gigabyte
   of managed physical memory.
 - Increase UMA_BOOT_PAGES by two.  This enables the removal of
   pmap_pv_allocf().  (Note: this function was only used during
   initialization, specifically, after pmap_init() but before
   pmap_init2().  During pmap_init2(), a new allocator is installed.)
2003-12-22 01:01:32 +00:00
alc
54fc96b316 - Correct an error in mincore(2) that has existed since its introduction:
mincore(2) should check that the page is valid, not just allocated.
   Otherwise, it can return a false positive for a page that is not yet
   resident because it is being read from disk.
2003-12-21 06:03:40 +00:00
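
A userland sketch of the interface involved (the fix itself is inside the kernel): mincore(2) reports per-page residency, and a freshly mapped anonymous page is typically not resident until first touched.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int
    main(void)
    {
        size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, pgsz, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        char vec;

        if (p == MAP_FAILED)
            return (1);
        if (mincore(p, pgsz, &vec) == 0)    /* likely "not resident" */
            printf("before touch: %sresident\n", (vec & 1) ? "" : "not ");
        p[0] = 'x';                         /* fault the page in */
        if (mincore(p, pgsz, &vec) == 0)
            printf("after touch:  %sresident\n", (vec & 1) ? "" : "not ");
        return (0);
    }
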
kan
b657516563 Remove trailing whitespace. 2003-12-08 02:45:45 +00:00
alc
5857bc6508 Addendum to revision 1.174: In the case where vm_pager_allocate() is called
to create a vnode-backed object, the vnode lock must be held by the caller.

Reported by:	truckman
Discussed with:	kan
2003-12-08 00:47:33 +00:00
alc
c6a4b2c623 Fix a deadlock between vm_fault() and vm_mmap(): The expected lock ordering
between vm_map and vnode locks is that vm_map locks are acquired first.  In
revision 1.150 mmap(2) was changed to pass a locked vnode into vm_mmap().
This creates a lock-order reversal when vm_mmap() calls one of the vm_map
routines that acquires a vm_map lock.  The solution implemented herein is
to release the vnode lock in mmap() before calling vm_mmap() and reacquire
this lock if necessary in vm_mmap().

Approved by:	re (scottl)
Reviewed by:	jeff, kan, rwatson
2003-12-06 05:45:32 +00:00
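
The general pattern, release the later lock before taking the one that must come first and then retake it, is worth a sketch. A minimal pthreads analogue with invented names (map_lock and vnode_lock are stand-ins, not the kernel's locks):

    #include <pthread.h>

    /* Invented locks; the established order is map_lock before vnode_lock. */
    static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t vnode_lock = PTHREAD_MUTEX_INITIALIZER;

    /*
     * Called holding vnode_lock but needing map_lock, which must come
     * first.  Taking map_lock directly here would invert the order and
     * risk deadlock, so release, take the earlier lock, then retake.
     */
    static void
    do_mmap_work(void)
    {
        pthread_mutex_unlock(&vnode_lock);
        pthread_mutex_lock(&map_lock);
        pthread_mutex_lock(&vnode_lock);
        /* The vnode's state must be revalidated here: it may have
         * changed while its lock was dropped. */
        pthread_mutex_unlock(&vnode_lock);
        pthread_mutex_unlock(&map_lock);
    }

    int
    main(void)
    {
        pthread_mutex_lock(&vnode_lock);
        do_mmap_work();            /* returns with both locks released */
        return (0);
    }
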
jhb
57e73abc21 Fix all users of mp_maxid to use the same semantics, namely:
1) mp_maxid is a valid FreeBSD CPU ID in the range 0 .. MAXCPU - 1.
2) For all active CPUs in the system, PCPU_GET(cpuid) <= mp_maxid.

Approved by:	re (scottl)
Tested on:	i386, amd64, alpha
2003-12-03 14:57:26 +00:00
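
A sketch of the resulting iteration idiom, with userland stand-ins for the kernel variables; note the inclusive upper bound:

    #include <stdio.h>

    #define MAXCPU 4                      /* stand-in for the kernel constant */

    static int mp_maxid = MAXCPU - 1;     /* highest valid CPU ID */

    int
    main(void)
    {
        /* Inclusive bound: valid IDs run from 0 through mp_maxid. */
        for (int cpu = 0; cpu <= mp_maxid; cpu++)
            printf("visit per-CPU data for CPU %d\n", cpu);
        return (0);
    }
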
jeff
52c9ed9c97 - Unbreak UP. mp_maxid is not defined on uni-processor machines, although
I believe it and the other MP variables should be.  For now, just define it
   here and wait for jhb to clean it up later.

Approved by:	re (rwatson)
2003-11-30 22:18:14 +00:00
jeff
a0e6d5afbd - Replace the local maxcpu with mp_maxid. Previously, if mp_maxid
was equal to MAXCPU, we would overrun the pcpu_mtx array because maxcpu
   was calculated incorrectly.
 - Add some more debugging code so that memory leaks at the time of
   uma_zdestroy() are more easily diagnosed.

Approved by:	re (rwatson)
2003-11-30 08:04:01 +00:00
alc
2058f00606 - Avoid a lock-order reversal between Giant and a system map mutex that
occurs when kmem_malloc() fails to allocate a sufficient number of vm
   pages.  Specifically, we avoid the lock-order reversal by not grabbing
   Giant around pmap_remove() if the map is the kmem_map.

Approved by:	re (jhb)
Reported by:	Eugene <eugene3@web.de>
2003-11-19 18:48:45 +00:00
tjr
4ffa39d505 In vnode_pager_input_smlfs(), call VOP_STRATEGY instead of VOP_SPECSTRATEGY
on non-VCHR vnodes. This fixes a panic when reading data from files on a
filesystem with a small (less than a page) block size.

PR:		59271
Reviewed by:	alc
2003-11-15 09:54:11 +00:00
alc
cbe7fc3dda - Remove use of Giant from uma_zone_set_obj(). 2003-11-14 17:49:07 +00:00
alc
4c8c337e76 - Remove long dead code. 2003-11-14 08:22:38 +00:00
alc
69b0108429 Changes to msync(2)
- Return EBUSY if the region was wired by mlock(2) and MS_INVALIDATE
   is specified to msync(2).  This is required by the Open Group Base
   Specifications Issue 6.
 - vm_map_sync() doesn't return KERN_FAILURE.  Thus, msync(2) can't
   possibly return EIO.
 - The second major loop in vm_map_sync() handles sub maps.  Thus,
   failing on sub maps in the first major loop isn't necessary.
2003-11-14 06:55:11 +00:00
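
A userland sketch of the first point (assuming sufficient privilege for mlock(2), which at this time was restricted to the super-user):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int
    main(void)
    {
        size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, pgsz, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);

        if (p == MAP_FAILED || mlock(p, pgsz) != 0)
            return (1);        /* mlock(2) may require privilege */
        if (msync(p, pgsz, MS_INVALIDATE) == -1 && errno == EBUSY)
            printf("msync(MS_INVALIDATE) on wired pages: %s\n",
                strerror(errno));
        munlock(p, pgsz);
        return (0);
    }
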
alc
fc132b411d - The Open Group Base Specifications Issue 6 specifies that an munmap(2)
must return EINVAL if size is zero.
   Submitted by:	tegge
 - In order to avoid a race condition in multithreaded applications, the
   check and removal operations by munmap(2) must be in the same critical
   section.  To accommodate this, vm_map_check_protection() is modified to
   require its caller to obtain at least a read lock on the map.
2003-11-10 01:37:40 +00:00
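
A userland sketch of the EINVAL behavior described above:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int
    main(void)
    {
        size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
        void *p = mmap(NULL, pgsz, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);

        if (p == MAP_FAILED)
            return (1);
        if (munmap(p, 0) == -1)    /* zero length: must fail */
            printf("munmap(p, 0): %s\n", strerror(errno));
        munmap(p, pgsz);
        return (0);
    }
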
mini
7345277fb5 NFC: Update stale comments.
Reviewed by:	alc
2003-11-10 00:44:00 +00:00
alc
3dd60847d6 - Remove Giant from msync(2). Giant is still acquired by the lower layers
if we drop into the pmap or vnode layers.
 - Migrate the handling of zero-length msync(2)s into vm_map_sync() so that
multithreaded applications can't change the map between implementing the
   zero-length hack in msync(2) and reacquiring the map lock in
   vm_map_sync().

Reviewed by:	tegge
2003-11-09 22:09:04 +00:00
alc
99cf24e1d5 - Rename vm_map_clean() to vm_map_sync(). This better reflects the fact
that msync(2) is its only caller.
 - Migrate the parts of the old vm_map_clean() that examined the internals
   of a vm object to a new function vm_object_sync() that is implemented in
   vm_object.c.  At the same, introduce the necessary vm object locking so
   that vm_map_sync() and vm_object_sync() can be called without Giant.

Reviewed by:	tegge
2003-11-09 05:25:35 +00:00
alc
db577c50bc - Move the implementation of OBJ_ONEMAPPING from vm_map_delete() to
vm_map_entry_delete() so that all of the vm object manipulation is
   performed in one place.
2003-11-05 05:48:22 +00:00
marcel
40b4578074 Update avail_ssize for rstacks after growing them. 2003-11-04 06:48:58 +00:00
des
df69756f8b Whitespace cleanup. 2003-11-03 16:14:45 +00:00
alc
d8a239222c - Increase the scope of the source object lock in vm_map_copy_entry(). 2003-11-03 00:59:54 +00:00
alc
7f9352d8dd - Increase the scope of two vm object locks in vm_object_split(). 2003-11-02 22:52:42 +00:00
alc
135d6ec078 - Introduce and use vm_object_reference_locked(). Unlike
vm_object_reference(), this function must not be used to reanimate dead
   vm objects.  This restriction simplifies locking.

Reviewed by:	tegge
2003-11-02 21:30:10 +00:00
alc
5048b73dc7 - Increase the scope of two vm object locks in vm_object_collapse().
- Remove the acquisition and release of Giant from vm_object_coalesce().
2003-11-01 23:06:41 +00:00
alc
404767be51 - Modify swap_pager_copy() and its callers such that the source and
destination objects are locked on entry and exit.  Add comments to
   the callers noting that the locks can be released by swap_pager_copy().
 - Remove several instances of GIANT_REQUIRED.
2003-11-01 08:57:26 +00:00
alc
52d51e3fc1 - Additional vm object locking in vm_object_split()
- New vm object locking assertions in vm_page_insert() and
   vm_object_set_writeable_dirty()
2003-11-01 04:54:23 +00:00
alc
0bc9f88735 - Revert a part of revision 1.73: Make vm_object_set_flag() an inline
function.  This function is so trivial that inlining reduces the size
   of the kernel.
2003-10-31 20:17:00 +00:00
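
A sketch of why inlining a trivial setter can shrink the kernel: the body is a single OR, cheaper at each call site than argument setup plus a call. Invented names, not the actual vm_object code:

    #include <stdio.h>

    struct object {
        unsigned short flags;
    };

    /*
     * So trivial that inlining makes callers smaller: each call site
     * becomes a single OR instead of argument setup plus a call.
     */
    static inline void
    object_set_flag(struct object *obj, unsigned short bits)
    {
        obj->flags |= bits;
    }

    int
    main(void)
    {
        struct object obj = { 0 };

        object_set_flag(&obj, 0x0004);
        printf("flags = 0x%04x\n", obj.flags);
        return (0);
    }
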
alc
1a7561273d - Take advantage of the swap pager locking: Eliminate the use of Giant
from vm_object_madvise().
 - Remove excessive blank lines from vm_object_madvise().
2003-10-31 18:32:03 +00:00
marcel
85f7e242fb Fix two bugs introduced with the rstack functionality and specific to
the rstack functionality:
1. Fix a KASSERT that tests for the address to be above the upward
   growable stack. Typically for rstack, the faulting address can be
   identical to the record end of the upward growable entry, and
   very likely is on ia64. The KASSERT tested for greater than, not
   greater than or equal, so whenever the register stack had to be grown
   the assertion fired.
2. When we grow the upward growable stack entry and adjust the
   underlying object, don't forget to adjust the size of the VM map.
   Not doing so would trigger an assert in vm_map_zdtor().

Pointy hat: marcel (for not testing with INVARIANTS).
2003-10-31 07:29:28 +00:00
alc
9cb00ad5c9 - Synchronize access to the swdevt's sw_flags with sw_dev_mtx.
- Remove several instances of GIANT_REQUIRED.
2003-10-31 05:18:45 +00:00
alc
49922f4c16 - Synchronize access to the swdevt's sw_blist with sw_dev_mtx.
- Remove several instances of GIANT_REQUIRED.
2003-10-30 09:12:43 +00:00
alc
c34f18fd8c - Synchronize access to swdevhd using sw_dev_mtx.
- Use swp_sizecheck() rather than assignment to swap_pager_full in
   swaponsomething().
2003-10-30 07:11:06 +00:00
alc
9abe515c9c - Synchronize updates to nswapdev using sw_dev_mtx. 2003-10-29 07:51:41 +00:00
alc
8236ec01db - Avoid a race in swaponsomething(): Calculate the new swdevt's first and
end swblk and insert this new swdevt into the list of swap devices
   in the same critical section.
2003-10-29 05:42:28 +00:00
alc
bc4efe0cf2 - Complete the synchronization of accesses to the swblock hash table. 2003-10-27 05:58:15 +00:00
alc
a98eda3774 - Introduce and use a mutex synchronizing access to the swblock hash table. 2003-10-26 19:55:35 +00:00
alc
af5c3c9543 - Simplify vm_object_collapse()'s collapse case, reducing the number
of lock acquires and releases performed.
 - Move an assertion from vm_object_collapse() to vm_object_zdtor()
   because it applies to all cases of object destruction.
2003-10-26 06:29:26 +00:00
alc
c518b53eef - Add some of the required vm object locking, including assertions where
the vm object lock is required and already held.
2003-10-25 23:42:17 +00:00
alc
d9148d5648 - Align a comment within struct vm_page.
- Annotate the vm_page's valid field as synchronized by the containing
   vm object's lock.
2003-10-25 18:33:04 +00:00
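
A sketch of what such a synchronization annotation conventionally looks like in a structure definition (invented layout, not the real struct vm_page):

    /*
     * Locking key (the convention, not the actual vm_page fields):
     *     (o) synchronized by the containing object's lock
     *     (q) synchronized by the page queues lock
     */
    struct page {
        unsigned char valid;    /* (o) map of valid DEV_BSIZE chunks */
        unsigned char dirty;    /* (o) map of dirty DEV_BSIZE chunks */
        int           queue;    /* (q) which page queue, if any */
    };
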
alc
66c7e84c03 - Call vnode_pager_input_old() with the vm object locked. 2003-10-25 05:21:16 +00:00
alc
da3247f15f - Push down Giant from vm_pageout() to vm_pageout_scan(), freeing
vm_pageout_page_stats() from Giant.
 - Modify vm_pager_put_pages() and vm_pager_page_unswapped() to expect the
   vm object to be locked on entry.  (All of the pager routines now expect
   this.)
2003-10-24 06:43:04 +00:00
alc
3bd3ba6e2f - Retire vm_pageout_page_free(). Instead, use vm_page_select_cache() from
vm_pageout_scan().  Rationale: I don't like leaving a busy page in the
   cache queue with neither the vm object nor the vm page queues lock held.
 - Assert that the page is active in vm_pageout_page_stats().
2003-10-22 18:41:32 +00:00
alc
b99ee703ce - Assert that every page found in the active queue is an active page. 2003-10-22 03:08:24 +00:00
alc
1dab578b27 - Assert that the containing vm object is locked in
vm_page_set_validclean().  (This function reads and modifies the
   vm page's valid field, which is synchronized by the lock on the
   containing vm object.)
2003-10-21 19:36:51 +00:00
alc
d3354f0251 - Remove some long unused code. 2003-10-20 18:57:01 +00:00
alc
68f12077da - Remove comments referring to functions that no longer exist. 2003-10-20 05:16:27 +00:00
alc
cf82ee0544 - Hold the vm object's lock around calls to vm_page_set_validclean(). 2003-10-20 04:05:24 +00:00
alc
7f534aa0da - Synchronize access to a vm page's valid field using the containing
vm object's lock.
 - Reduce the scope of the vm page queues lock in two places.
2003-10-19 00:01:56 +00:00
alc
5205e8fc0e - Synchronize access to the page's valid field in
vnode_pager_generic_getpages() using the containing object's lock.
2003-10-18 21:30:29 +00:00
alc
acc2e67bab - Increase the object lock's scope in vm_contig_launder() so that access
to the object's type field and the call to vm_pageout_flush() are
   synchronized.
 - The above change allows for the elimination of the last parameter
   to vm_pageout_flush().
 - Synchronize access to the page's valid field in vm_pageout_flush()
   using the containing object's lock.
2003-10-18 21:09:21 +00:00
alc
7e20f6079b Corrections to revision 1.305
- Specifying VM_MAP_WIRE_HOLESOK should not assume that the start
   address is the beginning of the map.  Instead, move to the first
   entry after the start address.
 - The implementation of VM_MAP_WIRE_HOLESOK was incomplete.  This
   caused the failure of mlockall(2) in some circumstances.
2003-10-18 18:48:17 +00:00
phk
5c5a72acf5 DuH!
bp->b_iooffset (the spot on the disk), not bp->b_offset (the offset in
the file)
2003-10-18 14:10:28 +00:00
phk
06d8cf2ceb Initialize bp->b_offset before calling VOP_[SPEC]STRATEGY().
Remove stale comment about B_PHYS.
2003-10-18 11:11:05 +00:00
alc
66d71e6a11 - Synchronize access to a vm page's valid field using the containing
vm object's lock.
 - Release the vm object and vm page queues locks around vput().
2003-10-17 05:07:17 +00:00
alc
550b8621d9 - vm_fault_copy_entry() should not assume that the source object contains
every page.  If the source entry was read-only, one or more wired pages
   could be in backing objects.
 - vm_fault_copy_entry() should not set the PG_WRITEABLE flag on the page
   unless the destination entry is, in fact, writeable.
2003-10-15 08:00:45 +00:00
alc
6d6f15daf5 Lock the destination object in vm_fault_copy_entry(). 2003-10-08 07:11:19 +00:00
alc
6aec8820ce Retire vm_page_copy(). Its reason for being ended when peter@ modified
pmap_copy_page() et al. to accept a vm_page_t rather than a physical
address.  Also, this change will facilitate locking access to the vm page's
valid field.
2003-10-08 05:35:12 +00:00
bms
5cb68c177c Only the super-user should be able to wire pages via the mlock() family
of system calls at this time.  Remove various #ifdef's to enforce this.
2003-10-06 01:59:04 +00:00
bms
97fd5bcaf9 Move pmap_resident_count() from the MD pmap.h to the MI pmap.h.
Add a definition of pmap_wired_count().
Add a definition of vmspace_wired_count().

Reviewed by:	truckman
Discussed with:	peter
2003-10-06 01:47:12 +00:00
alc
bed390da77 The addition of a locking assertion to vm_page_zero_invalid() has revealed
a long-time bug: vm_pager_get_pages() assumes that m[reqpage] contains a
valid page upon return from pgo_getpages().  In the case of the device
pager this page has been freed and replaced by a fake page.  The fake page
is properly inserted into the vm object but m[reqpage] is left pointing
to a freed page.  For now, update m[reqpage] to point to the fake page.

Submitted by:	tegge
2003-10-05 22:23:44 +00:00
bms
0b197826d8 Revert previous commit. Come back vslock(), all is forgiven.
Pointy hat to:	bms
2003-10-05 12:41:08 +00:00
bms
74cf890021 Retire vslock() and vsunlock() with extreme prejudice.
Discussed with:	pete
2003-10-05 09:47:54 +00:00
alc
efd1f76e7f Assert that the containing vm object's lock is held in
vm_page_set_invalid().
2003-10-05 06:58:07 +00:00
alc
6f683aad7c Assert that the containing vm object's lock is held in
vm_page_zero_invalid().
2003-10-04 21:56:27 +00:00
alc
4f4885272c Synchronize access to a vm page's valid field using the containing
vm object's lock.
2003-10-04 21:35:48 +00:00
alc
b8eab77e1e - Extend the scope the vm object lock to cover calls to
vm_page_is_valid().
 - Assert that the lock on the containing vm object is held in
   vm_page_is_valid().
2003-10-04 19:23:29 +00:00
alc
1e229ab82b Synchronize access to a vm page's valid field using the containing
vm object's lock.
2003-10-04 19:13:27 +00:00
jeff
9596c52d53 - Use the UMA_ZONE_VM flag on the fakepg and object zones to prevent
vm recursion and LORs.  This may be necessary for other zones created in
   the vm but this needs to be verified.
2003-10-04 14:21:53 +00:00
alc
bc6bc9a64d Migrate pmap_prefault() into the machine-independent virtual memory layer.
A small helper function pmap_is_prefaultable() is added.  This function
encapsulate the few lines of pmap_prefault() that actually vary from
machine to machine.  Note: pmap_is_prefaultable() and pmap_mincore() have
much in common.  Going forward, it's worth considering their merger.
2003-10-03 22:46:53 +00:00
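
A sketch of the encapsulation pattern, with invented names and an arbitrary stand-in policy: the loop over neighboring pages lives once in machine-independent code and consults a small machine-dependent predicate.

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * MD predicate (stubbed): each architecture answers "is this
     * address cheap to prefault?" in its own few lines.  The policy
     * below is an arbitrary stand-in.
     */
    static bool
    pmap_is_prefaultable_stub(void *pmap, unsigned long addr)
    {
        (void)pmap;
        return (addr < 0x10002000UL);
    }

    /*
     * MI caller: the loop over neighboring pages lives in common code
     * instead of being duplicated in every machine's pmap.
     */
    static void
    prefault_neighbors(void *pmap, unsigned long faultaddr)
    {
        for (int i = -2; i <= 2; i++) {
            unsigned long addr = faultaddr + (long)i * 4096;
            if (pmap_is_prefaultable_stub(pmap, addr))
                printf("prefault 0x%lx\n", addr);
        }
    }

    int
    main(void)
    {
        prefault_neighbors(NULL, 0x10001000UL);
        return (0);
    }
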
alc
499c62ff42 In vm_page_remove(), assert that the vm object is locked, unless an Alpha.
(The Alpha still requires updates to its pmap.)
2003-09-28 04:50:48 +00:00
marcel
c37d5fae88 Part 2 of implementing rstacks: add the ability to create rstacks and
use the ability on ia64 to map the register stack. The orientation of
the stack (i.e. its grow direction) is passed to vm_map_stack() in the
overloaded cow argument. Since the grow direction is represented by
bits, it is possible and allowed to create bi-directional stacks.
This is not an advertised feature, more of a side-effect.

Fix a bug in vm_map_growstack() that's specific to rstacks and which
we could only find by having the ability to create rstacks: when
the mapped stack ends at the faulting address, we have not actually
mapped the faulting address. We need to include or cover the faulting
address.

Note that at this time mmap(2) has not been extended to allow the
creation of rstacks by processes. If such a need arises, this can
be done.

Tested on: alpha, i386, ia64, sparc64
2003-09-27 22:28:14 +00:00
phk
eaca01f3e7 Provide a bit more help with "memory overwritten after free" style bugs. 2003-09-27 21:33:13 +00:00
peter
420ccff7be Add sysentvec->sv_fixlimits() hook so that we can catch cases on 64 bit
systems where the data/stack/etc limits are too big for a 32 bit process.

Move the 5 or so identical instances of ELF_RTLD_ADDR() into imgact_elf.c.

Supply an ia32_fixlimits function.  Export the clip/default values to
sysctl under the compat.ia32 hierarchy.

Have mmap(0, ...) respect the current p->p_limits[RLIMIT_DATA].rlim_max
value rather than the sysctl tweakable variable.  This allows mmap to
place mappings at sensible locations when limits have been reduced.

Have the imgact_elf.c ld-elf.so.1 placement algorithm use the same
method as mmap(0, ...) now does.

Note that we cannot remove all references to the sysctl tweakable
maxdsiz etc variables because /etc/login.conf specifies a datasize
of 'unlimited'.  And that causes exec etc to fail since it can no
longer find space to mmap things.
2003-09-25 01:10:26 +00:00
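
A userland peek at the pieces involved (the actual change is in the kernel's placement logic): read RLIMIT_DATA, then let the kernel choose the address for a hint-less mapping.

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <sys/resource.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        struct rlimit rl;
        void *p;

        /* The kernel's placement logic now consults this limit. */
        if (getrlimit(RLIMIT_DATA, &rl) == 0)
            printf("RLIMIT_DATA: cur=%ju max=%ju\n",
                (uintmax_t)rl.rlim_cur, (uintmax_t)rl.rlim_max);

        /* addr == NULL leaves placement to the kernel. */
        p = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p != MAP_FAILED)
            printf("kernel placed a 1MB mapping at %p\n", p);
        return (0);
    }
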
silby
20104b753f Adjust the kmapentzone limit so that it takes into account the size of
maxproc and maxfiles, as procs, pipes, and other structures cause allocations
from kmapentzone.

Submitted by:	tegge
2003-09-23 18:56:54 +00:00
alc
0bf68803b7 Change the handling of the kernel and kmem objects in vm_map_delete(): In
order to use "unmanaged" pages in the kmem object, vm_map_delete() must
unconditionally perform pmap_remove().  Otherwise, sparc64 has problems.

Tested by:	jake
2003-09-23 04:28:04 +00:00
alc
da8a79a54b Initialize the page's pindex field even for VM_ALLOC_NOOBJ allocations.
(This field is useful for implementing sanity checks even if the page does
not belong to an object.)
2003-09-22 00:56:13 +00:00
jeff
9cb0f173a3 - Fix MD_SMALL_ALLOC on architectures that support it. Define a new alloc
function, startup_alloc(), that is used for single page allocations prior
to the VM starting up.  If it is used after the VM starts up, it
   replaces the zone's allocf pointer with either page_alloc() or
   uma_small_alloc() where appropriate.

Pointy hat to:	me
Tested by:	phk/amd64, me/x86
2003-09-21 07:39:16 +00:00
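
A simplified sketch of the hand-off, with invented names; in the real code the switch happens inside UMA once the VM is up, while here the caller swaps the function pointer when the boot reserve runs dry:

    #include <stdio.h>
    #include <stdlib.h>

    /* Invented zone with a swappable backend, mirroring the
     * startup_alloc() to page_alloc() hand-off described above. */
    struct zone {
        void *(*allocf)(size_t);
    };

    static void *
    startup_alloc(size_t size)
    {
        static char bootpages[8192];    /* boot-time reserve */
        static size_t used;

        if (used + size <= sizeof(bootpages)) {
            void *p = bootpages + used;
            used += size;
            return (p);
        }
        return (NULL);    /* reserve exhausted; the real code instead
                             checks whether the VM is up */
    }

    static void *
    final_alloc(size_t size)
    {
        return (malloc(size));    /* stand-in for the real page allocator */
    }

    int
    main(void)
    {
        struct zone z = { startup_alloc };

        for (int i = 0; i < 3; i++) {
            void *p = z.allocf(4096);
            if (p == NULL) {    /* switch backends, as the zone does */
                z.allocf = final_alloc;
                p = z.allocf(4096);
            }
            printf("alloc %d -> %p via %s\n", i, p,
                z.allocf == startup_alloc ? "startup_alloc" : "final_alloc");
        }
        return (0);
    }
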
peter
6c34326f33 Bad Jeffr! No cookie!
Temporarily disable the UMA_MD_SMALL_ALLOC stuff since recent commits
break sparc64, amd64, ia64 and alpha.  It appears only i386 and maybe
powerpc were not broken.
2003-09-20 23:35:33 +00:00
jeff
c1c7e8e350 - Remove the working-set algorithm. Instead, use the per cpu buckets as the
working set cache.  This has several advantages.  Firstly, we never touch
   the per cpu queues now in the timeout handler.  This removes one more
   reason for having per cpu locks.  Secondly, it reduces the size of the zone
   by 8 bytes, bringing it under 200 bytes for a single proc x86 box.  This
   tidies up other logic as well.
 - The 'destroy' flag no longer needs to be passed to zone_drain() since it
   always frees everything in the zone's slabs.
 - cache_drain() is now only called from zone_dtor() and so it destroys by
   default.  It also does not need the destroy parameter now.
2003-09-19 23:27:46 +00:00
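
The bucket idea in miniature: a small per-CPU array of free items fronts the shared zone, so most alloc/free pairs never touch shared state. Invented structures, not UMA's actual ones, and single-threaded for brevity:

    #include <stdio.h>
    #include <stdlib.h>

    #define BUCKET_SIZE 16

    /* Invented per-CPU cache of recently freed items. */
    struct bucket {
        void *items[BUCKET_SIZE];
        int count;
    };

    static void *
    bucket_alloc(struct bucket *b, size_t size)
    {
        if (b->count > 0)
            return (b->items[--b->count]);    /* hit: no shared state */
        return (malloc(size));                /* miss: go to the zone */
    }

    static void
    bucket_free(struct bucket *b, void *item)
    {
        if (b->count < BUCKET_SIZE)
            b->items[b->count++] = item;      /* retained as working set */
        else
            free(item);                       /* full: give one back */
    }

    int
    main(void)
    {
        struct bucket b = { .count = 0 };
        void *p = bucket_alloc(&b, 64);

        bucket_free(&b, p);                   /* cached in the bucket... */
        void *q = bucket_alloc(&b, 64);       /* ...and handed right back */
        printf("reused: %s\n", p == q ? "yes" : "no");
        free(q);
        return (0);
    }
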
jeff
52d8179a1f - Remove the cache colorization code. We can't use it due to all of the
broken consumers of the malloc interface who assume that the allocated
   address will be an even multiple of the size.
 - Remove disabled time delay code on uma_reclaim().  The comment there said
   it all.  It was not an effective strategy and it should not be left in
   #if 0'd for all eternity.
2003-09-19 23:04:44 +00:00
jeff
2348b96bcc - There is an endless stream of style(9) errors in this file. Fix a few.
Also catch some spelling errors.
2003-09-19 22:31:45 +00:00
jeff
e77a30f7d5 - Don't inspect the zone in page_alloc(). It may be NULL.
- Don't cache more items than the zone would like in uma_zalloc_bucket().
2003-09-19 09:22:04 +00:00