Commit Graph

253 Commits

Author SHA1 Message Date
Alan Cox
410cfc455e Lock the vm_object in obj_alloc(). 2003-04-19 00:30:36 +00:00
Andrew Gallatin
b37d8ead52 Don't grab Giant in slab_zalloc() if M_NOWAIT is specified. This
should allow the use of INTR_MPSAFE network drivers.

Tested by: njl
Glanced at by: jeff
2003-04-18 13:02:29 +00:00
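A minimal sketch of the conditional Giant acquisition this commit describes, as it would sit inside slab_zalloc(); the locals and the uz_allocf call are illustrative stand-ins, not the committed code.

	void *mem;
	u_int8_t pflag;

	/* Only a sleeping (non-M_NOWAIT) allocation takes Giant around the VM call. */
	if ((flags & M_NOWAIT) == 0)
		mtx_lock(&Giant);
	mem = zone->uz_allocf(zone, bytes, &pflag, flags);
	if ((flags & M_NOWAIT) == 0)
		mtx_unlock(&Giant);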
Tor Egge
125ee0d161 Obtain Giant before calling kmem_alloc without M_NOWAIT and before calling
kmem_free if Giant isn't already held.
2003-03-26 18:44:53 +00:00
John Baldwin
263067951a Replace calls to WITNESS_SLEEP() and witness_list() with equivalent calls
to WITNESS_WARN().
2003-03-04 21:03:05 +00:00
Warner Losh
a163d034fa Back out M_* changes, per decision of the TRB.
Approved by: trb
2003-02-19 05:47:46 +00:00
Poul-Henning Kamp
886eaaacfa Change a printf to also tell how many items were left in the zone. 2003-02-04 08:23:18 +00:00
Alfred Perlstein
44956c9863 Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0.
Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
2003-01-21 08:56:16 +00:00
Jeff Roberson
ebc85edf5e - M_WAITOK is 0 and not a real flag. Test for this properly.
Submitted by:	tmm
Pointy hat to:	jeff
2003-01-20 01:32:56 +00:00
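A short sketch of the point being made: with M_WAITOK defined as 0 in this era, testing it with a bitwise AND can never succeed; the correct test for "caller may sleep" is the absence of M_NOWAIT.

	if (flags & M_WAITOK) {
		/* never reached: M_WAITOK is 0, not a real bit */
	}
	if ((flags & M_NOWAIT) == 0) {
		/* the sleeping-allocation path belongs here */
	}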
Jens Schweikhardt
9d5abbddbf Correct typos, mostly s/ a / an / where appropriate. Some whitespace cleanup,
especially in troff files.
2003-01-01 18:49:04 +00:00
Jeff Roberson
74c924b553 - Wakeup the correct address when a zone is no longer full.
Spotted by:	jake
2002-11-18 08:27:14 +00:00
Jeff Roberson
f3da1873bc - Don't forget the flags value when using boot pages.
Reported by:	grehan
2002-11-16 20:57:41 +00:00
Matt Jacob
81f71edaec atomic_set_8 isn't MI. Instead, follow Jake's suggestions about
ZONE_LOCK.
2002-11-11 11:50:03 +00:00
Jeff Roberson
48eea37508 - Add support for machine dependent page allocation routines. MD code
may define UMA_MD_SMALL_ALLOC to make use of this feature.

Reviewed by:	peter, jake
2002-11-01 01:01:27 +00:00
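A hedged sketch of the hook this adds: an architecture that can map single pages cheaply (e.g. through a direct map) defines UMA_MD_SMALL_ALLOC and supplies its own page allocator matching uma's alloc/free function types. The names below follow later versions of this interface and should be read as an assumption, not the exact committed signatures.

	/* machine/vmparam.h (illustrative): advertise the MD allocator. */
	#define	UMA_MD_SMALL_ALLOC

	/* MD code then provides page-at-a-time hooks: */
	void	*uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags,
		    int wait);
	void	 uma_small_free(void *mem, int size, u_int8_t flags);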
Jeff Roberson
bbee39c629 - Now that uma_zalloc_internal is not the fast path don't be so fussy about
extra function calls.  Refactor uma_zalloc_internal into separate functions
   for finding the most appropriate slab, filling buckets, allocating single
   items, and pulling items off of slabs.  This makes the code significantly
   cleaner.
 - This also fixes the "Returning an empty bucket." panic that a few people
   have seen.

Tested On:	alpha, x86
2002-10-24 07:59:03 +00:00
Jeff Roberson
bba739abf9 - Move the destructor calls so that they are not called with the zone lock
held.  This avoids a lock order reversal when destroying zones.
   Unfortunately, this also means that the free checks are not done before
   the destructor is called.

Reported by:	phk
2002-10-24 06:17:30 +00:00
Poul-Henning Kamp
37c841831f Be consistent about "static" functions: if the function is marked
static in its prototype, mark it static at the definition too.

Inspired by:    FlexeLint warning #512
2002-09-28 17:15:38 +00:00
Jeff Roberson
f461cf2297 - Use my freebsd email alias in the copyright.
- Remove redundant instances of my email alias in the file summary.
2002-09-19 06:05:32 +00:00
Jeff Roberson
99571dc345 - Split UMA_ZFLAG_OFFPAGE into UMA_ZFLAG_OFFPAGE and UMA_ZFLAG_HASH.
- Remove all instances of the mallochash.
 - Stash the slab pointer in the vm page's object pointer when allocating from
   the kmem_obj.
 - Use the overloaded object pointer to find slabs for malloced memory.
2002-09-18 08:26:30 +00:00
Archie Cobbs
55f7c614fd Don't use "NULL" when "0" is really meant. 2002-08-21 23:39:52 +00:00
Jeff Roberson
17b9cc4941 Fix a lock order reversal in uma_zdestroy. The uma_mtx needs to be held across
calls to zone_drain().

Noticed by:	scottl
2002-07-05 21:39:52 +00:00
Jeff Roberson
f5118d6aaf Remove unnecessary includes. 2002-07-05 05:16:19 +00:00
Jeff Roberson
e221e841b0 Actually use the fini callback.
Pointy hat to:	me :-(
Noticed By:	Julian
2002-07-03 00:30:51 +00:00
Jeff Roberson
5c0e403ba2 Reduce the amount of code that runs with the zone lock held in slab_zalloc().
This allows us to run the zone initialization functions without any locks held.
2002-06-25 21:04:50 +00:00
Jeff Roberson
3370c5bfd7 - Remove bogus use of kmem_alloc that was inherited from the old zone
allocator.
- Properly set M_ZERO when talking to the back end page allocators for
  non malloc zones.  This forces us to zero fill pages when they are first
  brought into a cache.
- Properly handle M_ZERO in uma_zalloc_internal.  This fixes a problem where
  per cpu buckets weren't always getting zeroed.
2002-06-19 20:49:44 +00:00
Jeff Roberson
4741dcbff5 Honor the BUCKETCACHE flag on free as well. 2002-06-17 23:53:58 +00:00
Jeff Roberson
18aa2de5a7 - Introduce the new M_NOVM option which tells uma to only check the currently
allocated slabs and bucket caches for free items.  It will not go ask the vm
  for pages.  This differs from M_NOWAIT in that it not only doesn't block, it
  doesn't even ask.

- Add a new zcreate option ZONE_VM that sets the BUCKETCACHE zflag.  This
  tells uma that it should only allocate buckets out of the bucket cache, and
  not from the VM.  It does this by using the M_NOVM option to zalloc when
  getting a new bucket.  This is so that the VM doesn't recursively enter
  itself while trying to allocate buckets for vm_map_entry zones.  If there
  are already allocated buckets when we get here we'll still use them but
  otherwise we'll skip it.

- Use the ZONE_VM flag on vm map entries and pv entries on x86.
2002-06-17 22:02:41 +00:00
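A hedged sketch of the two options from the caller's side; the flag spellings (M_NOVM on the allocation, a UMA_ZONE_VM-style flag at zone creation) follow the commit's description and may not match the final headers exactly.

	static uma_zone_t mapentzone;
	void *item;

	/* A zone whose buckets must come only from memory uma already caches,
	 * so the VM never recurses into itself (the vm_map_entry case above). */
	mapentzone = uma_zcreate("MAP ENTRY", sizeof(struct vm_map_entry),
	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_VM);

	/* An allocation that only checks already-allocated slabs and bucket
	 * caches: it neither blocks nor asks the VM for new pages. */
	item = uma_zalloc(mapentzone, M_NOWAIT | M_NOVM);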
Ian Dowse
f97d6ce396 Correct the logic for determining whether the per-CPU locks need
to be destroyed. This fixes a problem where destroying a UMA zone
would fail to destroy all zone mutexes.

Reviewed by:	jeff
2002-06-10 03:25:23 +00:00
Jeff Roberson
494273bead Add a comment describing a resource leak that occurs during a failure case
in obj_alloc.
2002-06-03 22:59:19 +00:00
John Baldwin
4c1cc01cd8 In uma_zalloc_arg(), if we are performing a M_WAITOK allocation, ensure
that td_intr_nesting_level is 0 (like malloc() does).  Since malloc() calls
uma we can probably remove the check in malloc() for this now.  Also,
perform an extra witness check in that case to make sure we don't hold
any locks when performing a M_WAITOK allocation.
2002-05-20 17:54:48 +00:00
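A hedged sketch of the two checks described, written with the later WITNESS_WARN() spelling (a 2003 commit near the top of this log replaced WITNESS_SLEEP() with it); the format string is illustrative.

	if ((flags & M_NOWAIT) == 0) {
		KASSERT(curthread->td_intr_nesting_level == 0,
		    ("uma_zalloc_arg: sleeping allocation in interrupt context"));
		WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
		    "uma_zalloc_arg: zone \"%s\"", zone->uz_name);
	}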
Jeff Roberson
713deb3677 Don't call the uz free function while the zone lock is held. This can lead
to lock order reversals.  uma_reclaim now builds a list of freeable slabs and
then unlocks the zones to do all of the frees.
2002-05-13 05:08:18 +00:00
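A hedged sketch of the collect-then-free shape uma_reclaim() takes after this change; the list head, field names, and slab_free() helper are illustrative, not the exact uma_core.c identifiers.

	struct uma_slab *slab;
	LIST_HEAD(, uma_slab) freeslabs = LIST_HEAD_INITIALIZER(freeslabs);

	ZONE_LOCK(zone);
	while ((slab = LIST_FIRST(&zone->uz_free_slab)) != NULL) {
		LIST_REMOVE(slab, us_link);
		LIST_INSERT_HEAD(&freeslabs, slab, us_link);
	}
	ZONE_UNLOCK(zone);

	/* No zone lock is held here, so the uz free function can take
	 * whatever locks it needs without a lock order reversal. */
	while ((slab = LIST_FIRST(&freeslabs)) != NULL) {
		LIST_REMOVE(slab, us_link);
		slab_free(zone, slab);		/* illustrative helper */
	}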
Jeff Roberson
0aef6126a1 Remove the hash_free() lock order reversal. This could have happened for
several reasons before.  Fixing it involved restructuring the generic hash
code to require calling code to handle locking, unlocking, and freeing hashes
on error conditions.
2002-05-13 04:39:28 +00:00
Jeff Roberson
c7173f58fa Use pages instead of uz_maxpages, which has not been initialized yet, when
creating the vm_object.  This was broken after the code was rearranged to
grab giant itself.

Spotted by:     alc
2002-05-04 21:49:29 +00:00
Jeff Roberson
b9ba893179 Move around the dbg code a bit so it's always under a lock. This stops a
weird potential race if we were preempted right as we were doing the dbg
checks.
2002-05-02 09:05:36 +00:00
Andrew R. Reiter
c3bdc05fb9 - Changed the size element of uma_zctor_args to be size_t instead of int.
- Changed uma_zcreate to accept the size argument as a size_t instead of
  int.

Approved by:	jeff
2002-05-02 07:36:30 +00:00
Jeff Roberson
5a34a9f089 malloc/free(9) no longer require Giant. Use the malloc_mtx to protect the
mallochash.  Mallochash is going to go away as soon as I introduce the
kfree/kmalloc api and partially overhaul the malloc wrapper.  This can't happen
until all users of the malloc api that expect memory to be aligned on the size
of the allocation are fixed.
2002-05-02 07:22:19 +00:00
Jeff Roberson
639c9550fb Remove the temporary alignment check in free().
Implement the following checks on freed memory in the bucket path:
	- Slab membership
	- Alignment
	- Duplicate free

This previously was only done if we skipped the buckets.  This code will slow
down INVARIANTS a bit, but it is smp safe.  The checks were moved out of the
normal path and into hooks supplied in uma_dbg.
2002-05-02 02:08:48 +00:00
Jeff Roberson
2cc35ff9c6 Move the implementation of M_ZERO into UMA so that it can be passed to
uma_zalloc and friends.  Remove this functionality from the malloc wrapper.

Document this change in uma.h and adjust variable names in uma_core.
2002-04-30 04:26:34 +00:00
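A one-line usage sketch of the change: the zero-fill request now travels with the uma allocation flags instead of being handled only in the malloc() wrapper (foo_zone is an illustrative zone).

	struct foo *p;

	p = uma_zalloc(foo_zone, M_WAITOK | M_ZERO);	/* item comes back zero-filled */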
Jeff Roberson
28bc44195c Add a new zone flag UMA_ZONE_MTXCLASS. This puts the zone in its own
mutex class.  Currently this is only used for kmapentzone because kmapents
are potentially allocated when freeing memory.  This is not dangerous
though because no other allocations will be done while holding the
kmapentzone lock.
2002-04-29 23:45:41 +00:00
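A hedged sketch of how the kmapentzone described above would be created with the new flag; the name, size, and companion flags are illustrative.

	kmapentzone = uma_zcreate("KMAP ENTRY", sizeof(struct vm_map_entry),
	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR,
	    UMA_ZONE_MTXCLASS | UMA_ZONE_VM);
	/* The private mutex class keeps witness quiet when a kmap entry is
	 * allocated while another zone's lock is already held on the free path. */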
Andrew R. Reiter
d4d6aee5a0 - Fix a round down bogon in uma_zone_set_max().
Submitted by: jeff@
2002-04-25 06:24:40 +00:00
Jeff Roberson
5300d9dda2 Fix a witness warning when expanding a hash table. We were allocating the new
hash while holding the lock on a zone.  Fix this by doing the allocation
separately from the actual hash expansion.

The lock is dropped before the allocation and reacquired before the expansion.
The expansion code checks to see if we lost the race and frees the new hash
if we do.  We really never will lose this race because the hash expansion is
single threaded via the timeout mechanism.
2002-04-14 13:47:10 +00:00
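A hedged sketch of the allocate-outside-the-lock, check-for-a-lost-race shape described above; hash_alloc(), hash_expand(), and hash_free() stand in for the restructured helpers, and their signatures here are an assumption.

	struct uma_hash newhash;

	ZONE_UNLOCK(zone);
	hash_alloc(&newhash);			/* may sleep: no zone lock held */
	ZONE_LOCK(zone);

	if (hash_expand(&zone->uz_hash, &newhash) == 0) {
		/* Lost the race (someone already expanded): discard our table,
		 * again without holding the zone lock. */
		ZONE_UNLOCK(zone);
		hash_free(&newhash);
		ZONE_LOCK(zone);
	}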
Jeff Roberson
0da47b2fc6 Protect the initial list traversal in sysctl_vm_zone() with the uma_mtx. 2002-04-14 12:39:38 +00:00
Jeff Roberson
af7f9b97b6 Fix the calculation that determines uz_maxpages. It was off for large zones.
Fortunately we have no large zones with maximums specified yet, so it wasn't
breaking anything.

Implement blocking when a zone exceeds the maximum and M_WAITOK is specified.
Previously this just failed like the old zone allocator did.  The old zone
allocator didn't support WAITOK/NOWAIT though so we should do what we
advertise.

While I was in there I cleaned up some more zalloc logic to further simplify
that code path and reduce redundant code.  This was needed to make the blocking
work properly anyway.
2002-04-14 01:56:25 +00:00
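A hedged sketch of the new blocking behaviour at the zone limit; uz_pages/uz_maxpages and the wait message are illustrative, and the matching wakeup() on the free side is implied (a later commit in this log fixes the address it uses).

	while (zone->uz_maxpages != 0 && zone->uz_pages >= zone->uz_maxpages) {
		if (flags & M_NOWAIT) {
			ZONE_UNLOCK(zone);
			return (NULL);		/* old behaviour: just fail */
		}
		/* M_WAITOK case: sleep until a free drops us below the limit. */
		msleep(zone, &zone->uz_lock, PVM, "zonelimit", 0);
	}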
Jeff Roberson
bce9779110 Remember to unlock the zone if the fill count is too high.
Pointed out by:	pete, jake, jhb
2002-04-10 01:52:50 +00:00
Jeff Roberson
86bbae32f4 Add a mechanism to disable buckets when the v_free_count drops below
v_free_min.  This should help performance in memory starved situations.
2002-04-08 06:20:34 +00:00
Jeff Roberson
605cbd6a08 Don't release the zone lock until after the dtor has been called. As far as I
can tell this could not have caused any problems yet because UMA is still
called with Giant.

Pointy hat to:	jeff
Noticed by:	jake
2002-04-08 05:13:48 +00:00
Jeff Roberson
9c2cd7e5a9 Implement uma_zdestroy(). Its prototype changed slightly. I decided that I
didn't like the wait argument and that if you were removing a zone it had
better be empty.

Also, I broke out part of hash_expand and made a separate hash_free() for use
in uma_zdestroy.
2002-04-08 04:48:58 +00:00
Jeff Roberson
a553d4b8eb Rework most of the bucket allocation and free code so that per cpu locks are
never held across blocking operations.  Also, fix two other lock order
reversals that were exposed by jhb's witness change.

The free path previously had a bug that would cause it to skip the free bucket
list in some cases and go straight to allocating a new bucket.  This has been
fixed as well.

These changes made the bucket handling code much cleaner and removed quite a
few lock operations.  This should be marginally faster now.

It is now possible to call malloc w/o Giant and avoid any witness warnings.
This still isn't entirely safe though because malloc_type statistics are not
protected by any lock.
2002-04-08 02:42:55 +00:00
Jeff Roberson
d0b06acbe1 This fixes a bug where isitem never got set to 1 if a certain chain of events
relating to extreme low-memory situations occurred.  This was only ever seen on
the port build cluster, so many thanks to kris for helping me debug this.

Tested by:	kris
2002-04-07 22:47:36 +00:00
John Baldwin
6008862bc2 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
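A sketch of the new calling convention: the second argument is the lock's name, the third the witness type used to group related locks (NULL for the common case). The UMA and network strings below are illustrative.

	mtx_init(&foo_mtx, "foo", NULL, MTX_DEF);			/* typical caller */
	mtx_init(&sc->sc_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF);
	mtx_init(&zone->uz_lock, zone->uz_name, "UMA zone", MTX_DEF);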
Alfred Perlstein
157d7b3538 fix comment typo, s/neccisary/necessary/g 2002-04-02 21:25:12 +00:00
Jeff Roberson
f4af24d55d Reset the cachefree statistics after draining the cache. This fixes a bug
where a sysctl within 20 seconds of a cache_drain could yield negative "USED"
counts.

Also, grab the uma_mtx while in the sysctl handler.  This hadn't caused
problems yet because Giant is held all the time.

Reported by:	kkenn
2002-03-24 10:56:11 +00:00
Jeff Roberson
736ee5907f Add uma_zone_set_max() to add enforced limits to non vm obj backed zones. 2002-03-20 05:28:34 +00:00
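A one-line usage sketch of the new knob (zone and limit are illustrative):

	uma_zone_set_max(foo_zone, 1024);	/* cap the zone at 1024 items */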
Jeff Roberson
8355f576a9 This is the first part of the new kernel memory allocator. This replaces
malloc(9) and vm_zone with a slab-like allocator.

Reviewed by:	arch@
2002-03-19 09:11:49 +00:00
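For orientation, a minimal sketch of the consumer-facing API this allocator introduces, in its uma(9) spelling; this is a usage illustration, not code from the commit.

	static uma_zone_t foo_zone;

	/* One-time setup: a zone of fixed-size foo objects. */
	foo_zone = uma_zcreate("foo", sizeof(struct foo),
	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);

	/* Fast path: per-CPU buckets sit in front of per-zone slabs. */
	struct foo *f = uma_zalloc(foo_zone, M_WAITOK);
	uma_zfree(foo_zone, f);

	/* Teardown (see the uma_zdestroy() commit above). */
	uma_zdestroy(foo_zone);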