Commit Graph

131 Commits

Author SHA1 Message Date
Dag-Erling Smørgrav
84344f9fbf Rename the kern.vm.kmem.size tunable to the more logical vm.kmem_size. To
ensure backward compatibility (conditional on !BURN_BRIDGES), look it up
by its old name first, and log a warning (but accept the setting) if it
was found.  If both the old and new name are defined, the new name takes
precedence.

Also export vm.kmem_size as a read-only sysctl variable; I find it hard to
tune a parameter when I don't know its default value, especially when that
default value is computed at boot time.
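A minimal sketch of that lookup order (illustrative only; it assumes the
TUNABLE_ULONG_FETCH/SYSCTL_ULONG kernel interfaces, and the warning text
is paraphrased):

	/* at file scope: export the computed value read-only */
	SYSCTL_ULONG(_vm, OID_AUTO, kmem_size, CTLFLAG_RD, &vm_kmem_size, 0,
	    "Effective vm.kmem_size");

	/* in the init path: old name first, new name takes precedence */
	u_long sz = 0;
#ifndef BURN_BRIDGES
	if (TUNABLE_ULONG_FETCH("kern.vm.kmem.size", &sz) && sz != 0)
		printf("kern.vm.kmem.size is obsolete; use vm.kmem_size\n");
#endif
	TUNABLE_ULONG_FETCH("vm.kmem_size", &sz);
	if (sz != 0)
		vm_kmem_size = sz;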
2004-01-27 15:59:38 +00:00
Jeff Roberson
9fb535dec5 - Only use UMA to cache malloc requests up to PAGE_SIZE. Values larger than
this are requested very infrequently and waste memory when we cache
   spares.
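In effect, the allocation path splits on PAGE_SIZE; a rough sketch (helper
names are approximations, not the literal code):

	if (size <= PAGE_SIZE)
		va = uma_zalloc(zone, flags);	/* zone chosen by request size */
	else
		va = uma_large_malloc(size, flags);	/* uncached, straight to VM */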
2003-09-19 04:39:08 +00:00
Poul-Henning Kamp
68f2d20b70 Revert stuff which accidentally ended up in the previous commit. 2003-07-22 10:36:36 +00:00
Poul-Henning Kamp
55d1d7034f Don't attempt to inline the large functions mb_alloc() and mb_free();
doing so more than doubles the text size of this file.

GCC has wisely ignored us on this previously.
2003-07-22 10:24:41 +00:00
Mike Silbersack
347194c172 Add init_param3() to subr_param. This function is called
immediately after the kernel map has been sized, and is
the optimal place to autosize memory allocations that
live within the kernel map.

Suggested by:	bde
2003-07-11 00:01:03 +00:00
Paul Saab
1795d0cdec Don't overflow when calculating vm_kmem_size. This fixes "kmem_map
too small" panics on PAE machines with odd sizes above 4GB (4.5GB
would yield 20MB of KVA for kmem_map instead of 200MB).
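The bug in miniature (a sketch of the idea, not the literal diff): with
32-bit arithmetic, multiplying the page count by PAGE_SIZE wraps before the
scale divide, so divide first or widen the intermediate.

	/* overflow-prone: the product wraps above 4GB of physical memory */
	vm_kmem_size = cnt.v_page_count * PAGE_SIZE / VM_KMEM_SIZE_SCALE;
	/* safe: divide the page count before multiplying */
	vm_kmem_size = (cnt.v_page_count / VM_KMEM_SIZE_SCALE) * PAGE_SIZE;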

Submitted by:	John Cagle <john.cagle@hp.com>, jeff
Reviewed by:	jeff, peter, scottl, lots of USENIX folks
2003-06-11 05:18:59 +00:00
David E. O'Brien
677b542ea2 Use __FBSDID(). 2003-06-11 00:56:59 +00:00
Poul-Henning Kamp
1282e9acea Don't pass a NULL pointer to memset if we are compiled with DIAGNOSTIC.
Approved by:	re/rwatson
2003-05-12 05:09:56 +00:00
Poul-Henning Kamp
8cb72d6174 Add two KASSERTs which trigger if free(9) would drag the "memuse" statistic
for a malloc bucket below zero.  This typically happens if you malloc(9)
from one bucket and free to another.
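Roughly what such an assertion looks like (field names from struct
malloc_type; the message is paraphrased):

	KASSERT(type->ks_memuse >= size,
	    ("free(9): \"%s\" memuse would go negative; freed to the wrong bucket?",
	    type->ks_shortdesc));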
2003-05-05 08:32:53 +00:00
Poul-Henning Kamp
3f6ee876c1 Update the "last malloc failure" timestamp for simulated
malloc errors as well.
2003-04-25 21:49:24 +00:00
Robert Watson
f2538508f6 Permit debug.malloc.failure_rate to be specified using a tunable so
that the feature can be enabled during the boot process.  Note the
continued limitation that FreeBSD fails so rapidly with this setting
enabled that it's hard to narrow down particular failures for
correction; we really need per-malloc type failure rates.
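The tunable/sysctl pairing that enables this at boot, approximately
(assumes a debug.malloc sysctl node already exists):

	static int malloc_failure_rate;
	TUNABLE_INT("debug.malloc.failure_rate", &malloc_failure_rate);
	SYSCTL_INT(_debug_malloc, OID_AUTO, failure_rate, CTLFLAG_RW,
	    &malloc_failure_rate, 0, "Every (n)th malloc call fails");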
2003-03-26 20:44:29 +00:00
Robert Watson
eae870cdb4 Add a new kernel option, MALLOC_MAKE_FAILURES, which compiles
in a debugging feature causing M_NOWAIT allocations to fail at
a specified rate.  This can be useful for detecting poor
handling of M_NOWAIT: the most frequent problems I've bumped
into are unconditional dereference of the pointer even though
it's NULL, and hangs as a result of a lost event where memory
for the event couldn't be allocated.  Two sysctls are added:

debug.malloc.failure_rate

  How often to generate a failure: if set to 0 (default), this
  feature is disabled.  Otherwise, this is the frequency of failures;
  I've been using 10 (one in ten mallocs fails), but other
  popular settings might be much lower or much higher.

debug.malloc.failure_count

  Number of times a coerced malloc failure has occurred as a
  result of this feature.  Useful for tracking what might have
  happened and whether failures are being generated.
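A sketch of the injection point inside malloc(9) that ties the two sysctls
together (counter handling here is an assumption):

	#ifdef MALLOC_MAKE_FAILURES
	if ((flags & M_NOWAIT) && malloc_failure_rate != 0) {
		atomic_add_int(&malloc_nowait_count, 1);
		if ((malloc_nowait_count % malloc_failure_rate) == 0) {
			atomic_add_int(&malloc_failure_count, 1);
			return (NULL);
		}
	}
	#endif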

Useful possible additions: tying failure rate to malloc type,
printfs indicating the thread that experienced the coerced
failure.

Reviewed by:	jeffr, jhb
2003-03-26 20:18:40 +00:00
Poul-Henning Kamp
194a0abf73 PHCC[1]:
I had commented the #ifdef INVARIANTS checks out to make sure I ran this
code in all kernels and forgot to comment the #ifdefs back in before I
committed.

Spotted by:	bmilekic

[1] PHCC = Pointy Hat Correction Commit
2003-03-10 20:24:54 +00:00
Poul-Henning Kamp
d3c11994e1 Make malloc and mbuf allocation mode flags nonoverlapping.
Under INVARIANTS whine if we get incompatible flags.

Submitted by:   imp
2003-03-10 19:39:53 +00:00
Bosko Milekic
025b4be197 o Allow "buckets" in mb_alloc to be differently sized (according to
compile-time constants).  That is, a "bucket" is no longer necessarily
  a page worth of mbufs or clusters, but rather MBUF_BUCK_SZ worth of
  mbufs or CLUS_BUCK_SZ worth of clusters.
o Rename {mbuf,clust}_limit to {mbuf,clust}_hiwm and introduce
  {mbuf,clust}_lowm, which currently have no effect but will be used
  to set the low watermarks.
o Fix netstat so that it can deal with the differently-sized buckets
  and teach it about the low watermarks too.
o Make sure the per-cpu stats for an absent CPU have mb_active set to 0,
  explicitly.
o Get rid of the allocate-refcounts-from-mbuf-map mess; instead,
  just malloc() the refcounts in one shot from mbuf_init().
o Clean up / update comments in subr_mbuf.c
2003-02-20 04:26:58 +00:00
Warner Losh
a163d034fa Back out M_* changes, per decision of the TRB.
Approved by: trb
2003-02-19 05:47:46 +00:00
Poul-Henning Kamp
4db4f5c87f Under #ifdef DIAGNOSTIC, fill malloc(9) allocations which do not have
M_ZERO specified with 0x70.  (malloc_flags=J for the kernel :-)
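In effect, a sketch (0x70 mimics the 'J' junk fill of userland malloc.conf):

	#ifdef DIAGNOSTIC
	if ((flags & M_ZERO) == 0)
		memset(va, 0x70, size);	/* poison non-zeroed allocations */
	#endif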
2003-02-01 10:07:49 +00:00
Alfred Perlstein
44956c9863 Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0.
Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
2003-01-21 08:56:16 +00:00
Poul-Henning Kamp
1fb14a47a1 Introduce malloc_last_fail(), which returns the number of seconds since
malloc(9) last failed.  This is intended to help code adjust
memory usage to the current circumstances.

A typical use could be:
	if (malloc_last_fail() < 60)
		reduce_cache_by_one();
2002-11-01 18:58:12 +00:00
Jeff Roberson
99571dc345 - Split UMA_ZFLAG_OFFPAGE into UMA_ZFLAG_OFFPAGE and UMA_ZFLAG_HASH.
- Remove all instances of the mallochash.
- Stash the slab pointer in the vm page's object pointer when allocating from
  the kmem_obj.
- Use the overloaded object pointer to find slabs for malloced memory.
Robert Drehmel
280759e75e - Replace the bandaid introduced in revision 1.110 with
a better solution.
 - Add braces for a ``for'' statement containing a single
   multi-line statement.
2002-05-31 09:41:09 +00:00
Jake Burkholder
45eefe7176 Add a bandaid so that sysctl kern.malloc works on sparc64. 2002-05-20 18:29:37 +00:00
John Baldwin
42e498655d Fix the td_intr_nesting_level check to work correctly when a flag like M_ZERO
is passed to malloc() along with M_WAITOK.
2002-05-20 17:46:57 +00:00
Jeff Roberson
8f70816cf2 Hide a pointer to the malloc_type bucket at the end of the freed memory.  If
this memory is modified after it has been freed we can now report its
previous owner.
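Conceptually (a sketch; the owner is stashed in the trailing bytes of the
freed region):

	/* on free: remember which malloc_type owned this memory */
	struct malloc_type **mtpp = (struct malloc_type **)
	    ((caddr_t)mem + size - sizeof(struct malloc_type *));
	*mtpp = type;
	/* if the region is later scribbled on, *mtpp names the previous owner */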
2002-05-02 09:07:04 +00:00
Jeff Roberson
5a34a9f089 malloc/free(9) no longer require Giant. Use the malloc_mtx to protect the
mallochash.  Mallochash is going to go away as soon as I introduce the
kfree/kmalloc api and partially overhaul the malloc wrapper.  This can't happen
until all users of the malloc api that expect memory to be aligned on the size
of the allocation are fixed.
2002-05-02 07:22:19 +00:00
Jeff Roberson
639c9550fb Remove the temporary alignment check in free().
Implement the following checks on freed memory in the bucket path:
	- Slab membership
	- Alignment
	- Duplicate free

This previously was only done if we skipped the buckets.  This code will slow
down INVARIANTS a bit, but it is smp safe.  The checks were moved out of the
normal path and into hooks supplied in uma_dbg.
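The three checks, expressed as assertions (illustrative names, not the
literal uma_dbg code; item_is_free() is hypothetical):

	KASSERT(slab != NULL, ("free: %p is not in any slab", mem));
	KASSERT(((uintptr_t)mem - (uintptr_t)slab->us_data) % item_size == 0,
	    ("free: %p is misaligned for its zone", mem));
	KASSERT(!item_is_free(slab, mem), ("free: duplicate free of %p", mem));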
2002-05-02 02:08:48 +00:00
Jeff Roberson
289f207c81 Convert longs to u_longs in stats.  This will hold off wraparounds for a
while longer.
2002-04-30 22:39:32 +00:00
Jeff Roberson
8efc4eff00 Add a new UMA debugging facility. This will overwrite freed memory with
0xdeadc0de and then check for it just before memory is handed off as part
of a new request.  This will catch any post-free/pre-alloc modification of
memory, as well as introduce errors for anything that tries to dereference
it as a pointer.

This code takes the form of special init, fini, ctor and dtor routines that
are specifically used by malloc.  It is in a separate file because additional
debugging aids will want to live here as well.
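A sketch of such a dtor/ctor pair (the real hooks live alongside this
commit; names and signatures here are approximations):

	static void
	trash_dtor(void *mem, int size)
	{
		u_int32_t *p = mem;
		int i;

		for (i = 0; i < size / sizeof(*p); i++)
			p[i] = 0xdeadc0de;	/* poison on free */
	}

	static void
	trash_ctor(void *mem, int size)
	{
		u_int32_t *p = mem;
		int i;

		for (i = 0; i < size / sizeof(*p); i++)
			if (p[i] != 0xdeadc0de)	/* modified after free? */
				panic("memory modified after free: %p", mem);
	}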
2002-04-30 07:54:25 +00:00
Jeff Roberson
2cc35ff9c6 Move the implementation of M_ZERO into UMA so that it can be passed to
uma_zalloc and friends.  Remove this functionality from the malloc wrapper.

Document this change in uma.h and adjust variable names in uma_core.
2002-04-30 04:26:34 +00:00
Robert Watson
43a7c4e919 Re-add the 16384 bucket.
Submitted by:	green
2002-04-29 17:53:23 +00:00
Robert Watson
bd796eb25f Revert a portion of kern_malloc.c:1.99, which (in addition to adding
malloc profiling) also modified the set of pre-defined buckets for the
memory allocator.  For reasons unknown to me, this resulted in extensive
memory corruption in the kernel, in particular on SMP boxes, so I'm
committing this work-around until Jeff gets a chance to debug it
properly.  David Wolfskill pointed me at this commit as the one that
might be a problem; I've been running this code on two dual-processor
burn-in boxes for about 12 hours now, and the rate of panics due to
memory corruption has dropped to zero (from one every five minutes).

Hopefully not treading on the toes of:	jeff
2002-04-29 17:12:02 +00:00
Poul-Henning Kamp
708da94ef2 Add a basic sanity check on pointers passed to free(9).
Should be improved by:	jeff
2002-04-23 18:50:25 +00:00
Jeff Roberson
5e914b96b9 Finish adding support code for sysctl kern.mprof.  This dumps some malloc
information related to bucket size efficiency.  Three things are printed on
each row:

Size is the size the user actually asked for rounded to 16 bytes.
Requests is the number of times this size was asked for.
Real Size is the size we actually handed out.

At the end the total memory used and total waste is displayed.  Currently my
system displays about 33% wasted memory.

The intent of this code is to gather statistics for tuning the malloc bucket
sizes.  It is not intended to be run with INVARIANTS and it is not entirely
MP-safe.  It can be enabled via 'options MALLOC_PROFILE', which was committed
earlier.
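The waste figure is just the per-row gap summed over all requests; in
outline (variable names are illustrative):

	/* per row: what was handed out minus what was asked for */
	waste += (real_size - requested_size) * requests;
	total += real_size * requests;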
2002-04-15 05:24:01 +00:00
Jeff Roberson
6f2671750e Remove malloc_type's ks_limit.
Update the kmemzones logic such that the ks_size bitmap can be used as an
index into it to report the size of the zone used.

Create the kern.malloc sysctl which replaces the kvm mechanism to report
similar data.  This will provide an easy place for statistics aggregation if
malloc_type statistics become per cpu data.

Add some code ifdef'd under MALLOC_PROFILING to facilitate a tool for sizing
the malloc buckets.
2002-04-15 04:05:53 +00:00
John Baldwin
6008862bc2 Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.
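The resulting call style, with the type name as the third argument
(examples are illustrative):

	mtx_init(&sc->sc_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK,
	    MTX_DEF);
	mtx_init(&zone->uz_lock, zone->uz_name, "UMA zone", MTX_DEF);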

Tested on:	i386, alpha, sparc64
2002-04-04 21:03:38 +00:00
Alfred Perlstein
4d77a549fe Remove __P. 2002-03-19 21:25:46 +00:00
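For the __P removal above, before and after, using malloc(9)'s own prototype:

	/* before: K&R compatibility via the __P() macro */
	void	*malloc __P((unsigned long size, struct malloc_type *type,
		    int flags));
	/* after: a plain ANSI prototype */
	void	*malloc(unsigned long size, struct malloc_type *type, int flags);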
Jeff Roberson
8355f576a9 This is the first part of the new kernel memory allocator. This replaces
malloc(9) and vm_zone with a slab-like allocator.

Reviewed by:	arch@
2002-03-19 09:11:49 +00:00
Archie Cobbs
44a8ff315e Add realloc() and reallocf(), and make free(NULL, ...) acceptable.
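Typical use: reallocf() frees the old block when the resize fails, so the
error path cannot leak (M_TEMP here is just an example type):

	buf = reallocf(buf, newsize, M_TEMP, M_NOWAIT);
	if (buf == NULL)
		return (ENOMEM);	/* the old buffer has already been freed */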
Reviewed by:	alfred
2002-03-13 01:42:33 +00:00
Julian Elischer
b40ce4165d KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after:    ha ha ha ha
2001-09-12 08:38:13 +00:00
John Baldwin
c4a448100c - Remove asleep(), await(), and M_ASLEEP.
- Callers of asleep() and await() have been converted to calling tsleep().
  The only caller outside of M_ASLEEP was the ata driver, which called both
  asleep() and await() with spl-raised, so there was no need for the
  asleep() and await() pair.  M_ASLEEP was unused.

Reviewed by:	jasone, peter
2001-08-10 06:45:43 +00:00
Bosko Milekic
ba3e88262e Rename the mb_init() mbuf subsystem initialization routine to mbuf_init(), in
order to avoid namespace collision with subr_mchain.c's mb_init(). This
wasn't "fatal" as the mbuf initialization routine mb_init() was local to
subr_mbuf.c which in turn didn't pull in subr_mchain.c's mb_init()
declaration, but it should definitely be changed now before it creates
headaches.
2001-08-03 05:05:32 +00:00
Jake Burkholder
f74250ca46 Remove some code that appears to have endian problems with INVARIANTS.
This is #if BIG_ENDIAN, but is only necessary if malloc types are shorts,
not struct malloc_type * like they are now.
2001-08-03 03:31:45 +00:00
Bosko Milekic
08442f8a82 Introduce numerous SMP friendly changes to the mbuf allocator. Namely,
introduce a modified allocation mechanism for mbufs and mbuf clusters; one
which can scale under SMP and which offers the possibility of implementing
resource reclamation in the future. Notable advantages:

 o Reduce contention for SMP by offering per-CPU pools and locks.
 o Better use of data cache due to per-CPU pools.
 o Much less code cache pollution, now that the excessively large allocation macros are gone.
 o Framework for `grouping' objects from the same page together so as to be able
   to possibly free wired-down pages back to the system if they are no longer
   needed by the network stacks.

 Additional things changed with this addition:

  - Moved some mbuf specific declarations and initializations from
    sys/conf/param.c into mbuf-specific code where they belong.
  - m_getclr() has been renamed to m_get_clrd() because the old name is really
    confusing. m_getclr() HAS been preserved though and is defined to the new
    name. No tree sweep has been done "to change the interface," as the old
    name will continue to be supported and is not deprecated. The change was
    merely done because m_getclr() sounds too much like "m_get a cluster."
  - TEMPORARILY disabled the display of mbtypes statistics in netstat(1) and
    systat(1) (see TODO below).
  - Fixed systat(1) to display number of "free mbufs" based on new per-CPU
    stat structures.
  - Fixed netstat(1) to display new per-CPU stats based on sysctl-exported
    per-CPU stat structures. All information is fetched via sysctl.

 TODO (in order of priority):

  - Re-enable mbtypes statistics in both netstat(1) and systat(1) after
    introducing an SMP friendly way to collect the mbtypes stats under the
    already introduced per-CPU locks (i.e. hopefully don't use atomic() - it
    seems too costly for a mere stat update, especially when other locks are
    already present).
  - Optionally have systat(1) display not only "total free mbufs" but also
    "total free mbufs per CPU pool."
  - Fix minor length-fetching issues in netstat(1) related to the recently
    re-enabled option to read mbuf stats from a core file.
  - Move reference counters at least for mbuf clusters into an unused portion
    of the cluster itself, to save space and avoid the need to allocate a counter.
  - Look into introducing resource freeing possibly from a kproc.

Reviewed by (in parts): jlemon, jake, silby, terry
Tested by: jlemon (Intel & Alpha), mjacob (Intel & Alpha)
Preliminary performance measurements: jlemon (and me, obviously)
URL: http://people.freebsd.org/~bmilekic/mb_alloc/
2001-06-22 06:35:32 +00:00
Peter Wemm
0978669829 "Fix" the previous initial attempt at fixing TUNABLE_INT(). This time
around, use a common function for looking up and extracting the tunables
from the kernel environment.  This saves duplicating the same function
over and over again.  This way typically has an overhead of 8 bytes + the
path string, versus about 26 bytes + the path string.
2001-06-08 05:24:21 +00:00
Peter Wemm
4422746fdf Back out part of my previous commit. This was a last minute change
and I botched testing.  This is a perfect example of how NOT to do
this sort of thing. :-(
2001-06-07 03:17:26 +00:00
Peter Wemm
81930014ef Make the TUNABLE_*() macros look and behave more consistently like the
SYSCTL_*() macros.  TUNABLE_INT_DECL() was an odd name because it didn't
actually declare the int, which is what the name suggests it would do.
2001-06-06 22:17:08 +00:00
Mark Murray
fb919e4d5a Undo part of the tangle of having sys/lock.h and sys/mutex.h included in
other "system" header files.

Also help the deprecation of lockmgr.h by making it a sub-include of
sys/lock.h and removing sys/lockmgr.h from kernel .c files.

Sort sys/*.h includes where possible in affected files.

OK'ed by:	bde (with reservations)
2001-05-01 08:13:21 +00:00
Bosko Milekic
d04d50d1f7 Fix inconsistency in setup of kernel_map: we need to make sure that
we also reserve _adequate_ space for the mb_map submap; i.e. we need
space for nmbclusters, nmbufs, _and_ nmbcnt. Furthermore, we need to
rounddown, and not roundup, so that we are consistent.

Pointed out by: bde
2001-04-18 23:54:13 +00:00
Bosko Milekic
9ed346bab0 Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.
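Before and after, for a spin lock such as sched_lock:

	/* before */
	mtx_enter(&sched_lock, MTX_SPIN);
	mtx_exit(&sched_lock, MTX_SPIN);

	/* after */
	mtx_lock_spin(&sched_lock);
	mtx_unlock_spin(&sched_lock);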

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)
2001-02-09 06:11:45 +00:00
Boris Popov
1707240d2a Let M_PANIC go back to the private tree, as its intent isn't yet
well understood.
2001-01-31 04:50:20 +00:00