/*-
 * Copyright (c) 2002, 2003, 2004, 2005 Jeffrey Roberson <jeff@FreeBSD.org>
 * Copyright (c) 2004, 2005 Bosko Milekic <bmilekic@FreeBSD.org>
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice unmodified, this list of conditions, and the following
 *    disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
 * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * $FreeBSD$
 *
 */

/*
 * uma.h - External definitions for the Universal Memory Allocator
 *
 */

#ifndef VM_UMA_H
#define VM_UMA_H

#include <sys/param.h>		/* For NULL */
#include <sys/malloc.h>		/* For M_* */

/* User visible parameters */
#define	UMA_SMALLEST_UNIT	(PAGE_SIZE / 256) /* Smallest item allocated */

/* Types and type defs */

struct uma_zone;
/* Opaque type used as a handle to the zone */
typedef struct uma_zone * uma_zone_t;

void zone_drain(uma_zone_t);

/*
 * Item constructor
 *
 * Arguments:
 *	item  A pointer to the memory which has been allocated.
 *	arg   The arg field passed to uma_zalloc_arg
 *	size  The size of the allocated item
 *	flags See zalloc flags
 *
 * Returns:
 *	0      on success
 *	errno  on failure
 *
 * Discussion:
 *	The constructor is called just before the memory is returned
 *	to the user.  It may block if necessary.
 */
typedef int (*uma_ctor)(void *mem, int size, void *arg, int flags);

/*
 * Item destructor
 *
 * Arguments:
 *	item  A pointer to the memory which has been allocated.
 *	size  The size of the item being destructed.
 *	arg   Argument passed through uma_zfree_arg
 *
 * Returns:
 *	Nothing
 *
 * Discussion:
 *	The destructor may perform operations that differ from those performed
 *	by the initializer, but it must leave the object in the same state.
 *	This IS type-stable storage.  This is called after EVERY zfree call.
 */
typedef void (*uma_dtor)(void *mem, int size, void *arg);

/*
 * Item initializer
 *
 * Arguments:
 *	item  A pointer to the memory which has been allocated.
 *	size  The size of the item being initialized.
 *	flags See zalloc flags
 *
 * Returns:
 *	0      on success
 *	errno  on failure
 *
 * Discussion:
 *	The initializer is called when the memory is cached in the uma zone.
 *	The initializer and the destructor should leave the object in the same
 *	state.
 */
typedef int (*uma_init)(void *mem, int size, int flags);

/*
 * Item discard function
 *
 * Arguments:
 *	item  A pointer to memory which has been 'freed' but has not left the
 *	      zone's cache.
 *	size  The size of the item being discarded.
 *
 * Returns:
 *	Nothing
 *
 * Discussion:
 *	This routine is called when memory leaves a zone and is returned to
 *	the system for other uses.  It is the counterpart to the init
 *	function.
 */
typedef void (*uma_fini)(void *mem, int size);

/*
 * What's the difference between initializing and constructing?
 *
 * The item is initialized when it is cached, and this is the state that the
 * object should be in when returned to the allocator.  The purpose of this is
 * to remove some code which would otherwise be called on each allocation by
 * utilizing a known, stable state.  This differs from the constructor, which
 * will be called on EVERY allocation.
 *
 * For example, in the initializer you may want to initialize embedded locks,
 * NULL list pointers, set up initial states, magic numbers, etc.  This way if
 * the object is held in the allocator and re-used it won't be necessary to
 * re-initialize it.
 *
 * The constructor may be used to lock a data structure, link it on to lists,
 * bump reference counts or total counts of outstanding structures, etc.
 */

/* Function proto types */

/*
 * Create a new uma zone
 *
 * Arguments:
 *	name  The text name of the zone for debugging and stats.  This memory
 *	      should not be freed until the zone has been deallocated.
 *	size  The size of the object that is being created.
 *	ctor  The constructor that is called when the object is allocated.
 *	dtor  The destructor that is called when the object is freed.
 *	init  An initializer that sets up the initial state of the memory.
 *	fini  A discard function that undoes initialization done by init.
 *	      ctor/dtor/init/fini may all be null, see notes above.
 *	align A bitmask that corresponds to the requested alignment,
 *	      eg 4 would be 0x3
 *	flags A set of parameters that control the behavior of the zone.
 *
 * Returns:
 *	A pointer to a structure which is intended to be opaque to users of
 *	the interface.  The value may be null if the wait flag is not set.
 */
uma_zone_t uma_zcreate(const char *name, size_t size, uma_ctor ctor,
		    uma_dtor dtor, uma_init uminit, uma_fini fini,
		    int align, u_int32_t flags);

/*
 * Create a secondary uma zone
 *
 * Arguments:
 *	name  The text name of the zone for debugging and stats.  This memory
 *	      should not be freed until the zone has been deallocated.
 *	ctor  The constructor that is called when the object is allocated.
 *	dtor  The destructor that is called when the object is freed.
 *	zinit An initializer that sets up the initial state of the memory
 *	      as the object passes from the Keg's slab to the Zone's cache.
 *	zfini A discard function that undoes initialization done by init
 *	      as the object passes from the Zone's cache to the Keg's slab.
 *
 *	      ctor/dtor/zinit/zfini may all be null, see notes above.
 *	      Note that the zinit and zfini specified here are NOT
 *	      exactly the same as the init/fini specified to uma_zcreate()
 *	      when creating a master zone.  These zinit/zfini are called
 *	      on the TRANSITION from keg to zone (and vice-versa).  Once
 *	      these are set, the primary zone may alter its init/fini
 *	      (which are called when the object passes from VM to keg)
 *	      using uma_zone_set_init/fini() as well as its own
 *	      zinit/zfini (unset by default for master zone) with
 *	      uma_zone_set_zinit/zfini() (note subtle 'z' prefix).
 *
 *	master A reference to this zone's Master Zone (Primary Zone),
 *	      which contains the backing Keg for the Secondary Zone
 *	      being added.
 *
 * Returns:
 *	A pointer to a structure which is intended to be opaque to users of
 *	the interface.  The value may be null if the wait flag is not set.
 */
uma_zone_t uma_zsecond_create(char *name, uma_ctor ctor, uma_dtor dtor,
		    uma_init zinit, uma_fini zfini, uma_zone_t master);

/*
 * Add a second master to a secondary zone.  This provides multiple data
 * backends for objects with the same size.  Both masters must have
 * compatible allocation flags.  Presently, UMA_ZONE_MALLOC type zones are
 * the only type supported.
 *
 * Returns:
 *	Error on failure, 0 on success.
 */
int uma_zsecond_add(uma_zone_t zone, uma_zone_t master);

/*
 * Definitions for uma_zcreate flags
 *
 * These flags share space with UMA_ZFLAGs in uma_int.h.  Be careful not to
 * overlap when adding new features.  0xf0000000 is in use by uma_int.h.
 */
#define	UMA_ZONE_PAGEABLE	0x0001	/* Return items not fully backed by
					   physical memory XXX Not yet */
#define	UMA_ZONE_ZINIT		0x0002	/* Initialize with zeros */
#define	UMA_ZONE_STATIC		0x0004	/* Statically sized zone */
#define	UMA_ZONE_OFFPAGE	0x0008	/* Force the slab structure allocation
					   off of the real memory */
#define	UMA_ZONE_MALLOC		0x0010	/* For use by malloc(9) only! */
#define	UMA_ZONE_NOFREE		0x0020	/* Do not free slabs of this type! */
#define	UMA_ZONE_MTXCLASS	0x0040	/* Create a new lock class */
#define	UMA_ZONE_VM		0x0080	/*
					 * Used for internal vm datastructures
					 * only.
					 */
#define	UMA_ZONE_HASH		0x0100	/*
					 * Use a hash table instead of caching
					 * information in the vm_page.
					 */
#define	UMA_ZONE_SECONDARY	0x0200	/* Zone is a Secondary Zone */
#define	UMA_ZONE_REFCNT		0x0400	/* Allocate refcnts in slabs */
#define	UMA_ZONE_MAXBUCKET	0x0800	/* Use largest buckets */
#define	UMA_ZONE_CACHESPREAD	0x1000	/*
					 * Spread memory start locations across
					 * all possible cache lines.  May
					 * require many virtually contiguous
					 * backend pages and can fail early.
					 */
#define	UMA_ZONE_VTOSLAB	0x2000	/* Zone uses vtoslab for lookup. */
#define	UMA_ZONE_NODUMP		0x4000	/*
					 * Zone's pages will not be included in
					 * mini-dumps.
					 */

/*
 * These flags are shared between the keg and zone.  In zones wishing to add
 * new kegs these flags must be compatible.  Some are determined based on
 * physical parameters of the request and may not be provided by the consumer.
 */
#define	UMA_ZONE_INHERIT					\
    (UMA_ZONE_OFFPAGE | UMA_ZONE_MALLOC | UMA_ZONE_NOFREE |	\
    UMA_ZONE_HASH | UMA_ZONE_REFCNT | UMA_ZONE_VTOSLAB)

/* Definitions for align */
#define	UMA_ALIGN_PTR	(sizeof(void *) - 1)	/* Alignment fit for ptr */
#define	UMA_ALIGN_LONG	(sizeof(long) - 1)	/* "" long */
#define	UMA_ALIGN_INT	(sizeof(int) - 1)	/* "" int */
#define	UMA_ALIGN_SHORT	(sizeof(short) - 1)	/* "" short */
#define	UMA_ALIGN_CHAR	(sizeof(char) - 1)	/* "" char */
#define	UMA_ALIGN_CACHE	(0 - 1)			/* Cache line size align */
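Note that these alignment values are masks (alignment minus one), not byte counts. A minimal userspace sketch of the add-and-mask rounding such a mask enables; the `align_up` helper is illustrative only and not part of the UMA API:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative helper (not part of UMA): round a size up to the
 * boundary implied by an alignment mask such as UMA_ALIGN_PTR.
 * Because the masks are (alignment - 1), rounding is a single
 * add-and-mask with no division.
 */
static size_t
align_up(size_t size, size_t mask)
{
	return ((size + mask) & ~mask);
}
```

For example, with an 8-byte pointer the mask is 7, and `align_up(5, 7)` yields 8 while `align_up(8, 7)` stays 8.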

/*
 * Destroys an empty uma zone.  If the zone is not empty uma complains loudly.
 *
 * Arguments:
 *	zone  The zone we want to destroy.
 *
 */
void uma_zdestroy(uma_zone_t zone);

/*
 * Allocates an item out of a zone
 *
 * Arguments:
 *	zone  The zone we are allocating from
 *	arg   This data is passed to the ctor function
 *	flags See sys/malloc.h for available flags.
 *
 * Returns:
 *	A non-null pointer to an initialized element from the zone is
 *	guaranteed if the wait flag is M_WAITOK.  Otherwise a null pointer
 *	may be returned if the zone is empty or the ctor failed.
 */
void *uma_zalloc_arg(uma_zone_t zone, void *arg, int flags);

/*
 * Allocates an item out of a zone without supplying an argument
 *
 * This is just a wrapper for uma_zalloc_arg for convenience.
 *
 */
static __inline void *uma_zalloc(uma_zone_t zone, int flags);

static __inline void *
uma_zalloc(uma_zone_t zone, int flags)
{
	return uma_zalloc_arg(zone, NULL, flags);
}

/*
 * Frees an item back into the specified zone.
 *
 * Arguments:
 *	zone  The zone the item was originally allocated out of.
 *	item  The memory to be freed.
 *	arg   Argument passed to the destructor
 *
 * Returns:
 *	Nothing.
 */
void uma_zfree_arg(uma_zone_t zone, void *item, void *arg);

/*
 * Frees an item back to a zone without supplying an argument
 *
 * This is just a wrapper for uma_zfree_arg for convenience.
 *
 */
static __inline void uma_zfree(uma_zone_t zone, void *item);

static __inline void
uma_zfree(uma_zone_t zone, void *item)
{
	uma_zfree_arg(zone, item, NULL);
}
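The allocation routines above can only run in the kernel, but their calling convention — the `arg` handed to `uma_zalloc_arg()` is forwarded to the zone's ctor, and the one handed to `uma_zfree_arg()` reaches the dtor — can be sketched with a simplified userspace mock. Every name below (`mock_zone`, `mock_zalloc_arg`, the reduced ctor signature) is invented for illustration and does not reflect the kernel implementation:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Userspace mock of the uma_zalloc_arg()/uma_zfree_arg() calling
 * convention.  malloc() stands in for the slab layer; the real UMA
 * ctor/dtor take additional size and flags parameters.
 */
struct mock_zone {
	size_t	item_size;
	void	(*ctor)(void *item, void *arg);
	void	(*dtor)(void *item, void *arg);
};

/* Example ctor: zero the 16-byte items used below. */
static void
zero_ctor(void *item, void *arg)
{
	(void)arg;
	memset(item, 0, 16);
}

static void *
mock_zalloc_arg(struct mock_zone *z, void *arg)
{
	void *item = malloc(z->item_size);

	if (item != NULL && z->ctor != NULL)
		z->ctor(item, arg);	/* arg reaches the ctor, as in UMA */
	return (item);
}

static void
mock_zfree_arg(struct mock_zone *z, void *item, void *arg)
{
	if (z->dtor != NULL)
		z->dtor(item, arg);	/* arg reaches the dtor */
	free(item);
}
```

The `uma_zalloc()`/`uma_zfree()` wrappers in the header then correspond to calling these with a NULL `arg`.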

/*
 * XXX The rest of the prototypes in this header are h0h0 magic for the VM.
 * If you think you need to use it for a normal zone you're probably incorrect.
 */

/*
 * Backend page supplier routines
 *
 * Arguments:
 *	zone  The zone that is requesting pages.
 *	size  The number of bytes being requested.
 *	pflag Flags for these memory pages, see below.
 *	wait  Indicates our willingness to block.
 *
 * Returns:
 *	A pointer to the allocated memory or NULL on failure.
 */
typedef void *(*uma_alloc)(uma_zone_t zone, int size, u_int8_t *pflag, int wait);

/*
 * Backend page free routines
 *
 * Arguments:
 *	item  A pointer to the previously allocated pages.
 *	size  The original size of the allocation.
 *	pflag The flags for the slab.  See UMA_SLAB_* below.
 *
 * Returns:
 *	None
 */
typedef void (*uma_free)(void *item, int size, u_int8_t pflag);
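A backend pair must agree on `pflag`: the supplier records how the pages were obtained, and the free routine sees that same value back. A userspace sketch of the shape such a pair takes — `uma_zone_t` is stubbed out and `malloc()` stands in for the kernel page allocator, so this is illustrative only:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Sketch of a backend pair matching the uma_alloc/uma_free
 * signatures.  A real backend would set *pflag to one of the
 * UMA_SLAB_* values defined later in this header; 0x08
 * (UMA_SLAB_PRIV, "priv allocator") is used here as the example.
 */
struct uma_zone_stub;

static void *
page_alloc_stub(struct uma_zone_stub *zone, int size, uint8_t *pflag, int wait)
{
	(void)zone;
	(void)wait;
	*pflag = 0x08;			/* remember where the pages came from */
	return (malloc((size_t)size));
}

static void
page_free_stub(void *item, int size, uint8_t pflag)
{
	(void)size;
	(void)pflag;			/* a real freef would dispatch on this */
	free(item);
}
```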

/*
 * Sets up the uma allocator. (Called by vm_mem_init)
 *
 * Arguments:
 *	bootmem  A pointer to memory used to bootstrap the system.
 *
 * Returns:
 *	Nothing
 *
 * Discussion:
 *	This memory is used for zones which allocate things before the
 *	backend page supplier can give us pages.  It should be
 *	UMA_SLAB_SIZE * boot_pages bytes. (see uma_int.h)
 *
 */
void uma_startup(void *bootmem, int boot_pages);

/*
 * Finishes starting up the allocator.  This should
 * be called when kva is ready for normal allocs.
 *
 * Arguments:
 *	None
 *
 * Returns:
 *	Nothing
 *
 * Discussion:
 *	uma_startup2 is called by kmeminit() to enable use of uma for malloc.
 */
void uma_startup2(void);

/*
 * Reclaims unused memory for all zones
 *
 * Arguments:
 *	None
 * Returns:
 *	None
 *
 * This should only be called by the page out daemon.
 */
void uma_reclaim(void);

/*
 * Sets the alignment mask to be used for all zones requesting cache
 * alignment.  Should be called by MD boot code prior to starting VM/UMA.
 *
 * Arguments:
 *	align The alignment mask
 *
 * Returns:
 *	Nothing
 */
void uma_set_align(int align);

/*
 * Reserves the maximum KVA space required by the zone and configures the zone
 * to use a VM_ALLOC_NOOBJ-based backend allocator.
 *
 * Arguments:
 *	zone  The zone to update.
 *	nitems  The upper limit on the number of items that can be allocated.
 *
 * Returns:
 *	0  if KVA space can not be allocated
 *	1  if successful
 *
 * Discussion:
 *	When the machine supports a direct map and the zone's items are smaller
 *	than a page, the zone will use the direct map instead of allocating KVA
 *	space.
 */
int uma_zone_reserve_kva(uma_zone_t zone, int nitems);

/*
 * Sets a high limit on the number of items allowed in a zone
 *
 * Arguments:
 *	zone  The zone to limit
 *	nitems  The requested upper limit on the number of items allowed
 *
 * Returns:
 *	int  The effective value of nitems after rounding up based on page size
 */
int uma_zone_set_max(uma_zone_t zone, int nitems);
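The returned "effective" limit can exceed the requested `nitems` because the cap is rounded up to cover whole slabs. Assuming `ipers` items fit in one slab (the actual value depends on the item size and slab layout), the rounding is equivalent to this illustrative arithmetic:

```c
#include <assert.h>

/*
 * Illustrative only: round a requested item limit up to a whole
 * number of slabs, given "ipers" items per slab.  This mirrors why
 * uma_zone_set_max() may return a value larger than was asked for.
 */
static int
round_to_slabs(int nitems, int ipers)
{
	return (((nitems + ipers - 1) / ipers) * ipers);
}
```

Callers should therefore use the return value, not the requested `nitems`, as the true limit.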

/*
 * Obtains the effective limit on the number of items in a zone
 *
 * Arguments:
 *	zone  The zone to obtain the effective limit from
 *
 * Return:
 *	0  No limit
 *	int  The effective limit of the zone
 */
int uma_zone_get_max(uma_zone_t zone);

/*
 * Sets a warning to be printed when limit is reached
 *
 * Arguments:
 *	zone  The zone we will warn about
 *	warning  Warning content
 *
 * Returns:
 *	Nothing
 */
void uma_zone_set_warning(uma_zone_t zone, const char *warning);

/*
 * Obtains the approximate current number of items allocated from a zone
 *
 * Arguments:
 *	zone  The zone to obtain the current allocation count from
 *
 * Return:
 *	int  The approximate current number of items allocated from the zone
 */
int uma_zone_get_cur(uma_zone_t zone);
/*
 * The following two routines (uma_zone_set_init/fini)
 * are used to set the backend init/fini pair which acts on an
 * object as it becomes allocated and is placed in a slab within
 * the specified zone's backing keg.  These should probably not
 * be changed once allocations have already begun, but only be set
 * immediately upon zone creation.
 */
void uma_zone_set_init(uma_zone_t zone, uma_init uminit);
void uma_zone_set_fini(uma_zone_t zone, uma_fini fini);

/*
 * The following two routines (uma_zone_set_zinit/zfini) are
 * used to set the zinit/zfini pair which acts on an object as
 * it passes from the backing Keg's slab cache to the
 * specified Zone's bucket cache.  These should probably not
 * be changed once allocations have already begun, but only be set
 * immediately upon zone creation.
 */
void uma_zone_set_zinit(uma_zone_t zone, uma_init zinit);
void uma_zone_set_zfini(uma_zone_t zone, uma_fini zfini);
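The practical difference between the init/fini pair and a zone's ctor/dtor is when they fire: init/fini run once per trip through the backing cache, while the ctor/dtor run on every allocation and free. A cached item handed back out skips re-initialization. This toy userspace model (a one-slot "cache", with all names invented for illustration) makes the distinction concrete:

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Toy model of UMA's two lifetimes: "init" runs only when an item
 * is backed by fresh memory, "ctor" runs on every allocation.  An
 * item recycled from the cache keeps its initialized state.
 */
static int init_calls, ctor_calls;
static void *cached_item;		/* one-slot stand-in for a bucket cache */

static void *
toy_alloc(size_t size)
{
	void *item;

	if (cached_item != NULL) {	/* cache hit: init is skipped */
		item = cached_item;
		cached_item = NULL;
	} else {
		item = malloc(size);
		init_calls++;		/* init: once per backing allocation */
	}
	ctor_calls++;			/* ctor: every allocation */
	return (item);
}

static void
toy_free(void *item)
{
	cached_item = item;		/* keep the initialized item cached */
}
```

After allocating, freeing, and allocating again, init has run once but the ctor twice; this is why expensive one-time setup belongs in init and per-use setup in the ctor.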

/*
 * Replaces the standard backend allocator for this zone.
 *
 * Arguments:
 *	zone  The zone whose backend allocator is being changed.
 *	allocf A pointer to the allocation function
 *
 * Returns:
 *	Nothing
 *
 * Discussion:
 *	This could be used to implement pageable allocation, or perhaps
 *	even DMA allocators if used in conjunction with the OFFPAGE
 *	zone flag.
 */
void uma_zone_set_allocf(uma_zone_t zone, uma_alloc allocf);

/*
 * Used for freeing memory provided by the allocf above
 *
 * Arguments:
 *	zone  The zone that intends to use this free routine.
 *	freef The page freeing routine.
 *
 * Returns:
 *	Nothing
 */
void uma_zone_set_freef(uma_zone_t zone, uma_free freef);

/*
 * These flags are settable in the allocf and visible in the freef.
 */
#define	UMA_SLAB_BOOT	0x01		/* Slab alloced from boot pages */
#define	UMA_SLAB_KMEM	0x02		/* Slab alloced from kmem_map */
#define	UMA_SLAB_KERNEL	0x04		/* Slab alloced from kernel_map */
#define	UMA_SLAB_PRIV	0x08		/* Slab alloced from priv allocator */
#define	UMA_SLAB_OFFP	0x10		/* Slab is managed separately  */
#define	UMA_SLAB_MALLOC	0x20		/* Slab is a large malloc slab */
/* 0x40 and 0x80 are available */

/*
 * Used to pre-fill a zone with some number of items
 *
 * Arguments:
 *	zone  The zone to fill
 *	itemcnt  The number of items to reserve
 *
 * Returns:
 *	Nothing
 *
 * NOTE: This is blocking and should only be done at startup
 */
void uma_prealloc(uma_zone_t zone, int itemcnt);
/*
 * Used to lookup the reference counter allocated for an item
 * from a UMA_ZONE_REFCNT zone.  For UMA_ZONE_REFCNT zones,
 * reference counters are allocated for items and stored in
 * the underlying slab header.
 *
 * Arguments:
 *	zone  The UMA_ZONE_REFCNT zone to which the item belongs.
 *	item  The address of the item for which we want a refcnt.
 *
 * Returns:
 *	A pointer to a u_int32_t reference counter.
 */
u_int32_t *uma_find_refcnt(uma_zone_t zone, void *item);
|
2002-03-19 09:11:49 +00:00
|
|
|
|
2007-01-05 19:09:01 +00:00
|
|
|
/*
 * Used to determine if a fixed-size zone is exhausted.
 *
 * Arguments:
 *	zone  The zone to check
 *
 * Returns:
 *	Non-zero if zone is exhausted.
 */
int uma_zone_exhausted(uma_zone_t zone);
int uma_zone_exhausted_nolock(uma_zone_t zone);

/*
 * Exported statistics structures to be used by user space monitoring tools.
 * Statistics stream consists of a uma_stream_header, followed by a series of
 * alternating uma_type_header and uma_percpu_stat structures.
 */
#define	UMA_STREAM_VERSION	0x00000001
struct uma_stream_header {
	u_int32_t	ush_version;	/* Stream format version. */
	u_int32_t	ush_maxcpus;	/* Value of MAXCPU for stream. */
	u_int32_t	ush_count;	/* Number of records. */
	u_int32_t	_ush_pad;	/* Pad/reserved field. */
};

#define	UTH_MAX_NAME	32
#define	UTH_ZONE_SECONDARY	0x00000001
|
Introduce a new sysctl, vm.zone_stats, which exports UMA(9) allocator
statistics via a binary structure stream:
- Add structure 'uma_stream_header', which defines a stream version,
definition of MAXCPUs used in the stream, and the number of zone
records in the stream.
- Add structure 'uma_type_header', which defines the name, alignment,
size, resource allocation limits, current pages allocated, preferred
bucket size, and central zone + keg statistics.
- Add structure 'uma_percpu_stat', which, for each per-CPU cache,
includes the number of allocations and frees, as well as the number
of free items in the cache.
- When the sysctl is queried, return a stream header, followed by a
series of type descriptions, each consisting of a type header
followed by a series of MAXCPUs uma_percpu_stat structures holding
per-CPU allocation information. Typical values of MAXCPU will be
1 (UP compiled kernel) and 16 (SMP compiled kernel).
This query mechanism allows user space monitoring tools to extract
memory allocation statistics in a machine-readable form, and to do so
at a per-CPU granularity, allowing monitoring of allocation patterns
across CPUs in order to better understand the distribution of work and
memory flow over multiple CPUs.
While here, also export the number of UMA zones as a sysctl
vm.uma_count, in order to assist in sizing user swpace buffers to
receive the stream.
A follow-up commit of libmemstat(3), a library to monitor kernel memory
allocation, will occur in the next few days. This change directly
supports converting netstat(1)'s "-mb" mode to using UMA-sourced stats
rather than separately maintained mbuf allocator statistics.
MFC after: 1 week
2005-07-14 16:35:13 +00:00
|
|
|
struct uma_type_header {
|
|
|
|
/*
|
|
|
|
* Static per-zone data, some extracted from the supporting keg.
|
|
|
|
*/
|
2005-07-25 00:47:32 +00:00
|
|
|
char uth_name[UTH_MAX_NAME];
|
Introduce a new sysctl, vm.zone_stats, which exports UMA(9) allocator
statistics via a binary structure stream:
- Add structure 'uma_stream_header', which defines a stream version,
definition of MAXCPUs used in the stream, and the number of zone
records in the stream.
- Add structure 'uma_type_header', which defines the name, alignment,
size, resource allocation limits, current pages allocated, preferred
bucket size, and central zone + keg statistics.
- Add structure 'uma_percpu_stat', which, for each per-CPU cache,
includes the number of allocations and frees, as well as the number
of free items in the cache.
- When the sysctl is queried, return a stream header, followed by a
series of type descriptions, each consisting of a type header
followed by a series of MAXCPUs uma_percpu_stat structures holding
per-CPU allocation information. Typical values of MAXCPU will be
1 (UP compiled kernel) and 16 (SMP compiled kernel).
This query mechanism allows user space monitoring tools to extract
memory allocation statistics in a machine-readable form, and to do so
at a per-CPU granularity, allowing monitoring of allocation patterns
across CPUs in order to better understand the distribution of work and
memory flow over multiple CPUs.
While here, also export the number of UMA zones as a sysctl
vm.uma_count, in order to assist in sizing user swpace buffers to
receive the stream.
A follow-up commit of libmemstat(3), a library to monitor kernel memory
allocation, will occur in the next few days. This change directly
supports converting netstat(1)'s "-mb" mode to using UMA-sourced stats
rather than separately maintained mbuf allocator statistics.
MFC after: 1 week
2005-07-14 16:35:13 +00:00
|
|
|
u_int32_t uth_align; /* Keg: alignment. */
|
|
|
|
u_int32_t uth_size; /* Keg: requested size of item. */
|
|
|
|
u_int32_t uth_rsize; /* Keg: real size of item. */
|
|
|
|
u_int32_t uth_maxpages; /* Keg: maximum number of pages. */
|
|
|
|
u_int32_t uth_limit; /* Keg: max items to allocate. */
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Current dynamic zone/keg-derived statistics.
|
|
|
|
*/
|
|
|
|
u_int32_t uth_pages; /* Keg: pages allocated. */
|
|
|
|
u_int32_t uth_keg_free; /* Keg: items free. */
|
|
|
|
u_int32_t uth_zone_free; /* Zone: items free. */
|
|
|
|
u_int32_t uth_bucketsize; /* Zone: desired bucket size. */
|
2005-07-25 00:47:32 +00:00
|
|
|
u_int32_t uth_zone_flags; /* Zone: flags. */
|
Introduce a new sysctl, vm.zone_stats, which exports UMA(9) allocator
statistics via a binary structure stream:
- Add structure 'uma_stream_header', which defines a stream version,
definition of MAXCPUs used in the stream, and the number of zone
records in the stream.
- Add structure 'uma_type_header', which defines the name, alignment,
size, resource allocation limits, current pages allocated, preferred
bucket size, and central zone + keg statistics.
- Add structure 'uma_percpu_stat', which, for each per-CPU cache,
includes the number of allocations and frees, as well as the number
of free items in the cache.
- When the sysctl is queried, return a stream header, followed by a
series of type descriptions, each consisting of a type header
followed by a series of MAXCPUs uma_percpu_stat structures holding
per-CPU allocation information. Typical values of MAXCPU will be
1 (UP compiled kernel) and 16 (SMP compiled kernel).
This query mechanism allows user space monitoring tools to extract
memory allocation statistics in a machine-readable form, and to do so
at a per-CPU granularity, allowing monitoring of allocation patterns
across CPUs in order to better understand the distribution of work and
memory flow over multiple CPUs.
While here, also export the number of UMA zones as a sysctl
vm.uma_count, in order to assist in sizing user swpace buffers to
receive the stream.
A follow-up commit of libmemstat(3), a library to monitor kernel memory
allocation, will occur in the next few days. This change directly
supports converting netstat(1)'s "-mb" mode to using UMA-sourced stats
rather than separately maintained mbuf allocator statistics.
MFC after: 1 week
2005-07-14 16:35:13 +00:00
|
|
|
u_int64_t uth_allocs; /* Zone: number of allocations. */
|
|
|
|
u_int64_t uth_frees; /* Zone: number of frees. */
|
2005-07-15 23:34:39 +00:00
|
|
|
u_int64_t uth_fails; /* Zone: number of alloc failures. */
|
2010-06-15 19:28:37 +00:00
|
|
|
u_int64_t uth_sleeps; /* Zone: number of alloc sleeps. */
|
|
|
|
u_int64_t _uth_reserved1[2]; /* Reserved. */
|
Introduce a new sysctl, vm.zone_stats, which exports UMA(9) allocator
statistics via a binary structure stream:
- Add structure 'uma_stream_header', which defines a stream version,
definition of MAXCPUs used in the stream, and the number of zone
records in the stream.
- Add structure 'uma_type_header', which defines the name, alignment,
size, resource allocation limits, current pages allocated, preferred
bucket size, and central zone + keg statistics.
- Add structure 'uma_percpu_stat', which, for each per-CPU cache,
includes the number of allocations and frees, as well as the number
of free items in the cache.
- When the sysctl is queried, return a stream header, followed by a
series of type descriptions, each consisting of a type header
followed by a series of MAXCPUs uma_percpu_stat structures holding
per-CPU allocation information. Typical values of MAXCPU will be
1 (UP compiled kernel) and 16 (SMP compiled kernel).
This query mechanism allows user space monitoring tools to extract
memory allocation statistics in a machine-readable form, and to do so
at a per-CPU granularity, allowing monitoring of allocation patterns
across CPUs in order to better understand the distribution of work and
memory flow over multiple CPUs.
While here, also export the number of UMA zones as a sysctl
vm.uma_count, in order to assist in sizing user swpace buffers to
receive the stream.
A follow-up commit of libmemstat(3), a library to monitor kernel memory
allocation, will occur in the next few days. This change directly
supports converting netstat(1)'s "-mb" mode to using UMA-sourced stats
rather than separately maintained mbuf allocator statistics.
MFC after: 1 week
2005-07-14 16:35:13 +00:00
|
|
|
};

struct uma_percpu_stat {
	u_int64_t	ups_allocs;	/* Cache: number of allocations. */
	u_int64_t	ups_frees;	/* Cache: number of frees. */
	u_int64_t	ups_cache_free;	/* Cache: free items in cache. */
	u_int64_t	_ups_reserved[5];	/* Reserved. */
};

#endif