/*-
 * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
 *
 * Copyright (c) 2002, 2003, 2004, 2005 Jeffrey Roberson <jeff@FreeBSD.org>
 * Copyright (c) 2004, 2005 Bosko Milekic <bmilekic@FreeBSD.org>
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice unmodified, this list of conditions, and the following
 *    disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
 * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * $FreeBSD$
 *
 */

/*
 * uma.h - External definitions for the Universal Memory Allocator
 *
 */

#ifndef _VM_UMA_H_
#define _VM_UMA_H_

#include <sys/param.h>		/* For NULL */
#include <sys/malloc.h>		/* For M_* */
#include <sys/_smr.h>

/* User visible parameters */
#define UMA_SMALLEST_UNIT	8	/* Smallest item allocated */

/* Types and type defs */

struct uma_zone;
/* Opaque type used as a handle to the zone */
typedef struct uma_zone * uma_zone_t;

/*
 * Item constructor
 *
 * Arguments:
 *	item  A pointer to the memory which has been allocated.
 *	arg   The arg field passed to uma_zalloc_arg
 *	size  The size of the allocated item
 *	flags See zalloc flags
 *
 * Returns:
 *	0      on success
 *	errno  on failure
 *
 * Discussion:
 *	The constructor is called just before the memory is returned
 *	to the user. It may block if necessary.
 */
typedef int (*uma_ctor)(void *mem, int size, void *arg, int flags);

/*
 * Item destructor
 *
 * Arguments:
 *	item  A pointer to the memory which has been allocated.
 *	size  The size of the item being destructed.
 *	arg   Argument passed through uma_zfree_arg
 *
 * Returns:
 *	Nothing
 *
 * Discussion:
 *	The destructor may perform operations that differ from those performed
 *	by the initializer, but it must leave the object in the same state.
 *	This IS type stable storage.  This is called after EVERY zfree call.
 */
typedef void (*uma_dtor)(void *mem, int size, void *arg);

/*
 * Item initializer
 *
 * Arguments:
 *	item  A pointer to the memory which has been allocated.
 *	size  The size of the item being initialized.
 *	flags See zalloc flags
 *
 * Returns:
 *	0      on success
 *	errno  on failure
 *
 * Discussion:
 *	The initializer is called when the memory is cached in the uma zone.
 *	The initializer and the destructor should leave the object in the same
 *	state.
 */
typedef int (*uma_init)(void *mem, int size, int flags);

/*
 * Item discard function
 *
 * Arguments:
 *	item  A pointer to memory which has been 'freed' but has not left the
 *	      zone's cache.
 *	size  The size of the item being discarded.
 *
 * Returns:
 *	Nothing
 *
 * Discussion:
 *	This routine is called when memory leaves a zone and is returned to the
 *	system for other uses.  It is the counter-part to the init function.
 */
typedef void (*uma_fini)(void *mem, int size);

/*
 * Import new memory into a cache zone.
 */
typedef int (*uma_import)(void *arg, void **store, int count, int domain,
    int flags);

/*
 * Free memory from a cache zone.
 */
typedef void (*uma_release)(void *arg, void **store, int count);

/*
 * What's the difference between initializing and constructing?
 *
 * The item is initialized when it is cached, and this is the state that the
 * object should be in when returned to the allocator. The purpose of this is
 * to remove some code which would otherwise be called on each allocation by
 * utilizing a known, stable state.  This differs from the constructor which
 * will be called on EVERY allocation.
 *
 * For example, in the initializer you may want to initialize embedded locks,
 * NULL list pointers, set up initial states, magic numbers, etc.  This way if
 * the object is held in the allocator and re-used it won't be necessary to
 * re-initialize it.
 *
 * The constructor may be used to lock a data structure, link it on to lists,
 * bump reference counts or total counts of outstanding structures, etc.
 *
 */

/* Function proto types */

/*
 * Create a new uma zone
 *
 * Arguments:
 *	name  The text name of the zone for debugging and stats. This memory
 *		should not be freed until the zone has been deallocated.
 *	size  The size of the object that is being created.
 *	ctor  The constructor that is called when the object is allocated.
 *	dtor  The destructor that is called when the object is freed.
 *	init  An initializer that sets up the initial state of the memory.
 *	fini  A discard function that undoes initialization done by init.
 *		ctor/dtor/init/fini may all be null, see notes above.
 *	align A bitmask that corresponds to the requested alignment
 *		eg 4 would be 0x3
 *	flags A set of parameters that control the behavior of the zone.
 *
 * Returns:
 *	A pointer to a structure which is intended to be opaque to users of
 *	the interface.  The value may be null if the wait flag is not set.
 */
uma_zone_t uma_zcreate(const char *name, size_t size, uma_ctor ctor,
	    uma_dtor dtor, uma_init uminit, uma_fini fini,
	    int align, uint32_t flags);
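
/*
 * For example, a sketch reusing the hypothetical foo callbacks above; the
 * zone name and variable are likewise illustrative:
 *
 *	static uma_zone_t foo_zone;
 *
 *	foo_zone = uma_zcreate("foo", sizeof(struct foo),
 *	    foo_ctor, foo_dtor, foo_init, foo_fini,
 *	    UMA_ALIGN_PTR, 0);
 */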

/*
 * Create a secondary uma zone
 *
 * Arguments:
 *	name  The text name of the zone for debugging and stats. This memory
 *		should not be freed until the zone has been deallocated.
 *	ctor  The constructor that is called when the object is allocated.
 *	dtor  The destructor that is called when the object is freed.
 *	zinit  An initializer that sets up the initial state of the memory
 *		as the object passes from the Keg's slab to the Zone's cache.
 *	zfini  A discard function that undoes initialization done by init
 *		as the object passes from the Zone's cache to the Keg's slab.
 *
 *	ctor/dtor/zinit/zfini may all be null, see notes above.
 *	Note that the zinit and zfini specified here are NOT
 *	exactly the same as the init/fini specified to uma_zcreate()
 *	when creating a primary zone.  These zinit/zfini are called
 *	on the TRANSITION from keg to zone (and vice-versa). Once
 *	these are set, the primary zone may alter its init/fini
 *	(which are called when the object passes from VM to keg)
 *	using uma_zone_set_init/fini() as well as its own
 *	zinit/zfini (unset by default for primary zone) with
 *	uma_zone_set_zinit/zfini() (note subtle 'z' prefix).
 *
 *	primary A reference to this zone's Primary Zone which contains the
 *		backing Keg for the Secondary Zone being added.
 *
 * Returns:
 *	A pointer to a structure which is intended to be opaque to users of
 *	the interface.  The value may be null if the wait flag is not set.
 */
uma_zone_t uma_zsecond_create(const char *name, uma_ctor ctor, uma_dtor dtor,
	    uma_init zinit, uma_fini zfini, uma_zone_t primary);
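
/*
 * For example (a sketch; "foo_zone" is the primary zone created above and
 * the bar_* callbacks are hypothetical):
 *
 *	static uma_zone_t bar_zone;
 *
 *	bar_zone = uma_zsecond_create("bar", bar_ctor, bar_dtor,
 *	    NULL, NULL, foo_zone);
 *
 * Both zones now draw slabs from the same keg, but cache and construct
 * their items independently.
 */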

/*
 * Create cache-only zones.
 *
 * This allows uma's per-cpu cache facilities to handle arbitrary
 * pointers.  Consumers must specify the import and release functions to
 * fill and destroy caches.  UMA does not allocate any memory for these
 * zones.  The 'arg' parameter is passed to import/release and is caller
 * specific.
 */
uma_zone_t uma_zcache_create(const char *name, int size, uma_ctor ctor,
	    uma_dtor dtor, uma_init zinit, uma_fini zfini, uma_import zimport,
	    uma_release zrelease, void *arg, int flags);
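
/*
 * For example (a sketch; the widget_* names and the softc argument are
 * hypothetical, and the import/release callbacks must match the
 * uma_import/uma_release typedefs above):
 *
 *	widget_zone = uma_zcache_create("widgets", sizeof(struct widget),
 *	    NULL, NULL, NULL, NULL, widget_import, widget_release,
 *	    softc, 0);
 */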

/*
 * Definitions for uma_zcreate flags
 *
 * These flags share space with UMA_ZFLAGs in uma_int.h.  Be careful not to
 * overlap when adding new features.
 */
#define UMA_ZONE_ZINIT		0x0002	/* Initialize with zeros */
#define UMA_ZONE_CONTIG		0x0004	/*
					 * Physical memory underlying an object
					 * must be contiguous.
					 */
#define UMA_ZONE_NOTOUCH	0x0008	/* UMA may not access the memory */
#define UMA_ZONE_MALLOC		0x0010	/* For use by malloc(9) only! */
#define UMA_ZONE_NOFREE		0x0020	/* Do not free slabs of this type! */
#define UMA_ZONE_MTXCLASS	0x0040	/* Create a new lock class */
#define UMA_ZONE_VM		0x0080	/*
					 * Used for internal vm datastructures
					 * only.
					 */
#define UMA_ZONE_NOTPAGE	0x0100	/* allocf memory not vm pages */
#define UMA_ZONE_SECONDARY	0x0200	/* Zone is a Secondary Zone */
#define UMA_ZONE_NOBUCKET	0x0400	/* Do not use buckets. */
#define UMA_ZONE_MAXBUCKET	0x0800	/* Use largest buckets. */
#define UMA_ZONE_MINBUCKET	0x1000	/* Use smallest buckets. */
#define UMA_ZONE_CACHESPREAD	0x2000	/*
					 * Spread memory start locations across
					 * all possible cache lines.  May
					 * require many virtually contiguous
					 * backend pages and can fail early.
					 */
#define UMA_ZONE_NODUMP		0x4000	/*
					 * Zone's pages will not be included in
					 * mini-dumps.
					 */
#define UMA_ZONE_PCPU		0x8000	/*
					 * Allocates mp_maxid + 1 slabs of
					 * PAGE_SIZE
					 */
#define UMA_ZONE_FIRSTTOUCH	0x10000	/* First touch NUMA policy */
#define UMA_ZONE_ROUNDROBIN	0x20000	/* Round-robin NUMA policy. */
#define UMA_ZONE_SMR		0x40000 /*
					 * Safe memory reclamation defers
					 * frees until all read sections
					 * have exited.  This flag creates
					 * a unique SMR context for this
					 * zone.  To share contexts see
					 * uma_zone_set_smr() below.
					 *
					 * See sys/smr.h for more details.
					 */
/* In use by UMA_ZFLAGs:	0xffe00000 */

/*
 * These flags are shared between the keg and zone.  Some are determined
 * based on physical parameters of the request and may not be provided by
 * the consumer.
 */
#define UMA_ZONE_INHERIT						\
    (UMA_ZONE_NOTOUCH | UMA_ZONE_MALLOC | UMA_ZONE_NOFREE |		\
    UMA_ZONE_VM | UMA_ZONE_NOTPAGE | UMA_ZONE_PCPU |			\
    UMA_ZONE_FIRSTTOUCH | UMA_ZONE_ROUNDROBIN)

/* Definitions for align */
#define UMA_ALIGN_PTR	(sizeof(void *) - 1)	/* Alignment fit for ptr */
#define UMA_ALIGN_LONG	(sizeof(long) - 1)	/* "" long */
#define UMA_ALIGN_INT	(sizeof(int) - 1)	/* "" int */
#define UMA_ALIGN_SHORT	(sizeof(short) - 1)	/* "" short */
#define UMA_ALIGN_CHAR	(sizeof(char) - 1)	/* "" char */
#define UMA_ALIGN_CACHE	(0 - 1)		/* Cache line size align */
#define UMA_ALIGNOF(type) (_Alignof(type) - 1)	/* Alignment fit for 'type' */

#define UMA_ANYDOMAIN	-1	/* Special value for domain search. */
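
/*
 * For example (a sketch; "frob_zone" and "struct frob" are hypothetical), a
 * zone whose items are cache-line aligned and whose slabs are never
 * returned to the system could be created with:
 *
 *	frob_zone = uma_zcreate("frob", sizeof(struct frob),
 *	    NULL, NULL, NULL, NULL, UMA_ALIGN_CACHE, UMA_ZONE_NOFREE);
 */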

/*
 * Destroys an empty uma zone.  If the zone is not empty uma complains loudly.
 *
 * Arguments:
 *	zone  The zone we want to destroy.
 *
 */
void uma_zdestroy(uma_zone_t zone);

/*
 * Allocates an item out of a zone
 *
 * Arguments:
 *	zone  The zone we are allocating from
 *	arg   This data is passed to the ctor function
 *	flags See sys/malloc.h for available flags.
 *
 * Returns:
 *	A non-null pointer to an initialized element from the zone is
 *	guaranteed if the wait flag is M_WAITOK.  Otherwise a null pointer
 *	may be returned if the zone is empty or the ctor failed.
 */

void *uma_zalloc_arg(uma_zone_t zone, void *arg, int flags);

/* Allocate per-cpu data.  Access the correct data with zpcpu_get(). */
void *uma_zalloc_pcpu_arg(uma_zone_t zone, void *arg, int flags);

/* Use with SMR zones. */
void *uma_zalloc_smr(uma_zone_t zone, int flags);

/*
 * Allocate an item from a specific NUMA domain.  This uses a slow path in
 * the allocator but is guaranteed to allocate memory from the requested
 * domain if M_WAITOK is set.
 *
 * Arguments:
 *	zone  The zone we are allocating from
 *	arg   This data is passed to the ctor function
 *	domain The domain to allocate from.
 *	flags See sys/malloc.h for available flags.
 */
void *uma_zalloc_domain(uma_zone_t zone, void *arg, int domain, int flags);
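
/*
 * For example (a sketch; "foo_zone" is hypothetical), an item that should
 * live in the NUMA domain of the current CPU could be allocated with:
 *
 *	f = uma_zalloc_domain(foo_zone, NULL, PCPU_GET(domain), M_WAITOK);
 */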

/*
 * Allocates an item out of a zone without supplying an argument
 *
 * This is just a wrapper for uma_zalloc_arg for convenience.
 *
 */
static __inline void *uma_zalloc(uma_zone_t zone, int flags);
static __inline void *uma_zalloc_pcpu(uma_zone_t zone, int flags);

static __inline void *
uma_zalloc(uma_zone_t zone, int flags)
{
	return uma_zalloc_arg(zone, NULL, flags);
}

static __inline void *
uma_zalloc_pcpu(uma_zone_t zone, int flags)
{
	return uma_zalloc_pcpu_arg(zone, NULL, flags);
}

/*
 * Frees an item back into the specified zone.
 *
 * Arguments:
 *	zone  The zone the item was originally allocated out of.
 *	item  The memory to be freed.
 *	arg   Argument passed to the destructor
 *
 * Returns:
 *	Nothing.
 */

void uma_zfree_arg(uma_zone_t zone, void *item, void *arg);

/* Use with PCPU zones. */
void uma_zfree_pcpu_arg(uma_zone_t zone, void *item, void *arg);

/* Use with SMR zones. */
void uma_zfree_smr(uma_zone_t zone, void *item);
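
/*
 * An SMR zone is created with the UMA_ZONE_SMR flag; its items are allocated
 * and freed with the _smr variants, and readers enter the zone's SMR read
 * section as described in sys/smr.h before dereferencing shared pointers.
 * A minimal sketch (names are hypothetical):
 *
 *	smr_zone = uma_zcreate("smr_items", sizeof(struct item),
 *	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_SMR);
 *	item = uma_zalloc_smr(smr_zone, M_WAITOK);
 *	...publish item; later unlink it from all shared structures...
 *	uma_zfree_smr(smr_zone, item);
 */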

/*
 * Frees an item back to a zone without supplying an argument
 *
 * This is just a wrapper for uma_zfree_arg for convenience.
 *
 */
static __inline void uma_zfree(uma_zone_t zone, void *item);
static __inline void uma_zfree_pcpu(uma_zone_t zone, void *item);

static __inline void
uma_zfree(uma_zone_t zone, void *item)
{
	uma_zfree_arg(zone, item, NULL);
}

static __inline void
uma_zfree_pcpu(uma_zone_t zone, void *item)
{
	uma_zfree_pcpu_arg(zone, item, NULL);
}
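
/*
 * Typical allocation and free of a zone item (a sketch; "foo_zone" is the
 * hypothetical zone from the examples above).  With M_NOWAIT the return
 * value must be checked; with M_WAITOK a non-null pointer is guaranteed,
 * as described above:
 *
 *	struct foo *f;
 *
 *	f = uma_zalloc(foo_zone, M_NOWAIT);
 *	if (f == NULL)
 *		return (ENOMEM);
 *	...use f...
 *	uma_zfree(foo_zone, f);
 */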

/*
 * Wait until the specified zone can allocate an item.
 */
void uma_zwait(uma_zone_t zone);

/*
 * Backend page supplier routines
 *
 * Arguments:
 *	zone  The zone that is requesting pages.
 *	size  The number of bytes being requested.
 *	pflag Flags for these memory pages, see below.
 *	domain The NUMA domain that we prefer for this allocation.
 *	wait  Indicates our willingness to block.
 *
 * Returns:
 *	A pointer to the allocated memory or NULL on failure.
 */

typedef void *(*uma_alloc)(uma_zone_t zone, vm_size_t size, int domain,
    uint8_t *pflag, int wait);

/*
 * Backend page free routines
 *
 * Arguments:
 *	item  A pointer to the previously allocated pages.
 *	size  The original size of the allocation.
 *	pflag The flags for the slab.  See UMA_SLAB_* below.
 *
 * Returns:
 *	None
 */
typedef void (*uma_free)(void *item, vm_size_t size, uint8_t pflag);

/*
 * Reclaims unused memory
 *
 * Arguments:
 *	req    Reclamation request type.
 * Returns:
 *	None
 */
#define UMA_RECLAIM_DRAIN	1	/* release bucket cache */
#define UMA_RECLAIM_DRAIN_CPU	2	/* release bucket and per-CPU caches */
#define UMA_RECLAIM_TRIM	3	/* trim bucket cache to WSS */
void uma_reclaim(int req);
void uma_zone_reclaim(uma_zone_t, int req);
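
/*
 * For example, a subsystem reacting to memory pressure might trim only its
 * own zone's bucket cache, or drain caches system-wide (a sketch; "foo_zone"
 * is hypothetical):
 *
 *	uma_zone_reclaim(foo_zone, UMA_RECLAIM_TRIM);
 *	uma_reclaim(UMA_RECLAIM_DRAIN_CPU);
 */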

/*
 * Sets the alignment mask to be used for all zones requesting cache
 * alignment.  Should be called by MD boot code prior to starting VM/UMA.
 *
 * Arguments:
 *	align The alignment mask
 *
 * Returns:
 *	Nothing
 */
void uma_set_align(int align);

/*
 * Set a reserved number of items to hold for M_USE_RESERVE allocations.  All
 * other requests must allocate new backing pages.
 */
void uma_zone_reserve(uma_zone_t zone, int nitems);

/*
 * Reserves the maximum KVA space required by the zone and configures the zone
 * to use a VM_ALLOC_NOOBJ-based backend allocator.
 *
 * Arguments:
 *	zone  The zone to update.
 *	nitems  The upper limit on the number of items that can be allocated.
 *
 * Returns:
 *	0  if KVA space can not be allocated
 *	1  if successful
 *
 * Discussion:
 *	When the machine supports a direct map and the zone's items are smaller
 *	than a page, the zone will use the direct map instead of allocating KVA
 *	space.
 */
int uma_zone_reserve_kva(uma_zone_t zone, int nitems);

/*
 * Sets a high limit on the number of items allowed in a zone
 *
 * Arguments:
 *	zone  The zone to limit
 *	nitems  The requested upper limit on the number of items allowed
 *
 * Returns:
 *	int  The effective value of nitems
 */
int uma_zone_set_max(uma_zone_t zone, int nitems);
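
/*
 * Note that the effective limit may be rounded up to the minimum possible
 * CPU bucket cache size, so callers should record the returned value rather
 * than the value they requested (a sketch):
 *
 *	foo_limit = uma_zone_set_max(foo_zone, 1024);
 */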

/*
 * Sets a high limit on the number of items allowed in zone's bucket cache
 *
 * Arguments:
 *	zone  The zone to limit
 *	nitems  The requested upper limit on the number of items allowed
 *		(0 effectively disables the bucket cache)
 */
void uma_zone_set_maxcache(uma_zone_t zone, int nitems);

/*
 * Obtains the effective limit on the number of items in a zone
 *
 * Arguments:
 *	zone  The zone to obtain the effective limit from
 *
 * Return:
 *	0  No limit
 *	int  The effective limit of the zone
 */
int uma_zone_get_max(uma_zone_t zone);
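
/*
 * Example (illustrative sketch): checking whether a limit is in effect on a
 * hypothetical "foo" zone, where a return of 0 means "no limit":
 *
 *	if (uma_zone_get_max(foo_zone) == 0)
 *		printf("foo zone is unlimited\n");
 */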

/*
 * Sets a warning to be printed when limit is reached
 *
 * Arguments:
 *	zone  The zone we will warn about
 *	warning  Warning content
 *
 * Returns:
 *	Nothing
 */
void uma_zone_set_warning(uma_zone_t zone, const char *warning);
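
/*
 * Example (illustrative sketch): attaching a warning to a hypothetical
 * "foo" zone so the administrator gets a hint when the limit is hit:
 *
 *	uma_zone_set_warning(foo_zone,
 *	    "foo zone limit reached; consider increasing it");
 */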

/*
 * Sets a function to run when limit is reached
 *
 * Arguments:
 *	zone  The zone to which this applies
 *	fx  The function to run
 *
 * Returns:
 *	Nothing
 */
typedef void (*uma_maxaction_t)(uma_zone_t, int);
void uma_zone_set_maxaction(uma_zone_t zone, uma_maxaction_t);
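
/*
 * Example (illustrative sketch): registering a hypothetical callback that
 * matches the uma_maxaction_t prototype above and nudges a reclaim thread
 * when the zone's limit is reached:
 *
 *	static void
 *	foo_maxaction(uma_zone_t zone, int unused)
 *	{
 *
 *		wakeup(&foo_reclaim_wanted);
 *	}
 *
 *	uma_zone_set_maxaction(foo_zone, foo_maxaction);
 */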

/*
 * Obtains the approximate current number of items allocated from a zone
 *
 * Arguments:
 *	zone  The zone to obtain the current allocation count from
 *
 * Return:
 *	int  The approximate current number of items allocated from the zone
 */
int uma_zone_get_cur(uma_zone_t zone);
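
/*
 * Example (illustrative sketch): exporting the current allocation count of
 * a hypothetical "foo" zone through a read-only sysctl handler:
 *
 *	static int
 *	sysctl_foo_count(SYSCTL_HANDLER_ARGS)
 *	{
 *		int count;
 *
 *		count = uma_zone_get_cur(foo_zone);
 *		return (sysctl_handle_int(oidp, &count, 0, req));
 *	}
 */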

/*
 * The following two routines (uma_zone_set_init/fini)
 * are used to set the backend init/fini pair which acts on an
 * object as it becomes allocated and is placed in a slab within
 * the specified zone's backing keg. These should probably not
 * be changed once allocations have already begun, but only be set
 * immediately upon zone creation.
 */
void uma_zone_set_init(uma_zone_t zone, uma_init uminit);
void uma_zone_set_fini(uma_zone_t zone, uma_fini fini);
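
/*
 * Example (illustrative sketch, assuming the uma_init/uma_fini prototypes
 * declared earlier in this file): giving a hypothetical "foo" zone an
 * init/fini pair that sets up a mutex once per slab item rather than on
 * every allocation:
 *
 *	static int
 *	foo_init(void *mem, int size, int flags)
 *	{
 *		struct foo *foo = mem;
 *
 *		mtx_init(&foo->foo_mtx, "foo", NULL, MTX_DEF);
 *		return (0);
 *	}
 *
 *	static void
 *	foo_fini(void *mem, int size)
 *	{
 *		struct foo *foo = mem;
 *
 *		mtx_destroy(&foo->foo_mtx);
 *	}
 *
 *	uma_zone_set_init(foo_zone, foo_init);
 *	uma_zone_set_fini(foo_zone, foo_fini);
 */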

/*
 * The following two routines (uma_zone_set_zinit/zfini) are
 * used to set the zinit/zfini pair which acts on an object as
 * it passes from the backing Keg's slab cache to the
 * specified Zone's bucket cache. These should probably not
 * be changed once allocations have already begun, but only be set
 * immediately upon zone creation.
 */
void uma_zone_set_zinit(uma_zone_t zone, uma_init zinit);
void uma_zone_set_zfini(uma_zone_t zone, uma_fini zfini);
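
/*
 * Example (illustrative sketch): the zinit/zfini pair is installed the same
 * way as the backend pair above, but runs as items move between the keg and
 * the zone's bucket cache (foo_zinit/foo_zfini are hypothetical):
 *
 *	uma_zone_set_zinit(foo_zone, foo_zinit);
 *	uma_zone_set_zfini(foo_zone, foo_zfini);
 */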

/*
 * Replaces the standard backend allocator for this zone.
 *
 * Arguments:
 *	zone  The zone whose backend allocator is being changed.
 *	allocf  A pointer to the allocation function
 *
 * Returns:
 *	Nothing
 *
 * Discussion:
 *	This could be used to implement pageable allocation, or perhaps
 *	even DMA allocators if used in conjunction with the OFFPAGE
 *	zone flag.
 */
void uma_zone_set_allocf(uma_zone_t zone, uma_alloc allocf);

/*
 * Used for freeing memory provided by the allocf above
 *
 * Arguments:
 *	zone  The zone that intends to use this free routine.
 *	freef  The page freeing routine.
 *
 * Returns:
 *	Nothing
 */
void uma_zone_set_freef(uma_zone_t zone, uma_free freef);
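
/*
 * Example (illustrative sketch, assuming custom routines foo_page_alloc()
 * and foo_page_free() that match the uma_alloc/uma_free prototypes declared
 * earlier in this file): pairing a replacement backend allocator with its
 * matching free routine, e.g. for a zone backed by DMA-able memory:
 *
 *	uma_zone_set_allocf(foo_dma_zone, foo_page_alloc);
 *	uma_zone_set_freef(foo_dma_zone, foo_page_free);
 */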

/*
 * Associate a zone with an SMR context that is allocated after creation
 * so that multiple zones may share the same context.
 */
void uma_zone_set_smr(uma_zone_t zone, smr_t smr);

/*
 * Fetch the SMR context that was set or made in uma_zcreate().
 */
smr_t uma_zone_get_smr(uma_zone_t zone);
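
/*
 * Example (illustrative sketch, assuming a "foo" zone created with the
 * UMA_ZONE_SMR flag and a related "bar" zone): letting both zones share the
 * SMR context that uma_zcreate() made for the first one:
 *
 *	uma_zone_set_smr(bar_zone, uma_zone_get_smr(foo_zone));
 */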

/*
 * These flags are settable in the allocf and visible in the freef.
 */
#define UMA_SLAB_BOOT	0x01		/* Slab alloced from boot pages */
#define UMA_SLAB_KERNEL	0x04		/* Slab alloced from kmem */
#define UMA_SLAB_PRIV	0x08		/* Slab alloced from priv allocator */
/* 0x02, 0x10, 0x40, and 0x80 are available */

/*
 * Used to pre-fill a zone with some number of items
 *
 * Arguments:
 *	zone  The zone to fill
 *	itemcnt  The number of items to reserve
 *
 * Returns:
 *	Nothing
 *
 * NOTE: This is blocking and should only be done at startup
 */
void uma_prealloc(uma_zone_t zone, int itemcnt);
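
/*
 * Example (illustrative sketch): pre-filling a hypothetical "foo" zone from
 * an early initialization routine so the first consumers never have to wait
 * for the backend allocator:
 *
 *	uma_prealloc(foo_zone, 256);
 */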

/*
 * Used to determine if a fixed-size zone is exhausted.
 *
 * Arguments:
 *	zone  The zone to check
 *
 * Returns:
 *	Non-zero if zone is exhausted.
 */
int uma_zone_exhausted(uma_zone_t zone);
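
/*
 * Example (illustrative sketch): failing a request early when a hypothetical
 * fixed-size "foo" zone has no items left:
 *
 *	if (uma_zone_exhausted(foo_zone))
 *		return (ENOMEM);
 */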

/*
 * Returns the bytes of memory consumed by the zone.
 */
size_t uma_zone_memory(uma_zone_t zone);

/*
 * Common UMA_ZONE_PCPU zones.
 */
extern uma_zone_t pcpu_zone_int;
extern uma_zone_t pcpu_zone_64;
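
/*
 * Example (illustrative sketch, assuming the uma_zalloc_pcpu()/
 * uma_zfree_pcpu() helpers are available to the caller): borrowing the
 * shared 64-bit per-CPU zone for a simple per-CPU statistic instead of
 * creating a private UMA_ZONE_PCPU zone:
 *
 *	uint64_t *foo_pcpu_stat;
 *
 *	foo_pcpu_stat = uma_zalloc_pcpu(pcpu_zone_64, M_WAITOK);
 *	...
 *	uma_zfree_pcpu(pcpu_zone_64, foo_pcpu_stat);
 */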

/*
 * Exported statistics structures to be used by user space monitoring tools.
 * The statistics stream consists of a uma_stream_header, followed by a
 * series of records, each a uma_type_header followed by ush_maxcpus
 * uma_percpu_stat structures.
 */
#define UMA_STREAM_VERSION 0x00000001
struct uma_stream_header {
	uint32_t	ush_version;	/* Stream format version. */
	uint32_t	ush_maxcpus;	/* Value of MAXCPU for stream. */
	uint32_t	ush_count;	/* Number of records. */
	uint32_t	_ush_pad;	/* Pad/reserved field. */
};

#define UTH_MAX_NAME	32
#define UTH_ZONE_SECONDARY 0x00000001
struct uma_type_header {
	/*
	 * Static per-zone data, some extracted from the supporting keg.
	 */
	char		uth_name[UTH_MAX_NAME];
	uint32_t	uth_align;	/* Keg: alignment. */
	uint32_t	uth_size;	/* Keg: requested size of item. */
	uint32_t	uth_rsize;	/* Keg: real size of item. */
	uint32_t	uth_maxpages;	/* Keg: maximum number of pages. */
	uint32_t	uth_limit;	/* Keg: max items to allocate. */

	/*
	 * Current dynamic zone/keg-derived statistics.
	 */
	uint32_t	uth_pages;	/* Keg: pages allocated. */
	uint32_t	uth_keg_free;	/* Keg: items free. */
	uint32_t	uth_zone_free;	/* Zone: items free. */
	uint32_t	uth_bucketsize;	/* Zone: desired bucket size. */
	uint32_t	uth_zone_flags;	/* Zone: flags. */
	uint64_t	uth_allocs;	/* Zone: number of allocations. */
	uint64_t	uth_frees;	/* Zone: number of frees. */
	uint64_t	uth_fails;	/* Zone: number of alloc failures. */
	uint64_t	uth_sleeps;	/* Zone: number of alloc sleeps. */
	uint64_t	uth_xdomain;	/* Zone: number of cross domain frees. */
	uint64_t	_uth_reserved1[1];	/* Reserved. */
};

struct uma_percpu_stat {
	uint64_t	ups_allocs;	/* Cache: number of allocations. */
	uint64_t	ups_frees;	/* Cache: number of frees. */
	uint64_t	ups_cache_free;	/* Cache: free items in cache. */
	uint64_t	_ups_reserved[5];	/* Reserved. */
};
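
/*
 * Example (illustrative user space sketch, assuming "buf" already holds the
 * stream fetched from the vm.zone_stats sysctl): walking the records, each
 * of which is one uma_type_header followed by ush_maxcpus uma_percpu_stat
 * structures:
 *
 *	struct uma_stream_header *ush = (struct uma_stream_header *)buf;
 *	struct uma_type_header *uth;
 *	char *p = (char *)(ush + 1);
 *	uint32_t i;
 *
 *	for (i = 0; i < ush->ush_count; i++) {
 *		uth = (struct uma_type_header *)p;
 *		printf("%s: %ju allocs\n", uth->uth_name,
 *		    (uintmax_t)uth->uth_allocs);
 *		p += sizeof(*uth) +
 *		    ush->ush_maxcpus * sizeof(struct uma_percpu_stat);
 *	}
 */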

void uma_reclaim_wakeup(void);
void uma_reclaim_worker(void *);

unsigned long uma_limit(void);

/* Return the amount of memory managed by UMA. */
unsigned long uma_size(void);

/* Return the amount of memory remaining. May be negative. */
long uma_avail(void);
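
/*
 * Example (illustrative sketch): reporting how close UMA is to its limit
 * using the three accessors above:
 *
 *	printf("uma: %lu bytes managed, limit %lu, %ld remaining\n",
 *	    uma_size(), uma_limit(), uma_avail());
 */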

#endif	/* _VM_UMA_H_ */