Import jemalloc 9ef7f5dc34ff02f50d401e41c8d9a4a928e7c2aa (dev branch,
prior to 3.0.0 release) as contrib/jemalloc, and integrate it into libc. The code being imported by this commit diverged from lib/libc/stdlib/malloc.c in March 2010, which means that a portion of the jemalloc 1.0.0 ChangeLog entries are relevant, as are the entries for all subsequent releases.
commit a4bd5210d5 (parent f846cf42ab)
27  contrib/jemalloc/COPYING  Normal file
@@ -0,0 +1,27 @@
Unless otherwise specified, files in the jemalloc source distribution are
subject to the following license:
--------------------------------------------------------------------------------
Copyright (C) 2002-2012 Jason Evans <jasone@canonware.com>.
All rights reserved.
Copyright (C) 2007-2012 Mozilla Foundation.  All rights reserved.
Copyright (C) 2009-2012 Facebook, Inc.  All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice(s),
   this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice(s),
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO
EVENT SHALL THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------------------
322  contrib/jemalloc/ChangeLog  Normal file
@@ -0,0 +1,322 @@
Following are change highlights associated with official releases.  Important
bug fixes are all mentioned, but internal enhancements are omitted here for
brevity (even though they are more fun to write about).  Much more detail can be
found in the git revision history:

    http://www.canonware.com/cgi-bin/gitweb.cgi?p=jemalloc.git
    git://canonware.com/jemalloc.git

* 3.0.0 (XXX not yet released)

  Although this version adds some major new features, the primary focus is on
  internal code cleanup that facilitates maintainability and portability, most
  of which is not reflected in the ChangeLog.  This is the first release to
  incorporate substantial contributions from numerous other developers, and the
  result is a more broadly useful allocator (see the git revision history for
  contribution details).  Note that the license has been unified, thanks to
  Facebook granting a license under the same terms as the other copyright
  holders (see COPYING).

  New features:
  - Implement Valgrind support, redzones, and quarantine.
  - Add support for additional operating systems:
    + FreeBSD
    + Mac OS X Lion
  - Add support for additional architectures:
    + MIPS
    + SH4
    + Tilera
  - Add support for cross compiling.
  - Add nallocm(), which rounds a request size up to the nearest size class
    without actually allocating.
  - Implement aligned_alloc() (blame C11).
  - Add the --disable-munmap option, and make it the default on Linux.
  - Add the --with-mangling option.
  - Add the --disable-experimental option.
  - Add the "thread.tcache.enabled" mallctl.
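[A hedged illustration of the nallocm() entry above; this sketch is not part of
the imported sources.  It assumes the experimental API as exposed through
FreeBSD's <malloc_np.h> (the header named in FREEBSD-diffs below).]

    #include <stdio.h>
    #include <malloc_np.h>

    int
    main(void)
    {
            size_t rsize;

            /* Ask which size class a 100-byte request rounds up to. */
            if (nallocm(&rsize, 100, 0) == ALLOCM_SUCCESS)
                    printf("100 bytes -> size class %zu\n", rsize);
            return (0);
    }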
  Incompatible changes:
  - Enable stats by default.
  - Enable fill by default.
  - Disable lazy locking by default.
  - Rename the "tcache.flush" mallctl to "thread.tcache.flush".
  - Rename the "arenas.pagesize" mallctl to "arenas.page".

  Removed features:
  - Remove the swap feature, including the "config.swap", "swap.avail",
    "swap.prezeroed", "swap.nfds", and "swap.fds" mallctls.
  - Remove highruns statistics, including the
    "stats.arenas.<i>.bins.<j>.highruns" and
    "stats.arenas.<i>.lruns.<j>.highruns" mallctls.
  - As part of small size class refactoring, remove the "opt.lg_[qc]space_max",
    "arenas.cacheline", "arenas.subpage", "arenas.[tqcs]space_{min,max}", and
    "arenas.[tqcs]bins" mallctls.
  - Remove the "arenas.chunksize" mallctl.
  - Remove the "opt.lg_prof_tcmax" option.
  - Remove the "opt.lg_prof_bt_max" option.
  - Remove the "opt.lg_tcache_gc_sweep" option.
  - Remove the --disable-tiny option, including the "config.tiny" mallctl.
  - Remove the --enable-dynamic-page-shift configure option.
  - Remove the --enable-sysv configure option.

  Bug fixes:
  - Fix fork-related bugs that could cause deadlock in children between fork
    and exec.
  - Fix a statistics-related bug in the "thread.arena" mallctl that could cause
    invalid statistics and crashes.
  - Work around TLS deallocation via free() on Linux.  This bug could cause
    write-after-free memory corruption.
  - Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter.
  - Fix realloc(p, 0) to act like free(p).
  - Do not enforce minimum alignment in memalign().
  - Check for NULL pointer in malloc_usable_size().
  - Fix bin->runcur management to fix a layout policy bug.  This bug did not
    affect correctness.
  - Fix a bug in choose_arena_hard() that potentially caused more arenas to be
    initialized than necessary.
  - Add missing "opt.lg_tcache_max" mallctl implementation.
  - Use glibc allocator hooks to make mixed allocator usage less likely.
  - Fix build issues for --disable-tcache.

* 2.2.5 (November 14, 2011)

  Bug fixes:
  - Fix huge_ralloc() race when using mremap(2).  This is a serious bug that
    could cause memory corruption and/or crashes.
  - Fix huge_ralloc() to maintain chunk statistics.
  - Fix malloc_stats_print(..., "a") output.

* 2.2.4 (November 5, 2011)

  Bug fixes:
  - Initialize arenas_tsd before using it.  This bug existed for 2.2.[0-3], as
    well as for --disable-tls builds in earlier releases.
  - Do not assume a 4 KiB page size in test/rallocm.c.

* 2.2.3 (August 31, 2011)

  This version fixes numerous bugs related to heap profiling.

  Bug fixes:
  - Fix a prof-related race condition.  This bug could cause memory corruption,
    but only occurred in non-default configurations (prof_accum:false).
  - Fix off-by-one backtracing issues (make sure that prof_alloc_prep() is
    excluded from backtraces).
  - Fix a prof-related bug in realloc() (only triggered by OOM errors).
  - Fix prof-related bugs in allocm() and rallocm().
  - Fix prof_tdata_cleanup() for --disable-tls builds.
  - Fix a relative include path, to fix objdir builds.

* 2.2.2 (July 30, 2011)

  Bug fixes:
  - Fix a build error for --disable-tcache.
  - Fix assertions in arena_purge() (for real this time).
  - Add the --with-private-namespace option.  This is a workaround for symbol
    conflicts that can inadvertently arise when using static libraries.

* 2.2.1 (March 30, 2011)

  Bug fixes:
  - Implement atomic operations for x86/x64.  This fixes compilation failures
    for versions of gcc that are still in wide use.
  - Fix an assertion in arena_purge().

* 2.2.0 (March 22, 2011)

  This version incorporates several improvements to algorithms and data
  structures that tend to reduce fragmentation and increase speed.

  New features:
  - Add the "stats.cactive" mallctl.
  - Update pprof (from google-perftools 1.7).
  - Improve backtracing-related configuration logic, and add the
    --disable-prof-libgcc option.

  Bug fixes:
  - Change default symbol visibility from "internal" to "hidden", which
    decreases the overhead of library-internal function calls.
  - Fix symbol visibility so that it is also set on OS X.
  - Fix a build dependency regression caused by the introduction of the .pic.o
    suffix for PIC object files.
  - Add missing checks for mutex initialization failures.
  - Don't use libgcc-based backtracing except on x64, where it is known to work.
  - Fix deadlocks on OS X that were due to memory allocation in
    pthread_mutex_lock().
  - Heap profiling-specific fixes:
    + Fix memory corruption due to integer overflow in small region index
      computation, when using a small enough sample interval that profiling
      context pointers are stored in small run headers.
    + Fix a bootstrap ordering bug that only occurred with TLS disabled.
    + Fix a rallocm() rsize bug.
    + Fix error detection bugs for aligned memory allocation.

* 2.1.3 (March 14, 2011)

  Bug fixes:
  - Fix a cpp logic regression (due to the "thread.{de,}allocatedp" mallctl fix
    for OS X in 2.1.2).
  - Fix a "thread.arena" mallctl bug.
  - Fix a thread cache stats merging bug.

* 2.1.2 (March 2, 2011)

  Bug fixes:
  - Fix "thread.{de,}allocatedp" mallctl for OS X.
  - Add missing jemalloc.a to build system.

* 2.1.1 (January 31, 2011)

  Bug fixes:
  - Fix aligned huge reallocation (affected allocm()).
  - Fix the ALLOCM_LG_ALIGN macro definition.
  - Fix a heap dumping deadlock.
  - Fix a "thread.arena" mallctl bug.

* 2.1.0 (December 3, 2010)

  This version incorporates some optimizations that can't quite be considered
  bug fixes.

  New features:
  - Use Linux's mremap(2) for huge object reallocation when possible.
  - Avoid locking in mallctl*() when possible.
  - Add the "thread.[de]allocatedp" mallctls.
  - Convert the manual page source from roff to DocBook, and generate both roff
    and HTML manuals.

  Bug fixes:
  - Fix a crash due to incorrect bootstrap ordering.  This only impacted
    --enable-debug --enable-dss configurations.
  - Fix a minor statistics bug for mallctl("swap.avail", ...).

* 2.0.1 (October 29, 2010)

  Bug fixes:
  - Fix a race condition in heap profiling that could cause undefined behavior
    if "opt.prof_accum" were disabled.
  - Add missing mutex unlocks for some OOM error paths in the heap profiling
    code.
  - Fix a compilation error for non-C99 builds.

* 2.0.0 (October 24, 2010)

  This version focuses on the experimental *allocm() API, and on improved
  run-time configuration/introspection.  Nonetheless, numerous performance
  improvements are also included.

  New features:
  - Implement the experimental {,r,s,d}allocm() API, which provides a superset
    of the functionality available via malloc(), calloc(), posix_memalign(),
    realloc(), malloc_usable_size(), and free().  These functions can be used to
    allocate/reallocate aligned zeroed memory, ask for optional extra memory
    during reallocation, prevent object movement during reallocation, etc.
  - Replace JEMALLOC_OPTIONS/JEMALLOC_PROF_PREFIX with MALLOC_CONF, which is
    more human-readable, and more flexible.  For example:
      JEMALLOC_OPTIONS=AJP
    is now:
      MALLOC_CONF=abort:true,fill:true,stats_print:true
  - Port to Apple OS X.  Sponsored by Mozilla.
  - Make it possible for the application to control thread-->arena mappings via
    the "thread.arena" mallctl.
  - Add compile-time support for all TLS-related functionality via pthreads TSD.
    This is mainly of interest for OS X, which does not support TLS, but has a
    TSD implementation with similar performance.
  - Override memalign() and valloc() if they are provided by the system.
  - Add the "arenas.purge" mallctl, which can be used to synchronously purge all
    dirty unused pages.
  - Make cumulative heap profiling data optional, so that it is possible to
    limit the amount of memory consumed by heap profiling data structures.
  - Add per thread allocation counters that can be accessed via the
    "thread.allocated" and "thread.deallocated" mallctls.
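[A hedged sketch of the experimental {,r,s,d}allocm() API described in the
list above; illustrative only, not part of the imported sources.  Flag macros
are the ALLOCM_* names from the experimental API in <malloc_np.h>; note that
ALLOCM_LG_ALIGN takes a base-2 log, so 6 means 64-byte alignment.]

    #include <malloc_np.h>

    int
    example(void)
    {
            void *p;
            size_t rsize;

            /* 4 KiB, zero-filled, aligned to 1 << 6 == 64 bytes. */
            if (allocm(&p, &rsize, 4096, ALLOCM_ZERO | ALLOCM_LG_ALIGN(6)) !=
                ALLOCM_SUCCESS)
                    return (1);
            /* Try to grow to 8 KiB in place, without moving the object. */
            if (rallocm(&p, &rsize, 8192, 0, ALLOCM_NO_MOVE) == ALLOCM_SUCCESS) {
                    /* rsize now reflects the grown object's usable size. */
            }
            dallocm(p, 0);
            return (0);
    }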
  Incompatible changes:
  - Remove JEMALLOC_OPTIONS and malloc_options (see MALLOC_CONF above).
  - Increase default backtrace depth from 4 to 128 for heap profiling.
  - Disable interval-based profile dumps by default.

  Bug fixes:
  - Remove bad assertions in fork handler functions.  These assertions could
    cause aborts for some combinations of configure settings.
  - Fix strerror_r() usage to deal with non-standard semantics in GNU libc.
  - Fix leak context reporting.  This bug tended to cause the number of contexts
    to be underreported (though the reported number of objects and bytes were
    correct).
  - Fix a realloc() bug for large in-place growing reallocation.  This bug could
    cause memory corruption, but it was hard to trigger.
  - Fix an allocation bug for small allocations that could be triggered if
    multiple threads raced to create a new run of backing pages.
  - Enhance the heap profiler to trigger samples based on usable size, rather
    than request size.
  - Fix a heap profiling bug due to sometimes losing track of requested object
    size for sampled objects.

* 1.0.3 (August 12, 2010)

  Bug fixes:
  - Fix the libunwind-based implementation of stack backtracing (used for heap
    profiling).  This bug could cause zero-length backtraces to be reported.
  - Add a missing mutex unlock in library initialization code.  If multiple
    threads raced to initialize malloc, some of them could end up permanently
    blocked.

* 1.0.2 (May 11, 2010)

  Bug fixes:
  - Fix junk filling of large objects, which could cause memory corruption.
  - Add MAP_NORESERVE support for chunk mapping, because otherwise virtual
    memory limits could cause swap file configuration to fail.  Contributed by
    Jordan DeLong.

* 1.0.1 (April 14, 2010)

  Bug fixes:
  - Fix compilation when --enable-fill is specified.
  - Fix threads-related profiling bugs that affected accuracy and caused memory
    to be leaked during thread exit.
  - Fix dirty page purging race conditions that could cause crashes.
  - Fix crash in tcache flushing code during thread destruction.

* 1.0.0 (April 11, 2010)

  This release focuses on speed and run-time introspection.  Numerous
  algorithmic improvements make this release substantially faster than its
  predecessors.

  New features:
  - Implement autoconf-based configuration system.
  - Add mallctl*(), for the purposes of introspection and run-time
    configuration.
  - Make it possible for the application to manually flush a thread's cache, via
    the "tcache.flush" mallctl.
  - Base maximum dirty page count on proportion of active memory.
  - Compute various additional run-time statistics, including per size class
    statistics for large objects.
  - Expose malloc_stats_print(), which can be called repeatedly by the
    application.
  - Simplify the malloc_message() signature to only take one string argument,
    and incorporate an opaque data pointer argument for use by the application
    in combination with malloc_stats_print().
  - Add support for allocation backed by one or more swap files, and allow the
    application to disable over-commit if swap files are in use.
  - Implement allocation profiling and leak checking.
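[A short sketch of the mallctl*() introspection interface added in the list
above; illustrative only, not part of the imported sources.  "version" is a
read-only node, and "tcache.flush" is the flush control mentioned above (it is
renamed "thread.tcache.flush" as of 3.0.0).]

    #include <stdio.h>
    #include <malloc_np.h>

    int
    main(void)
    {
            const char *v;
            size_t len = sizeof(v);

            /* Read-only introspection: fetch the allocator version string. */
            if (mallctl("version", &v, &len, NULL, 0) == 0)
                    printf("jemalloc %s\n", v);
            /* Write-only control: flush this thread's cache. */
            mallctl("tcache.flush", NULL, NULL, NULL, 0);
            return (0);
    }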
  Removed features:
  - Remove the dynamic arena rebalancing code, since thread-specific caching
    reduces its utility.

  Bug fixes:
  - Modify chunk allocation to work when address space layout randomization
    (ASLR) is in use.
  - Fix thread cleanup bugs related to TLS destruction.
  - Handle 0-size allocation requests in posix_memalign().
  - Fix a chunk leak.  The leaked chunks were never touched, so this impacted
    virtual memory usage, but not physical memory usage.

* linux_2008082[78]a (August 27/28, 2008)

  These snapshot releases are the simple result of incorporating Linux-specific
  support into the FreeBSD malloc sources.

--------------------------------------------------------------------------------
vim:filetype=text:textwidth=80
23  contrib/jemalloc/FREEBSD-Xlist  Normal file
@@ -0,0 +1,23 @@
$FreeBSD$
.git
.gitignore
FREEBSD-*
INSTALL
Makefile*
README
autogen.sh
autom4te.cache/
bin/
config.*
configure*
doc/*.in
doc/*.xml
doc/*.xsl
doc/*.html
include/jemalloc/internal/jemalloc_internal.h.in
include/jemalloc/internal/size_classes.sh
include/jemalloc/jemalloc.h.in
include/jemalloc/jemalloc_defs.h.in
install-sh
src/zone.c
test/
247  contrib/jemalloc/FREEBSD-diffs  Normal file
@@ -0,0 +1,247 @@
diff --git a/doc/jemalloc.xml.in b/doc/jemalloc.xml.in
index 98d0ba4..23d2152 100644
--- a/doc/jemalloc.xml.in
+++ b/doc/jemalloc.xml.in
@@ -51,12 +51,23 @@
   <para>This manual describes jemalloc @jemalloc_version@.  More information
   can be found at the <ulink
   url="http://www.canonware.com/jemalloc/">jemalloc website</ulink>.</para>
+
+  <para>The following configuration options are enabled in libc's built-in
+  jemalloc: <option>--enable-dss</option>,
+  <option>--enable-experimental</option>, <option>--enable-fill</option>,
+  <option>--enable-lazy-lock</option>, <option>--enable-munmap</option>,
+  <option>--enable-stats</option>, <option>--enable-tcache</option>,
+  <option>--enable-tls</option>, <option>--enable-utrace</option>, and
+  <option>--enable-xmalloc</option>.  Additionally,
+  <option>--enable-debug</option> is enabled in development versions of
+  FreeBSD (controlled by the <constant>MALLOC_PRODUCTION</constant> make
+  variable).</para>
 </refsect1>
 <refsynopsisdiv>
   <title>SYNOPSIS</title>
   <funcsynopsis>
     <funcsynopsisinfo>#include <<filename class="headerfile">stdlib.h</filename>>
-#include <<filename class="headerfile">jemalloc/jemalloc.h</filename>></funcsynopsisinfo>
+#include <<filename class="headerfile">malloc_np.h</filename>></funcsynopsisinfo>
   <refsect2>
     <title>Standard API</title>
     <funcprototype>
@@ -2080,4 +2091,16 @@ malloc_conf = "lg_chunk:24";]]></programlisting></para>
   <para>The <function>posix_memalign<parameter/></function> function conforms
   to IEEE Std 1003.1-2001 (“POSIX.1”).</para>
 </refsect1>
+  <refsect1 id="history">
+    <title>HISTORY</title>
+    <para>The <function>malloc_usable_size<parameter/></function> and
+    <function>posix_memalign<parameter/></function> functions first appeared in
+    FreeBSD 7.0.</para>
+
+    <para>The <function>aligned_alloc<parameter/></function>,
+    <function>malloc_stats_print<parameter/></function>,
+    <function>mallctl*<parameter/></function>, and
+    <function>*allocm<parameter/></function> functions first appeared in
+    FreeBSD 10.0.</para>
+  </refsect1>
 </refentry>
diff --git a/include/jemalloc/internal/jemalloc_internal.h.in b/include/jemalloc/internal/jemalloc_internal.h.in
index aa21aa5..e0f5fed 100644
--- a/include/jemalloc/internal/jemalloc_internal.h.in
+++ b/include/jemalloc/internal/jemalloc_internal.h.in
@@ -1,3 +1,6 @@
+#include "libc_private.h"
+#include "namespace.h"
+
 #include <sys/mman.h>
 #include <sys/param.h>
 #include <sys/syscall.h>
@@ -33,6 +36,9 @@
 #include <pthread.h>
 #include <math.h>

+#include "un-namespace.h"
+#include "libc_private.h"
+
 #define JEMALLOC_NO_DEMANGLE
 #include "../jemalloc@install_suffix@.h"

diff --git a/include/jemalloc/internal/mutex.h b/include/jemalloc/internal/mutex.h
index c46feee..d7133f4 100644
--- a/include/jemalloc/internal/mutex.h
+++ b/include/jemalloc/internal/mutex.h
@@ -39,8 +39,6 @@ struct malloc_mutex_s {

 #ifdef JEMALLOC_LAZY_LOCK
 extern bool isthreaded;
-#else
-# define isthreaded true
 #endif

 bool	malloc_mutex_init(malloc_mutex_t *mutex);
diff --git a/include/jemalloc/jemalloc.h.in b/include/jemalloc/jemalloc.h.in
index f0581db..f26d8bc 100644
--- a/include/jemalloc/jemalloc.h.in
+++ b/include/jemalloc/jemalloc.h.in
@@ -15,6 +15,7 @@ extern "C" {
 #define JEMALLOC_VERSION_GID "@jemalloc_version_gid@"

 #include "jemalloc_defs@install_suffix@.h"
+#include "jemalloc_FreeBSD.h"

 #ifdef JEMALLOC_EXPERIMENTAL
 #define ALLOCM_LG_ALIGN(la)	(la)
diff --git a/include/jemalloc/jemalloc_FreeBSD.h b/include/jemalloc/jemalloc_FreeBSD.h
new file mode 100644
index 0000000..2c5797f
--- /dev/null
+++ b/include/jemalloc/jemalloc_FreeBSD.h
@@ -0,0 +1,76 @@
+/*
+ * Override settings that were generated in jemalloc_defs.h as necessary.
+ */
+
+#undef JEMALLOC_OVERRIDE_VALLOC
+
+#ifndef MALLOC_PRODUCTION
+#define JEMALLOC_DEBUG
+#endif
+
+/*
+ * The following are architecture-dependent, so conditionally define them for
+ * each supported architecture.
+ */
+#undef CPU_SPINWAIT
+#undef JEMALLOC_TLS_MODEL
+#undef STATIC_PAGE_SHIFT
+#undef LG_SIZEOF_PTR
+#undef LG_SIZEOF_INT
+#undef LG_SIZEOF_LONG
+#undef LG_SIZEOF_INTMAX_T
+
+#ifdef __i386__
+# define LG_SIZEOF_PTR 2
+# define CPU_SPINWAIT __asm__ volatile("pause")
+# define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec")))
+#endif
+#ifdef __ia64__
+# define LG_SIZEOF_PTR 3
+#endif
+#ifdef __sparc64__
+# define LG_SIZEOF_PTR 3
+# define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec")))
+#endif
+#ifdef __amd64__
+# define LG_SIZEOF_PTR 3
+# define CPU_SPINWAIT __asm__ volatile("pause")
+# define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec")))
+#endif
+#ifdef __arm__
+# define LG_SIZEOF_PTR 2
+#endif
+#ifdef __mips__
+# define LG_SIZEOF_PTR 2
+#endif
+#ifdef __powerpc64__
+# define LG_SIZEOF_PTR 3
+#elif defined(__powerpc__)
+# define LG_SIZEOF_PTR 2
+#endif
+
+#ifndef JEMALLOC_TLS_MODEL
+# define JEMALLOC_TLS_MODEL /* Default. */
+#endif
+#ifdef __clang__
+# undef JEMALLOC_TLS_MODEL
+# define JEMALLOC_TLS_MODEL /* clang does not support tls_model yet. */
+#endif
+
+#define STATIC_PAGE_SHIFT PAGE_SHIFT
+#define LG_SIZEOF_INT 2
+#define LG_SIZEOF_LONG LG_SIZEOF_PTR
+#define LG_SIZEOF_INTMAX_T 3
+
+/* Disable lazy-lock machinery, mangle isthreaded, and adjust its type. */
+#undef JEMALLOC_LAZY_LOCK
+extern int __isthreaded;
+#define isthreaded ((bool)__isthreaded)
+
+/* Mangle. */
+#define open _open
+#define read _read
+#define write _write
+#define close _close
+#define pthread_mutex_lock _pthread_mutex_lock
+#define pthread_mutex_unlock _pthread_mutex_unlock
diff --git a/src/jemalloc.c b/src/jemalloc.c
index 0decd8a..73fad29 100644
--- a/src/jemalloc.c
+++ b/src/jemalloc.c
@@ -8,6 +8,9 @@ malloc_tsd_data(, arenas, arena_t *, NULL)
 malloc_tsd_data(, thread_allocated, thread_allocated_t,
     THREAD_ALLOCATED_INITIALIZER)

+const char *__malloc_options_1_0;
+__sym_compat(_malloc_options, __malloc_options_1_0, FBSD_1.0);
+
 /* Runtime configuration options. */
 const char *je_malloc_conf JEMALLOC_ATTR(visibility("default"));
 #ifdef JEMALLOC_DEBUG
@@ -401,7 +404,8 @@ malloc_conf_init(void)
 #endif
			    ;

-			if ((opts = getenv(envname)) != NULL) {
+			if (issetugid() == 0 && (opts = getenv(envname)) !=
+			    NULL) {
				/*
				 * Do nothing; opts is already initialized to
				 * the value of the MALLOC_CONF environment
diff --git a/src/mutex.c b/src/mutex.c
index 4b8ce57..7be5fc9 100644
--- a/src/mutex.c
+++ b/src/mutex.c
@@ -63,6 +63,17 @@ pthread_create(pthread_t *__restrict thread,
 #ifdef JEMALLOC_MUTEX_INIT_CB
 int	_pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex,
     void *(calloc_cb)(size_t, size_t));
+
+__weak_reference(_pthread_mutex_init_calloc_cb_stub,
+    _pthread_mutex_init_calloc_cb);
+
+int
+_pthread_mutex_init_calloc_cb_stub(pthread_mutex_t *mutex,
+    void *(calloc_cb)(size_t, size_t))
+{
+
+	return (0);
+}
 #endif

 bool
diff --git a/src/util.c b/src/util.c
index 2aab61f..8b05042 100644
--- a/src/util.c
+++ b/src/util.c
@@ -60,6 +60,22 @@ wrtmessage(void *cbopaque, const char *s)
 void	(*je_malloc_message)(void *, const char *s)
     JEMALLOC_ATTR(visibility("default")) = wrtmessage;

+JEMALLOC_CATTR(visibility("hidden"), static)
+void
+wrtmessage_1_0(const char *s1, const char *s2, const char *s3,
+    const char *s4)
+{
+
+	wrtmessage(NULL, s1);
+	wrtmessage(NULL, s2);
+	wrtmessage(NULL, s3);
+	wrtmessage(NULL, s4);
+}
+
+void	(*__malloc_message_1_0)(const char *s1, const char *s2, const char *s3,
+    const char *s4) = wrtmessage_1_0;
+__sym_compat(_malloc_message, __malloc_message_1_0, FBSD_1.0);
+
 /*
  * glibc provides a non-standard strerror_r() when _GNU_SOURCE is defined, so
  * provide a wrapper.
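[A hedged illustration of the LG_* constants defined in jemalloc_FreeBSD.h
above; this sketch is not part of the imported sources.  Each LG_SIZEOF_*
value is the base-2 logarithm of a type size, so each must satisfy
sizeof(type) == 1 << LG_SIZEOF_type.]

    #include <assert.h>
    #include <stdint.h>

    int
    main(void)
    {

            assert(sizeof(int) == (1U << 2));       /* LG_SIZEOF_INT == 2 */
            assert(sizeof(intmax_t) == (1U << 3));  /* LG_SIZEOF_INTMAX_T == 3 */
    #ifdef __amd64__
            assert(sizeof(void *) == (1U << 3));    /* LG_SIZEOF_PTR == 3 */
    #endif
            return (0);
    }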
122  contrib/jemalloc/FREEBSD-upgrade  Executable file
@@ -0,0 +1,122 @@
#!/bin/sh
# $FreeBSD$
#
# Usage: cd /usr/src/contrib/jemalloc
#        ./FREEBSD-upgrade <command> [args]
#
# At least the following ports are required when importing jemalloc:
# - devel/autoconf
# - devel/git
# - devel/gmake
# - textproc/docbook-xsl
#
# The normal workflow for importing a new release is:
#
#   cd /usr/src/contrib/jemalloc
#
# Merge local changes that were made since the previous import:
#
#   ./FREEBSD-upgrade merge-changes
#   ./FREEBSD-upgrade rediff
#
# Extract latest jemalloc release.
#
#   ./FREEBSD-upgrade extract
#
# Fix patch conflicts as necessary, then regenerate diffs to update line
# offsets:
#
#   ./FREEBSD-upgrade rediff
#   ./FREEBSD-upgrade extract
#
# Do multiple buildworld/installworld rounds.  If problems arise and patches
# are needed, edit the code in ${work} as necessary, then:
#
#   ./FREEBSD-upgrade rediff
#   ./FREEBSD-upgrade extract
#
# The rediff/extract order is important because rediff saves the local
# changes, then extract blows away the work tree and re-creates it with the
# diffs applied.
#
# Finally, to clean up:
#
#   ./FREEBSD-upgrade clean

set -e

if [ ! -x "FREEBSD-upgrade" ] ; then
	echo "Run from within src/contrib/jemalloc/" >&2
	exit 1
fi

src=`pwd`
workname="jemalloc.git"
work="${src}/../${workname}" # merge-changes expects ${workname} in "..".
changes="${src}/FREEBSD-changes"

do_extract() {
	local rev=$1
	# Clone.
	rm -rf ${work}
	git clone git://canonware.com/jemalloc.git ${work}
	(
		cd ${work}
		if [ "x${rev}" != "x" ] ; then
			# Use optional rev argument to check out a revision other than HEAD on
			# master.
			git checkout ${rev}
		fi
		# Apply diffs before generating files.
		patch -p1 < "${src}/FREEBSD-diffs"
		find . -name '*.orig' -delete
		# Generate various files.
		./autogen.sh --enable-cc-silence --enable-dss --enable-xmalloc \
		    --enable-utrace --with-xslroot=/usr/local/share/xsl/docbook
		gmake dist
	)
}

do_diff() {
	(cd ${work}; git add -A; git diff --cached) > FREEBSD-diffs
}

command=$1
shift
case "${command}" in
merge-changes) # Merge local changes that were made since the previous import.
	rev=`cat VERSION |tr 'g' ' ' |awk '{print $2}'`
	# Extract code corresponding to most recent import.
	do_extract ${rev}
	# Compute local differences to the upstream+patches and apply them.
	(
		cd ..
		diff -ru -X ${src}/FREEBSD-Xlist ${workname} jemalloc > ${changes} || true
	)
	(
		cd ${work}
		patch -p1 < ${changes}
		find . -name '*.orig' -delete
	)
	# Update diff.
	do_diff
	;;
extract) # Extract upstream sources, apply patches, copy to contrib/jemalloc.
	rev=$1
	do_extract ${rev}
	# Delete existing files so that cruft doesn't silently remain.
	rm -rf ChangeLog COPYING VERSION doc include src
	# Copy files over.
	tar cf - -C ${work} -X FREEBSD-Xlist . |tar xvf -
	;;
rediff) # Regenerate diffs based on working tree.
	do_diff
	;;
clean) # Remove working tree and temporary files.
	rm -rf ${work} ${changes}
	;;
*)
	echo "Unsupported command: \"${command}\"" >&2
	exit 1
	;;
esac
1  contrib/jemalloc/VERSION  Normal file
@@ -0,0 +1 @@
1.0.0-258-g9ef7f5dc34ff02f50d401e41c8d9a4a928e7c2aa
1464  contrib/jemalloc/doc/jemalloc.3  Normal file
(File diff suppressed because it is too large.)
685  contrib/jemalloc/include/jemalloc/internal/arena.h  Normal file
@@ -0,0 +1,685 @@
|
||||
/******************************************************************************/
|
||||
#ifdef JEMALLOC_H_TYPES
|
||||
|
||||
/*
|
||||
* RUN_MAX_OVRHD indicates maximum desired run header overhead. Runs are sized
|
||||
* as small as possible such that this setting is still honored, without
|
||||
* violating other constraints. The goal is to make runs as small as possible
|
||||
* without exceeding a per run external fragmentation threshold.
|
||||
*
|
||||
* We use binary fixed point math for overhead computations, where the binary
|
||||
* point is implicitly RUN_BFP bits to the left.
|
||||
*
|
||||
* Note that it is possible to set RUN_MAX_OVRHD low enough that it cannot be
|
||||
* honored for some/all object sizes, since when heap profiling is enabled
|
||||
* there is one pointer of header overhead per object (plus a constant). This
|
||||
* constraint is relaxed (ignored) for runs that are so small that the
|
||||
* per-region overhead is greater than:
|
||||
*
|
||||
* (RUN_MAX_OVRHD / (reg_interval << (3+RUN_BFP))
|
||||
*/
|
||||
#define RUN_BFP 12
|
||||
/* \/ Implicit binary fixed point. */
|
||||
#define RUN_MAX_OVRHD 0x0000003dU
|
||||
#define RUN_MAX_OVRHD_RELAX 0x00001800U
|
||||
|
||||
/* Maximum number of regions in one run. */
|
||||
#define LG_RUN_MAXREGS 11
|
||||
#define RUN_MAXREGS (1U << LG_RUN_MAXREGS)
|
||||
|
||||
/*
|
||||
* Minimum redzone size. Redzones may be larger than this if necessary to
|
||||
* preserve region alignment.
|
||||
*/
|
||||
#define REDZONE_MINSIZE 16
|
||||
|
||||
/*
|
||||
* The minimum ratio of active:dirty pages per arena is computed as:
|
||||
*
|
||||
* (nactive >> opt_lg_dirty_mult) >= ndirty
|
||||
*
|
||||
* So, supposing that opt_lg_dirty_mult is 5, there can be no less than 32
|
||||
* times as many active pages as dirty pages.
|
||||
*/
|
||||
#define LG_DIRTY_MULT_DEFAULT 5
|
||||
|
||||
typedef struct arena_chunk_map_s arena_chunk_map_t;
|
||||
typedef struct arena_chunk_s arena_chunk_t;
|
||||
typedef struct arena_run_s arena_run_t;
|
||||
typedef struct arena_bin_info_s arena_bin_info_t;
|
||||
typedef struct arena_bin_s arena_bin_t;
|
||||
typedef struct arena_s arena_t;
|
||||
|
||||
#endif /* JEMALLOC_H_TYPES */
|
||||
/******************************************************************************/
|
||||
#ifdef JEMALLOC_H_STRUCTS
|
||||
|
||||
/* Each element of the chunk map corresponds to one page within the chunk. */
|
||||
struct arena_chunk_map_s {
|
||||
#ifndef JEMALLOC_PROF
|
||||
/*
|
||||
* Overlay prof_ctx in order to allow it to be referenced by dead code.
|
||||
* Such antics aren't warranted for per arena data structures, but
|
||||
* chunk map overhead accounts for a percentage of memory, rather than
|
||||
* being just a fixed cost.
|
||||
*/
|
||||
union {
|
||||
#endif
|
||||
union {
|
||||
/*
|
||||
* Linkage for run trees. There are two disjoint uses:
|
||||
*
|
||||
* 1) arena_t's runs_avail_{clean,dirty} trees.
|
||||
* 2) arena_run_t conceptually uses this linkage for in-use
|
||||
* non-full runs, rather than directly embedding linkage.
|
||||
*/
|
||||
rb_node(arena_chunk_map_t) rb_link;
|
||||
/*
|
||||
* List of runs currently in purgatory. arena_chunk_purge()
|
||||
* temporarily allocates runs that contain dirty pages while
|
||||
* purging, so that other threads cannot use the runs while the
|
||||
* purging thread is operating without the arena lock held.
|
||||
*/
|
||||
ql_elm(arena_chunk_map_t) ql_link;
|
||||
} u;
|
||||
|
||||
/* Profile counters, used for large object runs. */
|
||||
prof_ctx_t *prof_ctx;
|
||||
#ifndef JEMALLOC_PROF
|
||||
}; /* union { ... }; */
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Run address (or size) and various flags are stored together. The bit
|
||||
* layout looks like (assuming 32-bit system):
|
||||
*
|
||||
* ???????? ???????? ????---- ----dula
|
||||
*
|
||||
* ? : Unallocated: Run address for first/last pages, unset for internal
|
||||
* pages.
|
||||
* Small: Run page offset.
|
||||
* Large: Run size for first page, unset for trailing pages.
|
||||
* - : Unused.
|
||||
* d : dirty?
|
||||
* u : unzeroed?
|
||||
* l : large?
|
||||
* a : allocated?
|
||||
*
|
||||
* Following are example bit patterns for the three types of runs.
|
||||
*
|
||||
* p : run page offset
|
||||
* s : run size
|
||||
* c : (binind+1) for size class (used only if prof_promote is true)
|
||||
* x : don't care
|
||||
* - : 0
|
||||
* + : 1
|
||||
* [DULA] : bit set
|
||||
* [dula] : bit unset
|
||||
*
|
||||
* Unallocated (clean):
|
||||
* ssssssss ssssssss ssss---- ----du-a
|
||||
* xxxxxxxx xxxxxxxx xxxx---- -----Uxx
|
||||
* ssssssss ssssssss ssss---- ----dU-a
|
||||
*
|
||||
* Unallocated (dirty):
|
||||
* ssssssss ssssssss ssss---- ----D--a
|
||||
* xxxxxxxx xxxxxxxx xxxx---- ----xxxx
|
||||
* ssssssss ssssssss ssss---- ----D--a
|
||||
*
|
||||
* Small:
|
||||
* pppppppp pppppppp pppp---- ----d--A
|
||||
* pppppppp pppppppp pppp---- -------A
|
||||
* pppppppp pppppppp pppp---- ----d--A
|
||||
*
|
||||
* Large:
|
||||
* ssssssss ssssssss ssss---- ----D-LA
|
||||
* xxxxxxxx xxxxxxxx xxxx---- ----xxxx
|
||||
* -------- -------- -------- ----D-LA
|
||||
*
|
||||
* Large (sampled, size <= PAGE):
|
||||
* ssssssss ssssssss sssscccc ccccD-LA
|
||||
*
|
||||
* Large (not sampled, size == PAGE):
|
||||
* ssssssss ssssssss ssss---- ----D-LA
|
||||
*/
|
||||
size_t bits;
|
||||
#define CHUNK_MAP_CLASS_SHIFT 4
|
||||
#define CHUNK_MAP_CLASS_MASK ((size_t)0xff0U)
|
||||
#define CHUNK_MAP_FLAGS_MASK ((size_t)0xfU)
|
||||
#define CHUNK_MAP_DIRTY ((size_t)0x8U)
|
||||
#define CHUNK_MAP_UNZEROED ((size_t)0x4U)
|
||||
#define CHUNK_MAP_LARGE ((size_t)0x2U)
|
||||
#define CHUNK_MAP_ALLOCATED ((size_t)0x1U)
|
||||
#define CHUNK_MAP_KEY CHUNK_MAP_ALLOCATED
|
||||
};
|
||||
typedef rb_tree(arena_chunk_map_t) arena_avail_tree_t;
|
||||
typedef rb_tree(arena_chunk_map_t) arena_run_tree_t;
|
||||
|
||||
/* Arena chunk header. */
|
||||
struct arena_chunk_s {
|
||||
/* Arena that owns the chunk. */
|
||||
arena_t *arena;
|
||||
|
||||
/* Linkage for the arena's chunks_dirty list. */
|
||||
ql_elm(arena_chunk_t) link_dirty;
|
||||
|
||||
/*
|
||||
* True if the chunk is currently in the chunks_dirty list, due to
|
||||
* having at some point contained one or more dirty pages. Removal
|
||||
* from chunks_dirty is lazy, so (dirtied && ndirty == 0) is possible.
|
||||
*/
|
||||
bool dirtied;
|
||||
|
||||
/* Number of dirty pages. */
|
||||
size_t ndirty;
|
||||
|
||||
/*
|
||||
* Map of pages within chunk that keeps track of free/large/small. The
|
||||
* first map_bias entries are omitted, since the chunk header does not
|
||||
* need to be tracked in the map. This omission saves a header page
|
||||
* for common chunk sizes (e.g. 4 MiB).
|
||||
*/
|
||||
arena_chunk_map_t map[1]; /* Dynamically sized. */
|
||||
};
|
||||
typedef rb_tree(arena_chunk_t) arena_chunk_tree_t;
|
||||
|
||||
struct arena_run_s {
|
||||
/* Bin this run is associated with. */
|
||||
arena_bin_t *bin;
|
||||
|
||||
/* Index of next region that has never been allocated, or nregs. */
|
||||
uint32_t nextind;
|
||||
|
||||
/* Number of free regions in run. */
|
||||
unsigned nfree;
|
||||
};
|
||||
|
||||
/*
|
||||
* Read-only information associated with each element of arena_t's bins array
|
||||
* is stored separately, partly to reduce memory usage (only one copy, rather
|
||||
* than one per arena), but mainly to avoid false cacheline sharing.
|
||||
*
|
||||
* Each run has the following layout:
|
||||
*
|
||||
* /--------------------\
|
||||
* | arena_run_t header |
|
||||
* | ... |
|
||||
* bitmap_offset | bitmap |
|
||||
* | ... |
|
||||
* ctx0_offset | ctx map |
|
||||
* | ... |
|
||||
* |--------------------|
|
||||
* | redzone |
|
||||
* reg0_offset | region 0 |
|
||||
* | redzone |
|
||||
* |--------------------| \
|
||||
* | redzone | |
|
||||
* | region 1 | > reg_interval
|
||||
* | redzone | /
|
||||
* |--------------------|
|
||||
* | ... |
|
||||
* | ... |
|
||||
* | ... |
|
||||
* |--------------------|
|
||||
* | redzone |
|
||||
* | region nregs-1 |
|
||||
* | redzone |
|
||||
* |--------------------|
|
||||
* | alignment pad? |
|
||||
* \--------------------/
|
||||
*
|
||||
* reg_interval has at least the same minimum alignment as reg_size; this
|
||||
* preserves the alignment constraint that sa2u() depends on. Alignment pad is
|
||||
* either 0 or redzone_size; it is present only if needed to align reg0_offset.
|
||||
*/
|
||||
struct arena_bin_info_s {
|
||||
/* Size of regions in a run for this bin's size class. */
|
||||
size_t reg_size;
|
||||
|
||||
/* Redzone size. */
|
||||
size_t redzone_size;
|
||||
|
||||
/* Interval between regions (reg_size + (redzone_size << 1)). */
|
||||
size_t reg_interval;
|
||||
|
||||
/* Total size of a run for this bin's size class. */
|
||||
size_t run_size;
|
||||
|
||||
/* Total number of regions in a run for this bin's size class. */
|
||||
uint32_t nregs;
|
||||
|
||||
/*
|
||||
* Offset of first bitmap_t element in a run header for this bin's size
|
||||
* class.
|
||||
*/
|
||||
uint32_t bitmap_offset;
|
||||
|
||||
/*
|
||||
* Metadata used to manipulate bitmaps for runs associated with this
|
||||
* bin.
|
||||
*/
|
||||
bitmap_info_t bitmap_info;
|
||||
|
||||
/*
|
||||
* Offset of first (prof_ctx_t *) in a run header for this bin's size
|
||||
* class, or 0 if (config_prof == false || opt_prof == false).
|
||||
*/
|
||||
uint32_t ctx0_offset;
|
||||
|
||||
/* Offset of first region in a run for this bin's size class. */
|
||||
uint32_t reg0_offset;
|
||||
};
|
||||
|
||||
struct arena_bin_s {
|
||||
/*
|
||||
* All operations on runcur, runs, and stats require that lock be
|
||||
* locked. Run allocation/deallocation are protected by the arena lock,
|
||||
* which may be acquired while holding one or more bin locks, but not
|
||||
* vise versa.
|
||||
*/
|
||||
malloc_mutex_t lock;
|
||||
|
||||
/*
|
||||
* Current run being used to service allocations of this bin's size
|
||||
* class.
|
||||
*/
|
||||
arena_run_t *runcur;
|
||||
|
||||
/*
|
||||
* Tree of non-full runs. This tree is used when looking for an
|
||||
* existing run when runcur is no longer usable. We choose the
|
||||
* non-full run that is lowest in memory; this policy tends to keep
|
||||
* objects packed well, and it can also help reduce the number of
|
||||
* almost-empty chunks.
|
||||
*/
|
||||
arena_run_tree_t runs;
|
||||
|
||||
/* Bin statistics. */
|
||||
malloc_bin_stats_t stats;
|
||||
};
|
||||
|
||||
struct arena_s {
|
||||
/* This arena's index within the arenas array. */
|
||||
unsigned ind;
|
||||
|
||||
/*
|
||||
* Number of threads currently assigned to this arena. This field is
|
||||
* protected by arenas_lock.
|
||||
*/
|
||||
unsigned nthreads;
|
||||
|
||||
/*
|
||||
* There are three classes of arena operations from a locking
|
||||
* perspective:
|
||||
* 1) Thread asssignment (modifies nthreads) is protected by
|
||||
* arenas_lock.
|
||||
* 2) Bin-related operations are protected by bin locks.
|
||||
* 3) Chunk- and run-related operations are protected by this mutex.
|
||||
*/
|
||||
malloc_mutex_t lock;
|
||||
|
||||
arena_stats_t stats;
|
||||
/*
|
||||
* List of tcaches for extant threads associated with this arena.
|
||||
* Stats from these are merged incrementally, and at exit.
|
||||
*/
|
||||
ql_head(tcache_t) tcache_ql;
|
||||
|
||||
uint64_t prof_accumbytes;
|
||||
|
||||
/* List of dirty-page-containing chunks this arena manages. */
|
||||
ql_head(arena_chunk_t) chunks_dirty;
|
||||
|
||||
/*
|
||||
* In order to avoid rapid chunk allocation/deallocation when an arena
|
||||
* oscillates right on the cusp of needing a new chunk, cache the most
|
||||
* recently freed chunk. The spare is left in the arena's chunk trees
|
||||
* until it is deleted.
|
||||
*
|
||||
* There is one spare chunk per arena, rather than one spare total, in
|
||||
* order to avoid interactions between multiple threads that could make
|
||||
* a single spare inadequate.
|
||||
*/
|
||||
arena_chunk_t *spare;
|
||||
|
||||
/* Number of pages in active runs. */
|
||||
size_t nactive;
|
||||
|
||||
/*
|
||||
* Current count of pages within unused runs that are potentially
|
||||
* dirty, and for which madvise(... MADV_DONTNEED) has not been called.
|
||||
* By tracking this, we can institute a limit on how much dirty unused
|
||||
* memory is mapped for each arena.
|
||||
*/
|
||||
size_t ndirty;
|
||||
|
||||
/*
|
||||
* Approximate number of pages being purged. It is possible for
|
||||
* multiple threads to purge dirty pages concurrently, and they use
|
||||
* npurgatory to indicate the total number of pages all threads are
|
||||
* attempting to purge.
|
||||
*/
|
||||
size_t npurgatory;
|
||||
|
||||
/*
|
||||
* Size/address-ordered trees of this arena's available runs. The trees
|
||||
* are used for first-best-fit run allocation. The dirty tree contains
|
||||
* runs with dirty pages (i.e. very likely to have been touched and
|
||||
* therefore have associated physical pages), whereas the clean tree
|
||||
* contains runs with pages that either have no associated physical
|
||||
* pages, or have pages that the kernel may recycle at any time due to
|
||||
* previous madvise(2) calls. The dirty tree is used in preference to
|
||||
* the clean tree for allocations, because using dirty pages reduces
|
||||
* the amount of dirty purging necessary to keep the active:dirty page
|
||||
* ratio below the purge threshold.
|
||||
*/
|
||||
arena_avail_tree_t runs_avail_clean;
|
||||
arena_avail_tree_t runs_avail_dirty;
|
||||
|
||||
/* bins is used to store trees of free regions. */
|
||||
arena_bin_t bins[NBINS];
|
||||
};
|
||||
|
||||
#endif /* JEMALLOC_H_STRUCTS */
|
||||
/******************************************************************************/
|
||||
#ifdef JEMALLOC_H_EXTERNS
|
||||
|
||||
extern ssize_t opt_lg_dirty_mult;
|
||||
/*
|
||||
* small_size2bin is a compact lookup table that rounds request sizes up to
|
||||
* size classes. In order to reduce cache footprint, the table is compressed,
|
||||
* and all accesses are via the SMALL_SIZE2BIN macro.
|
||||
*/
|
||||
extern uint8_t const small_size2bin[];
|
||||
#define SMALL_SIZE2BIN(s) (small_size2bin[(s-1) >> LG_TINY_MIN])
|
||||
|
||||
extern arena_bin_info_t arena_bin_info[NBINS];
|
||||
|
||||
/* Number of large size classes. */
|
||||
#define nlclasses (chunk_npages - map_bias)
|
||||
|
||||
void arena_purge_all(arena_t *arena);
|
||||
void arena_prof_accum(arena_t *arena, uint64_t accumbytes);
|
||||
void arena_tcache_fill_small(arena_t *arena, tcache_bin_t *tbin,
|
||||
size_t binind, uint64_t prof_accumbytes);
|
||||
void arena_alloc_junk_small(void *ptr, arena_bin_info_t *bin_info,
|
||||
bool zero);
|
||||
void arena_dalloc_junk_small(void *ptr, arena_bin_info_t *bin_info);
|
||||
void *arena_malloc_small(arena_t *arena, size_t size, bool zero);
|
||||
void *arena_malloc_large(arena_t *arena, size_t size, bool zero);
|
||||
void *arena_palloc(arena_t *arena, size_t size, size_t alignment, bool zero);
|
||||
size_t arena_salloc(const void *ptr, bool demote);
|
||||
void arena_prof_promoted(const void *ptr, size_t size);
|
||||
void arena_dalloc_bin(arena_t *arena, arena_chunk_t *chunk, void *ptr,
|
||||
arena_chunk_map_t *mapelm);
|
||||
void arena_dalloc_large(arena_t *arena, arena_chunk_t *chunk, void *ptr);
|
||||
void arena_stats_merge(arena_t *arena, size_t *nactive, size_t *ndirty,
|
||||
arena_stats_t *astats, malloc_bin_stats_t *bstats,
|
||||
malloc_large_stats_t *lstats);
|
||||
void *arena_ralloc_no_move(void *ptr, size_t oldsize, size_t size,
|
||||
size_t extra, bool zero);
|
||||
void *arena_ralloc(void *ptr, size_t oldsize, size_t size, size_t extra,
|
||||
size_t alignment, bool zero, bool try_tcache);
|
||||
bool arena_new(arena_t *arena, unsigned ind);
|
||||
void arena_boot(void);
|
||||
void arena_prefork(arena_t *arena);
|
||||
void arena_postfork_parent(arena_t *arena);
|
||||
void arena_postfork_child(arena_t *arena);
|
||||
|
||||
#endif /* JEMALLOC_H_EXTERNS */
|
||||
/******************************************************************************/
|
||||
#ifdef JEMALLOC_H_INLINES
|
||||
|
||||
#ifndef JEMALLOC_ENABLE_INLINE
|
||||
size_t arena_bin_index(arena_t *arena, arena_bin_t *bin);
|
||||
unsigned arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info,
|
||||
const void *ptr);
|
||||
prof_ctx_t *arena_prof_ctx_get(const void *ptr);
|
||||
void arena_prof_ctx_set(const void *ptr, prof_ctx_t *ctx);
|
||||
void *arena_malloc(arena_t *arena, size_t size, bool zero, bool try_tcache);
|
||||
void arena_dalloc(arena_t *arena, arena_chunk_t *chunk, void *ptr,
|
||||
bool try_tcache);
|
||||
#endif
|
||||
|
||||
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ARENA_C_))
|
||||
JEMALLOC_INLINE size_t
|
||||
arena_bin_index(arena_t *arena, arena_bin_t *bin)
|
||||
{
|
||||
size_t binind = bin - arena->bins;
|
||||
assert(binind < NBINS);
|
||||
return (binind);
|
||||
}
|
||||
|
||||
JEMALLOC_INLINE unsigned
|
||||
arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info, const void *ptr)
|
||||
{
|
||||
unsigned shift, diff, regind;
|
||||
size_t interval;
|
||||
|
||||
/*
|
||||
* Freeing a pointer lower than region zero can cause assertion
|
||||
* failure.
|
||||
*/
|
||||
assert((uintptr_t)ptr >= (uintptr_t)run +
|
||||
(uintptr_t)bin_info->reg0_offset);
|
||||
|
||||
/*
|
||||
* Avoid doing division with a variable divisor if possible. Using
|
||||
* actual division here can reduce allocator throughput by over 20%!
|
||||
*/
|
||||
diff = (unsigned)((uintptr_t)ptr - (uintptr_t)run -
|
||||
bin_info->reg0_offset);
|
||||
|
||||
/* Rescale (factor powers of 2 out of the numerator and denominator). */
|
||||
interval = bin_info->reg_interval;
|
||||
shift = ffs(interval) - 1;
|
||||
diff >>= shift;
|
||||
interval >>= shift;
|
||||
|
||||
if (interval == 1) {
|
||||
/* The divisor was a power of 2. */
|
||||
regind = diff;
|
||||
} else {
|
||||
/*
|
||||
* To divide by a number D that is not a power of two we
|
||||
* multiply by (2^21 / D) and then right shift by 21 positions.
|
||||
*
|
||||
* X / D
|
||||
*
|
||||
* becomes
|
||||
*
|
||||
* (X * interval_invs[D - 3]) >> SIZE_INV_SHIFT
|
||||
*
|
||||
* We can omit the first three elements, because we never
|
||||
* divide by 0, and 1 and 2 are both powers of two, which are
|
||||
* handled above.
|
||||
*/
|
||||
#define SIZE_INV_SHIFT ((sizeof(unsigned) << 3) - LG_RUN_MAXREGS)
|
||||
#define SIZE_INV(s) (((1U << SIZE_INV_SHIFT) / (s)) + 1)
|
||||
static const unsigned interval_invs[] = {
|
||||
SIZE_INV(3),
|
||||
SIZE_INV(4), SIZE_INV(5), SIZE_INV(6), SIZE_INV(7),
|
||||
SIZE_INV(8), SIZE_INV(9), SIZE_INV(10), SIZE_INV(11),
|
||||
SIZE_INV(12), SIZE_INV(13), SIZE_INV(14), SIZE_INV(15),
|
||||
SIZE_INV(16), SIZE_INV(17), SIZE_INV(18), SIZE_INV(19),
|
||||
SIZE_INV(20), SIZE_INV(21), SIZE_INV(22), SIZE_INV(23),
|
||||
SIZE_INV(24), SIZE_INV(25), SIZE_INV(26), SIZE_INV(27),
|
||||
SIZE_INV(28), SIZE_INV(29), SIZE_INV(30), SIZE_INV(31)
|
||||
};
|
||||
|
||||
if (interval <= ((sizeof(interval_invs) / sizeof(unsigned)) +
|
||||
2)) {
|
||||
regind = (diff * interval_invs[interval - 3]) >>
|
||||
SIZE_INV_SHIFT;
|
||||
} else
|
||||
regind = diff / interval;
|
||||
#undef SIZE_INV
|
||||
#undef SIZE_INV_SHIFT
|
||||
}
|
||||
assert(diff == regind * interval);
|
||||
assert(regind < bin_info->nregs);
|
||||
|
||||
return (regind);
|
||||
}
|
||||
|
||||
JEMALLOC_INLINE prof_ctx_t *
|
||||
arena_prof_ctx_get(const void *ptr)
|
||||
{
|
||||
prof_ctx_t *ret;
|
||||
arena_chunk_t *chunk;
|
||||
size_t pageind, mapbits;
|
||||
|
||||
cassert(config_prof);
|
||||
assert(ptr != NULL);
|
||||
assert(CHUNK_ADDR2BASE(ptr) != ptr);
|
||||
|
||||
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
|
||||
pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE;
|
||||
mapbits = chunk->map[pageind-map_bias].bits;
|
||||
assert((mapbits & CHUNK_MAP_ALLOCATED) != 0);
|
||||
if ((mapbits & CHUNK_MAP_LARGE) == 0) {
|
||||
if (prof_promote)
|
||||
ret = (prof_ctx_t *)(uintptr_t)1U;
|
||||
else {
|
||||
arena_run_t *run = (arena_run_t *)((uintptr_t)chunk +
|
||||
(uintptr_t)((pageind - (mapbits >> LG_PAGE)) <<
|
||||
LG_PAGE));
|
||||
size_t binind = arena_bin_index(chunk->arena, run->bin);
|
||||
arena_bin_info_t *bin_info = &arena_bin_info[binind];
|
||||
unsigned regind;
|
||||
|
||||
regind = arena_run_regind(run, bin_info, ptr);
|
||||
ret = *(prof_ctx_t **)((uintptr_t)run +
|
||||
bin_info->ctx0_offset + (regind *
|
||||
sizeof(prof_ctx_t *)));
|
||||
}
|
||||
} else
|
||||
ret = chunk->map[pageind-map_bias].prof_ctx;
|
||||
|
||||
return (ret);
|
||||
}
|
||||
|
||||
JEMALLOC_INLINE void
|
||||
arena_prof_ctx_set(const void *ptr, prof_ctx_t *ctx)
|
||||
{
|
||||
arena_chunk_t *chunk;
|
||||
size_t pageind, mapbits;
|
||||
|
||||
cassert(config_prof);
|
||||
assert(ptr != NULL);
|
||||
assert(CHUNK_ADDR2BASE(ptr) != ptr);
|
||||
|
||||
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
|
||||
pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE;
|
||||
mapbits = chunk->map[pageind-map_bias].bits;
|
||||
assert((mapbits & CHUNK_MAP_ALLOCATED) != 0);
|
||||
if ((mapbits & CHUNK_MAP_LARGE) == 0) {
|
||||
		if (prof_promote == false) {
			arena_run_t *run = (arena_run_t *)((uintptr_t)chunk +
			    (uintptr_t)((pageind - (mapbits >> LG_PAGE)) <<
			    LG_PAGE));
			arena_bin_t *bin = run->bin;
			size_t binind;
			arena_bin_info_t *bin_info;
			unsigned regind;

			binind = arena_bin_index(chunk->arena, bin);
			bin_info = &arena_bin_info[binind];
			regind = arena_run_regind(run, bin_info, ptr);

			*((prof_ctx_t **)((uintptr_t)run + bin_info->ctx0_offset
			    + (regind * sizeof(prof_ctx_t *)))) = ctx;
		} else
			assert((uintptr_t)ctx == (uintptr_t)1U);
	} else
		chunk->map[pageind-map_bias].prof_ctx = ctx;
}

JEMALLOC_INLINE void *
arena_malloc(arena_t *arena, size_t size, bool zero, bool try_tcache)
{
	tcache_t *tcache;

	assert(size != 0);
	assert(size <= arena_maxclass);

	if (size <= SMALL_MAXCLASS) {
		if (try_tcache && (tcache = tcache_get(true)) != NULL)
			return (tcache_alloc_small(tcache, size, zero));
		else {
			return (arena_malloc_small(choose_arena(arena), size,
			    zero));
		}
	} else {
		/*
		 * Initialize tcache after checking size in order to avoid
		 * infinite recursion during tcache initialization.
		 */
		if (try_tcache && size <= tcache_maxclass && (tcache =
		    tcache_get(true)) != NULL)
			return (tcache_alloc_large(tcache, size, zero));
		else {
			return (arena_malloc_large(choose_arena(arena), size,
			    zero));
		}
	}
}

JEMALLOC_INLINE void
arena_dalloc(arena_t *arena, arena_chunk_t *chunk, void *ptr, bool try_tcache)
{
	size_t pageind;
	arena_chunk_map_t *mapelm;
	tcache_t *tcache;

	assert(arena != NULL);
	assert(chunk->arena == arena);
	assert(ptr != NULL);
	assert(CHUNK_ADDR2BASE(ptr) != ptr);

	pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE;
	mapelm = &chunk->map[pageind-map_bias];
	assert((mapelm->bits & CHUNK_MAP_ALLOCATED) != 0);
	if ((mapelm->bits & CHUNK_MAP_LARGE) == 0) {
		/* Small allocation. */
		if (try_tcache && (tcache = tcache_get(false)) != NULL)
			tcache_dalloc_small(tcache, ptr);
		else {
			arena_run_t *run;
			arena_bin_t *bin;

			run = (arena_run_t *)((uintptr_t)chunk +
			    (uintptr_t)((pageind - (mapelm->bits >> LG_PAGE)) <<
			    LG_PAGE));
			bin = run->bin;
			if (config_debug) {
				size_t binind = arena_bin_index(arena, bin);
				UNUSED arena_bin_info_t *bin_info =
				    &arena_bin_info[binind];
				assert(((uintptr_t)ptr - ((uintptr_t)run +
				    (uintptr_t)bin_info->reg0_offset)) %
				    bin_info->reg_interval == 0);
			}
			malloc_mutex_lock(&bin->lock);
			arena_dalloc_bin(arena, chunk, ptr, mapelm);
			malloc_mutex_unlock(&bin->lock);
		}
	} else {
		size_t size = mapelm->bits & ~PAGE_MASK;

		assert(((uintptr_t)ptr & PAGE_MASK) == 0);

		if (try_tcache && size <= tcache_maxclass && (tcache =
		    tcache_get(false)) != NULL) {
			tcache_dalloc_large(tcache, ptr, size);
		} else {
			malloc_mutex_lock(&arena->lock);
			arena_dalloc_large(arena, chunk, ptr);
			malloc_mutex_unlock(&arena->lock);
		}
	}
}
#endif

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
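The two inline functions above route every request by size class: small requests prefer the thread cache, large ones fall back to the owning arena, and the tcache is consulted only after the size check to avoid recursing during tcache initialization. A minimal standalone sketch of that dispatch shape follows; the demo_* names and thresholds are hypothetical stand-ins, not part of the import.

/*
 * Standalone sketch (not part of the import): the same dispatch shape as
 * arena_malloc(), with invented thresholds standing in for
 * SMALL_MAXCLASS and tcache_maxclass.
 */
#include <stddef.h>

#define DEMO_SMALL_MAX  3584   /* stand-in for SMALL_MAXCLASS */
#define DEMO_TCACHE_MAX 32768  /* stand-in for tcache_maxclass */

const char *
demo_alloc_path(size_t size, int have_tcache)
{
    if (size <= DEMO_SMALL_MAX)
        return (have_tcache ? "tcache_alloc_small" : "arena_malloc_small");
    if (have_tcache && size <= DEMO_TCACHE_MAX)
        return ("tcache_alloc_large");
    return ("arena_malloc_large");
}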
240
contrib/jemalloc/include/jemalloc/internal/atomic.h
Normal file
@ -0,0 +1,240 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

#define atomic_read_uint64(p) atomic_add_uint64(p, 0)
#define atomic_read_uint32(p) atomic_add_uint32(p, 0)
#define atomic_read_z(p) atomic_add_z(p, 0)
#define atomic_read_u(p) atomic_add_u(p, 0)

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#ifndef JEMALLOC_ENABLE_INLINE
uint64_t atomic_add_uint64(uint64_t *p, uint64_t x);
uint64_t atomic_sub_uint64(uint64_t *p, uint64_t x);
uint32_t atomic_add_uint32(uint32_t *p, uint32_t x);
uint32_t atomic_sub_uint32(uint32_t *p, uint32_t x);
size_t atomic_add_z(size_t *p, size_t x);
size_t atomic_sub_z(size_t *p, size_t x);
unsigned atomic_add_u(unsigned *p, unsigned x);
unsigned atomic_sub_u(unsigned *p, unsigned x);
#endif

#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ATOMIC_C_))
/******************************************************************************/
/* 64-bit operations. */
#ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{

	return (__sync_add_and_fetch(p, x));
}

JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{

	return (__sync_sub_and_fetch(p, x));
}
#elif (defined(JEMALLOC_OSATOMIC))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{

	return (OSAtomicAdd64((int64_t)x, (int64_t *)p));
}

JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{

	return (OSAtomicAdd64(-((int64_t)x), (int64_t *)p));
}
#elif (defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{

	asm volatile (
	    "lock; xaddq %0, %1;"
	    : "+r" (x), "=m" (*p) /* Outputs. */
	    : "m" (*p) /* Inputs. */
	    );

	return (x);
}

JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{

	x = (uint64_t)(-(int64_t)x);
	asm volatile (
	    "lock; xaddq %0, %1;"
	    : "+r" (x), "=m" (*p) /* Outputs. */
	    : "m" (*p) /* Inputs. */
	    );

	return (x);
}
#elif (defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_8))
JEMALLOC_INLINE uint64_t
atomic_add_uint64(uint64_t *p, uint64_t x)
{

	return (__sync_add_and_fetch(p, x));
}

JEMALLOC_INLINE uint64_t
atomic_sub_uint64(uint64_t *p, uint64_t x)
{

	return (__sync_sub_and_fetch(p, x));
}
#else
# if (LG_SIZEOF_PTR == 3)
#  error "Missing implementation for 64-bit atomic operations"
# endif
#endif

/******************************************************************************/
/* 32-bit operations. */
#ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{

	return (__sync_add_and_fetch(p, x));
}

JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{

	return (__sync_sub_and_fetch(p, x));
}
#elif (defined(JEMALLOC_OSATOMIC))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{

	return (OSAtomicAdd32((int32_t)x, (int32_t *)p));
}

JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{

	return (OSAtomicAdd32(-((int32_t)x), (int32_t *)p));
}
#elif (defined(__i386__) || defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{

	asm volatile (
	    "lock; xaddl %0, %1;"
	    : "+r" (x), "=m" (*p) /* Outputs. */
	    : "m" (*p) /* Inputs. */
	    );

	return (x);
}

JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{

	x = (uint32_t)(-(int32_t)x);
	asm volatile (
	    "lock; xaddl %0, %1;"
	    : "+r" (x), "=m" (*p) /* Outputs. */
	    : "m" (*p) /* Inputs. */
	    );

	return (x);
}
#elif (defined(JE_FORCE_SYNC_COMPARE_AND_SWAP_4))
JEMALLOC_INLINE uint32_t
atomic_add_uint32(uint32_t *p, uint32_t x)
{

	return (__sync_add_and_fetch(p, x));
}

JEMALLOC_INLINE uint32_t
atomic_sub_uint32(uint32_t *p, uint32_t x)
{

	return (__sync_sub_and_fetch(p, x));
}
#else
# error "Missing implementation for 32-bit atomic operations"
#endif

/******************************************************************************/
/* size_t operations. */
JEMALLOC_INLINE size_t
atomic_add_z(size_t *p, size_t x)
{

#if (LG_SIZEOF_PTR == 3)
	return ((size_t)atomic_add_uint64((uint64_t *)p, (uint64_t)x));
#elif (LG_SIZEOF_PTR == 2)
	return ((size_t)atomic_add_uint32((uint32_t *)p, (uint32_t)x));
#endif
}

JEMALLOC_INLINE size_t
atomic_sub_z(size_t *p, size_t x)
{

#if (LG_SIZEOF_PTR == 3)
	return ((size_t)atomic_add_uint64((uint64_t *)p,
	    (uint64_t)-((int64_t)x)));
#elif (LG_SIZEOF_PTR == 2)
	return ((size_t)atomic_add_uint32((uint32_t *)p,
	    (uint32_t)-((int32_t)x)));
#endif
}

/******************************************************************************/
/* unsigned operations. */
JEMALLOC_INLINE unsigned
atomic_add_u(unsigned *p, unsigned x)
{

#if (LG_SIZEOF_INT == 3)
	return ((unsigned)atomic_add_uint64((uint64_t *)p, (uint64_t)x));
#elif (LG_SIZEOF_INT == 2)
	return ((unsigned)atomic_add_uint32((uint32_t *)p, (uint32_t)x));
#endif
}

JEMALLOC_INLINE unsigned
atomic_sub_u(unsigned *p, unsigned x)
{

#if (LG_SIZEOF_INT == 3)
	return ((unsigned)atomic_add_uint64((uint64_t *)p,
	    (uint64_t)-((int64_t)x)));
#elif (LG_SIZEOF_INT == 2)
	return ((unsigned)atomic_add_uint32((uint32_t *)p,
	    (uint32_t)-((int32_t)x)));
#endif
}
/******************************************************************************/
#endif

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
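The atomic_read_*() macros above implement atomic loads as an atomic add of zero, so every width needs only an add primitive. A standalone sketch (not part of the import) of the same trick, written directly against the __sync builtin that the GCC paths compile to; it assumes a GCC-compatible compiler.

/*
 * Standalone sketch: read-as-add-zero, mirroring atomic_read_uint64().
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t counter;

int
main(void)
{
    __sync_add_and_fetch(&counter, 42);  /* like atomic_add_uint64(&counter, 42) */
    /* atomic_read_uint64(p) is just atomic_add_uint64(p, 0). */
    printf("%llu\n",
        (unsigned long long)__sync_add_and_fetch(&counter, 0));
    return (0);
}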
26
contrib/jemalloc/include/jemalloc/internal/base.h
Normal file
@ -0,0 +1,26 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

void *base_alloc(size_t size);
void *base_calloc(size_t number, size_t size);
extent_node_t *base_node_alloc(void);
void base_node_dealloc(extent_node_t *node);
bool base_boot(void);
void base_prefork(void);
void base_postfork_parent(void);
void base_postfork_child(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
184
contrib/jemalloc/include/jemalloc/internal/bitmap.h
Normal file
@ -0,0 +1,184 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

/* Maximum bitmap bit count is 2^LG_BITMAP_MAXBITS. */
#define LG_BITMAP_MAXBITS LG_RUN_MAXREGS

typedef struct bitmap_level_s bitmap_level_t;
typedef struct bitmap_info_s bitmap_info_t;
typedef unsigned long bitmap_t;
#define LG_SIZEOF_BITMAP LG_SIZEOF_LONG

/* Number of bits per group. */
#define LG_BITMAP_GROUP_NBITS (LG_SIZEOF_BITMAP + 3)
#define BITMAP_GROUP_NBITS (ZU(1) << LG_BITMAP_GROUP_NBITS)
#define BITMAP_GROUP_NBITS_MASK (BITMAP_GROUP_NBITS-1)

/* Maximum number of levels possible. */
#define BITMAP_MAX_LEVELS \
    (LG_BITMAP_MAXBITS / LG_SIZEOF_BITMAP) \
    + !!(LG_BITMAP_MAXBITS % LG_SIZEOF_BITMAP)

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

struct bitmap_level_s {
	/* Offset of this level's groups within the array of groups. */
	size_t group_offset;
};

struct bitmap_info_s {
	/* Logical number of bits in bitmap (stored at bottom level). */
	size_t nbits;

	/* Number of levels necessary for nbits. */
	unsigned nlevels;

	/*
	 * Only the first (nlevels+1) elements are used, and levels are ordered
	 * bottom to top (e.g. the bottom level is stored in levels[0]).
	 */
	bitmap_level_t levels[BITMAP_MAX_LEVELS+1];
};

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

void bitmap_info_init(bitmap_info_t *binfo, size_t nbits);
size_t bitmap_info_ngroups(const bitmap_info_t *binfo);
size_t bitmap_size(size_t nbits);
void bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#ifndef JEMALLOC_ENABLE_INLINE
bool bitmap_full(bitmap_t *bitmap, const bitmap_info_t *binfo);
bool bitmap_get(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit);
void bitmap_set(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit);
size_t bitmap_sfu(bitmap_t *bitmap, const bitmap_info_t *binfo);
void bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit);
#endif

#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_BITMAP_C_))
JEMALLOC_INLINE bool
bitmap_full(bitmap_t *bitmap, const bitmap_info_t *binfo)
{
	unsigned rgoff = binfo->levels[binfo->nlevels].group_offset - 1;
	bitmap_t rg = bitmap[rgoff];
	/* The bitmap is full iff the root group is 0. */
	return (rg == 0);
}

JEMALLOC_INLINE bool
bitmap_get(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
{
	size_t goff;
	bitmap_t g;

	assert(bit < binfo->nbits);
	goff = bit >> LG_BITMAP_GROUP_NBITS;
	g = bitmap[goff];
	return (!(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK))));
}

JEMALLOC_INLINE void
bitmap_set(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
{
	size_t goff;
	bitmap_t *gp;
	bitmap_t g;

	assert(bit < binfo->nbits);
	assert(bitmap_get(bitmap, binfo, bit) == false);
	goff = bit >> LG_BITMAP_GROUP_NBITS;
	gp = &bitmap[goff];
	g = *gp;
	assert(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK)));
	g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
	*gp = g;
	assert(bitmap_get(bitmap, binfo, bit));
	/* Propagate group state transitions up the tree. */
	if (g == 0) {
		unsigned i;
		for (i = 1; i < binfo->nlevels; i++) {
			bit = goff;
			goff = bit >> LG_BITMAP_GROUP_NBITS;
			gp = &bitmap[binfo->levels[i].group_offset + goff];
			g = *gp;
			assert(g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK)));
			g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
			*gp = g;
			if (g != 0)
				break;
		}
	}
}

/* sfu: set first unset. */
JEMALLOC_INLINE size_t
bitmap_sfu(bitmap_t *bitmap, const bitmap_info_t *binfo)
{
	size_t bit;
	bitmap_t g;
	unsigned i;

	assert(bitmap_full(bitmap, binfo) == false);

	i = binfo->nlevels - 1;
	g = bitmap[binfo->levels[i].group_offset];
	bit = ffsl(g) - 1;
	while (i > 0) {
		i--;
		g = bitmap[binfo->levels[i].group_offset + bit];
		bit = (bit << LG_BITMAP_GROUP_NBITS) + (ffsl(g) - 1);
	}

	bitmap_set(bitmap, binfo, bit);
	return (bit);
}

JEMALLOC_INLINE void
bitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit)
{
	size_t goff;
	bitmap_t *gp;
	bitmap_t g;
	bool propagate;

	assert(bit < binfo->nbits);
	assert(bitmap_get(bitmap, binfo, bit));
	goff = bit >> LG_BITMAP_GROUP_NBITS;
	gp = &bitmap[goff];
	g = *gp;
	propagate = (g == 0);
	assert((g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK))) == 0);
	g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
	*gp = g;
	assert(bitmap_get(bitmap, binfo, bit) == false);
	/* Propagate group state transitions up the tree. */
	if (propagate) {
		unsigned i;
		for (i = 1; i < binfo->nlevels; i++) {
			bit = goff;
			goff = bit >> LG_BITMAP_GROUP_NBITS;
			gp = &bitmap[binfo->levels[i].group_offset + goff];
			g = *gp;
			propagate = (g == 0);
			assert((g & (1LU << (bit & BITMAP_GROUP_NBITS_MASK)))
			    == 0);
			g ^= 1LU << (bit & BITMAP_GROUP_NBITS_MASK);
			*gp = g;
			if (propagate == false)
				break;
		}
	}
}

#endif

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
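The bitmap is hierarchical: each level packs BITMAP_GROUP_NBITS bits of the level below into one summary group, which is what lets bitmap_sfu() locate the first unset bit with one word read per level. A standalone sketch (not part of the import) that computes by hand the level geometry bitmap_info_init() would produce for a 2048-bit bitmap on an LP64 system.

/*
 * Standalone sketch: level geometry for nbits = 2048 with 64-bit groups.
 * Prints "level 0: 32 group(s)" and "level 1: 1 group(s)".
 */
#include <stdio.h>

int
main(void)
{
    size_t nbits = 2048;        /* logical bits at the bottom level */
    size_t group_nbits = 64;    /* BITMAP_GROUP_NBITS on LP64 */
    unsigned level = 0;

    while (nbits > 1) {
        size_t ngroups = (nbits + group_nbits - 1) / group_nbits;
        printf("level %u: %zu group(s)\n", level, ngroups);
        nbits = ngroups;        /* one summary bit per group above */
        level++;
    }
    return (0);
}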
58
contrib/jemalloc/include/jemalloc/internal/chunk.h
Normal file
@ -0,0 +1,58 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

/*
 * Size and alignment of memory chunks that are allocated by the OS's virtual
 * memory system.
 */
#define LG_CHUNK_DEFAULT 22

/* Return the chunk address for allocation address a. */
#define CHUNK_ADDR2BASE(a) \
    ((void *)((uintptr_t)(a) & ~chunksize_mask))

/* Return the chunk offset of address a. */
#define CHUNK_ADDR2OFFSET(a) \
    ((size_t)((uintptr_t)(a) & chunksize_mask))

/* Return the smallest chunk multiple that is >= s. */
#define CHUNK_CEILING(s) \
    (((s) + chunksize_mask) & ~chunksize_mask)

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

extern size_t opt_lg_chunk;

/* Protects stats_chunks; currently not used for any other purpose. */
extern malloc_mutex_t chunks_mtx;
/* Chunk statistics. */
extern chunk_stats_t stats_chunks;

extern rtree_t *chunks_rtree;

extern size_t chunksize;
extern size_t chunksize_mask; /* (chunksize - 1). */
extern size_t chunk_npages;
extern size_t map_bias; /* Number of arena chunk header pages. */
extern size_t arena_maxclass; /* Max size class for arenas. */

void *chunk_alloc(size_t size, size_t alignment, bool base, bool *zero);
void chunk_dealloc(void *chunk, size_t size, bool unmap);
bool chunk_boot0(void);
bool chunk_boot1(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/

#include "jemalloc/internal/chunk_dss.h"
#include "jemalloc/internal/chunk_mmap.h"
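Chunks are naturally aligned to their own size, so base, offset, and ceiling all reduce to mask arithmetic on the low bits. A standalone sketch (not part of the import) with the default 4 MiB chunk size implied by LG_CHUNK_DEFAULT; the sample address is invented.

/*
 * Standalone sketch: the mask arithmetic behind CHUNK_ADDR2BASE,
 * CHUNK_ADDR2OFFSET, and CHUNK_CEILING for chunksize = 1 << 22.
 */
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

int
main(void)
{
    size_t chunksize = (size_t)1 << 22;
    size_t chunksize_mask = chunksize - 1;
    uintptr_t a = 0x40123456;   /* hypothetical allocation address */

    /* Clearing the low bits yields the enclosing chunk's base... */
    assert((a & ~(uintptr_t)chunksize_mask) == 0x40000000);
    /* ...and the low bits alone are the offset within the chunk. */
    assert((a & (uintptr_t)chunksize_mask) == 0x123456);
    /* Rounding a size up to the next chunk multiple. */
    assert(((chunksize + 1 + chunksize_mask) & ~chunksize_mask) ==
        2 * chunksize);
    return (0);
}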
24
contrib/jemalloc/include/jemalloc/internal/chunk_dss.h
Normal file
@ -0,0 +1,24 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

void *chunk_alloc_dss(size_t size, size_t alignment, bool *zero);
bool chunk_in_dss(void *chunk);
bool chunk_dss_boot(void);
void chunk_dss_prefork(void);
void chunk_dss_postfork_parent(void);
void chunk_dss_postfork_child(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
22
contrib/jemalloc/include/jemalloc/internal/chunk_mmap.h
Normal file
@ -0,0 +1,22 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

void *chunk_alloc_mmap(size_t size, size_t alignment);
bool chunk_dealloc_mmap(void *chunk, size_t size);

bool chunk_mmap_boot(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
90
contrib/jemalloc/include/jemalloc/internal/ckh.h
Normal file
@ -0,0 +1,90 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

typedef struct ckh_s ckh_t;
typedef struct ckhc_s ckhc_t;

/* Typedefs to allow easy function pointer passing. */
typedef void ckh_hash_t (const void *, unsigned, size_t *, size_t *);
typedef bool ckh_keycomp_t (const void *, const void *);

/* Maintain counters used to get an idea of performance. */
/* #define CKH_COUNT */
/* Print counter values in ckh_delete() (requires CKH_COUNT). */
/* #define CKH_VERBOSE */

/*
 * There are 2^LG_CKH_BUCKET_CELLS cells in each hash table bucket.  Try to fit
 * one bucket per L1 cache line.
 */
#define LG_CKH_BUCKET_CELLS (LG_CACHELINE - LG_SIZEOF_PTR - 1)

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

/* Hash table cell. */
struct ckhc_s {
	const void *key;
	const void *data;
};

struct ckh_s {
#ifdef CKH_COUNT
	/* Counters used to get an idea of performance. */
	uint64_t ngrows;
	uint64_t nshrinks;
	uint64_t nshrinkfails;
	uint64_t ninserts;
	uint64_t nrelocs;
#endif

	/* Used for pseudo-random number generation. */
#define CKH_A 1103515241
#define CKH_C 12347
	uint32_t prng_state;

	/* Total number of items. */
	size_t count;

	/*
	 * Minimum and current number of hash table buckets.  There are
	 * 2^LG_CKH_BUCKET_CELLS cells per bucket.
	 */
	unsigned lg_minbuckets;
	unsigned lg_curbuckets;

	/* Hash and comparison functions. */
	ckh_hash_t *hash;
	ckh_keycomp_t *keycomp;

	/* Hash table with 2^lg_curbuckets buckets. */
	ckhc_t *tab;
};

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

bool ckh_new(ckh_t *ckh, size_t minitems, ckh_hash_t *hash,
    ckh_keycomp_t *keycomp);
void ckh_delete(ckh_t *ckh);
size_t ckh_count(ckh_t *ckh);
bool ckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data);
bool ckh_insert(ckh_t *ckh, const void *key, const void *data);
bool ckh_remove(ckh_t *ckh, const void *searchkey, void **key,
    void **data);
bool ckh_search(ckh_t *ckh, const void *searchkey, void **key, void **data);
void ckh_string_hash(const void *key, unsigned minbits, size_t *hash1,
    size_t *hash2);
bool ckh_string_keycomp(const void *k1, const void *k2);
void ckh_pointer_hash(const void *key, unsigned minbits, size_t *hash1,
    size_t *hash2);
bool ckh_pointer_keycomp(const void *k1, const void *k2);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
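For flavor, a sketch of how this cuckoo-hash API is typically driven, using the string hash and key comparator declared above. It follows the convention visible in the prototypes that a true return means failure. Since ckh_t lives behind the internal headers, this compiles only inside the allocator's source tree and is shown for the call shape only; the literal key and value are invented.

/*
 * Sketch only: assumes the jemalloc internal headers are in scope.
 */
static void
ckh_demo(void)
{
    ckh_t ckh;
    void *data;

    if (ckh_new(&ckh, 16, ckh_string_hash, ckh_string_keycomp))
        return;     /* true return indicates failure */
    if (ckh_insert(&ckh, "key", "value") == false) {
        if (ckh_search(&ckh, "key", NULL, &data) == false) {
            /* data now points at "value". */
        }
    }
    ckh_delete(&ckh);
}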
109
contrib/jemalloc/include/jemalloc/internal/ctl.h
Normal file
@ -0,0 +1,109 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

typedef struct ctl_node_s ctl_node_t;
typedef struct ctl_arena_stats_s ctl_arena_stats_t;
typedef struct ctl_stats_s ctl_stats_t;

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

struct ctl_node_s {
	bool named;
	union {
		struct {
			const char *name;
			/* If (nchildren == 0), this is a terminal node. */
			unsigned nchildren;
			const ctl_node_t *children;
		} named;
		struct {
			const ctl_node_t *(*index)(const size_t *, size_t,
			    size_t);
		} indexed;
	} u;
	int (*ctl)(const size_t *, size_t, void *, size_t *, void *,
	    size_t);
};

struct ctl_arena_stats_s {
	bool initialized;
	unsigned nthreads;
	size_t pactive;
	size_t pdirty;
	arena_stats_t astats;

	/* Aggregate stats for small size classes, based on bin stats. */
	size_t allocated_small;
	uint64_t nmalloc_small;
	uint64_t ndalloc_small;
	uint64_t nrequests_small;

	malloc_bin_stats_t bstats[NBINS];
	malloc_large_stats_t *lstats; /* nlclasses elements. */
};

struct ctl_stats_s {
	size_t allocated;
	size_t active;
	size_t mapped;
	struct {
		size_t current;  /* stats_chunks.curchunks */
		uint64_t total;  /* stats_chunks.nchunks */
		size_t high;     /* stats_chunks.highchunks */
	} chunks;
	struct {
		size_t allocated;  /* huge_allocated */
		uint64_t nmalloc;  /* huge_nmalloc */
		uint64_t ndalloc;  /* huge_ndalloc */
	} huge;
	ctl_arena_stats_t *arenas; /* (narenas + 1) elements. */
};

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

int ctl_byname(const char *name, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen);
int ctl_nametomib(const char *name, size_t *mibp, size_t *miblenp);

int ctl_bymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp,
    void *newp, size_t newlen);
bool ctl_boot(void);

#define xmallctl(name, oldp, oldlenp, newp, newlen) do { \
	if (je_mallctl(name, oldp, oldlenp, newp, newlen) \
	    != 0) { \
		malloc_printf( \
		    "<jemalloc>: Failure in xmallctl(\"%s\", ...)\n", \
		    name); \
		abort(); \
	} \
} while (0)

#define xmallctlnametomib(name, mibp, miblenp) do { \
	if (je_mallctlnametomib(name, mibp, miblenp) != 0) { \
		malloc_printf("<jemalloc>: Failure in " \
		    "xmallctlnametomib(\"%s\", ...)\n", name); \
		abort(); \
	} \
} while (0)

#define xmallctlbymib(mib, miblen, oldp, oldlenp, newp, newlen) do { \
	if (je_mallctlbymib(mib, miblen, oldp, oldlenp, newp, \
	    newlen) != 0) { \
		malloc_write( \
		    "<jemalloc>: Failure in xmallctlbymib()\n"); \
		abort(); \
	} \
} while (0)

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/

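The xmallctl*() macros wrap the public mallctl() control interface and abort on any failure, which is appropriate for internal callers that pass known-good names. A standalone sketch (not part of the import) of the underlying public call, assuming FreeBSD's <malloc_np.h> declaration of mallctl() is available once this import is wired into libc.

/*
 * Standalone sketch: read the "stats.allocated" statistic via mallctl().
 */
#include <stdio.h>
#include <malloc_np.h>  /* assumed home of mallctl() on FreeBSD */

int
main(void)
{
    size_t allocated, sz = sizeof(allocated);

    if (mallctl("stats.allocated", &allocated, &sz, NULL, 0) == 0)
        printf("allocated: %zu bytes\n", allocated);
    return (0);
}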
43
contrib/jemalloc/include/jemalloc/internal/extent.h
Normal file
@ -0,0 +1,43 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

typedef struct extent_node_s extent_node_t;

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

/* Tree of extents. */
struct extent_node_s {
	/* Linkage for the size/address-ordered tree. */
	rb_node(extent_node_t) link_szad;

	/* Linkage for the address-ordered tree. */
	rb_node(extent_node_t) link_ad;

	/* Profile counters, used for huge objects. */
	prof_ctx_t *prof_ctx;

	/* Pointer to the extent that this tree node is responsible for. */
	void *addr;

	/* Total region size. */
	size_t size;
};
typedef rb_tree(extent_node_t) extent_tree_t;

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

rb_proto(, extent_tree_szad_, extent_tree_t, extent_node_t)

rb_proto(, extent_tree_ad_, extent_tree_t, extent_node_t)

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/

70
contrib/jemalloc/include/jemalloc/internal/hash.h
Normal file
@ -0,0 +1,70 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#ifndef JEMALLOC_ENABLE_INLINE
uint64_t hash(const void *key, size_t len, uint64_t seed);
#endif

#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_HASH_C_))
/*
 * The following hash function is based on MurmurHash64A(), placed into the
 * public domain by Austin Appleby.  See http://murmurhash.googlepages.com/ for
 * details.
 */
JEMALLOC_INLINE uint64_t
hash(const void *key, size_t len, uint64_t seed)
{
	const uint64_t m = UINT64_C(0xc6a4a7935bd1e995);
	const int r = 47;
	uint64_t h = seed ^ (len * m);
	const uint64_t *data = (const uint64_t *)key;
	const uint64_t *end = data + (len/8);
	const unsigned char *data2;

	assert(((uintptr_t)key & 0x7) == 0);

	while (data != end) {
		uint64_t k = *data++;

		k *= m;
		k ^= k >> r;
		k *= m;

		h ^= k;
		h *= m;
	}

	data2 = (const unsigned char *)data;
	switch (len & 7) {
	case 7: h ^= ((uint64_t)(data2[6])) << 48;
	case 6: h ^= ((uint64_t)(data2[5])) << 40;
	case 5: h ^= ((uint64_t)(data2[4])) << 32;
	case 4: h ^= ((uint64_t)(data2[3])) << 24;
	case 3: h ^= ((uint64_t)(data2[2])) << 16;
	case 2: h ^= ((uint64_t)(data2[1])) << 8;
	case 1: h ^= ((uint64_t)(data2[0]));
		h *= m;
	}

	h ^= h >> r;
	h *= m;
	h ^= h >> r;

	return (h);
}
#endif

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
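Note the assert: hash() reads the key eight bytes at a time and therefore requires 8-byte-aligned keys. A standalone sketch (not part of the import) of the alignment-safe staging a caller would use for string keys; the hash() call itself is elided, so only the staging is demonstrated, and the sample key is invented.

/*
 * Standalone sketch: staging a short string key into an aligned buffer.
 */
#include <assert.h>
#include <stdint.h>
#include <string.h>

int
main(void)
{
    uint64_t buf[4] = {0};  /* uint64_t array => 8-byte aligned */
    const char *key = "arenas.narenas";

    assert(strlen(key) <= sizeof(buf));
    memcpy(buf, key, strlen(key));
    /* buf now satisfies hash()'s ((uintptr_t)key & 0x7) == 0 assert. */
    assert(((uintptr_t)buf & 0x7) == 0);
    return (0);
}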
40
contrib/jemalloc/include/jemalloc/internal/huge.h
Normal file
@ -0,0 +1,40 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

/* Huge allocation statistics. */
extern uint64_t huge_nmalloc;
extern uint64_t huge_ndalloc;
extern size_t huge_allocated;

/* Protects chunk-related data structures. */
extern malloc_mutex_t huge_mtx;

void *huge_malloc(size_t size, bool zero);
void *huge_palloc(size_t size, size_t alignment, bool zero);
void *huge_ralloc_no_move(void *ptr, size_t oldsize, size_t size,
    size_t extra);
void *huge_ralloc(void *ptr, size_t oldsize, size_t size, size_t extra,
    size_t alignment, bool zero);
void huge_dalloc(void *ptr, bool unmap);
size_t huge_salloc(const void *ptr);
prof_ctx_t *huge_prof_ctx_get(const void *ptr);
void huge_prof_ctx_set(const void *ptr, prof_ctx_t *ctx);
bool huge_boot(void);
void huge_prefork(void);
void huge_postfork_parent(void);
void huge_postfork_child(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
876
contrib/jemalloc/include/jemalloc/internal/jemalloc_internal.h
Normal file
@ -0,0 +1,876 @@
#include "libc_private.h"
#include "namespace.h"

#include <sys/mman.h>
#include <sys/param.h>
#include <sys/syscall.h>
#if !defined(SYS_write) && defined(__NR_write)
#define SYS_write __NR_write
#endif
#include <sys/time.h>
#include <sys/types.h>
#include <sys/uio.h>

#include <errno.h>
#include <limits.h>
#ifndef SIZE_T_MAX
# define SIZE_T_MAX SIZE_MAX
#endif
#include <pthread.h>
#include <sched.h>
#include <stdarg.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>
#ifndef offsetof
# define offsetof(type, member) ((size_t)&(((type *)NULL)->member))
#endif
#include <inttypes.h>
#include <string.h>
#include <strings.h>
#include <ctype.h>
#include <unistd.h>
#include <fcntl.h>
#include <pthread.h>
#include <math.h>

#include "un-namespace.h"
#include "libc_private.h"

#define JEMALLOC_NO_DEMANGLE
#include "../jemalloc.h"

#ifdef JEMALLOC_UTRACE
#include <sys/ktrace.h>
#endif

#ifdef JEMALLOC_VALGRIND
#include <valgrind/valgrind.h>
#include <valgrind/memcheck.h>
#endif

#include "jemalloc/internal/private_namespace.h"

#ifdef JEMALLOC_CC_SILENCE
#define UNUSED JEMALLOC_ATTR(unused)
#else
#define UNUSED
#endif

static const bool config_debug =
#ifdef JEMALLOC_DEBUG
    true
#else
    false
#endif
    ;
static const bool config_dss =
#ifdef JEMALLOC_DSS
    true
#else
    false
#endif
    ;
static const bool config_fill =
#ifdef JEMALLOC_FILL
    true
#else
    false
#endif
    ;
static const bool config_lazy_lock =
#ifdef JEMALLOC_LAZY_LOCK
    true
#else
    false
#endif
    ;
static const bool config_prof =
#ifdef JEMALLOC_PROF
    true
#else
    false
#endif
    ;
static const bool config_prof_libgcc =
#ifdef JEMALLOC_PROF_LIBGCC
    true
#else
    false
#endif
    ;
static const bool config_prof_libunwind =
#ifdef JEMALLOC_PROF_LIBUNWIND
    true
#else
    false
#endif
    ;
static const bool config_munmap =
#ifdef JEMALLOC_MUNMAP
    true
#else
    false
#endif
    ;
static const bool config_stats =
#ifdef JEMALLOC_STATS
    true
#else
    false
#endif
    ;
static const bool config_tcache =
#ifdef JEMALLOC_TCACHE
    true
#else
    false
#endif
    ;
static const bool config_tls =
#ifdef JEMALLOC_TLS
    true
#else
    false
#endif
    ;
static const bool config_utrace =
#ifdef JEMALLOC_UTRACE
    true
#else
    false
#endif
    ;
static const bool config_valgrind =
#ifdef JEMALLOC_VALGRIND
    true
#else
    false
#endif
    ;
static const bool config_xmalloc =
#ifdef JEMALLOC_XMALLOC
    true
#else
    false
#endif
    ;
static const bool config_ivsalloc =
#ifdef JEMALLOC_IVSALLOC
    true
#else
    false
#endif
    ;

#if (defined(JEMALLOC_OSATOMIC) || defined(JEMALLOC_OSSPIN))
#include <libkern/OSAtomic.h>
#endif

#ifdef JEMALLOC_ZONE
#include <mach/mach_error.h>
#include <mach/mach_init.h>
#include <mach/vm_map.h>
#include <malloc/malloc.h>
#endif

#define RB_COMPACT
#include "jemalloc/internal/rb.h"
#include "jemalloc/internal/qr.h"
#include "jemalloc/internal/ql.h"

/*
 * jemalloc can conceptually be broken into components (arena, tcache, etc.),
 * but there are circular dependencies that cannot be broken without
 * substantial performance degradation.  In order to reduce the effect on
 * visual code flow, read the header files in multiple passes, with one of the
 * following cpp variables defined during each pass:
 *
 *   JEMALLOC_H_TYPES   : Preprocessor-defined constants and pseudo-opaque data
 *                        types.
 *   JEMALLOC_H_STRUCTS : Data structures.
 *   JEMALLOC_H_EXTERNS : Extern data declarations and function prototypes.
 *   JEMALLOC_H_INLINES : Inline functions.
 */
/******************************************************************************/
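A standalone sketch (not part of the import) of that multi-pass pattern, reduced to one hypothetical header and its consumer; all DEMO_* names are invented for illustration. The point is that one physical header behaves as four logical headers, so every component's types can exist before any component's structs, and so on.

/*
 * demo.h -- one physical header, four logical headers.
 */
#ifdef DEMO_H_TYPES
typedef struct demo_s demo_t;
#endif
#ifdef DEMO_H_STRUCTS
struct demo_s { int x; };
#endif
#ifdef DEMO_H_EXTERNS
void demo_init(demo_t *d);
#endif
#ifdef DEMO_H_INLINES
static inline int demo_get(demo_t *d) { return (d->x); }
#endif

/*
 * consumer.c -- reads the header once per pass.
 */
#define DEMO_H_TYPES
#include "demo.h"
#undef DEMO_H_TYPES
#define DEMO_H_STRUCTS
#include "demo.h"
#undef DEMO_H_STRUCTS
#define DEMO_H_EXTERNS
#include "demo.h"
#undef DEMO_H_EXTERNS
#define DEMO_H_INLINES
#include "demo.h"
#undef DEMO_H_INLINES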
#define JEMALLOC_H_TYPES

#define ALLOCM_LG_ALIGN_MASK ((int)0x3f)

#define ZU(z) ((size_t)z)

#ifndef __DECONST
# define __DECONST(type, var) ((type)(uintptr_t)(const void *)(var))
#endif

#ifdef JEMALLOC_DEBUG
   /* Disable inlining to make debugging easier. */
# define JEMALLOC_INLINE
# define inline
#else
# define JEMALLOC_ENABLE_INLINE
# define JEMALLOC_INLINE static inline
#endif

/* Smallest size class to support. */
#define LG_TINY_MIN 3
#define TINY_MIN (1U << LG_TINY_MIN)

/*
 * Minimum alignment of allocations is 2^LG_QUANTUM bytes (ignoring tiny size
 * classes).
 */
#ifndef LG_QUANTUM
# ifdef __i386__
#  define LG_QUANTUM 4
# endif
# ifdef __ia64__
#  define LG_QUANTUM 4
# endif
# ifdef __alpha__
#  define LG_QUANTUM 4
# endif
# ifdef __sparc64__
#  define LG_QUANTUM 4
# endif
# if (defined(__amd64__) || defined(__x86_64__))
#  define LG_QUANTUM 4
# endif
# ifdef __arm__
#  define LG_QUANTUM 3
# endif
# ifdef __mips__
#  define LG_QUANTUM 3
# endif
# ifdef __powerpc__
#  define LG_QUANTUM 4
# endif
# ifdef __s390x__
#  define LG_QUANTUM 4
# endif
# ifdef __SH4__
#  define LG_QUANTUM 4
# endif
# ifdef __tile__
#  define LG_QUANTUM 4
# endif
# ifndef LG_QUANTUM
#  error "No LG_QUANTUM definition for architecture; specify via CPPFLAGS"
# endif
#endif

#define QUANTUM ((size_t)(1U << LG_QUANTUM))
#define QUANTUM_MASK (QUANTUM - 1)

/* Return the smallest quantum multiple that is >= a. */
#define QUANTUM_CEILING(a) \
    (((a) + QUANTUM_MASK) & ~QUANTUM_MASK)

#define LONG ((size_t)(1U << LG_SIZEOF_LONG))
#define LONG_MASK (LONG - 1)

/* Return the smallest long multiple that is >= a. */
#define LONG_CEILING(a) \
    (((a) + LONG_MASK) & ~LONG_MASK)

#define SIZEOF_PTR (1U << LG_SIZEOF_PTR)
#define PTR_MASK (SIZEOF_PTR - 1)

/* Return the smallest (void *) multiple that is >= a. */
#define PTR_CEILING(a) \
    (((a) + PTR_MASK) & ~PTR_MASK)

/*
 * Maximum size of L1 cache line.  This is used to avoid cache line aliasing.
 * In addition, this controls the spacing of cacheline-spaced size classes.
 */
#define LG_CACHELINE 6
#define CACHELINE ((size_t)(1U << LG_CACHELINE))
#define CACHELINE_MASK (CACHELINE - 1)

/* Return the smallest cacheline multiple that is >= s. */
#define CACHELINE_CEILING(s) \
    (((s) + CACHELINE_MASK) & ~CACHELINE_MASK)

/* Page size.  STATIC_PAGE_SHIFT is determined by the configure script. */
#ifdef PAGE_MASK
# undef PAGE_MASK
#endif
#define LG_PAGE STATIC_PAGE_SHIFT
#define PAGE ((size_t)(1U << STATIC_PAGE_SHIFT))
#define PAGE_MASK ((size_t)(PAGE - 1))

/* Return the smallest pagesize multiple that is >= s. */
#define PAGE_CEILING(s) \
    (((s) + PAGE_MASK) & ~PAGE_MASK)

/* Return the nearest aligned address at or below a. */
#define ALIGNMENT_ADDR2BASE(a, alignment) \
    ((void *)((uintptr_t)(a) & (-(alignment))))

/* Return the offset between a and the nearest aligned address at or below a. */
#define ALIGNMENT_ADDR2OFFSET(a, alignment) \
    ((size_t)((uintptr_t)(a) & (alignment - 1)))

/* Return the smallest alignment multiple that is >= s. */
#define ALIGNMENT_CEILING(s, alignment) \
    (((s) + (alignment - 1)) & (-(alignment)))

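All of the *_CEILING macros above use the same add-then-mask idiom, which works only because the alignment is a power of two: adding (alignment - 1) pushes any non-multiple past the next boundary, and clearing the low bits truncates back down to it. A standalone sketch (not part of the import), spelled out for a 16-byte quantum.

/*
 * Standalone sketch: the add-then-mask ceiling idiom.
 */
#include <assert.h>
#include <stddef.h>

int
main(void)
{
    size_t quantum = 16, mask = quantum - 1;

    assert(((17 + mask) & ~mask) == 32);   /* non-multiples round up */
    assert(((32 + mask) & ~mask) == 32);   /* exact multiples unchanged */
    assert(((1 + mask) & ~mask) == 16);    /* minimum of one quantum */
    return (0);
}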
#ifdef JEMALLOC_VALGRIND
/*
 * The JEMALLOC_VALGRIND_*() macros must be macros rather than functions
 * so that when Valgrind reports errors, there are no extra stack frames
 * in the backtraces.
 *
 * The size that is reported to valgrind must be consistent through a chain of
 * malloc..realloc..realloc calls.  Request size isn't recorded anywhere in
 * jemalloc, so it is critical that all callers of these macros provide usize
 * rather than request size.  As a result, buffer overflow detection is
 * technically weakened for the standard API, though it is generally accepted
 * practice to consider any extra bytes reported by malloc_usable_size() as
 * usable space.
 */
#define JEMALLOC_VALGRIND_MALLOC(cond, ptr, usize, zero) do { \
	if (config_valgrind && opt_valgrind && cond) \
		VALGRIND_MALLOCLIKE_BLOCK(ptr, usize, p2rz(ptr), zero); \
} while (0)
#define JEMALLOC_VALGRIND_REALLOC(ptr, usize, old_ptr, old_usize, \
    old_rzsize, zero) do { \
	if (config_valgrind && opt_valgrind) { \
		size_t rzsize = p2rz(ptr); \
\
		if (ptr == old_ptr) { \
			VALGRIND_RESIZEINPLACE_BLOCK(ptr, old_usize, \
			    usize, rzsize); \
			if (zero && old_usize < usize) { \
				VALGRIND_MAKE_MEM_DEFINED( \
				    (void *)((uintptr_t)ptr + \
				    old_usize), usize - old_usize); \
			} \
		} else { \
			if (old_ptr != NULL) { \
				VALGRIND_FREELIKE_BLOCK(old_ptr, \
				    old_rzsize); \
			} \
			if (ptr != NULL) { \
				size_t copy_size = (old_usize < usize) \
				    ? old_usize : usize; \
				size_t tail_size = usize - copy_size; \
				VALGRIND_MALLOCLIKE_BLOCK(ptr, usize, \
				    rzsize, false); \
				if (copy_size > 0) { \
					VALGRIND_MAKE_MEM_DEFINED(ptr, \
					    copy_size); \
				} \
				if (zero && tail_size > 0) { \
					VALGRIND_MAKE_MEM_DEFINED( \
					    (void *)((uintptr_t)ptr + \
					    copy_size), tail_size); \
				} \
			} \
		} \
	} \
} while (0)
#define JEMALLOC_VALGRIND_FREE(ptr, rzsize) do { \
	if (config_valgrind && opt_valgrind) \
		VALGRIND_FREELIKE_BLOCK(ptr, rzsize); \
} while (0)
#else
#define VALGRIND_MALLOCLIKE_BLOCK(addr, sizeB, rzB, is_zeroed)
#define VALGRIND_RESIZEINPLACE_BLOCK(addr, oldSizeB, newSizeB, rzB)
#define VALGRIND_FREELIKE_BLOCK(addr, rzB)
#define VALGRIND_MAKE_MEM_UNDEFINED(_qzz_addr, _qzz_len)
#define VALGRIND_MAKE_MEM_DEFINED(_qzz_addr, _qzz_len)
#define JEMALLOC_VALGRIND_MALLOC(cond, ptr, usize, zero)
#define JEMALLOC_VALGRIND_REALLOC(ptr, usize, old_ptr, old_usize, \
    old_rzsize, zero)
#define JEMALLOC_VALGRIND_FREE(ptr, rzsize)
#endif

#include "jemalloc/internal/util.h"
|
||||
#include "jemalloc/internal/atomic.h"
|
||||
#include "jemalloc/internal/prng.h"
|
||||
#include "jemalloc/internal/ckh.h"
|
||||
#include "jemalloc/internal/size_classes.h"
|
||||
#include "jemalloc/internal/stats.h"
|
||||
#include "jemalloc/internal/ctl.h"
|
||||
#include "jemalloc/internal/mutex.h"
|
||||
#include "jemalloc/internal/tsd.h"
|
||||
#include "jemalloc/internal/mb.h"
|
||||
#include "jemalloc/internal/extent.h"
|
||||
#include "jemalloc/internal/arena.h"
|
||||
#include "jemalloc/internal/bitmap.h"
|
||||
#include "jemalloc/internal/base.h"
|
||||
#include "jemalloc/internal/chunk.h"
|
||||
#include "jemalloc/internal/huge.h"
|
||||
#include "jemalloc/internal/rtree.h"
|
||||
#include "jemalloc/internal/tcache.h"
|
||||
#include "jemalloc/internal/hash.h"
|
||||
#include "jemalloc/internal/quarantine.h"
|
||||
#include "jemalloc/internal/prof.h"
|
||||
|
||||
#undef JEMALLOC_H_TYPES
|
||||
/******************************************************************************/
|
||||
#define JEMALLOC_H_STRUCTS
|
||||
|
||||
#include "jemalloc/internal/util.h"
|
||||
#include "jemalloc/internal/atomic.h"
|
||||
#include "jemalloc/internal/prng.h"
|
||||
#include "jemalloc/internal/ckh.h"
|
||||
#include "jemalloc/internal/size_classes.h"
|
||||
#include "jemalloc/internal/stats.h"
|
||||
#include "jemalloc/internal/ctl.h"
|
||||
#include "jemalloc/internal/mutex.h"
|
||||
#include "jemalloc/internal/tsd.h"
|
||||
#include "jemalloc/internal/mb.h"
|
||||
#include "jemalloc/internal/bitmap.h"
|
||||
#include "jemalloc/internal/extent.h"
|
||||
#include "jemalloc/internal/arena.h"
|
||||
#include "jemalloc/internal/base.h"
|
||||
#include "jemalloc/internal/chunk.h"
|
||||
#include "jemalloc/internal/huge.h"
|
||||
#include "jemalloc/internal/rtree.h"
|
||||
#include "jemalloc/internal/tcache.h"
|
||||
#include "jemalloc/internal/hash.h"
|
||||
#include "jemalloc/internal/quarantine.h"
|
||||
#include "jemalloc/internal/prof.h"
|
||||
|
||||
typedef struct {
|
||||
uint64_t allocated;
|
||||
uint64_t deallocated;
|
||||
} thread_allocated_t;
|
||||
/*
|
||||
* The JEMALLOC_CONCAT() wrapper is necessary to pass {0, 0} via a cpp macro
|
||||
* argument.
|
||||
*/
|
||||
#define THREAD_ALLOCATED_INITIALIZER JEMALLOC_CONCAT({0, 0})
|
||||
|
||||
#undef JEMALLOC_H_STRUCTS
|
||||
/******************************************************************************/
|
||||
#define JEMALLOC_H_EXTERNS
|
||||
|
||||
extern bool opt_abort;
|
||||
extern bool opt_junk;
|
||||
extern size_t opt_quarantine;
|
||||
extern bool opt_redzone;
|
||||
extern bool opt_utrace;
|
||||
extern bool opt_valgrind;
|
||||
extern bool opt_xmalloc;
|
||||
extern bool opt_zero;
|
||||
extern size_t opt_narenas;
|
||||
|
||||
/* Number of CPUs. */
|
||||
extern unsigned ncpus;
|
||||
|
||||
extern malloc_mutex_t arenas_lock; /* Protects arenas initialization. */
|
||||
/*
|
||||
* Arenas that are used to service external requests. Not all elements of the
|
||||
* arenas array are necessarily used; arenas are created lazily as needed.
|
||||
*/
|
||||
extern arena_t **arenas;
|
||||
extern unsigned narenas;
|
||||
|
||||
arena_t *arenas_extend(unsigned ind);
|
||||
void arenas_cleanup(void *arg);
|
||||
arena_t *choose_arena_hard(void);
|
||||
void jemalloc_prefork(void);
|
||||
void jemalloc_postfork_parent(void);
|
||||
void jemalloc_postfork_child(void);
|
||||
|
||||
#include "jemalloc/internal/util.h"
|
||||
#include "jemalloc/internal/atomic.h"
|
||||
#include "jemalloc/internal/prng.h"
|
||||
#include "jemalloc/internal/ckh.h"
|
||||
#include "jemalloc/internal/size_classes.h"
|
||||
#include "jemalloc/internal/stats.h"
|
||||
#include "jemalloc/internal/ctl.h"
|
||||
#include "jemalloc/internal/mutex.h"
|
||||
#include "jemalloc/internal/tsd.h"
|
||||
#include "jemalloc/internal/mb.h"
|
||||
#include "jemalloc/internal/bitmap.h"
|
||||
#include "jemalloc/internal/extent.h"
|
||||
#include "jemalloc/internal/arena.h"
|
||||
#include "jemalloc/internal/base.h"
|
||||
#include "jemalloc/internal/chunk.h"
|
||||
#include "jemalloc/internal/huge.h"
|
||||
#include "jemalloc/internal/rtree.h"
|
||||
#include "jemalloc/internal/tcache.h"
|
||||
#include "jemalloc/internal/hash.h"
|
||||
#include "jemalloc/internal/quarantine.h"
|
||||
#include "jemalloc/internal/prof.h"
|
||||
|
||||
#undef JEMALLOC_H_EXTERNS
|
||||
/******************************************************************************/
|
||||
#define JEMALLOC_H_INLINES
|
||||
|
||||
#include "jemalloc/internal/util.h"
|
||||
#include "jemalloc/internal/atomic.h"
|
||||
#include "jemalloc/internal/prng.h"
|
||||
#include "jemalloc/internal/ckh.h"
|
||||
#include "jemalloc/internal/size_classes.h"
|
||||
#include "jemalloc/internal/stats.h"
|
||||
#include "jemalloc/internal/ctl.h"
|
||||
#include "jemalloc/internal/mutex.h"
|
||||
#include "jemalloc/internal/tsd.h"
|
||||
#include "jemalloc/internal/mb.h"
|
||||
#include "jemalloc/internal/extent.h"
|
||||
#include "jemalloc/internal/base.h"
|
||||
#include "jemalloc/internal/chunk.h"
|
||||
#include "jemalloc/internal/huge.h"
|
||||
|
||||
#ifndef JEMALLOC_ENABLE_INLINE
|
||||
malloc_tsd_protos(JEMALLOC_ATTR(unused), arenas, arena_t *)
|
||||
|
||||
size_t s2u(size_t size);
|
||||
size_t sa2u(size_t size, size_t alignment);
|
||||
arena_t *choose_arena(arena_t *arena);
|
||||
#endif
|
||||
|
||||
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_))
|
||||
/*
|
||||
* Map of pthread_self() --> arenas[???], used for selecting an arena to use
|
||||
* for allocations.
|
||||
*/
|
||||
malloc_tsd_externs(arenas, arena_t *)
|
||||
malloc_tsd_funcs(JEMALLOC_INLINE, arenas, arena_t *, NULL, arenas_cleanup)
|
||||
|
||||
/*
|
||||
* Compute usable size that would result from allocating an object with the
|
||||
* specified size.
|
||||
*/
|
||||
JEMALLOC_INLINE size_t
|
||||
s2u(size_t size)
|
||||
{
|
||||
|
||||
if (size <= SMALL_MAXCLASS)
|
||||
return (arena_bin_info[SMALL_SIZE2BIN(size)].reg_size);
|
||||
if (size <= arena_maxclass)
|
||||
return (PAGE_CEILING(size));
|
||||
return (CHUNK_CEILING(size));
|
||||
}
|
||||
|
||||
/*
|
||||
* Compute usable size that would result from allocating an object with the
|
||||
* specified size and alignment.
|
||||
*/
|
||||
JEMALLOC_INLINE size_t
|
||||
sa2u(size_t size, size_t alignment)
|
||||
{
|
||||
size_t usize;
|
||||
|
||||
assert(alignment != 0 && ((alignment - 1) & alignment) == 0);
|
||||
|
||||
/*
|
||||
* Round size up to the nearest multiple of alignment.
|
||||
*
|
||||
* This done, we can take advantage of the fact that for each small
|
||||
* size class, every object is aligned at the smallest power of two
|
||||
* that is non-zero in the base two representation of the size. For
|
||||
* example:
|
||||
*
|
||||
* Size | Base 2 | Minimum alignment
|
||||
* -----+----------+------------------
|
||||
* 96 | 1100000 | 32
|
||||
* 144 | 10100000 | 32
|
||||
* 192 | 11000000 | 64
|
||||
*/
|
||||
usize = ALIGNMENT_CEILING(size, alignment);
|
||||
/*
|
||||
* (usize < size) protects against the combination of maximal
|
||||
* alignment and size greater than maximal alignment.
|
||||
*/
|
||||
if (usize < size) {
|
||||
/* size_t overflow. */
|
||||
return (0);
|
||||
}
|
||||
|
||||
if (usize <= arena_maxclass && alignment <= PAGE) {
|
||||
if (usize <= SMALL_MAXCLASS)
|
||||
return (arena_bin_info[SMALL_SIZE2BIN(usize)].reg_size);
|
||||
return (PAGE_CEILING(usize));
|
||||
} else {
|
||||
size_t run_size;
|
||||
|
||||
/*
|
||||
* We can't achieve subpage alignment, so round up alignment
|
||||
* permanently; it makes later calculations simpler.
|
||||
*/
|
||||
alignment = PAGE_CEILING(alignment);
|
||||
usize = PAGE_CEILING(size);
|
||||
/*
|
||||
* (usize < size) protects against very large sizes within
|
||||
* PAGE of SIZE_T_MAX.
|
||||
*
|
||||
* (usize + alignment < usize) protects against the
|
||||
* combination of maximal alignment and usize large enough
|
||||
* to cause overflow. This is similar to the first overflow
|
||||
* check above, but it needs to be repeated due to the new
|
||||
* usize value, which may now be *equal* to maximal
|
||||
* alignment, whereas before we only detected overflow if the
|
||||
* original size was *greater* than maximal alignment.
|
||||
*/
|
||||
if (usize < size || usize + alignment < usize) {
|
||||
/* size_t overflow. */
|
||||
return (0);
|
||||
}
|
||||
|
||||
/*
|
||||
* Calculate the size of the over-size run that arena_palloc()
|
||||
* would need to allocate in order to guarantee the alignment.
|
||||
* If the run wouldn't fit within a chunk, round up to a huge
|
||||
* allocation size.
|
||||
*/
|
||||
run_size = usize + alignment - PAGE;
|
||||
if (run_size <= arena_maxclass)
|
||||
return (PAGE_CEILING(usize));
|
||||
return (CHUNK_CEILING(usize));
|
||||
}
|
||||
}
|
||||
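The table in sa2u()'s comment relies on the fact that an object in a size class of size s is aligned to the smallest power of two set in s, which is exactly (s & -s); requests whose alignment does not exceed that value can be served from the class directly. A standalone sketch (not part of the import) checking that property for the sizes in the table.

/*
 * Standalone sketch: minimum alignment of a size class is (s & -s).
 */
#include <assert.h>

int
main(void)
{
    assert((96 & -96) == 32);      /*  1100000 -> 32 */
    assert((144 & -144) == 16);    /* 10010000 -> 16 */
    assert((192 & -192) == 64);    /* 11000000 -> 64 */
    return (0);
}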
|
||||
/* Choose an arena based on a per-thread value. */
|
||||
JEMALLOC_INLINE arena_t *
|
||||
choose_arena(arena_t *arena)
|
||||
{
|
||||
arena_t *ret;
|
||||
|
||||
if (arena != NULL)
|
||||
return (arena);
|
||||
|
||||
if ((ret = *arenas_tsd_get()) == NULL) {
|
||||
ret = choose_arena_hard();
|
||||
assert(ret != NULL);
|
||||
}
|
||||
|
||||
return (ret);
|
||||
}
|
||||
#endif
|
||||
|
||||
#include "jemalloc/internal/bitmap.h"
|
||||
#include "jemalloc/internal/rtree.h"
|
||||
#include "jemalloc/internal/tcache.h"
|
||||
#include "jemalloc/internal/arena.h"
|
||||
#include "jemalloc/internal/hash.h"
|
||||
#include "jemalloc/internal/quarantine.h"
|
||||
|
||||
#ifndef JEMALLOC_ENABLE_INLINE
|
||||
void *imalloc(size_t size);
|
||||
void *icalloc(size_t size);
|
||||
void *ipalloc(size_t usize, size_t alignment, bool zero);
|
||||
size_t isalloc(const void *ptr, bool demote);
|
||||
size_t ivsalloc(const void *ptr, bool demote);
|
||||
size_t u2rz(size_t usize);
|
||||
size_t p2rz(const void *ptr);
|
||||
void idalloc(void *ptr);
|
||||
void iqalloc(void *ptr);
|
||||
void *iralloc(void *ptr, size_t size, size_t extra, size_t alignment,
|
||||
bool zero, bool no_move);
|
||||
malloc_tsd_protos(JEMALLOC_ATTR(unused), thread_allocated, thread_allocated_t)
|
||||
#endif
|
||||
|
||||
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_C_))
|
||||
JEMALLOC_INLINE void *
|
||||
imalloc(size_t size)
|
||||
{
|
||||
|
||||
assert(size != 0);
|
||||
|
||||
if (size <= arena_maxclass)
|
||||
return (arena_malloc(NULL, size, false, true));
|
||||
else
|
||||
return (huge_malloc(size, false));
|
||||
}
|
||||
|
||||
JEMALLOC_INLINE void *
|
||||
icalloc(size_t size)
|
||||
{
|
||||
|
||||
if (size <= arena_maxclass)
|
||||
return (arena_malloc(NULL, size, true, true));
|
||||
else
|
||||
return (huge_malloc(size, true));
|
||||
}
|
||||
|
||||
JEMALLOC_INLINE void *
|
||||
ipalloc(size_t usize, size_t alignment, bool zero)
|
||||
{
|
||||
void *ret;
|
||||
|
||||
assert(usize != 0);
|
||||
assert(usize == sa2u(usize, alignment));
|
||||
|
||||
if (usize <= arena_maxclass && alignment <= PAGE)
|
||||
ret = arena_malloc(NULL, usize, zero, true);
|
||||
else {
|
||||
if (usize <= arena_maxclass) {
|
||||
ret = arena_palloc(choose_arena(NULL), usize, alignment,
|
||||
zero);
|
||||
} else if (alignment <= chunksize)
|
||||
ret = huge_malloc(usize, zero);
|
||||
else
|
||||
ret = huge_palloc(usize, alignment, zero);
|
||||
}
|
||||
|
||||
assert(ALIGNMENT_ADDR2BASE(ret, alignment) == ret);
|
||||
return (ret);
|
||||
}
|
||||
|
||||
/*
|
||||
* Typical usage:
|
||||
* void *ptr = [...]
|
||||
* size_t sz = isalloc(ptr, config_prof);
|
||||
*/
|
||||
JEMALLOC_INLINE size_t
|
||||
isalloc(const void *ptr, bool demote)
|
||||
{
|
||||
size_t ret;
|
||||
arena_chunk_t *chunk;
|
||||
|
||||
assert(ptr != NULL);
|
||||
/* Demotion only makes sense if config_prof is true. */
|
||||
assert(config_prof || demote == false);
|
||||
|
||||
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
|
||||
if (chunk != ptr) {
|
||||
/* Region. */
|
||||
ret = arena_salloc(ptr, demote);
|
||||
} else
|
||||
ret = huge_salloc(ptr);
|
||||
|
||||
return (ret);
|
||||
}

JEMALLOC_INLINE size_t
ivsalloc(const void *ptr, bool demote)
{

    /* Return 0 if ptr is not within a chunk managed by jemalloc. */
    if (rtree_get(chunks_rtree, (uintptr_t)CHUNK_ADDR2BASE(ptr)) == NULL)
        return (0);

    return (isalloc(ptr, demote));
}

JEMALLOC_INLINE size_t
u2rz(size_t usize)
{
    size_t ret;

    if (usize <= SMALL_MAXCLASS) {
        size_t binind = SMALL_SIZE2BIN(usize);
        ret = arena_bin_info[binind].redzone_size;
    } else
        ret = 0;

    return (ret);
}

JEMALLOC_INLINE size_t
p2rz(const void *ptr)
{
    size_t usize = isalloc(ptr, false);

    return (u2rz(usize));
}

JEMALLOC_INLINE void
idalloc(void *ptr)
{
    arena_chunk_t *chunk;

    assert(ptr != NULL);

    chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
    if (chunk != ptr)
        arena_dalloc(chunk->arena, chunk, ptr, true);
    else
        huge_dalloc(ptr, true);
}

JEMALLOC_INLINE void
iqalloc(void *ptr)
{

    if (config_fill && opt_quarantine)
        quarantine(ptr);
    else
        idalloc(ptr);
}

JEMALLOC_INLINE void *
iralloc(void *ptr, size_t size, size_t extra, size_t alignment, bool zero,
    bool no_move)
{
    void *ret;
    size_t oldsize;

    assert(ptr != NULL);
    assert(size != 0);

    oldsize = isalloc(ptr, config_prof);

    if (alignment != 0 && ((uintptr_t)ptr & ((uintptr_t)alignment-1))
        != 0) {
        size_t usize, copysize;

        /*
         * Existing object alignment is inadequate; allocate new space
         * and copy.
         */
        if (no_move)
            return (NULL);
        usize = sa2u(size + extra, alignment);
        if (usize == 0)
            return (NULL);
        ret = ipalloc(usize, alignment, zero);
        if (ret == NULL) {
            if (extra == 0)
                return (NULL);
            /* Try again, without extra this time. */
            usize = sa2u(size, alignment);
            if (usize == 0)
                return (NULL);
            ret = ipalloc(usize, alignment, zero);
            if (ret == NULL)
                return (NULL);
        }
        /*
         * Copy at most size bytes (not size+extra), since the caller
         * has no expectation that the extra bytes will be reliably
         * preserved.
         */
        copysize = (size < oldsize) ? size : oldsize;
        memcpy(ret, ptr, copysize);
        iqalloc(ptr);
        return (ret);
    }

    if (no_move) {
        if (size <= arena_maxclass) {
            return (arena_ralloc_no_move(ptr, oldsize, size,
                extra, zero));
        } else {
            return (huge_ralloc_no_move(ptr, oldsize, size,
                extra));
        }
    } else {
        if (size + extra <= arena_maxclass) {
            return (arena_ralloc(ptr, oldsize, size, extra,
                alignment, zero, true));
        } else {
            return (huge_ralloc(ptr, oldsize, size, extra,
                alignment, zero));
        }
    }
}
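The misalignment test in iralloc() is the standard power-of-two trick: for a power-of-two alignment, ptr & (alignment - 1) isolates the offset within an aligned block. A quick check:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uintptr_t ptr = 0x2030;
        uintptr_t alignment = 0x1000;   /* must be a power of two */

        /* Nonzero low bits mean the object cannot satisfy the alignment. */
        printf("misaligned: %d\n", (ptr & (alignment - 1)) != 0);
        return (0);
    }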

malloc_tsd_externs(thread_allocated, thread_allocated_t)
malloc_tsd_funcs(JEMALLOC_INLINE, thread_allocated, thread_allocated_t,
    THREAD_ALLOCATED_INITIALIZER, malloc_tsd_no_cleanup)
#endif

#include "jemalloc/internal/prof.h"

#undef JEMALLOC_H_INLINES
/******************************************************************************/
115
contrib/jemalloc/include/jemalloc/internal/mb.h
Normal file
@ -0,0 +1,115 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#ifndef JEMALLOC_ENABLE_INLINE
void    mb_write(void);
#endif

#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MB_C_))
#ifdef __i386__
/*
 * According to the Intel Architecture Software Developer's Manual, current
 * processors execute instructions in order from the perspective of other
 * processors in a multiprocessor system, but 1) Intel reserves the right to
 * change that, and 2) the compiler's optimizer could re-order instructions if
 * there weren't some form of barrier.  Therefore, even if running on an
 * architecture that does not need memory barriers (everything through at least
 * i686), an "optimizer barrier" is necessary.
 */
JEMALLOC_INLINE void
mb_write(void)
{

# if 0
    /* This is a true memory barrier. */
    asm volatile ("pusha;"
        "xor %%eax,%%eax;"
        "cpuid;"
        "popa;"
        : /* Outputs. */
        : /* Inputs. */
        : "memory" /* Clobbers. */
        );
#else
    /*
     * This is hopefully enough to keep the compiler from reordering
     * instructions around this one.
     */
    asm volatile ("nop;"
        : /* Outputs. */
        : /* Inputs. */
        : "memory" /* Clobbers. */
        );
#endif
}
#elif (defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE void
mb_write(void)
{

    asm volatile ("sfence"
        : /* Outputs. */
        : /* Inputs. */
        : "memory" /* Clobbers. */
        );
}
#elif defined(__powerpc__)
JEMALLOC_INLINE void
mb_write(void)
{

    asm volatile ("eieio"
        : /* Outputs. */
        : /* Inputs. */
        : "memory" /* Clobbers. */
        );
}
#elif defined(__sparc64__)
JEMALLOC_INLINE void
mb_write(void)
{

    asm volatile ("membar #StoreStore"
        : /* Outputs. */
        : /* Inputs. */
        : "memory" /* Clobbers. */
        );
}
#elif defined(__tile__)
JEMALLOC_INLINE void
mb_write(void)
{

    __sync_synchronize();
}
#else
/*
 * This is much slower than a simple memory barrier, but the semantics of mutex
 * unlock make this work.
 */
JEMALLOC_INLINE void
mb_write(void)
{
    malloc_mutex_t mtx;

    malloc_mutex_init(&mtx);
    malloc_mutex_lock(&mtx);
    malloc_mutex_unlock(&mtx);
}
#endif
#endif

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
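The canonical client of mb_write() is a publish idiom: order the data stores before the store that makes them visible to other CPUs. A minimal sketch (assumes the mb.h above is in scope; the payload/ready names are invented for illustration, and a real reader would pair this with its own barrier):

    #include <stdbool.h>

    static int payload;
    static volatile bool ready;

    static void
    publish(int v)
    {
        payload = v;
        mb_write();     /* keep the payload store ahead of the flag store */
        ready = true;
    }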
88
contrib/jemalloc/include/jemalloc/internal/mutex.h
Normal file
@ -0,0 +1,88 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

typedef struct malloc_mutex_s malloc_mutex_t;

#ifdef JEMALLOC_OSSPIN
#define MALLOC_MUTEX_INITIALIZER {0}
#elif (defined(JEMALLOC_MUTEX_INIT_CB))
#define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER, NULL}
#else
# if (defined(PTHREAD_MUTEX_ADAPTIVE_NP) && \
     defined(PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP))
#  define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_ADAPTIVE_NP
#  define MALLOC_MUTEX_INITIALIZER {PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP}
# else
#  define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_DEFAULT
#  define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER}
# endif
#endif

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

struct malloc_mutex_s {
#ifdef JEMALLOC_OSSPIN
    OSSpinLock      lock;
#elif (defined(JEMALLOC_MUTEX_INIT_CB))
    pthread_mutex_t lock;
    malloc_mutex_t  *postponed_next;
#else
    pthread_mutex_t lock;
#endif
};

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

#ifdef JEMALLOC_LAZY_LOCK
extern bool isthreaded;
#endif

bool    malloc_mutex_init(malloc_mutex_t *mutex);
void    malloc_mutex_prefork(malloc_mutex_t *mutex);
void    malloc_mutex_postfork_parent(malloc_mutex_t *mutex);
void    malloc_mutex_postfork_child(malloc_mutex_t *mutex);
bool    mutex_boot(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#ifndef JEMALLOC_ENABLE_INLINE
void    malloc_mutex_lock(malloc_mutex_t *mutex);
void    malloc_mutex_unlock(malloc_mutex_t *mutex);
#endif

#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MUTEX_C_))
JEMALLOC_INLINE void
malloc_mutex_lock(malloc_mutex_t *mutex)
{

    if (isthreaded) {
#ifdef JEMALLOC_OSSPIN
        OSSpinLockLock(&mutex->lock);
#else
        pthread_mutex_lock(&mutex->lock);
#endif
    }
}

JEMALLOC_INLINE void
malloc_mutex_unlock(malloc_mutex_t *mutex)
{

    if (isthreaded) {
#ifdef JEMALLOC_OSSPIN
        OSSpinLockUnlock(&mutex->lock);
#else
        pthread_mutex_unlock(&mutex->lock);
#endif
    }
}
#endif

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
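The isthreaded guard is the lazy-locking optimization: until something spawns a thread, every lock/unlock collapses to an untaken branch with no atomic traffic. A simplified standalone sketch of the idea (names mirror the header above, but this is illustration only):

    #include <stdbool.h>
    #include <pthread.h>

    static bool isthreaded = false;   /* flipped on first pthread_create() */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void
    demo_lock(void)
    {
        if (isthreaded)               /* single-threaded: plain branch only */
            pthread_mutex_lock(&lock);
    }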
274
contrib/jemalloc/include/jemalloc/internal/private_namespace.h
Normal file
@ -0,0 +1,274 @@
#define arena_alloc_junk_small JEMALLOC_N(arena_alloc_junk_small)
#define arena_bin_index JEMALLOC_N(arena_bin_index)
#define arena_boot JEMALLOC_N(arena_boot)
#define arena_dalloc JEMALLOC_N(arena_dalloc)
#define arena_dalloc_bin JEMALLOC_N(arena_dalloc_bin)
#define arena_dalloc_junk_small JEMALLOC_N(arena_dalloc_junk_small)
#define arena_dalloc_large JEMALLOC_N(arena_dalloc_large)
#define arena_malloc JEMALLOC_N(arena_malloc)
#define arena_malloc_large JEMALLOC_N(arena_malloc_large)
#define arena_malloc_small JEMALLOC_N(arena_malloc_small)
#define arena_new JEMALLOC_N(arena_new)
#define arena_palloc JEMALLOC_N(arena_palloc)
#define arena_postfork_child JEMALLOC_N(arena_postfork_child)
#define arena_postfork_parent JEMALLOC_N(arena_postfork_parent)
#define arena_prefork JEMALLOC_N(arena_prefork)
#define arena_prof_accum JEMALLOC_N(arena_prof_accum)
#define arena_prof_ctx_get JEMALLOC_N(arena_prof_ctx_get)
#define arena_prof_ctx_set JEMALLOC_N(arena_prof_ctx_set)
#define arena_prof_promoted JEMALLOC_N(arena_prof_promoted)
#define arena_purge_all JEMALLOC_N(arena_purge_all)
#define arena_ralloc JEMALLOC_N(arena_ralloc)
#define arena_ralloc_no_move JEMALLOC_N(arena_ralloc_no_move)
#define arena_run_regind JEMALLOC_N(arena_run_regind)
#define arena_salloc JEMALLOC_N(arena_salloc)
#define arena_stats_merge JEMALLOC_N(arena_stats_merge)
#define arena_tcache_fill_small JEMALLOC_N(arena_tcache_fill_small)
#define arenas_bin_i_index JEMALLOC_N(arenas_bin_i_index)
#define arenas_cleanup JEMALLOC_N(arenas_cleanup)
#define arenas_extend JEMALLOC_N(arenas_extend)
#define arenas_lrun_i_index JEMALLOC_N(arenas_lrun_i_index)
#define arenas_tls JEMALLOC_N(arenas_tls)
#define arenas_tsd_boot JEMALLOC_N(arenas_tsd_boot)
#define arenas_tsd_cleanup_wrapper JEMALLOC_N(arenas_tsd_cleanup_wrapper)
#define arenas_tsd_get JEMALLOC_N(arenas_tsd_get)
#define arenas_tsd_set JEMALLOC_N(arenas_tsd_set)
#define atomic_add_u JEMALLOC_N(atomic_add_u)
#define atomic_add_uint32 JEMALLOC_N(atomic_add_uint32)
#define atomic_add_uint64 JEMALLOC_N(atomic_add_uint64)
#define atomic_add_z JEMALLOC_N(atomic_add_z)
#define atomic_sub_u JEMALLOC_N(atomic_sub_u)
#define atomic_sub_uint32 JEMALLOC_N(atomic_sub_uint32)
#define atomic_sub_uint64 JEMALLOC_N(atomic_sub_uint64)
#define atomic_sub_z JEMALLOC_N(atomic_sub_z)
#define base_alloc JEMALLOC_N(base_alloc)
#define base_boot JEMALLOC_N(base_boot)
#define base_calloc JEMALLOC_N(base_calloc)
#define base_node_alloc JEMALLOC_N(base_node_alloc)
#define base_node_dealloc JEMALLOC_N(base_node_dealloc)
#define base_postfork_child JEMALLOC_N(base_postfork_child)
#define base_postfork_parent JEMALLOC_N(base_postfork_parent)
#define base_prefork JEMALLOC_N(base_prefork)
#define bitmap_full JEMALLOC_N(bitmap_full)
#define bitmap_get JEMALLOC_N(bitmap_get)
#define bitmap_info_init JEMALLOC_N(bitmap_info_init)
#define bitmap_info_ngroups JEMALLOC_N(bitmap_info_ngroups)
#define bitmap_init JEMALLOC_N(bitmap_init)
#define bitmap_set JEMALLOC_N(bitmap_set)
#define bitmap_sfu JEMALLOC_N(bitmap_sfu)
#define bitmap_size JEMALLOC_N(bitmap_size)
#define bitmap_unset JEMALLOC_N(bitmap_unset)
#define bt_init JEMALLOC_N(bt_init)
#define buferror JEMALLOC_N(buferror)
#define choose_arena JEMALLOC_N(choose_arena)
#define choose_arena_hard JEMALLOC_N(choose_arena_hard)
#define chunk_alloc JEMALLOC_N(chunk_alloc)
#define chunk_alloc_dss JEMALLOC_N(chunk_alloc_dss)
#define chunk_alloc_mmap JEMALLOC_N(chunk_alloc_mmap)
#define chunk_boot0 JEMALLOC_N(chunk_boot0)
#define chunk_boot1 JEMALLOC_N(chunk_boot1)
#define chunk_dealloc JEMALLOC_N(chunk_dealloc)
#define chunk_dealloc_mmap JEMALLOC_N(chunk_dealloc_mmap)
#define chunk_dss_boot JEMALLOC_N(chunk_dss_boot)
#define chunk_dss_postfork_child JEMALLOC_N(chunk_dss_postfork_child)
#define chunk_dss_postfork_parent JEMALLOC_N(chunk_dss_postfork_parent)
#define chunk_dss_prefork JEMALLOC_N(chunk_dss_prefork)
#define chunk_in_dss JEMALLOC_N(chunk_in_dss)
#define chunk_mmap_boot JEMALLOC_N(chunk_mmap_boot)
#define ckh_bucket_search JEMALLOC_N(ckh_bucket_search)
#define ckh_count JEMALLOC_N(ckh_count)
#define ckh_delete JEMALLOC_N(ckh_delete)
#define ckh_evict_reloc_insert JEMALLOC_N(ckh_evict_reloc_insert)
#define ckh_insert JEMALLOC_N(ckh_insert)
#define ckh_isearch JEMALLOC_N(ckh_isearch)
#define ckh_iter JEMALLOC_N(ckh_iter)
#define ckh_new JEMALLOC_N(ckh_new)
#define ckh_pointer_hash JEMALLOC_N(ckh_pointer_hash)
#define ckh_pointer_keycomp JEMALLOC_N(ckh_pointer_keycomp)
#define ckh_rebuild JEMALLOC_N(ckh_rebuild)
#define ckh_remove JEMALLOC_N(ckh_remove)
#define ckh_search JEMALLOC_N(ckh_search)
#define ckh_string_hash JEMALLOC_N(ckh_string_hash)
#define ckh_string_keycomp JEMALLOC_N(ckh_string_keycomp)
#define ckh_try_bucket_insert JEMALLOC_N(ckh_try_bucket_insert)
#define ckh_try_insert JEMALLOC_N(ckh_try_insert)
#define ctl_boot JEMALLOC_N(ctl_boot)
#define ctl_bymib JEMALLOC_N(ctl_bymib)
#define ctl_byname JEMALLOC_N(ctl_byname)
#define ctl_nametomib JEMALLOC_N(ctl_nametomib)
#define extent_tree_ad_first JEMALLOC_N(extent_tree_ad_first)
#define extent_tree_ad_insert JEMALLOC_N(extent_tree_ad_insert)
#define extent_tree_ad_iter JEMALLOC_N(extent_tree_ad_iter)
#define extent_tree_ad_iter_recurse JEMALLOC_N(extent_tree_ad_iter_recurse)
#define extent_tree_ad_iter_start JEMALLOC_N(extent_tree_ad_iter_start)
#define extent_tree_ad_last JEMALLOC_N(extent_tree_ad_last)
#define extent_tree_ad_new JEMALLOC_N(extent_tree_ad_new)
#define extent_tree_ad_next JEMALLOC_N(extent_tree_ad_next)
#define extent_tree_ad_nsearch JEMALLOC_N(extent_tree_ad_nsearch)
#define extent_tree_ad_prev JEMALLOC_N(extent_tree_ad_prev)
#define extent_tree_ad_psearch JEMALLOC_N(extent_tree_ad_psearch)
#define extent_tree_ad_remove JEMALLOC_N(extent_tree_ad_remove)
#define extent_tree_ad_reverse_iter JEMALLOC_N(extent_tree_ad_reverse_iter)
#define extent_tree_ad_reverse_iter_recurse JEMALLOC_N(extent_tree_ad_reverse_iter_recurse)
#define extent_tree_ad_reverse_iter_start JEMALLOC_N(extent_tree_ad_reverse_iter_start)
#define extent_tree_ad_search JEMALLOC_N(extent_tree_ad_search)
#define extent_tree_szad_first JEMALLOC_N(extent_tree_szad_first)
#define extent_tree_szad_insert JEMALLOC_N(extent_tree_szad_insert)
#define extent_tree_szad_iter JEMALLOC_N(extent_tree_szad_iter)
#define extent_tree_szad_iter_recurse JEMALLOC_N(extent_tree_szad_iter_recurse)
#define extent_tree_szad_iter_start JEMALLOC_N(extent_tree_szad_iter_start)
#define extent_tree_szad_last JEMALLOC_N(extent_tree_szad_last)
#define extent_tree_szad_new JEMALLOC_N(extent_tree_szad_new)
#define extent_tree_szad_next JEMALLOC_N(extent_tree_szad_next)
#define extent_tree_szad_nsearch JEMALLOC_N(extent_tree_szad_nsearch)
#define extent_tree_szad_prev JEMALLOC_N(extent_tree_szad_prev)
#define extent_tree_szad_psearch JEMALLOC_N(extent_tree_szad_psearch)
#define extent_tree_szad_remove JEMALLOC_N(extent_tree_szad_remove)
#define extent_tree_szad_reverse_iter JEMALLOC_N(extent_tree_szad_reverse_iter)
#define extent_tree_szad_reverse_iter_recurse JEMALLOC_N(extent_tree_szad_reverse_iter_recurse)
#define extent_tree_szad_reverse_iter_start JEMALLOC_N(extent_tree_szad_reverse_iter_start)
#define extent_tree_szad_search JEMALLOC_N(extent_tree_szad_search)
#define hash JEMALLOC_N(hash)
#define huge_boot JEMALLOC_N(huge_boot)
#define huge_dalloc JEMALLOC_N(huge_dalloc)
#define huge_malloc JEMALLOC_N(huge_malloc)
#define huge_palloc JEMALLOC_N(huge_palloc)
#define huge_postfork_child JEMALLOC_N(huge_postfork_child)
#define huge_postfork_parent JEMALLOC_N(huge_postfork_parent)
#define huge_prefork JEMALLOC_N(huge_prefork)
#define huge_prof_ctx_get JEMALLOC_N(huge_prof_ctx_get)
#define huge_prof_ctx_set JEMALLOC_N(huge_prof_ctx_set)
#define huge_ralloc JEMALLOC_N(huge_ralloc)
#define huge_ralloc_no_move JEMALLOC_N(huge_ralloc_no_move)
#define huge_salloc JEMALLOC_N(huge_salloc)
#define iallocm JEMALLOC_N(iallocm)
#define icalloc JEMALLOC_N(icalloc)
#define idalloc JEMALLOC_N(idalloc)
#define imalloc JEMALLOC_N(imalloc)
#define ipalloc JEMALLOC_N(ipalloc)
#define iqalloc JEMALLOC_N(iqalloc)
#define iralloc JEMALLOC_N(iralloc)
#define isalloc JEMALLOC_N(isalloc)
#define ivsalloc JEMALLOC_N(ivsalloc)
#define jemalloc_postfork_child JEMALLOC_N(jemalloc_postfork_child)
#define jemalloc_postfork_parent JEMALLOC_N(jemalloc_postfork_parent)
#define jemalloc_prefork JEMALLOC_N(jemalloc_prefork)
#define malloc_cprintf JEMALLOC_N(malloc_cprintf)
#define malloc_mutex_init JEMALLOC_N(malloc_mutex_init)
#define malloc_mutex_lock JEMALLOC_N(malloc_mutex_lock)
#define malloc_mutex_postfork_child JEMALLOC_N(malloc_mutex_postfork_child)
#define malloc_mutex_postfork_parent JEMALLOC_N(malloc_mutex_postfork_parent)
#define malloc_mutex_prefork JEMALLOC_N(malloc_mutex_prefork)
#define malloc_mutex_unlock JEMALLOC_N(malloc_mutex_unlock)
#define malloc_printf JEMALLOC_N(malloc_printf)
#define malloc_snprintf JEMALLOC_N(malloc_snprintf)
#define malloc_strtoumax JEMALLOC_N(malloc_strtoumax)
#define malloc_tsd_boot JEMALLOC_N(malloc_tsd_boot)
#define malloc_tsd_cleanup_register JEMALLOC_N(malloc_tsd_cleanup_register)
#define malloc_tsd_dalloc JEMALLOC_N(malloc_tsd_dalloc)
#define malloc_tsd_malloc JEMALLOC_N(malloc_tsd_malloc)
#define malloc_tsd_no_cleanup JEMALLOC_N(malloc_tsd_no_cleanup)
#define malloc_vcprintf JEMALLOC_N(malloc_vcprintf)
#define malloc_vsnprintf JEMALLOC_N(malloc_vsnprintf)
#define malloc_write JEMALLOC_N(malloc_write)
#define mb_write JEMALLOC_N(mb_write)
#define mmap_unaligned_tsd_boot JEMALLOC_N(mmap_unaligned_tsd_boot)
#define mmap_unaligned_tsd_cleanup_wrapper JEMALLOC_N(mmap_unaligned_tsd_cleanup_wrapper)
#define mmap_unaligned_tsd_get JEMALLOC_N(mmap_unaligned_tsd_get)
#define mmap_unaligned_tsd_set JEMALLOC_N(mmap_unaligned_tsd_set)
#define mutex_boot JEMALLOC_N(mutex_boot)
#define opt_abort JEMALLOC_N(opt_abort)
#define opt_junk JEMALLOC_N(opt_junk)
#define opt_lg_chunk JEMALLOC_N(opt_lg_chunk)
#define opt_lg_dirty_mult JEMALLOC_N(opt_lg_dirty_mult)
#define opt_lg_prof_interval JEMALLOC_N(opt_lg_prof_interval)
#define opt_lg_prof_sample JEMALLOC_N(opt_lg_prof_sample)
#define opt_lg_tcache_max JEMALLOC_N(opt_lg_tcache_max)
#define opt_narenas JEMALLOC_N(opt_narenas)
#define opt_prof JEMALLOC_N(opt_prof)
#define opt_prof_accum JEMALLOC_N(opt_prof_accum)
#define opt_prof_active JEMALLOC_N(opt_prof_active)
#define opt_prof_gdump JEMALLOC_N(opt_prof_gdump)
#define opt_prof_leak JEMALLOC_N(opt_prof_leak)
#define opt_stats_print JEMALLOC_N(opt_stats_print)
#define opt_tcache JEMALLOC_N(opt_tcache)
#define opt_utrace JEMALLOC_N(opt_utrace)
#define opt_xmalloc JEMALLOC_N(opt_xmalloc)
#define opt_zero JEMALLOC_N(opt_zero)
#define p2rz JEMALLOC_N(p2rz)
#define pow2_ceil JEMALLOC_N(pow2_ceil)
#define prof_backtrace JEMALLOC_N(prof_backtrace)
#define prof_boot0 JEMALLOC_N(prof_boot0)
#define prof_boot1 JEMALLOC_N(prof_boot1)
#define prof_boot2 JEMALLOC_N(prof_boot2)
#define prof_ctx_get JEMALLOC_N(prof_ctx_get)
#define prof_ctx_set JEMALLOC_N(prof_ctx_set)
#define prof_free JEMALLOC_N(prof_free)
#define prof_gdump JEMALLOC_N(prof_gdump)
#define prof_idump JEMALLOC_N(prof_idump)
#define prof_lookup JEMALLOC_N(prof_lookup)
#define prof_malloc JEMALLOC_N(prof_malloc)
#define prof_mdump JEMALLOC_N(prof_mdump)
#define prof_realloc JEMALLOC_N(prof_realloc)
#define prof_sample_accum_update JEMALLOC_N(prof_sample_accum_update)
#define prof_sample_threshold_update JEMALLOC_N(prof_sample_threshold_update)
#define prof_tdata_cleanup JEMALLOC_N(prof_tdata_cleanup)
#define prof_tdata_tsd_boot JEMALLOC_N(prof_tdata_tsd_boot)
#define prof_tdata_tsd_cleanup_wrapper JEMALLOC_N(prof_tdata_tsd_cleanup_wrapper)
#define prof_tdata_tsd_get JEMALLOC_N(prof_tdata_tsd_get)
#define prof_tdata_tsd_set JEMALLOC_N(prof_tdata_tsd_set)
#define pthread_create JEMALLOC_N(pthread_create)
#define quarantine JEMALLOC_N(quarantine)
#define quarantine_boot JEMALLOC_N(quarantine_boot)
#define quarantine_tsd_boot JEMALLOC_N(quarantine_tsd_boot)
#define quarantine_tsd_cleanup_wrapper JEMALLOC_N(quarantine_tsd_cleanup_wrapper)
#define quarantine_tsd_get JEMALLOC_N(quarantine_tsd_get)
#define quarantine_tsd_set JEMALLOC_N(quarantine_tsd_set)
#define register_zone JEMALLOC_N(register_zone)
#define rtree_get JEMALLOC_N(rtree_get)
#define rtree_get_locked JEMALLOC_N(rtree_get_locked)
#define rtree_new JEMALLOC_N(rtree_new)
#define rtree_set JEMALLOC_N(rtree_set)
#define s2u JEMALLOC_N(s2u)
#define sa2u JEMALLOC_N(sa2u)
#define stats_arenas_i_bins_j_index JEMALLOC_N(stats_arenas_i_bins_j_index)
#define stats_arenas_i_index JEMALLOC_N(stats_arenas_i_index)
#define stats_arenas_i_lruns_j_index JEMALLOC_N(stats_arenas_i_lruns_j_index)
#define stats_cactive JEMALLOC_N(stats_cactive)
#define stats_cactive_add JEMALLOC_N(stats_cactive_add)
#define stats_cactive_get JEMALLOC_N(stats_cactive_get)
#define stats_cactive_sub JEMALLOC_N(stats_cactive_sub)
#define stats_print JEMALLOC_N(stats_print)
#define tcache_alloc_easy JEMALLOC_N(tcache_alloc_easy)
#define tcache_alloc_large JEMALLOC_N(tcache_alloc_large)
#define tcache_alloc_small JEMALLOC_N(tcache_alloc_small)
#define tcache_alloc_small_hard JEMALLOC_N(tcache_alloc_small_hard)
#define tcache_arena_associate JEMALLOC_N(tcache_arena_associate)
#define tcache_arena_dissociate JEMALLOC_N(tcache_arena_dissociate)
#define tcache_bin_flush_large JEMALLOC_N(tcache_bin_flush_large)
#define tcache_bin_flush_small JEMALLOC_N(tcache_bin_flush_small)
#define tcache_boot0 JEMALLOC_N(tcache_boot0)
#define tcache_boot1 JEMALLOC_N(tcache_boot1)
#define tcache_create JEMALLOC_N(tcache_create)
#define tcache_dalloc_large JEMALLOC_N(tcache_dalloc_large)
#define tcache_dalloc_small JEMALLOC_N(tcache_dalloc_small)
#define tcache_destroy JEMALLOC_N(tcache_destroy)
#define tcache_enabled_get JEMALLOC_N(tcache_enabled_get)
#define tcache_enabled_set JEMALLOC_N(tcache_enabled_set)
#define tcache_enabled_tsd_boot JEMALLOC_N(tcache_enabled_tsd_boot)
#define tcache_enabled_tsd_cleanup_wrapper JEMALLOC_N(tcache_enabled_tsd_cleanup_wrapper)
#define tcache_enabled_tsd_get JEMALLOC_N(tcache_enabled_tsd_get)
#define tcache_enabled_tsd_set JEMALLOC_N(tcache_enabled_tsd_set)
#define tcache_event JEMALLOC_N(tcache_event)
#define tcache_flush JEMALLOC_N(tcache_flush)
#define tcache_stats_merge JEMALLOC_N(tcache_stats_merge)
#define tcache_thread_cleanup JEMALLOC_N(tcache_thread_cleanup)
#define tcache_tsd_boot JEMALLOC_N(tcache_tsd_boot)
#define tcache_tsd_cleanup_wrapper JEMALLOC_N(tcache_tsd_cleanup_wrapper)
#define tcache_tsd_get JEMALLOC_N(tcache_tsd_get)
#define tcache_tsd_set JEMALLOC_N(tcache_tsd_set)
#define thread_allocated_tsd_boot JEMALLOC_N(thread_allocated_tsd_boot)
#define thread_allocated_tsd_cleanup_wrapper JEMALLOC_N(thread_allocated_tsd_cleanup_wrapper)
#define thread_allocated_tsd_get JEMALLOC_N(thread_allocated_tsd_get)
#define thread_allocated_tsd_set JEMALLOC_N(thread_allocated_tsd_set)
#define u2rz JEMALLOC_N(u2rz)
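These mappings take effect through JEMALLOC_N(), which prepends the configured private prefix so jemalloc's internal symbols neither collide with nor leak into libc's namespace. A hypothetical expansion (the actual prefix is chosen at configure time):

    #define JEMALLOC_N(n) __jemalloc_##n   /* assumed prefix for illustration */
    #define u2rz JEMALLOC_N(u2rz)
    /* Every internal reference to u2rz now compiles to __jemalloc_u2rz. */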
60
contrib/jemalloc/include/jemalloc/internal/prng.h
Normal file
@ -0,0 +1,60 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

/*
 * Simple linear congruential pseudo-random number generator:
 *
 *   prng(y) = (a*x + c) % m
 *
 * where the following constants ensure maximal period:
 *
 *   a == Odd number (relatively prime to 2^n), and (a-1) is a multiple of 4.
 *   c == Odd number (relatively prime to 2^n).
 *   m == 2^32
 *
 * See Knuth's TAOCP 3rd Ed., Vol. 2, pg. 17 for details on these constraints.
 *
 * This choice of m has the disadvantage that the quality of the bits is
 * proportional to bit position.  For example, the lowest bit has a cycle of 2,
 * the next has a cycle of 4, etc.  For this reason, we prefer to use the upper
 * bits.
 *
 * Macro parameters:
 *   uint32_t r          : Result.
 *   unsigned lg_range   : (0..32], number of least significant bits to return.
 *   uint32_t state      : Seed value.
 *   const uint32_t a, c : See above discussion.
 */
#define prng32(r, lg_range, state, a, c) do {   \
    assert(lg_range > 0);                       \
    assert(lg_range <= 32);                     \
                                                \
    r = (state * (a)) + (c);                    \
    state = r;                                  \
    r >>= (32 - lg_range);                      \
} while (false)

/* Same as prng32(), but 64 bits of pseudo-randomness, using uint64_t. */
#define prng64(r, lg_range, state, a, c) do {   \
    assert(lg_range > 0);                       \
    assert(lg_range <= 64);                     \
                                                \
    r = (state * (a)) + (c);                    \
    state = r;                                  \
    r >>= (64 - lg_range);                      \
} while (false)
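A small usage sketch, assuming the prng32() macro above is in scope (the a/c constants are one valid choice satisfying the stated constraints: a odd with a-1 divisible by 4, c odd):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint32_t state = 42, r;

        for (int i = 0; i < 4; i++) {
            prng32(r, 8, state, 1103515245U, 12345U);
            printf("%u\n", r);    /* upper 8 bits: values in [0, 255] */
        }
        return (0);
    }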

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
535
contrib/jemalloc/include/jemalloc/internal/prof.h
Normal file
@ -0,0 +1,535 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

typedef struct prof_bt_s prof_bt_t;
typedef struct prof_cnt_s prof_cnt_t;
typedef struct prof_thr_cnt_s prof_thr_cnt_t;
typedef struct prof_ctx_s prof_ctx_t;
typedef struct prof_tdata_s prof_tdata_t;

/* Option defaults. */
#define PROF_PREFIX_DEFAULT      "jeprof"
#define LG_PROF_SAMPLE_DEFAULT   0
#define LG_PROF_INTERVAL_DEFAULT -1

/*
 * Hard limit on stack backtrace depth.  The version of prof_backtrace() that
 * is based on __builtin_return_address() necessarily has a hard-coded number
 * of backtrace frame handlers, and should be kept in sync with this setting.
 */
#define PROF_BT_MAX              128

/* Maximum number of backtraces to store in each per thread LRU cache. */
#define PROF_TCMAX               1024

/* Initial hash table size. */
#define PROF_CKH_MINITEMS        64

/* Size of memory buffer to use when writing dump files. */
#define PROF_DUMP_BUFSIZE        65536

/* Size of stack-allocated buffer used by prof_printf(). */
#define PROF_PRINTF_BUFSIZE      128

/*
 * Number of mutexes shared among all ctx's.  No space is allocated for these
 * unless profiling is enabled, so it's okay to over-provision.
 */
#define PROF_NCTX_LOCKS          1024

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

struct prof_bt_s {
    /* Backtrace, stored as len program counters. */
    void     **vec;
    unsigned len;
};

#ifdef JEMALLOC_PROF_LIBGCC
/* Data structure passed to libgcc _Unwind_Backtrace() callback functions. */
typedef struct {
    prof_bt_t *bt;
    unsigned  nignore;
    unsigned  max;
} prof_unwind_data_t;
#endif

struct prof_cnt_s {
    /*
     * Profiling counters.  An allocation/deallocation pair can operate on
     * different prof_thr_cnt_t objects that are linked into the same
     * prof_ctx_t cnts_ql, so it is possible for the cur* counters to go
     * negative.  In principle it is possible for the *bytes counters to
     * overflow/underflow, but a general solution would require something
     * like 128-bit counters; this implementation doesn't bother to solve
     * that problem.
     */
    int64_t  curobjs;
    int64_t  curbytes;
    uint64_t accumobjs;
    uint64_t accumbytes;
};

struct prof_thr_cnt_s {
    /* Linkage into prof_ctx_t's cnts_ql. */
    ql_elm(prof_thr_cnt_t) cnts_link;

    /* Linkage into thread's LRU. */
    ql_elm(prof_thr_cnt_t) lru_link;

    /*
     * Associated context.  If a thread frees an object that it did not
     * allocate, it is possible that the context is not cached in the
     * thread's hash table, in which case it must be able to look up the
     * context, insert a new prof_thr_cnt_t into the thread's hash table,
     * and link it into the prof_ctx_t's cnts_ql.
     */
    prof_ctx_t *ctx;

    /*
     * Threads use memory barriers to update the counters.  Since there is
     * only ever one writer, the only challenge is for the reader to get a
     * consistent read of the counters.
     *
     * The writer uses this series of operations:
     *
     * 1) Increment epoch to an odd number.
     * 2) Update counters.
     * 3) Increment epoch to an even number.
     *
     * The reader must assure 1) that the epoch is even while it reads the
     * counters, and 2) that the epoch doesn't change between the time it
     * starts and finishes reading the counters.
     */
    unsigned epoch;

    /* Profiling counters. */
    prof_cnt_t cnts;
};
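The comment above gives only the writer side; the matching reader, per conditions 1) and 2), retries until it observes an even epoch that is unchanged across the whole read. A standalone sketch with simplified stand-in types (illustration only, not the header's API):

    #include <stdint.h>

    /* Simplified stand-ins for prof_cnt_t / prof_thr_cnt_t. */
    typedef struct { int64_t curobjs, curbytes; } cnt_t;
    typedef struct { unsigned epoch; cnt_t cnts; } thr_cnt_t;

    /* Retry until the epoch is even and stable across the read. */
    static cnt_t
    snapshot(const thr_cnt_t *c)
    {
        cnt_t snap;
        unsigned e0, e1;

        do {
            e0 = c->epoch;
            snap = c->cnts;
            e1 = c->epoch;
        } while ((e0 & 1) != 0 || e0 != e1);
        return (snap);
    }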

struct prof_ctx_s {
    /* Associated backtrace. */
    prof_bt_t *bt;

    /* Protects cnt_merged and cnts_ql. */
    malloc_mutex_t *lock;

    /* Temporary storage for summation during dump. */
    prof_cnt_t cnt_summed;

    /* When threads exit, they merge their stats into cnt_merged. */
    prof_cnt_t cnt_merged;

    /*
     * List of profile counters, one for each thread that has allocated in
     * this context.
     */
    ql_head(prof_thr_cnt_t) cnts_ql;
};

struct prof_tdata_s {
    /*
     * Hash of (prof_bt_t *)-->(prof_thr_cnt_t *).  Each thread keeps a
     * cache of backtraces, with associated thread-specific prof_thr_cnt_t
     * objects.  Other threads may read the prof_thr_cnt_t contents, but no
     * others will ever write them.
     *
     * Upon thread exit, the thread must merge all the prof_thr_cnt_t
     * counter data into the associated prof_ctx_t objects, and unlink/free
     * the prof_thr_cnt_t objects.
     */
    ckh_t bt2cnt;

    /* LRU for contents of bt2cnt. */
    ql_head(prof_thr_cnt_t) lru_ql;

    /* Backtrace vector, used for calls to prof_backtrace(). */
    void **vec;

    /* Sampling state. */
    uint64_t prng_state;
    uint64_t threshold;
    uint64_t accum;
};

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

extern bool opt_prof;
/*
 * Even if opt_prof is true, sampling can be temporarily disabled by setting
 * opt_prof_active to false.  No locking is used when updating opt_prof_active,
 * so there are no guarantees regarding how long it will take for all threads
 * to notice state changes.
 */
extern bool    opt_prof_active;
extern size_t  opt_lg_prof_sample;    /* Mean bytes between samples. */
extern ssize_t opt_lg_prof_interval;  /* lg(prof_interval). */
extern bool    opt_prof_gdump;        /* High-water memory dumping. */
extern bool    opt_prof_leak;         /* Dump leak summary at exit. */
extern bool    opt_prof_accum;        /* Report cumulative bytes. */
extern char    opt_prof_prefix[PATH_MAX + 1];

/*
 * Profile dump interval, measured in bytes allocated.  Each arena triggers a
 * profile dump when it reaches this threshold.  The effect is that the
 * interval between profile dumps averages prof_interval, though the actual
 * interval between dumps will tend to be sporadic, and the interval will be a
 * maximum of approximately (prof_interval * narenas).
 */
extern uint64_t prof_interval;

/*
 * If true, promote small sampled objects to large objects, since small run
 * headers do not have embedded profile context pointers.
 */
extern bool prof_promote;

void    bt_init(prof_bt_t *bt, void **vec);
void    prof_backtrace(prof_bt_t *bt, unsigned nignore);
prof_thr_cnt_t  *prof_lookup(prof_bt_t *bt);
void    prof_idump(void);
bool    prof_mdump(const char *filename);
void    prof_gdump(void);
prof_tdata_t    *prof_tdata_init(void);
void    prof_tdata_cleanup(void *arg);
void    prof_boot0(void);
void    prof_boot1(void);
bool    prof_boot2(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#define PROF_ALLOC_PREP(nignore, size, ret) do {                \
    prof_tdata_t *prof_tdata;                                   \
    prof_bt_t bt;                                               \
                                                                \
    assert(size == s2u(size));                                  \
                                                                \
    prof_tdata = *prof_tdata_tsd_get();                         \
    if (prof_tdata == NULL) {                                   \
        prof_tdata = prof_tdata_init();                         \
        if (prof_tdata == NULL) {                               \
            ret = NULL;                                         \
            break;                                              \
        }                                                       \
    }                                                           \
                                                                \
    if (opt_prof_active == false) {                             \
        /* Sampling is currently inactive, so avoid sampling. */\
        ret = (prof_thr_cnt_t *)(uintptr_t)1U;                  \
    } else if (opt_lg_prof_sample == 0) {                       \
        /* Don't bother with sampling logic, since sampling */  \
        /* interval is 1. */                                    \
        bt_init(&bt, prof_tdata->vec);                          \
        prof_backtrace(&bt, nignore);                           \
        ret = prof_lookup(&bt);                                 \
    } else {                                                    \
        if (prof_tdata->threshold == 0) {                       \
            /* Initialize.  Seed the prng differently for */    \
            /* each thread. */                                  \
            prof_tdata->prng_state =                            \
                (uint64_t)(uintptr_t)&size;                     \
            prof_sample_threshold_update(prof_tdata);           \
        }                                                       \
                                                                \
        /* Determine whether to capture a backtrace based on */ \
        /* whether size is enough for prof_accum to reach */    \
        /* prof_tdata->threshold.  However, delay updating */   \
        /* these variables until prof_{m,re}alloc(), because */ \
        /* we don't know for sure that the allocation will */   \
        /* succeed. */                                          \
        /* */                                                   \
        /* Use subtraction rather than addition to avoid */     \
        /* potential integer overflow. */                       \
        if (size >= prof_tdata->threshold -                     \
            prof_tdata->accum) {                                \
            bt_init(&bt, prof_tdata->vec);                      \
            prof_backtrace(&bt, nignore);                       \
            ret = prof_lookup(&bt);                             \
        } else                                                  \
            ret = (prof_thr_cnt_t *)(uintptr_t)1U;              \
    }                                                           \
} while (0)

#ifndef JEMALLOC_ENABLE_INLINE
malloc_tsd_protos(JEMALLOC_ATTR(unused), prof_tdata, prof_tdata_t *)

void    prof_sample_threshold_update(prof_tdata_t *prof_tdata);
prof_ctx_t  *prof_ctx_get(const void *ptr);
void    prof_ctx_set(const void *ptr, prof_ctx_t *ctx);
bool    prof_sample_accum_update(size_t size);
void    prof_malloc(const void *ptr, size_t size, prof_thr_cnt_t *cnt);
void    prof_realloc(const void *ptr, size_t size, prof_thr_cnt_t *cnt,
    size_t old_size, prof_ctx_t *old_ctx);
void    prof_free(const void *ptr, size_t size);
#endif

#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_PROF_C_))
/* Thread-specific backtrace cache, used to reduce bt2ctx contention. */
malloc_tsd_externs(prof_tdata, prof_tdata_t *)
malloc_tsd_funcs(JEMALLOC_INLINE, prof_tdata, prof_tdata_t *, NULL,
    prof_tdata_cleanup)

JEMALLOC_INLINE void
prof_sample_threshold_update(prof_tdata_t *prof_tdata)
{
    uint64_t r;
    double u;

    cassert(config_prof);

    /*
     * Compute sample threshold as a geometrically distributed random
     * variable with mean (2^opt_lg_prof_sample).
     *
     *                          __         __
     *                         |   log(u)    |                    1
     * prof_tdata->threshold = | ---------   |, where p = -------------------
     *                         |  log(1-p)   |             opt_lg_prof_sample
     *                                                    2
     *
     * For more information on the math, see:
     *
     *   Non-Uniform Random Variate Generation
     *   Luc Devroye
     *   Springer-Verlag, New York, 1986
     *   pp 500
     *   (http://cg.scs.carleton.ca/~luc/rnbookindex.html)
     */
    prng64(r, 53, prof_tdata->prng_state,
        UINT64_C(6364136223846793005), UINT64_C(1442695040888963407));
    u = (double)r * (1.0/9007199254740992.0L);
    prof_tdata->threshold = (uint64_t)(log(u) /
        log(1.0 - (1.0 / (double)((uint64_t)1U << opt_lg_prof_sample))))
        + (uint64_t)1U;
}
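The mean of that geometric draw is 2^opt_lg_prof_sample bytes between samples. A quick empirical check of the same formula (plain rand() stands in for the prng, the lg_sample value is arbitrary; link with -lm):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        unsigned lg_sample = 19;                  /* hypothetical lg of mean */
        double p = 1.0 / (double)(1ULL << lg_sample);
        double sum = 0.0;
        int n = 100000;

        for (int i = 0; i < n; i++) {
            double u = (rand() + 1.0) / ((double)RAND_MAX + 1.0);
            sum += floor(log(u) / log(1.0 - p)) + 1.0;
        }
        /* Empirical mean should approach 1/p == 2^lg_sample. */
        printf("mean threshold ~ %.0f (expect ~%.0f)\n", sum / n, 1.0 / p);
        return (0);
    }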

JEMALLOC_INLINE prof_ctx_t *
prof_ctx_get(const void *ptr)
{
    prof_ctx_t *ret;
    arena_chunk_t *chunk;

    cassert(config_prof);
    assert(ptr != NULL);

    chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
    if (chunk != ptr) {
        /* Region. */
        ret = arena_prof_ctx_get(ptr);
    } else
        ret = huge_prof_ctx_get(ptr);

    return (ret);
}

JEMALLOC_INLINE void
prof_ctx_set(const void *ptr, prof_ctx_t *ctx)
{
    arena_chunk_t *chunk;

    cassert(config_prof);
    assert(ptr != NULL);

    chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
    if (chunk != ptr) {
        /* Region. */
        arena_prof_ctx_set(ptr, ctx);
    } else
        huge_prof_ctx_set(ptr, ctx);
}

JEMALLOC_INLINE bool
prof_sample_accum_update(size_t size)
{
    prof_tdata_t *prof_tdata;

    cassert(config_prof);
    /* Sampling logic is unnecessary if the interval is 1. */
    assert(opt_lg_prof_sample != 0);

    prof_tdata = *prof_tdata_tsd_get();
    assert(prof_tdata != NULL);

    /* Take care to avoid integer overflow. */
    if (size >= prof_tdata->threshold - prof_tdata->accum) {
        prof_tdata->accum -= (prof_tdata->threshold - size);
        /* Compute new sample threshold. */
        prof_sample_threshold_update(prof_tdata);
        while (prof_tdata->accum >= prof_tdata->threshold) {
            prof_tdata->accum -= prof_tdata->threshold;
            prof_sample_threshold_update(prof_tdata);
        }
        return (false);
    } else {
        prof_tdata->accum += size;
        return (true);
    }
}
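Why the subtraction form: accum + size could wrap around UINT64_MAX, while threshold - accum never wraps as long as accum < threshold, which the else branch maintains. A tiny demonstration of the equivalent, overflow-safe comparison:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint64_t threshold = UINT64_C(1) << 19;
        uint64_t accum = (UINT64_C(1) << 19) - 100;  /* nearly due */
        uint64_t size = 4096;

        /* Equivalent to accum + size >= threshold, minus the wraparound. */
        bool sample = (size >= threshold - accum);
        printf("take a sample: %d\n", sample);
        return (0);
    }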

JEMALLOC_INLINE void
prof_malloc(const void *ptr, size_t size, prof_thr_cnt_t *cnt)
{

    cassert(config_prof);
    assert(ptr != NULL);
    assert(size == isalloc(ptr, true));

    if (opt_lg_prof_sample != 0) {
        if (prof_sample_accum_update(size)) {
            /*
             * Don't sample.  For malloc()-like allocation, it is
             * always possible to tell in advance how large an
             * object's usable size will be, so there should never
             * be a difference between the size passed to
             * PROF_ALLOC_PREP() and prof_malloc().
             */
            assert((uintptr_t)cnt == (uintptr_t)1U);
        }
    }

    if ((uintptr_t)cnt > (uintptr_t)1U) {
        prof_ctx_set(ptr, cnt->ctx);

        cnt->epoch++;
        /*********/
        mb_write();
        /*********/
        cnt->cnts.curobjs++;
        cnt->cnts.curbytes += size;
        if (opt_prof_accum) {
            cnt->cnts.accumobjs++;
            cnt->cnts.accumbytes += size;
        }
        /*********/
        mb_write();
        /*********/
        cnt->epoch++;
        /*********/
        mb_write();
        /*********/
    } else
        prof_ctx_set(ptr, (prof_ctx_t *)(uintptr_t)1U);
}

JEMALLOC_INLINE void
prof_realloc(const void *ptr, size_t size, prof_thr_cnt_t *cnt,
    size_t old_size, prof_ctx_t *old_ctx)
{
    prof_thr_cnt_t *told_cnt;

    cassert(config_prof);
    assert(ptr != NULL || (uintptr_t)cnt <= (uintptr_t)1U);

    if (ptr != NULL) {
        assert(size == isalloc(ptr, true));
        if (opt_lg_prof_sample != 0) {
            if (prof_sample_accum_update(size)) {
                /*
                 * Don't sample.  The size passed to
                 * PROF_ALLOC_PREP() was larger than what
                 * actually got allocated, so a backtrace was
                 * captured for this allocation, even though
                 * its actual size was insufficient to cross
                 * the sample threshold.
                 */
                cnt = (prof_thr_cnt_t *)(uintptr_t)1U;
            }
        }
    }

    if ((uintptr_t)old_ctx > (uintptr_t)1U) {
        told_cnt = prof_lookup(old_ctx->bt);
        if (told_cnt == NULL) {
            /*
             * It's too late to propagate OOM for this realloc(),
             * so operate directly on old_cnt->ctx->cnt_merged.
             */
            malloc_mutex_lock(old_ctx->lock);
            old_ctx->cnt_merged.curobjs--;
            old_ctx->cnt_merged.curbytes -= old_size;
            malloc_mutex_unlock(old_ctx->lock);
            told_cnt = (prof_thr_cnt_t *)(uintptr_t)1U;
        }
    } else
        told_cnt = (prof_thr_cnt_t *)(uintptr_t)1U;

    if ((uintptr_t)told_cnt > (uintptr_t)1U)
        told_cnt->epoch++;
    if ((uintptr_t)cnt > (uintptr_t)1U) {
        prof_ctx_set(ptr, cnt->ctx);
        cnt->epoch++;
    } else
        prof_ctx_set(ptr, (prof_ctx_t *)(uintptr_t)1U);
    /*********/
    mb_write();
    /*********/
    if ((uintptr_t)told_cnt > (uintptr_t)1U) {
        told_cnt->cnts.curobjs--;
        told_cnt->cnts.curbytes -= old_size;
    }
    if ((uintptr_t)cnt > (uintptr_t)1U) {
        cnt->cnts.curobjs++;
        cnt->cnts.curbytes += size;
        if (opt_prof_accum) {
            cnt->cnts.accumobjs++;
            cnt->cnts.accumbytes += size;
        }
    }
    /*********/
    mb_write();
    /*********/
    if ((uintptr_t)told_cnt > (uintptr_t)1U)
        told_cnt->epoch++;
    if ((uintptr_t)cnt > (uintptr_t)1U)
        cnt->epoch++;
    /*********/
    mb_write(); /* Not strictly necessary. */
}

JEMALLOC_INLINE void
prof_free(const void *ptr, size_t size)
{
    prof_ctx_t *ctx = prof_ctx_get(ptr);

    cassert(config_prof);

    if ((uintptr_t)ctx > (uintptr_t)1) {
        assert(size == isalloc(ptr, true));
        prof_thr_cnt_t *tcnt = prof_lookup(ctx->bt);

        if (tcnt != NULL) {
            tcnt->epoch++;
            /*********/
            mb_write();
            /*********/
            tcnt->cnts.curobjs--;
            tcnt->cnts.curbytes -= size;
            /*********/
            mb_write();
            /*********/
            tcnt->epoch++;
            /*********/
            mb_write();
            /*********/
        } else {
            /*
             * OOM during free() cannot be propagated, so operate
             * directly on cnt->ctx->cnt_merged.
             */
            malloc_mutex_lock(ctx->lock);
            ctx->cnt_merged.curobjs--;
            ctx->cnt_merged.curbytes -= size;
            malloc_mutex_unlock(ctx->lock);
        }
    }
}
#endif

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
@ -1,40 +1,3 @@
/******************************************************************************
 *
 * Copyright (C) 2002 Jason Evans <jasone@FreeBSD.org>.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice(s), this list of conditions and the following disclaimer
 *    unmodified other than the allowable addition of one or more
 *    copyright notices.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice(s), this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
 * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
 * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
 * EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 ******************************************************************************/

#ifndef QL_H_
#define QL_H_

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

/*
 * List definitions.
 */
@ -118,5 +81,3 @@ struct { \

#define ql_reverse_foreach(a_var, a_head, a_field)  \
    qr_reverse_foreach((a_var), ql_first(a_head), a_field)

#endif /* QL_H_ */
@ -1,40 +1,3 @@
/******************************************************************************
 *
 * Copyright (C) 2002 Jason Evans <jasone@FreeBSD.org>.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice(s), this list of conditions and the following disclaimer
 *    unmodified other than the allowable addition of one or more
 *    copyright notices.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice(s), this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
 * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
 * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
 * EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 ******************************************************************************/

#ifndef QR_H_
#define QR_H_

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

/* Ring definitions. */
#define qr(a_type)  \
struct {            \
@ -102,5 +65,3 @@ struct { \
    (var) != NULL;                      \
    (var) = (((var) != (a_qr))          \
        ? (var)->a_field.qre_prev : NULL))

#endif /* QR_H_ */
24
contrib/jemalloc/include/jemalloc/internal/quarantine.h
Normal file
@ -0,0 +1,24 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

/* Default per thread quarantine size if valgrind is enabled. */
#define JEMALLOC_VALGRIND_QUARANTINE_DEFAULT (ZU(1) << 24)

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

void    quarantine(void *ptr);
bool    quarantine_boot(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
@ -1,35 +1,6 @@
/*-
 *******************************************************************************
 *
 * Copyright (C) 2008-2010 Jason Evans <jasone@FreeBSD.org>.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice(s), this list of conditions and the following disclaimer
 *    unmodified other than the allowable addition of one or more
 *    copyright notices.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice(s), this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
 * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
 * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
 * EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 ******************************************************************************
 *
 * cpp macro implementation of left-leaning 2-3 red-black trees.  Parent
 * pointers are not used, and color bits are stored in the least significant
 * bit of right-child pointers (if RB_COMPACT is defined), thus making node
@ -51,8 +22,9 @@
#ifndef RB_H_
#define RB_H_

#include <sys/cdefs.h>
#if 0
__FBSDID("$FreeBSD$");
#endif

#ifdef RB_COMPACT
/* Node structure. */
@ -222,11 +194,10 @@ a_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start, \
 * Arguments:
 *
 *   a_attr    : Function attribute for generated functions (ex: static).
 *   a_prefix  : Prefix for generated functions (ex: extree_).
 *   a_rb_type : Type for red-black tree data structure (ex: extree_t).
 *   a_type    : Type for red-black tree node data structure (ex:
 *               extree_node_t).
 *   a_field   : Name of red-black tree node linkage (ex: extree_link).
 *   a_prefix  : Prefix for generated functions (ex: ex_).
 *   a_rb_type : Type for red-black tree data structure (ex: ex_t).
 *   a_type    : Type for red-black tree node data structure (ex: ex_node_t).
 *   a_field   : Name of red-black tree node linkage (ex: ex_link).
 *   a_cmp     : Node comparison function name, with the following prototype:
 *                 int (a_cmp *)(a_type *a_node, a_type *a_other);
 *                                     ^^^^^^
@ -246,94 +217,94 @@ a_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start, \
 *   struct ex_node_s {
 *       rb_node(ex_node_t) ex_link;
 *   };
 *   typedef rb(ex_node_t) ex_t;
 *   rb_gen(static, ex_, ex_t, ex_node_t, ex_link, ex_cmp, 1297, 1301)
 *   typedef rb_tree(ex_node_t) ex_t;
 *   rb_gen(static, ex_, ex_t, ex_node_t, ex_link, ex_cmp)
 *
 * The following API is generated:
 *
 *   static void
 *   ex_new(ex_t *extree);
 *   ex_new(ex_t *tree);
 *       Description: Initialize a red-black tree structure.
 *       Args:
 *         extree: Pointer to an uninitialized red-black tree object.
 *         tree: Pointer to an uninitialized red-black tree object.
 *
 *   static ex_node_t *
 *   ex_first(ex_t *extree);
 *   ex_first(ex_t *tree);
 *   static ex_node_t *
 *   ex_last(ex_t *extree);
 *       Description: Get the first/last node in extree.
 *   ex_last(ex_t *tree);
 *       Description: Get the first/last node in tree.
 *       Args:
 *         extree: Pointer to an initialized red-black tree object.
 *       Ret: First/last node in extree, or NULL if extree is empty.
 *         tree: Pointer to an initialized red-black tree object.
 *       Ret: First/last node in tree, or NULL if tree is empty.
 *
 *   static ex_node_t *
 *   ex_next(ex_t *extree, ex_node_t *node);
 *   ex_next(ex_t *tree, ex_node_t *node);
 *   static ex_node_t *
 *   ex_prev(ex_t *extree, ex_node_t *node);
 *   ex_prev(ex_t *tree, ex_node_t *node);
 *       Description: Get node's successor/predecessor.
 *       Args:
 *         extree: Pointer to an initialized red-black tree object.
 *         node  : A node in extree.
 *       Ret: node's successor/predecessor in extree, or NULL if node is
 *         tree: Pointer to an initialized red-black tree object.
 *         node: A node in tree.
 *       Ret: node's successor/predecessor in tree, or NULL if node is
 *            last/first.
 *
 *   static ex_node_t *
 *   ex_search(ex_t *extree, ex_node_t *key);
 *   ex_search(ex_t *tree, ex_node_t *key);
 *       Description: Search for node that matches key.
 *       Args:
 *         extree: Pointer to an initialized red-black tree object.
 *         key   : Search key.
 *       Ret: Node in extree that matches key, or NULL if no match.
 *         tree: Pointer to an initialized red-black tree object.
 *         key : Search key.
 *       Ret: Node in tree that matches key, or NULL if no match.
 *
 *   static ex_node_t *
 *   ex_nsearch(ex_t *extree, ex_node_t *key);
 *   ex_nsearch(ex_t *tree, ex_node_t *key);
 *   static ex_node_t *
 *   ex_psearch(ex_t *extree, ex_node_t *key);
 *   ex_psearch(ex_t *tree, ex_node_t *key);
 *       Description: Search for node that matches key.  If no match is found,
 *                    return what would be key's successor/predecessor, were
 *                    key in extree.
 *                    key in tree.
 *       Args:
 *         extree: Pointer to an initialized red-black tree object.
 *         key   : Search key.
 *       Ret: Node in extree that matches key, or if no match, hypothetical
 *            node's successor/predecessor (NULL if no successor/predecessor).
 *         tree: Pointer to an initialized red-black tree object.
 *         key : Search key.
 *       Ret: Node in tree that matches key, or if no match, hypothetical node's
 *            successor/predecessor (NULL if no successor/predecessor).
 *
 *   static void
 *   ex_insert(ex_t *extree, ex_node_t *node);
 *       Description: Insert node into extree.
 *   ex_insert(ex_t *tree, ex_node_t *node);
 *       Description: Insert node into tree.
 *       Args:
 *         extree: Pointer to an initialized red-black tree object.
 *         node  : Node to be inserted into extree.
 *         tree: Pointer to an initialized red-black tree object.
 *         node: Node to be inserted into tree.
 *
 *   static void
 *   ex_remove(ex_t *extree, ex_node_t *node);
 *       Description: Remove node from extree.
 *   ex_remove(ex_t *tree, ex_node_t *node);
 *       Description: Remove node from tree.
 *       Args:
 *         extree: Pointer to an initialized red-black tree object.
|
||||
* node : Node in extree to be removed.
|
||||
* tree: Pointer to an initialized red-black tree object.
|
||||
* node: Node in tree to be removed.
|
||||
*
|
||||
* static ex_node_t *
|
||||
* ex_iter(ex_t *extree, ex_node_t *start, ex_node_t *(*cb)(ex_t *,
|
||||
* ex_iter(ex_t *tree, ex_node_t *start, ex_node_t *(*cb)(ex_t *,
|
||||
* ex_node_t *, void *), void *arg);
|
||||
* static ex_node_t *
|
||||
* ex_reverse_iter(ex_t *extree, ex_node_t *start, ex_node *(*cb)(ex_t *,
|
||||
* ex_reverse_iter(ex_t *tree, ex_node_t *start, ex_node *(*cb)(ex_t *,
|
||||
* ex_node_t *, void *), void *arg);
|
||||
* Description: Iterate forward/backward over extree, starting at node.
|
||||
* If extree is modified, iteration must be immediately
|
||||
* Description: Iterate forward/backward over tree, starting at node. If
|
||||
* tree is modified, iteration must be immediately
|
||||
* terminated by the callback function that causes the
|
||||
* modification.
|
||||
* Args:
|
||||
* extree: Pointer to an initialized red-black tree object.
|
||||
* start : Node at which to start iteration, or NULL to start at
|
||||
* first/last node.
|
||||
* cb : Callback function, which is called for each node during
|
||||
* iteration. Under normal circumstances the callback function
|
||||
* should return NULL, which causes iteration to continue. If a
|
||||
* callback function returns non-NULL, iteration is immediately
|
||||
* terminated and the non-NULL return value is returned by the
|
||||
* iterator. This is useful for re-starting iteration after
|
||||
* modifying extree.
|
||||
* arg : Opaque pointer passed to cb().
|
||||
* tree : Pointer to an initialized red-black tree object.
|
||||
* start: Node at which to start iteration, or NULL to start at
|
||||
* first/last node.
|
||||
* cb : Callback function, which is called for each node during
|
||||
* iteration. Under normal circumstances the callback function
|
||||
* should return NULL, which causes iteration to continue. If a
|
||||
* callback function returns non-NULL, iteration is immediately
|
||||
* terminated and the non-NULL return value is returned by the
|
||||
* iterator. This is useful for re-starting iteration after
|
||||
* modifying tree.
|
||||
* arg : Opaque pointer passed to cb().
|
||||
* Ret: NULL if iteration completed, or the non-NULL callback return value
|
||||
* that caused termination of the iteration.
|
||||
*/
|
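A usage sketch of the generated API documented above (not part of the imported
sources; every name below is hypothetical):

    #include "rb.h"

    typedef struct ex_node_s ex_node_t;
    struct ex_node_s {
        int key;
        rb_node(ex_node_t) ex_link;     /* Intrusive tree linkage. */
    };
    typedef rb_tree(ex_node_t) ex_t;

    static int
    ex_cmp(ex_node_t *a, ex_node_t *b)
    {

        /* Negative/zero/positive, per the a_cmp prototype above. */
        return ((a->key > b->key) - (a->key < b->key));
    }

    /* Generates static ex_new(), ex_insert(), ex_first(), etc. */
    rb_gen(static, ex_, ex_t, ex_node_t, ex_link, ex_cmp)

    static ex_node_t *
    ex_demo(ex_t *tree, ex_node_t *node)
    {

        ex_new(tree);           /* Initialize. */
        ex_insert(tree, node);  /* Insert one node. */
        return (ex_first(tree));
    }

Because the linkage is intrusive (rb_node() lives inside the user's node
struct), no allocation happens inside the tree operations themselves.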
161
contrib/jemalloc/include/jemalloc/internal/rtree.h
Normal file
@ -0,0 +1,161 @@
/*
 * This radix tree implementation is tailored to the singular purpose of
 * tracking which chunks are currently owned by jemalloc. This functionality
 * is mandatory for OS X, where jemalloc must be able to respond to object
 * ownership queries.
 *
 *******************************************************************************
 */
#ifdef JEMALLOC_H_TYPES

typedef struct rtree_s rtree_t;

/*
 * Size of each radix tree node (must be a power of 2). This impacts tree
 * depth.
 */
#if (LG_SIZEOF_PTR == 2)
#  define RTREE_NODESIZE (1U << 14)
#else
#  define RTREE_NODESIZE CACHELINE
#endif

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

struct rtree_s {
    malloc_mutex_t  mutex;
    void            **root;
    unsigned        height;
    unsigned        level2bits[1]; /* Dynamically sized. */
};

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

rtree_t *rtree_new(unsigned bits);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#ifndef JEMALLOC_ENABLE_INLINE
#ifndef JEMALLOC_DEBUG
void *rtree_get_locked(rtree_t *rtree, uintptr_t key);
#endif
void *rtree_get(rtree_t *rtree, uintptr_t key);
bool rtree_set(rtree_t *rtree, uintptr_t key, void *val);
#endif

#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_RTREE_C_))
#define RTREE_GET_GENERATE(f) \
/* The least significant bits of the key are ignored. */ \
JEMALLOC_INLINE void * \
f(rtree_t *rtree, uintptr_t key) \
{ \
    void *ret; \
    uintptr_t subkey; \
    unsigned i, lshift, height, bits; \
    void **node, **child; \
 \
    RTREE_LOCK(&rtree->mutex); \
    for (i = lshift = 0, height = rtree->height, node = rtree->root;\
        i < height - 1; \
        i++, lshift += bits, node = child) { \
        bits = rtree->level2bits[i]; \
        subkey = (key << lshift) >> ((ZU(1) << (LG_SIZEOF_PTR + \
            3)) - bits); \
        child = (void**)node[subkey]; \
        if (child == NULL) { \
            RTREE_UNLOCK(&rtree->mutex); \
            return (NULL); \
        } \
    } \
 \
    /* \
     * node is a leaf, so it contains values rather than node \
     * pointers. \
     */ \
    bits = rtree->level2bits[i]; \
    subkey = (key << lshift) >> ((ZU(1) << (LG_SIZEOF_PTR+3)) - \
        bits); \
    ret = node[subkey]; \
    RTREE_UNLOCK(&rtree->mutex); \
 \
    RTREE_GET_VALIDATE \
    return (ret); \
}

#ifdef JEMALLOC_DEBUG
#  define RTREE_LOCK(l)     malloc_mutex_lock(l)
#  define RTREE_UNLOCK(l)   malloc_mutex_unlock(l)
#  define RTREE_GET_VALIDATE
RTREE_GET_GENERATE(rtree_get_locked)
#  undef RTREE_LOCK
#  undef RTREE_UNLOCK
#  undef RTREE_GET_VALIDATE
#endif

#define RTREE_LOCK(l)
#define RTREE_UNLOCK(l)
#ifdef JEMALLOC_DEBUG
/*
 * Suppose that it were possible for a jemalloc-allocated chunk to be
 * munmap()ped, followed by a different allocator in another thread re-using
 * overlapping virtual memory, all without invalidating the cached rtree
 * value. The result would be a false positive (the rtree would claim that
 * jemalloc owns memory that it had actually discarded). This scenario
 * seems impossible, but the following assertion is a prudent sanity check.
 */
#  define RTREE_GET_VALIDATE \
    assert(rtree_get_locked(rtree, key) == ret);
#else
#  define RTREE_GET_VALIDATE
#endif
RTREE_GET_GENERATE(rtree_get)
#undef RTREE_LOCK
#undef RTREE_UNLOCK
#undef RTREE_GET_VALIDATE

JEMALLOC_INLINE bool
rtree_set(rtree_t *rtree, uintptr_t key, void *val)
{
    uintptr_t subkey;
    unsigned i, lshift, height, bits;
    void **node, **child;

    malloc_mutex_lock(&rtree->mutex);
    for (i = lshift = 0, height = rtree->height, node = rtree->root;
        i < height - 1;
        i++, lshift += bits, node = child) {
        bits = rtree->level2bits[i];
        subkey = (key << lshift) >> ((ZU(1) << (LG_SIZEOF_PTR+3)) -
            bits);
        child = (void**)node[subkey];
        if (child == NULL) {
            child = (void**)base_alloc(sizeof(void *) <<
                rtree->level2bits[i+1]);
            if (child == NULL) {
                malloc_mutex_unlock(&rtree->mutex);
                return (true);
            }
            memset(child, 0, sizeof(void *) <<
                rtree->level2bits[i+1]);
            node[subkey] = child;
        }
    }

    /* node is a leaf, so it contains values rather than node pointers. */
    bits = rtree->level2bits[i];
    subkey = (key << lshift) >> ((ZU(1) << (LG_SIZEOF_PTR+3)) - bits);
    node[subkey] = val;
    malloc_mutex_unlock(&rtree->mutex);

    return (false);
}
#endif

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
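The lookup above walks the tree by slicing successive groups of high bits out
of the key: at level i, the next level2bits[i] bits select a slot in the
current node. A standalone sketch of that subkey arithmetic (assuming 64-bit
pointers and a hypothetical three-level tree; this is not jemalloc code):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* Assume a 64-bit address space and 12 bits per level. */
        uint64_t key = 0x123456789abcdef0ULL;
        unsigned level2bits[3] = {12, 12, 12};
        unsigned i, lshift = 0, ptrbits = 64;

        for (i = 0; i < 3; i++) {
            unsigned bits = level2bits[i];
            uint64_t subkey = (key << lshift) >> (ptrbits - bits);

            /* Prints 0x123, 0x456, 0x789 for the key above. */
            printf("level %u: subkey %#llx\n", i,
                (unsigned long long)subkey);
            lshift += bits;
        }
        return (0);
    }

The low-order bits that never get consumed correspond to the offset within a
chunk, which is why the comment in rtree_get() says they are ignored.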
721
contrib/jemalloc/include/jemalloc/internal/size_classes.h
Normal file
@ -0,0 +1,721 @@
/* This file was automatically generated by size_classes.sh. */
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

#if (LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 12)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 8, 8) \
    SIZE_CLASS(1, 8, 16) \
    SIZE_CLASS(2, 8, 24) \
    SIZE_CLASS(3, 8, 32) \
    SIZE_CLASS(4, 8, 40) \
    SIZE_CLASS(5, 8, 48) \
    SIZE_CLASS(6, 8, 56) \
    SIZE_CLASS(7, 8, 64) \
    SIZE_CLASS(8, 16, 80) \
    SIZE_CLASS(9, 16, 96) \
    SIZE_CLASS(10, 16, 112) \
    SIZE_CLASS(11, 16, 128) \
    SIZE_CLASS(12, 32, 160) \
    SIZE_CLASS(13, 32, 192) \
    SIZE_CLASS(14, 32, 224) \
    SIZE_CLASS(15, 32, 256) \
    SIZE_CLASS(16, 64, 320) \
    SIZE_CLASS(17, 64, 384) \
    SIZE_CLASS(18, 64, 448) \
    SIZE_CLASS(19, 64, 512) \
    SIZE_CLASS(20, 128, 640) \
    SIZE_CLASS(21, 128, 768) \
    SIZE_CLASS(22, 128, 896) \
    SIZE_CLASS(23, 128, 1024) \
    SIZE_CLASS(24, 256, 1280) \
    SIZE_CLASS(25, 256, 1536) \
    SIZE_CLASS(26, 256, 1792) \
    SIZE_CLASS(27, 256, 2048) \
    SIZE_CLASS(28, 512, 2560) \
    SIZE_CLASS(29, 512, 3072) \
    SIZE_CLASS(30, 512, 3584) \

#define NBINS 31
#define SMALL_MAXCLASS 3584
#endif

#if (LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 13)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 8, 8) \
    SIZE_CLASS(1, 8, 16) \
    SIZE_CLASS(2, 8, 24) \
    SIZE_CLASS(3, 8, 32) \
    SIZE_CLASS(4, 8, 40) \
    SIZE_CLASS(5, 8, 48) \
    SIZE_CLASS(6, 8, 56) \
    SIZE_CLASS(7, 8, 64) \
    SIZE_CLASS(8, 16, 80) \
    SIZE_CLASS(9, 16, 96) \
    SIZE_CLASS(10, 16, 112) \
    SIZE_CLASS(11, 16, 128) \
    SIZE_CLASS(12, 32, 160) \
    SIZE_CLASS(13, 32, 192) \
    SIZE_CLASS(14, 32, 224) \
    SIZE_CLASS(15, 32, 256) \
    SIZE_CLASS(16, 64, 320) \
    SIZE_CLASS(17, 64, 384) \
    SIZE_CLASS(18, 64, 448) \
    SIZE_CLASS(19, 64, 512) \
    SIZE_CLASS(20, 128, 640) \
    SIZE_CLASS(21, 128, 768) \
    SIZE_CLASS(22, 128, 896) \
    SIZE_CLASS(23, 128, 1024) \
    SIZE_CLASS(24, 256, 1280) \
    SIZE_CLASS(25, 256, 1536) \
    SIZE_CLASS(26, 256, 1792) \
    SIZE_CLASS(27, 256, 2048) \
    SIZE_CLASS(28, 512, 2560) \
    SIZE_CLASS(29, 512, 3072) \
    SIZE_CLASS(30, 512, 3584) \
    SIZE_CLASS(31, 512, 4096) \
    SIZE_CLASS(32, 1024, 5120) \
    SIZE_CLASS(33, 1024, 6144) \
    SIZE_CLASS(34, 1024, 7168) \

#define NBINS 35
#define SMALL_MAXCLASS 7168
#endif

#if (LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 14)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 8, 8) \
    SIZE_CLASS(1, 8, 16) \
    SIZE_CLASS(2, 8, 24) \
    SIZE_CLASS(3, 8, 32) \
    SIZE_CLASS(4, 8, 40) \
    SIZE_CLASS(5, 8, 48) \
    SIZE_CLASS(6, 8, 56) \
    SIZE_CLASS(7, 8, 64) \
    SIZE_CLASS(8, 16, 80) \
    SIZE_CLASS(9, 16, 96) \
    SIZE_CLASS(10, 16, 112) \
    SIZE_CLASS(11, 16, 128) \
    SIZE_CLASS(12, 32, 160) \
    SIZE_CLASS(13, 32, 192) \
    SIZE_CLASS(14, 32, 224) \
    SIZE_CLASS(15, 32, 256) \
    SIZE_CLASS(16, 64, 320) \
    SIZE_CLASS(17, 64, 384) \
    SIZE_CLASS(18, 64, 448) \
    SIZE_CLASS(19, 64, 512) \
    SIZE_CLASS(20, 128, 640) \
    SIZE_CLASS(21, 128, 768) \
    SIZE_CLASS(22, 128, 896) \
    SIZE_CLASS(23, 128, 1024) \
    SIZE_CLASS(24, 256, 1280) \
    SIZE_CLASS(25, 256, 1536) \
    SIZE_CLASS(26, 256, 1792) \
    SIZE_CLASS(27, 256, 2048) \
    SIZE_CLASS(28, 512, 2560) \
    SIZE_CLASS(29, 512, 3072) \
    SIZE_CLASS(30, 512, 3584) \
    SIZE_CLASS(31, 512, 4096) \
    SIZE_CLASS(32, 1024, 5120) \
    SIZE_CLASS(33, 1024, 6144) \
    SIZE_CLASS(34, 1024, 7168) \
    SIZE_CLASS(35, 1024, 8192) \
    SIZE_CLASS(36, 2048, 10240) \
    SIZE_CLASS(37, 2048, 12288) \
    SIZE_CLASS(38, 2048, 14336) \

#define NBINS 39
#define SMALL_MAXCLASS 14336
#endif

#if (LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 15)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 8, 8) \
    SIZE_CLASS(1, 8, 16) \
    SIZE_CLASS(2, 8, 24) \
    SIZE_CLASS(3, 8, 32) \
    SIZE_CLASS(4, 8, 40) \
    SIZE_CLASS(5, 8, 48) \
    SIZE_CLASS(6, 8, 56) \
    SIZE_CLASS(7, 8, 64) \
    SIZE_CLASS(8, 16, 80) \
    SIZE_CLASS(9, 16, 96) \
    SIZE_CLASS(10, 16, 112) \
    SIZE_CLASS(11, 16, 128) \
    SIZE_CLASS(12, 32, 160) \
    SIZE_CLASS(13, 32, 192) \
    SIZE_CLASS(14, 32, 224) \
    SIZE_CLASS(15, 32, 256) \
    SIZE_CLASS(16, 64, 320) \
    SIZE_CLASS(17, 64, 384) \
    SIZE_CLASS(18, 64, 448) \
    SIZE_CLASS(19, 64, 512) \
    SIZE_CLASS(20, 128, 640) \
    SIZE_CLASS(21, 128, 768) \
    SIZE_CLASS(22, 128, 896) \
    SIZE_CLASS(23, 128, 1024) \
    SIZE_CLASS(24, 256, 1280) \
    SIZE_CLASS(25, 256, 1536) \
    SIZE_CLASS(26, 256, 1792) \
    SIZE_CLASS(27, 256, 2048) \
    SIZE_CLASS(28, 512, 2560) \
    SIZE_CLASS(29, 512, 3072) \
    SIZE_CLASS(30, 512, 3584) \
    SIZE_CLASS(31, 512, 4096) \
    SIZE_CLASS(32, 1024, 5120) \
    SIZE_CLASS(33, 1024, 6144) \
    SIZE_CLASS(34, 1024, 7168) \
    SIZE_CLASS(35, 1024, 8192) \
    SIZE_CLASS(36, 2048, 10240) \
    SIZE_CLASS(37, 2048, 12288) \
    SIZE_CLASS(38, 2048, 14336) \
    SIZE_CLASS(39, 2048, 16384) \
    SIZE_CLASS(40, 4096, 20480) \
    SIZE_CLASS(41, 4096, 24576) \
    SIZE_CLASS(42, 4096, 28672) \

#define NBINS 43
#define SMALL_MAXCLASS 28672
#endif

#if (LG_TINY_MIN == 3 && LG_QUANTUM == 3 && LG_PAGE == 16)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 8, 8) \
    SIZE_CLASS(1, 8, 16) \
    SIZE_CLASS(2, 8, 24) \
    SIZE_CLASS(3, 8, 32) \
    SIZE_CLASS(4, 8, 40) \
    SIZE_CLASS(5, 8, 48) \
    SIZE_CLASS(6, 8, 56) \
    SIZE_CLASS(7, 8, 64) \
    SIZE_CLASS(8, 16, 80) \
    SIZE_CLASS(9, 16, 96) \
    SIZE_CLASS(10, 16, 112) \
    SIZE_CLASS(11, 16, 128) \
    SIZE_CLASS(12, 32, 160) \
    SIZE_CLASS(13, 32, 192) \
    SIZE_CLASS(14, 32, 224) \
    SIZE_CLASS(15, 32, 256) \
    SIZE_CLASS(16, 64, 320) \
    SIZE_CLASS(17, 64, 384) \
    SIZE_CLASS(18, 64, 448) \
    SIZE_CLASS(19, 64, 512) \
    SIZE_CLASS(20, 128, 640) \
    SIZE_CLASS(21, 128, 768) \
    SIZE_CLASS(22, 128, 896) \
    SIZE_CLASS(23, 128, 1024) \
    SIZE_CLASS(24, 256, 1280) \
    SIZE_CLASS(25, 256, 1536) \
    SIZE_CLASS(26, 256, 1792) \
    SIZE_CLASS(27, 256, 2048) \
    SIZE_CLASS(28, 512, 2560) \
    SIZE_CLASS(29, 512, 3072) \
    SIZE_CLASS(30, 512, 3584) \
    SIZE_CLASS(31, 512, 4096) \
    SIZE_CLASS(32, 1024, 5120) \
    SIZE_CLASS(33, 1024, 6144) \
    SIZE_CLASS(34, 1024, 7168) \
    SIZE_CLASS(35, 1024, 8192) \
    SIZE_CLASS(36, 2048, 10240) \
    SIZE_CLASS(37, 2048, 12288) \
    SIZE_CLASS(38, 2048, 14336) \
    SIZE_CLASS(39, 2048, 16384) \
    SIZE_CLASS(40, 4096, 20480) \
    SIZE_CLASS(41, 4096, 24576) \
    SIZE_CLASS(42, 4096, 28672) \
    SIZE_CLASS(43, 4096, 32768) \
    SIZE_CLASS(44, 8192, 40960) \
    SIZE_CLASS(45, 8192, 49152) \
    SIZE_CLASS(46, 8192, 57344) \

#define NBINS 47
#define SMALL_MAXCLASS 57344
#endif

#if (LG_TINY_MIN == 3 && LG_QUANTUM == 4 && LG_PAGE == 12)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 8, 8) \
    SIZE_CLASS(1, 8, 16) \
    SIZE_CLASS(2, 16, 32) \
    SIZE_CLASS(3, 16, 48) \
    SIZE_CLASS(4, 16, 64) \
    SIZE_CLASS(5, 16, 80) \
    SIZE_CLASS(6, 16, 96) \
    SIZE_CLASS(7, 16, 112) \
    SIZE_CLASS(8, 16, 128) \
    SIZE_CLASS(9, 32, 160) \
    SIZE_CLASS(10, 32, 192) \
    SIZE_CLASS(11, 32, 224) \
    SIZE_CLASS(12, 32, 256) \
    SIZE_CLASS(13, 64, 320) \
    SIZE_CLASS(14, 64, 384) \
    SIZE_CLASS(15, 64, 448) \
    SIZE_CLASS(16, 64, 512) \
    SIZE_CLASS(17, 128, 640) \
    SIZE_CLASS(18, 128, 768) \
    SIZE_CLASS(19, 128, 896) \
    SIZE_CLASS(20, 128, 1024) \
    SIZE_CLASS(21, 256, 1280) \
    SIZE_CLASS(22, 256, 1536) \
    SIZE_CLASS(23, 256, 1792) \
    SIZE_CLASS(24, 256, 2048) \
    SIZE_CLASS(25, 512, 2560) \
    SIZE_CLASS(26, 512, 3072) \
    SIZE_CLASS(27, 512, 3584) \

#define NBINS 28
#define SMALL_MAXCLASS 3584
#endif

#if (LG_TINY_MIN == 3 && LG_QUANTUM == 4 && LG_PAGE == 13)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 8, 8) \
    SIZE_CLASS(1, 8, 16) \
    SIZE_CLASS(2, 16, 32) \
    SIZE_CLASS(3, 16, 48) \
    SIZE_CLASS(4, 16, 64) \
    SIZE_CLASS(5, 16, 80) \
    SIZE_CLASS(6, 16, 96) \
    SIZE_CLASS(7, 16, 112) \
    SIZE_CLASS(8, 16, 128) \
    SIZE_CLASS(9, 32, 160) \
    SIZE_CLASS(10, 32, 192) \
    SIZE_CLASS(11, 32, 224) \
    SIZE_CLASS(12, 32, 256) \
    SIZE_CLASS(13, 64, 320) \
    SIZE_CLASS(14, 64, 384) \
    SIZE_CLASS(15, 64, 448) \
    SIZE_CLASS(16, 64, 512) \
    SIZE_CLASS(17, 128, 640) \
    SIZE_CLASS(18, 128, 768) \
    SIZE_CLASS(19, 128, 896) \
    SIZE_CLASS(20, 128, 1024) \
    SIZE_CLASS(21, 256, 1280) \
    SIZE_CLASS(22, 256, 1536) \
    SIZE_CLASS(23, 256, 1792) \
    SIZE_CLASS(24, 256, 2048) \
    SIZE_CLASS(25, 512, 2560) \
    SIZE_CLASS(26, 512, 3072) \
    SIZE_CLASS(27, 512, 3584) \
    SIZE_CLASS(28, 512, 4096) \
    SIZE_CLASS(29, 1024, 5120) \
    SIZE_CLASS(30, 1024, 6144) \
    SIZE_CLASS(31, 1024, 7168) \

#define NBINS 32
#define SMALL_MAXCLASS 7168
#endif

#if (LG_TINY_MIN == 3 && LG_QUANTUM == 4 && LG_PAGE == 14)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 8, 8) \
    SIZE_CLASS(1, 8, 16) \
    SIZE_CLASS(2, 16, 32) \
    SIZE_CLASS(3, 16, 48) \
    SIZE_CLASS(4, 16, 64) \
    SIZE_CLASS(5, 16, 80) \
    SIZE_CLASS(6, 16, 96) \
    SIZE_CLASS(7, 16, 112) \
    SIZE_CLASS(8, 16, 128) \
    SIZE_CLASS(9, 32, 160) \
    SIZE_CLASS(10, 32, 192) \
    SIZE_CLASS(11, 32, 224) \
    SIZE_CLASS(12, 32, 256) \
    SIZE_CLASS(13, 64, 320) \
    SIZE_CLASS(14, 64, 384) \
    SIZE_CLASS(15, 64, 448) \
    SIZE_CLASS(16, 64, 512) \
    SIZE_CLASS(17, 128, 640) \
    SIZE_CLASS(18, 128, 768) \
    SIZE_CLASS(19, 128, 896) \
    SIZE_CLASS(20, 128, 1024) \
    SIZE_CLASS(21, 256, 1280) \
    SIZE_CLASS(22, 256, 1536) \
    SIZE_CLASS(23, 256, 1792) \
    SIZE_CLASS(24, 256, 2048) \
    SIZE_CLASS(25, 512, 2560) \
    SIZE_CLASS(26, 512, 3072) \
    SIZE_CLASS(27, 512, 3584) \
    SIZE_CLASS(28, 512, 4096) \
    SIZE_CLASS(29, 1024, 5120) \
    SIZE_CLASS(30, 1024, 6144) \
    SIZE_CLASS(31, 1024, 7168) \
    SIZE_CLASS(32, 1024, 8192) \
    SIZE_CLASS(33, 2048, 10240) \
    SIZE_CLASS(34, 2048, 12288) \
    SIZE_CLASS(35, 2048, 14336) \

#define NBINS 36
#define SMALL_MAXCLASS 14336
#endif

#if (LG_TINY_MIN == 3 && LG_QUANTUM == 4 && LG_PAGE == 15)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 8, 8) \
    SIZE_CLASS(1, 8, 16) \
    SIZE_CLASS(2, 16, 32) \
    SIZE_CLASS(3, 16, 48) \
    SIZE_CLASS(4, 16, 64) \
    SIZE_CLASS(5, 16, 80) \
    SIZE_CLASS(6, 16, 96) \
    SIZE_CLASS(7, 16, 112) \
    SIZE_CLASS(8, 16, 128) \
    SIZE_CLASS(9, 32, 160) \
    SIZE_CLASS(10, 32, 192) \
    SIZE_CLASS(11, 32, 224) \
    SIZE_CLASS(12, 32, 256) \
    SIZE_CLASS(13, 64, 320) \
    SIZE_CLASS(14, 64, 384) \
    SIZE_CLASS(15, 64, 448) \
    SIZE_CLASS(16, 64, 512) \
    SIZE_CLASS(17, 128, 640) \
    SIZE_CLASS(18, 128, 768) \
    SIZE_CLASS(19, 128, 896) \
    SIZE_CLASS(20, 128, 1024) \
    SIZE_CLASS(21, 256, 1280) \
    SIZE_CLASS(22, 256, 1536) \
    SIZE_CLASS(23, 256, 1792) \
    SIZE_CLASS(24, 256, 2048) \
    SIZE_CLASS(25, 512, 2560) \
    SIZE_CLASS(26, 512, 3072) \
    SIZE_CLASS(27, 512, 3584) \
    SIZE_CLASS(28, 512, 4096) \
    SIZE_CLASS(29, 1024, 5120) \
    SIZE_CLASS(30, 1024, 6144) \
    SIZE_CLASS(31, 1024, 7168) \
    SIZE_CLASS(32, 1024, 8192) \
    SIZE_CLASS(33, 2048, 10240) \
    SIZE_CLASS(34, 2048, 12288) \
    SIZE_CLASS(35, 2048, 14336) \
    SIZE_CLASS(36, 2048, 16384) \
    SIZE_CLASS(37, 4096, 20480) \
    SIZE_CLASS(38, 4096, 24576) \
    SIZE_CLASS(39, 4096, 28672) \

#define NBINS 40
#define SMALL_MAXCLASS 28672
#endif

#if (LG_TINY_MIN == 3 && LG_QUANTUM == 4 && LG_PAGE == 16)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 8, 8) \
    SIZE_CLASS(1, 8, 16) \
    SIZE_CLASS(2, 16, 32) \
    SIZE_CLASS(3, 16, 48) \
    SIZE_CLASS(4, 16, 64) \
    SIZE_CLASS(5, 16, 80) \
    SIZE_CLASS(6, 16, 96) \
    SIZE_CLASS(7, 16, 112) \
    SIZE_CLASS(8, 16, 128) \
    SIZE_CLASS(9, 32, 160) \
    SIZE_CLASS(10, 32, 192) \
    SIZE_CLASS(11, 32, 224) \
    SIZE_CLASS(12, 32, 256) \
    SIZE_CLASS(13, 64, 320) \
    SIZE_CLASS(14, 64, 384) \
    SIZE_CLASS(15, 64, 448) \
    SIZE_CLASS(16, 64, 512) \
    SIZE_CLASS(17, 128, 640) \
    SIZE_CLASS(18, 128, 768) \
    SIZE_CLASS(19, 128, 896) \
    SIZE_CLASS(20, 128, 1024) \
    SIZE_CLASS(21, 256, 1280) \
    SIZE_CLASS(22, 256, 1536) \
    SIZE_CLASS(23, 256, 1792) \
    SIZE_CLASS(24, 256, 2048) \
    SIZE_CLASS(25, 512, 2560) \
    SIZE_CLASS(26, 512, 3072) \
    SIZE_CLASS(27, 512, 3584) \
    SIZE_CLASS(28, 512, 4096) \
    SIZE_CLASS(29, 1024, 5120) \
    SIZE_CLASS(30, 1024, 6144) \
    SIZE_CLASS(31, 1024, 7168) \
    SIZE_CLASS(32, 1024, 8192) \
    SIZE_CLASS(33, 2048, 10240) \
    SIZE_CLASS(34, 2048, 12288) \
    SIZE_CLASS(35, 2048, 14336) \
    SIZE_CLASS(36, 2048, 16384) \
    SIZE_CLASS(37, 4096, 20480) \
    SIZE_CLASS(38, 4096, 24576) \
    SIZE_CLASS(39, 4096, 28672) \
    SIZE_CLASS(40, 4096, 32768) \
    SIZE_CLASS(41, 8192, 40960) \
    SIZE_CLASS(42, 8192, 49152) \
    SIZE_CLASS(43, 8192, 57344) \

#define NBINS 44
#define SMALL_MAXCLASS 57344
#endif

#if (LG_TINY_MIN == 4 && LG_QUANTUM == 4 && LG_PAGE == 12)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 16, 16) \
    SIZE_CLASS(1, 16, 32) \
    SIZE_CLASS(2, 16, 48) \
    SIZE_CLASS(3, 16, 64) \
    SIZE_CLASS(4, 16, 80) \
    SIZE_CLASS(5, 16, 96) \
    SIZE_CLASS(6, 16, 112) \
    SIZE_CLASS(7, 16, 128) \
    SIZE_CLASS(8, 32, 160) \
    SIZE_CLASS(9, 32, 192) \
    SIZE_CLASS(10, 32, 224) \
    SIZE_CLASS(11, 32, 256) \
    SIZE_CLASS(12, 64, 320) \
    SIZE_CLASS(13, 64, 384) \
    SIZE_CLASS(14, 64, 448) \
    SIZE_CLASS(15, 64, 512) \
    SIZE_CLASS(16, 128, 640) \
    SIZE_CLASS(17, 128, 768) \
    SIZE_CLASS(18, 128, 896) \
    SIZE_CLASS(19, 128, 1024) \
    SIZE_CLASS(20, 256, 1280) \
    SIZE_CLASS(21, 256, 1536) \
    SIZE_CLASS(22, 256, 1792) \
    SIZE_CLASS(23, 256, 2048) \
    SIZE_CLASS(24, 512, 2560) \
    SIZE_CLASS(25, 512, 3072) \
    SIZE_CLASS(26, 512, 3584) \

#define NBINS 27
#define SMALL_MAXCLASS 3584
#endif

#if (LG_TINY_MIN == 4 && LG_QUANTUM == 4 && LG_PAGE == 13)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 16, 16) \
    SIZE_CLASS(1, 16, 32) \
    SIZE_CLASS(2, 16, 48) \
    SIZE_CLASS(3, 16, 64) \
    SIZE_CLASS(4, 16, 80) \
    SIZE_CLASS(5, 16, 96) \
    SIZE_CLASS(6, 16, 112) \
    SIZE_CLASS(7, 16, 128) \
    SIZE_CLASS(8, 32, 160) \
    SIZE_CLASS(9, 32, 192) \
    SIZE_CLASS(10, 32, 224) \
    SIZE_CLASS(11, 32, 256) \
    SIZE_CLASS(12, 64, 320) \
    SIZE_CLASS(13, 64, 384) \
    SIZE_CLASS(14, 64, 448) \
    SIZE_CLASS(15, 64, 512) \
    SIZE_CLASS(16, 128, 640) \
    SIZE_CLASS(17, 128, 768) \
    SIZE_CLASS(18, 128, 896) \
    SIZE_CLASS(19, 128, 1024) \
    SIZE_CLASS(20, 256, 1280) \
    SIZE_CLASS(21, 256, 1536) \
    SIZE_CLASS(22, 256, 1792) \
    SIZE_CLASS(23, 256, 2048) \
    SIZE_CLASS(24, 512, 2560) \
    SIZE_CLASS(25, 512, 3072) \
    SIZE_CLASS(26, 512, 3584) \
    SIZE_CLASS(27, 512, 4096) \
    SIZE_CLASS(28, 1024, 5120) \
    SIZE_CLASS(29, 1024, 6144) \
    SIZE_CLASS(30, 1024, 7168) \

#define NBINS 31
#define SMALL_MAXCLASS 7168
#endif

#if (LG_TINY_MIN == 4 && LG_QUANTUM == 4 && LG_PAGE == 14)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 16, 16) \
    SIZE_CLASS(1, 16, 32) \
    SIZE_CLASS(2, 16, 48) \
    SIZE_CLASS(3, 16, 64) \
    SIZE_CLASS(4, 16, 80) \
    SIZE_CLASS(5, 16, 96) \
    SIZE_CLASS(6, 16, 112) \
    SIZE_CLASS(7, 16, 128) \
    SIZE_CLASS(8, 32, 160) \
    SIZE_CLASS(9, 32, 192) \
    SIZE_CLASS(10, 32, 224) \
    SIZE_CLASS(11, 32, 256) \
    SIZE_CLASS(12, 64, 320) \
    SIZE_CLASS(13, 64, 384) \
    SIZE_CLASS(14, 64, 448) \
    SIZE_CLASS(15, 64, 512) \
    SIZE_CLASS(16, 128, 640) \
    SIZE_CLASS(17, 128, 768) \
    SIZE_CLASS(18, 128, 896) \
    SIZE_CLASS(19, 128, 1024) \
    SIZE_CLASS(20, 256, 1280) \
    SIZE_CLASS(21, 256, 1536) \
    SIZE_CLASS(22, 256, 1792) \
    SIZE_CLASS(23, 256, 2048) \
    SIZE_CLASS(24, 512, 2560) \
    SIZE_CLASS(25, 512, 3072) \
    SIZE_CLASS(26, 512, 3584) \
    SIZE_CLASS(27, 512, 4096) \
    SIZE_CLASS(28, 1024, 5120) \
    SIZE_CLASS(29, 1024, 6144) \
    SIZE_CLASS(30, 1024, 7168) \
    SIZE_CLASS(31, 1024, 8192) \
    SIZE_CLASS(32, 2048, 10240) \
    SIZE_CLASS(33, 2048, 12288) \
    SIZE_CLASS(34, 2048, 14336) \

#define NBINS 35
#define SMALL_MAXCLASS 14336
#endif

#if (LG_TINY_MIN == 4 && LG_QUANTUM == 4 && LG_PAGE == 15)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 16, 16) \
    SIZE_CLASS(1, 16, 32) \
    SIZE_CLASS(2, 16, 48) \
    SIZE_CLASS(3, 16, 64) \
    SIZE_CLASS(4, 16, 80) \
    SIZE_CLASS(5, 16, 96) \
    SIZE_CLASS(6, 16, 112) \
    SIZE_CLASS(7, 16, 128) \
    SIZE_CLASS(8, 32, 160) \
    SIZE_CLASS(9, 32, 192) \
    SIZE_CLASS(10, 32, 224) \
    SIZE_CLASS(11, 32, 256) \
    SIZE_CLASS(12, 64, 320) \
    SIZE_CLASS(13, 64, 384) \
    SIZE_CLASS(14, 64, 448) \
    SIZE_CLASS(15, 64, 512) \
    SIZE_CLASS(16, 128, 640) \
    SIZE_CLASS(17, 128, 768) \
    SIZE_CLASS(18, 128, 896) \
    SIZE_CLASS(19, 128, 1024) \
    SIZE_CLASS(20, 256, 1280) \
    SIZE_CLASS(21, 256, 1536) \
    SIZE_CLASS(22, 256, 1792) \
    SIZE_CLASS(23, 256, 2048) \
    SIZE_CLASS(24, 512, 2560) \
    SIZE_CLASS(25, 512, 3072) \
    SIZE_CLASS(26, 512, 3584) \
    SIZE_CLASS(27, 512, 4096) \
    SIZE_CLASS(28, 1024, 5120) \
    SIZE_CLASS(29, 1024, 6144) \
    SIZE_CLASS(30, 1024, 7168) \
    SIZE_CLASS(31, 1024, 8192) \
    SIZE_CLASS(32, 2048, 10240) \
    SIZE_CLASS(33, 2048, 12288) \
    SIZE_CLASS(34, 2048, 14336) \
    SIZE_CLASS(35, 2048, 16384) \
    SIZE_CLASS(36, 4096, 20480) \
    SIZE_CLASS(37, 4096, 24576) \
    SIZE_CLASS(38, 4096, 28672) \

#define NBINS 39
#define SMALL_MAXCLASS 28672
#endif

#if (LG_TINY_MIN == 4 && LG_QUANTUM == 4 && LG_PAGE == 16)
#define SIZE_CLASSES_DEFINED
/* SIZE_CLASS(bin, delta, sz) */
#define SIZE_CLASSES \
    SIZE_CLASS(0, 16, 16) \
    SIZE_CLASS(1, 16, 32) \
    SIZE_CLASS(2, 16, 48) \
    SIZE_CLASS(3, 16, 64) \
    SIZE_CLASS(4, 16, 80) \
    SIZE_CLASS(5, 16, 96) \
    SIZE_CLASS(6, 16, 112) \
    SIZE_CLASS(7, 16, 128) \
    SIZE_CLASS(8, 32, 160) \
    SIZE_CLASS(9, 32, 192) \
    SIZE_CLASS(10, 32, 224) \
    SIZE_CLASS(11, 32, 256) \
    SIZE_CLASS(12, 64, 320) \
    SIZE_CLASS(13, 64, 384) \
    SIZE_CLASS(14, 64, 448) \
    SIZE_CLASS(15, 64, 512) \
    SIZE_CLASS(16, 128, 640) \
    SIZE_CLASS(17, 128, 768) \
    SIZE_CLASS(18, 128, 896) \
    SIZE_CLASS(19, 128, 1024) \
    SIZE_CLASS(20, 256, 1280) \
    SIZE_CLASS(21, 256, 1536) \
    SIZE_CLASS(22, 256, 1792) \
    SIZE_CLASS(23, 256, 2048) \
    SIZE_CLASS(24, 512, 2560) \
    SIZE_CLASS(25, 512, 3072) \
    SIZE_CLASS(26, 512, 3584) \
    SIZE_CLASS(27, 512, 4096) \
    SIZE_CLASS(28, 1024, 5120) \
    SIZE_CLASS(29, 1024, 6144) \
    SIZE_CLASS(30, 1024, 7168) \
    SIZE_CLASS(31, 1024, 8192) \
    SIZE_CLASS(32, 2048, 10240) \
    SIZE_CLASS(33, 2048, 12288) \
    SIZE_CLASS(34, 2048, 14336) \
    SIZE_CLASS(35, 2048, 16384) \
    SIZE_CLASS(36, 4096, 20480) \
    SIZE_CLASS(37, 4096, 24576) \
    SIZE_CLASS(38, 4096, 28672) \
    SIZE_CLASS(39, 4096, 32768) \
    SIZE_CLASS(40, 8192, 40960) \
    SIZE_CLASS(41, 8192, 49152) \
    SIZE_CLASS(42, 8192, 57344) \

#define NBINS 43
#define SMALL_MAXCLASS 57344
#endif

#ifndef SIZE_CLASSES_DEFINED
#  error "No size class definitions match configuration"
#endif
#undef SIZE_CLASSES_DEFINED
/*
 * The small_size2bin lookup table uses uint8_t to encode each bin index, so we
 * cannot support more than 256 small size classes. Further constrain NBINS to
 * 255 to support prof_promote, since all small size classes, plus a "not
 * small" size class must be stored in 8 bits of arena_chunk_map_t's bits
 * field.
 */
#if (NBINS > 255)
#  error "Too many small size classes"
#endif

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS


#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS


#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES


#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
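The SIZE_CLASSES tables above are X-macros: a consumer defines
SIZE_CLASS(bin, delta, sz) locally, expands SIZE_CLASSES, then undefines it.
A minimal standalone sketch of that consumption pattern (the three-entry
table below is a stand-in, not the generated one):

    #include <stdio.h>

    /* Stand-in for the generated SIZE_CLASSES (first three classes only). */
    #define SIZE_CLASSES \
        SIZE_CLASS(0, 8, 8) \
        SIZE_CLASS(1, 8, 16) \
        SIZE_CLASS(2, 8, 24)

    typedef struct {
        unsigned bin;   /* Bin index. */
        unsigned delta; /* Spacing from the previous size class. */
        unsigned sz;    /* Usable size in bytes. */
    } size_class_row_t;

    static const size_class_row_t size_class_table[] = {
    #define SIZE_CLASS(bin, delta, sz) {bin, delta, sz},
        SIZE_CLASSES
    #undef SIZE_CLASS
    };

    int
    main(void)
    {
        unsigned i;

        for (i = 0; i < sizeof(size_class_table) /
            sizeof(size_class_table[0]); i++)
            printf("bin %u: size %u\n", size_class_table[i].bin,
                size_class_table[i].sz);
        return (0);
    }

The same expansion trick lets other translation units build lookup tables or
switch statements from one authoritative list, which is why the generated
header carries data rather than code.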
173
contrib/jemalloc/include/jemalloc/internal/stats.h
Normal file
@ -0,0 +1,173 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

typedef struct tcache_bin_stats_s tcache_bin_stats_t;
typedef struct malloc_bin_stats_s malloc_bin_stats_t;
typedef struct malloc_large_stats_s malloc_large_stats_t;
typedef struct arena_stats_s arena_stats_t;
typedef struct chunk_stats_s chunk_stats_t;

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

struct tcache_bin_stats_s {
    /*
     * Number of allocation requests that corresponded to the size of this
     * bin.
     */
    uint64_t    nrequests;
};

struct malloc_bin_stats_s {
    /*
     * Current number of bytes allocated, including objects currently
     * cached by tcache.
     */
    size_t      allocated;

    /*
     * Total number of allocation/deallocation requests served directly by
     * the bin. Note that tcache may allocate an object, then recycle it
     * many times, resulting in many increments to nrequests, but only one
     * each to nmalloc and ndalloc.
     */
    uint64_t    nmalloc;
    uint64_t    ndalloc;

    /*
     * Number of allocation requests that correspond to the size of this
     * bin. This includes requests served by tcache, though tcache only
     * periodically merges into this counter.
     */
    uint64_t    nrequests;

    /* Number of tcache fills from this bin. */
    uint64_t    nfills;

    /* Number of tcache flushes to this bin. */
    uint64_t    nflushes;

    /* Total number of runs created for this bin's size class. */
    uint64_t    nruns;

    /*
     * Total number of runs reused by extracting them from the runs tree for
     * this bin's size class.
     */
    uint64_t    reruns;

    /* Current number of runs in this bin. */
    size_t      curruns;
};

struct malloc_large_stats_s {
    /*
     * Total number of allocation/deallocation requests served directly by
     * the arena. Note that tcache may allocate an object, then recycle it
     * many times, resulting in many increments to nrequests, but only one
     * each to nmalloc and ndalloc.
     */
    uint64_t    nmalloc;
    uint64_t    ndalloc;

    /*
     * Number of allocation requests that correspond to this size class.
     * This includes requests served by tcache, though tcache only
     * periodically merges into this counter.
     */
    uint64_t    nrequests;

    /* Current number of runs of this size class. */
    size_t      curruns;
};

struct arena_stats_s {
    /* Number of bytes currently mapped. */
    size_t      mapped;

    /*
     * Total number of purge sweeps, total number of madvise calls made,
     * and total pages purged in order to keep dirty unused memory under
     * control.
     */
    uint64_t    npurge;
    uint64_t    nmadvise;
    uint64_t    purged;

    /* Per-size-category statistics. */
    size_t      allocated_large;
    uint64_t    nmalloc_large;
    uint64_t    ndalloc_large;
    uint64_t    nrequests_large;

    /*
     * One element for each possible size class, including sizes that
     * overlap with bin size classes. This is necessary because ipalloc()
     * sometimes has to use such large objects in order to assure proper
     * alignment.
     */
    malloc_large_stats_t    *lstats;
};

struct chunk_stats_s {
    /* Number of chunks that were allocated. */
    uint64_t    nchunks;

    /* High-water mark for number of chunks allocated. */
    size_t      highchunks;

    /*
     * Current number of chunks allocated. This value isn't maintained for
     * any other purpose, so keep track of it in order to be able to set
     * highchunks.
     */
    size_t      curchunks;
};

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

extern bool opt_stats_print;

extern size_t stats_cactive;

void    stats_print(void (*write)(void *, const char *), void *cbopaque,
    const char *opts);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#ifndef JEMALLOC_ENABLE_INLINE
size_t  stats_cactive_get(void);
void    stats_cactive_add(size_t size);
void    stats_cactive_sub(size_t size);
#endif

#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_STATS_C_))
JEMALLOC_INLINE size_t
stats_cactive_get(void)
{

    return (atomic_read_z(&stats_cactive));
}

JEMALLOC_INLINE void
stats_cactive_add(size_t size)
{

    atomic_add_z(&stats_cactive, size);
}

JEMALLOC_INLINE void
stats_cactive_sub(size_t size)
{

    atomic_sub_z(&stats_cactive, size);
}
#endif

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
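The stats_cactive_* wrappers above are thin veneers over jemalloc's internal
atomic_*_z operations, so the active-bytes counter can be updated and read
without holding any lock. A standalone C11 equivalent of the same pattern (a
sketch only; jemalloc's actual atomic layer predates C11 and differs):

    #include <stdatomic.h>
    #include <stddef.h>

    /* Standalone stand-in for stats_cactive and its wrappers. */
    static _Atomic size_t example_cactive;

    static inline size_t
    example_cactive_get(void)
    {
        /* A point-in-time reading; concurrent updates race benignly. */
        return (atomic_load(&example_cactive));
    }

    static inline void
    example_cactive_add(size_t size)
    {
        atomic_fetch_add(&example_cactive, size);
    }

    static inline void
    example_cactive_sub(size_t size)
    {
        atomic_fetch_sub(&example_cactive, size);
    }

Keeping the counter atomic rather than mutex-protected matters here because
every run allocation and deallocation touches it.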
494
contrib/jemalloc/include/jemalloc/internal/tcache.h
Normal file
494
contrib/jemalloc/include/jemalloc/internal/tcache.h
Normal file
@ -0,0 +1,494 @@
|
||||
/******************************************************************************/
|
||||
#ifdef JEMALLOC_H_TYPES
|
||||
|
||||
typedef struct tcache_bin_info_s tcache_bin_info_t;
|
||||
typedef struct tcache_bin_s tcache_bin_t;
|
||||
typedef struct tcache_s tcache_t;
|
||||
|
||||
/*
|
||||
* tcache pointers close to NULL are used to encode state information that is
|
||||
* used for two purposes: preventing thread caching on a per thread basis and
|
||||
* cleaning up during thread shutdown.
|
||||
*/
|
||||
#define TCACHE_STATE_DISABLED ((tcache_t *)(uintptr_t)1)
|
||||
#define TCACHE_STATE_REINCARNATED ((tcache_t *)(uintptr_t)2)
|
||||
#define TCACHE_STATE_PURGATORY ((tcache_t *)(uintptr_t)3)
|
||||
#define TCACHE_STATE_MAX TCACHE_STATE_PURGATORY
|
||||
|
||||
/*
|
||||
* Absolute maximum number of cache slots for each small bin in the thread
|
||||
* cache. This is an additional constraint beyond that imposed as: twice the
|
||||
* number of regions per run for this size class.
|
||||
*
|
||||
* This constant must be an even number.
|
||||
*/
|
||||
#define TCACHE_NSLOTS_SMALL_MAX 200
|
||||
|
||||
/* Number of cache slots for large size classes. */
|
||||
#define TCACHE_NSLOTS_LARGE 20
|
||||
|
||||
/* (1U << opt_lg_tcache_max) is used to compute tcache_maxclass. */
|
||||
#define LG_TCACHE_MAXCLASS_DEFAULT 15
|
||||
|
||||
/*
|
||||
* TCACHE_GC_SWEEP is the approximate number of allocation events between
|
||||
* full GC sweeps. Integer rounding may cause the actual number to be
|
||||
* slightly higher, since GC is performed incrementally.
|
||||
*/
|
||||
#define TCACHE_GC_SWEEP 8192
|
||||
|
||||
/* Number of tcache allocation/deallocation events between incremental GCs. */
|
||||
#define TCACHE_GC_INCR \
|
||||
((TCACHE_GC_SWEEP / NBINS) + ((TCACHE_GC_SWEEP / NBINS == 0) ? 0 : 1))
|
||||
|
||||
#endif /* JEMALLOC_H_TYPES */
|
||||
/******************************************************************************/
|
||||
#ifdef JEMALLOC_H_STRUCTS
|
||||
|
||||
typedef enum {
|
||||
tcache_enabled_false = 0, /* Enable cast to/from bool. */
|
||||
tcache_enabled_true = 1,
|
||||
tcache_enabled_default = 2
|
||||
} tcache_enabled_t;
|
||||
|
||||
/*
|
||||
* Read-only information associated with each element of tcache_t's tbins array
|
||||
* is stored separately, mainly to reduce memory usage.
|
||||
*/
|
||||
struct tcache_bin_info_s {
|
||||
unsigned ncached_max; /* Upper limit on ncached. */
|
||||
};
|
||||
|
||||
struct tcache_bin_s {
|
||||
tcache_bin_stats_t tstats;
|
||||
int low_water; /* Min # cached since last GC. */
|
||||
unsigned lg_fill_div; /* Fill (ncached_max >> lg_fill_div). */
|
||||
unsigned ncached; /* # of cached objects. */
|
||||
void **avail; /* Stack of available objects. */
|
||||
};
|
||||
|
||||
struct tcache_s {
|
||||
ql_elm(tcache_t) link; /* Used for aggregating stats. */
|
||||
uint64_t prof_accumbytes;/* Cleared after arena_prof_accum() */
|
||||
arena_t *arena; /* This thread's arena. */
|
||||
unsigned ev_cnt; /* Event count since incremental GC. */
|
||||
unsigned next_gc_bin; /* Next bin to GC. */
|
||||
tcache_bin_t tbins[1]; /* Dynamically sized. */
|
||||
/*
|
||||
* The pointer stacks associated with tbins follow as a contiguous
|
||||
* array. During tcache initialization, the avail pointer in each
|
||||
* element of tbins is initialized to point to the proper offset within
|
||||
* this array.
|
||||
*/
|
||||
};
|
||||
|
||||
#endif /* JEMALLOC_H_STRUCTS */
|
||||
/******************************************************************************/
|
||||
#ifdef JEMALLOC_H_EXTERNS
|
||||
|
||||
extern bool opt_tcache;
|
||||
extern ssize_t opt_lg_tcache_max;
|
||||
|
||||
extern tcache_bin_info_t *tcache_bin_info;
|
||||
|
||||
/*
|
||||
* Number of tcache bins. There are NBINS small-object bins, plus 0 or more
|
||||
* large-object bins.
|
||||
*/
|
||||
extern size_t nhbins;
|
||||
|
||||
/* Maximum cached size class. */
|
||||
extern size_t tcache_maxclass;
|
||||
|
||||
void tcache_bin_flush_small(tcache_bin_t *tbin, size_t binind, unsigned rem,
|
||||
tcache_t *tcache);
|
||||
void tcache_bin_flush_large(tcache_bin_t *tbin, size_t binind, unsigned rem,
|
||||
tcache_t *tcache);
|
||||
void tcache_arena_associate(tcache_t *tcache, arena_t *arena);
|
||||
void tcache_arena_dissociate(tcache_t *tcache);
|
||||
tcache_t *tcache_create(arena_t *arena);
|
||||
void *tcache_alloc_small_hard(tcache_t *tcache, tcache_bin_t *tbin,
|
||||
size_t binind);
|
||||
void tcache_destroy(tcache_t *tcache);
|
||||
void tcache_thread_cleanup(void *arg);
|
||||
void tcache_stats_merge(tcache_t *tcache, arena_t *arena);
|
||||
bool tcache_boot0(void);
|
||||
bool tcache_boot1(void);
|
||||
|
||||
#endif /* JEMALLOC_H_EXTERNS */
|
||||
/******************************************************************************/
|
||||
#ifdef JEMALLOC_H_INLINES
|
||||
|
||||
#ifndef JEMALLOC_ENABLE_INLINE
|
||||
malloc_tsd_protos(JEMALLOC_ATTR(unused), tcache, tcache_t *)
|
||||
malloc_tsd_protos(JEMALLOC_ATTR(unused), tcache_enabled, tcache_enabled_t)
|
||||
|
||||
void tcache_event(tcache_t *tcache);
|
||||
void tcache_flush(void);
|
||||
bool tcache_enabled_get(void);
|
||||
tcache_t *tcache_get(bool create);
|
||||
void tcache_enabled_set(bool enabled);
|
||||
void *tcache_alloc_easy(tcache_bin_t *tbin);
|
||||
void *tcache_alloc_small(tcache_t *tcache, size_t size, bool zero);
|
||||
void *tcache_alloc_large(tcache_t *tcache, size_t size, bool zero);
|
||||
void tcache_dalloc_small(tcache_t *tcache, void *ptr);
|
||||
void tcache_dalloc_large(tcache_t *tcache, void *ptr, size_t size);
|
||||
#endif
|
||||
|
||||
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_TCACHE_C_))
|
||||
/* Map of thread-specific caches. */
|
||||
malloc_tsd_externs(tcache, tcache_t *)
|
||||
malloc_tsd_funcs(JEMALLOC_INLINE, tcache, tcache_t *, NULL,
|
||||
tcache_thread_cleanup)
|
||||
/* Per thread flag that allows thread caches to be disabled. */
|
||||
malloc_tsd_externs(tcache_enabled, tcache_enabled_t)
|
||||
malloc_tsd_funcs(JEMALLOC_INLINE, tcache_enabled, tcache_enabled_t,
|
||||
tcache_enabled_default, malloc_tsd_no_cleanup)
|
||||
|
||||
JEMALLOC_INLINE void
|
||||
tcache_flush(void)
|
||||
{
|
||||
tcache_t *tcache;
|
||||
|
||||
cassert(config_tcache);
|
||||
|
||||
tcache = *tcache_tsd_get();
|
||||
if ((uintptr_t)tcache <= (uintptr_t)TCACHE_STATE_MAX)
|
||||
return;
|
||||
tcache_destroy(tcache);
|
||||
tcache = NULL;
|
||||
tcache_tsd_set(&tcache);
|
||||
}
|
||||
|
||||
JEMALLOC_INLINE bool
|
||||
tcache_enabled_get(void)
|
||||
{
|
||||
tcache_enabled_t tcache_enabled;
|
||||
|
||||
cassert(config_tcache);
|
||||
|
||||
tcache_enabled = *tcache_enabled_tsd_get();
|
||||
if (tcache_enabled == tcache_enabled_default) {
|
||||
tcache_enabled = (tcache_enabled_t)opt_tcache;
|
||||
tcache_enabled_tsd_set(&tcache_enabled);
|
||||
}
|
||||
|
||||
return ((bool)tcache_enabled);
|
||||
}
|
||||
|
||||
JEMALLOC_INLINE void
|
||||
tcache_enabled_set(bool enabled)
|
||||
{
|
||||
tcache_enabled_t tcache_enabled;
|
||||
tcache_t *tcache;
|
||||
|
||||
cassert(config_tcache);
|
||||
|
||||
tcache_enabled = (tcache_enabled_t)enabled;
|
||||
tcache_enabled_tsd_set(&tcache_enabled);
|
||||
tcache = *tcache_tsd_get();
|
||||
if (enabled) {
|
||||
if (tcache == TCACHE_STATE_DISABLED) {
|
||||
tcache = NULL;
|
||||
tcache_tsd_set(&tcache);
|
||||
}
|
||||
} else /* disabled */ {
|
||||
if (tcache > TCACHE_STATE_MAX) {
|
||||
tcache_destroy(tcache);
|
||||
tcache = NULL;
|
||||
}
|
||||
if (tcache == NULL) {
|
||||
tcache = TCACHE_STATE_DISABLED;
|
||||
tcache_tsd_set(&tcache);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
JEMALLOC_INLINE tcache_t *
|
||||
tcache_get(bool create)
|
||||
{
|
||||
tcache_t *tcache;
|
||||
|
||||
if (config_tcache == false)
|
||||
return (NULL);
|
||||
if (config_lazy_lock && isthreaded == false)
|
||||
return (NULL);
|
||||
|
||||
tcache = *tcache_tsd_get();
|
||||
if ((uintptr_t)tcache <= (uintptr_t)TCACHE_STATE_MAX) {
|
||||
if (tcache == TCACHE_STATE_DISABLED)
|
||||
return (NULL);
|
||||
if (tcache == NULL) {
|
||||
if (create == false) {
|
||||
/*
|
||||
* Creating a tcache here would cause
|
||||
* allocation as a side effect of free().
|
||||
* Ordinarily that would be okay since
|
||||
* tcache_create() failure is a soft failure
|
||||
* that doesn't propagate. However, if TLS
|
||||
* data are freed via free() as in glibc,
|
||||
* subtle corruption could result from setting
|
||||
* a TLS variable after its backing memory is
|
||||
* freed.
|
||||
*/
|
||||
return (NULL);
|
||||
}
|
||||
if (tcache_enabled_get() == false) {
|
||||
tcache_enabled_set(false); /* Memoize. */
|
||||
return (NULL);
|
||||
}
|
||||
return (tcache_create(choose_arena(NULL)));
|
||||
}
|
||||
if (tcache == TCACHE_STATE_PURGATORY) {
|
||||
/*
|
||||
* Make a note that an allocator function was called
|
||||
* after tcache_thread_cleanup() was called.
|
||||
*/
|
||||
tcache = TCACHE_STATE_REINCARNATED;
|
||||
tcache_tsd_set(&tcache);
|
||||
return (NULL);
|
||||
}
|
||||
if (tcache == TCACHE_STATE_REINCARNATED)
|
||||
return (NULL);
|
||||
not_reached();
|
||||
}
|
||||
|
||||
return (tcache);
|
||||
}
|
||||
|
||||
JEMALLOC_INLINE void
|
||||
tcache_event(tcache_t *tcache)
|
||||
{
|
||||
|
||||
if (TCACHE_GC_INCR == 0)
|
||||
return;
|
||||
|
||||
tcache->ev_cnt++;
|
||||
assert(tcache->ev_cnt <= TCACHE_GC_INCR);
|
||||
if (tcache->ev_cnt == TCACHE_GC_INCR) {
|
||||
size_t binind = tcache->next_gc_bin;
|
||||
tcache_bin_t *tbin = &tcache->tbins[binind];
|
||||
tcache_bin_info_t *tbin_info = &tcache_bin_info[binind];
|
||||
|
||||
if (tbin->low_water > 0) {
|
||||
/*
|
||||
* Flush (ceiling) 3/4 of the objects below the low
|
||||
* water mark.
|
||||
*/
|
||||
if (binind < NBINS) {
|
||||
tcache_bin_flush_small(tbin, binind,
|
||||
tbin->ncached - tbin->low_water +
|
||||
(tbin->low_water >> 2), tcache);
|
||||
} else {
|
||||
tcache_bin_flush_large(tbin, binind,
|
||||
tbin->ncached - tbin->low_water +
|
||||
(tbin->low_water >> 2), tcache);
|
||||
}
|
||||
/*
|
||||
* Reduce fill count by 2X. Limit lg_fill_div such that
|
||||
* the fill count is always at least 1.
|
||||
*/
|
||||
if ((tbin_info->ncached_max >> (tbin->lg_fill_div+1))
|
||||
>= 1)
|
||||
tbin->lg_fill_div++;
|
||||
} else if (tbin->low_water < 0) {
|
||||
/*
|
||||
* Increase fill count by 2X. Make sure lg_fill_div
|
||||
* stays greater than 0.
|
||||
*/
|
||||
if (tbin->lg_fill_div > 1)
|
||||
tbin->lg_fill_div--;
|
||||
}
|
||||
tbin->low_water = tbin->ncached;
|
||||
|
||||
tcache->next_gc_bin++;
|
||||
if (tcache->next_gc_bin == nhbins)
|
||||
tcache->next_gc_bin = 0;
|
||||
tcache->ev_cnt = 0;
|
||||
}
|
||||
}
|
||||
|
||||
JEMALLOC_INLINE void *
|
||||
tcache_alloc_easy(tcache_bin_t *tbin)
|
||||
{
|
||||
void *ret;
|
||||
|
||||
if (tbin->ncached == 0) {
|
||||
tbin->low_water = -1;
|
||||
return (NULL);
|
||||
}
|
||||
tbin->ncached--;
|
||||
if ((int)tbin->ncached < tbin->low_water)
|
||||
tbin->low_water = tbin->ncached;
|
||||
ret = tbin->avail[tbin->ncached];
|
||||
return (ret);
|
||||
}
|
||||
|
||||
JEMALLOC_INLINE void *
|
||||
tcache_alloc_small(tcache_t *tcache, size_t size, bool zero)
|
||||
{
|
||||
void *ret;
|
||||
size_t binind;
|
||||
tcache_bin_t *tbin;
|
||||
|
||||
binind = SMALL_SIZE2BIN(size);
|
||||
assert(binind < NBINS);
|
||||
tbin = &tcache->tbins[binind];
|
||||
ret = tcache_alloc_easy(tbin);
|
||||
if (ret == NULL) {
|
||||
ret = tcache_alloc_small_hard(tcache, tbin, binind);
|
||||
if (ret == NULL)
|
||||
return (NULL);
|
||||
}
|
||||
assert(arena_salloc(ret, false) == arena_bin_info[binind].reg_size);
|
||||
|
||||
if (zero == false) {
|
||||
if (config_fill) {
|
||||
if (opt_junk) {
|
||||
arena_alloc_junk_small(ret,
|
||||
&arena_bin_info[binind], false);
|
||||
} else if (opt_zero)
|
||||
memset(ret, 0, size);
|
||||
}
|
||||
} else {
|
||||
if (config_fill && opt_junk) {
|
||||
            arena_alloc_junk_small(ret, &arena_bin_info[binind],
                true);
        }
        VALGRIND_MAKE_MEM_UNDEFINED(ret, size);
        memset(ret, 0, size);
    }

    if (config_stats)
        tbin->tstats.nrequests++;
    if (config_prof)
        tcache->prof_accumbytes += arena_bin_info[binind].reg_size;
    tcache_event(tcache);
    return (ret);
}

JEMALLOC_INLINE void *
tcache_alloc_large(tcache_t *tcache, size_t size, bool zero)
{
    void *ret;
    size_t binind;
    tcache_bin_t *tbin;

    size = PAGE_CEILING(size);
    assert(size <= tcache_maxclass);
    binind = NBINS + (size >> LG_PAGE) - 1;
    assert(binind < nhbins);
    tbin = &tcache->tbins[binind];
    ret = tcache_alloc_easy(tbin);
    if (ret == NULL) {
        /*
         * Only allocate one large object at a time, because it's quite
         * expensive to create one and not use it.
         */
        ret = arena_malloc_large(tcache->arena, size, zero);
        if (ret == NULL)
            return (NULL);
    } else {
        if (config_prof) {
            arena_chunk_t *chunk =
                (arena_chunk_t *)CHUNK_ADDR2BASE(ret);
            size_t pageind = (((uintptr_t)ret - (uintptr_t)chunk) >>
                LG_PAGE);
            chunk->map[pageind-map_bias].bits &=
                ~CHUNK_MAP_CLASS_MASK;
        }
        if (zero == false) {
            if (config_fill) {
                if (opt_junk)
                    memset(ret, 0xa5, size);
                else if (opt_zero)
                    memset(ret, 0, size);
            }
        } else {
            VALGRIND_MAKE_MEM_UNDEFINED(ret, size);
            memset(ret, 0, size);
        }

        if (config_stats)
            tbin->tstats.nrequests++;
        if (config_prof)
            tcache->prof_accumbytes += size;
    }

    tcache_event(tcache);
    return (ret);
}

JEMALLOC_INLINE void
tcache_dalloc_small(tcache_t *tcache, void *ptr)
{
    arena_t *arena;
    arena_chunk_t *chunk;
    arena_run_t *run;
    arena_bin_t *bin;
    tcache_bin_t *tbin;
    tcache_bin_info_t *tbin_info;
    size_t pageind, binind;
    arena_chunk_map_t *mapelm;

    assert(arena_salloc(ptr, false) <= SMALL_MAXCLASS);

    chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
    arena = chunk->arena;
    pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> LG_PAGE;
    mapelm = &chunk->map[pageind-map_bias];
    run = (arena_run_t *)((uintptr_t)chunk + (uintptr_t)((pageind -
        (mapelm->bits >> LG_PAGE)) << LG_PAGE));
    bin = run->bin;
    binind = ((uintptr_t)bin - (uintptr_t)&arena->bins) /
        sizeof(arena_bin_t);
    assert(binind < NBINS);

    if (config_fill && opt_junk)
        arena_dalloc_junk_small(ptr, &arena_bin_info[binind]);

    tbin = &tcache->tbins[binind];
    tbin_info = &tcache_bin_info[binind];
    if (tbin->ncached == tbin_info->ncached_max) {
        tcache_bin_flush_small(tbin, binind, (tbin_info->ncached_max >>
            1), tcache);
    }
    assert(tbin->ncached < tbin_info->ncached_max);
    tbin->avail[tbin->ncached] = ptr;
    tbin->ncached++;

    tcache_event(tcache);
}

JEMALLOC_INLINE void
tcache_dalloc_large(tcache_t *tcache, void *ptr, size_t size)
{
    size_t binind;
    tcache_bin_t *tbin;
    tcache_bin_info_t *tbin_info;

    assert((size & PAGE_MASK) == 0);
    assert(arena_salloc(ptr, false) > SMALL_MAXCLASS);
    assert(arena_salloc(ptr, false) <= tcache_maxclass);

    binind = NBINS + (size >> LG_PAGE) - 1;

    if (config_fill && opt_junk)
        memset(ptr, 0x5a, size);

    tbin = &tcache->tbins[binind];
    tbin_info = &tcache_bin_info[binind];
    if (tbin->ncached == tbin_info->ncached_max) {
        tcache_bin_flush_large(tbin, binind, (tbin_info->ncached_max >>
            1), tcache);
    }
    assert(tbin->ncached < tbin_info->ncached_max);
    tbin->avail[tbin->ncached] = ptr;
    tbin->ncached++;

    tcache_event(tcache);
}
#endif

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
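The large-object paths above map page-multiple sizes to tcache bins with binind = NBINS + (size >> LG_PAGE) - 1. As an illustration (not part of the imported sources, and assuming the usual LG_PAGE == 12), a one-page request lands in bin NBINS, a two-page request in NBINS + 1, and so on up to tcache_maxclass. A hypothetical helper showing just that computation:

/* Illustration only; NBINS, LG_PAGE, and PAGE_MASK come from the
 * internal headers. */
static size_t
tcache_large_binind_sketch(size_t size)
{

    assert((size & PAGE_MASK) == 0); /* caller already PAGE_CEILING()ed size */
    return (NBINS + (size >> LG_PAGE) - 1); /* 1 page -> NBINS, 2 -> NBINS+1 */
}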
309
contrib/jemalloc/include/jemalloc/internal/tsd.h
Normal file
@ -0,0 +1,309 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

/* Maximum number of malloc_tsd users with cleanup functions. */
#define MALLOC_TSD_CLEANUPS_MAX 8

typedef struct malloc_tsd_cleanup_s malloc_tsd_cleanup_t;
struct malloc_tsd_cleanup_s {
    bool (*f)(void *);
    void *arg;
};

/*
 * TLS/TSD-agnostic macro-based implementation of thread-specific data.  There
 * are four macros that support (at least) three use cases: file-private,
 * library-private, and library-private inlined.  Following is an example
 * library-private tsd variable:
 *
 * In example.h:
 *   typedef struct {
 *           int x;
 *           int y;
 *   } example_t;
 *   #define EX_INITIALIZER JEMALLOC_CONCAT({0, 0})
 *   malloc_tsd_protos(, example, example_t *)
 *   malloc_tsd_externs(example, example_t *)
 * In example.c:
 *   malloc_tsd_data(, example, example_t *, EX_INITIALIZER)
 *   malloc_tsd_funcs(, example, example_t *, EX_INITIALIZER,
 *       example_tsd_cleanup)
 *
 * The result is a set of generated functions, e.g.:
 *
 *   bool example_tsd_boot(void) {...}
 *   example_t **example_tsd_get() {...}
 *   void example_tsd_set(example_t **val) {...}
 *
 * Note that all of the functions deal in terms of (a_type *) rather than
 * (a_type) so that it is possible to support non-pointer types (unlike
 * pthreads TSD).  example_tsd_cleanup() is passed an (a_type *) pointer that is
 * cast to (void *).  This means that the cleanup function needs to cast *and*
 * dereference the function argument, e.g.:
 *
 *   void
 *   example_tsd_cleanup(void *arg)
 *   {
 *           example_t *example = *(example_t **)arg;
 *
 *           [...]
 *           if ([want the cleanup function to be called again]) {
 *                   example_tsd_set(&example);
 *           }
 *   }
 *
 * If example_tsd_set() is called within example_tsd_cleanup(), it will be
 * called again.  This is similar to how pthreads TSD destruction works, except
 * that pthreads only calls the cleanup function again if the value was set to
 * non-NULL.
 */

/* malloc_tsd_protos(). */
#define malloc_tsd_protos(a_attr, a_name, a_type) \
a_attr bool \
a_name##_tsd_boot(void); \
a_attr a_type * \
a_name##_tsd_get(void); \
a_attr void \
a_name##_tsd_set(a_type *val);

/* malloc_tsd_externs(). */
#ifdef JEMALLOC_MALLOC_THREAD_CLEANUP
#define malloc_tsd_externs(a_name, a_type) \
extern __thread a_type a_name##_tls; \
extern __thread bool a_name##_initialized; \
extern bool a_name##_booted;
#elif (defined(JEMALLOC_TLS))
#define malloc_tsd_externs(a_name, a_type) \
extern __thread a_type a_name##_tls; \
extern pthread_key_t a_name##_tsd; \
extern bool a_name##_booted;
#else
#define malloc_tsd_externs(a_name, a_type) \
extern pthread_key_t a_name##_tsd; \
extern bool a_name##_booted;
#endif

/* malloc_tsd_data(). */
#ifdef JEMALLOC_MALLOC_THREAD_CLEANUP
#define malloc_tsd_data(a_attr, a_name, a_type, a_initializer) \
a_attr __thread a_type JEMALLOC_TLS_MODEL \
    a_name##_tls = a_initializer; \
a_attr __thread bool JEMALLOC_TLS_MODEL \
    a_name##_initialized = false; \
a_attr bool a_name##_booted = false;
#elif (defined(JEMALLOC_TLS))
#define malloc_tsd_data(a_attr, a_name, a_type, a_initializer) \
a_attr __thread a_type JEMALLOC_TLS_MODEL \
    a_name##_tls = a_initializer; \
a_attr pthread_key_t a_name##_tsd; \
a_attr bool a_name##_booted = false;
#else
#define malloc_tsd_data(a_attr, a_name, a_type, a_initializer) \
a_attr pthread_key_t a_name##_tsd; \
a_attr bool a_name##_booted = false;
#endif

/* malloc_tsd_funcs(). */
#ifdef JEMALLOC_MALLOC_THREAD_CLEANUP
#define malloc_tsd_funcs(a_attr, a_name, a_type, a_initializer, \
    a_cleanup) \
/* Initialization/cleanup. */ \
a_attr bool \
a_name##_tsd_cleanup_wrapper(void *arg) \
{ \
    bool (*cleanup)(void *) = arg; \
\
    if (a_name##_initialized) { \
        a_name##_initialized = false; \
        cleanup(&a_name##_tls); \
    } \
    return (a_name##_initialized); \
} \
a_attr bool \
a_name##_tsd_boot(void) \
{ \
\
    if (a_cleanup != malloc_tsd_no_cleanup) { \
        malloc_tsd_cleanup_register( \
            &a_name##_tsd_cleanup_wrapper, a_cleanup); \
    } \
    a_name##_booted = true; \
    return (false); \
} \
/* Get/set. */ \
a_attr a_type * \
a_name##_tsd_get(void) \
{ \
\
    assert(a_name##_booted); \
    return (&a_name##_tls); \
} \
a_attr void \
a_name##_tsd_set(a_type *val) \
{ \
\
    assert(a_name##_booted); \
    a_name##_tls = (*val); \
    if (a_cleanup != malloc_tsd_no_cleanup) \
        a_name##_initialized = true; \
}
#elif (defined(JEMALLOC_TLS))
#define malloc_tsd_funcs(a_attr, a_name, a_type, a_initializer, \
    a_cleanup) \
/* Initialization/cleanup. */ \
a_attr bool \
a_name##_tsd_boot(void) \
{ \
\
    if (a_cleanup != malloc_tsd_no_cleanup) { \
        if (pthread_key_create(&a_name##_tsd, a_cleanup) != 0) \
            return (true); \
    } \
    a_name##_booted = true; \
    return (false); \
} \
/* Get/set. */ \
a_attr a_type * \
a_name##_tsd_get(void) \
{ \
\
    assert(a_name##_booted); \
    return (&a_name##_tls); \
} \
a_attr void \
a_name##_tsd_set(a_type *val) \
{ \
\
    assert(a_name##_booted); \
    a_name##_tls = (*val); \
    if (a_cleanup != malloc_tsd_no_cleanup) { \
        if (pthread_setspecific(a_name##_tsd, \
            (void *)(&a_name##_tls))) { \
            malloc_write("<jemalloc>: Error" \
                " setting TSD for "#a_name"\n"); \
            if (opt_abort) \
                abort(); \
        } \
    } \
}
#else
#define malloc_tsd_funcs(a_attr, a_name, a_type, a_initializer, \
    a_cleanup) \
/* Data structure. */ \
typedef struct { \
    bool isstatic; \
    bool initialized; \
    a_type val; \
} a_name##_tsd_wrapper_t; \
/* Initialization/cleanup. */ \
a_attr void \
a_name##_tsd_cleanup_wrapper(void *arg) \
{ \
    a_name##_tsd_wrapper_t *wrapper = (a_name##_tsd_wrapper_t *)arg; \
\
    if (a_cleanup != malloc_tsd_no_cleanup && \
        wrapper->initialized) { \
        wrapper->initialized = false; \
        a_cleanup(&wrapper->val); \
        if (wrapper->initialized) { \
            /* Trigger another cleanup round. */ \
            if (pthread_setspecific(a_name##_tsd, \
                (void *)wrapper)) { \
                malloc_write("<jemalloc>: Error" \
                    " setting TSD for "#a_name"\n"); \
                if (opt_abort) \
                    abort(); \
            } \
            return; \
        } \
    } \
    if (wrapper->isstatic == false) \
        malloc_tsd_dalloc(wrapper); \
} \
a_attr bool \
a_name##_tsd_boot(void) \
{ \
\
    if (pthread_key_create(&a_name##_tsd, \
        a_name##_tsd_cleanup_wrapper) != 0) \
        return (true); \
    a_name##_booted = true; \
    return (false); \
} \
/* Get/set. */ \
a_attr a_name##_tsd_wrapper_t * \
a_name##_tsd_get_wrapper(void) \
{ \
    a_name##_tsd_wrapper_t *wrapper = (a_name##_tsd_wrapper_t *) \
        pthread_getspecific(a_name##_tsd); \
\
    if (wrapper == NULL) { \
        wrapper = (a_name##_tsd_wrapper_t *) \
            malloc_tsd_malloc(sizeof(a_name##_tsd_wrapper_t)); \
        if (wrapper == NULL) { \
            static a_name##_tsd_wrapper_t \
                a_name##_tsd_static_data = \
                {true, false, a_initializer}; \
            malloc_write("<jemalloc>: Error allocating" \
                " TSD for "#a_name"\n"); \
            if (opt_abort) \
                abort(); \
            wrapper = &a_name##_tsd_static_data; \
        } else { \
            static a_type tsd_static_data = a_initializer; \
            wrapper->isstatic = false; \
            wrapper->val = tsd_static_data; \
        } \
        if (pthread_setspecific(a_name##_tsd, \
            (void *)wrapper)) { \
            malloc_write("<jemalloc>: Error setting" \
                " TSD for "#a_name"\n"); \
            if (opt_abort) \
                abort(); \
        } \
    } \
    return (wrapper); \
} \
a_attr a_type * \
a_name##_tsd_get(void) \
{ \
    a_name##_tsd_wrapper_t *wrapper; \
\
    assert(a_name##_booted); \
    wrapper = a_name##_tsd_get_wrapper(); \
    return (&wrapper->val); \
} \
a_attr void \
a_name##_tsd_set(a_type *val) \
{ \
    a_name##_tsd_wrapper_t *wrapper; \
\
    assert(a_name##_booted); \
    wrapper = a_name##_tsd_get_wrapper(); \
    wrapper->val = *(val); \
    if (a_cleanup != malloc_tsd_no_cleanup) \
        wrapper->initialized = true; \
}
#endif

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

void *malloc_tsd_malloc(size_t size);
void malloc_tsd_dalloc(void *wrapper);
void malloc_tsd_no_cleanup(void *);
void malloc_tsd_cleanup_register(bool (*f)(void *), void *arg);
void malloc_tsd_boot(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
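The macros above are exercised later in this same commit: src/chunk_mmap.c declares a file-private boolean tsd named mmap_unaligned with exactly this pattern. A minimal sketch of the same idiom (names hypothetical; the generated my_flag_tsd_boot() must succeed before get/set are used):

malloc_tsd_data(static, my_flag, bool, false)
malloc_tsd_funcs(JEMALLOC_INLINE, my_flag, bool, false,
    malloc_tsd_no_cleanup)

static void
my_flag_toggle_sketch(void)
{
    bool b = true;

    my_flag_tsd_set(&b);    /* store this thread's copy */
    b = *my_flag_tsd_get(); /* read it back; other threads are unaffected */
}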
146
contrib/jemalloc/include/jemalloc/internal/util.h
Normal file
@ -0,0 +1,146 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

/* Size of stack-allocated buffer passed to buferror(). */
#define BUFERROR_BUF 64

/*
 * Size of stack-allocated buffer used by malloc_{,v,vc}printf().  This must be
 * large enough for all possible uses within jemalloc.
 */
#define MALLOC_PRINTF_BUFSIZE 4096

/*
 * Wrap a cpp argument that contains commas such that it isn't broken up into
 * multiple arguments.
 */
#define JEMALLOC_CONCAT(...) __VA_ARGS__

/*
 * Silence compiler warnings due to uninitialized values.  This is used
 * wherever the compiler fails to recognize that the variable is never used
 * uninitialized.
 */
#ifdef JEMALLOC_CC_SILENCE
#  define JEMALLOC_CC_SILENCE_INIT(v) = v
#else
#  define JEMALLOC_CC_SILENCE_INIT(v)
#endif

/*
 * Define a custom assert() in order to reduce the chances of deadlock during
 * assertion failure.
 */
#ifndef assert
#define assert(e) do { \
    if (config_debug && !(e)) { \
        malloc_printf( \
            "<jemalloc>: %s:%d: Failed assertion: \"%s\"\n", \
            __FILE__, __LINE__, #e); \
        abort(); \
    } \
} while (0)
#endif

/* Use to assert a particular configuration, e.g., cassert(config_debug). */
#define cassert(c) do { \
    if ((c) == false) \
        assert(false); \
} while (0)

#ifndef not_reached
#define not_reached() do { \
    if (config_debug) { \
        malloc_printf( \
            "<jemalloc>: %s:%d: Unreachable code reached\n", \
            __FILE__, __LINE__); \
        abort(); \
    } \
} while (0)
#endif

#ifndef not_implemented
#define not_implemented() do { \
    if (config_debug) { \
        malloc_printf("<jemalloc>: %s:%d: Not implemented\n", \
            __FILE__, __LINE__); \
        abort(); \
    } \
} while (0)
#endif

#define assert_not_implemented(e) do { \
    if (config_debug && !(e)) \
        not_implemented(); \
} while (0)

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

extern void (*je_malloc_message)(void *wcbopaque, const char *s);

int buferror(int errnum, char *buf, size_t buflen);
uintmax_t malloc_strtoumax(const char *nptr, char **endptr, int base);

/*
 * malloc_vsnprintf() supports a subset of snprintf(3) that avoids floating
 * point math.
 */
int malloc_vsnprintf(char *str, size_t size, const char *format,
    va_list ap);
int malloc_snprintf(char *str, size_t size, const char *format, ...)
    JEMALLOC_ATTR(format(printf, 3, 4));
void malloc_vcprintf(void (*write_cb)(void *, const char *), void *cbopaque,
    const char *format, va_list ap);
void malloc_cprintf(void (*write)(void *, const char *), void *cbopaque,
    const char *format, ...) JEMALLOC_ATTR(format(printf, 3, 4));
void malloc_printf(const char *format, ...)
    JEMALLOC_ATTR(format(printf, 1, 2));

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#ifndef JEMALLOC_ENABLE_INLINE
size_t pow2_ceil(size_t x);
void malloc_write(const char *s);
#endif

#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_UTIL_C_))
/* Compute the smallest power of 2 that is >= x. */
JEMALLOC_INLINE size_t
pow2_ceil(size_t x)
{

    x--;
    x |= x >> 1;
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;
#if (LG_SIZEOF_PTR == 3)
    x |= x >> 32;
#endif
    x++;
    return (x);
}

/*
 * Wrapper around malloc_message() that avoids the need for
 * je_malloc_message(...) throughout the code.
 */
JEMALLOC_INLINE void
malloc_write(const char *s)
{

    je_malloc_message(NULL, s);
}
#endif

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
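A quick sanity check of pow2_ceil()'s bit-smearing loop (illustration only; an input of 0 wraps around and is not a supported argument):

assert(pow2_ceil(1) == 1);
assert(pow2_ceil(3) == 4);
assert(pow2_ceil(4) == 4);      /* powers of 2 are returned unchanged */
assert(pow2_ceil(4097) == 8192);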
141
contrib/jemalloc/include/jemalloc/jemalloc.h
Normal file
@ -0,0 +1,141 @@
#ifndef JEMALLOC_H_
#define JEMALLOC_H_
#ifdef __cplusplus
extern "C" {
#endif

#include <limits.h>
#include <strings.h>

#define JEMALLOC_VERSION "1.0.0-258-g9ef7f5dc34ff02f50d401e41c8d9a4a928e7c2aa"
#define JEMALLOC_VERSION_MAJOR 1
#define JEMALLOC_VERSION_MINOR 0
#define JEMALLOC_VERSION_BUGFIX 0
#define JEMALLOC_VERSION_NREV 258
#define JEMALLOC_VERSION_GID "9ef7f5dc34ff02f50d401e41c8d9a4a928e7c2aa"

#include "jemalloc_defs.h"
#include "jemalloc_FreeBSD.h"

#ifdef JEMALLOC_EXPERIMENTAL
#define ALLOCM_LG_ALIGN(la) (la)
#if LG_SIZEOF_PTR == 2
#define ALLOCM_ALIGN(a) (ffs(a)-1)
#else
#define ALLOCM_ALIGN(a) ((a < (size_t)INT_MAX) ? ffs(a)-1 : ffs(a>>32)+31)
#endif
#define ALLOCM_ZERO ((int)0x40)
#define ALLOCM_NO_MOVE ((int)0x80)

#define ALLOCM_SUCCESS 0
#define ALLOCM_ERR_OOM 1
#define ALLOCM_ERR_NOT_MOVED 2
#endif

/*
 * The je_ prefix on the following public symbol declarations is an artifact of
 * namespace management, and should be omitted in application code unless
 * JEMALLOC_NO_DEMANGLE is defined (see below).
 */
extern const char *je_malloc_conf;
extern void (*je_malloc_message)(void *, const char *);

void *je_malloc(size_t size) JEMALLOC_ATTR(malloc);
void *je_calloc(size_t num, size_t size) JEMALLOC_ATTR(malloc);
int je_posix_memalign(void **memptr, size_t alignment, size_t size)
    JEMALLOC_ATTR(nonnull(1));
void *je_aligned_alloc(size_t alignment, size_t size) JEMALLOC_ATTR(malloc);
void *je_realloc(void *ptr, size_t size);
void je_free(void *ptr);

size_t je_malloc_usable_size(const void *ptr);
void je_malloc_stats_print(void (*write_cb)(void *, const char *),
    void *je_cbopaque, const char *opts);
int je_mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen);
int je_mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp);
int je_mallctlbymib(const size_t *mib, size_t miblen, void *oldp,
    size_t *oldlenp, void *newp, size_t newlen);

#ifdef JEMALLOC_EXPERIMENTAL
int je_allocm(void **ptr, size_t *rsize, size_t size, int flags)
    JEMALLOC_ATTR(nonnull(1));
int je_rallocm(void **ptr, size_t *rsize, size_t size, size_t extra,
    int flags) JEMALLOC_ATTR(nonnull(1));
int je_sallocm(const void *ptr, size_t *rsize, int flags)
    JEMALLOC_ATTR(nonnull(1));
int je_dallocm(void *ptr, int flags) JEMALLOC_ATTR(nonnull(1));
int je_nallocm(size_t *rsize, size_t size, int flags);
#endif

/*
 * By default application code must explicitly refer to mangled symbol names,
 * so that it is possible to use jemalloc in conjunction with another allocator
 * in the same application.  Define JEMALLOC_MANGLE in order to cause automatic
 * name mangling that matches the API prefixing that happened as a result of
 * --with-mangling and/or --with-jemalloc-prefix configuration settings.
 */
#ifdef JEMALLOC_MANGLE
#ifndef JEMALLOC_NO_DEMANGLE
#define JEMALLOC_NO_DEMANGLE
#endif
#define malloc_conf je_malloc_conf
#define malloc_message je_malloc_message
#define malloc je_malloc
#define calloc je_calloc
#define posix_memalign je_posix_memalign
#define aligned_alloc je_aligned_alloc
#define realloc je_realloc
#define free je_free
#define malloc_usable_size je_malloc_usable_size
#define malloc_stats_print je_malloc_stats_print
#define mallctl je_mallctl
#define mallctlnametomib je_mallctlnametomib
#define mallctlbymib je_mallctlbymib
#define memalign je_memalign
#define valloc je_valloc
#ifdef JEMALLOC_EXPERIMENTAL
#define allocm je_allocm
#define rallocm je_rallocm
#define sallocm je_sallocm
#define dallocm je_dallocm
#define nallocm je_nallocm
#endif
#endif

/*
 * The je_* macros can be used as stable alternative names for the public
 * jemalloc API if JEMALLOC_NO_DEMANGLE is defined.  This is primarily meant
 * for use in jemalloc itself, but it can be used by application code to
 * provide isolation from the name mangling specified via --with-mangling
 * and/or --with-jemalloc-prefix.
 */
#ifndef JEMALLOC_NO_DEMANGLE
#undef je_malloc_conf
#undef je_malloc_message
#undef je_malloc
#undef je_calloc
#undef je_posix_memalign
#undef je_aligned_alloc
#undef je_realloc
#undef je_free
#undef je_malloc_usable_size
#undef je_malloc_stats_print
#undef je_mallctl
#undef je_mallctlnametomib
#undef je_mallctlbymib
#undef je_memalign
#undef je_valloc
#ifdef JEMALLOC_EXPERIMENTAL
#undef je_allocm
#undef je_rallocm
#undef je_sallocm
#undef je_dallocm
#undef je_nallocm
#endif
#endif

#ifdef __cplusplus
};
#endif
#endif /* JEMALLOC_H_ */
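For application code, the mallctl() interface declared above is the main introspection hook. A sketch of typical usage follows; the assumptions are that on FreeBSD these declarations are exported via <malloc_np.h> after this import, and that the "epoch"/"stats.allocated" mallctl names are available because the FreeBSD configuration enables JEMALLOC_STATS:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <malloc_np.h>  /* assumed location of mallctl() on FreeBSD */

int
main(void)
{
    size_t allocated, len = sizeof(allocated);
    uint64_t epoch = 1;

    free(malloc(4096)); /* give the allocator something to account */
    /* Refresh the stats snapshot, then read the total. */
    mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch));
    if (mallctl("stats.allocated", &allocated, &len, NULL, 0) == 0)
        printf("allocated: %zu bytes\n", allocated);
    return (0);
}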
76
contrib/jemalloc/include/jemalloc/jemalloc_FreeBSD.h
Normal file
@ -0,0 +1,76 @@
/*
 * Override settings that were generated in jemalloc_defs.h as necessary.
 */

#undef JEMALLOC_OVERRIDE_VALLOC

#ifndef MALLOC_PRODUCTION
#define JEMALLOC_DEBUG
#endif

/*
 * The following are architecture-dependent, so conditionally define them for
 * each supported architecture.
 */
#undef CPU_SPINWAIT
#undef JEMALLOC_TLS_MODEL
#undef STATIC_PAGE_SHIFT
#undef LG_SIZEOF_PTR
#undef LG_SIZEOF_INT
#undef LG_SIZEOF_LONG
#undef LG_SIZEOF_INTMAX_T

#ifdef __i386__
#  define LG_SIZEOF_PTR 2
#  define CPU_SPINWAIT __asm__ volatile("pause")
#  define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec")))
#endif
#ifdef __ia64__
#  define LG_SIZEOF_PTR 3
#endif
#ifdef __sparc64__
#  define LG_SIZEOF_PTR 3
#  define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec")))
#endif
#ifdef __amd64__
#  define LG_SIZEOF_PTR 3
#  define CPU_SPINWAIT __asm__ volatile("pause")
#  define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec")))
#endif
#ifdef __arm__
#  define LG_SIZEOF_PTR 2
#endif
#ifdef __mips__
#  define LG_SIZEOF_PTR 2
#endif
#ifdef __powerpc64__
#  define LG_SIZEOF_PTR 3
#elif defined(__powerpc__)
#  define LG_SIZEOF_PTR 2
#endif

#ifndef JEMALLOC_TLS_MODEL
#  define JEMALLOC_TLS_MODEL /* Default. */
#endif
#ifdef __clang__
#  undef JEMALLOC_TLS_MODEL
#  define JEMALLOC_TLS_MODEL /* clang does not support tls_model yet. */
#endif

#define STATIC_PAGE_SHIFT PAGE_SHIFT
#define LG_SIZEOF_INT 2
#define LG_SIZEOF_LONG LG_SIZEOF_PTR
#define LG_SIZEOF_INTMAX_T 3

/* Disable lazy-lock machinery, mangle isthreaded, and adjust its type. */
#undef JEMALLOC_LAZY_LOCK
extern int __isthreaded;
#define isthreaded ((bool)__isthreaded)

/* Mangle. */
#define open _open
#define read _read
#define write _write
#define close _close
#define pthread_mutex_lock _pthread_mutex_lock
#define pthread_mutex_unlock _pthread_mutex_unlock
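The LG_SIZEOF_* settings above and in jemalloc_defs.h encode sizeof() as base-2 logarithms, so a mismatch silently breaks size computations. A compile-time check in the pre-C11 style this tree uses (sketch; the typedef name is arbitrary and not part of the import):

/* Compilation fails (negative array size) if LG_SIZEOF_PTR is wrong. */
typedef char lg_sizeof_ptr_check_sketch
    [(sizeof(void *) == (1U << LG_SIZEOF_PTR)) ? 1 : -1];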
239
contrib/jemalloc/include/jemalloc/jemalloc_defs.h
Normal file
@ -0,0 +1,239 @@
/* include/jemalloc/jemalloc_defs.h.  Generated from jemalloc_defs.h.in by configure. */
/*
 * If JEMALLOC_PREFIX is defined via --with-jemalloc-prefix, it will cause all
 * public APIs to be prefixed.  This makes it possible, with some care, to use
 * multiple allocators simultaneously.
 */
/* #undef JEMALLOC_PREFIX */
/* #undef JEMALLOC_CPREFIX */

/*
 * Name mangling for public symbols is controlled by --with-mangling and
 * --with-jemalloc-prefix.  With default settings the je_ prefix is stripped by
 * these macro definitions.
 */
#define je_malloc_conf malloc_conf
#define je_malloc_message malloc_message
#define je_malloc malloc
#define je_calloc calloc
#define je_posix_memalign posix_memalign
#define je_aligned_alloc aligned_alloc
#define je_realloc realloc
#define je_free free
#define je_malloc_usable_size malloc_usable_size
#define je_malloc_stats_print malloc_stats_print
#define je_mallctl mallctl
#define je_mallctlnametomib mallctlnametomib
#define je_mallctlbymib mallctlbymib
/* #undef je_memalign */
#define je_valloc valloc
#define je_allocm allocm
#define je_rallocm rallocm
#define je_sallocm sallocm
#define je_dallocm dallocm
#define je_nallocm nallocm

/*
 * JEMALLOC_PRIVATE_NAMESPACE is used as a prefix for all library-private APIs.
 * For shared libraries, symbol visibility mechanisms prevent these symbols
 * from being exported, but for static libraries, naming collisions are a real
 * possibility.
 */
#define JEMALLOC_PRIVATE_NAMESPACE ""
#define JEMALLOC_N(string_that_no_one_should_want_to_use_as_a_jemalloc_private_namespace_prefix) string_that_no_one_should_want_to_use_as_a_jemalloc_private_namespace_prefix

/*
 * Hyper-threaded CPUs may need a special instruction inside spin loops in
 * order to yield to another virtual CPU.
 */
#define CPU_SPINWAIT __asm__ volatile("pause")

/*
 * Defined if OSAtomic*() functions are available, as provided by Darwin, and
 * documented in the atomic(3) manual page.
 */
/* #undef JEMALLOC_OSATOMIC */

/*
 * Defined if __sync_add_and_fetch(uint32_t *, uint32_t) and
 * __sync_sub_and_fetch(uint32_t *, uint32_t) are available, despite
 * __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 not being defined (which means the
 * functions are defined in libgcc instead of being inlines)
 */
#define JE_FORCE_SYNC_COMPARE_AND_SWAP_4

/*
 * Defined if __sync_add_and_fetch(uint64_t *, uint64_t) and
 * __sync_sub_and_fetch(uint64_t *, uint64_t) are available, despite
 * __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8 not being defined (which means the
 * functions are defined in libgcc instead of being inlines)
 */
#define JE_FORCE_SYNC_COMPARE_AND_SWAP_8

/*
 * Defined if OSSpin*() functions are available, as provided by Darwin, and
 * documented in the spinlock(3) manual page.
 */
/* #undef JEMALLOC_OSSPIN */

/*
 * Defined if _malloc_thread_cleanup() exists.  At least in the case of
 * FreeBSD, pthread_key_create() allocates, which if used during malloc
 * bootstrapping will cause recursion into the pthreads library.  Therefore, if
 * _malloc_thread_cleanup() exists, use it as the basis for thread cleanup in
 * malloc_tsd.
 */
#define JEMALLOC_MALLOC_THREAD_CLEANUP

/*
 * Defined if threaded initialization is known to be safe on this platform.
 * Among other things, it must be possible to initialize a mutex without
 * triggering allocation in order for threaded allocation to be safe.
 */
/* #undef JEMALLOC_THREADED_INIT */

/*
 * Defined if the pthreads implementation defines
 * _pthread_mutex_init_calloc_cb(), in which case the function is used in order
 * to avoid recursive allocation during mutex initialization.
 */
#define JEMALLOC_MUTEX_INIT_CB 1

/* Defined if __attribute__((...)) syntax is supported. */
#define JEMALLOC_HAVE_ATTR
#ifdef JEMALLOC_HAVE_ATTR
#  define JEMALLOC_CATTR(s, a) __attribute__((s))
#  define JEMALLOC_ATTR(s) JEMALLOC_CATTR(s,)
#else
#  define JEMALLOC_CATTR(s, a) a
#  define JEMALLOC_ATTR(s) JEMALLOC_CATTR(s,)
#endif

/* Defined if sbrk() is supported. */
#define JEMALLOC_HAVE_SBRK

/* Non-empty if the tls_model attribute is supported. */
#define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec")))

/* JEMALLOC_CC_SILENCE enables code that silences unuseful compiler warnings. */
#define JEMALLOC_CC_SILENCE

/*
 * JEMALLOC_DEBUG enables assertions and other sanity checks, and disables
 * inline functions.
 */
/* #undef JEMALLOC_DEBUG */

/* JEMALLOC_STATS enables statistics calculation. */
#define JEMALLOC_STATS

/* JEMALLOC_PROF enables allocation profiling. */
/* #undef JEMALLOC_PROF */

/* Use libunwind for profile backtracing if defined. */
/* #undef JEMALLOC_PROF_LIBUNWIND */

/* Use libgcc for profile backtracing if defined. */
/* #undef JEMALLOC_PROF_LIBGCC */

/* Use gcc intrinsics for profile backtracing if defined. */
/* #undef JEMALLOC_PROF_GCC */

/*
 * JEMALLOC_TCACHE enables a thread-specific caching layer for small objects.
 * This makes it possible to allocate/deallocate objects without any locking
 * when the cache is in the steady state.
 */
#define JEMALLOC_TCACHE

/*
 * JEMALLOC_DSS enables use of sbrk(2) to allocate chunks from the data storage
 * segment (DSS).
 */
#define JEMALLOC_DSS

/* Support memory filling (junk/zero/quarantine/redzone). */
#define JEMALLOC_FILL

/* Support the experimental API. */
#define JEMALLOC_EXPERIMENTAL

/* Support utrace(2)-based tracing. */
#define JEMALLOC_UTRACE

/* Support Valgrind. */
/* #undef JEMALLOC_VALGRIND */

/* Support optional abort() on OOM. */
#define JEMALLOC_XMALLOC

/* Support lazy locking (avoid locking unless a second thread is launched). */
#define JEMALLOC_LAZY_LOCK

/* One page is 2^STATIC_PAGE_SHIFT bytes. */
#define STATIC_PAGE_SHIFT 12

/*
 * If defined, use munmap() to unmap freed chunks, rather than storing them for
 * later reuse.  This is automatically disabled if configuration determines
 * that common sequences of mmap()/munmap() calls will cause virtual memory map
 * holes.
 */
#define JEMALLOC_MUNMAP

/* TLS is used to map arenas and magazine caches to threads. */
#define JEMALLOC_TLS

/*
 * JEMALLOC_IVSALLOC enables ivsalloc(), which verifies that pointers reside
 * within jemalloc-owned chunks before dereferencing them.
 */
/* #undef JEMALLOC_IVSALLOC */

/*
 * Define overrides for non-standard allocator-related functions if they
 * are present on the system.
 */
/* #undef JEMALLOC_OVERRIDE_MEMALIGN */
#define JEMALLOC_OVERRIDE_VALLOC

/*
 * Darwin (OS X) uses zones to work around Mach-O symbol override shortcomings.
 */
/* #undef JEMALLOC_ZONE */
/* #undef JEMALLOC_ZONE_VERSION */

/* If defined, use mremap(...MREMAP_FIXED...) for huge realloc(). */
/* #undef JEMALLOC_MREMAP_FIXED */

/*
 * Methods for purging unused pages differ between operating systems.
 *
 *   madvise(..., MADV_DONTNEED) : On Linux, this immediately discards pages,
 *                                 such that new pages will be demand-zeroed if
 *                                 the address region is later touched.
 *   madvise(..., MADV_FREE) : On FreeBSD and Darwin, this marks pages as being
 *                             unused, such that they will be discarded rather
 *                             than swapped out.
 */
/* #undef JEMALLOC_PURGE_MADVISE_DONTNEED */
#define JEMALLOC_PURGE_MADVISE_FREE
#ifdef JEMALLOC_PURGE_MADVISE_DONTNEED
#  define JEMALLOC_MADV_PURGE MADV_DONTNEED
#elif defined(JEMALLOC_PURGE_MADVISE_FREE)
#  define JEMALLOC_MADV_PURGE MADV_FREE
#else
#  error "No method defined for purging unused dirty pages."
#endif

/* sizeof(void *) == 2^LG_SIZEOF_PTR. */
#define LG_SIZEOF_PTR 3

/* sizeof(int) == 2^LG_SIZEOF_INT. */
#define LG_SIZEOF_INT 2

/* sizeof(long) == 2^LG_SIZEOF_LONG. */
#define LG_SIZEOF_LONG 3

/* sizeof(intmax_t) == 2^LG_SIZEOF_INTMAX_T. */
#define LG_SIZEOF_INTMAX_T 3
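The JEMALLOC_MADV_PURGE selection above is consumed by chunk_record() in src/chunk.c later in this commit; the whole pattern reduces to the following sketch (on FreeBSD, MADV_FREE keeps the mapping but lets the VM system reclaim the pages):

#include <sys/mman.h>

static void
pages_purge_sketch(void *addr, size_t length) /* hypothetical wrapper */
{

    madvise(addr, length, JEMALLOC_MADV_PURGE);
}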
2248
contrib/jemalloc/src/arena.c
Normal file
File diff suppressed because it is too large
2
contrib/jemalloc/src/atomic.c
Normal file
@ -0,0 +1,2 @@
#define JEMALLOC_ATOMIC_C_
#include "jemalloc/internal/jemalloc_internal.h"
138
contrib/jemalloc/src/base.c
Normal file
@ -0,0 +1,138 @@
#define JEMALLOC_BASE_C_
#include "jemalloc/internal/jemalloc_internal.h"

/******************************************************************************/
/* Data. */

static malloc_mutex_t base_mtx;

/*
 * Current pages that are being used for internal memory allocations.  These
 * pages are carved up in cacheline-size quanta, so that there is no chance of
 * false cache line sharing.
 */
static void *base_pages;
static void *base_next_addr;
static void *base_past_addr; /* Addr immediately past base_pages. */
static extent_node_t *base_nodes;

/******************************************************************************/
/* Function prototypes for non-inline static functions. */

static bool base_pages_alloc(size_t minsize);

/******************************************************************************/

static bool
base_pages_alloc(size_t minsize)
{
    size_t csize;
    bool zero;

    assert(minsize != 0);
    csize = CHUNK_CEILING(minsize);
    zero = false;
    base_pages = chunk_alloc(csize, chunksize, true, &zero);
    if (base_pages == NULL)
        return (true);
    base_next_addr = base_pages;
    base_past_addr = (void *)((uintptr_t)base_pages + csize);

    return (false);
}

void *
base_alloc(size_t size)
{
    void *ret;
    size_t csize;

    /* Round size up to nearest multiple of the cacheline size. */
    csize = CACHELINE_CEILING(size);

    malloc_mutex_lock(&base_mtx);
    /* Make sure there's enough space for the allocation. */
    if ((uintptr_t)base_next_addr + csize > (uintptr_t)base_past_addr) {
        if (base_pages_alloc(csize)) {
            malloc_mutex_unlock(&base_mtx);
            return (NULL);
        }
    }
    /* Allocate. */
    ret = base_next_addr;
    base_next_addr = (void *)((uintptr_t)base_next_addr + csize);
    malloc_mutex_unlock(&base_mtx);

    return (ret);
}

void *
base_calloc(size_t number, size_t size)
{
    void *ret = base_alloc(number * size);

    if (ret != NULL)
        memset(ret, 0, number * size);

    return (ret);
}

extent_node_t *
base_node_alloc(void)
{
    extent_node_t *ret;

    malloc_mutex_lock(&base_mtx);
    if (base_nodes != NULL) {
        ret = base_nodes;
        base_nodes = *(extent_node_t **)ret;
        malloc_mutex_unlock(&base_mtx);
    } else {
        malloc_mutex_unlock(&base_mtx);
        ret = (extent_node_t *)base_alloc(sizeof(extent_node_t));
    }

    return (ret);
}

void
base_node_dealloc(extent_node_t *node)
{

    malloc_mutex_lock(&base_mtx);
    *(extent_node_t **)node = base_nodes;
    base_nodes = node;
    malloc_mutex_unlock(&base_mtx);
}

bool
base_boot(void)
{

    base_nodes = NULL;
    if (malloc_mutex_init(&base_mtx))
        return (true);

    return (false);
}

void
base_prefork(void)
{

    malloc_mutex_prefork(&base_mtx);
}

void
base_postfork_parent(void)
{

    malloc_mutex_postfork_parent(&base_mtx);
}

void
base_postfork_child(void)
{

    malloc_mutex_postfork_child(&base_mtx);
}
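base_alloc() above rounds requests with CACHELINE_CEILING() so that no two internal allocations share a cache line. That macro lives in the internal headers; its effect reduces to this sketch (assuming the 64-byte cacheline configured elsewhere in the tree; the _SKETCH names are hypothetical):

#define CACHELINE_SKETCH 64
#define CACHELINE_CEILING_SKETCH(s) \
    (((s) + CACHELINE_SKETCH - 1) & ~((size_t)CACHELINE_SKETCH - 1))
/* CACHELINE_CEILING_SKETCH(1) == 64; (64) == 64; (65) == 128. */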
90
contrib/jemalloc/src/bitmap.c
Normal file
@ -0,0 +1,90 @@
#define JEMALLOC_BITMAP_C_
#include "jemalloc/internal/jemalloc_internal.h"

/******************************************************************************/
/* Function prototypes for non-inline static functions. */

static size_t bits2groups(size_t nbits);

/******************************************************************************/

static size_t
bits2groups(size_t nbits)
{

    return ((nbits >> LG_BITMAP_GROUP_NBITS) +
        !!(nbits & BITMAP_GROUP_NBITS_MASK));
}

void
bitmap_info_init(bitmap_info_t *binfo, size_t nbits)
{
    unsigned i;
    size_t group_count;

    assert(nbits > 0);
    assert(nbits <= (ZU(1) << LG_BITMAP_MAXBITS));

    /*
     * Compute the number of groups necessary to store nbits bits, and
     * progressively work upward through the levels until reaching a level
     * that requires only one group.
     */
    binfo->levels[0].group_offset = 0;
    group_count = bits2groups(nbits);
    for (i = 1; group_count > 1; i++) {
        assert(i < BITMAP_MAX_LEVELS);
        binfo->levels[i].group_offset = binfo->levels[i-1].group_offset
            + group_count;
        group_count = bits2groups(group_count);
    }
    binfo->levels[i].group_offset = binfo->levels[i-1].group_offset
        + group_count;
    binfo->nlevels = i;
    binfo->nbits = nbits;
}

size_t
bitmap_info_ngroups(const bitmap_info_t *binfo)
{

    return (binfo->levels[binfo->nlevels].group_offset << LG_SIZEOF_BITMAP);
}

size_t
bitmap_size(size_t nbits)
{
    bitmap_info_t binfo;

    bitmap_info_init(&binfo, nbits);
    return (bitmap_info_ngroups(&binfo));
}

void
bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo)
{
    size_t extra;
    unsigned i;

    /*
     * Bits are actually inverted with regard to the external bitmap
     * interface, so the bitmap starts out with all 1 bits, except for
     * trailing unused bits (if any).  Note that each group uses bit 0 to
     * correspond to the first logical bit in the group, so extra bits
     * are the most significant bits of the last group.
     */
    memset(bitmap, 0xffU, binfo->levels[binfo->nlevels].group_offset <<
        LG_SIZEOF_BITMAP);
    extra = (BITMAP_GROUP_NBITS - (binfo->nbits & BITMAP_GROUP_NBITS_MASK))
        & BITMAP_GROUP_NBITS_MASK;
    if (extra != 0)
        bitmap[binfo->levels[1].group_offset - 1] >>= extra;
    for (i = 1; i < binfo->nlevels; i++) {
        size_t group_count = binfo->levels[i].group_offset -
            binfo->levels[i-1].group_offset;
        extra = (BITMAP_GROUP_NBITS - (group_count &
            BITMAP_GROUP_NBITS_MASK)) & BITMAP_GROUP_NBITS_MASK;
        if (extra != 0)
            bitmap[binfo->levels[i+1].group_offset - 1] >>= extra;
    }
}
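Putting the functions above together, a caller sizes, allocates, and initializes a bitmap in three steps (sketch; bitmap_t and bitmap_info_t come from the internal headers, and any allocator can provide the storage):

bitmap_info_t binfo;
bitmap_t *bitmap;

bitmap_info_init(&binfo, 1000);                 /* describe a 1000-bit map */
bitmap = (bitmap_t *)base_alloc(bitmap_size(1000));
if (bitmap != NULL)
    bitmap_init(bitmap, &binfo); /* all bits logically unset (stored inverted) */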
304
contrib/jemalloc/src/chunk.c
Normal file
@ -0,0 +1,304 @@
#define JEMALLOC_CHUNK_C_
#include "jemalloc/internal/jemalloc_internal.h"

/******************************************************************************/
/* Data. */

size_t opt_lg_chunk = LG_CHUNK_DEFAULT;

malloc_mutex_t chunks_mtx;
chunk_stats_t stats_chunks;

/*
 * Trees of chunks that were previously allocated (trees differ only in node
 * ordering).  These are used when allocating chunks, in an attempt to re-use
 * address space.  Depending on function, different tree orderings are needed,
 * which is why there are two trees with the same contents.
 */
static extent_tree_t chunks_szad;
static extent_tree_t chunks_ad;

rtree_t *chunks_rtree;

/* Various chunk-related settings. */
size_t chunksize;
size_t chunksize_mask; /* (chunksize - 1). */
size_t chunk_npages;
size_t map_bias;
size_t arena_maxclass; /* Max size class for arenas. */

/******************************************************************************/
/* Function prototypes for non-inline static functions. */

static void *chunk_recycle(size_t size, size_t alignment, bool *zero);
static void chunk_record(void *chunk, size_t size);

/******************************************************************************/

static void *
chunk_recycle(size_t size, size_t alignment, bool *zero)
{
    void *ret;
    extent_node_t *node;
    extent_node_t key;
    size_t alloc_size, leadsize, trailsize;

    alloc_size = size + alignment - chunksize;
    /* Beware size_t wrap-around. */
    if (alloc_size < size)
        return (NULL);
    key.addr = NULL;
    key.size = alloc_size;
    malloc_mutex_lock(&chunks_mtx);
    node = extent_tree_szad_nsearch(&chunks_szad, &key);
    if (node == NULL) {
        malloc_mutex_unlock(&chunks_mtx);
        return (NULL);
    }
    leadsize = ALIGNMENT_CEILING((uintptr_t)node->addr, alignment) -
        (uintptr_t)node->addr;
    assert(alloc_size >= leadsize + size);
    trailsize = alloc_size - leadsize - size;
    ret = (void *)((uintptr_t)node->addr + leadsize);
    /* Remove node from the tree. */
    extent_tree_szad_remove(&chunks_szad, node);
    extent_tree_ad_remove(&chunks_ad, node);
    if (leadsize != 0) {
        /* Insert the leading space as a smaller chunk. */
        node->size = leadsize;
        extent_tree_szad_insert(&chunks_szad, node);
        extent_tree_ad_insert(&chunks_ad, node);
        node = NULL;
    }
    if (trailsize != 0) {
        /* Insert the trailing space as a smaller chunk. */
        if (node == NULL) {
            /*
             * An additional node is required, but
             * base_node_alloc() can cause a new base chunk to be
             * allocated.  Drop chunks_mtx in order to avoid
             * deadlock, and if node allocation fails, deallocate
             * the result before returning an error.
             */
            malloc_mutex_unlock(&chunks_mtx);
            node = base_node_alloc();
            if (node == NULL) {
                chunk_dealloc(ret, size, true);
                return (NULL);
            }
            malloc_mutex_lock(&chunks_mtx);
        }
        node->addr = (void *)((uintptr_t)(ret) + size);
        node->size = trailsize;
        extent_tree_szad_insert(&chunks_szad, node);
        extent_tree_ad_insert(&chunks_ad, node);
        node = NULL;
    }
    malloc_mutex_unlock(&chunks_mtx);

    if (node != NULL)
        base_node_dealloc(node);
#ifdef JEMALLOC_PURGE_MADVISE_FREE
    if (*zero) {
        VALGRIND_MAKE_MEM_UNDEFINED(ret, size);
        memset(ret, 0, size);
    }
#endif
    return (ret);
}

/*
 * If the caller specifies (*zero == false), it is still possible to receive
 * zeroed memory, in which case *zero is toggled to true.  arena_chunk_alloc()
 * takes advantage of this to avoid demanding zeroed chunks, but taking
 * advantage of them if they are returned.
 */
void *
chunk_alloc(size_t size, size_t alignment, bool base, bool *zero)
{
    void *ret;

    assert(size != 0);
    assert((size & chunksize_mask) == 0);
    assert((alignment & chunksize_mask) == 0);

    ret = chunk_recycle(size, alignment, zero);
    if (ret != NULL)
        goto label_return;
    if (config_dss) {
        ret = chunk_alloc_dss(size, alignment, zero);
        if (ret != NULL)
            goto label_return;
    }
    ret = chunk_alloc_mmap(size, alignment);
    if (ret != NULL) {
        *zero = true;
        goto label_return;
    }

    /* All strategies for allocation failed. */
    ret = NULL;
label_return:
    if (config_ivsalloc && base == false && ret != NULL) {
        if (rtree_set(chunks_rtree, (uintptr_t)ret, ret)) {
            chunk_dealloc(ret, size, true);
            return (NULL);
        }
    }
    if ((config_stats || config_prof) && ret != NULL) {
        bool gdump;
        malloc_mutex_lock(&chunks_mtx);
        if (config_stats)
            stats_chunks.nchunks += (size / chunksize);
        stats_chunks.curchunks += (size / chunksize);
        if (stats_chunks.curchunks > stats_chunks.highchunks) {
            stats_chunks.highchunks = stats_chunks.curchunks;
            if (config_prof)
                gdump = true;
        } else if (config_prof)
            gdump = false;
        malloc_mutex_unlock(&chunks_mtx);
        if (config_prof && opt_prof && opt_prof_gdump && gdump)
            prof_gdump();
    }

    assert(CHUNK_ADDR2BASE(ret) == ret);
    return (ret);
}

static void
chunk_record(void *chunk, size_t size)
{
    extent_node_t *xnode, *node, *prev, key;

    madvise(chunk, size, JEMALLOC_MADV_PURGE);

    xnode = NULL;
    malloc_mutex_lock(&chunks_mtx);
    while (true) {
        key.addr = (void *)((uintptr_t)chunk + size);
        node = extent_tree_ad_nsearch(&chunks_ad, &key);
        /* Try to coalesce forward. */
        if (node != NULL && node->addr == key.addr) {
            /*
             * Coalesce chunk with the following address range.
             * This does not change the position within chunks_ad,
             * so only remove/insert from/into chunks_szad.
             */
            extent_tree_szad_remove(&chunks_szad, node);
            node->addr = chunk;
            node->size += size;
            extent_tree_szad_insert(&chunks_szad, node);
            break;
        } else if (xnode == NULL) {
            /*
             * It is possible that base_node_alloc() will cause a
             * new base chunk to be allocated, so take care not to
             * deadlock on chunks_mtx, and recover if another thread
             * deallocates an adjacent chunk while this one is busy
             * allocating xnode.
             */
            malloc_mutex_unlock(&chunks_mtx);
            xnode = base_node_alloc();
            if (xnode == NULL)
                return;
            malloc_mutex_lock(&chunks_mtx);
        } else {
            /* Coalescing forward failed, so insert a new node. */
            node = xnode;
            xnode = NULL;
            node->addr = chunk;
            node->size = size;
            extent_tree_ad_insert(&chunks_ad, node);
            extent_tree_szad_insert(&chunks_szad, node);
            break;
        }
    }
    /* Discard xnode if it ended up unused due to a race. */
    if (xnode != NULL)
        base_node_dealloc(xnode);

    /* Try to coalesce backward. */
    prev = extent_tree_ad_prev(&chunks_ad, node);
    if (prev != NULL && (void *)((uintptr_t)prev->addr + prev->size) ==
        chunk) {
        /*
         * Coalesce chunk with the previous address range.  This does
         * not change the position within chunks_ad, so only
         * remove/insert node from/into chunks_szad.
         */
        extent_tree_szad_remove(&chunks_szad, prev);
        extent_tree_ad_remove(&chunks_ad, prev);

        extent_tree_szad_remove(&chunks_szad, node);
        node->addr = prev->addr;
        node->size += prev->size;
        extent_tree_szad_insert(&chunks_szad, node);

        base_node_dealloc(prev);
    }
    malloc_mutex_unlock(&chunks_mtx);
}

void
chunk_dealloc(void *chunk, size_t size, bool unmap)
{

    assert(chunk != NULL);
    assert(CHUNK_ADDR2BASE(chunk) == chunk);
    assert(size != 0);
    assert((size & chunksize_mask) == 0);

    if (config_ivsalloc)
        rtree_set(chunks_rtree, (uintptr_t)chunk, NULL);
    if (config_stats || config_prof) {
        malloc_mutex_lock(&chunks_mtx);
        stats_chunks.curchunks -= (size / chunksize);
        malloc_mutex_unlock(&chunks_mtx);
    }

    if (unmap) {
        if (chunk_dealloc_mmap(chunk, size) == false)
            return;
        chunk_record(chunk, size);
    }
}

bool
chunk_boot0(void)
{

    /* Set variables according to the value of opt_lg_chunk. */
    chunksize = (ZU(1) << opt_lg_chunk);
    assert(chunksize >= PAGE);
    chunksize_mask = chunksize - 1;
    chunk_npages = (chunksize >> LG_PAGE);

    if (config_stats || config_prof) {
        if (malloc_mutex_init(&chunks_mtx))
            return (true);
        memset(&stats_chunks, 0, sizeof(chunk_stats_t));
    }
    if (config_dss && chunk_dss_boot())
        return (true);
    extent_tree_szad_new(&chunks_szad);
    extent_tree_ad_new(&chunks_ad);
    if (config_ivsalloc) {
        chunks_rtree = rtree_new((ZU(1) << (LG_SIZEOF_PTR+3)) -
            opt_lg_chunk);
        if (chunks_rtree == NULL)
            return (true);
    }

    return (false);
}

bool
chunk_boot1(void)
{

    if (chunk_mmap_boot())
        return (true);

    return (false);
}
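The *zero contract documented before chunk_alloc() above means a caller that does not demand zeroed memory must still check the flag on return; the calling convention reduces to this sketch:

bool zero = false;
void *chunk = chunk_alloc(chunksize, chunksize, false, &zero);

if (chunk != NULL && zero) {
    /*
     * The chunk happens to be zeroed already (e.g., a fresh mmap()),
     * so a caller such as arena_chunk_alloc() can skip its memset().
     */
}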
159
contrib/jemalloc/src/chunk_dss.c
Normal file
@ -0,0 +1,159 @@
#define JEMALLOC_CHUNK_DSS_C_
#include "jemalloc/internal/jemalloc_internal.h"
/******************************************************************************/
/* Data. */

/*
 * Protects sbrk() calls.  This avoids malloc races among threads, though it
 * does not protect against races with threads that call sbrk() directly.
 */
static malloc_mutex_t dss_mtx;

/* Base address of the DSS. */
static void *dss_base;
/* Current end of the DSS, or ((void *)-1) if the DSS is exhausted. */
static void *dss_prev;
/* Current upper limit on DSS addresses. */
static void *dss_max;

/******************************************************************************/

#ifndef JEMALLOC_HAVE_SBRK
static void *
sbrk(intptr_t increment)
{

    not_implemented();

    return (NULL);
}
#endif

void *
chunk_alloc_dss(size_t size, size_t alignment, bool *zero)
{
    void *ret;

    cassert(config_dss);
    assert(size > 0 && (size & chunksize_mask) == 0);
    assert(alignment > 0 && (alignment & chunksize_mask) == 0);

    /*
     * sbrk() uses a signed increment argument, so take care not to
     * interpret a huge allocation request as a negative increment.
     */
    if ((intptr_t)size < 0)
        return (NULL);

    malloc_mutex_lock(&dss_mtx);
    if (dss_prev != (void *)-1) {
        size_t gap_size, cpad_size;
        void *cpad, *dss_next;
        intptr_t incr;

        /*
         * The loop is necessary to recover from races with other
         * threads that are using the DSS for something other than
         * malloc.
         */
        do {
            /* Get the current end of the DSS. */
            dss_max = sbrk(0);
            /*
             * Calculate how much padding is necessary to
             * chunk-align the end of the DSS.
             */
            gap_size = (chunksize - CHUNK_ADDR2OFFSET(dss_max)) &
                chunksize_mask;
            /*
             * Compute how much chunk-aligned pad space (if any) is
             * necessary to satisfy alignment.  This space can be
             * recycled for later use.
             */
            cpad = (void *)((uintptr_t)dss_max + gap_size);
            ret = (void *)ALIGNMENT_CEILING((uintptr_t)dss_max,
                alignment);
            cpad_size = (uintptr_t)ret - (uintptr_t)cpad;
            dss_next = (void *)((uintptr_t)ret + size);
            if ((uintptr_t)ret < (uintptr_t)dss_max ||
                (uintptr_t)dss_next < (uintptr_t)dss_max) {
                /* Wrap-around. */
                malloc_mutex_unlock(&dss_mtx);
                return (NULL);
            }
            incr = gap_size + cpad_size + size;
            dss_prev = sbrk(incr);
            if (dss_prev == dss_max) {
                /* Success. */
                dss_max = dss_next;
                malloc_mutex_unlock(&dss_mtx);
                if (cpad_size != 0)
                    chunk_dealloc(cpad, cpad_size, true);
                *zero = true;
                return (ret);
            }
        } while (dss_prev != (void *)-1);
    }
    malloc_mutex_unlock(&dss_mtx);

    return (NULL);
}

bool
chunk_in_dss(void *chunk)
{
    bool ret;

    cassert(config_dss);

    malloc_mutex_lock(&dss_mtx);
    if ((uintptr_t)chunk >= (uintptr_t)dss_base
        && (uintptr_t)chunk < (uintptr_t)dss_max)
        ret = true;
    else
        ret = false;
    malloc_mutex_unlock(&dss_mtx);

    return (ret);
}

bool
chunk_dss_boot(void)
{

    cassert(config_dss);

    if (malloc_mutex_init(&dss_mtx))
        return (true);
    dss_base = sbrk(0);
    dss_prev = dss_base;
    dss_max = dss_base;

    return (false);
}

void
chunk_dss_prefork(void)
{

    if (config_dss)
        malloc_mutex_prefork(&dss_mtx);
}

void
chunk_dss_postfork_parent(void)
{

    if (config_dss)
        malloc_mutex_postfork_parent(&dss_mtx);
}

void
chunk_dss_postfork_child(void)
{

    if (config_dss)
        malloc_mutex_postfork_child(&dss_mtx);
}

/******************************************************************************/
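The gap/pad arithmetic in chunk_alloc_dss() above is easiest to follow with concrete numbers (illustration only, assuming chunksize == 4 MiB and alignment == chunksize):

/*
 * If sbrk(0) returns 0x80123000, then
 *   gap_size = (0x400000 - 0x123000) & 0x3fffff = 0x2dd000,
 * so cpad == ret == 0x80400000 (chunk-aligned) and cpad_size == 0;
 * incr = gap_size + cpad_size + size grows the DSS in a single sbrk().
 */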
207
contrib/jemalloc/src/chunk_mmap.c
Normal file
@ -0,0 +1,207 @@
#define JEMALLOC_CHUNK_MMAP_C_
#include "jemalloc/internal/jemalloc_internal.h"

/******************************************************************************/
/* Data. */

/*
 * Used by chunk_alloc_mmap() to decide whether to attempt the fast path and
 * potentially avoid some system calls.
 */
malloc_tsd_data(static, mmap_unaligned, bool, false)
malloc_tsd_funcs(JEMALLOC_INLINE, mmap_unaligned, bool, false,
    malloc_tsd_no_cleanup)

/******************************************************************************/
/* Function prototypes for non-inline static functions. */

static void *pages_map(void *addr, size_t size);
static void pages_unmap(void *addr, size_t size);
static void *chunk_alloc_mmap_slow(size_t size, size_t alignment,
    bool unaligned);

/******************************************************************************/

static void *
pages_map(void *addr, size_t size)
{
    void *ret;

    /*
     * We don't use MAP_FIXED here, because it can cause the *replacement*
     * of existing mappings, and we only want to create new mappings.
     */
    ret = mmap(addr, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON,
        -1, 0);
    assert(ret != NULL);

    if (ret == MAP_FAILED)
        ret = NULL;
    else if (addr != NULL && ret != addr) {
        /*
         * We succeeded in mapping memory, but not in the right place.
         */
        if (munmap(ret, size) == -1) {
            char buf[BUFERROR_BUF];

            buferror(errno, buf, sizeof(buf));
            malloc_printf("<jemalloc>: Error in munmap(): %s\n",
                buf);
            if (opt_abort)
                abort();
        }
        ret = NULL;
    }

    assert(ret == NULL || (addr == NULL && ret != addr)
        || (addr != NULL && ret == addr));
    return (ret);
}

static void
pages_unmap(void *addr, size_t size)
{

    if (munmap(addr, size) == -1) {
        char buf[BUFERROR_BUF];

        buferror(errno, buf, sizeof(buf));
        malloc_printf("<jemalloc>: Error in munmap(): %s\n", buf);
        if (opt_abort)
            abort();
    }
}

static void *
chunk_alloc_mmap_slow(size_t size, size_t alignment, bool unaligned)
{
    void *ret, *pages;
    size_t alloc_size, leadsize, trailsize;

    alloc_size = size + alignment - PAGE;
    /* Beware size_t wrap-around. */
    if (alloc_size < size)
        return (NULL);
    pages = pages_map(NULL, alloc_size);
    if (pages == NULL)
        return (NULL);
    leadsize = ALIGNMENT_CEILING((uintptr_t)pages, alignment) -
        (uintptr_t)pages;
    assert(alloc_size >= leadsize + size);
    trailsize = alloc_size - leadsize - size;
    ret = (void *)((uintptr_t)pages + leadsize);
    if (leadsize != 0) {
        /* Note that mmap() returned an unaligned mapping. */
        unaligned = true;
        pages_unmap(pages, leadsize);
    }
    if (trailsize != 0)
        pages_unmap((void *)((uintptr_t)ret + size), trailsize);

    /*
     * If mmap() returned an aligned mapping, reset mmap_unaligned so that
     * the next chunk_alloc_mmap() execution tries the fast allocation
     * method.
     */
    if (unaligned == false && mmap_unaligned_booted) {
        bool mu = false;
        mmap_unaligned_tsd_set(&mu);
    }

    return (ret);
}

void *
chunk_alloc_mmap(size_t size, size_t alignment)
{
    void *ret;

    /*
     * Ideally, there would be a way to specify alignment to mmap() (like
     * NetBSD has), but in the absence of such a feature, we have to work
     * hard to efficiently create aligned mappings.  The reliable, but
     * slow method is to create a mapping that is over-sized, then trim the
     * excess.  However, that always results in at least one call to
     * pages_unmap().
     *
     * A more optimistic approach is to try mapping precisely the right
     * amount, then try to append another mapping if alignment is off.  In
     * practice, this works out well as long as the application is not
     * interleaving mappings via direct mmap() calls.  If we do run into a
     * situation where there is an interleaved mapping and we are unable to
     * extend an unaligned mapping, our best option is to switch to the
     * slow method until mmap() returns another aligned mapping.  This will
     * tend to leave a gap in the memory map that is too small to cause
     * later problems for the optimistic method.
     *
     * Another possible confounding factor is address space layout
     * randomization (ASLR), which causes mmap(2) to disregard the
     * requested address.  mmap_unaligned tracks whether the previous
     * chunk_alloc_mmap() execution received any unaligned or relocated
     * mappings, and if so, the current execution will immediately fall
     * back to the slow method.  However, we keep track of whether the fast
     * method would have succeeded, and if so, we make a note to try the
     * fast method next time.
     */

    if (mmap_unaligned_booted && *mmap_unaligned_tsd_get() == false) {
        size_t offset;

        ret = pages_map(NULL, size);
        if (ret == NULL)
            return (NULL);

        offset = ALIGNMENT_ADDR2OFFSET(ret, alignment);
        if (offset != 0) {
            bool mu = true;
            mmap_unaligned_tsd_set(&mu);
            /* Try to extend chunk boundary. */
            if (pages_map((void *)((uintptr_t)ret + size),
                chunksize - offset) == NULL) {
                /*
                 * Extension failed.  Clean up, then revert to
                 * the reliable-but-expensive method.
                 */
                pages_unmap(ret, size);
                ret = chunk_alloc_mmap_slow(size, alignment,
                    true);
            } else {
                /* Clean up unneeded leading space. */
                pages_unmap(ret, chunksize - offset);
                ret = (void *)((uintptr_t)ret + (chunksize -
                    offset));
            }
        }
    } else
        ret = chunk_alloc_mmap_slow(size, alignment, false);

    return (ret);
}

bool
chunk_dealloc_mmap(void *chunk, size_t size)
{

    if (config_munmap)
        pages_unmap(chunk, size);

    return (config_munmap == false);
}

bool
chunk_mmap_boot(void)
{

    /*
     * XXX For the non-TLS implementation of tsd, the first access from
     * each thread causes memory allocation.  The result is a bootstrapping
     * problem for this particular use case, so for now just disable it by
     * leaving it in an unbooted state.
     */
#ifdef JEMALLOC_TLS
    if (mmap_unaligned_tsd_boot())
        return (true);
#endif

    return (false);
}
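chunk_alloc_mmap_slow() above is the classic over-allocate-and-trim idiom for building aligned mappings. A self-contained sketch of the same technique, assuming alignment is a power of two and a multiple of the page size (plain mmap/munmap, no jemalloc internals, error handling abbreviated):

#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>

/* Illustrative stand-in for the over-allocate-and-trim technique. */
static void *
aligned_map(size_t size, size_t alignment)
{
    size_t alloc_size = size + alignment; /* slop for worst-case trim */
    if (alloc_size < size)
        return (NULL); /* size_t wrap-around */
    char *pages = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANON, -1, 0);
    if (pages == MAP_FAILED)
        return (NULL);
    uintptr_t addr = (uintptr_t)pages;
    size_t lead = (alignment - (addr & (alignment - 1))) & (alignment - 1);
    char *ret = pages + lead;
    size_t trail = alloc_size - lead - size;
    if (lead != 0)
        munmap(pages, lead);       /* trim leading excess */
    if (trail != 0)
        munmap(ret + size, trail); /* trim trailing excess */
    return (ret);
}

The real code over-allocates by alignment - PAGE rather than a full alignment, which shaves one page off the worst case; the sketch keeps the arithmetic simpler.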
609
contrib/jemalloc/src/ckh.c
Normal file
@@ -0,0 +1,609 @@
/*
 *******************************************************************************
 * Implementation of (2^1+,2) cuckoo hashing, where 2^1+ indicates that each
 * hash bucket contains 2^n cells, for n >= 1, and 2 indicates that two hash
 * functions are employed.  The original cuckoo hashing algorithm was described
 * in:
 *
 *   Pagh, R., F.F. Rodler (2004) Cuckoo Hashing.  Journal of Algorithms
 *     51(2):122-144.
 *
 * Generalization of cuckoo hashing was discussed in:
 *
 *   Erlingsson, U., M. Manasse, F. McSherry (2006) A cool and practical
 *     alternative to traditional hash tables.  In Proceedings of the 7th
 *     Workshop on Distributed Data and Structures (WDAS'06), Santa Clara, CA,
 *     January 2006.
 *
 * This implementation uses precisely two hash functions because that is the
 * fewest that can work, and supporting multiple hashes is an implementation
 * burden.  Here is a reproduction of Figure 1 from Erlingsson et al. (2006)
 * that shows approximate expected maximum load factors for various
 * configurations:
 *
 *           |         #cells/bucket         |
 *   #hashes |   1   |   2   |   4   |   8   |
 *   --------+-------+-------+-------+-------+
 *         1 | 0.006 | 0.006 | 0.03  | 0.12  |
 *         2 | 0.49  | 0.86  |>0.93< |>0.96< |
 *         3 | 0.91  | 0.97  | 0.98  | 0.999 |
 *         4 | 0.97  | 0.99  | 0.999 |       |
 *
 * The number of cells per bucket is chosen such that a bucket fits in one cache
 * line.  So, on 32- and 64-bit systems, we use (8,2) and (4,2) cuckoo hashing,
 * respectively.
 *
 ******************************************************************************/
#define JEMALLOC_CKH_C_
#include "jemalloc/internal/jemalloc_internal.h"

/******************************************************************************/
/* Function prototypes for non-inline static functions. */

static bool ckh_grow(ckh_t *ckh);
static void ckh_shrink(ckh_t *ckh);

/******************************************************************************/

/*
 * Search bucket for key and return the cell number if found; SIZE_T_MAX
 * otherwise.
 */
JEMALLOC_INLINE size_t
ckh_bucket_search(ckh_t *ckh, size_t bucket, const void *key)
{
    ckhc_t *cell;
    unsigned i;

    for (i = 0; i < (ZU(1) << LG_CKH_BUCKET_CELLS); i++) {
        cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + i];
        if (cell->key != NULL && ckh->keycomp(key, cell->key))
            return ((bucket << LG_CKH_BUCKET_CELLS) + i);
    }

    return (SIZE_T_MAX);
}

/*
 * Search table for key and return cell number if found; SIZE_T_MAX otherwise.
 */
JEMALLOC_INLINE size_t
ckh_isearch(ckh_t *ckh, const void *key)
{
    size_t hash1, hash2, bucket, cell;

    assert(ckh != NULL);

    ckh->hash(key, ckh->lg_curbuckets, &hash1, &hash2);

    /* Search primary bucket. */
    bucket = hash1 & ((ZU(1) << ckh->lg_curbuckets) - 1);
    cell = ckh_bucket_search(ckh, bucket, key);
    if (cell != SIZE_T_MAX)
        return (cell);

    /* Search secondary bucket. */
    bucket = hash2 & ((ZU(1) << ckh->lg_curbuckets) - 1);
    cell = ckh_bucket_search(ckh, bucket, key);
    return (cell);
}

JEMALLOC_INLINE bool
ckh_try_bucket_insert(ckh_t *ckh, size_t bucket, const void *key,
    const void *data)
{
    ckhc_t *cell;
    unsigned offset, i;

    /*
     * Cycle through the cells in the bucket, starting at a random position.
     * The randomness avoids worst-case search overhead as buckets fill up.
     */
    prng32(offset, LG_CKH_BUCKET_CELLS, ckh->prng_state, CKH_A, CKH_C);
    for (i = 0; i < (ZU(1) << LG_CKH_BUCKET_CELLS); i++) {
        cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) +
            ((i + offset) & ((ZU(1) << LG_CKH_BUCKET_CELLS) - 1))];
        if (cell->key == NULL) {
            cell->key = key;
            cell->data = data;
            ckh->count++;
            return (false);
        }
    }

    return (true);
}

/*
 * No space is available in bucket.  Randomly evict an item, then try to find an
 * alternate location for that item.  Iteratively repeat this
 * eviction/relocation procedure until either success or detection of an
 * eviction/relocation bucket cycle.
 */
JEMALLOC_INLINE bool
ckh_evict_reloc_insert(ckh_t *ckh, size_t argbucket, void const **argkey,
    void const **argdata)
{
    const void *key, *data, *tkey, *tdata;
    ckhc_t *cell;
    size_t hash1, hash2, bucket, tbucket;
    unsigned i;

    bucket = argbucket;
    key = *argkey;
    data = *argdata;
    while (true) {
        /*
         * Choose a random item within the bucket to evict.  This is
         * critical to correct function, because without (eventually)
         * evicting all items within a bucket during iteration, it
         * would be possible to get stuck in an infinite loop if there
         * were an item for which both hashes indicated the same
         * bucket.
         */
        prng32(i, LG_CKH_BUCKET_CELLS, ckh->prng_state, CKH_A, CKH_C);
        cell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + i];
        assert(cell->key != NULL);

        /* Swap cell->{key,data} and {key,data} (evict). */
        tkey = cell->key; tdata = cell->data;
        cell->key = key; cell->data = data;
        key = tkey; data = tdata;

#ifdef CKH_COUNT
        ckh->nrelocs++;
#endif

        /* Find the alternate bucket for the evicted item. */
        ckh->hash(key, ckh->lg_curbuckets, &hash1, &hash2);
        tbucket = hash2 & ((ZU(1) << ckh->lg_curbuckets) - 1);
        if (tbucket == bucket) {
            tbucket = hash1 & ((ZU(1) << ckh->lg_curbuckets) - 1);
            /*
             * It may be that (tbucket == bucket) still, if the
             * item's hashes both indicate this bucket.  However,
             * we are guaranteed to eventually escape this bucket
             * during iteration, assuming pseudo-random item
             * selection (true randomness would make infinite
             * looping a remote possibility).  The reason we can
             * never get trapped forever is that there are two
             * cases:
             *
             * 1) This bucket == argbucket, so we will quickly
             *    detect an eviction cycle and terminate.
             * 2) An item was evicted to this bucket from another,
             *    which means that at least one item in this bucket
             *    has hashes that indicate distinct buckets.
             */
        }
        /* Check for a cycle. */
        if (tbucket == argbucket) {
            *argkey = key;
            *argdata = data;
            return (true);
        }

        bucket = tbucket;
        if (ckh_try_bucket_insert(ckh, bucket, key, data) == false)
            return (false);
    }
}

JEMALLOC_INLINE bool
ckh_try_insert(ckh_t *ckh, void const **argkey, void const **argdata)
{
    size_t hash1, hash2, bucket;
    const void *key = *argkey;
    const void *data = *argdata;

    ckh->hash(key, ckh->lg_curbuckets, &hash1, &hash2);

    /* Try to insert in primary bucket. */
    bucket = hash1 & ((ZU(1) << ckh->lg_curbuckets) - 1);
    if (ckh_try_bucket_insert(ckh, bucket, key, data) == false)
        return (false);

    /* Try to insert in secondary bucket. */
    bucket = hash2 & ((ZU(1) << ckh->lg_curbuckets) - 1);
    if (ckh_try_bucket_insert(ckh, bucket, key, data) == false)
        return (false);

    /*
     * Try to find a place for this item via iterative eviction/relocation.
     */
    return (ckh_evict_reloc_insert(ckh, bucket, argkey, argdata));
}

/*
 * Try to rebuild the hash table from scratch by inserting all items from the
 * old table into the new.
 */
JEMALLOC_INLINE bool
ckh_rebuild(ckh_t *ckh, ckhc_t *aTab)
{
    size_t count, i, nins;
    const void *key, *data;

    count = ckh->count;
    ckh->count = 0;
    for (i = nins = 0; nins < count; i++) {
        if (aTab[i].key != NULL) {
            key = aTab[i].key;
            data = aTab[i].data;
            if (ckh_try_insert(ckh, &key, &data)) {
                ckh->count = count;
                return (true);
            }
            nins++;
        }
    }

    return (false);
}

static bool
ckh_grow(ckh_t *ckh)
{
    bool ret;
    ckhc_t *tab, *ttab;
    size_t lg_curcells;
    unsigned lg_prevbuckets;

#ifdef CKH_COUNT
    ckh->ngrows++;
#endif

    /*
     * It is possible (though unlikely, given well behaved hashes) that the
     * table will have to be doubled more than once in order to create a
     * usable table.
     */
    lg_prevbuckets = ckh->lg_curbuckets;
    lg_curcells = ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS;
    while (true) {
        size_t usize;

        lg_curcells++;
        usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE);
        if (usize == 0) {
            ret = true;
            goto label_return;
        }
        tab = (ckhc_t *)ipalloc(usize, CACHELINE, true);
        if (tab == NULL) {
            ret = true;
            goto label_return;
        }
        /* Swap in new table. */
        ttab = ckh->tab;
        ckh->tab = tab;
        tab = ttab;
        ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS;

        if (ckh_rebuild(ckh, tab) == false) {
            idalloc(tab);
            break;
        }

        /* Rebuilding failed, so back out partially rebuilt table. */
        idalloc(ckh->tab);
        ckh->tab = tab;
        ckh->lg_curbuckets = lg_prevbuckets;
    }

    ret = false;
label_return:
    return (ret);
}

static void
ckh_shrink(ckh_t *ckh)
{
    ckhc_t *tab, *ttab;
    size_t lg_curcells, usize;
    unsigned lg_prevbuckets;

    /*
     * It is possible (though unlikely, given well behaved hashes) that the
     * table rebuild will fail.
     */
    lg_prevbuckets = ckh->lg_curbuckets;
    lg_curcells = ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS - 1;
    usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE);
    if (usize == 0)
        return;
    tab = (ckhc_t *)ipalloc(usize, CACHELINE, true);
    if (tab == NULL) {
        /*
         * An OOM error isn't worth propagating, since it doesn't
         * prevent this or future operations from proceeding.
         */
        return;
    }
    /* Swap in new table. */
    ttab = ckh->tab;
    ckh->tab = tab;
    tab = ttab;
    ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS;

    if (ckh_rebuild(ckh, tab) == false) {
        idalloc(tab);
#ifdef CKH_COUNT
        ckh->nshrinks++;
#endif
        return;
    }

    /* Rebuilding failed, so back out partially rebuilt table. */
    idalloc(ckh->tab);
    ckh->tab = tab;
    ckh->lg_curbuckets = lg_prevbuckets;
#ifdef CKH_COUNT
    ckh->nshrinkfails++;
#endif
}

bool
ckh_new(ckh_t *ckh, size_t minitems, ckh_hash_t *hash, ckh_keycomp_t *keycomp)
{
    bool ret;
    size_t mincells, usize;
    unsigned lg_mincells;

    assert(minitems > 0);
    assert(hash != NULL);
    assert(keycomp != NULL);

#ifdef CKH_COUNT
    ckh->ngrows = 0;
    ckh->nshrinks = 0;
    ckh->nshrinkfails = 0;
    ckh->ninserts = 0;
    ckh->nrelocs = 0;
#endif
    ckh->prng_state = 42; /* Value doesn't really matter. */
    ckh->count = 0;

    /*
     * Find the minimum power of 2 that is large enough to fit minitems
     * entries.  We are using (2+,2) cuckoo hashing, which has an expected
     * maximum load factor of at least ~0.86, so 0.75 is a conservative load
     * factor that will typically allow the requested number of items to fit
     * without ever growing the table.
     */
    assert(LG_CKH_BUCKET_CELLS > 0);
    mincells = ((minitems + (3 - (minitems % 3))) / 3) << 2;
    for (lg_mincells = LG_CKH_BUCKET_CELLS;
        (ZU(1) << lg_mincells) < mincells;
        lg_mincells++)
        ; /* Do nothing. */
    ckh->lg_minbuckets = lg_mincells - LG_CKH_BUCKET_CELLS;
    ckh->lg_curbuckets = lg_mincells - LG_CKH_BUCKET_CELLS;
    ckh->hash = hash;
    ckh->keycomp = keycomp;

    usize = sa2u(sizeof(ckhc_t) << lg_mincells, CACHELINE);
    if (usize == 0) {
        ret = true;
        goto label_return;
    }
    ckh->tab = (ckhc_t *)ipalloc(usize, CACHELINE, true);
    if (ckh->tab == NULL) {
        ret = true;
        goto label_return;
    }

    ret = false;
label_return:
    return (ret);
}

void
ckh_delete(ckh_t *ckh)
{

    assert(ckh != NULL);

#ifdef CKH_VERBOSE
    malloc_printf(
        "%s(%p): ngrows: %"PRIu64", nshrinks: %"PRIu64","
        " nshrinkfails: %"PRIu64", ninserts: %"PRIu64","
        " nrelocs: %"PRIu64"\n", __func__, ckh,
        (unsigned long long)ckh->ngrows,
        (unsigned long long)ckh->nshrinks,
        (unsigned long long)ckh->nshrinkfails,
        (unsigned long long)ckh->ninserts,
        (unsigned long long)ckh->nrelocs);
#endif

    idalloc(ckh->tab);
#ifdef JEMALLOC_DEBUG
    memset(ckh, 0x5a, sizeof(ckh_t));
#endif
}

size_t
ckh_count(ckh_t *ckh)
{

    assert(ckh != NULL);

    return (ckh->count);
}

bool
ckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data)
{
    size_t i, ncells;

    for (i = *tabind, ncells = (ZU(1) << (ckh->lg_curbuckets +
        LG_CKH_BUCKET_CELLS)); i < ncells; i++) {
        if (ckh->tab[i].key != NULL) {
            if (key != NULL)
                *key = (void *)ckh->tab[i].key;
            if (data != NULL)
                *data = (void *)ckh->tab[i].data;
            *tabind = i + 1;
            return (false);
        }
    }

    return (true);
}

bool
ckh_insert(ckh_t *ckh, const void *key, const void *data)
{
    bool ret;

    assert(ckh != NULL);
    assert(ckh_search(ckh, key, NULL, NULL));

#ifdef CKH_COUNT
    ckh->ninserts++;
#endif

    while (ckh_try_insert(ckh, &key, &data)) {
        if (ckh_grow(ckh)) {
            ret = true;
            goto label_return;
        }
    }

    ret = false;
label_return:
    return (ret);
}

bool
ckh_remove(ckh_t *ckh, const void *searchkey, void **key, void **data)
{
    size_t cell;

    assert(ckh != NULL);

    cell = ckh_isearch(ckh, searchkey);
    if (cell != SIZE_T_MAX) {
        if (key != NULL)
            *key = (void *)ckh->tab[cell].key;
        if (data != NULL)
            *data = (void *)ckh->tab[cell].data;
        ckh->tab[cell].key = NULL;
        ckh->tab[cell].data = NULL; /* Not necessary. */

        ckh->count--;
        /* Try to halve the table if it is less than 1/4 full. */
        if (ckh->count < (ZU(1) << (ckh->lg_curbuckets
            + LG_CKH_BUCKET_CELLS - 2)) && ckh->lg_curbuckets
            > ckh->lg_minbuckets) {
            /* Ignore error due to OOM. */
            ckh_shrink(ckh);
        }

        return (false);
    }

    return (true);
}

bool
ckh_search(ckh_t *ckh, const void *searchkey, void **key, void **data)
{
    size_t cell;

    assert(ckh != NULL);

    cell = ckh_isearch(ckh, searchkey);
    if (cell != SIZE_T_MAX) {
        if (key != NULL)
            *key = (void *)ckh->tab[cell].key;
        if (data != NULL)
            *data = (void *)ckh->tab[cell].data;
        return (false);
    }

    return (true);
}

void
ckh_string_hash(const void *key, unsigned minbits, size_t *hash1, size_t *hash2)
{
    size_t ret1, ret2;
    uint64_t h;

    assert(minbits <= 32 || (SIZEOF_PTR == 8 && minbits <= 64));
    assert(hash1 != NULL);
    assert(hash2 != NULL);

    h = hash(key, strlen((const char *)key), UINT64_C(0x94122f335b332aea));
    if (minbits <= 32) {
        /*
         * Avoid doing multiple hashes, since a single hash provides
         * enough bits.
         */
        ret1 = h & ZU(0xffffffffU);
        ret2 = h >> 32;
    } else {
        ret1 = h;
        ret2 = hash(key, strlen((const char *)key),
            UINT64_C(0x8432a476666bbc13));
    }

    *hash1 = ret1;
    *hash2 = ret2;
}

bool
ckh_string_keycomp(const void *k1, const void *k2)
{

    assert(k1 != NULL);
    assert(k2 != NULL);

    return (strcmp((char *)k1, (char *)k2) ? false : true);
}

void
ckh_pointer_hash(const void *key, unsigned minbits, size_t *hash1,
    size_t *hash2)
{
    size_t ret1, ret2;
    uint64_t h;
    union {
        const void *v;
        uint64_t i;
    } u;

    assert(minbits <= 32 || (SIZEOF_PTR == 8 && minbits <= 64));
    assert(hash1 != NULL);
    assert(hash2 != NULL);

    assert(sizeof(u.v) == sizeof(u.i));
#if (LG_SIZEOF_PTR != LG_SIZEOF_INT)
    u.i = 0;
#endif
    u.v = key;
    h = hash(&u.i, sizeof(u.i), UINT64_C(0xd983396e68886082));
    if (minbits <= 32) {
        /*
         * Avoid doing multiple hashes, since a single hash provides
         * enough bits.
         */
        ret1 = h & ZU(0xffffffffU);
        ret2 = h >> 32;
    } else {
        assert(SIZEOF_PTR == 8);
        ret1 = h;
        ret2 = hash(&u.i, sizeof(u.i), UINT64_C(0x5e2be9aff8709a5d));
    }

    *hash1 = ret1;
    *hash2 = ret2;
}

bool
ckh_pointer_keycomp(const void *k1, const void *k2)
{

    return ((k1 == k2) ? true : false);
}
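The exported ckh API above follows jemalloc's returns-false-on-success convention. A hedged sketch of the calling pattern, using only the signatures shown in this file (not a standalone program, since ckh_t and the hash/keycomp callbacks live in the internal headers):

/* Illustrative only; assumes "jemalloc/internal/jemalloc_internal.h". */
static void
ckh_example(void)
{
    ckh_t tab;
    void *data;

    if (ckh_new(&tab, 64, ckh_string_hash, ckh_string_keycomp))
        return; /* OOM while creating the table. */
    if (ckh_insert(&tab, "key", "value") == false &&
        ckh_search(&tab, "key", NULL, &data) == false) {
        /* data now points at "value". */
    }
    ckh_delete(&tab);
}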
1385
contrib/jemalloc/src/ctl.c
Normal file
File diff suppressed because it is too large
39
contrib/jemalloc/src/extent.c
Normal file
@@ -0,0 +1,39 @@
#define JEMALLOC_EXTENT_C_
#include "jemalloc/internal/jemalloc_internal.h"

/******************************************************************************/

static inline int
extent_szad_comp(extent_node_t *a, extent_node_t *b)
{
    int ret;
    size_t a_size = a->size;
    size_t b_size = b->size;

    ret = (a_size > b_size) - (a_size < b_size);
    if (ret == 0) {
        uintptr_t a_addr = (uintptr_t)a->addr;
        uintptr_t b_addr = (uintptr_t)b->addr;

        ret = (a_addr > b_addr) - (a_addr < b_addr);
    }

    return (ret);
}

/* Generate red-black tree functions. */
rb_gen(, extent_tree_szad_, extent_tree_t, extent_node_t, link_szad,
    extent_szad_comp)

static inline int
extent_ad_comp(extent_node_t *a, extent_node_t *b)
{
    uintptr_t a_addr = (uintptr_t)a->addr;
    uintptr_t b_addr = (uintptr_t)b->addr;

    return ((a_addr > b_addr) - (a_addr < b_addr));
}

/* Generate red-black tree functions. */
rb_gen(, extent_tree_ad_, extent_tree_t, extent_node_t, link_ad,
    extent_ad_comp)
2
contrib/jemalloc/src/hash.c
Normal file
@@ -0,0 +1,2 @@
#define JEMALLOC_HASH_C_
#include "jemalloc/internal/jemalloc_internal.h"
306
contrib/jemalloc/src/huge.c
Normal file
@@ -0,0 +1,306 @@
#define JEMALLOC_HUGE_C_
#include "jemalloc/internal/jemalloc_internal.h"

/******************************************************************************/
/* Data. */

uint64_t huge_nmalloc;
uint64_t huge_ndalloc;
size_t huge_allocated;

malloc_mutex_t huge_mtx;

/******************************************************************************/

/* Tree of chunks that are stand-alone huge allocations. */
static extent_tree_t huge;

void *
huge_malloc(size_t size, bool zero)
{

    return (huge_palloc(size, chunksize, zero));
}

void *
huge_palloc(size_t size, size_t alignment, bool zero)
{
    void *ret;
    size_t csize;
    extent_node_t *node;

    /* Allocate one or more contiguous chunks for this request. */

    csize = CHUNK_CEILING(size);
    if (csize == 0) {
        /* size is large enough to cause size_t wrap-around. */
        return (NULL);
    }

    /* Allocate an extent node with which to track the chunk. */
    node = base_node_alloc();
    if (node == NULL)
        return (NULL);

    ret = chunk_alloc(csize, alignment, false, &zero);
    if (ret == NULL) {
        base_node_dealloc(node);
        return (NULL);
    }

    /* Insert node into huge. */
    node->addr = ret;
    node->size = csize;

    malloc_mutex_lock(&huge_mtx);
    extent_tree_ad_insert(&huge, node);
    if (config_stats) {
        stats_cactive_add(csize);
        huge_nmalloc++;
        huge_allocated += csize;
    }
    malloc_mutex_unlock(&huge_mtx);

    if (config_fill && zero == false) {
        if (opt_junk)
            memset(ret, 0xa5, csize);
        else if (opt_zero)
            memset(ret, 0, csize);
    }

    return (ret);
}

void *
huge_ralloc_no_move(void *ptr, size_t oldsize, size_t size, size_t extra)
{

    /*
     * Avoid moving the allocation if the size class can be left the same.
     */
    if (oldsize > arena_maxclass
        && CHUNK_CEILING(oldsize) >= CHUNK_CEILING(size)
        && CHUNK_CEILING(oldsize) <= CHUNK_CEILING(size+extra)) {
        assert(CHUNK_CEILING(oldsize) == oldsize);
        if (config_fill && opt_junk && size < oldsize) {
            memset((void *)((uintptr_t)ptr + size), 0x5a,
                oldsize - size);
        }
        return (ptr);
    }

    /* Reallocation would require a move. */
    return (NULL);
}

void *
huge_ralloc(void *ptr, size_t oldsize, size_t size, size_t extra,
    size_t alignment, bool zero)
{
    void *ret;
    size_t copysize;

    /* Try to avoid moving the allocation. */
    ret = huge_ralloc_no_move(ptr, oldsize, size, extra);
    if (ret != NULL)
        return (ret);

    /*
     * size and oldsize are different enough that we need to use a
     * different size class.  In that case, fall back to allocating new
     * space and copying.
     */
    if (alignment > chunksize)
        ret = huge_palloc(size + extra, alignment, zero);
    else
        ret = huge_malloc(size + extra, zero);

    if (ret == NULL) {
        if (extra == 0)
            return (NULL);
        /* Try again, this time without extra. */
        if (alignment > chunksize)
            ret = huge_palloc(size, alignment, zero);
        else
            ret = huge_malloc(size, zero);

        if (ret == NULL)
            return (NULL);
    }

    /*
     * Copy at most size bytes (not size+extra), since the caller has no
     * expectation that the extra bytes will be reliably preserved.
     */
    copysize = (size < oldsize) ? size : oldsize;

    /*
     * Use mremap(2) if this is a huge-->huge reallocation, and neither the
     * source nor the destination are in dss.
     */
#ifdef JEMALLOC_MREMAP_FIXED
    if (oldsize >= chunksize && (config_dss == false || (chunk_in_dss(ptr)
        == false && chunk_in_dss(ret) == false))) {
        size_t newsize = huge_salloc(ret);

        /*
         * Remove ptr from the tree of huge allocations before
         * performing the remap operation, in order to avoid the
         * possibility of another thread acquiring that mapping before
         * this one removes it from the tree.
         */
        huge_dalloc(ptr, false);
        if (mremap(ptr, oldsize, newsize, MREMAP_MAYMOVE|MREMAP_FIXED,
            ret) == MAP_FAILED) {
            /*
             * Assuming no chunk management bugs in the allocator,
             * the only documented way an error can occur here is
             * if the application changed the map type for a
             * portion of the old allocation.  This is firmly in
             * undefined behavior territory, so write a diagnostic
             * message, and optionally abort.
             */
            char buf[BUFERROR_BUF];

            buferror(errno, buf, sizeof(buf));
            malloc_printf("<jemalloc>: Error in mremap(): %s\n",
                buf);
            if (opt_abort)
                abort();
            memcpy(ret, ptr, copysize);
            chunk_dealloc_mmap(ptr, oldsize);
        }
    } else
#endif
    {
        memcpy(ret, ptr, copysize);
        iqalloc(ptr);
    }
    return (ret);
}

void
huge_dalloc(void *ptr, bool unmap)
{
    extent_node_t *node, key;

    malloc_mutex_lock(&huge_mtx);

    /* Extract from tree of huge allocations. */
    key.addr = ptr;
    node = extent_tree_ad_search(&huge, &key);
    assert(node != NULL);
    assert(node->addr == ptr);
    extent_tree_ad_remove(&huge, node);

    if (config_stats) {
        stats_cactive_sub(node->size);
        huge_ndalloc++;
        huge_allocated -= node->size;
    }

    malloc_mutex_unlock(&huge_mtx);

    if (unmap && config_fill && config_dss && opt_junk)
        memset(node->addr, 0x5a, node->size);

    chunk_dealloc(node->addr, node->size, unmap);

    base_node_dealloc(node);
}

size_t
huge_salloc(const void *ptr)
{
    size_t ret;
    extent_node_t *node, key;

    malloc_mutex_lock(&huge_mtx);

    /* Extract from tree of huge allocations. */
    key.addr = __DECONST(void *, ptr);
    node = extent_tree_ad_search(&huge, &key);
    assert(node != NULL);

    ret = node->size;

    malloc_mutex_unlock(&huge_mtx);

    return (ret);
}

prof_ctx_t *
huge_prof_ctx_get(const void *ptr)
{
    prof_ctx_t *ret;
    extent_node_t *node, key;

    malloc_mutex_lock(&huge_mtx);

    /* Extract from tree of huge allocations. */
    key.addr = __DECONST(void *, ptr);
    node = extent_tree_ad_search(&huge, &key);
    assert(node != NULL);

    ret = node->prof_ctx;

    malloc_mutex_unlock(&huge_mtx);

    return (ret);
}

void
huge_prof_ctx_set(const void *ptr, prof_ctx_t *ctx)
{
    extent_node_t *node, key;

    malloc_mutex_lock(&huge_mtx);

    /* Extract from tree of huge allocations. */
    key.addr = __DECONST(void *, ptr);
    node = extent_tree_ad_search(&huge, &key);
    assert(node != NULL);

    node->prof_ctx = ctx;

    malloc_mutex_unlock(&huge_mtx);
}

bool
huge_boot(void)
{

    /* Initialize chunks data. */
    if (malloc_mutex_init(&huge_mtx))
        return (true);
    extent_tree_ad_new(&huge);

    if (config_stats) {
        huge_nmalloc = 0;
        huge_ndalloc = 0;
        huge_allocated = 0;
    }

    return (false);
}

void
huge_prefork(void)
{

    malloc_mutex_prefork(&huge_mtx);
}

void
huge_postfork_parent(void)
{

    malloc_mutex_postfork_parent(&huge_mtx);
}

void
huge_postfork_child(void)
{

    malloc_mutex_postfork_child(&huge_mtx);
}
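huge_palloc() and huge_ralloc_no_move() above both depend on CHUNK_CEILING() rounding a size up to a chunk multiple, with 0 signalling size_t wrap-around. A standalone sketch of that rounding idiom, assuming the import's default 4 MiB chunks (lg_chunk = 22); the macro here is a local stand-in for illustration, not jemalloc's actual definition:

#include <stdio.h>
#include <stddef.h>

#define LG_CHUNK 22 /* assumed default: 4 MiB chunks */
#define CHUNKSIZE ((size_t)1 << LG_CHUNK)
#define CHUNK_CEILING(s) (((s) + CHUNKSIZE - 1) & ~(CHUNKSIZE - 1))

int
main(void)
{
    /* 5 MiB rounds up to 8 MiB; an exact multiple is unchanged. */
    printf("%zu\n", CHUNK_CEILING((size_t)5 << 20) >> 20); /* 8 */
    printf("%zu\n", CHUNK_CEILING((size_t)4 << 20) >> 20); /* 4 */
    /* Overflow wraps to 0, which huge_palloc() checks for. */
    printf("%zu\n", CHUNK_CEILING((size_t)-1));            /* 0 */
    return (0);
}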
1733
contrib/jemalloc/src/jemalloc.c
Normal file
File diff suppressed because it is too large
2
contrib/jemalloc/src/mb.c
Normal file
@@ -0,0 +1,2 @@
#define JEMALLOC_MB_C_
#include "jemalloc/internal/jemalloc_internal.h"
153
contrib/jemalloc/src/mutex.c
Normal file
@@ -0,0 +1,153 @@
#define JEMALLOC_MUTEX_C_
#include "jemalloc/internal/jemalloc_internal.h"

#ifdef JEMALLOC_LAZY_LOCK
#include <dlfcn.h>
#endif

/******************************************************************************/
/* Data. */

#ifdef JEMALLOC_LAZY_LOCK
bool isthreaded = false;
#endif
#ifdef JEMALLOC_MUTEX_INIT_CB
static bool postpone_init = true;
static malloc_mutex_t *postponed_mutexes = NULL;
#endif

#ifdef JEMALLOC_LAZY_LOCK
static void pthread_create_once(void);
#endif

/******************************************************************************/
/*
 * We intercept pthread_create() calls in order to toggle isthreaded if the
 * process goes multi-threaded.
 */

#ifdef JEMALLOC_LAZY_LOCK
static int (*pthread_create_fptr)(pthread_t *__restrict, const pthread_attr_t *,
    void *(*)(void *), void *__restrict);

static void
pthread_create_once(void)
{

    pthread_create_fptr = dlsym(RTLD_NEXT, "pthread_create");
    if (pthread_create_fptr == NULL) {
        malloc_write("<jemalloc>: Error in dlsym(RTLD_NEXT, "
            "\"pthread_create\")\n");
        abort();
    }

    isthreaded = true;
}

JEMALLOC_ATTR(visibility("default"))
int
pthread_create(pthread_t *__restrict thread,
    const pthread_attr_t *__restrict attr, void *(*start_routine)(void *),
    void *__restrict arg)
{
    static pthread_once_t once_control = PTHREAD_ONCE_INIT;

    pthread_once(&once_control, pthread_create_once);

    return (pthread_create_fptr(thread, attr, start_routine, arg));
}
#endif

/******************************************************************************/

#ifdef JEMALLOC_MUTEX_INIT_CB
int _pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex,
    void *(calloc_cb)(size_t, size_t));

__weak_reference(_pthread_mutex_init_calloc_cb_stub,
    _pthread_mutex_init_calloc_cb);

int
_pthread_mutex_init_calloc_cb_stub(pthread_mutex_t *mutex,
    void *(calloc_cb)(size_t, size_t))
{

    return (0);
}
#endif

bool
malloc_mutex_init(malloc_mutex_t *mutex)
{
#ifdef JEMALLOC_OSSPIN
    mutex->lock = 0;
#elif (defined(JEMALLOC_MUTEX_INIT_CB))
    if (postpone_init) {
        mutex->postponed_next = postponed_mutexes;
        postponed_mutexes = mutex;
    } else {
        if (_pthread_mutex_init_calloc_cb(&mutex->lock, base_calloc) !=
            0)
            return (true);
    }
#else
    pthread_mutexattr_t attr;

    if (pthread_mutexattr_init(&attr) != 0)
        return (true);
    pthread_mutexattr_settype(&attr, MALLOC_MUTEX_TYPE);
    if (pthread_mutex_init(&mutex->lock, &attr) != 0) {
        pthread_mutexattr_destroy(&attr);
        return (true);
    }
    pthread_mutexattr_destroy(&attr);

#endif
    return (false);
}

void
malloc_mutex_prefork(malloc_mutex_t *mutex)
{

    malloc_mutex_lock(mutex);
}

void
malloc_mutex_postfork_parent(malloc_mutex_t *mutex)
{

    malloc_mutex_unlock(mutex);
}

void
malloc_mutex_postfork_child(malloc_mutex_t *mutex)
{

#ifdef JEMALLOC_MUTEX_INIT_CB
    malloc_mutex_unlock(mutex);
#else
    if (malloc_mutex_init(mutex)) {
        malloc_printf("<jemalloc>: Error re-initializing mutex in "
            "child\n");
        if (opt_abort)
            abort();
    }
#endif
}

bool
mutex_boot(void)
{

#ifdef JEMALLOC_MUTEX_INIT_CB
    postpone_init = false;
    while (postponed_mutexes != NULL) {
        if (_pthread_mutex_init_calloc_cb(&postponed_mutexes->lock,
            base_calloc) != 0)
            return (true);
        postponed_mutexes = postponed_mutexes->postponed_next;
    }
#endif
    return (false);
}
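The prefork/postfork hooks above implement the usual fork-safety discipline for a mutex-protected allocator: acquire every lock before fork(2), release in both parent and child afterward. A hedged standalone sketch of how such hooks are conventionally registered (names hypothetical; jemalloc's actual registration lives elsewhere in this import):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void prefork(void)         { pthread_mutex_lock(&lock); }
static void postfork_parent(void) { pthread_mutex_unlock(&lock); }
static void postfork_child(void)  { pthread_mutex_unlock(&lock); }

static void
install_fork_hooks(void)
{
    /* Hold the lock across fork() so the child inherits it unlocked. */
    pthread_atfork(prefork, postfork_parent, postfork_child);
}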
1243
contrib/jemalloc/src/prof.c
Normal file
File diff suppressed because it is too large
163
contrib/jemalloc/src/quarantine.c
Normal file
@@ -0,0 +1,163 @@
#include "jemalloc/internal/jemalloc_internal.h"

/******************************************************************************/
/* Data. */

typedef struct quarantine_s quarantine_t;

struct quarantine_s {
    size_t curbytes;
    size_t curobjs;
    size_t first;
#define LG_MAXOBJS_INIT 10
    size_t lg_maxobjs;
    void *objs[1]; /* Dynamically sized ring buffer. */
};

static void quarantine_cleanup(void *arg);

malloc_tsd_data(static, quarantine, quarantine_t *, NULL)
malloc_tsd_funcs(JEMALLOC_INLINE, quarantine, quarantine_t *, NULL,
    quarantine_cleanup)

/******************************************************************************/
/* Function prototypes for non-inline static functions. */

static quarantine_t *quarantine_init(size_t lg_maxobjs);
static quarantine_t *quarantine_grow(quarantine_t *quarantine);
static void quarantine_drain(quarantine_t *quarantine, size_t upper_bound);

/******************************************************************************/

static quarantine_t *
quarantine_init(size_t lg_maxobjs)
{
    quarantine_t *quarantine;

    quarantine = (quarantine_t *)imalloc(offsetof(quarantine_t, objs) +
        ((ZU(1) << lg_maxobjs) * sizeof(void *)));
    if (quarantine == NULL)
        return (NULL);
    quarantine->curbytes = 0;
    quarantine->curobjs = 0;
    quarantine->first = 0;
    quarantine->lg_maxobjs = lg_maxobjs;

    quarantine_tsd_set(&quarantine);

    return (quarantine);
}

static quarantine_t *
quarantine_grow(quarantine_t *quarantine)
{
    quarantine_t *ret;

    ret = quarantine_init(quarantine->lg_maxobjs + 1);
    if (ret == NULL)
        return (quarantine);

    ret->curbytes = quarantine->curbytes;
    if (quarantine->first + quarantine->curobjs < (ZU(1) <<
        quarantine->lg_maxobjs)) {
        /* objs ring buffer data are contiguous. */
        memcpy(ret->objs, &quarantine->objs[quarantine->first],
            quarantine->curobjs * sizeof(void *));
        ret->curobjs = quarantine->curobjs;
    } else {
        /* objs ring buffer data wrap around. */
        size_t ncopy = (ZU(1) << quarantine->lg_maxobjs) -
            quarantine->first;
        memcpy(ret->objs, &quarantine->objs[quarantine->first], ncopy *
            sizeof(void *));
        if (quarantine->curobjs > ncopy) {
            /*
             * Copy the wrapped tail.  Note that the size must be
             * converted from elements to bytes.
             */
            memcpy(&ret->objs[ncopy], quarantine->objs,
                (quarantine->curobjs - ncopy) * sizeof(void *));
        }
        ret->curobjs = quarantine->curobjs;
    }

    return (ret);
}

static void
quarantine_drain(quarantine_t *quarantine, size_t upper_bound)
{

    while (quarantine->curbytes > upper_bound && quarantine->curobjs > 0) {
        void *ptr = quarantine->objs[quarantine->first];
        size_t usize = isalloc(ptr, config_prof);
        idalloc(ptr);
        quarantine->curbytes -= usize;
        quarantine->curobjs--;
        quarantine->first = (quarantine->first + 1) & ((ZU(1) <<
            quarantine->lg_maxobjs) - 1);
    }
}

void
quarantine(void *ptr)
{
    quarantine_t *quarantine;
    size_t usize = isalloc(ptr, config_prof);

    assert(config_fill);
    assert(opt_quarantine);

    quarantine = *quarantine_tsd_get();
    if (quarantine == NULL && (quarantine =
        quarantine_init(LG_MAXOBJS_INIT)) == NULL) {
        idalloc(ptr);
        return;
    }
    /*
     * Drain one or more objects if the quarantine size limit would be
     * exceeded by appending ptr.
     */
    if (quarantine->curbytes + usize > opt_quarantine) {
        size_t upper_bound = (opt_quarantine >= usize) ? opt_quarantine
            - usize : 0;
        quarantine_drain(quarantine, upper_bound);
    }
    /* Grow the quarantine ring buffer if it's full. */
    if (quarantine->curobjs == (ZU(1) << quarantine->lg_maxobjs))
        quarantine = quarantine_grow(quarantine);
    /* quarantine_grow() must free a slot if it fails to grow. */
    assert(quarantine->curobjs < (ZU(1) << quarantine->lg_maxobjs));
    /* Append ptr if its size doesn't exceed the quarantine size. */
    if (quarantine->curbytes + usize <= opt_quarantine) {
        size_t offset = (quarantine->first + quarantine->curobjs) &
            ((ZU(1) << quarantine->lg_maxobjs) - 1);
        quarantine->objs[offset] = ptr;
        quarantine->curbytes += usize;
        quarantine->curobjs++;
        if (opt_junk)
            memset(ptr, 0x5a, usize);
    } else {
        assert(quarantine->curbytes == 0);
        idalloc(ptr);
    }
}

static void
quarantine_cleanup(void *arg)
{
    quarantine_t *quarantine = *(quarantine_t **)arg;

    if (quarantine != NULL) {
        quarantine_drain(quarantine, 0);
        idalloc(quarantine);
    }
}

bool
quarantine_boot(void)
{

    assert(config_fill);

    if (quarantine_tsd_boot())
        return (true);

    return (false);
}
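The quarantine's objs array is a power-of-two ring buffer, so index wrap-around is a bit mask rather than a modulo. A standalone illustration of the indexing used by quarantine() and quarantine_drain():

#include <stdio.h>
#include <stddef.h>

int
main(void)
{
    size_t lg_maxobjs = 3; /* capacity 8, as a small example */
    size_t mask = ((size_t)1 << lg_maxobjs) - 1;
    size_t first = 6, curobjs = 4, i;

    /* Slots used: 6 7 0 1 -- the tail wraps past the end of the array. */
    for (i = 0; i < curobjs; i++)
        printf("%zu ", (first + i) & mask);
    printf("\n");
    return (0);
}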
46
contrib/jemalloc/src/rtree.c
Normal file
@@ -0,0 +1,46 @@
#define JEMALLOC_RTREE_C_
#include "jemalloc/internal/jemalloc_internal.h"

rtree_t *
rtree_new(unsigned bits)
{
    rtree_t *ret;
    unsigned bits_per_level, height, i;

    bits_per_level = ffs(pow2_ceil((RTREE_NODESIZE / sizeof(void *)))) - 1;
    height = bits / bits_per_level;
    if (height * bits_per_level != bits)
        height++;
    assert(height * bits_per_level >= bits);

    ret = (rtree_t *)base_alloc(offsetof(rtree_t, level2bits) +
        (sizeof(unsigned) * height));
    if (ret == NULL)
        return (NULL);
    memset(ret, 0, offsetof(rtree_t, level2bits) + (sizeof(unsigned) *
        height));

    if (malloc_mutex_init(&ret->mutex)) {
        /* Leak the rtree. */
        return (NULL);
    }
    ret->height = height;
    if (bits_per_level * height > bits)
        ret->level2bits[0] = bits % bits_per_level;
    else
        ret->level2bits[0] = bits_per_level;
    for (i = 1; i < height; i++)
        ret->level2bits[i] = bits_per_level;

    ret->root = (void **)base_alloc(sizeof(void *) << ret->level2bits[0]);
    if (ret->root == NULL) {
        /*
         * We leak the rtree here, since there's no generic base
         * deallocation.
         */
        return (NULL);
    }
    memset(ret->root, 0, sizeof(void *) << ret->level2bits[0]);

    return (ret);
}
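rtree_new() above divides the key bits across levels and gives the first level the remainder. A standalone sketch of that arithmetic with hypothetical parameters (64-byte nodes and 8-byte pointers, hence 3 bits per level):

#include <stdio.h>

int
main(void)
{
    /* Hypothetical: RTREE_NODESIZE 64, sizeof(void *) 8. */
    unsigned bits_per_level = 3; /* log2(64 / 8) */
    unsigned bits = 20;          /* key bits to cover */
    unsigned height = bits / bits_per_level;

    if (height * bits_per_level != bits)
        height++; /* round up: 20 / 3 -> 7 levels */
    /* The first level absorbs the remainder: 20 - 6*3 = 2 bits. */
    printf("height=%u level2bits[0]=%u\n", height, bits % bits_per_level);
    return (0);
}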
550
contrib/jemalloc/src/stats.c
Normal file
@@ -0,0 +1,550 @@
|
||||
#define JEMALLOC_STATS_C_
|
||||
#include "jemalloc/internal/jemalloc_internal.h"
|
||||
|
||||
#define CTL_GET(n, v, t) do { \
|
||||
size_t sz = sizeof(t); \
|
||||
xmallctl(n, v, &sz, NULL, 0); \
|
||||
} while (0)
|
||||
|
||||
#define CTL_I_GET(n, v, t) do { \
|
||||
size_t mib[6]; \
|
||||
size_t miblen = sizeof(mib) / sizeof(size_t); \
|
||||
size_t sz = sizeof(t); \
|
||||
xmallctlnametomib(n, mib, &miblen); \
|
||||
mib[2] = i; \
|
||||
xmallctlbymib(mib, miblen, v, &sz, NULL, 0); \
|
||||
} while (0)
|
||||
|
||||
#define CTL_J_GET(n, v, t) do { \
|
||||
size_t mib[6]; \
|
||||
size_t miblen = sizeof(mib) / sizeof(size_t); \
|
||||
size_t sz = sizeof(t); \
|
||||
xmallctlnametomib(n, mib, &miblen); \
|
||||
mib[2] = j; \
|
||||
xmallctlbymib(mib, miblen, v, &sz, NULL, 0); \
|
||||
} while (0)
|
||||
|
||||
#define CTL_IJ_GET(n, v, t) do { \
|
||||
size_t mib[6]; \
|
||||
size_t miblen = sizeof(mib) / sizeof(size_t); \
|
||||
size_t sz = sizeof(t); \
|
||||
xmallctlnametomib(n, mib, &miblen); \
|
||||
mib[2] = i; \
|
||||
mib[4] = j; \
|
||||
xmallctlbymib(mib, miblen, v, &sz, NULL, 0); \
|
||||
} while (0)
|
||||
|
||||
/******************************************************************************/
|
||||
/* Data. */
|
||||
|
||||
bool opt_stats_print = false;
|
||||
|
||||
size_t stats_cactive = 0;
|
||||
|
||||
/******************************************************************************/
|
||||
/* Function prototypes for non-inline static functions. */
|
||||
|
||||
static void stats_arena_bins_print(void (*write_cb)(void *, const char *),
|
||||
void *cbopaque, unsigned i);
|
||||
static void stats_arena_lruns_print(void (*write_cb)(void *, const char *),
|
||||
void *cbopaque, unsigned i);
|
||||
static void stats_arena_print(void (*write_cb)(void *, const char *),
|
||||
void *cbopaque, unsigned i, bool bins, bool large);
|
||||
|
||||
/******************************************************************************/
|
||||
|
||||
static void
|
||||
stats_arena_bins_print(void (*write_cb)(void *, const char *), void *cbopaque,
|
||||
unsigned i)
|
||||
{
|
||||
size_t page;
|
||||
bool config_tcache;
|
||||
unsigned nbins, j, gap_start;
|
||||
|
||||
CTL_GET("arenas.page", &page, size_t);
|
||||
|
||||
CTL_GET("config.tcache", &config_tcache, bool);
|
||||
if (config_tcache) {
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"bins: bin size regs pgs allocated nmalloc"
|
||||
" ndalloc nrequests nfills nflushes"
|
||||
" newruns reruns curruns\n");
|
||||
} else {
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"bins: bin size regs pgs allocated nmalloc"
|
||||
" ndalloc newruns reruns curruns\n");
|
||||
}
|
||||
CTL_GET("arenas.nbins", &nbins, unsigned);
|
||||
for (j = 0, gap_start = UINT_MAX; j < nbins; j++) {
|
||||
uint64_t nruns;
|
||||
|
||||
CTL_IJ_GET("stats.arenas.0.bins.0.nruns", &nruns, uint64_t);
|
||||
if (nruns == 0) {
|
||||
if (gap_start == UINT_MAX)
|
||||
gap_start = j;
|
||||
} else {
|
||||
size_t reg_size, run_size, allocated;
|
||||
uint32_t nregs;
|
||||
uint64_t nmalloc, ndalloc, nrequests, nfills, nflushes;
|
||||
uint64_t reruns;
|
||||
size_t curruns;
|
||||
|
||||
if (gap_start != UINT_MAX) {
|
||||
if (j > gap_start + 1) {
|
||||
/* Gap of more than one size class. */
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"[%u..%u]\n", gap_start,
|
||||
j - 1);
|
||||
} else {
|
||||
/* Gap of one size class. */
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"[%u]\n", gap_start);
|
||||
}
|
||||
gap_start = UINT_MAX;
|
||||
}
|
||||
CTL_J_GET("arenas.bin.0.size", ®_size, size_t);
|
||||
CTL_J_GET("arenas.bin.0.nregs", &nregs, uint32_t);
|
||||
CTL_J_GET("arenas.bin.0.run_size", &run_size, size_t);
|
||||
CTL_IJ_GET("stats.arenas.0.bins.0.allocated",
|
||||
&allocated, size_t);
|
||||
CTL_IJ_GET("stats.arenas.0.bins.0.nmalloc",
|
||||
&nmalloc, uint64_t);
|
||||
CTL_IJ_GET("stats.arenas.0.bins.0.ndalloc",
|
||||
&ndalloc, uint64_t);
|
||||
if (config_tcache) {
|
||||
CTL_IJ_GET("stats.arenas.0.bins.0.nrequests",
|
||||
&nrequests, uint64_t);
|
||||
CTL_IJ_GET("stats.arenas.0.bins.0.nfills",
|
||||
&nfills, uint64_t);
|
||||
CTL_IJ_GET("stats.arenas.0.bins.0.nflushes",
|
||||
&nflushes, uint64_t);
|
||||
}
|
||||
CTL_IJ_GET("stats.arenas.0.bins.0.nreruns", &reruns,
|
||||
uint64_t);
|
||||
CTL_IJ_GET("stats.arenas.0.bins.0.curruns", &curruns,
|
||||
size_t);
|
||||
if (config_tcache) {
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"%13u %5zu %4u %3zu %12zu %12"PRIu64
|
||||
" %12"PRIu64" %12"PRIu64" %12"PRIu64
|
||||
" %12"PRIu64" %12"PRIu64" %12"PRIu64
|
||||
" %12zu\n",
|
||||
j, reg_size, nregs, run_size / page,
|
||||
allocated, nmalloc, ndalloc, nrequests,
|
||||
nfills, nflushes, nruns, reruns, curruns);
|
||||
} else {
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"%13u %5zu %4u %3zu %12zu %12"PRIu64
|
||||
" %12"PRIu64" %12"PRIu64" %12"PRIu64
|
||||
" %12zu\n",
|
||||
j, reg_size, nregs, run_size / page,
|
||||
allocated, nmalloc, ndalloc, nruns, reruns,
|
||||
curruns);
|
||||
}
|
||||
}
|
||||
}
|
||||
if (gap_start != UINT_MAX) {
|
||||
if (j > gap_start + 1) {
|
||||
/* Gap of more than one size class. */
|
||||
malloc_cprintf(write_cb, cbopaque, "[%u..%u]\n",
|
||||
gap_start, j - 1);
|
||||
} else {
|
||||
/* Gap of one size class. */
|
||||
malloc_cprintf(write_cb, cbopaque, "[%u]\n", gap_start);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void
|
||||
stats_arena_lruns_print(void (*write_cb)(void *, const char *), void *cbopaque,
|
||||
unsigned i)
|
||||
{
|
||||
size_t page, nlruns, j;
|
||||
ssize_t gap_start;
|
||||
|
||||
CTL_GET("arenas.page", &page, size_t);
|
||||
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"large: size pages nmalloc ndalloc nrequests"
|
||||
" curruns\n");
|
||||
CTL_GET("arenas.nlruns", &nlruns, size_t);
|
||||
for (j = 0, gap_start = -1; j < nlruns; j++) {
|
||||
uint64_t nmalloc, ndalloc, nrequests;
|
||||
size_t run_size, curruns;
|
||||
|
||||
CTL_IJ_GET("stats.arenas.0.lruns.0.nmalloc", &nmalloc,
|
||||
uint64_t);
|
||||
CTL_IJ_GET("stats.arenas.0.lruns.0.ndalloc", &ndalloc,
|
||||
uint64_t);
|
||||
CTL_IJ_GET("stats.arenas.0.lruns.0.nrequests", &nrequests,
|
||||
uint64_t);
|
||||
if (nrequests == 0) {
|
||||
if (gap_start == -1)
|
||||
gap_start = j;
|
||||
} else {
|
||||
CTL_J_GET("arenas.lrun.0.size", &run_size, size_t);
|
||||
CTL_IJ_GET("stats.arenas.0.lruns.0.curruns", &curruns,
|
||||
size_t);
|
||||
if (gap_start != -1) {
|
||||
malloc_cprintf(write_cb, cbopaque, "[%zu]\n",
|
||||
j - gap_start);
|
||||
gap_start = -1;
|
||||
}
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"%13zu %5zu %12"PRIu64" %12"PRIu64" %12"PRIu64
|
||||
" %12zu\n",
|
||||
run_size, run_size / page, nmalloc, ndalloc,
|
||||
nrequests, curruns);
|
||||
}
|
||||
}
|
||||
if (gap_start != -1)
|
||||
malloc_cprintf(write_cb, cbopaque, "[%zu]\n", j - gap_start);
|
||||
}
|
||||
|
||||
static void
|
||||
stats_arena_print(void (*write_cb)(void *, const char *), void *cbopaque,
|
||||
unsigned i, bool bins, bool large)
|
||||
{
|
||||
unsigned nthreads;
|
||||
size_t page, pactive, pdirty, mapped;
|
||||
uint64_t npurge, nmadvise, purged;
|
||||
size_t small_allocated;
|
||||
uint64_t small_nmalloc, small_ndalloc, small_nrequests;
|
||||
size_t large_allocated;
|
||||
uint64_t large_nmalloc, large_ndalloc, large_nrequests;
|
||||
|
||||
CTL_GET("arenas.page", &page, size_t);
|
||||
|
||||
CTL_I_GET("stats.arenas.0.nthreads", &nthreads, unsigned);
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"assigned threads: %u\n", nthreads);
|
||||
CTL_I_GET("stats.arenas.0.pactive", &pactive, size_t);
|
||||
CTL_I_GET("stats.arenas.0.pdirty", &pdirty, size_t);
|
||||
CTL_I_GET("stats.arenas.0.npurge", &npurge, uint64_t);
|
||||
CTL_I_GET("stats.arenas.0.nmadvise", &nmadvise, uint64_t);
|
||||
CTL_I_GET("stats.arenas.0.purged", &purged, uint64_t);
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"dirty pages: %zu:%zu active:dirty, %"PRIu64" sweep%s,"
|
||||
" %"PRIu64" madvise%s, %"PRIu64" purged\n",
|
||||
pactive, pdirty, npurge, npurge == 1 ? "" : "s",
|
||||
nmadvise, nmadvise == 1 ? "" : "s", purged);
|
||||
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
" allocated nmalloc ndalloc nrequests\n");
|
||||
CTL_I_GET("stats.arenas.0.small.allocated", &small_allocated, size_t);
|
||||
CTL_I_GET("stats.arenas.0.small.nmalloc", &small_nmalloc, uint64_t);
|
||||
CTL_I_GET("stats.arenas.0.small.ndalloc", &small_ndalloc, uint64_t);
|
||||
CTL_I_GET("stats.arenas.0.small.nrequests", &small_nrequests, uint64_t);
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"small: %12zu %12"PRIu64" %12"PRIu64" %12"PRIu64"\n",
|
||||
small_allocated, small_nmalloc, small_ndalloc, small_nrequests);
|
||||
CTL_I_GET("stats.arenas.0.large.allocated", &large_allocated, size_t);
|
||||
CTL_I_GET("stats.arenas.0.large.nmalloc", &large_nmalloc, uint64_t);
|
||||
CTL_I_GET("stats.arenas.0.large.ndalloc", &large_ndalloc, uint64_t);
|
||||
CTL_I_GET("stats.arenas.0.large.nrequests", &large_nrequests, uint64_t);
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"large: %12zu %12"PRIu64" %12"PRIu64" %12"PRIu64"\n",
|
||||
large_allocated, large_nmalloc, large_ndalloc, large_nrequests);
|
||||
malloc_cprintf(write_cb, cbopaque,
|
||||
"total: %12zu %12"PRIu64" %12"PRIu64" %12"PRIu64"\n",
|
||||
small_allocated + large_allocated,
|
||||
small_nmalloc + large_nmalloc,
|
||||
small_ndalloc + large_ndalloc,
|
||||
small_nrequests + large_nrequests);
|
||||
malloc_cprintf(write_cb, cbopaque, "active: %12zu\n", pactive * page);
|
||||
CTL_I_GET("stats.arenas.0.mapped", &mapped, size_t);
|
||||
malloc_cprintf(write_cb, cbopaque, "mapped: %12zu\n", mapped);
|
||||
|
||||
if (bins)
|
||||
stats_arena_bins_print(write_cb, cbopaque, i);
|
||||
if (large)
|
||||
stats_arena_lruns_print(write_cb, cbopaque, i);
|
||||
}
|
||||
|
||||
void
|
||||
stats_print(void (*write_cb)(void *, const char *), void *cbopaque,
|
||||
const char *opts)
|
||||
{
|
||||
int err;
|
||||
uint64_t epoch;
|
||||
size_t u64sz;
|
||||
bool general = true;
|
||||
bool merged = true;
|
||||
bool unmerged = true;
|
||||
bool bins = true;
|
||||
bool large = true;
|
||||
|
||||
/*
|
||||
* Refresh stats, in case mallctl() was called by the application.
|
||||
*
|
||||
* Check for OOM here, since refreshing the ctl cache can trigger
|
||||
* allocation. In practice, none of the subsequent mallctl()-related
|
||||
* calls in this function will cause OOM if this one succeeds.
|
||||
* */
    epoch = 1;
    u64sz = sizeof(uint64_t);
    err = je_mallctl("epoch", &epoch, &u64sz, &epoch, sizeof(uint64_t));
    if (err != 0) {
        if (err == EAGAIN) {
            malloc_write("<jemalloc>: Memory allocation failure in "
                "mallctl(\"epoch\", ...)\n");
            return;
        }
        malloc_write("<jemalloc>: Failure in mallctl(\"epoch\", "
            "...)\n");
        abort();
    }

    if (write_cb == NULL) {
        /*
         * The caller did not provide an alternate write_cb callback
         * function, so use the default one.  malloc_write() is an
         * inline function, so use malloc_message() directly here.
         */
        write_cb = je_malloc_message;
        cbopaque = NULL;
    }

    if (opts != NULL) {
        unsigned i;

        for (i = 0; opts[i] != '\0'; i++) {
            switch (opts[i]) {
            case 'g':
                general = false;
                break;
            case 'm':
                merged = false;
                break;
            case 'a':
                unmerged = false;
                break;
            case 'b':
                bins = false;
                break;
            case 'l':
                large = false;
                break;
            default:;
            }
        }
    }

    write_cb(cbopaque, "___ Begin jemalloc statistics ___\n");
    if (general) {
        int err;
        const char *cpv;
        bool bv;
        unsigned uv;
        ssize_t ssv;
        size_t sv, bsz, ssz, sssz, cpsz;

        bsz = sizeof(bool);
        ssz = sizeof(size_t);
        sssz = sizeof(ssize_t);
        cpsz = sizeof(const char *);

        CTL_GET("version", &cpv, const char *);
        malloc_cprintf(write_cb, cbopaque, "Version: %s\n", cpv);
        CTL_GET("config.debug", &bv, bool);
        malloc_cprintf(write_cb, cbopaque, "Assertions %s\n",
            bv ? "enabled" : "disabled");

#define OPT_WRITE_BOOL(n) \
        if ((err = je_mallctl("opt."#n, &bv, &bsz, NULL, 0)) \
            == 0) { \
            malloc_cprintf(write_cb, cbopaque, \
                "  opt."#n": %s\n", bv ? "true" : "false"); \
        }
#define OPT_WRITE_SIZE_T(n) \
        if ((err = je_mallctl("opt."#n, &sv, &ssz, NULL, 0)) \
            == 0) { \
            malloc_cprintf(write_cb, cbopaque, \
                "  opt."#n": %zu\n", sv); \
        }
#define OPT_WRITE_SSIZE_T(n) \
        if ((err = je_mallctl("opt."#n, &ssv, &sssz, NULL, 0)) \
            == 0) { \
            malloc_cprintf(write_cb, cbopaque, \
                "  opt."#n": %zd\n", ssv); \
        }
#define OPT_WRITE_CHAR_P(n) \
        if ((err = je_mallctl("opt."#n, &cpv, &cpsz, NULL, 0)) \
            == 0) { \
            malloc_cprintf(write_cb, cbopaque, \
                "  opt."#n": \"%s\"\n", cpv); \
        }

        write_cb(cbopaque, "Run-time option settings:\n");
        OPT_WRITE_BOOL(abort)
        OPT_WRITE_SIZE_T(lg_chunk)
        OPT_WRITE_SIZE_T(narenas)
        OPT_WRITE_SSIZE_T(lg_dirty_mult)
        OPT_WRITE_BOOL(stats_print)
        OPT_WRITE_BOOL(junk)
        OPT_WRITE_SIZE_T(quarantine)
        OPT_WRITE_BOOL(redzone)
        OPT_WRITE_BOOL(zero)
        OPT_WRITE_BOOL(utrace)
        OPT_WRITE_BOOL(valgrind)
        OPT_WRITE_BOOL(xmalloc)
        OPT_WRITE_BOOL(tcache)
        OPT_WRITE_SSIZE_T(lg_tcache_max)
        OPT_WRITE_BOOL(prof)
        OPT_WRITE_CHAR_P(prof_prefix)
        OPT_WRITE_BOOL(prof_active)
        OPT_WRITE_SSIZE_T(lg_prof_sample)
        OPT_WRITE_BOOL(prof_accum)
        OPT_WRITE_SSIZE_T(lg_prof_interval)
        OPT_WRITE_BOOL(prof_gdump)
        OPT_WRITE_BOOL(prof_leak)

#undef OPT_WRITE_BOOL
#undef OPT_WRITE_SIZE_T
#undef OPT_WRITE_SSIZE_T
#undef OPT_WRITE_CHAR_P

        malloc_cprintf(write_cb, cbopaque, "CPUs: %u\n", ncpus);

        CTL_GET("arenas.narenas", &uv, unsigned);
        malloc_cprintf(write_cb, cbopaque, "Max arenas: %u\n", uv);

        malloc_cprintf(write_cb, cbopaque, "Pointer size: %zu\n",
            sizeof(void *));

        CTL_GET("arenas.quantum", &sv, size_t);
        malloc_cprintf(write_cb, cbopaque, "Quantum size: %zu\n", sv);

        CTL_GET("arenas.page", &sv, size_t);
        malloc_cprintf(write_cb, cbopaque, "Page size: %zu\n", sv);

        CTL_GET("opt.lg_dirty_mult", &ssv, ssize_t);
        if (ssv >= 0) {
            malloc_cprintf(write_cb, cbopaque,
                "Min active:dirty page ratio per arena: %u:1\n",
                (1U << ssv));
        } else {
            write_cb(cbopaque,
                "Min active:dirty page ratio per arena: N/A\n");
        }
        if ((err = je_mallctl("arenas.tcache_max", &sv, &ssz, NULL, 0))
            == 0) {
            malloc_cprintf(write_cb, cbopaque,
                "Maximum thread-cached size class: %zu\n", sv);
        }
        if ((err = je_mallctl("opt.prof", &bv, &bsz, NULL, 0)) == 0 &&
            bv) {
            CTL_GET("opt.lg_prof_sample", &sv, size_t);
            malloc_cprintf(write_cb, cbopaque,
                "Average profile sample interval: %"PRIu64
                " (2^%zu)\n", (((uint64_t)1U) << sv), sv);

            CTL_GET("opt.lg_prof_interval", &ssv, ssize_t);
            if (ssv >= 0) {
                malloc_cprintf(write_cb, cbopaque,
                    "Average profile dump interval: %"PRIu64
                    " (2^%zd)\n",
                    (((uint64_t)1U) << ssv), ssv);
            } else {
                write_cb(cbopaque,
                    "Average profile dump interval: N/A\n");
            }
        }
        CTL_GET("opt.lg_chunk", &sv, size_t);
        malloc_cprintf(write_cb, cbopaque, "Chunk size: %zu (2^%zu)\n",
            (ZU(1) << sv), sv);
    }

    if (config_stats) {
        size_t *cactive;
        size_t allocated, active, mapped;
        size_t chunks_current, chunks_high;
        uint64_t chunks_total;
        size_t huge_allocated;
        uint64_t huge_nmalloc, huge_ndalloc;

        CTL_GET("stats.cactive", &cactive, size_t *);
        CTL_GET("stats.allocated", &allocated, size_t);
        CTL_GET("stats.active", &active, size_t);
        CTL_GET("stats.mapped", &mapped, size_t);
        malloc_cprintf(write_cb, cbopaque,
            "Allocated: %zu, active: %zu, mapped: %zu\n",
            allocated, active, mapped);
        malloc_cprintf(write_cb, cbopaque,
            "Current active ceiling: %zu\n", atomic_read_z(cactive));

        /* Print chunk stats. */
        CTL_GET("stats.chunks.total", &chunks_total, uint64_t);
        CTL_GET("stats.chunks.high", &chunks_high, size_t);
        CTL_GET("stats.chunks.current", &chunks_current, size_t);
        malloc_cprintf(write_cb, cbopaque, "chunks: nchunks   "
            "highchunks    curchunks\n");
        malloc_cprintf(write_cb, cbopaque, "  %13"PRIu64"%13zu%13zu\n",
            chunks_total, chunks_high, chunks_current);

        /* Print huge stats. */
        CTL_GET("stats.huge.nmalloc", &huge_nmalloc, uint64_t);
        CTL_GET("stats.huge.ndalloc", &huge_ndalloc, uint64_t);
        CTL_GET("stats.huge.allocated", &huge_allocated, size_t);
        malloc_cprintf(write_cb, cbopaque,
            "huge: nmalloc      ndalloc    allocated\n");
        malloc_cprintf(write_cb, cbopaque,
            " %12"PRIu64" %12"PRIu64" %12zu\n",
            huge_nmalloc, huge_ndalloc, huge_allocated);

        if (merged) {
            unsigned narenas;

            CTL_GET("arenas.narenas", &narenas, unsigned);
            {
                bool initialized[narenas];
                size_t isz;
                unsigned i, ninitialized;

                isz = sizeof(initialized);
                xmallctl("arenas.initialized", initialized,
                    &isz, NULL, 0);
                for (i = ninitialized = 0; i < narenas; i++) {
                    if (initialized[i])
                        ninitialized++;
                }

                if (ninitialized > 1 || unmerged == false) {
                    /* Print merged arena stats. */
                    malloc_cprintf(write_cb, cbopaque,
                        "\nMerged arenas stats:\n");
                    stats_arena_print(write_cb, cbopaque,
                        narenas, bins, large);
                }
            }
        }

        if (unmerged) {
            unsigned narenas;

            /* Print stats for each arena. */

            CTL_GET("arenas.narenas", &narenas, unsigned);
            {
                bool initialized[narenas];
                size_t isz;
                unsigned i;

                isz = sizeof(initialized);
                xmallctl("arenas.initialized", initialized,
                    &isz, NULL, 0);

                for (i = 0; i < narenas; i++) {
                    if (initialized[i]) {
                        malloc_cprintf(write_cb,
                            cbopaque,
                            "\narenas[%u]:\n", i);
                        stats_arena_print(write_cb,
                            cbopaque, i, bins, large);
                    }
                }
            }
        }
    }
    write_cb(cbopaque, "--- End jemalloc statistics ---\n");
}
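The opts string parsed above lets callers trim the report: 'g' drops the general section, 'm' merged-arena stats, 'a' per-arena stats, 'b' bins, and 'l' large size classes. For illustration, a minimal caller of the public entry point (declared in malloc_np.h later in this commit) might look like the following sketch; the sink type and buffer size are illustrative, not part of the commit:

/*
 * Collect the statistics report into a fixed buffer via the
 * write_cb/cbopaque pair, suppressing per-arena, bin, and large
 * size-class detail with the "abl" opts string.
 */
#include <malloc_np.h>
#include <stdio.h>
#include <string.h>

struct sink {
    char   buf[4096];
    size_t len;
};

static void
append_cb(void *cbopaque, const char *s)
{
    struct sink *sink = cbopaque;
    size_t n = strlen(s);

    if (n > sizeof(sink->buf) - 1 - sink->len)
        n = sizeof(sink->buf) - 1 - sink->len;
    memcpy(sink->buf + sink->len, s, n);
    sink->len += n;
    sink->buf[sink->len] = '\0';
}

int
main(void)
{
    struct sink sink = { .buf = "", .len = 0 };

    malloc_stats_print(append_cb, &sink, "abl");
    fputs(sink.buf, stdout);
    return (0);
}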
435	contrib/jemalloc/src/tcache.c	Normal file
@ -0,0 +1,435 @@
#define JEMALLOC_TCACHE_C_
#include "jemalloc/internal/jemalloc_internal.h"

/******************************************************************************/
/* Data. */

malloc_tsd_data(, tcache, tcache_t *, NULL)
malloc_tsd_data(, tcache_enabled, tcache_enabled_t, tcache_enabled_default)

bool opt_tcache = true;
ssize_t opt_lg_tcache_max = LG_TCACHE_MAXCLASS_DEFAULT;

tcache_bin_info_t *tcache_bin_info;
static unsigned stack_nelms; /* Total stack elms per tcache. */

size_t nhbins;
size_t tcache_maxclass;

/******************************************************************************/

void *
tcache_alloc_small_hard(tcache_t *tcache, tcache_bin_t *tbin, size_t binind)
{
    void *ret;

    arena_tcache_fill_small(tcache->arena, tbin, binind,
        config_prof ? tcache->prof_accumbytes : 0);
    if (config_prof)
        tcache->prof_accumbytes = 0;
    ret = tcache_alloc_easy(tbin);

    return (ret);
}

void
tcache_bin_flush_small(tcache_bin_t *tbin, size_t binind, unsigned rem,
    tcache_t *tcache)
{
    void *ptr;
    unsigned i, nflush, ndeferred;
    bool merged_stats = false;

    assert(binind < NBINS);
    assert(rem <= tbin->ncached);

    for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) {
        /* Lock the arena bin associated with the first object. */
        arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(
            tbin->avail[0]);
        arena_t *arena = chunk->arena;
        arena_bin_t *bin = &arena->bins[binind];

        if (config_prof && arena == tcache->arena) {
            malloc_mutex_lock(&arena->lock);
            arena_prof_accum(arena, tcache->prof_accumbytes);
            malloc_mutex_unlock(&arena->lock);
            tcache->prof_accumbytes = 0;
        }

        malloc_mutex_lock(&bin->lock);
        if (config_stats && arena == tcache->arena) {
            assert(merged_stats == false);
            merged_stats = true;
            bin->stats.nflushes++;
            bin->stats.nrequests += tbin->tstats.nrequests;
            tbin->tstats.nrequests = 0;
        }
        ndeferred = 0;
        for (i = 0; i < nflush; i++) {
            ptr = tbin->avail[i];
            assert(ptr != NULL);
            chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
            if (chunk->arena == arena) {
                size_t pageind = ((uintptr_t)ptr -
                    (uintptr_t)chunk) >> LG_PAGE;
                arena_chunk_map_t *mapelm =
                    &chunk->map[pageind-map_bias];
                if (config_fill && opt_junk) {
                    arena_alloc_junk_small(ptr,
                        &arena_bin_info[binind], true);
                }
                arena_dalloc_bin(arena, chunk, ptr, mapelm);
            } else {
                /*
                 * This object was allocated via a different
                 * arena bin than the one that is currently
                 * locked.  Stash the object, so that it can be
                 * handled in a future pass.
                 */
                tbin->avail[ndeferred] = ptr;
                ndeferred++;
            }
        }
        malloc_mutex_unlock(&bin->lock);
    }
    if (config_stats && merged_stats == false) {
        /*
         * The flush loop didn't happen to flush to this thread's
         * arena, so the stats didn't get merged.  Manually do so now.
         */
        arena_bin_t *bin = &tcache->arena->bins[binind];
        malloc_mutex_lock(&bin->lock);
        bin->stats.nflushes++;
        bin->stats.nrequests += tbin->tstats.nrequests;
        tbin->tstats.nrequests = 0;
        malloc_mutex_unlock(&bin->lock);
    }

    memmove(tbin->avail, &tbin->avail[tbin->ncached - rem],
        rem * sizeof(void *));
    tbin->ncached = rem;
    if ((int)tbin->ncached < tbin->low_water)
        tbin->low_water = tbin->ncached;
}
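The flush loop above repeatedly locks the arena bin owning the first cached object, frees every object belonging to it, and compacts the remainder to the front of avail for the next pass, so each iteration retires at least one object and the loop terminates. A minimal, allocator-free sketch of the same partition-and-defer pattern (the item and owner types are illustrative, not jemalloc API):

#include <stddef.h>

struct item {
    int owner;
    /* ... payload ... */
};

static void process(struct item *it) { (void)it; /* e.g. return to owner */ }

static void
flush_deferred(struct item **avail, unsigned nitems)
{
    unsigned i, nflush, ndeferred;

    for (nflush = nitems; nflush > 0; nflush = ndeferred) {
        int owner = avail[0]->owner;   /* lock(owner) in real code */

        ndeferred = 0;
        for (i = 0; i < nflush; i++) {
            if (avail[i]->owner == owner)
                process(avail[i]);
            else
                avail[ndeferred++] = avail[i];   /* stash for next pass */
        }
        /* unlock(owner); each pass handles exactly one owner */
    }
}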
void
tcache_bin_flush_large(tcache_bin_t *tbin, size_t binind, unsigned rem,
    tcache_t *tcache)
{
    void *ptr;
    unsigned i, nflush, ndeferred;
    bool merged_stats = false;

    assert(binind < nhbins);
    assert(rem <= tbin->ncached);

    for (nflush = tbin->ncached - rem; nflush > 0; nflush = ndeferred) {
        /* Lock the arena associated with the first object. */
        arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(
            tbin->avail[0]);
        arena_t *arena = chunk->arena;

        malloc_mutex_lock(&arena->lock);
        if ((config_prof || config_stats) && arena == tcache->arena) {
            if (config_prof) {
                arena_prof_accum(arena,
                    tcache->prof_accumbytes);
                tcache->prof_accumbytes = 0;
            }
            if (config_stats) {
                merged_stats = true;
                arena->stats.nrequests_large +=
                    tbin->tstats.nrequests;
                arena->stats.lstats[binind - NBINS].nrequests +=
                    tbin->tstats.nrequests;
                tbin->tstats.nrequests = 0;
            }
        }
        ndeferred = 0;
        for (i = 0; i < nflush; i++) {
            ptr = tbin->avail[i];
            assert(ptr != NULL);
            chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
            if (chunk->arena == arena)
                arena_dalloc_large(arena, chunk, ptr);
            else {
                /*
                 * This object was allocated via a different
                 * arena than the one that is currently locked.
                 * Stash the object, so that it can be handled
                 * in a future pass.
                 */
                tbin->avail[ndeferred] = ptr;
                ndeferred++;
            }
        }
        malloc_mutex_unlock(&arena->lock);
    }
    if (config_stats && merged_stats == false) {
        /*
         * The flush loop didn't happen to flush to this thread's
         * arena, so the stats didn't get merged.  Manually do so now.
         */
        arena_t *arena = tcache->arena;
        malloc_mutex_lock(&arena->lock);
        arena->stats.nrequests_large += tbin->tstats.nrequests;
        arena->stats.lstats[binind - NBINS].nrequests +=
            tbin->tstats.nrequests;
        tbin->tstats.nrequests = 0;
        malloc_mutex_unlock(&arena->lock);
    }

    memmove(tbin->avail, &tbin->avail[tbin->ncached - rem],
        rem * sizeof(void *));
    tbin->ncached = rem;
    if ((int)tbin->ncached < tbin->low_water)
        tbin->low_water = tbin->ncached;
}

void
tcache_arena_associate(tcache_t *tcache, arena_t *arena)
{

    if (config_stats) {
        /* Link into list of extant tcaches. */
        malloc_mutex_lock(&arena->lock);
        ql_elm_new(tcache, link);
        ql_tail_insert(&arena->tcache_ql, tcache, link);
        malloc_mutex_unlock(&arena->lock);
    }
    tcache->arena = arena;
}

void
tcache_arena_dissociate(tcache_t *tcache)
{

    if (config_stats) {
        /* Unlink from list of extant tcaches. */
        malloc_mutex_lock(&tcache->arena->lock);
        ql_remove(&tcache->arena->tcache_ql, tcache, link);
        malloc_mutex_unlock(&tcache->arena->lock);
        tcache_stats_merge(tcache, tcache->arena);
    }
}

tcache_t *
tcache_create(arena_t *arena)
{
    tcache_t *tcache;
    size_t size, stack_offset;
    unsigned i;

    size = offsetof(tcache_t, tbins) + (sizeof(tcache_bin_t) * nhbins);
    /* Naturally align the pointer stacks. */
    size = PTR_CEILING(size);
    stack_offset = size;
    size += stack_nelms * sizeof(void *);
    /*
     * Round up to the nearest multiple of the cacheline size, in order to
     * avoid the possibility of false cacheline sharing.
     *
     * That this works relies on the same logic as in ipalloc(), but we
     * cannot directly call ipalloc() here due to tcache bootstrapping
     * issues.
     */
    size = (size + CACHELINE_MASK) & (-CACHELINE);

    if (size <= SMALL_MAXCLASS)
        tcache = (tcache_t *)arena_malloc_small(arena, size, true);
    else if (size <= tcache_maxclass)
        tcache = (tcache_t *)arena_malloc_large(arena, size, true);
    else
        tcache = (tcache_t *)icalloc(size);

    if (tcache == NULL)
        return (NULL);

    tcache_arena_associate(tcache, arena);

    assert((TCACHE_NSLOTS_SMALL_MAX & 1U) == 0);
    for (i = 0; i < nhbins; i++) {
        tcache->tbins[i].lg_fill_div = 1;
        tcache->tbins[i].avail = (void **)((uintptr_t)tcache +
            (uintptr_t)stack_offset);
        stack_offset += tcache_bin_info[i].ncached_max * sizeof(void *);
    }

    tcache_tsd_set(&tcache);

    return (tcache);
}
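The rounding expression in tcache_create() relies on CACHELINE being a power of two with CACHELINE_MASK == CACHELINE - 1, so that -CACHELINE equals ~CACHELINE_MASK in two's complement and the expression rounds size up to the next cacheline multiple. A self-contained check of that identity (the constants here are local stand-ins, not the jemalloc definitions):

#include <assert.h>
#include <stddef.h>

#define CACHELINE      ((size_t)64)
#define CACHELINE_MASK (CACHELINE - 1)

int
main(void)
{
    size_t size;

    for (size = 1; size <= 512; size++) {
        /* Same form as in tcache_create(). */
        size_t rounded = (size + CACHELINE_MASK) & (-CACHELINE);

        assert(rounded % CACHELINE == 0);
        assert(rounded >= size && rounded - size < CACHELINE);
    }
    return (0);
}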
void
tcache_destroy(tcache_t *tcache)
{
    unsigned i;
    size_t tcache_size;

    tcache_arena_dissociate(tcache);

    for (i = 0; i < NBINS; i++) {
        tcache_bin_t *tbin = &tcache->tbins[i];
        tcache_bin_flush_small(tbin, i, 0, tcache);

        if (config_stats && tbin->tstats.nrequests != 0) {
            arena_t *arena = tcache->arena;
            arena_bin_t *bin = &arena->bins[i];
            malloc_mutex_lock(&bin->lock);
            bin->stats.nrequests += tbin->tstats.nrequests;
            malloc_mutex_unlock(&bin->lock);
        }
    }

    for (; i < nhbins; i++) {
        tcache_bin_t *tbin = &tcache->tbins[i];
        tcache_bin_flush_large(tbin, i, 0, tcache);

        if (config_stats && tbin->tstats.nrequests != 0) {
            arena_t *arena = tcache->arena;
            malloc_mutex_lock(&arena->lock);
            arena->stats.nrequests_large += tbin->tstats.nrequests;
            arena->stats.lstats[i - NBINS].nrequests +=
                tbin->tstats.nrequests;
            malloc_mutex_unlock(&arena->lock);
        }
    }

    if (config_prof && tcache->prof_accumbytes > 0) {
        malloc_mutex_lock(&tcache->arena->lock);
        arena_prof_accum(tcache->arena, tcache->prof_accumbytes);
        malloc_mutex_unlock(&tcache->arena->lock);
    }

    tcache_size = arena_salloc(tcache, false);
    if (tcache_size <= SMALL_MAXCLASS) {
        arena_chunk_t *chunk = CHUNK_ADDR2BASE(tcache);
        arena_t *arena = chunk->arena;
        size_t pageind = ((uintptr_t)tcache - (uintptr_t)chunk) >>
            LG_PAGE;
        arena_chunk_map_t *mapelm = &chunk->map[pageind-map_bias];
        arena_run_t *run = (arena_run_t *)((uintptr_t)chunk +
            (uintptr_t)((pageind - (mapelm->bits >> LG_PAGE)) <<
            LG_PAGE));
        arena_bin_t *bin = run->bin;

        malloc_mutex_lock(&bin->lock);
        arena_dalloc_bin(arena, chunk, tcache, mapelm);
        malloc_mutex_unlock(&bin->lock);
    } else if (tcache_size <= tcache_maxclass) {
        arena_chunk_t *chunk = CHUNK_ADDR2BASE(tcache);
        arena_t *arena = chunk->arena;

        malloc_mutex_lock(&arena->lock);
        arena_dalloc_large(arena, chunk, tcache);
        malloc_mutex_unlock(&arena->lock);
    } else
        idalloc(tcache);
}

void
tcache_thread_cleanup(void *arg)
{
    tcache_t *tcache = *(tcache_t **)arg;

    if (tcache == TCACHE_STATE_DISABLED) {
        /* Do nothing. */
    } else if (tcache == TCACHE_STATE_REINCARNATED) {
        /*
         * Another destructor called an allocator function after this
         * destructor was called.  Reset tcache to
         * TCACHE_STATE_PURGATORY in order to receive another callback.
         */
        tcache = TCACHE_STATE_PURGATORY;
        tcache_tsd_set(&tcache);
    } else if (tcache == TCACHE_STATE_PURGATORY) {
        /*
         * The previous time this destructor was called, we set the key
         * to TCACHE_STATE_PURGATORY so that other destructors wouldn't
         * cause re-creation of the tcache.  This time, do nothing, so
         * that the destructor will not be called again.
         */
    } else if (tcache != NULL) {
        assert(tcache != TCACHE_STATE_PURGATORY);
        tcache_destroy(tcache);
        tcache = TCACHE_STATE_PURGATORY;
        tcache_tsd_set(&tcache);
    }
}

void
tcache_stats_merge(tcache_t *tcache, arena_t *arena)
{
    unsigned i;

    /* Merge and reset tcache stats. */
    for (i = 0; i < NBINS; i++) {
        arena_bin_t *bin = &arena->bins[i];
        tcache_bin_t *tbin = &tcache->tbins[i];
        malloc_mutex_lock(&bin->lock);
        bin->stats.nrequests += tbin->tstats.nrequests;
        malloc_mutex_unlock(&bin->lock);
        tbin->tstats.nrequests = 0;
    }

    for (; i < nhbins; i++) {
        malloc_large_stats_t *lstats = &arena->stats.lstats[i - NBINS];
        tcache_bin_t *tbin = &tcache->tbins[i];
        arena->stats.nrequests_large += tbin->tstats.nrequests;
        lstats->nrequests += tbin->tstats.nrequests;
        tbin->tstats.nrequests = 0;
    }
}

bool
tcache_boot0(void)
{
    unsigned i;

    /*
     * If necessary, clamp opt_lg_tcache_max, now that arena_maxclass is
     * known.
     */
    if (opt_lg_tcache_max < 0 || (1U << opt_lg_tcache_max) < SMALL_MAXCLASS)
        tcache_maxclass = SMALL_MAXCLASS;
    else if ((1U << opt_lg_tcache_max) > arena_maxclass)
        tcache_maxclass = arena_maxclass;
    else
        tcache_maxclass = (1U << opt_lg_tcache_max);

    nhbins = NBINS + (tcache_maxclass >> LG_PAGE);

    /* Initialize tcache_bin_info. */
    tcache_bin_info = (tcache_bin_info_t *)base_alloc(nhbins *
        sizeof(tcache_bin_info_t));
    if (tcache_bin_info == NULL)
        return (true);
    stack_nelms = 0;
    for (i = 0; i < NBINS; i++) {
        if ((arena_bin_info[i].nregs << 1) <= TCACHE_NSLOTS_SMALL_MAX) {
            tcache_bin_info[i].ncached_max =
                (arena_bin_info[i].nregs << 1);
        } else {
            tcache_bin_info[i].ncached_max =
                TCACHE_NSLOTS_SMALL_MAX;
        }
        stack_nelms += tcache_bin_info[i].ncached_max;
    }
    for (; i < nhbins; i++) {
        tcache_bin_info[i].ncached_max = TCACHE_NSLOTS_LARGE;
        stack_nelms += tcache_bin_info[i].ncached_max;
    }

    return (false);
}

bool
tcache_boot1(void)
{

    if (tcache_tsd_boot() || tcache_enabled_tsd_boot())
        return (true);

    return (false);
}
72	contrib/jemalloc/src/tsd.c	Normal file
@ -0,0 +1,72 @@
#define JEMALLOC_TSD_C_
#include "jemalloc/internal/jemalloc_internal.h"

/******************************************************************************/
/* Data. */

static unsigned ncleanups;
static malloc_tsd_cleanup_t cleanups[MALLOC_TSD_CLEANUPS_MAX];

/******************************************************************************/

void *
malloc_tsd_malloc(size_t size)
{

    /* Avoid choose_arena() in order to dodge bootstrapping issues. */
    return arena_malloc(arenas[0], size, false, false);
}

void
malloc_tsd_dalloc(void *wrapper)
{

    idalloc(wrapper);
}

void
malloc_tsd_no_cleanup(void *arg)
{

    not_reached();
}

#ifdef JEMALLOC_MALLOC_THREAD_CLEANUP
void
_malloc_thread_cleanup(void)
{
    bool pending[ncleanups], again;
    unsigned i;

    for (i = 0; i < ncleanups; i++)
        pending[i] = true;

    do {
        again = false;
        for (i = 0; i < ncleanups; i++) {
            if (pending[i]) {
                pending[i] = cleanups[i].f(cleanups[i].arg);
                if (pending[i])
                    again = true;
            }
        }
    } while (again);
}
#endif

void
malloc_tsd_cleanup_register(bool (*f)(void *), void *arg)
{

    assert(ncleanups < MALLOC_TSD_CLEANUPS_MAX);
    cleanups[ncleanups].f = f;
    cleanups[ncleanups].arg = arg;
    ncleanups++;
}

void
malloc_tsd_boot(void)
{

    ncleanups = 0;
}
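A cleanup registered through malloc_tsd_cleanup_register() above returns true while its work is still pending, which makes the do/while loop in _malloc_thread_cleanup() run another pass; returning false retires it. A hedged sketch of a conforming cleanup function (the tsd type and teardown logic are illustrative, not jemalloc internals):

#include <stdbool.h>

struct my_tsd {
    bool initialized;
    /* ... per-thread state ... */
};

static bool
my_tsd_cleanup(void *arg)
{
    struct my_tsd *tsd = arg;

    if (tsd->initialized) {
        /* Tear down per-thread state. */
        tsd->initialized = false;
        return (true);   /* Re-check on the next pass. */
    }
    return (false);      /* Nothing left to do. */
}

/*
 * Registered once, e.g. during bootstrap:
 *     malloc_tsd_cleanup_register(my_tsd_cleanup, &tsd);
 */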
635	contrib/jemalloc/src/util.c	Normal file
@ -0,0 +1,635 @@
#define assert(e) do { \
    if (config_debug && !(e)) { \
        malloc_write("<jemalloc>: Failed assertion\n"); \
        abort(); \
    } \
} while (0)

#define not_reached() do { \
    if (config_debug) { \
        malloc_write("<jemalloc>: Unreachable code reached\n"); \
        abort(); \
    } \
} while (0)

#define not_implemented() do { \
    if (config_debug) { \
        malloc_write("<jemalloc>: Not implemented\n"); \
        abort(); \
    } \
} while (0)

#define JEMALLOC_UTIL_C_
#include "jemalloc/internal/jemalloc_internal.h"

/******************************************************************************/
/* Function prototypes for non-inline static functions. */

static void wrtmessage(void *cbopaque, const char *s);
#define U2S_BUFSIZE ((1U << (LG_SIZEOF_INTMAX_T + 3)) + 1)
static char *u2s(uintmax_t x, unsigned base, bool uppercase, char *s,
    size_t *slen_p);
#define D2S_BUFSIZE (1 + U2S_BUFSIZE)
static char *d2s(intmax_t x, char sign, char *s, size_t *slen_p);
#define O2S_BUFSIZE (1 + U2S_BUFSIZE)
static char *o2s(uintmax_t x, bool alt_form, char *s, size_t *slen_p);
#define X2S_BUFSIZE (2 + U2S_BUFSIZE)
static char *x2s(uintmax_t x, bool alt_form, bool uppercase, char *s,
    size_t *slen_p);

/******************************************************************************/

/* malloc_message() setup. */
JEMALLOC_CATTR(visibility("hidden"), static)
void
wrtmessage(void *cbopaque, const char *s)
{

#ifdef SYS_write
    /*
     * Use syscall(2) rather than write(2) when possible in order to avoid
     * the possibility of memory allocation within libc.  This is necessary
     * on FreeBSD; most operating systems do not have this problem though.
     */
    UNUSED int result = syscall(SYS_write, STDERR_FILENO, s, strlen(s));
#else
    UNUSED int result = write(STDERR_FILENO, s, strlen(s));
#endif
}

void (*je_malloc_message)(void *, const char *s)
    JEMALLOC_ATTR(visibility("default")) = wrtmessage;

JEMALLOC_CATTR(visibility("hidden"), static)
void
wrtmessage_1_0(const char *s1, const char *s2, const char *s3,
    const char *s4)
{

    wrtmessage(NULL, s1);
    wrtmessage(NULL, s2);
    wrtmessage(NULL, s3);
    wrtmessage(NULL, s4);
}

void (*__malloc_message_1_0)(const char *s1, const char *s2, const char *s3,
    const char *s4) = wrtmessage_1_0;
__sym_compat(_malloc_message, __malloc_message_1_0, FBSD_1.0);

/*
 * glibc provides a non-standard strerror_r() when _GNU_SOURCE is defined, so
 * provide a wrapper.
 */
int
buferror(int errnum, char *buf, size_t buflen)
{
#ifdef _GNU_SOURCE
    char *b = strerror_r(errno, buf, buflen);
    if (b != buf) {
        strncpy(buf, b, buflen);
        buf[buflen-1] = '\0';
    }
    return (0);
#else
    return (strerror_r(errno, buf, buflen));
#endif
}

uintmax_t
malloc_strtoumax(const char *nptr, char **endptr, int base)
{
    uintmax_t ret, digit;
    int b;
    bool neg;
    const char *p, *ns;

    if (base < 0 || base == 1 || base > 36) {
        errno = EINVAL;
        return (UINTMAX_MAX);
    }
    b = base;

    /* Swallow leading whitespace and get sign, if any. */
    neg = false;
    p = nptr;
    while (true) {
        switch (*p) {
        case '\t': case '\n': case '\v': case '\f': case '\r': case ' ':
            p++;
            break;
        case '-':
            neg = true;
            /* Fall through. */
        case '+':
            p++;
            /* Fall through. */
        default:
            goto label_prefix;
        }
    }

    /* Get prefix, if any. */
    label_prefix:
    /*
     * Note where the first non-whitespace/sign character is so that it is
     * possible to tell whether any digits are consumed (e.g., " 0" vs.
     * " -x").
     */
    ns = p;
    if (*p == '0') {
        switch (p[1]) {
        case '0': case '1': case '2': case '3': case '4': case '5':
        case '6': case '7':
            if (b == 0)
                b = 8;
            if (b == 8)
                p++;
            break;
        case 'x':
            switch (p[2]) {
            case '0': case '1': case '2': case '3': case '4':
            case '5': case '6': case '7': case '8': case '9':
            case 'A': case 'B': case 'C': case 'D': case 'E':
            case 'F':
            case 'a': case 'b': case 'c': case 'd': case 'e':
            case 'f':
                if (b == 0)
                    b = 16;
                if (b == 16)
                    p += 2;
                break;
            default:
                break;
            }
            break;
        default:
            break;
        }
    }
    if (b == 0)
        b = 10;

    /* Convert. */
    ret = 0;
    while ((*p >= '0' && *p <= '9' && (digit = *p - '0') < b)
        || (*p >= 'A' && *p <= 'Z' && (digit = 10 + *p - 'A') < b)
        || (*p >= 'a' && *p <= 'z' && (digit = 10 + *p - 'a') < b)) {
        uintmax_t pret = ret;
        ret *= b;
        ret += digit;
        if (ret < pret) {
            /* Overflow. */
            errno = ERANGE;
            return (UINTMAX_MAX);
        }
        p++;
    }
    if (neg)
        ret = -ret;

    if (endptr != NULL) {
        if (p == ns) {
            /* No characters were converted. */
            *endptr = (char *)nptr;
        } else
            *endptr = (char *)p;
    }

    return (ret);
}
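malloc_strtoumax() above mirrors strtoumax(3): base 0 auto-detects the 0x and leading-0 prefixes handled in the prefix switch, overflow yields UINTMAX_MAX with errno set to ERANGE, and endptr rewinds to nptr when no digits are consumed. A usage sketch (the prototype is repeated here for self-containment; jemalloc code gets it from the internal header):

#include <inttypes.h>
#include <stdio.h>

uintmax_t malloc_strtoumax(const char *nptr, char **endptr, int base);

static void
demo(const char *s)
{
    char *end;
    uintmax_t v = malloc_strtoumax(s, &end, 0);

    printf("\"%s\" -> %ju (unparsed: \"%s\")\n", s, v, end);
}

/*
 * demo("  0x1f");  -> 31, prefix consumed as hex
 * demo("0755");    -> 493, prefix consumed as octal
 * demo(" -x");     -> 0, endptr rewinds to the original string
 */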
static char *
u2s(uintmax_t x, unsigned base, bool uppercase, char *s, size_t *slen_p)
{
    unsigned i;

    i = U2S_BUFSIZE - 1;
    s[i] = '\0';
    switch (base) {
    case 10:
        do {
            i--;
            s[i] = "0123456789"[x % (uint64_t)10];
            x /= (uint64_t)10;
        } while (x > 0);
        break;
    case 16: {
        const char *digits = (uppercase)
            ? "0123456789ABCDEF"
            : "0123456789abcdef";

        do {
            i--;
            s[i] = digits[x & 0xf];
            x >>= 4;
        } while (x > 0);
        break;
    } default: {
        const char *digits = (uppercase)
            ? "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            : "0123456789abcdefghijklmnopqrstuvwxyz";

        assert(base >= 2 && base <= 36);
        do {
            i--;
            s[i] = digits[x % (uint64_t)base];
            x /= (uint64_t)base;
        } while (x > 0);
    }}

    *slen_p = U2S_BUFSIZE - 1 - i;
    return (&s[i]);
}

static char *
d2s(intmax_t x, char sign, char *s, size_t *slen_p)
{
    bool neg;

    if ((neg = (x < 0)))
        x = -x;
    s = u2s(x, 10, false, s, slen_p);
    if (neg)
        sign = '-';
    switch (sign) {
    case '-':
        if (neg == false)
            break;
        /* Fall through. */
    case ' ':
    case '+':
        s--;
        (*slen_p)++;
        *s = sign;
        break;
    default: not_reached();
    }
    return (s);
}

static char *
o2s(uintmax_t x, bool alt_form, char *s, size_t *slen_p)
{

    s = u2s(x, 8, false, s, slen_p);
    if (alt_form && *s != '0') {
        s--;
        (*slen_p)++;
        *s = '0';
    }
    return (s);
}

static char *
x2s(uintmax_t x, bool alt_form, bool uppercase, char *s, size_t *slen_p)
{

    s = u2s(x, 16, uppercase, s, slen_p);
    if (alt_form) {
        s -= 2;
        (*slen_p) += 2;
        memcpy(s, uppercase ? "0X" : "0x", 2);
    }
    return (s);
}

int
malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap)
{
    int ret;
    size_t i;
    const char *f;
    va_list tap;

#define APPEND_C(c) do { \
    if (i < size) \
        str[i] = (c); \
    i++; \
} while (0)
#define APPEND_S(s, slen) do { \
    if (i < size) { \
        size_t cpylen = (slen <= size - i) ? slen : size - i; \
        memcpy(&str[i], s, cpylen); \
    } \
    i += slen; \
} while (0)
#define APPEND_PADDED_S(s, slen, width, left_justify) do { \
    /* Left padding. */ \
    size_t pad_len = (width == -1) ? 0 : ((slen < (size_t)width) ? \
        (size_t)width - slen : 0); \
    if (left_justify == false && pad_len != 0) { \
        size_t j; \
        for (j = 0; j < pad_len; j++) \
            APPEND_C(' '); \
    } \
    /* Value. */ \
    APPEND_S(s, slen); \
    /* Right padding. */ \
    if (left_justify && pad_len != 0) { \
        size_t j; \
        for (j = 0; j < pad_len; j++) \
            APPEND_C(' '); \
    } \
} while (0)
#define GET_ARG_NUMERIC(val, len) do { \
    switch (len) { \
    case '?': \
        val = va_arg(ap, int); \
        break; \
    case 'l': \
        val = va_arg(ap, long); \
        break; \
    case 'q': \
        val = va_arg(ap, long long); \
        break; \
    case 'j': \
        val = va_arg(ap, intmax_t); \
        break; \
    case 't': \
        val = va_arg(ap, ptrdiff_t); \
        break; \
    case 'z': \
        val = va_arg(ap, ssize_t); \
        break; \
    case 'p': /* Synthetic; used for %p. */ \
        val = va_arg(ap, uintptr_t); \
        break; \
    default: not_reached(); \
    } \
} while (0)

    if (config_debug)
        va_copy(tap, ap);

    i = 0;
    f = format;
    while (true) {
        switch (*f) {
        case '\0': goto label_out;
        case '%': {
            bool alt_form = false;
            bool zero_pad = false;
            bool left_justify = false;
            bool plus_space = false;
            bool plus_plus = false;
            int prec = -1;
            int width = -1;
            char len = '?';

            f++;
            if (*f == '%') {
                /* %% */
                APPEND_C(*f);
                break;
            }
            /* Flags. */
            while (true) {
                switch (*f) {
                case '#':
                    assert(alt_form == false);
                    alt_form = true;
                    break;
                case '0':
                    assert(zero_pad == false);
                    zero_pad = true;
                    break;
                case '-':
                    assert(left_justify == false);
                    left_justify = true;
                    break;
                case ' ':
                    assert(plus_space == false);
                    plus_space = true;
                    break;
                case '+':
                    assert(plus_plus == false);
                    plus_plus = true;
                    break;
                default: goto label_width;
                }
                f++;
            }
            /* Width. */
            label_width:
            switch (*f) {
            case '*':
                width = va_arg(ap, int);
                f++;
                break;
            case '0': case '1': case '2': case '3': case '4':
            case '5': case '6': case '7': case '8': case '9': {
                uintmax_t uwidth;
                errno = 0;
                uwidth = malloc_strtoumax(f, (char **)&f, 10);
                assert(uwidth != UINTMAX_MAX || errno !=
                    ERANGE);
                width = (int)uwidth;
                if (*f == '.') {
                    f++;
                    goto label_precision;
                } else
                    goto label_length;
                break;
            } case '.':
                f++;
                goto label_precision;
            default: goto label_length;
            }
            /* Precision. */
            label_precision:
            switch (*f) {
            case '*':
                prec = va_arg(ap, int);
                f++;
                break;
            case '0': case '1': case '2': case '3': case '4':
            case '5': case '6': case '7': case '8': case '9': {
                uintmax_t uprec;
                errno = 0;
                uprec = malloc_strtoumax(f, (char **)&f, 10);
                assert(uprec != UINTMAX_MAX || errno != ERANGE);
                prec = (int)uprec;
                break;
            }
            default: break;
            }
            /* Length. */
            label_length:
            switch (*f) {
            case 'l':
                f++;
                if (*f == 'l') {
                    len = 'q';
                    f++;
                } else
                    len = 'l';
                break;
            case 'j':
                len = 'j';
                f++;
                break;
            case 't':
                len = 't';
                f++;
                break;
            case 'z':
                len = 'z';
                f++;
                break;
            default: break;
            }
            /* Conversion specifier. */
            switch (*f) {
            char *s;
            size_t slen;
            case 'd': case 'i': {
                intmax_t val JEMALLOC_CC_SILENCE_INIT(0);
                char buf[D2S_BUFSIZE];

                GET_ARG_NUMERIC(val, len);
                s = d2s(val, (plus_plus ? '+' : (plus_space ?
                    ' ' : '-')), buf, &slen);
                APPEND_PADDED_S(s, slen, width, left_justify);
                f++;
                break;
            } case 'o': {
                uintmax_t val JEMALLOC_CC_SILENCE_INIT(0);
                char buf[O2S_BUFSIZE];

                GET_ARG_NUMERIC(val, len);
                s = o2s(val, alt_form, buf, &slen);
                APPEND_PADDED_S(s, slen, width, left_justify);
                f++;
                break;
            } case 'u': {
                uintmax_t val JEMALLOC_CC_SILENCE_INIT(0);
                char buf[U2S_BUFSIZE];

                GET_ARG_NUMERIC(val, len);
                s = u2s(val, 10, false, buf, &slen);
                APPEND_PADDED_S(s, slen, width, left_justify);
                f++;
                break;
            } case 'x': case 'X': {
                uintmax_t val JEMALLOC_CC_SILENCE_INIT(0);
                char buf[X2S_BUFSIZE];

                GET_ARG_NUMERIC(val, len);
                s = x2s(val, alt_form, *f == 'X', buf, &slen);
                APPEND_PADDED_S(s, slen, width, left_justify);
                f++;
                break;
            } case 'c': {
                unsigned char val;
                char buf[2];

                assert(len == '?' || len == 'l');
                assert_not_implemented(len != 'l');
                val = va_arg(ap, int);
                buf[0] = val;
                buf[1] = '\0';
                APPEND_PADDED_S(buf, 1, width, left_justify);
                f++;
                break;
            } case 's':
                assert(len == '?' || len == 'l');
                assert_not_implemented(len != 'l');
                s = va_arg(ap, char *);
                slen = (prec == -1) ? strlen(s) : prec;
                APPEND_PADDED_S(s, slen, width, left_justify);
                f++;
                break;
            case 'p': {
                uintmax_t val;
                char buf[X2S_BUFSIZE];

                GET_ARG_NUMERIC(val, 'p');
                s = x2s(val, true, false, buf, &slen);
                APPEND_PADDED_S(s, slen, width, left_justify);
                f++;
                break;
            }
            default: not_implemented();
            }
            break;
        } default: {
            APPEND_C(*f);
            f++;
            break;
        }}
    }
    label_out:
    if (i < size)
        str[i] = '\0';
    else
        str[size - 1] = '\0';
    ret = i;

#undef APPEND_C
#undef APPEND_S
#undef APPEND_PADDED_S
#undef GET_ARG_NUMERIC
    return (ret);
}

JEMALLOC_ATTR(format(printf, 3, 4))
int
malloc_snprintf(char *str, size_t size, const char *format, ...)
{
    int ret;
    va_list ap;

    va_start(ap, format);
    ret = malloc_vsnprintf(str, size, format, ap);
    va_end(ap);

    return (ret);
}
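Together with u2s()/d2s()/o2s()/x2s(), malloc_vsnprintf() implements a printf(3) subset without allocating: d/i/o/u/x/X/c/s/p conversions, the #/0/-/space/+ flags, literal or * width and precision, and l/ll/j/t/z length modifiers. A usage sketch for the varargs wrapper (prototype repeated here for self-containment; the expected outputs are what the code above produces):

#include <stddef.h>

int malloc_snprintf(char *str, size_t size, const char *format, ...);

void
demo(void)
{
    char buf[64];

    /* "12345" right-justified in 12 columns, as the stats printer does. */
    malloc_snprintf(buf, sizeof(buf), "small: %12zu\n", (size_t)12345);

    /* Yields "0x2a": the %p path routes through x2s() with a 0x prefix. */
    malloc_snprintf(buf, sizeof(buf), "%p", (void *)42);
}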
void
malloc_vcprintf(void (*write_cb)(void *, const char *), void *cbopaque,
    const char *format, va_list ap)
{
    char buf[MALLOC_PRINTF_BUFSIZE];

    if (write_cb == NULL) {
        /*
         * The caller did not provide an alternate write_cb callback
         * function, so use the default one.  malloc_write() is an
         * inline function, so use malloc_message() directly here.
         */
        write_cb = je_malloc_message;
        cbopaque = NULL;
    }

    malloc_vsnprintf(buf, sizeof(buf), format, ap);
    write_cb(cbopaque, buf);
}

/*
 * Print to a callback function in such a way as to (hopefully) avoid memory
 * allocation.
 */
JEMALLOC_ATTR(format(printf, 3, 4))
void
malloc_cprintf(void (*write_cb)(void *, const char *), void *cbopaque,
    const char *format, ...)
{
    va_list ap;

    va_start(ap, format);
    malloc_vcprintf(write_cb, cbopaque, format, ap);
    va_end(ap);
}

/* Print to stderr in such a way as to avoid memory allocation. */
JEMALLOC_ATTR(format(printf, 1, 2))
void
malloc_printf(const char *format, ...)
{
    va_list ap;

    va_start(ap, format);
    malloc_vcprintf(NULL, NULL, format, ap);
    va_end(ap);
}
include/malloc_np.h
@ -33,9 +33,36 @@
#define _MALLOC_NP_H_
#include <sys/cdefs.h>
#include <sys/types.h>
+#include <strings.h>

__BEGIN_DECLS
size_t malloc_usable_size(const void *ptr);

+void malloc_stats_print(void (*write_cb)(void *, const char *),
+    void *cbopaque, const char *opts);
+int mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp,
+    size_t newlen);
+int mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp);
+int mallctlbymib(const size_t *mib, size_t miblen, void *oldp,
+    size_t *oldlenp, void *newp, size_t newlen);
+
+#define ALLOCM_LG_ALIGN(la) (la)
+#define ALLOCM_ALIGN(a) (ffsl(a)-1)
+#define ALLOCM_ZERO ((int)0x40)
+#define ALLOCM_NO_MOVE ((int)0x80)
+
+#define ALLOCM_SUCCESS 0
+#define ALLOCM_ERR_OOM 1
+#define ALLOCM_ERR_NOT_MOVED 2
+
+int allocm(void **ptr, size_t *rsize, size_t size, int flags)
+    __attribute__((nonnull(1)));
+int rallocm(void **ptr, size_t *rsize, size_t size, size_t extra,
+    int flags) __attribute__((nonnull(1)));
+int sallocm(const void *ptr, size_t *rsize, int flags)
+    __attribute__((nonnull(1)));
+int dallocm(void *ptr, int flags) __attribute__((nonnull(1)));
+int nallocm(size_t *rsize, size_t size, int flags);
__END_DECLS

#endif /* _MALLOC_NP_H_ */
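A usage sketch for the experimental *allocm() interface declared above: flags combine an ALLOCM_LG_ALIGN() alignment with ALLOCM_ZERO, rsize reports the usable size actually granted, and ALLOCM_NO_MOVE makes rallocm() fail with ALLOCM_ERR_NOT_MOVED rather than relocate the allocation:

#include <malloc_np.h>
#include <stdio.h>

int
main(void)
{
    void *p;
    size_t rsize;

    /* 4096 zeroed bytes, aligned to 2^12. */
    if (allocm(&p, &rsize, 4096, ALLOCM_LG_ALIGN(12) | ALLOCM_ZERO) !=
        ALLOCM_SUCCESS)
        return (1);
    printf("usable size: %zu\n", rsize);

    /* Try to grow in place to 8192 bytes, plus up to 4096 extra. */
    if (rallocm(&p, &rsize, 8192, 4096, ALLOCM_NO_MOVE) ==
        ALLOCM_ERR_NOT_MOVED)
        printf("could not grow without moving\n");

    dallocm(p, 0);
    return (0);
}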
include/stdlib.h
@ -155,7 +155,7 @@ _Noreturn void _Exit(int);
 * If we're in a mode greater than C99, expose C11 functions.
 */
#if __ISO_C_VISIBLE >= 2011 || __cplusplus >= 201103L
-void * aligned_alloc(size_t, size_t);
+void * aligned_alloc(size_t, size_t) __malloc_like;
int at_quick_exit(void (*)(void));
_Noreturn void
    quick_exit(int);
@ -228,9 +228,8 @@ int unlockpt(int);
#endif /* __XSI_VISIBLE */

#if __BSD_VISIBLE
-extern const char *_malloc_options;
-extern void (*_malloc_message)(const char *, const char *, const char *,
-    const char *);
+extern const char *malloc_conf;
+extern void (*malloc_message)(void *, const char *);

/*
 * The alloca() function can't be implemented in C, and on some
lib/libc/Makefile
@ -79,6 +79,7 @@ NOASM=
.include "${.CURDIR}/resolv/Makefile.inc"
.include "${.CURDIR}/stdio/Makefile.inc"
.include "${.CURDIR}/stdlib/Makefile.inc"
+.include "${.CURDIR}/stdlib/jemalloc/Makefile.inc"
.include "${.CURDIR}/stdtime/Makefile.inc"
.include "${.CURDIR}/string/Makefile.inc"
.include "${.CURDIR}/sys/Makefile.inc"
lib/libc/gen/tls.c
@ -39,6 +39,11 @@

#include "libc_private.h"

+/* Provided by jemalloc to avoid bootstrapping issues. */
+void *a0malloc(size_t size);
+void *a0calloc(size_t num, size_t size);
+void a0free(void *ptr);
+
__weak_reference(__libc_allocate_tls, _rtld_allocate_tls);
__weak_reference(__libc_free_tls, _rtld_free_tls);

@ -120,8 +125,8 @@ __libc_free_tls(void *tcb, size_t tcbsize, size_t tcbalign __unused)

    tls = (Elf_Addr **)((Elf_Addr)tcb + tcbsize - TLS_TCB_SIZE);
    dtv = tls[0];
-   free(dtv);
-   free(tcb);
+   a0free(dtv);
+   a0free(tcb);
}

/*
@ -137,18 +142,18 @@ __libc_allocate_tls(void *oldtcb, size_t tcbsize, size_t tcbalign __unused)
    if (oldtcb != NULL && tcbsize == TLS_TCB_SIZE)
        return (oldtcb);

-   tcb = calloc(1, tls_static_space + tcbsize - TLS_TCB_SIZE);
+   tcb = a0calloc(1, tls_static_space + tcbsize - TLS_TCB_SIZE);
    tls = (Elf_Addr **)(tcb + tcbsize - TLS_TCB_SIZE);

    if (oldtcb != NULL) {
        memcpy(tls, oldtcb, tls_static_space);
-       free(oldtcb);
+       a0free(oldtcb);

        /* Adjust the DTV. */
        dtv = tls[0];
        dtv[2] = (Elf_Addr)tls + TLS_TCB_SIZE;
    } else {
-       dtv = malloc(3 * sizeof(Elf_Addr));
+       dtv = a0malloc(3 * sizeof(Elf_Addr));
        tls[0] = dtv;
        dtv[0] = 1;
        dtv[1] = 1;
@ -189,8 +194,8 @@ __libc_free_tls(void *tcb, size_t tcbsize __unused, size_t tcbalign)
    dtv = ((Elf_Addr**)tcb)[1];
    tlsend = (Elf_Addr) tcb;
    tlsstart = tlsend - size;
-   free((void*) tlsstart);
-   free(dtv);
+   a0free((void*) tlsstart);
+   a0free(dtv);
}

/*
@ -208,8 +213,8 @@ __libc_allocate_tls(void *oldtls, size_t tcbsize, size_t tcbalign)

    if (tcbsize < 2 * sizeof(Elf_Addr))
        tcbsize = 2 * sizeof(Elf_Addr);
-   tls = calloc(1, size + tcbsize);
-   dtv = malloc(3 * sizeof(Elf_Addr));
+   tls = a0calloc(1, size + tcbsize);
+   dtv = a0malloc(3 * sizeof(Elf_Addr));

    segbase = (Elf_Addr)(tls + size);
    ((Elf_Addr*)segbase)[0] = segbase;
lib/libc/stdlib/Makefile.inc
@ -7,7 +7,7 @@
MISRCS+=_Exit.c a64l.c abort.c abs.c atexit.c atof.c atoi.c atol.c atoll.c \
    bsearch.c div.c exit.c getenv.c getopt.c getopt_long.c \
    getsubopt.c hcreate.c heapsort.c imaxabs.c imaxdiv.c \
-   insque.c l64a.c labs.c ldiv.c llabs.c lldiv.c lsearch.c malloc.c \
+   insque.c l64a.c labs.c ldiv.c llabs.c lldiv.c lsearch.c \
    merge.c ptsname.c qsort.c qsort_r.c quick_exit.c radixsort.c rand.c \
    random.c reallocf.c realpath.c remque.c strfmon.c strtoimax.c \
    strtol.c strtoll.c strtoq.c strtoul.c strtonum.c strtoull.c \
@ -18,18 +18,17 @@ SYM_MAPS+= ${.CURDIR}/stdlib/Symbol.map
# machine-dependent stdlib sources
.sinclude "${.CURDIR}/${LIBC_ARCH}/stdlib/Makefile.inc"

-MAN+= a64l.3 abort.3 abs.3 aligned_alloc.3 alloca.3 atexit.3 atof.3 \
+MAN+= a64l.3 abort.3 abs.3 alloca.3 atexit.3 atof.3 \
    atoi.3 atol.3 at_quick_exit.3 bsearch.3 \
    div.3 exit.3 getenv.3 getopt.3 getopt_long.3 getsubopt.3 \
    hcreate.3 imaxabs.3 imaxdiv.3 insque.3 labs.3 ldiv.3 llabs.3 lldiv.3 \
-   lsearch.3 malloc.3 memory.3 ptsname.3 qsort.3 \
+   lsearch.3 memory.3 ptsname.3 qsort.3 \
    quick_exit.3 \
-   radixsort.3 rand.3 random.3 \
+   radixsort.3 rand.3 random.3 reallocf.3 \
    realpath.3 strfmon.3 strtod.3 strtol.3 strtonum.3 strtoul.3 system.3 \
    tsearch.3

MLINKS+=a64l.3 l64a.3 a64l.3 l64a_r.3
-MLINKS+=aligned_alloc.3 posix_memalign.3
MLINKS+=atol.3 atoll.3
MLINKS+=exit.3 _Exit.3
MLINKS+=getenv.3 putenv.3 getenv.3 setenv.3 getenv.3 unsetenv.3
@ -46,11 +45,4 @@ MLINKS+=radixsort.3 sradixsort.3
MLINKS+=strtod.3 strtof.3 strtod.3 strtold.3
MLINKS+=strtol.3 strtoll.3 strtol.3 strtoq.3 strtol.3 strtoimax.3
MLINKS+=strtoul.3 strtoull.3 strtoul.3 strtouq.3 strtoul.3 strtoumax.3
-MLINKS+=malloc.3 calloc.3 malloc.3 free.3 malloc.3 malloc.conf.5 \
-   malloc.3 realloc.3 malloc.3 reallocf.3 malloc.3 malloc_usable_size.3
MLINKS+=tsearch.3 tdelete.3 tsearch.3 tfind.3 tsearch.3 twalk.3
-
-.if defined(MALLOC_PRODUCTION)
-CFLAGS+= -DMALLOC_PRODUCTION
-.endif
lib/libc/stdlib/Symbol.map
@ -47,14 +47,6 @@ FBSD_1.0 {
    lldiv;
    lsearch;
    lfind;
-   _malloc_options;
-   _malloc_message;
-   malloc;
-   posix_memalign;
-   calloc;
-   realloc;
-   free;
-   malloc_usable_size;
    mergesort;
    putenv;
    qsort_r;
@ -93,7 +85,6 @@ FBSD_1.0 {
};

FBSD_1.3 {
-   aligned_alloc;
    at_quick_exit;
    atof_l;
    atoi_l;
@ -114,9 +105,6 @@ FBSD_1.3 {
};

FBSDprivate_1.0 {
-   _malloc_thread_cleanup;
-   _malloc_prefork;
-   _malloc_postfork;
    __system;
    _system;
};
lib/libc/stdlib/aligned_alloc.3
@ -1,126 +0,0 @@
.\" Copyright (C) 2006 Jason Evans <jasone@FreeBSD.org>.
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice(s), this list of conditions and the following disclaimer as
.\"    the first lines of this file unmodified other than the possible
.\"    addition of one or more copyright notices.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice(s), this list of conditions and the following disclaimer in
.\"    the documentation and/or other materials provided with the
.\"    distribution.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY
.\" EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
.\" PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) BE
.\" LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
.\" BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
.\" WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
.\" OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
.\" EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.\"
.\" $FreeBSD$
.\"
.Dd January 7, 2011
.Dt ALIGNED_ALLOC 3
.Os
.Sh NAME
.Nm aligned_alloc ,
.Nm posix_memalign
.Nd aligned memory allocation
.Sh LIBRARY
.Lb libc
.Sh SYNOPSIS
.In stdlib.h
.Ft void *
.Fn aligned_alloc "size_t alignment" "size_t size"
.Ft int
.Fn posix_memalign "void **ptr" "size_t alignment" "size_t size"
.Sh DESCRIPTION
The
.Fn aligned_alloc
and
.Fn posix_memalign
functions allocate
.Fa size
bytes of memory such that the allocation's base address is an even multiple of
.Fa alignment .
The
.Fn aligned_alloc
function returns the allocation, while the
.Fn posix_memalign
function stores the allocation in the value pointed to by
.Fa ptr .
.Pp
The requested
.Fa alignment
must be a power of 2 at least as large as
.Fn sizeof "void *" .
.Pp
Memory that is allocated via
.Fn aligned_alloc
and
.Fn posix_memalign
can be used as an argument in subsequent calls to
.Xr realloc 3 ,
.Xr reallocf 3 ,
and
.Xr free 3 .
.Sh RETURN VALUES
The
.Fn aligned_alloc
function returns a pointer to the allocation if successful; otherwise a
NULL pointer is returned and
.Va errno
is set to an error value.
.Pp
The
.Fn posix_memalign
function returns the value 0 if successful; otherwise it returns an error value.
.Sh ERRORS
The
.Fn aligned_alloc
and
.Fn posix_memalign
functions will fail if:
.Bl -tag -width Er
.It Bq Er EINVAL
The
.Fa alignment
parameter is not a power of 2 at least as large as
.Fn sizeof "void *" .
.It Bq Er ENOMEM
Memory allocation error.
.El
.Sh SEE ALSO
.Xr free 3 ,
.Xr malloc 3 ,
.Xr realloc 3 ,
.Xr reallocf 3 ,
.Xr valloc 3
.Sh STANDARDS
The
.Fn aligned_alloc
function conforms to
.St -isoC-2011 .
.Pp
The
.Fn posix_memalign
function conforms to
.St -p1003.1-2001 .
.Sh HISTORY
The
.Fn posix_memalign
function first appeared in
.Fx 7.0 .
.Pp
The
.Fn aligned_alloc
function first appeared in
.Fx 10.0 .
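A usage sketch for the two interfaces documented above (the standalone page is removed here because the jemalloc manual takes over its content, but the functions themselves remain); both calls request 1 KiB aligned to a 64-byte boundary:

#include <stdlib.h>

int
main(void)
{
    void *p, *q;

    /* POSIX interface: result via out-parameter, 0 on success. */
    if (posix_memalign(&p, 64, 1024) != 0)
        return (1);

    /* C11 interface: result via return value, NULL and errno on failure. */
    q = aligned_alloc(64, 1024);
    if (q == NULL) {
        free(p);
        return (1);
    }
    free(p);
    free(q);
    return (0);
}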
46	lib/libc/stdlib/jemalloc/Makefile.inc	Normal file
@ -0,0 +1,46 @@
# $FreeBSD$

.PATH: ${.CURDIR}/stdlib/jemalloc

JEMALLOCSRCS:= jemalloc.c arena.c atomic.c base.c bitmap.c chunk.c \
	chunk_dss.c chunk_mmap.c ckh.c ctl.c extent.c hash.c huge.c mb.c \
	mutex.c prof.c quarantine.c rtree.c stats.c tcache.c util.c tsd.c

SYM_MAPS+=${.CURDIR}/stdlib/jemalloc/Symbol.map

CFLAGS+=-I${.CURDIR}/../../contrib/jemalloc/include

.for src in ${JEMALLOCSRCS}
MISRCS+=jemalloc_${src}
CLEANFILES+=jemalloc_${src}
jemalloc_${src}:
	ln -sf ${.CURDIR}/../../contrib/jemalloc/src/${src} ${.TARGET}
.endfor

MAN+=jemalloc.3
CLEANFILES+=jemalloc.3
jemalloc.3:
	ln -sf ${.CURDIR}/../../contrib/jemalloc/doc/jemalloc.3 ${.TARGET}

MLINKS+= \
	jemalloc.3 malloc.3 \
	jemalloc.3 calloc.3 \
	jemalloc.3 posix_memalign.3 \
	jemalloc.3 aligned_alloc.3 \
	jemalloc.3 realloc.3 \
	jemalloc.3 free.3 \
	jemalloc.3 malloc_usable_size.3 \
	jemalloc.3 malloc_stats_print.3 \
	jemalloc.3 mallctl.3 \
	jemalloc.3 mallctlnametomib.3 \
	jemalloc.3 mallctlbymib.3 \
	jemalloc.3 allocm.3 \
	jemalloc.3 rallocm.3 \
	jemalloc.3 sallocm.3 \
	jemalloc.3 dallocm.3 \
	jemalloc.3 nallocm.3 \
	jemalloc.3 malloc.conf.5

.if defined(MALLOC_PRODUCTION)
CFLAGS+= -DMALLOC_PRODUCTION
.endif
35	lib/libc/stdlib/jemalloc/Symbol.map	Normal file
@ -0,0 +1,35 @@
/*
 * $FreeBSD$
 */

FBSD_1.0 {
	_malloc_options;
	_malloc_message;
	malloc;
	posix_memalign;
	calloc;
	realloc;
	free;
	malloc_usable_size;
};

FBSD_1.3 {
	malloc_conf;
	malloc_message;
	aligned_alloc;
	malloc_stats_print;
	mallctl;
	mallctlnametomib;
	mallctlbymib;
	allocm;
	rallocm;
	sallocm;
	dallocm;
	nallocm;
};

FBSDprivate_1.0 {
	_malloc_thread_cleanup;
	_malloc_prefork;
	_malloc_postfork;
};
@ -1,591 +0,0 @@
.\" Copyright (c) 1980, 1991, 1993
.\"	The Regents of the University of California.  All rights reserved.
.\"
.\" This code is derived from software contributed to Berkeley by
.\" the American National Standards Committee X3, on Information
.\" Processing Systems.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\"    may be used to endorse or promote products derived from this software
.\"    without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\"     @(#)malloc.3	8.1 (Berkeley) 6/4/93
.\" $FreeBSD$
.\"
.Dd January 31, 2010
.Dt MALLOC 3
.Os
.Sh NAME
.Nm malloc , calloc , realloc , free , reallocf , malloc_usable_size
.Nd general purpose memory allocation functions
.Sh LIBRARY
.Lb libc
.Sh SYNOPSIS
.In stdlib.h
.Ft void *
.Fn malloc "size_t size"
.Ft void *
.Fn calloc "size_t number" "size_t size"
.Ft void *
.Fn realloc "void *ptr" "size_t size"
.Ft void *
.Fn reallocf "void *ptr" "size_t size"
.Ft void
.Fn free "void *ptr"
.Ft const char *
.Va _malloc_options ;
.Ft void
.Fn \*(lp*_malloc_message\*(rp "const char *p1" "const char *p2" "const char *p3" "const char *p4"
.In malloc_np.h
.Ft size_t
.Fn malloc_usable_size "const void *ptr"
.Sh DESCRIPTION
The
.Fn malloc
function allocates
.Fa size
bytes of uninitialized memory.
The allocated space is suitably aligned (after possible pointer coercion)
for storage of any type of object.
.Pp
The
.Fn calloc
function allocates space for
.Fa number
objects,
each
.Fa size
bytes in length.
The result is identical to calling
.Fn malloc
with an argument of
.Dq "number * size" ,
with the exception that the allocated memory is explicitly initialized
to zero bytes.
.Pp
The
.Fn realloc
function changes the size of the previously allocated memory referenced by
.Fa ptr
to
.Fa size
bytes.
The contents of the memory are unchanged up to the lesser of the new and
old sizes.
If the new size is larger,
the contents of the newly allocated portion of the memory are undefined.
Upon success, the memory referenced by
.Fa ptr
is freed and a pointer to the newly allocated memory is returned.
Note that
.Fn realloc
and
.Fn reallocf
may move the memory allocation, resulting in a different return value than
.Fa ptr .
If
.Fa ptr
is
.Dv NULL ,
the
.Fn realloc
function behaves identically to
.Fn malloc
for the specified size.
.Pp
The
.Fn reallocf
function is identical to the
.Fn realloc
function, except that it
will free the passed pointer when the requested memory cannot be allocated.
This is a
.Fx
specific API designed to ease the problems with traditional coding styles
for
.Fn realloc
causing memory leaks in libraries.
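.Pp
For example, the following sketch (the helper names grow_realloc and
grow_reallocf are illustrative, not part of any API) contrasts the
leak-safe
.Fn realloc
pattern with the equivalent
.Fn reallocf
call:
.Bd -literal -offset indent
#include <stdlib.h>

/* With plain realloc(), the old pointer must be preserved on failure. */
static void *
grow_realloc(void *buf, size_t newsize)
{
	void *tmp = realloc(buf, newsize);

	if (tmp == NULL) {
		free(buf);	/* avoid leaking the original buffer */
		return (NULL);
	}
	return (tmp);
}

/* reallocf() collapses the same pattern into one call. */
static void *
grow_reallocf(void *buf, size_t newsize)
{
	return (reallocf(buf, newsize));
}
.Ed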
.Pp
The
.Fn free
function causes the allocated memory referenced by
.Fa ptr
to be made available for future allocations.
If
.Fa ptr
is
.Dv NULL ,
no action occurs.
.Pp
The
.Fn malloc_usable_size
function returns the usable size of the allocation pointed to by
.Fa ptr .
The return value may be larger than the size that was requested during
allocation.
The
.Fn malloc_usable_size
function is not a mechanism for in-place
.Fn realloc ;
rather it is provided solely as a tool for introspection purposes.
Any discrepancy between the requested allocation size and the size reported by
.Fn malloc_usable_size
should not be depended on, since such behavior is entirely
implementation-dependent.
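.Pp
A minimal introspection sketch (the request size of 100 bytes is
arbitrary); the printed value may exceed the request, reflecting the
size class actually used:
.Bd -literal -offset indent
#include <stdio.h>
#include <stdlib.h>
#include <malloc_np.h>

int
main(void)
{
	void *p = malloc(100);

	if (p != NULL) {
		/* Typically prints a value >= 100. */
		printf("usable: %zu\en", malloc_usable_size(p));
		free(p);
	}
	return (0);
}
.Ed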
.Sh TUNING
The first time one of these memory allocation routines is called, various
flags are set or reset, which affects the workings of this allocator
implementation.
.Pp
The
.Dq name
of the file referenced by the symbolic link named
.Pa /etc/malloc.conf ,
the value of the environment variable
.Ev MALLOC_OPTIONS ,
and the string pointed to by the global variable
.Va _malloc_options
will be interpreted, in that order, from left to right as flags.
.Pp
Each flag is a single letter, optionally prefixed by a non-negative base 10
integer repetition count.
For example,
.Dq 3N
is equivalent to
.Dq NNN .
Some flags control parameter magnitudes, where uppercase increases the
magnitude, and lowercase decreases the magnitude.
Other flags control boolean parameters, where uppercase indicates that a
behavior is set, or on, and lowercase means that a behavior is not set, or off.
.Bl -tag -width indent
.It A
All warnings (except for the warning about unknown
flags being set) become fatal.
The process will call
.Xr abort 3
in these cases.
.It C
Double/halve the size of the maximum size class that is a multiple of the
cacheline size (64).
Above this size, subpage spacing (256 bytes) is used for size classes.
The default value is 512 bytes.
.It D
Use
.Xr sbrk 2
to acquire memory in the data storage segment (DSS).
This option is enabled by default.
See the
.Dq M
option for related information and interactions.
.It E
Double/halve the size of the maximum medium size class.
The valid range is from one page to one half chunk.
The default value is 32 KiB.
.It F
Halve/double the per-arena minimum ratio of active to dirty pages.
Some dirty unused pages may be allowed to accumulate, within the limit set by
the ratio, before informing the kernel about at least half of those pages via
.Xr madvise 2 .
This provides the kernel with sufficient information to recycle dirty pages if
physical memory becomes scarce and the pages remain unused.
The default minimum ratio is 32:1;
.Ev MALLOC_OPTIONS=6F
will disable dirty page purging.
.It G
Double/halve the approximate interval (counted in terms of
thread-specific cache allocation/deallocation events) between full
thread-specific cache garbage collection sweeps.
Garbage collection is actually performed incrementally, one size
class at a time, in order to avoid large collection pauses.
The default sweep interval is 8192;
.Ev MALLOC_OPTIONS=14g
will disable garbage collection.
.It H
Double/halve the number of thread-specific cache slots per size
class.
When there are multiple threads, each thread uses a
thread-specific cache for small and medium objects.
Thread-specific caching allows many allocations to be satisfied
without performing any thread synchronization, at the cost of
increased memory use.
See the
.Dq G
option for related tuning information.
The default number of cache slots is 128;
.Ev MALLOC_OPTIONS=7h
will disable thread-specific caching.
Note that one cache slot per size class is not a valid
configuration due to implementation details.
.It J
Each byte of new memory allocated by
.Fn malloc ,
.Fn realloc ,
or
.Fn reallocf
will be initialized to 0xa5.
All memory returned by
.Fn free ,
.Fn realloc ,
or
.Fn reallocf
will be initialized to 0x5a.
This is intended for debugging and will impact performance negatively.
.It K
Double/halve the virtual memory chunk size.
The default chunk size is 4 MiB.
.It M
Use
.Xr mmap 2
to acquire anonymously mapped memory.
This option is enabled by default.
If both the
.Dq D
and
.Dq M
options are enabled, the allocator prefers anonymous mappings over the DSS,
but allocation only fails if memory cannot be acquired via either method.
If neither option is enabled, then the
.Dq M
option is implicitly enabled in order to ensure that there is a method for
acquiring memory.
.It N
Double/halve the number of arenas.
The default number of arenas is two times the number of CPUs, or one if there
is a single CPU.
.It P
Various statistics are printed at program exit via an
.Xr atexit 3
function.
This has the potential to cause deadlock for a multi-threaded process that exits
while one or more threads are executing in the memory allocation functions.
Therefore, this option should only be used with care; it is primarily intended
as a performance tuning aid during application development.
.It Q
Double/halve the size of the maximum size class that is a multiple of the
quantum (8 or 16 bytes, depending on architecture).
Above this size, cacheline spacing is used for size classes.
The default value is 128 bytes.
.It U
Generate
.Dq utrace
entries for
.Xr ktrace 1 ,
for all operations.
Consult the source for details on this option.
.It V
Attempting to allocate zero bytes will return a
.Dv NULL
pointer instead of a valid pointer.
(The default behavior is to make a minimal allocation and return a
pointer to it.)
This option is provided for System V compatibility.
This option is incompatible with the
.Dq X
option.
.It X
Rather than return failure for any allocation function, display a diagnostic
message on
.Dv STDERR_FILENO
and cause the program to drop core (using
.Xr abort 3 ) .
This option should be set at compile time by including the following in the
source code:
.Bd -literal -offset indent
_malloc_options = "X";
.Ed
.It Z
Each byte of new memory allocated by
.Fn malloc ,
.Fn realloc ,
or
.Fn reallocf
will be initialized to 0.
Note that this initialization only happens once for each byte, so
.Fn realloc
and
.Fn reallocf
calls do not zero memory that was previously allocated.
This is intended for debugging and will impact performance negatively.
.El
.Pp
The
.Dq J
and
.Dq Z
options are intended for testing and debugging.
An application which changes its behavior when these options are used
is flawed.
.Sh IMPLEMENTATION NOTES
Traditionally, allocators have used
.Xr sbrk 2
to obtain memory, which is suboptimal for several reasons, including race
conditions, increased fragmentation, and artificial limitations on maximum
usable memory.
This allocator uses both
.Xr sbrk 2
and
.Xr mmap 2
by default, but it can be configured at run time to use only one or the other.
If resource limits are not a primary concern, the preferred configuration is
.Ev MALLOC_OPTIONS=dM
or
.Ev MALLOC_OPTIONS=DM .
When so configured, the
.Ar datasize
resource limit has little practical effect for typical applications; use
.Ev MALLOC_OPTIONS=Dm
if that is a concern.
Regardless of allocator configuration, the
.Ar vmemoryuse
resource limit can be used to bound the total virtual memory used by a
process, as described in
.Xr limits 1 .
.Pp
This allocator uses multiple arenas in order to reduce lock contention for
threaded programs on multi-processor systems.
This works well with regard to threading scalability, but incurs some costs.
There is a small fixed per-arena overhead, and additionally, arenas manage
memory completely independently of each other, which means a small fixed
increase in overall memory fragmentation.
These overheads are not generally an issue, given the number of arenas normally
used.
Note that using substantially more arenas than the default is not likely to
improve performance, mainly due to reduced cache performance.
However, it may make sense to reduce the number of arenas if an application
does not make much use of the allocation functions.
.Pp
In addition to multiple arenas, this allocator supports thread-specific caching
for small and medium objects, in order to make it possible to completely avoid
synchronization for most small and medium allocation requests.
Such caching allows very fast allocation in the common case, but it increases
memory usage and fragmentation, since a bounded number of objects can remain
allocated in each thread cache.
.Pp
Memory is conceptually broken into equal-sized chunks, where the chunk size is
a power of two that is greater than the page size.
Chunks are always aligned to multiples of the chunk size.
This alignment makes it possible to find metadata for user objects very
quickly.
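.Pp
A minimal sketch of why the alignment makes that lookup cheap, assuming
the illustrative 4 MiB default chunk size (see the
.Dq K
option); the chunk that owns a pointer is recovered with one mask:
.Bd -literal -offset indent
#include <stdint.h>

#define CHUNKSIZE ((uintptr_t)4 << 20)	/* illustrative default */

/* Recover the base of the chunk containing ptr. */
static inline void *
chunk_base(const void *ptr)
{
	return ((void *)((uintptr_t)ptr & ~(CHUNKSIZE - 1)));
}
.Ed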
.Pp
User objects are broken into four categories according to size: small, medium,
large, and huge.
Small objects are smaller than one page.
Medium objects range from one page to an upper limit determined at run time (see
the
.Dq E
option).
Large objects are smaller than the chunk size.
Huge objects are a multiple of the chunk size.
Small, medium, and large objects are managed by arenas; huge objects are managed
separately in a single data structure that is shared by all threads.
Huge objects are used by applications infrequently enough that this single
data structure is not a scalability issue.
.Pp
Each chunk that is managed by an arena tracks its contents as runs of
contiguous pages (unused, backing a set of small or medium objects, or backing
one large object).
The combination of chunk alignment and chunk page maps makes it possible to
determine all metadata regarding small and large allocations in constant time.
.Pp
Small and medium objects are managed in groups by page runs.
Each run maintains a bitmap that tracks which regions are in use.
Allocation requests that are no more than half the quantum (8 or 16, depending
on architecture) are rounded up to the nearest power of two.
Allocation requests that are more than half the quantum, but no more than the
minimum cacheline-multiple size class (see the
.Dq Q
option) are rounded up to the nearest multiple of the quantum.
Allocation requests that are more than the minimum cacheline-multiple size
class, but no more than the minimum subpage-multiple size class (see the
.Dq C
option) are rounded up to the nearest multiple of the cacheline size (64).
Allocation requests that are more than the minimum subpage-multiple size class,
but no more than the maximum subpage-multiple size class are rounded up to the
nearest multiple of the subpage size (256).
Allocation requests that are more than the maximum subpage-multiple size class,
but no more than the maximum medium size class (see the
.Dq E
option) are rounded up to the nearest medium size class; spacing is an
automatically determined power of two and ranges from the subpage size to the
page size.
Allocation requests that are more than the maximum medium size class, but small
enough to fit in an arena-managed chunk (see the
.Dq K
option), are rounded up to the nearest run size.
Allocation requests that are too large to fit in an arena-managed chunk are
rounded up to the nearest multiple of the chunk size.
.Pp
The smaller tiers of these rounding rules can be sketched as follows,
assuming a 16 byte quantum and the default 128 byte (Q), 512 byte (C),
and 256 byte subpage boundaries; medium and larger classes are omitted:
.Bd -literal -offset indent
#include <stddef.h>

static size_t
round_size(size_t req)
{
	size_t pow2;

	if (req <= 8) {			/* <= half quantum: power of two */
		for (pow2 = 1; pow2 < req; pow2 <<= 1)
			;
		return (pow2);		/* e.g. 3 -> 4 */
	}
	if (req <= 128)			/* quantum-spaced classes */
		return ((req + 15) & ~(size_t)15);	/* e.g. 33 -> 48 */
	if (req <= 512)			/* cacheline-spaced classes */
		return ((req + 63) & ~(size_t)63);	/* e.g. 129 -> 192 */
	return ((req + 255) & ~(size_t)255);	/* subpage-spaced classes */
}
.Ed
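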
.Pp
Allocations are packed tightly together, which can be an issue for
multi-threaded applications.
If you need to ensure that allocations do not suffer from cacheline sharing,
round your allocation requests up to the nearest multiple of the cacheline
size, as in the sketch below.
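.Pp
A minimal padding sketch, assuming the 64 byte cacheline size used
elsewhere in this page (the helper name is illustrative):
.Bd -literal -offset indent
#include <stdlib.h>

#define CACHELINE 64	/* assumed cacheline size */

/* Pad a request so the object occupies whole cachelines. */
static void *
malloc_padded(size_t nbytes)
{
	return (malloc((nbytes + CACHELINE - 1) &
	    ~(size_t)(CACHELINE - 1)));
}
.Ed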
.Sh DEBUGGING MALLOC PROBLEMS
The first thing to do is to set the
.Dq A
option.
This option forces a coredump (if possible) at the first sign of trouble,
rather than the normal policy of trying to continue if at all possible.
.Pp
It is probably also a good idea to recompile the program with suitable
options and symbols for debugger support.
.Pp
If the program starts to give unusual results, dumps core, or generally behaves
differently without emitting any of the messages mentioned in the next
section, it is likely because it depends on the storage being filled with
zero bytes.
Try running it with the
.Dq Z
option set;
if that improves the situation, this diagnosis has been confirmed.
If the program still misbehaves,
the likely problem is accessing memory outside the allocated area.
.Pp
Alternatively, if the symptoms are not easy to reproduce, setting the
.Dq J
option may help provoke the problem.
.Pp
In truly difficult cases, the
.Dq U
option, if supported by the kernel, can provide a detailed trace of
all calls made to these functions.
.Pp
Unfortunately this implementation does not provide much detail about
the problems it detects; the performance impact for storing such information
would be prohibitive.
There are a number of allocator implementations available on the Internet
which focus on detecting and pinpointing problems by trading performance for
extra sanity checks and detailed diagnostics.
.Sh DIAGNOSTIC MESSAGES
If any of the memory allocation/deallocation functions detect an error or
warning condition, a message will be printed to file descriptor
.Dv STDERR_FILENO .
Errors will result in the process dumping core.
If the
.Dq A
option is set, all warnings are treated as errors.
.Pp
The
.Va _malloc_message
variable allows the programmer to override the function which emits the text
strings forming the errors and warnings if for some reason the
.Dv STDERR_FILENO
file descriptor is not suitable for this.
Please note that doing anything which tries to allocate memory in this function
is likely to result in a crash or deadlock.
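.Pp
A minimal sketch of such an override, assuming a hypothetical descriptor
logfd opened at startup; note that the handler deliberately avoids
allocating:
.Bd -literal -offset indent
#include <string.h>
#include <unistd.h>

extern int logfd;	/* hypothetical log descriptor */

static void
log_malloc_message(const char *p1, const char *p2, const char *p3,
    const char *p4)
{
	/* Use write(2) only; calling malloc() here could deadlock. */
	(void)write(logfd, p1, strlen(p1));
	(void)write(logfd, p2, strlen(p2));
	(void)write(logfd, p3, strlen(p3));
	(void)write(logfd, p4, strlen(p4));
}

/* Early in main(): _malloc_message = log_malloc_message; */
.Ed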
.Pp
All messages are prefixed by
.Dq Ao Ar progname Ac Ns Li : (malloc) .
.Sh RETURN VALUES
The
.Fn malloc
and
.Fn calloc
functions return a pointer to the allocated memory if successful; otherwise
a
.Dv NULL
pointer is returned and
.Va errno
is set to
.Er ENOMEM .
.Pp
The
.Fn realloc
and
.Fn reallocf
functions return a pointer, possibly identical to
.Fa ptr ,
to the allocated memory
if successful; otherwise a
.Dv NULL
pointer is returned, and
.Va errno
is set to
.Er ENOMEM
if the error was the result of an allocation failure.
The
.Fn realloc
function always leaves the original buffer intact
when an error occurs, whereas
.Fn reallocf
deallocates it in this case.
.Pp
The
.Fn free
function returns no value.
.Pp
The
.Fn malloc_usable_size
function returns the usable size of the allocation pointed to by
.Fa ptr .
.Sh ENVIRONMENT
The following environment variables affect the execution of the allocation
functions:
.Bl -tag -width ".Ev MALLOC_OPTIONS"
.It Ev MALLOC_OPTIONS
If the environment variable
.Ev MALLOC_OPTIONS
is set, the characters it contains will be interpreted as flags to the
allocation functions.
.El
.Sh EXAMPLES
To dump core whenever a problem occurs:
.Bd -literal -offset indent
ln -s 'A' /etc/malloc.conf
.Ed
.Pp
To specify in the source that a program does no return value checking
on calls to these functions:
.Bd -literal -offset indent
_malloc_options = "X";
.Ed
.Sh SEE ALSO
.Xr limits 1 ,
.Xr madvise 2 ,
.Xr mmap 2 ,
.Xr sbrk 2 ,
.Xr alloca 3 ,
.Xr atexit 3 ,
.Xr getpagesize 3 ,
.Xr getpagesizes 3 ,
.Xr memory 3 ,
.Xr posix_memalign 3
.Sh STANDARDS
The
.Fn malloc ,
.Fn calloc ,
.Fn realloc
and
.Fn free
functions conform to
.St -isoC .
.Sh HISTORY
The
.Fn reallocf
function first appeared in
.Fx 3.0 .
.Pp
The
.Fn malloc_usable_size
function first appeared in
.Fx 7.0 .
File diff suppressed because it is too large
82
lib/libc/stdlib/reallocf.3
Normal file
@ -0,0 +1,82 @@
.\" Copyright (c) 1980, 1991, 1993
.\"	The Regents of the University of California.  All rights reserved.
.\"
.\" This code is derived from software contributed to Berkeley by
.\" the American National Standards Committee X3, on Information
.\" Processing Systems.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in the
.\"    documentation and/or other materials provided with the distribution.
.\" 3. Neither the name of the University nor the names of its contributors
.\"    may be used to endorse or promote products derived from this software
.\"    without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\"     @(#)malloc.3	8.1 (Berkeley) 6/4/93
.\" $FreeBSD$
.\"
.Dd January 31, 2010
.Dt MALLOC 3
.Os
.Sh NAME
.Nm reallocf
.Nd memory reallocation function
.Sh LIBRARY
.Lb libc
.Sh SYNOPSIS
.In stdlib.h
.Ft void *
.Fn reallocf "void *ptr" "size_t size"
.Sh DESCRIPTION
The
.Fn reallocf
function is identical to the
.Fn realloc
function, except that it
will free the passed pointer when the requested memory cannot be allocated.
This is a
.Fx
specific API designed to ease the problems with traditional coding styles
for
.Fn realloc
causing memory leaks in libraries.
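.Pp
A minimal usage sketch (the pointer p and size n are hypothetical);
unlike the equivalent
.Fn realloc
pattern, no temporary pointer is needed:
.Bd -literal -offset indent
if ((p = reallocf(p, n)) == NULL)
	return (-1);	/* the old p has already been freed */
.Ed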
.Sh RETURN VALUES
The
.Fn reallocf
function returns a pointer, possibly identical to
.Fa ptr ,
to the allocated memory
if successful; otherwise a
.Dv NULL
pointer is returned, and
.Va errno
is set to
.Er ENOMEM
if the error was the result of an allocation failure.
The
.Fn reallocf
function frees the original buffer when an error occurs.
.Sh SEE ALSO
.Xr realloc 3
.Sh HISTORY
The
.Fn reallocf
function first appeared in
.Fx 3.0 .