Commit Graph

195 Commits

Author SHA1 Message Date
Konstantin Belousov
2793b01844 Use aux vector to get values for SSP canary, pagesize, pagesizes array,
number of host CPUs and osreldate.

This eliminates the last sysctl(2) calls from the dynamically linked image
startup.

No objections from:	kan
Tested by:	marius (sparc64)
MFC after:	1 month
2010-08-17 09:13:26 +00:00
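
    For illustration only, a minimal sketch of reading such startup values from
    the ELF auxiliary vector instead of sysctl(2).  It uses the public
    elf_aux_info(3) interface for brevity; the commit itself plumbs the aux
    vector values through libc's startup internals, so treat this purely as a
    demonstration of the data source.

    /*
     * Sketch: fetch the page size and CPU count from the ELF aux vector
     * rather than via sysctl(2).  elf_aux_info(3) is used here only for
     * illustration.
     */
    #include <sys/types.h>
    #include <sys/elf_common.h>
    #include <sys/auxv.h>

    #include <stdio.h>

    int
    main(void)
    {
        int pagesize, ncpus;

        if (elf_aux_info(AT_PAGESZ, &pagesize, sizeof(pagesize)) != 0)
            pagesize = -1;      /* aux entry missing; caller would fall back */
        if (elf_aux_info(AT_NCPUS, &ncpus, sizeof(ncpus)) != 0)
            ncpus = -1;

        printf("pagesize=%d ncpus=%d\n", pagesize, ncpus);
        return (0);
    }
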
Nathan Whitehorn
840b91cc52 Provide 64-bit PowerPC support in libc.
Obtained from:	projects/ppc64
2010-07-10 14:45:03 +00:00
Jason Evans
15f8d49756 Rewrite red-black trees to do lazy balance fixup. This improves
insert/remove speed by ~30%.
2010-02-28 22:57:13 +00:00
Marcel Moolenaar
a6c79196a7 Define TLS_MODEL for PowerPC as well. Since PowerPC uses variant I,
like ia64, leave it empty (default model).
2010-02-16 20:46:22 +00:00
Marcel Moolenaar
066d438476 Unbreak ia64: tls_model("initial-exec") is invalid, because it assumes
the static TLS model, which is fundamentally different from the dynamic
TLS model. The consequence was data corruption. Limit the attribute to
i386 and amd64.
2010-02-16 06:47:00 +00:00
Jason Evans
d227440524 Fix bugs:

  * Fix a race in chunk_dealloc_dss().

  * Check for allocation failure before zeroing memory in base_calloc().

Merge enhancements from a divergent version of jemalloc:

  * Convert thread-specific caching from magazines to an algorithm that is
    more tunable, and implement incremental GC.

  * Add support for medium size classes, [4KiB..32KiB], 2KiB apart by
    default.

  * Add dirty page tracking for pages within active small/medium object
    runs.  This allows malloc to track precisely which pages are in active
    use, which makes dirty page purging more effective.

  * Base maximum dirty page count on proportion of active memory.

  * Use optional zeroing in arena_chunk_alloc() to avoid needless zeroing
    of chunks.  This is useful in the context of DSS allocation, since a
    long-lived application may commonly recycle chunks.

  * Increase the default chunk size from 1MiB to 4MiB.

Remove feature:

  * Remove the dynamic rebalancing code, since thread caching reduces its
    utility.
2010-01-31 23:16:10 +00:00
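
    As a rough illustration of the medium size classes mentioned above, a
    request can be rounded up to its class with plain integer arithmetic.  The
    constants mirror the stated defaults ([4KiB..32KiB], 2KiB apart); the
    allocator derives its own from tunable parameters, so this is a sketch,
    not its code.

    /*
     * Sketch: round a request up to a 2KiB-spaced medium size class in
     * [4KiB..32KiB].  Constants mirror the defaults described above.
     */
    #include <assert.h>
    #include <stddef.h>

    #define MEDIUM_MIN      (4U * 1024)     /* smallest medium class */
    #define MEDIUM_MAX      (32U * 1024)    /* largest medium class */
    #define MEDIUM_SPACE    (2U * 1024)     /* spacing between classes */

    static size_t
    medium_class(size_t size)
    {
        assert(size > 0 && size <= MEDIUM_MAX);
        if (size < MEDIUM_MIN)
            return (MEDIUM_MIN);
        /* Round up to the next multiple of MEDIUM_SPACE. */
        return ((size + MEDIUM_SPACE - 1) & ~(size_t)(MEDIUM_SPACE - 1));
    }

    For example, medium_class(5000) yields 6144 (6KiB), so a request wastes at
    most the distance to the next 2KiB boundary.
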
Ed Maste
648050dfd0 Add missing return in a rare case where we can't allocate memory in
deallocate.

Submitted by:	Ryan Stone (rysto32 at gmail dot com)
Approved by:	jasone
2010-01-27 16:47:02 +00:00
Jason Evans
5f2b1ed91b Simplify arena_run_reg_dalloc(), and remove a bug that was due to incorrect
initialization of ssize_invs.
2009-12-10 02:51:40 +00:00
Jason Evans
2354bdcf94 Fix the posix_memalign() changes in r196861 to actually return a NULL pointer
as intended.

PR:		standards/138307
2009-12-10 00:16:11 +00:00
Colin Percival
8a7f1847b7 Change the utrace log entry for malloc_init from (0, 0, 0) to (-1, 0, 0)
in order to distinguish it from free(NULL), which is logged as (0, 0, 0).

Reviewed by:	jhb
2009-11-14 09:31:47 +00:00
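
    A hedged sketch of what the distinction looks like at the utrace(2) level.
    The (p, s, r) record layout follows what kdump(1) decodes for malloc
    traces; the struct and function names below are illustrative, not the
    allocator's own.

    /*
     * Sketch: malloc-style utrace(2) records.  After this change,
     * malloc_init logs (-1, 0, 0), so it is no longer confused with
     * free(NULL), which logs (0, 0, 0).
     */
    #include <sys/param.h>
    #include <sys/time.h>
    #include <sys/uio.h>
    #include <sys/ktrace.h>

    #include <stddef.h>
    #include <stdint.h>

    struct malloc_utrace {
        void    *p;     /* input pointer, or -1 for the init record */
        size_t   s;     /* requested size */
        void    *r;     /* result pointer */
    };

    static void
    trace_malloc_init(void)
    {
        struct malloc_utrace ut = { (void *)(intptr_t)-1, 0, NULL };

        /* Only recorded when the process runs under ktrace -t u. */
        (void)utrace(&ut, sizeof(ut));
    }
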
Alan Cox
b8947edcb6 Make malloc(3) superpage aware. Specifically, if getpagesizes(3) returns
a large page size that is greater than malloc(3)'s default chunk size but
less than or equal to 4 MB, then increase the chunk size to match the large
page size.

Most often, using a chunk size that is less than the large page size is not
a problem.  However, consider a long-running application that allocates and
frees significant amounts of memory.  In particular, it frees enough memory
at times that some of that memory is munmap()ed.  Up until the first
munmap(), a 1MB chunk size is just fine; it's not a problem for the virtual
memory system.  Two adjacent 1MB chunks that are aligned on a 2MB boundary
will be promoted automatically to a superpage even though they were
allocated at different times.  The trouble begins with the munmap():
releasing a 1MB chunk will trigger the demotion of the containing superpage,
leaving behind a half-used 2MB reservation.  Now comes the real problem:
when the application needs to allocate more memory and recycles the
previously munmap()ed address range, the implementation of mmap() won't be
able to reuse the reservation.  Basically, the coalescing rules in the
virtual memory system don't allow this new range to combine with its
neighbor.  The effect is that superpage promotion will not recur for this
range of addresses until both 1MB chunks are freed at some point in the
future.

Reviewed by:	jasone
MFC after:	3 weeks
2009-09-26 18:20:40 +00:00
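
    A minimal sketch of the chunk-size selection described above.  The default
    chunk size, the 4MB cap, and the array bound are illustrative constants,
    not the allocator's actual tunables.

    /*
     * Sketch: if getpagesizes(3) reports a superpage size larger than the
     * default chunk size but no larger than 4MB, use it as the chunk size.
     */
    #include <sys/mman.h>

    #include <stddef.h>

    #define DEFAULT_CHUNK   (1UL << 20)     /* 1MB default chunk size */
    #define CHUNK_MAX       (4UL << 20)     /* do not grow chunks past 4MB */
    #define NPAGESIZES      8               /* slots passed to getpagesizes() */

    static size_t
    choose_chunk_size(void)
    {
        size_t pagesizes[NPAGESIZES], chunk;
        int i, n;

        chunk = DEFAULT_CHUNK;
        n = getpagesizes(pagesizes, NPAGESIZES);
        for (i = 0; i < n; i++) {
            if (pagesizes[i] > chunk && pagesizes[i] <= CHUNK_MAX)
                chunk = pagesizes[i];
        }
        return (chunk);
    }
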
Konstantin Belousov
1ecc75dfe3 Handle zero size for posix_memalign. Return NULL or unique address
according to the 'V' option.

PR:	standards/138307
MFC after:	1 week
2009-09-05 13:32:05 +00:00
Jason Evans
d7ba3e423a Fix a lock order reversal bug that could cause deadlock during fork(2).
Reported by:	kib
2008-12-01 10:20:59 +00:00
Jason Evans
17daa728ae Adjust an assertion to handle the case where a lock is contested, but
spinning is avoided due to running on a single-CPU system.

Reported by:	stefanf
2008-11-30 19:30:31 +00:00
Jason Evans
93e34865fa Do not spin when trying to lock on a single-CPU system.
Reported by:	davidxu
2008-11-30 05:55:24 +00:00
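
    A sketch of the idea under assumed names: spinning only helps when another
    CPU can release the lock, so a single-CPU process should go straight to
    blocking.  The ncpus variable, spin budget, and pthread-based lock are
    stand-ins for the allocator's own primitives.

    /*
     * Sketch: avoid busy-waiting on a single-CPU system, where the lock
     * holder cannot run while we spin.
     */
    #include <pthread.h>
    #include <unistd.h>

    #define SPIN_LIMIT  1000                /* illustrative spin budget */

    static long ncpus;                      /* set once at startup */

    static void
    malloc_mutex_lock(pthread_mutex_t *m)
    {
        int i;

        if (ncpus == 0)
            ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        if (ncpus > 1) {
            /* Only worth spinning if another CPU can release the lock. */
            for (i = 0; i < SPIN_LIMIT; i++) {
                if (pthread_mutex_trylock(m) == 0)
                    return;
            }
        }
        pthread_mutex_lock(m);              /* block instead of spinning */
    }
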
Jason Evans
b74d3e0c37 Revert to preferring mmap(2) over sbrk(2) when mapping memory, due to
potential extreme contention in the kernel for multi-threaded applications
on SMP systems.

Reported by:	kris
2008-11-03 21:17:18 +00:00
Jason Evans
bf5b19279d Use PAGE_{SIZE,MASK,SHIFT} from machine/param.h rather than hard-coding
page size and using sysconf(3).

Suggested by:	marcel
2008-09-10 14:27:34 +00:00
Marcel Moolenaar
93bf4a8436 Unbreak ia64: pages are 8KB. 2008-09-06 05:26:31 +00:00
Jason Evans
d6742bfbd3 Add thread-specific caching for small size classes, based on magazines.
This caching allows for completely lock-free allocation/deallocation in the
steady state, at the expense of likely increased memory use and
fragmentation.

Reduce the default number of arenas to 2*ncpus, since thread-specific
caching typically reduces arena contention.

Modify size class spacing to include ranges of 2^n-spaced, quantum-spaced,
cacheline-spaced, and subpage-spaced size classes.  The advantages are:
fewer size classes, reduced false cacheline sharing, and reduced internal
fragmentation for allocations that are slightly over 512, 1024, etc.

Increase RUN_MAX_SMALL, in order to limit fragmentation for the
subpage-spaced size classes.

Add a size-->bin lookup table for small sizes to simplify translating sizes
to size classes.  Include a hard-coded constant table that is used unless
custom size class spacing is specified at run time.

Add the ability to disable tiny size classes at compile time via
MALLOC_TINY.
2008-08-27 02:00:53 +00:00
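
    A small sketch of the size-->bin lookup table idea, with a made-up quantum
    and bin layout; the allocator's table covers its full mix of tiny, quantum,
    cacheline, and subpage classes and can be hard-coded or built at run time,
    as the message notes.

    /*
     * Sketch: map a small request size to a bin index with one table
     * lookup instead of branching on the size class ranges.
     */
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define QUANTUM     16U                 /* assumed spacing for the demo */
    #define SMALL_MAX   512U                /* largest size covered by the table */
    #define NSLOTS      (SMALL_MAX / QUANTUM)

    static uint8_t size2bin[NSLOTS + 1];    /* index: quantum-rounded size */

    static void
    size2bin_init(void)
    {
        unsigned i;

        /* Bin i - 1 serves sizes ((i - 1) * QUANTUM, i * QUANTUM]. */
        for (i = 1; i <= NSLOTS; i++)
            size2bin[i] = (uint8_t)(i - 1);
    }

    static unsigned
    small_size_to_bin(size_t size)
    {
        assert(size > 0 && size <= SMALL_MAX);
        /* Round up to a quantum multiple, then look the class up. */
        return (size2bin[(size + QUANTUM - 1) / QUANTUM]);
    }

    With these demo constants, small_size_to_bin(17) returns 1, i.e. the
    32-byte class.
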
Jason Evans
6f14f9b656 Move CPU_SPINWAIT into the innermost spin loop, in order to allow faster
preemption while busy-waiting.

Submitted by:	Mike Schuster <schuster@adobe.com>
2008-08-14 17:31:42 +00:00
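
    For context, a sketch of where a spin-wait hint sits in a busy-wait loop.
    CPU_SPINWAIT here is a local stand-in (PAUSE on x86), and the lock is a
    generic C11 test-and-test-and-set, not the allocator's spinlock.

    /*
     * Sketch: issue the CPU's spin-wait hint inside the innermost wait
     * loop, so it fires on every polling iteration.
     */
    #include <stdatomic.h>
    #include <stdbool.h>

    #if defined(__i386__) || defined(__amd64__) || defined(__x86_64__)
    #define CPU_SPINWAIT    __asm__ __volatile__("pause")
    #else
    #define CPU_SPINWAIT                    /* no hint available */
    #endif

    static atomic_bool lock_taken;

    static void
    spin_lock(void)
    {
        for (;;) {
            if (!atomic_exchange_explicit(&lock_taken, true,
                memory_order_acquire))
                return;                     /* acquired */
            /* Innermost loop: hint on every iteration while we wait. */
            while (atomic_load_explicit(&lock_taken, memory_order_relaxed))
                CPU_SPINWAIT;
        }
    }

    static void
    spin_unlock(void)
    {
        atomic_store_explicit(&lock_taken, false, memory_order_release);
    }
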
Jason Evans
52d7a117c0 Re-order the terms of an expression in arena_run_reg_dalloc() to correctly
detect whether the integer division table is large enough to handle the
divisor.  Before this change, the last two table elements were never used,
thus causing the slow path to be used for those divisors.
2008-08-14 17:03:29 +00:00
Colin Percival
c123de30b6 Remove variables which are assigned values and never used thereafter.
Found by:	LLVM/Clang Static Checker
Approved by:	jasone
2008-08-08 20:42:42 +00:00
Jason Evans
2bb0f7ba54 Enhance arena_chunk_map_t to directly support run coalescing, and use
the chunk map instead of red-black trees where possible.  Remove the
red-black trees and node objects that are obsoleted by this change.  The
net result is a ~1-2% memory savings, and a substantial allocation speed
improvement.
2008-07-18 19:35:44 +00:00
Jason Evans
b1c8b30f55 In the error path through base_alloc(), release base_mtx [1].
Fix bit vector initialization for run headers.

Submitted by:	[1] Mike Schuster <schuster@adobe.com>
2008-06-10 15:46:18 +00:00
Jason Evans
9007109030 Add a separate tree to track arena chunks that contain dirty pages.
This substantially improves worst case allocation performance, since
O(lg n) tree search can be used instead of O(n) tree iteration.

Use rb_wrap() instead of directly calling rb_*() macros.
2008-05-01 17:25:55 +00:00
Oleksandr Tymoshenko
00fb5362ba Set QUANTUM_2POW_MIN and SIZEOF_PTR_2POW parameters for MIPS
Approved by: imp
2008-04-29 22:56:05 +00:00
Jason Evans
e3085308be Check for integer overflow before calling sbrk(2), since it uses a
signed increment argument, but the size is an unsigned integer.
2008-04-29 01:32:42 +00:00
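
    A minimal sketch of the check, under an assumed helper name: reject sizes
    that would wrap sbrk(2)'s signed intptr_t increment before making the
    call.

    /*
     * Sketch: guard the signed sbrk(2) increment against an unsigned size.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <unistd.h>

    static void *
    dss_extend(size_t size)
    {
        void *p;

        /* A size above INTPTR_MAX would wrap to a negative increment. */
        if (size > (size_t)INTPTR_MAX)
            return (NULL);

        p = sbrk((intptr_t)size);
        if (p == (void *)-1)
            return (NULL);      /* brk limit or out of address space */
        return (p);
    }
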
Jason Evans
e5bf0d71c9 Implement red-black trees without using parent pointers, and store the
color bit in the least significant bit of the right child pointer, in
order to reduce red-black tree linkage overhead by ~2X as compared to
sys/tree.h.

Use the new red-black tree implementation in malloc, which drops
memory usage by ~0.5 or ~1%, for 32- and 64-bit systems, respectively.
2008-04-23 16:09:18 +00:00
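
    A simplified sketch of the pointer-tagging part of this change: node
    addresses are at least word-aligned, so the low bit of the right-child
    pointer can carry the color, eliminating a separate color field.  The
    struct and accessors below are illustrative; the real implementation
    generates equivalent code via the rb.h macros.

    /*
     * Sketch: store the red/black color in the low bit of the right-child
     * pointer (1 = red, 0 = black).
     */
    #include <stdbool.h>
    #include <stdint.h>

    struct rb_node {
        struct rb_node  *left;
        uintptr_t        right_red;     /* right child pointer | color bit */
    };

    static inline struct rb_node *
    rb_right_get(const struct rb_node *n)
    {
        return ((struct rb_node *)(n->right_red & ~(uintptr_t)1));
    }

    static inline void
    rb_right_set(struct rb_node *n, struct rb_node *right)
    {
        n->right_red = (uintptr_t)right | (n->right_red & 1);
    }

    static inline bool
    rb_red_get(const struct rb_node *n)
    {
        return ((n->right_red & 1) != 0);
    }

    static inline void
    rb_color_set(struct rb_node *n, bool red)
    {
        n->right_red = (n->right_red & ~(uintptr_t)1) | (red ? 1 : 0);
    }
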
Jason Evans
f2ec9c0c86 Remove stale #include <machine/atomic.h>, which was needed by lazy
deallocation.
2008-03-07 16:54:03 +00:00
Jason Evans
1945c7bd47 Fix a race condition in arena_ralloc() for shrinking in-place large
reallocation, when junk filling is enabled.  Junk filling must occur
prior to shrinking, since any deallocated trailing pages are immediately
available for use by other threads.

Reported by:	Mats Palmgren <mats.palmgren@bredband.net>
2008-02-17 18:34:17 +00:00
Jason Evans
196d0d4b59 Remove support for lazy deallocation. Benchmarks across a wide range of
allocation patterns, number of CPUs, and MALLOC_OPTIONS settings indicate
that lazy deallocation has the potential to worsen throughput dramatically.
Performance degradation occurs when multiple threads try to clear the lazy
free cache simultaneously.  Various experiments to avoid this bottleneck
failed to completely solve this problem, while adding yet more complexity.
2008-02-17 17:09:24 +00:00
Jason Evans
157d89fe25 Fix a bug in lazy deallocation that was introduced when
arena_dalloc_lazy_hard() was split out of arena_dalloc_lazy() in revision
1.162.

Reduce thundering herd problems in lazy deallocation by randomly varying
how many probes a thread does before taking the slow path.
2008-02-08 08:02:34 +00:00
Jason Evans
97091a2dd7 Clean up manipulation of chunk page map elements to remove some tenuous
assumptions about whether bits are set at various times.  This makes
adding other flags safe.

Reorganize functions in order to inline i{m,c,p,s,re}alloc().  This
allows the entire fast-path call chains for malloc() and free() to be
inlined. [1]

Suggested by:	[1] Stuart Parmenter <stuart@mozilla.com>
2008-02-08 00:35:56 +00:00
Jason Evans
baad859d16 Track dirty unused pages so that they can be purged if they exceed a
threshold, according to the 'F' MALLOC_OPTIONS flag.  This obsoletes the
'H' flag.

Try to realloc() large objects in place.  This substantially speeds up
incremental large reallocations in the common case.

Fix a bug in arena_ralloc() that caused relocation of sub-page objects
even if the old and new sizes were in the same size class.

Maintain trees of runs and simplify the per-chunk page map.  This allows
logarithmic-time searching for sufficiently large runs in
arena_run_alloc(), whereas the previous algorithm required linear time
in the worst case.

Break various large functions into smaller sub-functions, and inline
only the functions that are in the fast path for small object
allocation/deallocation.

Remove an unnecessary check in base_pages_alloc_mmap().

Avoid integer division in choose_arena() for the NO_TLS case on
single-CPU systems.
2008-02-06 02:59:54 +00:00
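
    A very rough sketch of the dirty-page purging policy, with placeholder
    names and a fixed threshold (the real threshold is controlled by the
    'F'/'f' options, and a later change scales it to active memory).  It also
    oversimplifies by purging only the range just freed.

    /*
     * Sketch: count dirty, unused pages and hand memory back to the VM
     * system with madvise(2) once the count crosses a threshold.
     */
    #include <sys/mman.h>

    #include <stddef.h>

    static size_t   ndirty;                 /* dirty, unused pages in this arena */
    static size_t   opt_dirty_max = 512;    /* threshold; scaled by the option */

    static void
    arena_maybe_purge(void *dirty_start, size_t dirty_pages, size_t pagesize)
    {
        ndirty += dirty_pages;
        if (ndirty <= opt_dirty_max)
            return;

        /* Let the VM system reclaim the pages; contents are discarded. */
        madvise(dirty_start, dirty_pages * pagesize, MADV_FREE);
        ndirty -= dirty_pages;
    }
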
Jason Evans
f38512f4af Enable both sbrk(2)- and mmap(2)-based memory acquisition methods by
default.  This has the disadvantage of rendering the datasize resource
limit irrelevant, but without this change, legitimate uses of more
memory than will fit in the data segment are thwarted by default.

Fix chunk_alloc_mmap() to work correctly if initial mapping is not
chunk-aligned and mapping extension fails.
2008-01-03 23:22:13 +00:00
Jason Evans
36ac4cc502 Fix a major chunk-related memory leak in chunk_dealloc_dss_record(). [1]
Clean up DSS-related locking and protect all pertinent variables with
dss_mtx (remove dss_chunks_mtx).  This fixes race conditions that could
cause chunk leaks.

Reported by:	[1] kris
2007-12-31 06:19:48 +00:00
Jason Evans
07aa172f11 Fix a bug related to sbrk() calls that could cause address space leaks.
This is a long-standing bug, but until recent changes it was difficult
to trigger, and even then its impact was non-catastrophic, with the
exception of revision 1.157.

Optimize chunk_alloc_mmap() to avoid the need for unmapping pages in the
common case.  Thanks go to Kris Kennaway for a patch that inspired this
change.

Do not maintain a record of previously mmap'ed chunk address ranges.
The original intent was to avoid the extra system call overhead in
chunk_alloc_mmap(), which is no longer a concern.  This also allows some
simplifications for the tree of unused DSS chunks.

Introduce huge_mtx and dss_chunks_mtx to replace chunks_mtx.  There was
no compelling reason to use the same mutex for these disjoint purposes.

Avoid memset() for huge allocations when possible.

Maintain two trees instead of one for tracking unused DSS address
ranges.  This allows scalable allocation of multi-chunk huge objects in
the DSS.  Previously, multi-chunk huge allocation requests failed if the
DSS could not be extended.
2007-12-31 00:59:16 +00:00
Jason Evans
14a7e7b5e1 Back out premature commit of previous version. 2007-12-28 09:21:12 +00:00
Jason Evans
03947063d0 Maintain two trees instead of one (old_chunks --> old_chunks_{ad,szad}) in
order to support re-use of multi-chunk unused regions within the DSS for
huge allocations.  This generalization is important for correct function
when mmap-based allocation is disabled.

Avoid zeroing re-used memory in the DSS unless it really needs to be
zeroed.
2007-12-28 07:24:19 +00:00
Jason Evans
3762647250 Release chunks_mtx for all paths through chunk_dealloc().
Reported by:	kris
2007-12-28 02:15:08 +00:00
Jason Evans
ebc87e7e0b Add the 'D' and 'M' run time options, and use them to control whether
memory is acquired from the system via sbrk(2) and/or mmap(2).  By default,
use sbrk(2) only, in order to support traditional use of resource limits.
Additionally, when both options are enabled, prefer the data segment to
anonymous mappings, in order to coexist better with large file mappings
in applications on 32-bit platforms.  This change has the potential to
increase memory fragmentation due to the linear nature of the data
segment, but from a performance perspective this is mitigated by the use
of madvise(2). [1]

Add the ability to interpret integer prefixes in MALLOC_OPTIONS
processing.  For example, MALLOC_OPTIONS=lllllllll can now be specified as
MALLOC_OPTIONS=9l.

Reported by:	[1] rwatson
Design review:	[1] alc, peter, rwatson
2007-12-27 23:29:44 +00:00
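
    A sketch of the integer-prefix parsing, with handle_flag() standing in for
    the real option switch: a run of digits becomes a repeat count for the
    flag character that follows.

    /*
     * Sketch: "9l" behaves like "lllllllll" when parsing MALLOC_OPTIONS.
     */
    #include <ctype.h>
    #include <stdio.h>

    static void
    handle_flag(char flag)
    {
        /* The real parser toggles or scales a tunable per flag here. */
        printf("flag '%c'\n", flag);
    }

    static void
    parse_malloc_options(const char *opts)
    {
        unsigned long nreps;

        while (*opts != '\0') {
            nreps = 1;
            if (isdigit((unsigned char)*opts)) {
                nreps = 0;
                do {
                    nreps = nreps * 10 + (unsigned long)(*opts - '0');
                    opts++;
                } while (isdigit((unsigned char)*opts));
                if (*opts == '\0')
                    break;      /* trailing count with no flag */
            }
            while (nreps-- > 0)
                handle_flag(*opts);
            opts++;
        }
    }

    parse_malloc_options("9l") calls handle_flag('l') nine times, matching the
    longhand "lllllllll" form.
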
Jason Evans
a0a474aed6 Use fixed point integer math instead of floating point math when
calculating run sizes.  Use of the floating point unit was a potential
pessimization to context switching for applications that do not otherwise
use floating point math. [1]

Reformat cpp macro-related comments to improve consistency.

Submitted by:	das
2007-12-18 05:27:57 +00:00
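
    A sketch of the fixed-point style of comparison, with an illustrative
    threshold: the run-overhead ratio is compared by cross-multiplying scaled
    integers, so the FPU is never touched.

    /*
     * Sketch: decide whether a run size keeps per-run overhead within a
     * threshold, using only integer math (fixed point, 1.0 == 1 << 12).
     */
    #include <stdbool.h>
    #include <stddef.h>

    #define OVRHD_SHIFT 12U
    #define OVRHD_MAX   ((1U << OVRHD_SHIFT) / 64U)    /* ~1.5% overhead cap */

    static bool
    run_overhead_ok(size_t run_size, size_t wasted)
    {
        /*
         * Equivalent to (double)wasted / run_size <= OVRHD_MAX / 4096.0,
         * but done entirely with integer shifts and multiplies.
         */
        return ((wasted << OVRHD_SHIFT) <= OVRHD_MAX * run_size);
    }
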
Jason Evans
d55bd6236f Refactor features a bit in order to make it possible to disable lazy
deallocation and dynamic load balancing via the MALLOC_LAZY_FREE and
MALLOC_BALANCE knobs.  This is a non-functional change, since these
features are still enabled when possible.

Clean up a few things that more pedantic compiler settings would cause
complaints over.
2007-12-17 01:20:04 +00:00
Jason Evans
7e42e29b9b Only zero large allocations when necessary (for calloc()). 2007-11-28 00:17:34 +00:00
Jason Evans
5ea8413d0a Implement dynamic load balancing of thread-->arena mapping, based on lock
contention.  The intent is to dynamically adjust to load imbalances, which
can cause severe contention.

Use pthread mutexes where possible instead of libc "spinlocks" (they aren't
actually spin locks).  Conceptually, this change is meant only to support
the dynamic load balancing code by enabling the use of spin locks, but it
has the added apparent benefit of substantially improving performance due to
reduced context switches when there is moderate arena lock contention.

Proper tuning parameter configuration for this change is a finicky business,
and it is very much machine-dependent.  One seemingly promising solution
would be to run a tuning program during operating system installation that
computes appropriate settings for load balancing.  (The pthreads adaptive
spin locks should probably be similarly tuned.)
2007-11-27 03:17:30 +00:00
Jason Evans
26b5e3a18e Implement lazy deallocation of small objects. For each arena, maintain a
vector of slots for lazily freed objects.  For each deallocation, before
doing the hard work of locking the arena and deallocating, try several times
to randomly insert the object into the vector using atomic operations.

This approach is particularly effective at reducing contention for
multi-threaded applications that use the producer-consumer model, wherein
one producer thread allocates objects, then multiple consumer threads
deallocate those objects.
2007-11-27 03:13:15 +00:00
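
    A sketch of the fast path under assumed names and sizes, using C11 atomics
    in place of the libc atomic operations the commit relies on; the flush of
    parked pointers into the arena is omitted.

    /*
     * Sketch: try a few random slots in a per-arena vector and park the
     * freed pointer there with a compare-and-swap; fall back to the locked
     * path only if every probe finds its slot occupied.
     */
    #include <stdatomic.h>
    #include <stdlib.h>

    #define LAZY_FREE_NSLOTS    256         /* illustrative vector size */
    #define LAZY_FREE_NPROBES   4           /* probes before the slow path */

    static _Atomic(void *) lazy_free_cache[LAZY_FREE_NSLOTS];

    static void
    arena_dalloc_hard(void *ptr)
    {
        /* Stand-in for the real path: lock the arena and free for real. */
        free(ptr);
    }

    static void
    arena_dalloc_lazy(void *ptr)
    {
        void *expected;
        unsigned i, slot;

        for (i = 0; i < LAZY_FREE_NPROBES; i++) {
            slot = (unsigned)rand() % LAZY_FREE_NSLOTS; /* placeholder RNG */
            expected = NULL;
            if (atomic_compare_exchange_strong(&lazy_free_cache[slot],
                &expected, ptr))
                return;         /* parked; a later flush frees it */
        }
        arena_dalloc_hard(ptr); /* all probes hit occupied slots */
    }
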
Jason Evans
bcd3523138 Avoid re-zeroing memory in calloc() when possible. 2007-11-27 03:12:15 +00:00
Jason Evans
1bbd1b8613 Fix stats printing of the amount of memory currently consumed by huge
allocations. [1]

Fix calculation of the number of arenas when 'n' is specified via
MALLOC_OPTIONS.

Clean up various style inconsistencies.

Obtained from:	[1] NetBSD
2007-11-27 03:09:23 +00:00
Jason Evans
76507741ab Fix junk/zero filling for realloc(). Junk filling was missing in one case,
and zero filling was broken in a way that could cause memory corruption.

Update comments.
2007-06-15 22:00:16 +00:00
Jason Evans
d33f4690ba Use size_t instead of unsigned for pagesize-related values, in order to
avoid downcasting issues.  In particular, this change fixes
posix_memalign(3) for alignments greater than 2^31 on LP64 systems.

Make sure that NDEBUG is always set to be compatible with MALLOC_DEBUG. [1]

Reported by:	[1] Lee Hyo geol <hyogeollee@gmail.com>
2007-03-29 21:07:17 +00:00