Commit Graph

509 Commits

Author SHA1 Message Date
Marcel Moolenaar
5011eea82f Define NO_TLS on PowerPC.
See also: PR ia64/91846
2006-08-09 19:01:27 +00:00
Jason Evans
b3dcb52814 Conditionally expand the size_invs lookup table in arena_run_reg_dalloc()
so that architectures with a quantum of 8 (rather than 16) work.

Restore arm's quantum to 8.

Submitted by:	jmg
2006-07-27 19:09:32 +00:00
Olivier Houchard
4cfa5e0135 Use 4 as QUANTUM_2POW_MIN on arm, as on every other architecture, to avoid
triggering an assertion later.
2006-07-27 14:36:28 +00:00
Jason Evans
b8f9774731 Fix cpp logic in arena_malloc() to adjust size when assertions are enabled,
even if stats gathering is disabled. [1]

Remove 'size' parameter from several functions that do not use it.

Reported by:	[1] ache
2006-07-27 04:00:12 +00:00
Jason Evans
5355c74026 Use some math tricks in arena_run_reg_dalloc() to avoid actual division, as
well as avoiding a switch statement.  This change has no significant impact
on performance when branch prediction is successful at predicting the sizes
of objects passed to free(), but in the case that the object sizes are
semi-random, this change has the potential to prevent many branch prediction
misses, thus improving performance substantially.

Take advantage of alignment guarantees in ipalloc(), and pad object sizes to
something less than a power of two when possible.  This has the potential
to substantially reduce internal fragmentation for objects allocated via
posix_memalign().

Avoid an unnecessary pow2_ceil() call in arena_ralloc().

Submitted by:	djam8193ah@hotmail.com
2006-07-01 16:51:10 +00:00
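
A sketch of the division-avoidance trick (names and constants here are
illustrative, not the committed code): multiply by a precomputed, scaled
reciprocal and shift, which is exact for the bounded offsets that occur
within a run.

    #include <assert.h>
    #include <stdint.h>

    #define SIZE_INV_SHIFT 21
    /* Scaled reciprocal of s; the +1 compensates for truncation. */
    #define SIZE_INV(s) (((1U << SIZE_INV_SHIFT) / (s)) + 1)

    static const uint32_t size_invs[] = {
        SIZE_INV(16), SIZE_INV(32), SIZE_INV(48), SIZE_INV(64)
    };

    /* regind = diff / size, with no hardware divide (quantum of 16). */
    static unsigned
    region_index(uint32_t diff, uint32_t size)
    {
        unsigned regind = (diff * size_invs[(size >> 4) - 1])
            >> SIZE_INV_SHIFT;

        assert(regind == diff / size);  /* Exact for in-run offsets. */
        return (regind);
    }
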
Jason Evans
00d8242c2b Make the behavior of malloc(0) standards-compliant by getting rid of nil,
and instead creating a small allocation for each malloc(0) call.  The
optional SysV compatibility behavior remains unchanged.

Add a couple of assertions.

Fix a couple of typos in error message strings.
2006-06-30 20:54:15 +00:00
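
A short demonstration of the new behavior (assuming the optional SysV
compatibility mode is not enabled): each malloc(0) call now yields a
distinct, freeable pointer rather than a shared sentinel.

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        void *a = malloc(0);
        void *b = malloc(0);

        /* Two distinct small allocations, both valid to free(). */
        printf("%p %p\n", a, b);
        free(a);
        free(b);
        return (0);
    }
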
Giorgos Keramidas
1d3a1c8bce twalk() expects an `action' function, not a comparison function.
The text is correct in the "DESCRIPTION" section, so fix "SYNOPSIS"
to use the correct name.

PR:		docs/90498
Submitted by:	Vasil Dimov
MFC after:	3 days
2006-06-23 13:36:33 +00:00
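
For reference, the corrected SYNOPSIS takes an action callback, per
POSIX, rather than a comparison function:

    #include <search.h>

    /* The action callback twalk() expects. */
    void action(const void *node, VISIT order, int level);

    void twalk(const void *root,
        void (*action)(const void *, VISIT, int));
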
Jason Evans
0fc8aff0c4 Add a missing case for the switch statement in arena_run_reg_dalloc(). [1]
Fix a leak in chunk_dealloc(). [2]

Reported by:	[1] djam8193ah@hotmail.com,
		[2] Ville-Pertti Keinonen <will@exomi.com>
2006-06-20 20:38:25 +00:00
Maxim Konovalov
3953c11715 o .Xr strtonum(3).
MFC after:	1 week
2006-05-20 21:11:35 +00:00
Jung-uk Kim
1761ec1040 Correct decoding a string containing '/'.
PR:		97485
Submitted by:	Mikko Tyolajarvi < mbsd at pacbell dot net >
2006-05-19 19:06:38 +00:00
Jason Evans
3212b810d8 Increase the minimum chunk size by a power of two (32kB --> 64kB, assuming
4kB pages), in order to avoid dangerous rounding error when calculating
fullness limits during run promotion/demotion.

Convert a structure bitfield to a normal field in arena_run_t.  This should
have been changed along with the other fields in revision 1.120.
2006-05-10 00:07:45 +00:00
Jason Evans
f7768b9f34 Change the semantics of brk_max to dynamically deal with data segment
bounds. [1]

Modify logic for utilizing the data segment, such that it is possible to
create huge allocations there.

Shrink the data segment when deallocating a chunk, if it is at the end of
the data segment.

Rename chunk_size to csize in huge_malloc(), in order to avoid masking a
static variable of the same name. [1]

Reported by:	Paul Allen <nospam@ugcs.caltech.edu>
2006-04-27 01:03:00 +00:00
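
A hedged sketch of the "shrink the data segment" idea (the function name
is illustrative; the committed code differs): if the chunk being
deallocated abuts the current break, return the memory with a negative
sbrk().

    #include <stdint.h>
    #include <unistd.h>

    static void
    chunk_dealloc_dss(void *chunk, size_t size)
    {
        /*
         * Only a chunk at the very end of the data segment can be
         * returned; shrinking past anything else would punch a hole.
         */
        if ((char *)chunk + size == (char *)sbrk(0))
            sbrk(-(intptr_t)size);
    }
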
Jens Schweikhardt
e4b2624f46 s/soley/solely 2006-04-13 18:19:44 +00:00
Jason Evans
f90cbdf17f Add an unreachable return statement, in order to avoid a compiler warning
for non-standard optimization levels.

Reported by:	Michael Zach <zach@webges.com>
2006-04-05 18:46:24 +00:00
Jason Evans
50ff9670e2 Only initialize the first per-chunk page map element for free runs. This
reduces the complexity of run split/coalesce operations from O(n) to O(lg n).
2006-04-05 04:15:12 +00:00
Jason Evans
94fc7dc0d5 Add malloc_usable_size() to the RETURN VALUES section. 2006-04-04 20:27:53 +00:00
Jason Evans
cf01f0d7c5 Add init_lock, and use it to protect against allocator initialization
races.  This isn't currently necessary for libpthread or libthr, but
without it external threads libraries like the linuxthreads port are
not safe to use.

Reported by:	ganbold@micom.mng.net
2006-04-04 19:46:28 +00:00
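
The shape of the init_lock protection, sketched with a POSIX mutex
(libc's internal lock primitive differs):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool malloc_initialized = false;

    static void
    malloc_init(void)
    {
        pthread_mutex_lock(&init_lock);
        if (!malloc_initialized) {
            /* One-time allocator setup happens here. */
            malloc_initialized = true;
        }
        pthread_mutex_unlock(&init_lock);
    }
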
Jason Evans
1c6d5bde6c Refactor per-run bitmap manipulation functions so that bitmap offsets only
have to be calculated once per allocator operation.

Make nil const.

Update various comments.

Remove/avoid division where possible.

For the one division operation that remains in the critical path, add a
switch statement that has a case for each small size class, and do division
with a constant divisor in each case.  This allows the compiler to generate
optimized code that does not use hardware division [1].

Obtained from:	peter [1]
2006-04-04 03:51:47 +00:00
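
The constant-divisor switch, sketched with illustrative size classes:
each case divides by a compile-time constant, so the compiler can emit a
multiply-and-shift sequence instead of a hardware divide.

    /* Assumes a quantum of 16; one case per small size class. */
    static unsigned
    region_index(unsigned diff, unsigned size)
    {
        switch (size) {
        case 16:
            return (diff / 16);     /* Compiles to a shift. */
        case 32:
            return (diff / 32);
        case 48:
            return (diff / 48);     /* Compiles to multiply + shift. */
        case 64:
            return (diff / 64);
        default:
            return (diff / size);   /* Fallback: hardware divide. */
        }
    }
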
Jason Evans
cd70100e5d Optimize runtime performance, primarily using the following techniques:

  * Avoid choosing an arena until it's certain that an arena is needed
    for allocation.

  * Convert division/multiplication to bitshifting where possible.

  * Avoid accessing TLS variables in single-threaded code.

  * Reduce the amount of pointer dereferencing.

  * Move lock acquisition in critical paths to only protect the code
    that requires synchronization, and completely remove locking where
    possible.
2006-03-30 20:25:52 +00:00
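
An example of the division/multiplication-to-bitshift conversion from
the list above, assuming a power-of-two quantum and page size (which the
allocator guarantees):

    #define QUANTUM         16U
    #define PAGE_SHIFT      12U

    /* Round a size up to a quantum boundary without multiply/divide. */
    #define QUANTUM_CEILING(s)  (((s) + (QUANTUM - 1)) & ~(QUANTUM - 1))

    /* Convert a byte offset to a page index with a shift. */
    #define PAGE_INDEX(off)     ((off) >> PAGE_SHIFT)
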
Jason Evans
6b2c15da6a Add malloc_usable_size(3).
Discussed with:		arch@
2006-03-28 22:16:04 +00:00
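
Typical usage of the new interface (on FreeBSD the prototype is in
<malloc_np.h>):

    #include <stdio.h>
    #include <stdlib.h>
    #include <malloc_np.h>  /* malloc_usable_size() */

    int
    main(void)
    {
        void *p = malloc(100);

        /* The usable size may exceed the requested size. */
        printf("requested 100, usable %zu\n", malloc_usable_size(p));
        free(p);
        return (0);
    }
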
Jason Evans
9f9bc9367c Allow the 'n' option to decrease the number of arenas below the default,
to as little as one arena.  Also, limit the number of arenas to avoid a
potential invariant violation in base_alloc().
2006-03-26 23:41:35 +00:00
Jason Evans
4328edf534 Add comments and reformat/rearrange code. There are no significant
functional changes in this commit.
2006-03-26 23:37:25 +00:00
Jason Evans
0c21f9eda7 Convert TINY_MIN_2POW from a cpp macro to tiny_min_2pow (a variable), and
determine its value at run time according to other relevant values.  This
avoids the creation of runs that are incompletely utilized, as long as
pagesize isn't too large (>32kB, given the current RUN_MIN_REGS_2POW
setting).

Increase the size of several structure bitfields in arena_run_t in order
to avoid integer overflow in the case that a run's header does not overlap
with the space that is usable as application allocation regions.  Given
the tiny_min_2pow change, this fix has no additional impact unless
pagesize is >32kB.

Reported by:	kris
2006-03-24 22:13:49 +00:00
Jason Evans
efafcfa7fb Add USE_BRK-specific code in malloc_init_hard() to allow the first
internally used chunk to start at the beginning of the heap, rather
than at a chunk-aligned address.  This reduces mapped memory somewhat
for 32-bit architectures.

Add the arena_run_link_t type and use it wherever a run object is only
used as a ring 'header'.  This saves approximately 40 kB of memory per
arena.

Remove an obsolete (no longer used) code path from base_alloc(), which
supported the internal allocation of objects larger than the chunk
size.

Enhance chunk_dealloc() to cache chunk addresses for all deallocated
chunks.  This has no impact for most programs, but has the potential
to reduce VM map fragmentation for programs that use huge
allocations.
2006-03-24 00:28:08 +00:00
Jason Evans
c07ee180bc Separate completely full runs from runs that are merely almost full, so
that no linear searching is necessary if we resort to allocating from a
run that is known to be mostly full.  There are pathological edge cases
that could have caused severely degraded performance, and this change
fixes that.
2006-03-20 04:05:05 +00:00
Jason Evans
bd6a7799c4 Optimize realloc() to reallocate in place if the old and new sizes are
close enough to each other that reallocation would allocate a new region
of the same size.  This improves the performance of repeated incremental
reallocations by up to three orders of magnitude. [1]

Fix arena_new() to properly constrain run size if a small chunk size was
specified during runtime configuration.

Suggested by:	se [1]
2006-03-19 18:28:06 +00:00
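
The in-place test, sketched with a hypothetical size-rounding helper: if
the old and new sizes round to the same allocation size, the region can
be reused with no copy.

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical stand-in: round a request up to a 16-byte quantum. */
    static size_t
    alloc_size(size_t size)
    {
        return ((size + 15) & ~(size_t)15);
    }

    static void *
    realloc_sketch(void *ptr, size_t oldsize, size_t newsize)
    {
        void *ret;

        /* Same rounded size: reuse the existing region as-is. */
        if (alloc_size(oldsize) == alloc_size(newsize))
            return (ptr);

        if ((ret = malloc(newsize)) == NULL)
            return (NULL);
        memcpy(ret, ptr, oldsize < newsize ? oldsize : newsize);
        free(ptr);
        return (ret);
    }
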
Jason Evans
2d07e432d4 Modify allocation policy, in order to avoid excessive fragmentation for
allocation patterns that involve a relatively even mixture of many
different size classes.

Reduce the chunk size from 16 MB to 2 MB.  Since chunks are now carved up
using an address-ordered first best fit policy, VM map fragmentation is
much less likely, which makes smaller chunks not as much of a risk.  This
reduces the virtual memory size of most applications.

Remove redzones, since program buffer overruns are no longer as likely to
corrupt malloc data structures.

Remove the C MALLOC_OPTIONS flag, and add H and S.
2006-03-17 09:00:27 +00:00
Ruslan Ermilov
91545fccf9 Add a non-optional newline after ".Bx". 2006-03-15 14:45:45 +00:00
Andre Oppermann
7727f485de Revert previous changes as we do support the .Ox macro for OpenBSD.
Pointed out by:	ceri, ru, delphij
2006-03-15 14:05:41 +00:00
Andrey A. Chernov
7768950fe3 POSIX's strtoll() (and ours too) can set errno to EINVAL, so check
for it first.

Approved by:    andre
2006-03-14 19:53:03 +00:00
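
The calling pattern this implies: clear errno before the call, then
distinguish EINVAL from ERANGE afterward.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdlib.h>

    static bool
    parse_ll(const char *s, long long *out)
    {
        char *end;

        errno = 0;              /* Clear stale errno before the call. */
        *out = strtoll(s, &end, 10);
        if (errno == EINVAL || end == s)
            return (false);     /* No conversion was performed. */
        if (errno == ERANGE)
            return (false);     /* Clamped to LLONG_MIN/LLONG_MAX. */
        return (true);
    }
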
Andre Oppermann
b0b2326781 Fix HISTORY and point to OpenBSD. 2006-03-14 17:01:21 +00:00
Andre Oppermann
c74dfa2faf Import of OpenBSD's strtonum(3), which is a nicer version of strtoll(3)
providing proper error checking and other improvements.

Obtained from:	OpenBSD
Requested by:	flz (to port Open[BGP|OSPF]D)
MFC after:	3 days
2006-03-14 16:57:30 +00:00
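
strtonum(3) folds that errno bookkeeping into a single call; typical
usage follows its manual page:

    #include <err.h>
    #include <stdlib.h>

    static int
    parse_port(const char *s)
    {
        const char *errstr;
        long long port;

        port = strtonum(s, 1, 65535, &errstr);
        if (errstr != NULL)
            errx(1, "port number is %s: %s", errstr, s);
        return ((int)port);
    }
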
Daniel Eischen
6fad3aaf15 Add each directory's symbol map file to SYM_MAPS. 2006-03-13 01:15:01 +00:00
Daniel Eischen
cce72e8860 Add symbol maps and initial symbol version definitions to libc.
Reviewed by:	davidxu
2006-03-13 00:53:21 +00:00
Wojciech A. Koszek
9d0e4617f3 Fix typo in manual page reference.
Approved by:	cognet (mentor)
MFC after:	3 days
2006-02-26 23:01:11 +00:00
Alexander Kabaev
129d4752a0 Remove extra slash from pty slave device name returned by ptsname. 2006-02-13 00:04:04 +00:00
Jason Evans
d8a1377b1b Fix calculation of the number of arenas to use on multi-processor systems. 2006-02-04 01:11:30 +00:00
Joel Dahl
fbf9b468d5 Expand contractions. 2006-02-01 14:33:14 +00:00
Olivier Houchard
9b1fa2482e If the sysctl kern.pts.enable doesn't exist, check that /dev/ptmx is there,
and if so, use the pts system.

Suggested by:	rwatson
2006-01-29 00:02:57 +00:00
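
A hedged sketch of the detection order described above (illustrative,
not the committed code): consult the sysctl, and fall back to probing
/dev/ptmx if the sysctl does not exist.

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/sysctl.h>
    #include <stdbool.h>

    static bool
    use_pts(void)
    {
        struct stat sb;
        size_t len;
        int enabled;

        len = sizeof(enabled);
        if (sysctlbyname("kern.pts.enable", &enabled, &len,
            NULL, 0) == 0)
            return (enabled != 0);
        /* Sysctl absent: the presence of /dev/ptmx implies pts. */
        return (stat("/dev/ptmx", &sb) == 0);
    }
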
Jason Evans
4fae5e8fda Remove unwarranted uses of 'goto'. 2006-01-27 07:46:22 +00:00
Jason Evans
a3d0ab47a6 Add NO_MALLOC_EXTRAS, so that various extra features that can cause
performance degradation can be disabled via something like the following
in /etc/make.conf:

	CFLAGS+=-DNO_MALLOC_EXTRAS

Suggested by:	deischen
2006-01-27 04:42:10 +00:00
Jason Evans
7138ef5b1d Fix the type of a statistics counter (unsigned --> unsigned long). 2006-01-27 04:36:39 +00:00
Jason Evans
842e5e3d91 Clean up statistics gathering and printing. 2006-01-27 02:36:44 +00:00
Jason Evans
499168546f Optimize arena_bin_pop() to reduce the number of separator operations.
Remove the block of code that tries to use delayed regions in LIFO order,
since from a policy perspective, it conflicts with LRU caching of newly
coalesced regions in arena_undelay().  There are numerous policy
alternatives, and it isn't readily obvious which (if any) is superior;
this change at least has the virtue of being consistent with policy.
2006-01-26 08:11:23 +00:00
Olivier Houchard
67c7201e18 ptsname() bits for pts. 2006-01-26 01:33:55 +00:00
Jason Evans
0653ddb655 Remove a redundant variable assignment in arena_reg_frag_alloc(). 2006-01-25 05:41:02 +00:00
Jason Evans
b97aec1d61 If no coalesced exact-fit small regions are available, but delayed exact-
fit regions are available, use the delayed regions in LIFO order, in order
to increase locality of reference.  We might expect this to cause delayed
regions to be removed from the delay ring buffer more often (since we're
now re-using more recently buffered regions), but numerous tests indicate
that the overall impact on memory usage tends to be good (reduced
fragmentation).

Re-work arena_frag_reg_alloc() so that when large free regions are
exhausted, it uses small regions in a way that favors contiguous allocation
of sequentially allocated small regions.  Use arena_frag_reg_alloc() in
this capacity, rather than directly attempting over-fitting of small
requests when no large regions are available.

Remove the bin overfit statistic, since it is no longer relevant due to
the arena_frag_reg_alloc() changes.

Do not specify arena_frag_reg_alloc() as an inline function.  It is too
large to benefit much from being inlined, and it is also called in two
places, only one of which is in the critical path (the other call bloated
arena_reg_alloc()).

Call arena_coalesce() for a region before caching it with
arena_mru_cache().

Add assertions that detect the attempted caching of adjacent free regions,
so that we notice this problem when it is first created, rather than in
arena_coalesce(), when it's too late to know how the problem arose.

Reported by:    Hans Blancke
2006-01-25 04:21:22 +00:00
Jason Evans
ad4e4c676f Make the 'C' and 'c' malloc options consistent with other options; 'C'
doubles the cache size, and 'c' halves the cache size.
2006-01-23 03:32:38 +00:00
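
The doubling/halving convention, sketched as an option-parser fragment
(the variable name is illustrative):

    /* Apply one MALLOC_OPTIONS character to the cache-size knob. */
    static void
    apply_option(char opt, unsigned long *cache_size)
    {
        switch (opt) {
        case 'C':
            *cache_size <<= 1;          /* Double the cache size. */
            break;
        case 'c':
            if (*cache_size > 1)
                *cache_size >>= 1;      /* Halve the cache size. */
            break;
        }
    }
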
Jason Evans
5531d7fdc6 In arena_chunk_reg_alloc(), try to avoid touching the last page in the
chunk during initialization, in order to avoid physically backing the
page unless data are allocated there.
2006-01-23 03:19:01 +00:00
Jason Evans
677bc78b39 Use uintptr_t rather than size_t when casting pointers to integers. Also,
fix the few remaining casting style(9) errors that remained after the
functional change.

Reported by:	jmallett
2006-01-20 03:11:11 +00:00
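
The style point, illustrated: uintptr_t is the integer type defined to
round-trip a pointer, whereas size_t carries no such guarantee. The
chunk size below is illustrative.

    #include <stdint.h>

    #define CHUNK_SIZE  (2U * 1024 * 1024)

    /* Find the enclosing chunk base with uintptr_t arithmetic. */
    static void *
    chunk_base(void *ptr)
    {
        return ((void *)((uintptr_t)ptr & ~((uintptr_t)CHUNK_SIZE - 1)));
    }
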