races. This isn't currently necessary for libpthread or libthr, but
without it, external threads libraries like the linuxthreads port are
not safe to use.
Reported by: ganbold@micom.mng.net
have to be calculated once per allocator operation.
Make nil const.
Update various comments.
Remove/avoid division where possible.
For the one division operation that remains in the critical path, add a
switch statement that has a case for each small size class, and do division
with a constant divisor in each case. This allows the compiler to generate
optimized code that does not use hardware division [1].
Obtained from: peter [1]
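As a hedged illustration of the constant-divisor switch described above (the
size classes and the function name here are invented, not the actual malloc.c
code):

/*
 * Dividing by a compile-time constant in each case lets the compiler
 * strength-reduce the division (typically into a multiply/shift
 * sequence) instead of emitting a hardware divide instruction.
 */
#include <stddef.h>

static inline size_t
region_index(size_t offset, size_t region_size)
{
	switch (region_size) {
	case 8:
		return (offset / 8);
	case 16:
		return (offset / 16);
	case 32:
		return (offset / 32);
	case 48:
		return (offset / 48);
	case 64:
		return (offset / 64);
	default:
		/* Unexpected size class: fall back to a real division. */
		return (offset / region_size);
	}
}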
* Avoid choosing an arena until it's certain that an arena is needed
for allocation.
* Convert division/multiplication to bitshifting where possible (see the
  sketch after this list).
* Avoid accessing TLS variables in single-threaded code.
* Reduce the amount of pointer dereferencing.
* Move lock acquisition in critical paths to only protect the code
that requires synchronization, and completely remove locking where
possible.
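A hedged sketch of the shift conversion mentioned in the list above; the
constant and the helper names are invented for the example and assume
power-of-two page sizes:

/*
 * When the divisor or multiplier is a known power of two, the operation
 * can be expressed as a shift.
 */
#include <stddef.h>

#define PAGE_SIZE_2POW	12	/* assume 4 kB pages for this example */

static inline size_t
bytes_to_pages(size_t size)
{
	/* Equivalent to (size + page_size - 1) / page_size. */
	return ((size + ((size_t)1 << PAGE_SIZE_2POW) - 1) >> PAGE_SIZE_2POW);
}

static inline size_t
pages_to_bytes(size_t npages)
{
	/* Equivalent to npages * page_size. */
	return (npages << PAGE_SIZE_2POW);
}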
determine its value at run time according to other relevant values. This
avoids the creation of runs that are incompletely utilized, as long as
pagesize isn't too large (>32kB, given the current RUN_MIN_REGS_2POW
setting).
Increase the size of several structure bitfields in arena_run_t in order
to avoid integer overflow in the case that a run's header does not overlap
with the space that is usable as application allocation regions. Given
the tiny_min_2pow change, this fix has no additional impact unless
pagesize is >32kB.
Reported by: kris
internally used chunk to start at the beginning of the heap, rather
than at a chunk-aligned address. This reduces mapped memory somewhat
for 32-bit architectures.
Add the arena_run_link_t type and use it wherever a run object is only
used as a ring 'header'. This saves approximately 40 kB of memory per
arena.
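Roughly, the saving comes from the fact that a ring sentinel only needs link
pointers, not a run's full bookkeeping. A sketch of that idea with invented
field names (this is not the actual malloc.c layout):

/*
 * A bin's ring sentinel never touches per-run bookkeeping, so it can be
 * declared as a smaller link-only type, provided the link fields come
 * first in arena_run_t so the two types alias cleanly.
 */
typedef struct arena_run_link_s arena_run_link_t;
struct arena_run_link_s {
	arena_run_link_t *next;
	arena_run_link_t *prev;
};

typedef struct arena_run_s {
	arena_run_link_t link;	/* must be first so a run can sit in a ring */
	unsigned	 nfree;	/* stand-in for the rest of the bookkeeping */
} arena_run_t;

typedef struct arena_bin_s {
	arena_run_link_t runs;	/* ring sentinel: link-only, hence small */
} arena_bin_t;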
Remove an obsolete (no longer used) code path from base_alloc(), which
supported the internal allocation of objects larger than the chunk
size.
Enhance chunk_dealloc() to cache chunk addresses for all deallocated
chunks. This has no impact for most programs, but has the potential
to reduce VM map fragmentation for programs that use huge
allocations.
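A loose sketch of the caching idea; the cache structure, sizes, and names
below are invented, not the actual chunk_dealloc()/chunk allocation code:

/*
 * Hold on to the mappings of recently deallocated chunks and hand them
 * back out before asking the kernel for new address space, so heavy
 * huge-allocation churn does not keep punching holes in the VM map.
 */
#include <stddef.h>
#include <sys/mman.h>

#define CHUNK_SIZE_SKETCH	(2 * 1024 * 1024)
#define CACHED_CHUNKS_MAX	16

static void	*cached_chunks[CACHED_CHUNKS_MAX];
static unsigned	 ncached_chunks;

static void
chunk_dealloc_sketch(void *chunk)
{
	if (ncached_chunks < CACHED_CHUNKS_MAX) {
		cached_chunks[ncached_chunks++] = chunk; /* keep the mapping */
		return;
	}
	munmap(chunk, CHUNK_SIZE_SKETCH);	/* cache full: really unmap */
}

static void *
chunk_alloc_sketch(void)
{
	void *chunk;

	if (ncached_chunks > 0)
		return (cached_chunks[--ncached_chunks]);
	chunk = mmap(NULL, CHUNK_SIZE_SKETCH, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	return (chunk == MAP_FAILED ? NULL : chunk);
}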
that no linear searching is necessary if we resort to allocating from a
run that is known to be mostly full. There are pathological edge cases
that could have caused severely degraded performance, and this change
fixes them.
close enough to each other that reallocation would allocate a new region
of the same size. This improves the performance of repeated incremental
reallocations by up to three orders of magnitude. [1]
Fix arena_new() to properly constrain run size if a small chunk size was
specified during runtime configuration.
Suggested by: se [1]
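A rough sketch of the same-size-class shortcut described above; the
power-of-two rounding and every name below are invented for the example:

/*
 * If the old and the requested sizes round up to the same size class,
 * the existing region is already big enough, so realloc can return it
 * unchanged instead of allocating a new region and copying.
 */
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

static size_t
size_class_sketch(size_t size)
{
	size_t class;

	/* Round up to the next power of two (simplified size classes). */
	for (class = 1; class < size; class <<= 1)
		;
	return (class);
}

static void *
realloc_sketch(void *ptr, size_t old_size, size_t new_size)
{
	void *new_ptr;

	if (size_class_sketch(old_size) == size_class_sketch(new_size))
		return (ptr);	/* same class: reuse the region in place */

	/* Different class: fall back to allocate, copy, free. */
	new_ptr = malloc(new_size);
	if (new_ptr == NULL)
		return (NULL);
	memcpy(new_ptr, ptr, old_size < new_size ? old_size : new_size);
	free(ptr);
	return (new_ptr);
}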
allocation patterns that involve a relatively even mixture of many
different size classes.
Reduce the chunk size from 16 MB to 2 MB. Since chunks are now carved up
using an address-ordered first best fit policy, VM map fragmentation is
much less likely, which makes smaller chunks not as much of a risk. This
reduces the virtual memory size of most applications.
Remove redzones, since program buffer overruns are no longer as likely to
corrupt malloc data structures.
Remove the C MALLOC_OPTIONS flag, and add H and S.
Remove the block of code that tries to use delayed regions in LIFO order,
since from a policy perspective, it conflicts with LRU caching of newly
coalesced regions in arena_undelay(). There are numerous policy
alternatives, and it isn't readily obvious which (if any) is superior;
this change at least has the virtue of being consistent with policy.
fit regions are available, use the delayed regions in LIFO order to
increase locality of reference. We might expect this to cause delayed
regions to be removed from the delay ring buffer more often (since we're
now re-using more recently buffered regions), but numerous tests indicate
that the overall impact on memory usage tends to be good (reduced
fragmentation).
Re-work arena_frag_reg_alloc() so that when large free regions are
exhausted, it uses small regions in a way that favors contiguous allocation
of sequentially allocated small regions. Use arena_frag_reg_alloc() in
this capacity, rather than directly attempting over-fitting of small
requests when no large regions are available.
Remove the bin overfit statistic, since it is no longer relevant due to
the arena_frag_reg_alloc() changes.
Do not specify arena_frag_reg_alloc() as an inline function. It is too
large to benefit much from being inlined, and it is also called in two
places, only one of which is in the critical path (the other call bloated
arena_reg_alloc()).
Call arena_coalesce() for a region before caching it with
arena_mru_cache().
Add assertions that detect the attempted caching of adjacent free regions,
so that we notice this problem when it is first created, rather than in
arena_coalesce(), when it's too late to know how the problem arose.
Reported by: Hans Blancke
problems in cases where regions are faked up for the purposes of red-black
tree searches, since those faked region headers reside on the stack, rather
than in a malloc chunk.
allowing the error to be fatal.
Move a label in order to make sure to properly handle errors in malloc(0).
Reported by: Alastair D'Silva, Saneto Takanori
there is never any need to recursively call the main allocation functions.
Remove recursive spinlock support, since it is no longer needed.
Allow chunks to be as small as the page size.
Correctly propagate OOM errors from arena_new().
broken for non-threaded shared processes in that __tls_get_addr()
assumes the thread pointer is always initialized. This is not the
case. When arenas_map is referenced in choose_arena() and it is
defined as a thread-local variable, it will result in a SIGSEGV.
PR: ia64/91846 (describes the TLS/ia64 bug).
* Add posix_memalign().
* Move calloc() from calloc.c to malloc.c. Add a calloc() implementation in
rtld-elf in order to make the loader happy (even though calloc() isn't
used in rtld-elf).
* Add _malloc_prefork() and _malloc_postfork(), and use them instead of
directly manipulating __malloc_lock (a rough sketch of such hooks follows below).
Approved by: phk, markm (mentor)
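A hedged sketch of how such fork hooks are commonly wired up; the mutex, the
constructor registration, and the _sketch names are illustrative and not the
actual libc wiring:

/*
 * _malloc_prefork() is called before fork() and _malloc_postfork()
 * afterwards, so the child never inherits a malloc lock held by a
 * thread that does not exist in the child.
 */
#include <pthread.h>

static pthread_mutex_t malloc_lock_sketch = PTHREAD_MUTEX_INITIALIZER;

void
_malloc_prefork_sketch(void)
{
	/* Acquire the allocator lock so the heap is quiescent across fork. */
	pthread_mutex_lock(&malloc_lock_sketch);
}

void
_malloc_postfork_sketch(void)
{
	/* Release the lock in both the parent and the child. */
	pthread_mutex_unlock(&malloc_lock_sketch);
}

/* One possible registration, shown only for illustration. */
static void register_malloc_fork_hooks(void) __attribute__((constructor));

static void
register_malloc_fork_hooks(void)
{
	pthread_atfork(_malloc_prefork_sketch, _malloc_postfork_sketch,
	    _malloc_postfork_sketch);
}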
between a 32-bit integer and a radix-64 ASCII string. The l64a_r() function
is a NetBSD addition.
PR: 51209 (based on submission, but very different)
Reviewed by: bde, ru
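A quick round-trip example of the interface, assuming an XSI <stdlib.h> that
declares a64l() and l64a(); the value is arbitrary:

/*
 * l64a() converts a long to a radix-64 ASCII string and a64l() converts
 * it back.  l64a() returns a pointer to a static buffer, which is why
 * the reentrant l64a_r() variant exists.
 */
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	long value = 123456;
	char *encoded = l64a(value);	/* radix-64 ASCII string */
	long decoded = a64l(encoded);	/* back to the integer value */

	printf("%ld -> \"%s\" -> %ld\n", value, encoded, decoded);
	return (0);
}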
a tty device instead of the legacy minor number approach. This is known to
fix gnome-vfs' sftp module as well as kio_sftp and kdesu on -CURRENT.
Thanks to scottl for the snprintf() approach idea.
Reviewed by: phk
Tested by: pav, mich
Approved by: re (scottl)
surrounding the undef'ing of it. It does not seem necessary to guard the
#undef of a symbol that may not exist, since gcc does not complain about
#undef'ing a symbol that does not exist.
Spotted by: mingyanguo via ChinaUnix.net forum
Reviewed by: phk
really so.
"If the value of base is 16, the characters 0x or 0X may optionally
precede the sequence of letters and digits, following the sign if
present."
Found by: joerg
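A small illustration of the quoted behaviour, assuming it refers to the
strtol(3) family:

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	/* With base 16, an optional "0x"/"0X" prefix is accepted. */
	long a = strtol("0x1A", NULL, 16);	/* 26 */
	long b = strtol("1A", NULL, 16);	/* also 26 */
	long c = strtol("-0X10", NULL, 16);	/* -16: prefix follows the sign */

	printf("%ld %ld %ld\n", a, b, c);
	return (0);
}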
seed, the random number generator rand(3) still sucks and is unlikely to
be sufficient for crypto use. Correct what appears to be a cut-and-paste
error from the srandomdev() man page.
Submitted by: Ben Mesander
example. The externs haven't been needed in about 10 years, so
there's no reason to have them other than for hysterical raisins. And
the California Raisins haven't been around for a long time...
under the RETURN VALUES section so it is consistent with others.
Clean up the return value text for getenv(3) a little while I am here.
PR: docs/58033
MFC after: 3 days
cleanups, handling 'ls -l-', handling '--*'
Note that this at the same time backs out our v1.3 change,
"Don't print an error message if the bad option is '?'",
because it directly violates POSIX.
through a realloc-like function.
Make the malloc_active variable a local static to this new function.
Don't warn about recursion more than once per base call.
Constify malloc_func.
has been hit; this makes it cover more cases.
Call the message function directly rather than fiddle with flag-saving
when we find an unknown character in our options.
The 'A' flag should not trigger on legal out-of-memory conditions.
These files had tags after the copyright notice,
inside the comment block (incorrect, removed),
and outside the comment block (correct).
Approved by: rwatson (mentor)
This results in no functional change, aside from fixing a data
corruption bug on LP64 platforms. The code here could still use a
significant amount of cleanup.
PR: 56502
Submitted by: hrs (earlier version)
C++ ABI document at http://www.codesourcery.com/cxx-abi/abi.html#dso-dtor
The ABI was initially defined for ia64, but GCC3 and Intel compilers
have adopted it on other platforms.
This is the patch from PR bin/59552 with a number of changes by
me.
PR: bin/59552
Submitted by: Bradley T Hughes (bhughes at trolltech dot com)
initialization overhead, there's a problem in that we never call
imalloc() and thus malloc_init() for zero-sized allocations. As a
result, malloc(0) returns NULL when it's the first or only malloc in
the program. Any non-zero allocation will initialize the malloc code
with the side-effect that subsequent zero-sized allocations return a
non-NULL pointer. This is because the pointer we return for zero-
sized allocations is calculated from malloc_pageshift, which needs
to be initialized at runtime on ia64.
The result of the inconsistent behaviour described above is that
configure scripts failed the test for a GNU compatible malloc. This
resulted in a lot of broken ports.
Other, even simpler, solutions were possible as well:
1. Initialize malloc_pageshift with some non-zero value (say 13 for
8KB pages) and keep the runtime adjustment.
2. Stop using malloc_pageshift to calculate ZEROSIZEPTR.
Removal of the runtime adjustment was chosen because it makes ia64 the
same as any other platform. This is not to say that using a page size
obtained at runtime is bad per se; it's that its existence is currently
largely gratuitous, and the moment it causes problems is the moment to
get rid of it. Hence, it's not unthinkable
that this commit is (partially) reverted some time in the future when
we do have a good reason for it and a good way to achieve it.
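Loosely, the failure mode looks like the sketch below; the names are modeled
on phkmalloc, but the sentinel expression is invented for the example and is
not the actual ZEROSIZEPTR definition:

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

static unsigned malloc_pageshift;	/* stays 0 on ia64 until set at runtime */

/*
 * A zero-sized allocation returns a distinctive non-NULL sentinel derived
 * from malloc_pageshift; while malloc_pageshift is still 0, the sentinel
 * evaluates to NULL and malloc(0) appears to fail.
 */
#define ZEROSIZEPTR_SKETCH \
	((void *)(uintptr_t)((uintptr_t)malloc_pageshift << 4))

void *
malloc_sketch(size_t size)
{
	if (size == 0)
		return (ZEROSIZEPTR_SKETCH);
	/*
	 * Non-zero requests take the normal path, which initializes the
	 * allocator (and malloc_pageshift) as a side effect.
	 */
	return (malloc(size));
}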
Approved by: re@ (rwatson)
Reported by: kris (portmgr@) -- may the ports be with you
sorting strings with common prefixes by noting
when all the strings land in just one bin.
Testing shows significant speedups (on the order of
30%) on strings with common prefixes and no slowdowns on any
of my test cases.
Submitted by: Markus Bjartveit Kruger <markusk@pvv.ntnu.no>
PR: 58860
Approved by: gordon (mentor)
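A minimal illustrative sketch of the single-bin shortcut (this is not the
libc radixsort(3) code; the interface and names are invented, and the caller
is assumed to supply a scratch array the same length as the input):

/*
 * MSD radix sort over NUL-terminated strings.  When every string has the
 * same byte at the current depth, skip the scatter pass and simply advance
 * to the next byte; that is what makes long common prefixes cheap.
 */
#include <stddef.h>
#include <string.h>

static void
msd_sort_sketch(const unsigned char **a, const unsigned char **aux,
    size_t n, size_t depth)
{
	size_t count[256], start[257];
	size_t i, c, off, nonempty, lastbin;

	while (n > 1) {
		memset(count, 0, sizeof(count));
		for (i = 0; i < n; i++)
			count[a[i][depth]]++;

		/* Count the occupied bins; remember the last one seen. */
		nonempty = 0;
		lastbin = 0;
		for (c = 0; c < 256; c++) {
			if (count[c] != 0) {
				nonempty++;
				lastbin = c;
			}
		}
		if (nonempty == 1) {
			if (lastbin == 0)
				return;	/* every string already ended: done */
			depth++;	/* shared byte: nothing to scatter */
			continue;
		}

		/* General case: scatter into bins, then recurse per bin. */
		start[0] = 0;
		for (c = 0; c < 256; c++)
			start[c + 1] = start[c] + count[c];
		for (i = 0; i < n; i++)
			aux[start[a[i][depth]]++] = a[i];
		memcpy(a, aux, n * sizeof(*a));

		/* Bin 0 holds strings that ended here; they are in place. */
		off = count[0];
		for (c = 1; c < 256; c++) {
			if (count[c] > 1)
				msd_sort_sketch(a + off, aux + off,
				    count[c], depth + 1);
			off += count[c];
		}
		return;
	}
}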