example from bsearch(3) too, so that we don't have to duplicate
the example code in both places.
PR: docs/176197
Reviewed by: stefanf
Approved by: remko (mentor), gjb (mentor)
MFC after: 1 week
- Remove unused #include.
- Do not cast away const.
- Use the canonical idiom to compare two numbers.
- Use proper type for sizes, i.e. size_t instead of int.
- Correct indentation.
- Simplify printf("\n") to puts("").
- Use return instead of exit() in main().
Submitted by: Christoph Mallon, christoph.mallon at gmx.de
Approved by: gjb (mentor)
Reviewed by: stefanf
MFC after: 1 week
qsort(3) can work together to sort an array of integers.
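A minimal sketch of the kind of example in question (illustrative only, not
the exact manpage text): sort an int array with qsort(3), then locate a key
with bsearch(3) using the same comparison function.

    #include <stdio.h>
    #include <stdlib.h>

    static int
    int_cmp(const void *a, const void *b)
    {
            int x = *(const int *)a, y = *(const int *)b;

            /* Canonical comparison idiom: avoids subtraction overflow. */
            return ((x > y) - (x < y));
    }

    int
    main(void)
    {
            int values[] = { 42, 7, 19, 3, 28 };
            size_t nmemb = sizeof(values) / sizeof(values[0]);
            int key = 19, *p;

            qsort(values, nmemb, sizeof(values[0]), int_cmp);
            p = bsearch(&key, values, nmemb, sizeof(values[0]), int_cmp);
            if (p != NULL)
                    printf("found %d\n", *p);
            return (0);
    }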
PR: docs/176197
Submitted by: Fernando, fapesteguia at opensistemas.com
Approved by: gjb (mentor)
MFC after: 1 week
srandomdev(). This doesn't actually work
with any modern C compiler:
In particular, both clang and modern gcc
versions silently elide any XOR operation
with 'junk'.
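For illustration, a hedged sketch of the 'junk' idiom being removed; reading
an uninitialized variable is undefined behaviour, so the compiler may drop the
XOR entirely (variable names are assumptions, not the exact libc code):

    /* Requires <sys/time.h>, <stdlib.h>, <unistd.h>. */
    struct timeval tv;
    unsigned long junk;             /* intentionally left uninitialized */

    gettimeofday(&tv, NULL);
    /* Modern clang and gcc are free to elide "^ junk". */
    srandom((getpid() << 16) ^ tv.tv_sec ^ tv.tv_usec ^ junk);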
Approved by: secteam
MFC after: 3 days
1) Don't restart the loop from the beginning of the environment array each
time; resume it from the last place we deactivated an entry instead.
2) Call __rebuild_environ() not on each iteration but once, at the end of the
whole loop (and, of course, only if something was actually changed).
MFC after: 1 week
to craft multiple environment entries with the same name, like this:
a=1
a=2
...
unsetenv("a") should remove them all to make later getenv("a") impossible.
Fix it to do so (this is GNU autoconf test #3 failure too).
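A small hedged test sketch of the expected behaviour (the duplicate entries
are crafted by pointing environ at a hand-built array; names are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    extern char **environ;

    int
    main(void)
    {
            static char *dup_env[] = { "a=1", "a=2", "a=3", NULL };

            environ = dup_env;
            unsetenv("a");
            /* Must print "(null)": every "a" entry has to be removed. */
            printf("%s\n", getenv("a") != NULL ? getenv("a") : "(null)");
            return (0);
    }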
PR: 172273
MFC after: 1 week
This fixes a race condition where another thread may fork() before CLOEXEC
is set, unintentionally passing the descriptor to the child process.
This commit only adds O_CLOEXEC flags to open() or openat() calls where no
fcntl(fd, F_SETFD, FD_CLOEXEC) follows. The separate fcntl() call still
leaves a race window so it should be fixed later.
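For clarity, a sketch of the two patterns (path and flags are illustrative):

    /* Requires <fcntl.h>. */
    /* Race-free: close-on-exec from the moment the descriptor exists. */
    int fd = open("/etc/resolv.conf", O_RDONLY | O_CLOEXEC);

    /* Racy: another thread may fork() and exec() between the two calls,
     * leaking the descriptor to the child. */
    int fd2 = open("/etc/resolv.conf", O_RDONLY);
    if (fd2 != -1)
            (void)fcntl(fd2, F_SETFD, FD_CLOEXEC);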
The ENOTDIR mapping was introduced in r235266 for kern/128933 based on
an interpretation of the somewhat ambiguous language in the POSIX realpath
specification. The interpretation is inconsistent with Solaris and Linux,
a regression from 9.0, and does not appear to be permitted by the
description of ENOTDIR:
     20 ENOTDIR   Not a directory.  A component of the specified pathname
                  existed, but it was not a directory, when a directory was
                  expected.
PR: standards/171577
MFC after: 3 days
http://austingroupbugs.net/view.php?id=385#c713
(in its Resolved state) recommends this way for the current standard (called
"earlier" in the text):
"However, earlier versions of this standard did not require this, and the
same example had to be written as:
     // buf was obtained by malloc(buflen)
     ret = write(fd, buf, buflen);
     if (ret < 0) {
             int save = errno;
             free(buf);
             errno = save;
             return ret;
     }
"
From the feedback I received on the previous commit, it seems that many people
prefer to avoid the mass code change needed for compliance with the current
standard and prefer to track the unpublished standard instead, which now
requires that free() itself save errno, not the code that uses it.
So I back out the "save errno across free()" part of the previous commit,
and will file a PR for changing free() instead.
2) Remove now unused serrno.
MFC after: 1 week
"The setting of errno after a successful call to a function is
unspecified unless the description of that function specifies that
errno shall not be modified."
However, free() in IEEE Std 1003.1-2008 does not mention its interaction
with errno, so it MAY modify errno after a successful call
(depending on the particular free() implementation, the OS, etc.).
So, save errno across free() calls to make code portable and
POSIX-conformant.
2) Remove unused serrno assignment.
MFC after: 1 week
     [ENOENT]    A component of file_name does not name an existing file or
                 file_name points to an empty string.
     [ENOTDIR]   A component of the path prefix is not a directory, or the
                 file_name argument contains at least one non-<slash> character
                 and ends with one or more trailing <slash> characters and the
                 last pathname component names an existing file that is neither
                 a directory nor a symbolic link to a directory.
Add checks for the listed conditions, and set errno accordingly.
Update the realpath(3) manpage to mention SUS behaviour. Remove the
requirement to include sys/param.h before stdlib.h.
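A hedged usage sketch of the new checks (paths are assumptions; /etc/motd is
a regular file on a typical system):

    /* Requires <errno.h>, <limits.h>, <stdio.h>, <stdlib.h>. */
    char resolved[PATH_MAX];

    /* Trailing slash on a non-directory must now fail with ENOTDIR. */
    if (realpath("/etc/motd/", resolved) == NULL && errno == ENOTDIR)
            fprintf(stderr, "trailing slash on a non-directory\n");
    /* An empty pathname must fail with ENOENT. */
    if (realpath("", resolved) == NULL && errno == ENOENT)
            fprintf(stderr, "empty pathname\n");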
PR: 128933
MFC after: 3 weeks
prior to 3.0.0 release) as contrib/jemalloc, and integrate it into libc.
The code being imported by this commit diverged from
lib/libc/stdlib/malloc.c in March 2010, which means that a portion of
the jemalloc 1.0.0 ChangeLog entries are relevant, as are the entries
for all subsequent releases.
The C11 folks reinvented the wheel by introducing an aligned version of
malloc(3) called aligned_alloc(3), rather than adopting posix_memalign(3).
Instead of returning the allocation by reference, it returns the address, just
like malloc(3).
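A short hedged comparison of the two interfaces (error handling trimmed):

    /* Requires <stdlib.h>. */
    void *p, *q;

    /* posix_memalign(3): result returned by reference, status as an int. */
    if (posix_memalign(&p, 64, 1024) != 0)
            p = NULL;

    /* aligned_alloc(3): returns the address directly, like malloc(3).
     * C11 requires the size to be a multiple of the alignment. */
    q = aligned_alloc(64, 1024);

    free(p);
    free(q);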
Reviewed by: jasone@
yet (see LLVM PR 9788), and warns about it, so rub it out for now. When
clang grows support for this attribute, I will revert this again.
MFC after: 1 week
__noreturn macro and modify the other exiting functions to use it.
The __noreturn macro, unlike __dead2, must be used BEFORE the function.
This is in line with the C and C++ specifications that place _Noreturn (C1x)
and [[noreturn]] (C++11) in front of the functions. As with __dead2, this
macro falls back to using the GCC attribute.
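A brief illustration of the placement difference (the declarations themselves
are hypothetical):

    #include <sys/cdefs.h>

    /* __dead2 expands to a GCC attribute and follows the declaration. */
    void    fatal(const char *) __dead2;

    /* __noreturn goes in front, matching _Noreturn / [[noreturn]]. */
    __noreturn void fatal2(const char *);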
Unfortunately, clang currently sets the same value for the C version macro
in C99 and C1x modes, so these functions are hidden by default. At some
point before 10.0, I need to go through the headers and clean up the C1x /
C++11 visibility.
Reviewed by: brooks (mentor)
load of _l suffixed versions of various standard library functions that use
the global locale, making them take an explicit locale parameter. Also
adds support for per-thread locales. This work was funded by the FreeBSD
Foundation.
Please test any code you have that uses the C standard locale functions!
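A minimal usage sketch under the new interface (the locale name is an
assumption):

    /* Requires <locale.h> (and, on FreeBSD, <xlocale.h>). */
    locale_t loc;
    double d;

    loc = newlocale(LC_NUMERIC_MASK, "de_DE.UTF-8", NULL);
    if (loc != (locale_t)0) {
            /* Parse using an explicit locale rather than the global one. */
            d = strtod_l("3,14", NULL, loc);
            freelocale(loc);
    }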
Reviewed by: das (gdtoa changes)
Approved by: dim (mentor)
The size passed to strlcat() must depend on the input length, not the
output length. Because the input and output buffers are equal in size,
the resulting binary does not change at all.
-g, by reverting r219139. The LLVM PR referenced in that revision was
fixed in the mean time, and we imported a clang snapshot soon
afterwards, so the temporary workaround of disabling clang's integrated
assembler is no longer needed.
In this particular case, using e.g. DEBUG_FLAGS=-g causes clang to
output certain directives into assembly that our version of GNU as
chokes on.
Reported by: dougb
Approved by: re (kib)
support for it. Note that while sparc64 also supports the static TLS
model and thus tls_model("initial-exec"), using the default model
turned out to yield slightly better buildstone performance.
kernel.debug (or possibly other files), when WITH_CTF is active.
This is caused by a bug in clang's integrated assembler, causing malloc
to sometimes hang during initialization in statically linked executables
that use threading, such as the copy of ctfmerge that is built during
the bootstrap stage of buildworld. The bug has been submitted upstream:
http://llvm.org/bugs/show_bug.cgi?id=9352
Note that you might have to rebuild and install libc first, to get your
kernel build to finish, because the ctfmerge binary built during
bootstrap is linked with your base system's copy of libc.a, which might
already contain a bad copy of malloc.o.
add a wrapper for it in libc and rework the code in libthr; the
system call can still return EINTR, and we keep this feature.
Discussed on: thread
Reviewed by: jilles
their implementations aren't in the same files. Introduce LIBC_ARCH
and use that in preference to MACHINE_CPUARCH. Tested by amd64 and
powerpc64 builds (thanks nathanw@)
atexit and __cxa_atexit handlers that are either installed by an unloaded
dso, or point to functions provided by the dso.
Use _rtld_addr_phdr to locate segment information from the address of
a private variable belonging to the dso, supplied by crtstuff.c. Provide the
utility function __elf_phdr_match_addr to match an address against the
dso's executable segment.
Call back into libthr from __cxa_finalize using the weak
__pthread_cxa_finalize symbol to remove any atfork handler whose
function points into the unloaded object.
The rtld needs a private __pthread_cxa_finalize symbol so as not to require
resolution of the weak undefined symbol at initialization time. This
cannot work, since rtld is relocated before sym_zero is set up.
Idea by: kan
Reviewed by: kan (previous version)
MFC after: 3 weeks
number of host CPUs and osreldate.
This eliminates the last sysctl(2) calls from the dynamically linked image
startup.
No objections from: kan
Tested by: marius (sparc64)
MFC after: 1 month
finished using it. This allows the mutex's allocated memory to be
freed.
This is in one sense a rather silly change, since at this point we're
less than a microsecond away from calling _exit; but fixing this
memory leak is likely to make life easier for anyone trying to
track down other memory leaks.
bottom of the manpages and order them consistently.
GNU groff doesn't care about the ordering, and doesn't even mention
CAVEATS and SECURITY CONSIDERATIONS as common sections, let alone where to
put them.
Found by: mdocml lint run
Reviewed by: ru
SUSv4 requires that the implementation return EINVAL if the supplied path is
NULL, and ENOENT if the path is an empty string [1].
Bring prototype in conformance with SUSv4, adding restrict keywords.
Allow the resolved path buffer pointer to be NULL, in which case realpath(3)
allocates storage with malloc().
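A short hedged sketch of the new allocation behaviour:

    /* Requires <stdio.h>, <stdlib.h>. */
    char *res;

    /* With resolved == NULL, realpath(3) malloc()s the result. */
    res = realpath("/var/run/../tmp", NULL);
    if (res != NULL) {
            printf("%s\n", res);
            free(res);
    }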
PR: kern/121897 [1]
MFC after: 2 weeks
Although groff_mdoc(7) gives another impression, this is the ordering
most widely used and also required by mdocml/mandoc.
Reviewed by: ru
Approved by: philip, ed (mentors)
the static TLS model, which is fundamentally different from the dynamic
TLS model. The consequence was data corruption. Limit the attribute to
i386 and amd64.
http://www.freebsd.org/cgi/query-pr.cgi?pr=bin/143350
Empty string test gone wrong.
Testing this requires that you have a locale that has the sign string
unset but has int_n_sign_posn set (the default locale falls through to
use "()" around negative numbers which is probably another bug).
I created that setup by hand, and indeed, without this fix negative
numbers are printed as positive (the code doesn't fall through to use
"-" as the default indicator).
Unfixed example in nl_NL.ISO8859-1 with lc->negative_sign set to empty
string:
strfmon(buf, sizeof(buf), "%-8i", -42.0);
==>
example2: 'EUR 42,00' 'Eu 42,00'
Fixed:
example2: 'EUR 42,00-' 'Eu 42,00-'
This file and suggested fix are identical in at least freebsd-8.
Backport might be appropriate but some expert on locales should
probably have a look at our defaulting to parentheses around negative
numbers when LC_* is the default. That doesn't look right and is not
what other OSes are doing.
PR: 143350
Submitted by: Corinna Vinschen
Reviewed by: bug reporter submitted, tested by me
* Fix a race in chunk_dealloc_dss().
* Check for allocation failure before zeroing memory in base_calloc().
Merge enhancements from a divergent version of jemalloc:
* Convert thread-specific caching from magazines to an algorithm that is
more tunable, and implement incremental GC.
* Add support for medium size classes, [4KiB..32KiB], 2KiB apart by
default.
* Add dirty page tracking for pages within active small/medium object
runs. This allows malloc to track precisely which pages are in active
use, which makes dirty page purging more effective.
* Base maximum dirty page count on proportion of active memory.
* Use optional zeroing in arena_chunk_alloc() to avoid needless zeroing
of chunks. This is useful in the context of DSS allocation, since a
long-lived application may commonly recycle chunks.
* Increase the default chunk size from 1MiB to 4MiB.
Remove feature:
* Remove the dynamic rebalancing code, since thread caching reduces its
utility.
of setenv(), putenv() and unsetenv() when dealing with corrupt entries in
environ. They now output a warning and complete their task without error.
MFC after: 1 week
instead of returning an error if a corrupt (not a "name=value" string) entry
in the environ array is detected when (re)-building the internal
environment. This should prevent applications or libraries from
experiencing issues arising from the expectation that these calls will
complete even with corrupt entries. The behavior is now as it was prior to
7.0.
Reviewed by: jilles
MFC after: 1 week
find a variable. Include a note that it must not cause the internal
environment to be generated since malloc() depends upon getenv(). To call
malloc() would create a circular dependency.
Recommended by: green
Approved by: jilles
MFC after: 1 week
**environ entries. This puts non-getenv(3) operations in line with
getenv(3) in that bad environ entries do not cause all operations to
fail. There is still some inconsistency in that getenv(3) in the
absence of any environment-modifying operation does not emit corrupt
environ entry warnings.
I also fixed another inconsistency in getenv(3) where updating the
global environ pointer would not be reflected in the return values.
It would have taken an intervening setenv(3)/putenv(3)/unsetenv(3) call
in order to see the change.
a large page size that is greater than malloc(3)'s default chunk size but
less than or equal to 4 MB, then increase the chunk size to match the large
page size.
Most often, using a chunk size that is less than the large page size is not
a problem. However, consider a long-running application that allocates and
frees significant amounts of memory. In particular, it frees enough memory
at times that some of that memory is munmap()ed. Up until the first
munmap(), a 1MB chunk size is just fine; it's not a problem for the virtual
memory system. Two adjacent 1MB chunks that are aligned on a 2MB boundary
will be promoted automatically to a superpage even though they were
allocated at different times. The trouble begins with the munmap():
releasing a 1MB chunk will trigger the demotion of the containing superpage,
leaving behind a half-used 2MB reservation. Now comes the real problem.
Unfortunately, when the application needs to allocate more memory, and it
recycles the previously munmap()ed address range, the implementation of
mmap() won't be able to reuse the reservation. Basically, the coalescing
rules in the virtual memory system don't allow this new range to combine
with its neighbor. The effect is that superpage promotion will not
recur for this range of addresses until both 1MB chunks are freed at some
point in the future.
Reviewed by: jasone
MFC after: 3 weeks
When I wrote the pseudo-terminal driver for the MPSAFE TTY code, Robert
Watson and I agreed that the best way to implement this would be to let
posix_openpt() create a pseudo-terminal with proper permissions in place
and let grantpt() and unlockpt() be no-ops.
This isn't valid behaviour when looking at the spec. Because I thought
it was an elegant solution, I filed a bug report at the Austin Group
about this. In their last teleconference, they agreed on this subject.
This means that future revisions of POSIX may allow grantpt() and
unlockpt() to be no-ops if an open() on /dev/ptmx (if the implementation
has such a device) and posix_openpt() already do the right thing.
I'd rather put this in the manpage, because simply mentioning we don't
comply with any standard makes it look worse than it is. Right now we
don't, but at least we took care of it.
Approved by: re (kib)
MFC after: 3 days
A more elegant way of obtaining the name of a character device by its file
descriptor on FreeBSD is to use the FIODGNAME ioctl. Because a valid
file descriptor implies the device node is visible in /dev, it will
always resolve to a valid device name.
I'm adding a more friendly wrapper for this ioctl, called fdevname(). It
is a lot easier to use than devname() and also has better error
handling. When a device name cannot be resolved, it will just return
NULL instead of a generated device name that makes no sense.
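A hedged usage sketch (the device path is an assumption):

    /* Requires <fcntl.h>, <stdio.h>, <stdlib.h>, <unistd.h>. */
    int fd;
    char *name;

    fd = open("/dev/ttyu0", O_RDWR);
    if (fd != -1) {
            name = fdevname(fd);            /* e.g. "ttyu0" */
            if (name != NULL)
                    printf("opened %s\n", name);
            close(fd);
    }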
Discussed with: kib
stating that in FreeBSD the atol() and atoll() functions affect
errno in the same way as strtol() and strtoll().
PR: docs/126487
Submitted by: edwin
Reviewed by: trhodes, gabor
MFC after: 1 week
This caching allows for completely lock-free allocation/deallocation in the
steady state, at the expense of likely increased memory use and
fragmentation.
Reduce the default number of arenas to 2*ncpus, since thread-specific
caching typically reduces arena contention.
Modify size class spacing to include ranges of 2^n-spaced, quantum-spaced,
cacheline-spaced, and subpage-spaced size classes. The advantages are:
fewer size classes, reduced false cacheline sharing, and reduced internal
fragmentation for allocations that are slightly over 512, 1024, etc.
Increase RUN_MAX_SMALL, in order to limit fragmentation for the
subpage-spaced size classes.
Add a size-->bin lookup table for small sizes to simplify translating sizes
to size classes. Include a hard-coded constant table that is used unless
custom size class spacing is specified at run time.
Add the ability to disable tiny size classes at compile time via
MALLOC_TINY.
The routines in grantpt.c have been moved to ptsname.c in the MPSAFE TTY
layer, because grantpt() is now effectively a no-op. I forgot to remove
the corresponding source file from libc.
The last half year I've been working on a replacement TTY layer for the
FreeBSD kernel. The new TTY layer was designed to improve the following:
- Improved driver model:
The old TTY layer has a driver model that is not abstract enough to
make it friendly to use. A good example is the output path, where the
device drivers directly access the output buffers. This means that an
in-kernel PPP implementation must always convert network buffers into
TTY buffers.
If a PPP implementation would be built on top of the new TTY layer
(still needs a hooks layer, though), it would allow the PPP
implementation to directly hand the data to the TTY driver.
- Improved hotplugging:
With the old TTY layer, it isn't entirely safe to remove TTYs from
the system. The new implementation has a two-step destruction design,
where the driver first abandons the TTY. After all threads have left
the TTY, the TTY layer calls a routine in the driver, which can be
used to free resources (unit numbers, etc).
The pts(4) driver also implements this feature, which means
posix_openpt() will now return PTYs that are created on the fly.
- Improved performance:
One of the major improvements is the per-TTY mutex, which is expected
to improve scalability when compared to the old Giant locking.
Another change is the unbuffered copying to userspace, which is both
used on TTY device nodes and PTY masters.
Upgrading should be quite straightforward. Unlike previous versions,
existing kernel configuration files do not need to be changed, except
when they reference device drivers that are listed in UPDATING.
Obtained from: //depot/projects/mpsafetty/...
Approved by: philip (ex-mentor)
Discussed: on the lists, at BSDCan, at the DevSummit
Sponsored by: Snow B.V., the Netherlands
dcons(4) fixed by: kan
detect whether the integer division table is large enough to handle the
divisor. Before this change, the last two table elements were never used,
thus causing the slow path to be used for those divisors.
environ[0] to make it more obvious that environ is not NULL before environ[0]
is tested. Although I believe the previous code worked, this change
improves code maintainability.
Reviewed by: ache
MFC after: 3 days
the first value (environ[0]) to NULL. This is in addition to the
current detection of environ being replaced, which includes being set to
NULL. Without this fix, the environment is not truly wiped, but appears
to be by getenv() until an *env() call is made to alter the environment.
This change is necessary to support those applications that use this
method for clearing environ such as Dovecot and Postfix. Applications
such as Sendmail and the base system's env replace environ (already
detected). While neither of these methods is defined by SUSv3, it is
best to support them for historical reasons and in the absence of a clean,
defined method.
Add extra unit tests for clearing environ using four different methods
(method 2 is sketched after the list):
1. Set environ to NULL pointer.
2. Set environ[0] to NULL pointer.
3. Set environ to calloc()'d NULL-terminated array.
4. Set environ to static NULL-terminated array.
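A hedged sketch of method 2, the case this commit starts detecting (variable
names are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    extern char **environ;

    int
    main(void)
    {
            setenv("FOO", "bar", 1);
            environ[0] = NULL;              /* wipe environ in place */
            setenv("BAZ", "qux", 1);        /* later *env() call */
            /* With this fix, "FOO" stays gone after the wipe. */
            printf("%s\n", getenv("FOO") != NULL ? getenv("FOO") : "(null)");
            return (0);
    }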
Noticed by: Timo Sirainen
MFC after: 3 days
the chunk map instead of red-black trees where possible. Remove the
red-black trees and node objects that are obsoleted by this change. The
net result is a ~1-2% memory savings, and a substantial allocation speed
improvement.
statement. Add the one from the current NetBSD version.
- Also bump the date to reflect the content changes I made in the previous
  revision.
Approved by: imp
MFC after: 3 days
The __use_pts() routine was probably once used by libutil to determine
if we are using BSD or UNIX98 style PTY device names. It doesn't seem to
be used outside grantpt.c, which means we can make it static and remove
it from the Symbol.map.
Reviewed by: cognet, kib
Approved by: philip (mentor)
This substantially improves worst case allocation performance, since
O(lg n) tree search can be used instead of O(n) tree iteration.
Use rb_wrap() instead of directly calling rb_*() macros.
Add rb_foreach_next() and rb_foreach_reverse_prev(), which make it
possible to re-synchronize tree iteration after the tree has been
modified.
Rename rb_tree_new() to rb_new().
color bit in the least significant bit of the right child pointer, in
order to reduce red-black tree linkage overhead by ~2X as compared to
sys/tree.h.
Use the new red-black tree implementation in malloc, which drops
memory usage by ~0.5% or ~1%, for 32- and 64-bit systems, respectively.
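A hedged sketch of the pointer-tagging idea (field and macro names are
illustrative, not the actual rb.h code):

    #include <stdint.h>

    /* Nodes are at least pointer-aligned, so the low bit of the right-child
     * pointer is free to store the node's colour. */
    struct node {
            struct node     *left;
            uintptr_t        right_red;     /* right pointer | colour bit */
    };

    #define NODE_RIGHT(n)   ((struct node *)((n)->right_red & ~(uintptr_t)1))
    #define NODE_RED(n)     ((int)((n)->right_red & 1))
    #define NODE_SET_RIGHT(n, r)                                            \
            ((n)->right_red = (uintptr_t)(r) | ((n)->right_red & 1))
    #define NODE_SET_RED(n, c)                                              \
            ((n)->right_red = ((n)->right_red & ~(uintptr_t)1) | ((c) != 0))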
There were no checks for left and right precisions at all, and
the check for field width had an integer overflow bug.
Reported by: Maksymilian Arciemowicz
Security: http://securityreason.com/achievement_securityalert/53
Submitted by: Maxim Dounin <mdounin@mdounin.ru>
MFC after: 3 days
This reduces the size of a statically-linked binary by approximately 100KB
in a trivial "return (0)" test application. readelf -S was used to verify
that the .text section was reduced and that using strlen() saved a few
more bytes over using sizeof(). Since the section of code is only called
when environ is corrupt (program bug), I went with fewer bytes over fewer
cycles.
I made minor edits to the submitted patch to make the output resemble
warnx().
Submitted by: kib bz
Approved by: wes (mentor)
MFC after: 5 days
reallocation, when junk filling is enabled. Junk filling must occur
prior to shrinking, since any deallocated trailing pages are immediately
available for use by other threads.
Reported by: Mats Palmgren <mats.palmgren@bredband.net>
allocation patterns, number of CPUs, and MALLOC_OPTIONS settings indicate
that lazy deallocation has the potential to worsen throughput dramatically.
Performance degradation occurs when multiple threads try to clear the lazy
free cache simultaneously. Various experiments to avoid this bottleneck
failed to completely solve this problem, while adding yet more complexity.
arena_dalloc_lazy_hard() was split out of arena_dalloc_lazy() in revision
1.162.
Reduce thundering herd problems in lazy deallocation by randomly varying
how many probes a thread does before taking the slow path.
assumptions about whether bits are set at various times. This makes
adding other flags safe.
Reorganize functions in order to inline i{m,c,p,s,re}alloc(). This
allows the entire fast-path call chains for malloc() and free() to be
inlined. [1]
Suggested by: [1] Stuart Parmenter <stuart@mozilla.com>
threshold, according to the 'F' MALLOC_OPTIONS flag. This obsoletes the
'H' flag.
Try to realloc() large objects in place. This substantially speeds up
incremental large reallocations in the common case.
Fix a bug in arena_ralloc() that caused relocation of sub-page objects
even if the old and new sizes were in the same size class.
Maintain trees of runs and simplify the per-chunk page map. This allows
logarithmic-time searching for sufficiently large runs in
arena_run_alloc(), whereas the previous algorithm required linear time
in the worst case.
Break various large functions into smaller sub-functions, and inline
only the functions that are in the fast path for small object
allocation/deallocation.
Remove an unnecessary check in base_pages_alloc_mmap().
Avoid integer division in choose_arena() for the NO_TLS case on
single-CPU systems.
into slowsort for some sequences because different parts of the
code used 'r' to store two different things, one of which was
signed. Clean things up by splitting 'r' into two variables, and
using a more meaningful name.
default. This has the disadvantage of rendering the datasize resource
limit irrelevant, but without this change, legitimate uses of more
memory than will fit in the data segment are thwarted by default.
Fix chunk_alloc_mmap() to work correctly if initial mapping is not
chunk-aligned and mapping extension fails.
Clean up DSS-related locking and protect all pertinent variables with
dss_mtx (remove dss_chunks_mtx). This fixes race conditions that could
cause chunk leaks.
Reported by: [1] kris
This is a long-standing bug, but until recent changes it was difficult
to trigger, and even then its impact was non-catastrophic, with the
exception of revision 1.157.
Optimize chunk_alloc_mmap() to avoid the need for unmapping pages in the
common case. Thanks go to Kris Kennaway for a patch that inspired this
change.
Do not maintain a record of previously mmap'ed chunk address ranges.
The original intent was to avoid the extra system call overhead in
chunk_alloc_mmap(), which is no longer a concern. This also allows some
simplifications for the tree of unused DSS chunks.
Introduce huge_mtx and dss_chunks_mtx to replace chunks_mtx. There was
no compelling reason to use the same mutex for these disjoint purposes.
Avoid memset() for huge allocations when possible.
Maintain two trees instead of one for tracking unused DSS address
ranges. This allows scalable allocation of multi-chunk huge objects in
the DSS. Previously, multi-chunk huge allocation requests failed if the
DSS could not be extended.
order to support re-use of multi-chunk unused regions within the DSS for
huge allocations. This generalization is important for correct operation
when mmap-based allocation is disabled.
Avoid zeroing re-used memory in the DSS unless it really needs to be
zeroed.
memory is acquired from the system via sbrk(2) and/or mmap(2). By default,
use sbrk(2) only, in order to support traditional use of resource limits.
Additionally, when both options are enabled, prefer the data segment to
anonymous mappings, in order to coexist better with large file mappings
in applications on 32-bit platforms. This change has the potential to
increase memory fragmentation due to the linear nature of the data
segment, but from a performance perspective this is mitigated by the use
of madvise(2). [1]
Add the ability to interpret integer prefixes in MALLOC_OPTIONS
processing. For example, MALLOC_OPTIONS=lllllllll can now be specified as
MALLOC_OPTIONS=9l.
Reported by: [1] rwatson
Design review: [1] alc, peter, rwatson
- Use PTY* for all pty(4) related constants.
- Use PTMX* for all pts(4) related constants.
- Consistently use _PATH_DEV PTMX rather than "/dev/ptmx".
- Revert 1.7 and properly fix it by using the correct prefix string for
pts(4) masters.
MFC after: 3 days
calculating run sizes. Use of the floating point unit was a potential
pessimization to context switching for applications that do not otherwise
use floating point math. [1]
Reformat cpp macro-related comments to improve consistency.
Submitted by: das
deallocation and dynamic load balancing via the MALLOC_LAZY_FREE and
MALLOC_BALANCE knobs. This is a non-functional change, since these
features are still enabled when possible.
Clean up a few things that more pedantic compiler settings would cause
complaints over.
adds two new directories in msun: ld80 and ld128. These are for
long double functions specific to the 80-bit long double format
used on x86-derived architectures, and the 128-bit format used on
sparc64, respectively.
contention. The intent is to dynamically adjust to load imbalances, which
can cause severe contention.
Use pthread mutexes where possible instead of libc "spinlocks" (they aren't
actually spin locks). Conceptually, this change is meant only to support
the dynamic load balancing code by enabling the use of spin locks, but it
has the added apparent benefit of substantially improving performance due to
reduced context switches when there is moderate arena lock contention.
Proper tuning parameter configuration for this change is a finicky business,
and it is very much machine-dependent. One seemingly promising solution
would be to run a tuning program during operating system installation that
computes appropriate settings for load balancing. (The pthreads adaptive
spin locks should probably be similarly tuned.)
vector of slots for lazily freed objects. For each deallocation, before
doing the hard work of locking the arena and deallocating, try several times
to randomly insert the object into the vector using atomic operations.
This approach is particularly effective at reducing contention for
multi-threaded applications that use the producer-consumer model, wherein
one producer thread allocates objects, then multiple consumer threads
deallocate those objects.
allocations. [1]
Fix calculation of the number of arenas when 'n' is specified via
MALLOC_OPTIONS.
Clean up various style inconsistencies.
Obtained from: [1] NetBSD
to an int to remove the warning from using a size_t variable on 64-bit
platforms.
Submitted by: Xin LI <delphij@FreeBSD.org>
Approved by: wes
Approved by: re (kensmith)
inactive variables should cause a rebuild of environ; otherwise, exec()'d
processes will be missing a variable in environ that has been unset and then
set again.
Submitted by: Taku Yamamoto <taku@tackymt.homeip.net>
Reviewed by: ache
Approved by: wes (mentor)
Approved by: re (kensmith)
or replace (i.e., zdump) the environment after a call to setenv(), putenv()
or unsetenv() has been made, a few changes were made.
- getenv() will return the value from the new environ array.
- setenv() was split into two functions: __setenv() which is most of the
previous setenv() without checks on the name and setenv() which
contains the checks before calling __setenv().
- setenv(), putenv() and unsetenv() will unset all previous values and
call __setenv() on all entries in the new environ array which in turn
adds them to the end of the envVars array. Calling __setenv() instead
of setenv() is done to avoid the temporary replacement of the '=' in a
string with a NUL byte. Some strings may be read-only data.
Added more regression checks for clearing the environment array.
Replaced gettimeofday() with getrusage() in timing regression check for
better accuracy.
Fixed an off-by-one bug in __remove_putenv() in the use of memmove(). This
went unnoticed due to the allocation of double the number of environ
entries when building envVars.
Fixed a few spelling mistakes in the comments.
Reviewed by: ache
Approved by: wes
Approved by: re (kensmith)
setenv(3) by tracking the size of the memory allocated instead of using
strlen() on the current value.
Convert all calls to POSIX from historic BSD API:
- unsetenv returns an int.
- putenv takes a char * instead of const char *.
- putenv no longer makes a copy of the input string.
- errno is set appropriately for POSIX. Exceptions involve a bad environ
variable and the internal initialization code; these both set errno to
EFAULT.
Several patches to base utilities to handle the POSIX changes from
Andrey Chernov's previous commit. A few I re-wrote to use setenv()
instead of putenv().
New regression module for tools/regression/environ to test these
functions. It also can be used to test the performance.
Bump __FreeBSD_version to 700050 due to API change.
PR: kern/99826
Approved by: wes
Approved by: re (kensmith)
Not because I admit they are technically wrong, and not because of bug
reports (I received none), but because I surprisingly met such strong
opposition and resistance that I lost any desire to continue.
Anyone who is interested in POSIX can dig out what changed, and how,
from the cvs diffs.
(also IEEE Std 1003.1-2001)
The spec explicitly says that altering the passed string
should change the environment, i.e. putenv() puts its argument directly
into the environment (unlike setenv(), which just copies it there).
This means that putenv() can't be implemented via setenv()
(as we had before) at all. The putenv() value lives (and may be modified)
until the next putenv() or setenv() call.
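A small hedged sketch of the required semantics (names are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
            static char entry[] = "MYVAR=one";

            putenv(entry);
            printf("%s\n", getenv("MYVAR"));        /* "one" */

            /* The string itself is part of the environment, so modifying
             * it changes what getenv() returns. */
            entry[6] = 't'; entry[7] = 'w'; entry[8] = 'o';
            printf("%s\n", getenv("MYVAR"));        /* now "two" */
            return (0);
    }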
compatibility with the different environment conventions" (man page).
With the standards in place, those conventions no longer differ, and
IEEE Std 1003.1-2001 says that
"The values that the environment variables may be assigned are not
restricted except that they are considered to end with a null byte"
Issue 6 (also IEEE Std 1003.1-2001) in the following areas:
args, return values, errors.
putenv() still needs rewriting because the spec explicitly says that
altering the passed string later should change the environment (currently we
copy the string, so we can't provide that).
avoid downcasting issues. In particular, this change fixes
posix_memalign(3) for alignments greater than 2^31 on LP64 systems.
Make sure that NDEBUG is always set to be compatible with MALLOC_DEBUG. [1]
Reported by: [1] Lee Hyo geol <hyogeollee@gmail.com>
trees that track all non-full runs for each bin. Use the red-black
trees to be able to guarantee that each new allocation is placed in the
lowest address available in any non-full run. This change completes the
transition to allocating from low addresses in order to reduce the
retention of sparsely used chunks.
If the run in current use by a bin becomes empty, deallocate the run
rather than retaining it for later use. The previous behavior had the
tendency to spread empty runs across multiple chunks, thus preventing
the release of chunks that were completely unused.
Generalize base_chunk_alloc() (and rename it to base_pages_alloc()) to
handle allocation sizes larger than the chunk size, so that it is
possible to support chunk sizes that are smaller than an arena object.
Reduce the minimum chunk size from 64kB to 8kB.
Optimize tracking of addresses for deleted chunks.
Fix a statistics bug for huge allocations.
rounding and overflow. Carefully document what the various overflow
tests actually detect.
The bugs mostly canceled out, such that the worst possible failure
cases resulted in non-fatal over-allocations.
than binary buddies, the alignment guarantees are weaker, which requires
a more complex aligned allocation algorithm, similar to that used for
alignment greater than the chunk size.
Reported by: matteo
chunks. This allows runs to be any multiple of the page size. The
primary advantage is that large objects are no longer constrained to be
2^n pages, which can dramatically decrease internal fragmentation for
large objects. This also allows the sizes for runs that back small
objects to be more finely tuned.
Free runs are searched for linearly using the chunk page map (with the
help of some heuristic optimizations). This changes the allocation
policy from "first best fit" to "first fit". A prototype red-black tree
implementation for tracking free runs that implemented "first best fit"
did not cause a measurable speed or memory usage difference for
realistic chunk sizes (though of course it is possible to construct
benchmarks that favor one allocation policy over another).
Refine the handling of fullness constraints for small runs to be more
tunable.
Restructure the per chunk page map to contain only two fields per entry,
rather than four. Also, increase each entry from 4 to 8 bytes, since it
allows for 32-bit integers, without increasing the number of chunk
header pages.
Relax the maximum chunk size constraint. This is of no practical
interest; it is merely fallout from the chunk page map restructuring.
Revamp statistics gathering and reporting to be faster, clearer and more
informative. Statistics gathering is fast enough now to have little
to no impact on application speed, but it still requires approximately
two extra pages of memory per arena (per process). This memory overhead
may be acceptable for most systems, but we still need to leave
statistics gathering disabled by default in RELENG branches.
Rename NO_MALLOC_EXTRAS to MALLOC_PRODUCTION in order to make its intent
clearer (i.e. it should be defined in RELENG branches).
avoid substantial potential bloat for static binaries that do not
otherwise use any printf(3)-family functions. [1]
Rearrange arena_run_t so that the region bitmask can be minimally sized
according to constraints related to each bin's size class. Previously,
the region bitmask was the same size for all run headers, which wasted
a measurable amount of memory.
Rather than making runs for small objects as large as possible, make
runs as small as possible such that header overhead stays below a
certain bound. There are two exceptions that override the header
overhead bound:
1) If the bound is impossible to honor, it is relaxed on a
per-size-class basis. Since there is one bit of header
overhead per object (plus a constant), it is impossible to
achieve a header overhead less than or equal to 1/(# of bits
per object). For the current setting of maximum 0.5% header
overhead, this relaxation comes into play for {2, 4, 8,
16}-byte objects, for which header overhead is (on 64-bit
systems) {7.1, 4.3, 2.2, 1.2}%, respectively.
2) There is still a cap on small run size, still set to 64kB.
This comes into play for {1024, 2048}-byte objects, for which
header overhead is {1.6, 3.1}%, respectively.
In practice, this reduces the run sizes, which makes worst case
low-water memory usage due to fragmentation less bad. It also reduces
worst case high-water run fragmentation due to non-full runs, but this
is only a constant improvement (most important to small short-lived
processes).
Reduce the default chunk size from 2MB to 1MB. Benchmarks indicate that
the external fragmentation reduction makes 1MB the new sweet spot (as
small as possible without adversely affecting performance).
Reported by: [1] kientzle
This has no impact unless USE_BRK is defined (32-bit platforms), in
which case user allocations are allocated via mmap() if at all possible,
in order to avoid the possibility of unreclaimable chunks in the data
segment.
Fix an obscure bug in base_alloc() that could have allowed undefined
behavior if an application were to use sbrk() in conjunction with a
USE_BRK-enabled malloc.
chunk per arena, rather than immediately deallocating all unused chunks.
This fixes a potential performance issue when allocating/deallocating
an object of size (4kB..1MB] in a loop.
Reported by: davidxu
don't be greedy about the GNU "::" extension when the argument is separated by
whitespace and POSIX_CORRECTLY is set. From the POSIX point of view this is an
unclear situation, so the minimal assumption looks right.