bounds. [1]
Modify logic for utilizing the data segment, such that it is possible to
create huge allocations there.
Shrink the data segment when deallocating a chunk, if it is at the end of
the data segment.
Rename chunk_size to csize in huge_malloc(), in order to avoid masking a
static variable of the same name. [1]
Reported by: Paul Allen <nospam@ugcs.caltech.edu>
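A minimal sketch of the shrink-on-free idea (not the committed code;
the function and variable names are illustrative, and the real code
must hold the malloc lock around the sbrk() calls):

	#include <stdint.h>
	#include <stdlib.h>
	#include <unistd.h>

	static void
	chunk_dealloc_dss(void *chunk, size_t chunksize)
	{
		/* If the freed chunk ends at the current break, give the
		 * memory back by shrinking the data segment. */
		if ((char *)chunk + chunksize == (char *)sbrk(0))
			sbrk(-(intptr_t)chunksize);
	}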
subject: ranges of uid, ranges of gid, jail id
objects: ranges of uid, ranges of gid, filesystem,
         object is suid, object is sgid, object matches subject uid/gid,
         object type
We can also negate individual conditions. The ruleset language is
a superset of the previous language, so old rules should continue
to work.
These changes require a change to the API between libugidfw and the
mac_bsdextended module. Add a version number, so we can tell if
we're running mismatched versions.
Update man pages to reflect changes, add extra test cases to
test_ugidfw.c and add a shell script that checks that the
module seems to do what we expect.
Suggestions from: rwatson, trhodes
Reviewed by: trhodes
MFC after: 2 months
far more convenient for libkvm to work with because of the page table
block at the beginning. As a result, the MD code is smaller.
libkvm will automatically detect old vs mini dumps on i386 and amd64.
libkvm will handle i386 PAE and non-PAE modes. There is a PAE flag in
the i386 minidump header to signal the width of the entries in the
page table block.
Other convenient values are also present, such as kernbase and the direct
map addresses on amd64.
to pidfile_write happen, the pidfile will have nul characters prepended
due to the cached file descriptor offset...
Reviewed by: scottl
MFC after: 3 days
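A hedged sketch of the kind of remedy this implies (illustrative only;
pidfile_rewrite() is an assumed name, not the libutil interface):
write the pid at offset 0 and truncate, so the descriptor's cached
offset can never pad the file with NULs.

	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	static int
	pidfile_rewrite(int fd)
	{
		char buf[16];

		/* Write at offset 0 regardless of the cached offset. */
		snprintf(buf, sizeof(buf), "%d\n", (int)getpid());
		if (pwrite(fd, buf, strlen(buf), 0) != (ssize_t)strlen(buf))
			return (-1);
		return (ftruncate(fd, (off_t)strlen(buf)));
	}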
as well as add __sparc_utrap_install to FBSD_1.0; these are required by
the SCD libc 64 psABI and thus meant to be officially exported symbols.
- Remove the __fpu_* entries as well as the __sigtramp entry altogether as
these are internal to the libc FPU emulation and the signal trampoline
initialization in sigaction(2) respectively and thus don't need to be
externally visible.
- Add __sparc_utrap_setup to the list of FBSDprivate symbols as it's used
in src/lib/csu/sparc64/crt1.c to initialize the libc FPU emulation (I
think alternatively src/lib/csu/sparc64/crt1.c could be changed to use
__sparc_utrap_install instead, at the expense of increasing the size of
executables a bit).
- Add an entry for the vfork symbol to the FBSD_1 list and entries for its
associated symbols generated by the RSYSCALL() macro to the FBSDprivate
list. There's some magic in place that automatically generates code for
vfork() if there's no explicit MD code for it so it might make sense to
move these symbols from the MD symbol map files to a MI one.
The last two changes make the libc symbol versioning useable on sparc64.
Ok'ed by: deischen
races. This isn't currently necessary for libpthread or libthr, but
without it external threads libraries like the linuxthreads port are
not safe to use.
Reported by: ganbold@micom.mng.net
have to be calculated once per allocator operation.
Make nil const.
Update various comments.
Remove/avoid division where possible.
For the one division operation that remains in the critical path, add a
switch statement that has a case for each small size class, and do division
with a constant divisor in each case. This allows the compiler to generate
optimized code that does not use hardware division [1].
Obtained from: peter [1]
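A sketch of the constant-divisor trick described above (illustrative
names, not the actual malloc.c code): with a literal divisor in each
case, the compiler can strength-reduce the division to a
multiply-and-shift sequence.

	static unsigned
	region_index(unsigned diff, unsigned size)
	{
		unsigned regind;

		switch (size) {
		case 4:   regind = diff / 4;   break;
		case 8:   regind = diff / 8;   break;
		case 16:  regind = diff / 16;  break;
		case 32:  regind = diff / 32;  break;
		/* ... one case per small size class ... */
		default:  regind = diff / size; break;	/* hardware division */
		}
		return (regind);
	}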
this is used by some 3rd party applications when {e,f,g}cvt() are
not found. POSIX defines the xcvt() functions but says they are
deprecated in favor of sprintf(). We'll import these functions
from OpenBSD and remove __gdtoa() from the exported interfaces
when libc version is bumped.
* Avoid choosing an arena until it's certain that an arena is needed
for allocation.
* Convert division/multiplication to bitshifting where possible.
* Avoid accessing TLS variables in single-threaded code.
* Reduce the amount of pointer dereferencing.
* Move lock acquisition in critical paths to only protect the code
that requires synchronization, and completely remove locking where
possible.
uses them.
Now, we have res_nupdate and res_nmkupdate as well, but they are
still based on our old resolver for binary backward compatibility.
So, they don't provide new features such as TSIG support.
Reported by: pointyhat via kris
FBSDprivate locale symbols. These functions are needed by
libcompat.
Add _cleanup to the list of stdio FBSDprivate symbols. Some
third party applications use this. This will be removed and
replaced by fcloseall() once libc version is bumped.
Add _res to the list of resolv symbols.
Found by: portbuilder runs (thanks Kris!)
to make it work, a turnstile-like mechanism to support priority
propagation and other realtime scheduling options in the kernel
would need to be available to userland mutexes; for the moment, I just
want to keep libthr a simple and efficient thread library.
Discussed with: deischen, julian
Kernel changes:
Inform hwpmc of executable objects brought into the system by
kldload() and mmap(), and of their removal by kldunload() and
munmap(). A helper function linker_hwpmc_list_objects() has been
added to "sys/kern/kern_linker.c" and is used by hwpmc to retrieve
the list of currently loaded kernel modules.
The unused `MAPPINGCHANGE' event has been deprecated in favour
of separate `MAP_IN' and `MAP_OUT' events; this change reduces
space wastage in the log.
Bump hwpmc's ABI version to "2.0.00". Teach hwpmc(4) to
handle the map change callbacks.
Change the default per-cpu sample buffer size to hold
32 samples (up from 16).
Increment __FreeBSD_version.
libpmc(3) changes:
Update libpmc(3) to deal with the new events in the log file; bring
the pmclog(3) manual page in sync with the code.
pmcstat(8) changes:
Introduce new options to pmcstat(8): "-r" (root fs path), "-M"
(mapfile name), "-q"/"-v" (verbosity control). Option "-k" now
takes a kernel directory as its argument but will also work with
the older invocation syntax.
Rework string handling in pmcstat(8) to use an opaque type for
interned strings. Clean up ELF parsing code and add support for
tracking dynamic object mappings reported by a v2.0.00 hwpmc(4).
Report statistics at the end of a log conversion run depending
on the requested verbosity level.
Reviewed by: jhb, dds (kernel parts of an earlier patch)
Tested by: gallatin (earlier patch)
determine its value at run time according to other relevant values. This
avoids the creation of runs that are incompletely utilized, as long as
pagesize isn't too large (>32kB, given the current RUN_MIN_REGS_2POW
setting).
Increase the size of several structure bitfields in arena_run_t in order
to avoid integer overflow in the case that a run's header does not overlap
with the space that is usable as application allocation regions. Given
the tiny_min_2pow change, this fix has no additional impact unless
pagesize is >32kB.
Reported by: kris
internally used chunk to start at the beginning of the heap, rather
than at a chunk-aligned address. This reduces mapped memory somewhat
for 32-bit architectures.
Add the arena_run_link_t type and use it wherever a run object is only
used as a ring 'header'. This saves approximately 40 kB of memory per
arena.
Remove an obsolete (no longer used) code path from base_alloc(), which
supported the internal allocation of objects larger than the chunk
size.
Enhance chunk_dealloc() to cache chunk addresses for all deallocated
chunks. This has no impact for most programs, but has the potential
to reduce VM map fragmentation for programs that use huge
allocations.
documentation bug. We switched to page indexes some time around
FreeBSD 2.2. The actual 'len' limit is the maximum file size or what
will fit in your address space, whichever comes first. It should be
possible to make 1TB files on 32 bit systems, but of course address space
runs out long before then.
it's only a failure if there were actually attributes to be restored.
In particular, this fixes the problem where tar -xp always returned
a failure code on FreeBSD (which doesn't yet have all of the extended
attribute support).
Thanks to: Diego "Flameeyes" Petteno
This commit implements storing/reading POSIX.1e-style extended
attribute information in "pax" format archives. An outline of the
storage format is in the tar.5 manpage. The archive_read_extract()
function has code to restore those archives to disk for Linux; FreeBSD
implementation is forthcoming.
Many thanks to Jaakko Heinonen for finding flaws in earlier
proposals and doing the bulk of the coding in this work.
Since res_sendsigned(3) and its friends use MD5 functions, it is
hard to include them without having MD5 functions in libc. So,
res_sendsigned(3) is not merged into libc.
Since res_update(3) in BIND9 is not binary compatible with our
res_update(3), res_update(3) is left as is, except for some
necessary modifications.
res_update(3) and its friends are not an essential part of the
resolver. They are not defined in resolv.h but are defined
separately in res_update.h in BIND9. Further, they are not called from
our tree. So, I hide them from our resolv.h, but leave them only
for binary backward compatibility (perhaps no one calls them).
Since struct __res_state_ext is not exposed in BIND9, I hide it
from our resolv.h. And the global variable _res_ext is removed. This
breaks binary backward compatibility, but since it is not used
outside of our libc, I think it is safe.
Reviewed by: arch@ (no objection)
- <netipx> headers [1]
- IPX library (libipx)
- IPX support in ifconfig(8)
- IPXrouted(8)
- new MK_NCP option
New MK_NCP build option controls:
- <netncp> and <fs/nwfs> headers
- NCP library (libncp)
- ncplist(1) and ncplogin(1)
- mount_nwfs(8)
- ncp and nwfs kernel modules
User knobs: WITHOUT_IPX, WITHOUT_IPX_SUPPORT, WITHOUT_NCP.
[1] <netsmb/netbios.h> unconditionally uses <netipx> headers
so they are still installed. This needs to be dealt with.
that no linear searching is necessary if we resort to allocating from a
run that is known to be mostly full. There are pathological edge cases
that could have caused severely degraded performance, and this change
fixes that.
close enough to each other that reallocation would allocate a new region
of the same size. This improves the performance of repeated incremental
reallocations by up to three orders of magnitude. [1]
Fix arena_new() to properly constrain run size if a small chunk size was
specified during runtime configuration.
Suggested by: se [1]
allocation patterns that involve a relatively even mixture of many
different size classes.
Reduce the chunk size from 16 MB to 2 MB. Since chunks are now carved up
using an address-ordered first best fit policy, VM map fragmentation is
much less likely, which makes smaller chunks not as much of a risk. This
reduces the virtual memory size of most applications.
Remove redzones, since program buffer overruns are no longer as likely to
corrupt malloc data structures.
Remove the C MALLOC_OPTIONS flag, and add H and S.
used LIBTHREAD_1_0 as its version definition, but now needs
to define its symbols in the same namespace used by libc.
The compatibility hooks allow you to use libraries and
binaries built and linked to libpthread before libc was
built with symbol versioning. The shims can be removed if
libpthread is given a version bump.
Reviewed by: davidxu
disabled by default; add SYMVER_ENABLED=true to /etc/make.conf
to enable it. libc should get a version bump before this is
enabled by default.
Reviewed by: davidxu
is the default and the caller does not require a dedicated thread. A timer
needs a dedicated thread to maintain the overrun count correctly in
notification context. mqueue and aio can use the thread pool to do
notification concurrently; the thread pool has lifecycle control, and some
threads will exit if they have idled for a while.
Staticize two tables that are not visible in <resolv.h>
and which are also local in Solaris' libresolv.
Remove two functions that are referenced neither in libc nor
anywhere else I can find, are not visible in <resolv.h>, and
are also local in Solaris' libresolv.
but then sizes the containing data structure at run-time to make room
for per-cpu cache data. Modify libmemstat to separately allocate a
buffer to hold per-cpu cache data, sized based on the run-time mp_maxid
variable when using libkvm to access UMA data. This avoids reading
invalid cache data from beyond the end of the uma_zone data structure
on the stack, which can result in invalid statistics and/or reads from
invalid kernel addresses.
Foot target practice by: ps
MFC after: 3 days
return a KVM error rather than an out of memory error, so that the caller
reports the KVM error state. This replaces a misleading error message
with a more accurate although equally confusing one.
MFC after: 3 days
cpu mask before looking at the cache entries for the CPU. For systems
with sparse CPU id arrays, this skips otherwise uninitialized cache
structures.
MFC after: 3 days
Remove the block of code that tries to use delayed regions in LIFO order,
since from a policy perspective, it conflicts with LRU caching of newly
coalesced regions in arena_undelay(). There are numerous policy
alternatives, and it isn't readily obvious which (if any) is superior;
this change at least has the virtue of being consistent with policy.
Add %M{essage} extension which prints an errno value as the
corresponding string if possible or numerically otherwise.
It is not currently possible to do the syslog(3)-like %m extension
because errno would need to be captured on entry to the first
function in the printf family, so %M requires you to supply errno
as an argument.
Add %Q{uote} extension which will print a string in double quotes with
appropriate back-slash escapes (only) if necessary.
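A usage sketch, assuming the extensible printf is active (see the
__use_xprintf knob described elsewhere in this log):

	#include <errno.h>
	#include <stdio.h>

	extern int __use_xprintf;

	int
	main(void)
	{
		__use_xprintf = 1;
		errno = ENOENT;
		printf("open failed: %M\n", errno);	/* errno passed explicitly */
		printf("arg = %Q\n", "a\tb\"c");	/* quoted, escaped */
		return (0);
	}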
fit regions are available, use the delayed regions in LIFO order, in order
to increase locality of reference. We might expect this to cause delayed
regions to be removed from the delay ring buffer more often (since we're
now re-using more recently buffered regions), but numerous tests indicate
that the overall impact on memory usage tends to be good (reduced
fragmentation).
Re-work arena_frag_reg_alloc() so that when large free regions are
exhausted, it uses small regions in a way that favors contiguous allocation
of sequentially allocated small regions. Use arena_frag_reg_alloc() in
this capacity, rather than directly attempting over-fitting of small
requests when no large regions are available.
Remove the bin overfit statistic, since it is no longer relevant due to
the arena_frag_reg_alloc() changes.
Do not specify arena_frag_reg_alloc() as an inline function. It is too
large to benefit much from being inlined, and it is also called in two
places, only one of which is in the critical path (the other call bloated
arena_reg_alloc()).
Call arena_coalesce() for a region before caching it with
arena_mru_cache().
Add assertions that detect the attempted caching of adjacent free regions,
so that we notice this problem when it is first created, rather than in
arena_coalesce(), when it's too late to know how the problem arose.
Reported by: Hans Blancke
behaviour of returning EINVAL when ".." is passed as either argument
has been restored.
rmdir("..") now returns EINVAL instead of EPERM. Document the
previously undocumented behaviour of rmdir(".") returning EINVAL
as required by POSIX and SUSv3. Bump the man page change date.
undelete("..") now returns EINVAL instead of EPERM. Bump the man
page change date.
MFC after: 3 days
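The restored behaviour, as a check (sketch):

	#include <assert.h>
	#include <errno.h>
	#include <unistd.h>

	int
	main(void)
	{
		/* rmdir("..") now fails with EINVAL (previously EPERM). */
		assert(rmdir("..") == -1 && errno == EINVAL);
		return (0);
	}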
problems in cases where regions are faked up for the purposes of red-black
tree searches, since those faked region headers reside on the stack, rather
than in a malloc chunk.
allowing the error to be fatal.
Move a label in order to make sure errors in malloc(0) are handled properly.
Reported by: Alastair D'Silva, Saneto Takanori
routine fails or the first read fails), invoke the client close
routine immediately so the client can clean up. Also, don't store the
client pointers in this case, so that the client close routine can't
accidentally get called more than once.
A minor style fix to archive_read_open_fd.c while I'm here.
PR: 86453
Thanks to: Andrew Turner for reporting this and suggesting a fix.
archiver for Fourth Edition through Sixth Edition Unix; it was
replaced by tar in Seventh Edition. (First Edition through
Third Edition used "tap.")
Unfortunately, tp was not so very standard; there were a
few different variants. The code here attempts to support
what I believe were the most common variants.
tp support is not yet enabled by archive_read_support_format_all(),
as I'm not yet entirely comfortable with the detection
heuristics. People interested in experimenting can
add archive_read_support_format_tp() just after any calls
to archive_read_support_format_all() in bsdtar to see how
well this works.
TODO: tp format is roughly similar in structure to dump/restore
archive formats used by many systems. It should be possible
to generalize this code to handle many dump/restore variants.
Format detection heuristics are going to be rough, though.
Thanks to: Warren Toomey, whose very basic tp extraction programs
and documentation made this possible.
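Following the suggestion above, enabling the experimental reader in a
bsdtar-like consumer looks like this (sketch; error checking omitted):

	#include <archive.h>

	struct archive *
	new_reader(void)
	{
		struct archive *a = archive_read_new();

		archive_read_support_format_all(a);
		/* Experimental: tp detection is opt-in for now. */
		archive_read_support_format_tp(a);
		return (a);
	}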
there is never any need to recursively call the main allocation functions.
Remove recursive spinlock support, since it is no longer needed.
Allow chunks to be as small as the page size.
Correctly propagate OOM errors from arena_new().
from /etc/login.conf, or an unterminated string buffer could result.
Probably, login_times.c should reject excessively long time strings as
unparseable, rather than truncating, which might render an invalid
string valid.
Found with: Coverity Prevent (tm)
Reviewed by: csjp
MFC after: 3 days
list, which could cause problems for multi-threaded applications
using libmemstat to monitor UMA in more than one thread
simultaneously.
MFC after: 3 days
broken for non-threaded shared processes in that __tls_get_addr()
assumes the thread pointer is always initialized. This is not the
case. When arenas_map is referenced in choose_arena() and it is
defined as a thread-local variable, it will result in a SIGSEGV.
PR: ia64/91846 (describes the TLS/ia64 bug).
had been replied to, the reply was always delivered to the originator
synchronously.
With the introduction of netgraph item callbacks and a wait channel with
a mutex in ng_socket(4), we have fixed the problem of ngctl(8) returning
before the command has been processed by the target node. But ngctl(8)
can still return before the reply has arrived at its node.
To fix this:
- Introduce a new flag for netgraph(4) messages - NGM_HASREPLY.
This flag is or'ed into a message, like NGM_READONLY.
- In the netgraph userland library, if we have sent a message with
the NGM_HASREPLY flag, then select(2) until the reply comes.
- Mark appropriate generic commands with NGM_HASREPLY flag,
gathering them into one enum {}. Bump generic cookie.
* Add posix_memalign().
* Move calloc() from calloc.c to malloc.c. Add a calloc() implementation in
rtld-elf in order to make the loader happy (even though calloc() isn't
used in rtld-elf).
* Add _malloc_prefork() and _malloc_postfork(), and use them instead of
directly manipulating __malloc_lock.
Approved by: phk, markm (mentor)
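Usage of the new interface follows the POSIX definition:

	#include <stdlib.h>

	int
	example(void)
	{
		void *p;
		/* Alignment must be a power-of-two multiple of sizeof(void *);
		 * the result is an error code, not an errno setting. */
		int error = posix_memalign(&p, 4096, 8192);

		if (error == 0)
			free(p);
		return (error);
	}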
While we don't use the NC_BROADCAST value of nc_flag anywhere in the
RPC code, it is parseable by getnetconfigent(3) from /etc/netconfig.
o Clean up some "see below"'s that were cut and pasted from netconfig.h.
operation, the caller is blocked until target threads are really
suspended; also avoid suspending a thread when it is holding a
critical lock.
Fix a bug in _thr_ref_delete which tests a never-set flag.
commit broke the 2**24 cases where |x| > DBL_MAX/2. There are exponent
range problems not just for denormals (underflow) but for large values
(overflow). Doubles have more than enough exponent range to avoid the
problems, but I forgot to convert enough terms to double, so there was
an x+x term which was sometimes evaluated in float precision.
Unfortunately, this is a pessimization with some combinations of systems
and compilers (it makes no difference on Athlon XP's, but on Athlon64's
it gives a 5% pessimization with gcc-3.4 but not with gcc-3.3).
Explain the problem better in comments.
algorithm for the second step significantly to also get a perfectly
rounded result in round-to-nearest mode. The resulting optimization
is about 25% on Athlon64's and 30% on Athlon XP's (about 25 cycles
out of 100 on the former).
Using extra precision, we don't need to do anything special to avoid
large rounding errors in the third step (Newton's method), so we can
regroup terms to avoid a division, increase clarity, and increase
opportunities for parallelism. Rearrangement for parallelism loses
the increase in clarity. We end up with the same number of operations
but with a division reduced to a multiplication.
Using specifically double precision, there is enough extra precision
for the third step to give enough precision for perfect rounding to
float precision provided the previous steps are accurate to 16 bits.
(They were accurate to 12 bits, which was almost minimal for imperfect
rounding in the old version but would be more than enough for imperfect
rounding in this version (9 bits would be enough now).) I couldn't
find any significant time optimizations from optimizing the previous
steps, so I decided to optimize for accuracy instead. The second step
needed a division although a previous commit optimized it to use a
polynomial approximation for its main detail, and this division dominated
the time for the second step. Use the same Newton's method for the
second step as for the third step since this is insignificantly slower
than the division plus the polynomial (now that Newton's method only
needs 1 division), significantly more accurate, and simpler. Single
precision would be precise enough for the second step, but doesn't
have enough exponent range to handle denormals without the special
grouping of terms (as in previous versions) that requires another
division, so we use double precision for both the second and third
steps.
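For reference (a sketch of the textbook iteration, not the committed
code), one Newton step for t ~= cbrt(x) solves f(t) = t*t*t - x = 0:

	static double
	newton_cbrt_step(double t, double x)
	{
		/*
		 * t_new = t - (t*t*t - x)/(3*t*t) = (2.0*t + x/(t*t))/3.0;
		 * note the single division per step.
		 */
		return ((2.0 * t + x / (t * t)) / 3.0);
	}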
functions in the child after a fork() from a threaded process,
use __sys_setprocmask() rather than setprocmask() to keep our
signal handling sane. Without this fix, signals are essentially
ignored in said child and things such as protection violations
result in an endless busy loop.
Reviewed by: deischen
similar to the Solaris implementation. Repackage the krb5 GSS mechanism
as a plugin library for the new implementation. This also includes a
comprehensive set of manpages for the GSS-API functions with text mostly
taken from the RFC.
Reviewed by: Love Hörnquist Åstrand <lha@it.su.se>, ru (build system), des (openssh parts)
between a 32-bit integer and a radix-64 ASCII string. The l64a_r() function
is a NetBSD addition.
PR: 51209 (based on submission, but very different)
Reviewed by: bde, ru
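The interfaces follow the usual XSI definitions:

	#include <stdio.h>
	#include <stdlib.h>

	int
	main(void)
	{
		long v = a64l("abcd");	/* radix-64 ASCII -> 32-bit value */
		char *s = l64a(v);	/* and back; returns a static buffer */

		printf("%ld %s\n", v, s);
		return (0);
	}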
distributed non-large args, this saves about 14 of 134 cycles for
Athlon64s and about 5 of 199 cycles for AthlonXPs.
Moved the check for x == 0 inside the check for subnormals. With
gcc-3.4 on uniformly distributed non-large args, this saves another
5 cycles on Athlon64s and loses 1 cycle on AthlonXPs.
Use INSERT_WORDS() and not SET_HIGH_WORD() when converting the first
approximation from bits to double. With gcc-3.4 on uniformly distributed
non-large args, this saves another 4 cycles on both Athlon64s and
AthlonXPs.
Accessing doubles as 2 words may be an optimization on old CPUs, but on
current CPUs it tends to cause extra operations and pipeline stalls,
especially for writes, even when only 1 of the words needs to be accessed.
Removed an unused variable.
function approximation for the second step. The polynomial has degree
2 for cbrtf() and 4 for cbrt(). These degrees are minimal for the final
accuracy to be essentially the same as before (slightly smaller).
Adjust the rounding between steps 2 and 3 to match. Unfortunately,
for cbrt(), this breaks the claimed accuracy slightly although incorrect
rounding doesn't. Claim less accuracy since it's not worth pessimizing
the polynomial or relying on exhaustive testing to get insignificantly
more accuracy.
This saves about 30 cycles on Athlons (mainly by avoiding 2 divisions)
so it gives an overall optimization in the 10-25% range (a larger
percentage for float precision, especially in 32-bit mode, since other
overheads are more dominant for double precision, surprisingly more
in 32-bit mode).
- in preparing for the third approximation, actually make t larger in
magnitude than cbrt(x). After chopping, t must be incremented by 2
ulps to make it larger, not 1 ulp since chopping can reduce it by
almost 1 ulp and it might already be up to half a different-sized-ulp
smaller than cbrt(x). I have not found any cases where this is
essential, but the think-time error bound depends on it. The relative
smallness of the different-sized-ulp limited the bug. If there are
cases where this is essential, then the final error bound would be
5/6+epsilon instead of 4/6+epsilon ulps (still < 1).
- in preparing for the third approximation, round more carefully (but
still sloppily to avoid branches) so that the claimed error bound of
0.667 ulps is satisfied in all cases tested for cbrt() and remains
satisfied in all cases for cbrtf(). There isn't enough spare precision
for very sloppy rounding to work:
- in cbrt(), even with the inadequate increment, the actual error was
0.6685 in some cases, and correcting the increment increased this
a little. The fix uses sloppy rounding to 25 bits instead of very
sloppy rounding to 21 bits, and starts using uint64_t instead of 2
words for bit manipulation so that rounding more bits is not much
more costly.
- in cbrtf(), the 0.667 bound was already satisfied even with the
inadequate increment, but change the code to almost match cbrt()
anyway. There is not enough spare precision in the Newton
approximation to double the inadequate increment without exceeding
the 0.667 bound, and no spare precision to avoid this problem as
in cbrt(). The fix is to round using an increment of 2 smaller-ulps
before chopping so that an increment of 1 ulp is enough. In cbrt(),
we essentially do the same, but move the chop point so that the
increment of 1 is not needed.
Fixed comments to match code:
- in cbrt(), the second approximation is good to 25 bits, not quite 26 bits.
- in cbrt(), don't claim that the second approximation may be implemented
in single precision. Single precision cannot handle the full exponent
range without minor but pessimal changes to renormalize, and although
single precision is enough, 25 bit precision is now claimed and used.
Added comments about some of the magic for the error bound 4/6+epsilon.
I still don't understand why it is 4/6+ and not 6/6+ ulps.
Indent comments at the right of code more consistently.
to be compatible with symbol versioning support as implemented by
GNU libc and documented by http://people.redhat.com/~drepper/symbol-versioning
and LSB 3.0.
Implement dlvsym() function to allow lookups for a specific version of
a given symbol.
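A dlvsym() usage sketch ("libm.so" and "FBSD_1.0" are illustrative
values, not a statement about which libraries are versioned):

	#include <dlfcn.h>
	#include <stddef.h>

	void *
	lookup_versioned(const char *sym)
	{
		void *h = dlopen("libm.so", RTLD_LAZY);

		if (h == NULL)
			return (NULL);
		/* Bind to the FBSD_1.0 version of sym specifically. */
		return (dlvsym(h, sym, "FBSD_1.0"));
	}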
means:
o Remove Elf64_Quarter,
o Redefine Elf64_Half to be 16-bit,
o Redefine Elf64_Word to be 32-bit,
o Add Elf64_Xword and Elf64_Sxword for 64-bit entities,
o Use Elf_Size in MI code to abstract the difference between
Elf32_Word and Elf64_Word.
o Add Elf_Ssize as the signed counterpart of Elf_Size.
MFC after: 2 weeks
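In C terms, the revised 64-bit base types are:

	#include <stdint.h>

	typedef uint16_t Elf64_Half;	/* now 16-bit; Elf64_Quarter is gone */
	typedef uint32_t Elf64_Word;	/* now 32-bit */
	typedef uint64_t Elf64_Xword;	/* new 64-bit unsigned entity */
	typedef int64_t  Elf64_Sxword;	/* new 64-bit signed entity */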
on probationary terms: it may go away again if it transpires it is
a bad idea.
This extensible printf version will only be used if either
the environment variable USE_XPRINTF is defined, or
one of the extension functions is called, or
the global variable __use_xprintf is set greater than zero.
In all other cases our traditional printf implementation will
be used.
The extensible version is slower than the default printf, mostly
because less opportunity for combining I/O operations exists when
faced with extensions. The default printf, on the other hand,
is a bad case of spaghetti code.
The extension API has a GLIBC-compatible part and a FreeBSD version
of same. The FreeBSD version exists because the GLIBC version may
run afoul of our FILE * locking in multithreaded programs, and it
eliminates even more of the opportunities for combining I/O operations.
Include three demo extensions which can be enabled if desired: time
(%T), hexdump (%H) and strvis (%V).
%T can format time_t (%T), struct timeval (%lT) and struct timespec (%llT)
in one of two human readable duration formats:
"%.3llT" -> "20349.245"
"%#.3llT" -> "5h39m9.245"
%H will hexdump a sequence of bytes and takes a pointer and a length
argument. The width specifies number of bytes per line.
"%4H" -> "65 72 20 65"
"%+4H" -> "0000 65 72 20 65"
"%#4H" -> "65 72 20 65 |er e|"
"%+#4H" -> "0000 65 72 20 65 |er e|"
%V will dump a string in strvis format.
"%V" -> "Hello\tWor\377ld" (C-style)
"%0V" -> "Hello\011Wor\377ld" (octal)
"%+V" -> "Hello%09Wor%FFld" (http-style)
Tests, comments, bugreports etc are most welcome.
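A usage sketch for the demo extensions, assuming a registration helper
register_printf_render_std() that enables named standard renderers
(calling any registration function also switches the process to the
extensible printf; the exact interface may differ):

	#include <printf.h>
	#include <stdio.h>
	#include <time.h>

	int
	main(void)
	{
		struct timespec ts = { 20349, 245000000 };

		register_printf_render_std("HT");	/* assumed helper */
		printf("%#.3llT\n", &ts);		/* "5h39m9.245" */
		return (0);
	}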
allocate a memory block. sscanf calls __svfscanf, which in turn calls
fread; fread triggers mutex initialization, but the mutex is not
destroyed in sscanf, and this leads to a memory leak. To avoid the
memory leak and the performance issue, we create a non-MT-safe version
of fread, __fread, and instead let __svfscanf call __fread.
PR: threads/90392
Patch submitted by: dhartmei
MFC after: 7 days
the second step of approximating cbrt(x). It turns out to be neither
very magic nor very good. It is just the (2,2) Pade approximation
to 1/cbrt(r) at r = 1, arranged in a strange way to use fewer operations
at a cost of replacing 4 multiplications by 1 division, which is an
especially bad tradeoff on machines where some of the multiplications
can be done in parallel. A Remez rational approximation would give
at least 2 more bits of accuracy, but the (2,2) Pade approximation
already gives 6 more bits than needed. (Changed the comment which
essentially says that it gives 3 more bits.)
Lower order Pade approximations are not quite accurate enough for
double precision but are plenty for float precision. A lower order
Remez rational approximation might be enough for double precision too.
However, rational approximations inherently require an extra division,
and polynomial approximations work well for 1/cbrt(r) at r = 1, so I
plan to switch to using the latter. There are some technical
complications that tend to cost a division in another way.
This gives an optimization of between 9 and 22% on Athlons (largest
for cbrt() on amd64 -- from 205 to 159 cycles).
We extracted the sign bit and worked with |x|, and restored the sign
bit as the last step. We avoided branches to a fault by using accesses
to FP values as bits to clear and restore the sign bit. Avoiding
branches is usually good, but the bit access macros are not so good
(especially for setting FP values), and here they always caused pipeline
stalls on Athlons. Even using branches would be faster except on args
that give perfect branch misprediction, since only mispredicted branches
cause stalls, but it is possible to avoid touching the sign bit in FP
values at all (except to preserve it in conversions from bits to FP
not related to the sign bit). Do this. The results are identical
except in 2 of the 3 unsupported rounding modes, since all the
approximations use odd rational functions so they work right on strictly
negative values, and the special case of -0 doesn't use an approximation.
For some denormalized long double values, a bug in __hldtoa() (called
from *printf()'s %A format) results in a base 16 digit being rounded
up from 0xf to 0x10.
When this digit is subsequently converted to string format, an index
of 10 reaches past the end of the upper-case hex/char array, picking
up whatever the code segment happens to contain at that address.
This mostly seems to be some character from the upper half of the
byte range.
When using the %a format instead of %A, the first character past
the end of the lowercase hex/char table happens to be index 0 in
the uppercase hex/char table hextable and therefore the string
representation features a '0', which is supposedly correct.
This leads me to believe that the proper fix _may_ be as simple as
masking all but the lower four bits off after incrementing a hex-digit
in libc/gdtoa/_hdtoa.c:roundup(). I worry however that the upper
bit in 0x10 indicates a carry not carried.
Until das@ or bde@ finds time to visit this issue, extend the
hexdigit arrays with a 17th index containing '?' so that we get
invalid but consistent and printable output in both %a and %A formats
whenever this bug strikes.
This unmasks the bug in the %a format, so solving the real
issue may become both easier and more urgent.
Possibly related to: PR 85080
With help by: bde@
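The workaround amounts to one extra table entry (sketch; the table
names are illustrative):

	/* A digit incorrectly rounded up to 0x10 now prints as '?'
	 * instead of indexing past the end of the table. */
	static const char xdigs_lower[] = "0123456789abcdef?";
	static const char xdigs_upper[] = "0123456789ABCDEF?";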
<cbrt(x) in bits> ~= <x in bits>/3 + BIAS.
Keep the large comments only in the double version as usual.
Fixed some style bugs (mainly grammar and spelling errors in comments).
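A sketch of the idea for floats (the constant shown is the classic
fdlibm value; the committed code manipulates the bits in place):

	#include <stdint.h>
	#include <string.h>

	static float
	cbrtf_first_approx(float x)		/* assumes x > 0, normal */
	{
		uint32_t bits;

		memcpy(&bits, &x, sizeof(bits));
		/* <cbrt(x) in bits> ~= <x in bits>/3 + BIAS */
		bits = bits / 3 + 709958130;	/* ~(127 - 127/3 - 0.033) * 2**23 */
		memcpy(&x, &bits, sizeof(x));
		return (x);
	}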
It was because I forgot to translate the part of the double precision
algorithm that chops t so that t*t is exact. Now the maximum error
is the same as for double precision (almost exactly 2.0/3 ulps).
The maximum error was 3.56 ulps.
The bug was another translation error. The double precision version
has a comment saying "new cbrt to 23 bits, may be implemented in
precision". This means exactly what it says -- that the 23 bit second
approximation for the double precision cbrt() may be implemented in
single (i.e., float) precision. It doesn't mean what the translation
assumed -- that this approximation, when implemented in float precision,
is good enough for the final approximation in float precision.
First, float precision needs a 24 bit approximation. The "23 bit"
approximation is actually good to 24 bits on float precision args, but
only if it is evaluated in double precision. Second, the algorithm
requires a cleanup step to ensure its error bound.
In float precision, any reasonable algorithm works for the cleanup
step. Use the same algorithm as for double precision, although this
is much more than enough and is a significant pessimization, and don't
optimize or simplify anything using double precision to implement the
float case, so that the whole double precision algorithm can be verified
in float precision. A maximum error of 0.667 ulps is claimed for cbrt()
and the max for cbrtf() using the same algorithm shouldn't be different,
but the actual max for cbrtf() on amd64 is now 0.9834 ulps. (On i386
-O1 the max is 0.5006 (down from < 0.7) due to extra precision.)
The threshold for not being tiny was too small. Use the usual 2**-12
threshold. As for sinhf, use a different method (now the same as for
sinhf) to set the inexact flag for tiny nonzero x so that the larger
threshold works, although this method is imperfect. As for sinhf,
this change is not just an optimization, since the general code that
we fell into has accuracy problems even for tiny x. On amd64, avoiding
it fixes tanhf on 2*13495596 args with errors of between 1 and 1.3
ulps and thus reduces the total number of args with errors of >= 1 ulp
from 37533748 to 5271278; the maximum error is unchanged at 2.2 ulps.
The magic number 22 is log(DBL_MAX)/2 plus slop. This is bogus for
float precision. Use 9 (log(FLT_MAX)/2 plus less slop than for
double precision). Unlike for coshf and tanhf, this is just an
optimization, and MAX isn't misspelled EPSILON in the commit log.
I started testing with nonstandard rounding modes, and verified that
the chosen thresholds work for all modes modulo problems not related
to thresholds. The best thresholds are not very dependent on the mode,
at least for tanhf.