exponent bits of the reduced result, construct 2**k (hopefully in
parallel with the construction of the reduced result) and multiply by
it. This tends to be much faster if the construction of 2**k is
actually in parallel, and might be faster even with no parallelism
since adjustment of the exponent requires a read-modify-write at an
unfortunate time for pipelines.
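A rough sketch of the construction meant here (a hypothetical helper, not
the committed code), assuming k stays in the range where 2**k is a normal
double:

    #include <stdint.h>
    #include <string.h>

    /*
     * Hypothetical sketch: construct 2.0**k by writing the biased exponent
     * k + 1023 into the exponent field of an IEEE 754 double.  Valid only
     * for -1022 <= k <= 1023, i.e. while 2**k is a normal number.
     */
    static double
    exp2i(int k)
    {
        uint64_t bits = (uint64_t)(k + 1023) << 52;
        double twok;

        memcpy(&twok, &bits, sizeof(twok));
        return (twok);
    }

The reduced result is then scaled by one multiplication by exp2i(k) instead
of having its exponent field patched in place.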
In some cases involving exp2* on amd64 (A64), this change saves about
40 cycles or 30%. I think it is inherently only about 12 cycles faster
in these cases and the rest of the speedup is from partly-accidentally
avoiding compiler pessimizations (the construction of 2**k is now
manually scheduled for good results, and -O2 doesn't always mess this
up). In most cases on amd64 (A64) and i386 (A64) the speedup is about
20 cycles. The worst case that I found is expf on ia64 where this
change is a pessimization of about 10 cycles or 5%. The manual
scheduling for plain exp[f] is harder and not as tuned.
Details specific to expm1*:
- the saving is closer to 12 cycles than to 40 for expm1* on i386 (A64).
For some reason it is much larger for negative args.
- also convert to __FBSDID().
The ld128/s_exp2l.c part of this change has not been tested.
that is specialized for float precision. The new polynomial has degree
5 instead of 11, and a maximum error of 2**-27.74 ulps instead
of 2**-30.64. This doesn't affect the final error significantly; the
maximum error was and is about 0.9101 ulps on amd64 -O1 and the number
of cases with an error of > 0.5 ulps is actually reduced by epsilon
despite the larger error in the polynomial.
This is about 15% faster on amd64 (A64), i386 (A64) and ia64. The asm
version is still used instead of this on i386 since it is faster and
more accurate.
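For illustration only, evaluating such a degree-5 kernel polynomial in float
precision is a short Horner chain; p[0]..p[4] are placeholders here, not the
committed minimax coefficients:

    /*
     * Hypothetical sketch: Horner evaluation of a degree-5 polynomial in
     * float precision.  p[0]..p[4] stand in for the minimax coefficients,
     * which are not reproduced here.
     */
    static float
    poly5(float x, const float p[5])
    {
        return (x * (p[0] + x * (p[1] + x *
            (p[2] + x * (p[3] + x * p[4])))));
    }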
cases which are used mainly by regression tests.
As usual, the cutoff for tiny args was not correctly translated to
float precision. It was 2**-54 but 2**-24 works. It must be about
2**-precision, since the error from approximating log(1+x) by x is
about the same as |x|. Exhaustive testing shows that 2**-24 gives
perfect rounding in round-to-nearest mode.
Similarly for the cutoff for being small, except this is not used by
so many other functions. It was 2**-29 but 2**-15 works. It must be
a bit smaller than sqrt(2**-precision), since the error from
approximating log(1+x) by x-x*x/2 is about the same as x*x. Exhaustive
testing shows that 2**-15 gives a maximum error of 0.5052 ulps in
round-to-nearest mode. The algorithm for the general case is only good
for 0.8388 ulps, so this is sufficient (but it loses slightly on i386 --
then extra precision gives 0.5032 ulps for the general case).
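A minimal sketch of how the two cutoffs fit into a log1pf()-like routine
(hypothetical code, assuming round-to-nearest as above):

    #include <math.h>

    float
    log1pf_sketch(float x)
    {
        if (fabsf(x) < 0x1p-24F)        /* tiny: log(1+x) ~= x */
            return (x);
        if (fabsf(x) < 0x1p-15F)        /* small: log(1+x) ~= x - x*x/2 */
            return (x - x * x * 0.5F);
        return (log1pf(x));             /* general case (not shown) */
    }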
While investigating this, I noticed that optimizing the usual case by
falling into a middle case involving a simple polynomial evaluation
(return x-x*x/2 instead of x here) is not such a good idea since it
gives an enormous pessimization of tinier args on machines for which
denormals are slow. Float x*x/2 is denormal when |x| ~< 2**-64 and
x*x/2 is evaluated in float precision, so it can easily be denormal
for normal x. This is even more interesting for general polynomial
evaluations. Multiplying out large powers of x is normally a good
optimization since it reduces dependencies, but it creates denormals
starting with quite large x.
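To make the last point concrete, here is a hypothetical pair of evaluations
of the same cubic (C1..C3 are placeholder coefficients); the split form has
shorter dependency chains but forms x*x explicitly, which is denormal for
many normal float x:

    /* Both functions compute C1*x + C2*x*x + C3*x*x*x. */
    static float
    poly_horner(float x, float C1, float C2, float C3)
    {
        return (x * (C1 + x * (C2 + x * C3)));  /* never forms x*x alone */
    }

    static float
    poly_split(float x, float C1, float C2, float C3)
    {
        float x2 = x * x;   /* denormal for many normal tiny x */

        return (C1 * x + x2 * (C2 + x * C3));
    }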
forget to translate "float" to "double".
ucbtest didn't detect the bug, but exhaustive testing of the float
case relative to the double case eventually did. The bug only affects
args x with |x| ~> 2**19*(pi/2) on non-i386 (i386 is broken in a
different way for large args).
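Exhaustive testing is cheap for floats since there are only 2**32 bit
patterns; a hypothetical harness (sinf() is used purely as an example) that
flags any difference from the double version rounded to float:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        uint32_t bits = 0;
        float x, got, want;

        do {
            memcpy(&x, &bits, sizeof(x));
            if (!isfinite(x))
                continue;
            got = sinf(x);
            want = (float)sin((double)x);   /* double case as reference */
            if (got != want)
                printf("mismatch at %a: %a != %a\n", x, got, want);
        } while (++bits != 0);
        return (0);
    }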
it should never have existed and it has not been used for many years
(floats are reduced faster using doubles). All relevant changes (just
the workaround for broken assignment) have been merged to the double
version.
there is a problem with non-floats (when i386 defaults to extra
precision). This essentially restores yesterday's behaviour for doubles
on i386 (since generic rint() isn't used and everywhere else assumed
working assignment), but for arches that use the generic rint() it
finishes restoring some of 1995's behaviour (don't waste time doing
unnecessary store/load).
variable hack for exp2f() only.
The volatile variable had a surprisingly large cost for exp2f() -- 19
cycles or 15% on i386 in the worst case observed. This is only partly
explained by there being several references to the variable, only one
of which benefited from it being volatile. Arches that have working
assignment are likely to benefit even more from not having any volatile
variable.
exp2() now has a chance of working with extra precision on i386.
exp2() has even more references to the variable, so it would have been
pessimized more by simply declaring the variable as volatile. Even
the temporary volatile variable for STRICT_ASSIGN costs 5-10% on i386
(A64), so I will change STRICT_ASSIGN() to do an ordinary assignment
until i386 defaults to extra precision.
instead of a volatile cast hack for the float version only. The cast
hack broke with gcc-4, but this was harmless since the float version
hasn't been used for a few years. Merge from the float version so
that the double version has a chance of working on i386 with extra
precision.
See k_rem_pio2f.c rev.1.8 for the original hack.
Convert to __FBSDID().
hack for log1pf() only. The cast hack broke with gcc-4, resulting in
~1 million errors of more than 1 ulp, with a maximum error of ~1.5 ulps.
Now the maximum error for log1pf() on i386 is 0.5034 ulps again (this
depends on extra precision), and log1p() has a chance of working with
extra precision.
See s_log1pf.c 1.8 for the original hack. (It claims only 62343 large
errors).
Convert to __FBSDID(). Another thing broken with gcc-4 is the static
const hack used for rcsids.
around assignments not working for gcc on i386. Now volatile hacks
for rint() and rintf() don't needlessly pessimize so many arches
and the remaining pessimizations (for arm and powerpc) can be avoided
centrally.
This cleans up after s_rint.c 1.3 and 1.13 and s_rintf.c 1.3 and 1.9:
- s_rint.c 1.13 broke 1.3 by only using a volatile cast hack in 1 place
when it was needed in 2 places, and the volatile cast hack stopped
working with gcc-4. These bugs only affected correctness tests on
i386 since i386 normally uses asm rint() and doesn't support the
extra precision mode that would break assignments of doubles.
- s_rintf.c 1.9 improved(?) on 1.3 by using a volatile variable hack
instead of an extra-precision variable hack, but it declared 2
variables as volatile when only 1 variable needed to be volatile.
This only affected speed tests on i386 since i386 uses asm rintf().
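For context, the store/load that these hacks protect is the classic
fdlibm-style magic-constant addition; a heavily simplified, hypothetical
version:

    /*
     * Hypothetical sketch, not the s_rint.c code: for 0 <= x < 2**52,
     * adding and subtracting 2**52 rounds x to an integer in the current
     * rounding mode -- but only if the intermediate sum is really clipped
     * to double precision, which is what the volatile (or STRICT_ASSIGN())
     * hack guarantees on i386's extra-precision FPU.
     */
    static const double TWO52 = 0x1p52;

    double
    rint_sketch(double x)
    {
        volatile double w = TWO52 + x;  /* force a store/load to clip */

        return (w - TWO52);
    }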
long doubles (i386, amd64, ia64) and one for machines with 128-bit
long doubles (sparc64). Other platforms use the double version.
I've only done runtime testing on i386.
Thanks to bde@ for helpful discussions and bugfixes.
assignments and casts don't clip extra precision, if any. The
implementation is to assign to a temporary volatile variable and read
the result back to assign to the original lvalue.
lib/msun currently has 2 different hard-coded hacks to avoid the problem
in just a few places and needs it in a few more places. One variant
uses volatile for the original lvalue. This works but is slower than
necessary. Another temporarily casts the lvalue to volatile. This
broke with gcc-4.2.1 or earlier (gcc now stores to the lvalue but
doesn't load from it).
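A minimal sketch of the three approaches (the macro name is from the commit;
the committed definition may differ in detail):

    /*
     * Sketch only -- the committed macro may differ in detail.  Clip any
     * extra precision by bouncing the value through a temporary volatile
     * variable of the nominal type, then reading it back.
     */
    #define STRICT_ASSIGN(type, lval, rval) do {    \
        volatile type __v = (rval);                 \
        (lval) = __v;                               \
    } while (0)

    /*
     * The two older hacks mentioned above, roughly:
     *   1. declare the original lvalue itself volatile (works, but slower
     *      than necessary since every reference pays for the volatile);
     *   2. cast the lvalue to volatile for the assignment, e.g.
     *      *(volatile double *)&x = (rval); -- this broke with gcc-4,
     *      which stores to x but no longer reloads from it.
     */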
my original implementation made both use the same code. Unfortunately,
this meant libm depended on a vendor header at compile time and previously-
unexposed vendor bits in libc at runtime.
Hence, I just wrote my own version of the relevant vendor routine. As it
turns out, mine has a factor of 8 fewer lines of code, and is a bit more
readable anyway. The strtod() and *scanf() routines still use vendor code.
Reviewed by: bde
someone thought it would be a good idea to copy z_abs() to libm in 1994.
However, it's never been declared or documented anywhere, and I'm
reasonably confident that nobody uses it.
Discussed with: bde, deischen, kan
I hope that this and the i386 version of it will not be needed, but
this is currently about 16 cycles or 36% faster than the C version,
and the i386 version is about 8 cycles or 19% faster than the C
version, due to poor optimization of the C version.
adds two new directories in msun: ld80 and ld128. These are for
long double functions specific to the 80-bit long double format
used on x86-derived architectures, and the 128-bit format used on
sparc64, respectively.
hundreds of similar aliasing bugs, but all except this one seem to have
been fixed by Cygnus and/or NetBSD before the modified version of fdlibm
was imported into FreeBSD in 1994.
PR: standards/113147
Submitted by: Steve Kargl <sgk@troutmask.apl.washington.edu>
involving moving the check for x == 0). The savings in cycles are
smaller for cbrtf() than for cbrt(), and positive in all measured cases
with gcc-3.4.4, but still very machine/compiler-dependent.
Warning, after symbol versioning is enabled, going back is not easy
(use WITHOUT_SYMVER at your own risk).
Change the default thread library to libthr.
There most likely still needs to be a version bump for at least the
thread libraries. If necessary, this will happen later.
that ints have >= 31 value bits elsewhere. s/int/int32_t/ seems to
have been done too globally for all other files in msun/src before
msun/ was imported into FreeBSD.
Minor fixes in comments.
e_lgamma_r.c:
Describe special cases in more detail:
- exception for lgamma(0) and lgamma(neg.integer)
- lgamma(-Inf) = Inf. This is wrong but is required by C99 Annex F. I
hope to change this.
(1) tgamma(-Inf) returned +Inf and failed to raise any exception, but
should always have raised an exception, and should behave like
tgamma(negative integer).
(2) tgamma(negative integer) returned +Inf and raised divide-by-zero,
but should return NaN and raise "invalid" on any IEEEish system.
(3) About half of the 2**52 negative integers between -2**53 and -2**52
were misclassified as non-integers by using floor(x + 0.5) to round
to nearest, so tgamma(x) was wrong (+-0 instead of +Inf and now NaN)
on these args. The floor() expression is hard to use since rounding
of (x + 0.5) may give x or x + 1, depending on |x| and the current
rounding mode. The fixed version uses ceil(x) to classify x before
operating on x and ends up being more efficient since ceil(x) is
    needed anyway (see the sketch below).
(4) On at least the problematic args in (3), tgamma() raised a spurious
inexact.
(5) tgamma(large positive) raised divide-by-zero but should raise overflow.
(6) tgamma(+Inf) raised divide-by-zero but should not raise any exception.
(7) Raise inexact for tiny |x| in a way that has some chance of not being
optimized away.
The fix for (5) and (6), and probably for (2), also prevents -O optimizing
away the exception.
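A minimal sketch of the classification fix described in (3) (hypothetical
helper, not the committed code):

    #include <math.h>

    /*
     * Classify a finite negative x as integer/non-integer with ceil()
     * instead of floor(x + 0.5); ceil(x) == x cannot be fooled by the
     * rounding of x + 0.5, and ceil(x) is needed later anyway.
     */
    static int
    is_neg_integer(double x)    /* assumes x < 0 and finite */
    {
        return (ceil(x) == x);
    }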
PR: 112180 (2)
Standards: Annex F in C99 (IEC 60559 binding) requires (1), (2) and (6).