Commit Graph

642 Commits

Author SHA1 Message Date
deischen
2a7306fdc5 Use C comments since we now preprocess these files with CPP. 2007-04-29 14:05:22 +00:00
imp
130ae175fc Remove California Regents' clause 3, per letter 2007-01-09 01:02:06 +00:00
das
3ef4cfecda Implement modfl(). 2007-01-07 07:54:21 +00:00
das
3d86fb6387 Fix a problem relating to fesetenv() clobbering i387 register stack.
Details: As a side-effect of restoring a saved FP environment,
fesetenv() overwrites the tag word, which indicates which i387
registers are in use.  Normally this isn't a problem because
the calling convention requires the register stack to be empty
on function entry and exit.  However, fesetenv() is inlined, so we
need to tell gcc explicitly that the i387 registers get clobbered.

PR:	85101
2007-01-06 21:46:23 +00:00
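
A minimal sketch of the kind of fix this implies, assuming the inline
asm lives in a macro like FreeBSD's __fldenv() (the exact form in
fenv.h may differ): list the i387 stack registers as clobbers so gcc
knows the restore overwrites them.

    #define __fldenv(__env) __asm __volatile("fldenv %0"            \
                                : : "m" (__env)                     \
                                : "st", "st(1)", "st(2)", "st(3)",  \
                                  "st(4)", "st(5)", "st(6)", "st(7)")
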
das
3e2f039e9d Fix a cut-and-paste-o. 2007-01-06 21:23:20 +00:00
das
8441570fc8 Correctly handle NaN. 2007-01-06 21:22:57 +00:00
das
b4ef63382c Correctly handle inf/nan. This routine is currently unused because we
seem to have assembly versions for all architectures, but it can't
hurt to fix it.
2007-01-06 21:22:38 +00:00
das
f0732593ba Remove modf from libm's symbol map. It's actually in libc for
hysterical raisins.
2007-01-06 21:18:17 +00:00
das
aeb763b099 Remove an unneeded fnstcw instruction.
Noticed by:	bde
2007-01-05 07:15:26 +00:00
das
a1794f56b7 Remove a note pertaining to the Alpha. 2007-01-05 07:14:26 +00:00
bde
d36e6277cb Moved __BEGIN_DECLS up a little so that it covers __test_sse() and C++
isn't broken.

PR:		104425
2006-10-14 20:35:56 +00:00
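
The shape of the fix, in sketch form (the real header declares more
between the guards):

    __BEGIN_DECLS                   /* moved up: extern "C" must cover this */
    int __test_sse(void);
    /* ... */
    __END_DECLS
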
ru
4d582ffe09 Remove alpha left-overs. 2006-08-22 08:03:01 +00:00
bde
3d1bfe752d Fixed the threshold for using the simple Taylor approximation.
In e_log.c, there was just an off-by-1 (1 ulp) error in the comment
about the threshold.  The precision of the threshold is unimportant,
but the magic numbers in the code are easier to understand when the
threshold is described precisely.

In e_logf.c, mistranslation of the magic numbers gave an off-by-1
(1 * 16 ulps) error in the intended negative bound for the threshold
and an off-by-7 (7 * 16 ulps) error in the intended positive bound for
the threshold, and the intended bounds were not translated from the
double precision bounds so they were unnecessarily small by a factor
of about 2048.

The optimization of using the simple Taylor approximation for args
near a power of 2 is dubious since it only applies to a relatively
small proportion of args, but if it is done then doing it 2048 times
as often _may_ be more efficient.  (My benchmarks show unexplained
dependencies on the data that increase with further optimizations
in this area.)
2006-07-07 04:33:08 +00:00
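
The special case under discussion, in sketch form (variable names as in
e_log.c; the real code encodes the threshold as magic numbers on the
high word of x rather than calling fabs()):

    f = x - 1.0;                        /* x reduced so that |f| is small */
    if (fabs(f) < 0x1p-20) {            /* x near a power of 2 */
            R = f*f*(0.5-0.33333333333333333*f);    /* short Taylor series */
            return dk*ln2_hi-((R-dk*ln2_lo)-f);
    }
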
bde
ebfec8dd17 Fixed tanh(-0.0) on ia64 and optimized tanh(x) for 2**-55 <= |x| <
2**-28 as a side effect, by merging with the float precision version
of tanh() and the double precision version of sinh().

For tiny x, tanh(x) ~= x, and we used the expression x*(one+x) to
return this value (x) and set the inexact flag iff x != 0.  This
doesn't work on ia64 since gcc -O does the dubious optimization
x*(one+x) = x+x*x so as to use fma, so the sign of -0.0 was lost.

Instead, handle tiny x in the same way as sinh(), although this is imperfect:
- return x directly and set the inexact flag in a less efficient way.
- increased the threshold for non-tinyness from 2**-55 to 2**-28 so that
  many more cases are optimized than are pessimized.

Updated some comments and fixed bugs in others (ranges for half-open
intervals mostly had the open end backwards, and there were nearby style
bugs).
2006-07-05 22:59:33 +00:00
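
A sketch of the replacement idiom (names are illustrative, not the
msun source):

    #include <math.h>

    static const double huge = 1.0e300;

    double
    my_tanh(double x)
    {
            if (fabs(x) < 0x1p-28) {        /* |x| < 2**-28: tanh(x) ~= x */
                    if (huge + x > 1.0)     /* raises inexact iff x != 0 */
                            return (x);     /* sign of -0.0 is preserved */
            }
            return (tanh(x));               /* general code for other args */
    }
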
bde
ac26a61be9 Removed the optimized asm versions of scalb() and scalbf(). These
functions are only for compatibility with obsolete standards.  They
shouldn't be used, so they shouldn't be optimized.  Use the generic
versions instead.

This fixes scalbf() as a side effect.  The optimized asm version left
garbage on the FP stack.  I fixed the corresponding bug in the optimized
asm scalb() and scalbn() in 1996.  NetBSD fixed it in scalb(), scalbn()
and scalbnf() in 1999 but missed fixing it in scalbf().  Then in 2005
the bug was reimplemented in FreeBSD by importing NetBSD's scalbf().

The generic versions have slightly different error handling:
- the asm versions blindly round the second parameter to a (floating
  point) integer and proceed, while the generic versions return NaN
  if this rounding changes the value.  POSIX permits both behaviours
  (these functions are XSI extensions and the behaviour for a bogus
  non-integral second parameter is unspecified).   Apart from this
  and the bug in scalbf(), the behaviour of the generic versions seems
  to be identical.  (I only exhaustively tested
  generic_scalbf(1.0F, anyfloat) == asm_scalbf(1.0F, anyfloat).  This
  covers many representative corner cases involving NaNs and Infs but
  doesn't test exception flags.  The brokenness of scalbf() showed up
  as weird behaviour after testing just 7 integer cases sequentially.)
2006-07-05 20:06:42 +00:00
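
For reference, the generic versions' error handling reads roughly like
this (paraphrased from the fdlibm-derived source; treat the clamping
details as a sketch):

    #include <math.h>

    double
    scalb_sketch(double x, double fn)
    {
            if (isnan(x) || isnan(fn))
                    return (x*fn);
            if (!isfinite(fn))
                    return (fn > 0.0 ? x*fn : x/(-fn));
            if (rint(fn) != fn)
                    return ((fn-fn)/(fn-fn));   /* NaN for non-integral fn */
            if (fn > 65000.0)
                    return (scalbn(x, 65000));  /* clamp before the cast */
            if (fn < -65000.0)
                    return (scalbn(x, -65000));
            return (scalbn(x, (int)fn));
    }
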
bde
31501671ae Backed out rev.1.10. It tried to implement ldexpf() as a weak reference
to scalbf(), but ldexpf() cannot be implemented in that way since the
types of the second parameter differ.  ldexpf() can be implemented as
a weak or strong reference to scalbnf() (*) but that was already done
long before rev.1.10 was committed.  The old implementation uses a
reference, so rev.1.10 had no effect on applications.  The C files for
the scalb() family are not used for amd64 or i386, so rev.1.10 had even
less effect for these arches.

(*) scalbnf() raises the radix to the given exponent, while ldexpf()
raises 2 to the given exponent.  Thus the functions are equivalent
except possibly for their error handling iff the radix is 2.  Standards
more or less require identical error handling.  Under FreeBSD, the
functions are equivalent except for more details being missing in
scalbnf()'s man page.
2006-07-05 02:16:29 +00:00
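
The long-existing implementation-by-reference amounts to something like
this (using FreeBSD's alias macro; the file placement is from my
reading of the tree):

    #include <sys/cdefs.h>
    #include <math.h>

    /* Legal because ldexpf(float, int) and scalbnf(float, int) have
     * identical types; an alias to scalbf(float, float) would not be. */
    __weak_reference(scalbnf, ldexpf);
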
deischen
d76f24935a Add symbol versioning to libm. 2006-03-27 23:59:45 +00:00
bde
76448b8654 Oops, on amd64 (and probably on all non-i386 systems), the previous
commit broke the 2**24 cases where |x| > FLT_MAX/2.  There are exponent
range problems not just for denormals (underflow) but for large values
(overflow).  Doubles have more than enough exponent range to avoid the
problems, but I forgot to convert enough terms to double, so there was
an x+x term which was sometimes evaluated in float precision.

Unfortunately, this is a pessimization with some combinations of systems
and compilers (it makes no difference on Athlon XP's, but on Athlon64's
it gives a 5% pessimization with gcc-3.4 but not with gcc-3.3).

Explain the problem better in comments.
2006-01-05 09:18:48 +00:00
bde
cbc5231d53 Use double precision internally to optimize cbrtf(), and change the
algorithm for the second step significantly to also get a perfectly
rounded result in round-to-nearest mode.  The resulting optimization
is about 25% on Athlon64's and 30% on Athlon XP's (about 25 cycles
out of 100 on the former).

Using extra precision, we don't need to do anything special to avoid
large rounding errors in the third step (Newton's method), so we can
regroup terms to avoid a division, increase clarity, and increase
opportunities for parallelism.  Rearrangement for parallelism loses
the increase in clarity.  We end up with the same number of operations
but with a division reduced to a multiplication.

Using specifically double precision, there is enough extra precision
for the third step to give enough precision for perfect rounding to
float precision provided the previous steps are accurate to 16 bits.
(They were accurate to 12 bits, which was almost minimal for imperfect
rounding in the old version but would be more than enough for imperfect
rounding in this version (9 bits would be enough now).)  I couldn't
find any significant time optimizations from optimizing the previous
steps, so I decided to optimize for accuracy instead.  The second step
needed a division although a previous commit optimized it to use a
polynomial approximation for its main detail, and this division dominated
the time for the second step.  Use the same Newton's method for the
second step as for the third step since this is insignificantly slower
than the division plus the polynomial (now that Newton's method only
needs 1 division), significantly more accurate, and simpler.  Single
precision would be precise enough for the second step, but doesn't
have enough exponent range to handle denormals without the special
grouping of terms (as in previous versions) that requires another
division, so we use double precision for both the second and third
steps.
2006-01-05 07:57:31 +00:00
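
The two double-precision steps, roughly as they end up in s_cbrtf.c
(t is the low-precision initial estimate; treat the details as a
sketch):

    double r, T = t;

    /* First step: Newton's method for t*t - x/t == 0, good to ~16 bits.
     * Double precision lets the terms be regrouped without overflow. */
    r = T*T*T;
    T = T*((double)x + x + r)/(x + r + r);

    /* Second step: the same iteration, now good to ~47 bits, so the
     * final conversion to 24-bit float rounds perfectly to nearest. */
    r = T*T*T;
    T = T*((double)x + x + r)/(x + r + r);
    return (T);                         /* implicit conversion to float */
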
bde
761a5296f9 Extract the high and low words together. With gcc-3.4 on uniformly
distributed non-large args, this saves about 14 of 134 cycles for
Athlon64s and about 5 of 199 cycles for AthlonXPs.

Moved the check for x == 0 inside the check for subnormals.  With
gcc-3.4 on uniformly distributed non-large args, this saves another
5 cycles on Athlon64s and loses 1 cycle on AthlonXPs.

Use INSERT_WORDS() and not SET_HIGH_WORD() when converting the first
approximation from bits to double.  With gcc-3.4 on uniformly distributed
non-large args, this saves another 4 cycles on both Athlon64s and
AthlonXPs.

Accessing doubles as 2 words may be an optimization on old CPUs, but on
current CPUs it tends to cause extra operations and pipeline stalls,
especially for writes, even when only 1 of the words needs to be accessed.

Removed an unused variable.
2005-12-20 01:21:30 +00:00
bde
ce5f09f38d Use a minimax polynomial approximation instead of a Pade rational
function approximation for the second step.  The polynomial has degree
2 for cbrtf() and 4 for cbrt().  These degrees are minimal for the final
accuracy to be essentially the same as before (slightly smaller).
Adjust the rounding between steps 2 and 3 to match.  Unfortunately,
for cbrt(), this breaks the claimed accuracy slightly although incorrect
rounding doesn't.  Claim less accuracy since it's not worth pessimizing
the polynomial or relying on exhaustive testing to get insignificantly
more accuracy.

This saves about 30 cycles on Athlons (mainly by avoiding 2 divisions)
so it gives an overall optimization in the 10-25% range (a larger
percentage for float precision, especially in 32-bit mode, since other
overheads are more dominant for double precision, surprisingly more
in 32-bit mode).
2005-12-19 00:22:03 +00:00
bde
74e09cff99 Fixed code to match comments and the algorithm:
- in preparing for the third approximation, actually make t larger in
  magnitude than cbrt(x).  After chopping, t must be incremented by 2
  ulps to make it larger, not 1 ulp since chopping can reduce it by
  almost 1 ulp and it might already be up to half a different-sized-ulp
  smaller than cbrt(x).  I have not found any cases where this is
  essential, but the think-time error bound depends on it.  The relative
  smallness of the different-sized-ulp limited the bug.  If there are
  cases where this is essential, then the final error bound would be
  5/6+epsilon instead of 4/6+epsilon ulps (still < 1).
- in preparing for the third approximation, round more carefully (but
  still sloppily to avoid branches) so that the claimed error bound of
  0.667 ulps is satisfied in all cases tested for cbrt() and remains
  satisfied in all cases for cbrtf().  There isn't enough spare precision
  for very sloppy rounding to work:
  - in cbrt(), even with the inadequate increment, the actual error was
    0.6685 in some cases, and correcting the increment increased this
    a little.  The fix uses sloppy rounding to 25 bits instead of very
    sloppy rounding to 21 bits, and starts using uint64_t instead of 2
    words for bit manipulation so that rounding more bits is not much
    more costly.
  - in cbrtf(), the 0.667 bound was already satisfied even with the
    inadequate increment, but change the code to almost match cbrt()
    anyway.  There is not enough spare precision in the Newton
    approximation to double the inadequate increment without exceeding
    the 0.667 bound, and no spare precision to avoid this problem as
    in cbrt().  The fix is to round using an increment of 2 smaller-ulps
    before chopping so that an increment of 1 ulp is enough.  In cbrt(),
    we essentially do the same, but move the chop point so that the
    increment of 1 is not needed.

Fixed comments to match code:
- in cbrt(), the second approximation is good to 25 bits, not quite 26 bits.
- in cbrt(), don't claim that the second approximation may be implemented
  in single precision.  Single precision cannot handle the full exponent
  range without minor but pessimal changes to renormalize, and although
  single precision is enough, 25 bit precision is now claimed and used.

Added comments about some of the magic for the error bound 4/6+epsilon.
I still don't understand why it is 4/6+ and not 6/6+ ulps.

Indent comments at the right of code more consistently.
2005-12-18 21:46:47 +00:00
bde
3abe21faf2 Added comments about the apparently-magic rational function used in
the second step of approximating cbrt(x).  It turns out to be neither
very magic nor very good.  It is just the (2,2) Pade approximation
to 1/cbrt(r) at r = 1, arranged in a strange way to use fewer operations
at a cost of replacing 4 multiplications by 1 division, which is an
especially bad tradeoff on machines where some of the multiplications
can be done in parallel.  A Remez rational approximation would give
at least 2 more bits of accuracy, but the (2,2) Pade approximation
already gives 6 more bits than needed.  (Changed the comment which
essentially says that it gives 3 more bits.)

Lower order Pade approximations are not quite accurate enough for
double precision but are plenty for float precision.  A lower order
Remez rational approximation might be enough for double precision too.
However, rational approximations inherently require an extra division,
and polynomial approximations work well for 1/cbrt(r) at r = 1, so I
plan to switch to using the latter.  There are some technical
complications that tend to cost a division in another way.
2005-12-15 16:23:22 +00:00
bde
e1faa6b5ba Optimize by not doing excessive conversions for handling the sign bit.
This gives an optimization of between 9 and 22% on Athlons (largest
for cbrt() on amd64 -- from 205 to 159 cycles).

We extracted the sign bit and worked with |x|, and restored the sign
bit as the last step.  We avoided branches to a fault by using accesses
to FP values as bits to clear and restore the sign bit.  Avoiding
branches is usually good, but the bit access macros are not so good
(especially for setting FP values), and here they always caused pipeline
stalls on Athlons.  Even using branches would be faster except on args
that give perfect branch misprediction, since only mispredicted branches
cause stalls, but it is possible to avoid touching the sign bit in FP
values at all (except to preserve it in conversions from bits to FP
not related to the sign bit).  Do this.  The results are identical
except in 2 of the 3 unsupported rounding modes, since all the
approximations use odd rational functions so they work right on strictly
negative values, and the special case of -0 doesn't use an approximation.
2005-12-13 20:17:23 +00:00
bde
809376d69d Fixed some especially horrible style bugs (indentation that is neither
KNF nor fdlibmNF combined with multiple statements per line).
2005-12-13 18:22:00 +00:00
bde
0c881f6eff Added comments about the magic behind
<cbrt(x) in bits> ~= <x in bits>/3 + BIAS.
Keep the large comments only in the double version as usual.

Fixed some style bugs (mainly grammar and spelling errors in comments).
2005-12-11 19:51:30 +00:00
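
The magic in question, in sketch form (the constant is copied from the
fdlibm-derived source; treat it as illustrative):

    static const u_int32_t B1 = 715094163;  /* (682-0.03306235651)*2**20 */

    GET_HIGH_WORD(hx, x);
    SET_HIGH_WORD(t, hx/3 + B1);    /* <cbrt(x) in bits> ~= <x in bits>/3
                                     * + BIAS: the biased exponent divides
                                     * by 3 and the mantissa interpolates */
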
bde
94455e43e1 Fixed the unexpectedly large maximum error after the previous commit.
It was because I forgot to translate the part of the double precision
algorithm that chops t so that t*t is exact.  Now the maximum error
is the same as for double precision (almost exactly 2.0/3 ulps).
2005-12-11 17:58:14 +00:00
bde
f60dc890a9 Fixed all 502518670 errors of more than 1 ulp for cbrtf() on amd64.
The maximum error was 3.56 ulps.

The bug was another translation error.  The double precision version
has a comment saying "new cbrt to 23 bits, may be implemented in single
precision".  This means exactly what it says -- that the 23 bit second
approximation for the double precision cbrt() may be implemented in
single (i.e., float) precision.  It doesn't mean what the translation
assumed -- that this approximation, when implemented in float precision,
is good enough for the final approximation in float precision.
First, float precision needs a 24 bit approximation.  The "23 bit"
approximation is actually good to 24 bits on float precision args, but
only if it is evaluated in double precision.  Second, the algorithm
requires a cleanup step to ensure its error bound.

In float precision, any reasonable algorithm works for the cleanup
step.  Use the same algorithm as for double precision, although this
is much more than enough and is a significant pessimization, and don't
optimize or simplify anything using double precision to implement the
float case, so that the whole double precision algorithm can be verified
in float precision.  A maximum error of 0.667 ulps is claimed for cbrt()
and the max for cbrtf() using the same algorithm shouldn't be different,
but the actual max for cbrtf() on amd64 is now 0.9834 ulps.  (On i386
-O1 the max is 0.5006 (down from < 0.7) due to extra precision.)
2005-12-11 13:22:01 +00:00
bde
7f90518f5f Fixed some magic numbers.
The threshold for not being tiny was too small.  Use the usual 2**-12
threshold.  As for sinhf, use a different method (now the same as for
sinhf) to set the inexact flag for tiny nonzero x so that the larger
threshold works, although this method is imperfect.  As for sinhf,
this change is not just an optimization, since the general code that
we fell into has accuracy problems even for tiny x.  On amd64, avoiding
it fixes tanhf on 2*13495596 args with errors of between 1 and 1.3
ulps and thus reduces the total number of args with errors of >= 1 ulp
from 37533748 to 5271278; the maximum error is unchanged at 2.2 ulps.

The magic number 22 is log(DBL_MAX)/2 plus slop.  This is bogus for
float precision.  Use 9 (log(FLT_MAX)/2 plus less slop than for
double precision).  Unlike for coshf and tanhf, this is just an
optimization, and MAX isn't misspelled EPSILON in the commit log.

I started testing with nonstandard rounding modes, and verified that
the chosen thresholds work for all modes modulo problems not related
to thresholds.  The best thresholds are not very dependent on the mode,
at least for tanhf.
2005-12-11 11:40:55 +00:00
obrien
62350809a7 "Create" ldexpf for non-i386 architectures.
Submitted by:	Steve Kargl <sgk@troutmask.apl.washington.edu>
2005-12-06 20:12:38 +00:00
bde
94a6bce548 Fixed the approximation to pio4. pio4_hi must be pio2_hi/2 since it
shares its low half with pio2_hi.  pio2_hi is rounded down although
rounding to nearest would be a tiny bit better, so pio4_hi must be
rounded down too.  It was rounded to nearest, which happens to be
different in float precision but the same in double precision.

This fixes about 13.5 million errors of more than 1 ulp in asinf().
The largest error was 2.81 ulps on amd64 and 2.57 ulps on i386 -O1.
Now the largest error is 0.93 ulps on amd64 and 0.67 ulps on i386 -O1.
2005-12-04 13:52:46 +00:00
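
The repaired constants, as I read them from e_asinf.c (what matters is
the invariant: pio4_hi must be exactly pio2_hi/2):

    static const float
    pio2_hi = 1.5707962513e+00,     /* 0x3fc90fda: pi/2 rounded down */
    pio2_lo = 7.5497894159e-08,     /* 0x33a22168: shared low half */
    pio4_hi = 7.8539812565e-01;     /* 0x3f490fda: exactly pio2_hi/2 */
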
bde
11d5bb39af For log1pf(), fixed the approximations to sqrt(2), sqrt(2)-1 and
sqrt(2)/2-1.  For log1p(), fixed the approximation to sqrt(2)/2-1.

The end result is to fix an error of 1.293 ulps in
    log1pf(0.41421395540 (hex 0x3ed413da))
and an error of 1.783 ulps in
    log1p(-0.292893409729003961761 (hex 0x12bec4 00000001))
The former was the only error of > 1 ulp for log1pf() and the latter
is the only such error that I know of for log1p().

The approximations don't need to be very accurate, but the last 2 need
to be related to the first and be rounded up a little (even more than
1 ulp for sqrt(2)/2-1) for the following implementation-detail reason:
when the arg (x) is not between (the approximations to) sqrt(2)/2-1
and sqrt(2)-1, we commit to using a correction term, but we only
actually use it if 1+x is between sqrt(2)/2 and sqrt(2) according to
the first approximation. Thus we must ensure that
!(sqrt(2)/2-1 < x < sqrt(2)-1) implies !(sqrt(2)/2 < x+1 < sqrt(2)),
where all the sqrt(2)'s are really slightly different approximations
to sqrt(2) and some of the "<"'s are really "<="'s.  This was not done.

In log1pf(), the last 2 approximations were rounded up by about 6 ulps
more than needed relative to a good approximation to sqrt(2), but the
actual approximation to sqrt(2) was off by 3 ulps.  The approximation
to sqrt(2)-1 ended up being 4 ulps too small, so the algorithm was
broken in 4 cases.  The result happened to be broken in 1 case.  This
is fixed by using a natural approximation to sqrt(2) and derived
approximations for the others.

In log1p(), all the approximations made sense, but the approximation
to sqrt(2)/2-1 was 2 ulps too small (a tiny amount, since we compare
with a granularity of 2**32 ulps), so the algorithm was broken in 2
cases.  The result was broken in 1 case.  This is fixed by rounding
up the approximation to sqrt(2)/2-1 by 2**32 ulps, so 2**32 cases are
now handled a little differently (still correctly according to my
assertion that the approximations don't need to be very accurate, but
this has not been checked).
2005-12-04 12:30:44 +00:00
bde
6d4e9a9d97 Use the usual volatile hack to trick gcc into clipping any extra precision
on assignment.

Extra precision on i386's broke hi+lo decomposition in the usual way.
It caused all except 1 of the 62343 errors of more than 1 ulp for
log1pf() on i386's with gcc -O [-fno-float-store].
2005-12-04 08:57:54 +00:00
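
The hack itself, in minimal form (names are illustrative):

    volatile float vf;
    float hi, lo;

    vf = f;             /* the store clips any i386 extra precision */
    hi = vf;
    lo = f - hi;        /* the hi+lo decomposition is now exact */
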
bde
a8b03f7b44 Fixed fdlibm[+cygnus] logbf() and logb() on denormals. Adjustment
according to the highest nonzero bit in a denormal was missing.

fdlibm ilogbf() and ilogb() have always had the adjustment, but only
use a small part of their method for handling denormals; use the
normalization method in log[f]() for the main part.
2005-12-03 11:57:19 +00:00
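
A sketch of the fixed subnormal branch, using the normalization method
mentioned above:

    static const float two25 = 3.355443200e+07;    /* 0x4c000000 */

    if (ix < 0x00800000) {          /* subnormal |x| */
            x *= two25;             /* normalize x */
            GET_FLOAT_WORD(ix, x);
            ix &= 0x7fffffff;
            return ((float)((ix>>23) - 127 - 25));  /* undo the scaling */
    }
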
bde
14cb0170de Restored removal of the special handling needed for a result of +-0.
It was lost in rev.1.9.  The log message for rev.1.9 says that the
special case of +-0 is handled twice, but it was only handled once,
so it became unhandled, and this happened to break half of the cases
that return +-0:
- round-towards-minus-infinity:  0   <  x < 1:  result was -0 not  0
- round-to-nearest:             -0.5 <= x < 0:  result was  0 not -0
- round-towards-plus-infinity:  -1   <  x < 0:  result was  0 not -0
- round-towards-zero:           -1   <  x < 0:  result was  0 not -0
2005-12-03 09:00:29 +00:00
bde
7aaf0755d7 Simplified the fix in rev.1.3. Instead of using long double for
TWO52[sx] to trick gcc into correctly converting TWO52[sx]+x to double
on assignment to "double w", force a correct assignment by assigning
to *(double *)&w.  This is cleaner and avoids the double rounding
problem on machines that evaluate double expressions in double
precision.  It is not necessary to convert w-TWO52[sx] to double
precision on return as implied in the comment in rev.1.3, since
the difference is exact.
2005-12-03 07:38:35 +00:00
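
The simplified fix, in sketch form (sx is the sign bit of x, used to
index the table):

    static const double TWO52[2] = { 4.50359962737049600e+15,
                                    -4.50359962737049600e+15 };
    double w;

    *(double *)&w = TWO52[sx] + x;  /* force a genuine double store */
    return (w - TWO52[sx]);         /* the difference is exact */
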
bde
2e834acd9c Fixed rint(x) in the following cases:
(1) In round-to-nearest mode, on all machines, fdlibm rint() never
    worked for |x| = n+0.75 where n is an even integer between 262144
    and 524286 inclusive (2*131072 cases).  To avoid double rounding
    on some machines, we begin by adjusting x to a value with the 0.25
    bit not set, essentially by moving the 0.25 bit to a lower bit
    where it works well enough as a guard, but we botched the adjustment
    when log2(|x|) == 18 (2*2**52 cases) and ended up just clearing
    the 0.25 bit then.  Most subcases still worked accidentally since
    another lower bit serves as a guard.  The case of odd n worked
    accidentally because the rounding goes the right way then.  However,
    for even n, after mangling n+0.75 to 0.5, rounding gives n but the
    correct result is n+1.
(2) In round-towards-minus-infinity mode, on all machines, fdlibm rint()
    never worked for x = n+0.25 where n is any integer between -524287 and
    -262144 inclusive (262144 cases).  In these cases, after mangling
    n+0.25 to n, rounding gives n but the correct result is n-1.
(3) In round-towards-plus-infinity mode, on all machines, fdlibm rint()
    never worked for x = n+0.25 where n is any integer between 262144 and
    524287 inclusive (262144 cases).  In these cases, after mangling
    n+0.25 to n, rounding gives n but the correct result is n+1.

A variant of this bug was fixed for the float case in rev.1.9 of s_rintf.c,
but the analysis there is incomplete (it only mentions (1)) and the fix
is buggy.

Example of the problem with double rounding: rint(1.375) on a machine
which evaluates double expressions with just 1 bit of extra precision
and is in round-to-nearest mode.  We evaluate the result using
(double)(2**52 + 1.375) - 2**52.  Evaluating 2**52 + 1.375 in (53+1) bit
precision gives 2**52 + 1.5 (first rounding).  (Second) rounding of this
to double gives 2**52 + 2.0.  Subtracting 2**52 from this gives 2.0 but
we want 1.0.  Evaluating 2**52 + 1.375 in double precision would have
given the desired intermediate result of 2**52 + 1.0.

The double rounding problem is relatively rare, so the botched adjustment
can be fixed for most machines by removing the entire adjustment.  This
would be a wrong fix (using it is 1 of the bugs in rev.1.9 of s_rintf.c)
since fdlibm is supposed to be generic, but it works in the following cases:
- on all machines that evaluate double expressions in double precision,
  provided either long double has the same precision as double (alpha,
  and i386's with precision forced to double) or my earlier fix to use
  a long double 2**52 is modified to avoid using long double precision.
- on all machines that evaluate double expressions in many more than 11
  bits of extra precision.  The 1 bit of extra precision in the example
  is the worst case.  With N bits of extra precision, it suffices to
  adjust the bit N bits below the 0.5 bit.  For N >= about 52 there is
  no such bit so the adjustment is both impossible and unnecessary.  The
  fix in rev.1.9 of s_rintf.c apparently depends on corresponding magic
  in float precision: on all supported machines N is either 0 or >= 24,
  so double rounding doesn't occur in practice.
- on all machines that don't use fdlibm rint*() (i386's).
So under FreeBSD, the double rounding problem only affects amd64 now, but
should only affect i386 in future (when double expressions are evaluated
in long double precision).
2005-12-03 07:23:30 +00:00
bde
bb298b0b5c Fixed roundf(). The following cases never worked in FreeBSD:
- in round-towards-minus-infinity mode, on all machines, roundf(x) never
  worked for 0 < |x| < 0.5 (2*0x3effffff cases in all, or almost half of
  float space).  It was -0 for 0 < x < 0.5 and 0 for -0.5 < x < 0, but
  should be 0 and -0, respectively.  This is because t = ceilf(|x|) = 1
  for these args, and when we adjust t from 1 to 0 by subtracting 1, we
  get -0 in this rounding mode, but we want and expected to get 0.
- in round-towards-minus-infinity, round towards zero and round-to-nearest
  modes, on machines that evaluate float expressions in float precision
  (most machines except i386's), roundf(x) never worked for |x| =
  <float value immediately below 0.5> (2 cases in all).  It was +-1 but
  should have been +-0.  This is because t = ceilf(|x|) = 1 for these
  args, and when we try to classify |x| by subtracting it from 1 we
  get an unexpected rounding error -- the result is 0.5 after rounding
  to float in all 3 rounding modes, so we have forgotten the
  difference between |x| and 0.5 and end up returning the same value
  as for +-0.5.

The fix is to use floorf() instead of ceilf() and to add 1 instead of
-1 in the adjustment.  With floorf() all the expressions used are
always evaluated exactly so there are no rounding problems, and with
adjustments of +1 we don't go near -0 when adjusting.

Attempted to fix round() and roundl() by cloning the fix for roundf().
This has only been tested for round(), only on args representable as
floats.  Double expressions are evaluated in double precision even on
i386's, so round(0.5-epsilon) was broken even on i386's.  roundl()
must be completely broken on i386's since long double precision is not
really supported.  There seem to be no other dependencies on the
precision.
2005-12-02 13:45:06 +00:00
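
The repaired algorithm for roundf(), roughly as described (a sketch;
all the comparisons shown are exact in float):

    #include <math.h>

    float
    roundf_sketch(float x)
    {
            float t;

            if (!isfinite(x))
                    return (x);
            if (x >= 0.0F) {
                    t = floorf(x);
                    if (t - x <= -0.5F)
                            t += 1.0F;      /* +1 never goes near -0 */
                    return (t);
            } else {
                    t = floorf(-x);
                    if (t + x <= -0.5F)
                            t += 1.0F;
                    return (-t);
            }
    }
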
bde
8cc821405a Rearranged the polynomial evaluation to reduce dependencies, as in
k_tanf.c but with different details.

The polynomial is odd with degree 13 for tanf() and odd with degree
9 for sinf(), so the details are not very different for sinf() -- the
term with the x**11 and x**13 coefficients goes away, and (mysteriously)
it helps to do the evaluation of w = z*z early although moving it later
was a key optimization for tanf().  The details are different but simpler
for cosf() because the polynomial is even and of lower degree.

On Athlons, for uniformly distributed args in [-2pi, 2pi], this gives
an optimization of about 4 cycles (10%) in most cases (13% for sinf()
on AXP, but 0% for cosf() with gcc-3.3 -O1 on AXP).  The best case
(sinf() with gcc-3.4 -O1 -fcaller-saves on A64) now takes 33-39 cycles
(was 37-45 cycles).  Hardware fsin takes 74-129 cycles.  Despite
being fine tuned for Athlons, the optimization is even larger on
some other arches (about 15% on ia64 (pluto2) and 20% on alpha (beast)
with gcc -O2 -fomit-frame-pointer).
2005-11-30 11:51:17 +00:00
bde
6142ede46f Fixed cosf(x) when x is a "negative" NaN. I broke this in rev.1.10.
cosf(x) is supposed to return something like x when x is a NaN, and
we actually fairly consistently return x-x which is normally very like
x (on i386 and and it is x if x is a quiet NaN and x with the quiet bit
set if x is a signaling NaN.  Rev.1.10 broke this by normalising x to
fabsf(x).  It's not clear if fabsf(x) is should preserve x if x is a NaN,
but it actually clears the sign bit, and other parts of the code depended
on this.

The bugs can be fixed by saving x before normalizing it, and using the
saved x only for NaNs, and using uint32_t instead of int32_t for ix
so that negative NaNs are not misclassified even if fabsf() doesn't
clear their sign bit, but gcc pessimizes the saving very well, especially
on Athlon XPs (it generates extra loads and stores, and mixes use of
the SSE and i387, and this somehow messes up pipelines).  Normalizing
x is not a very good optimization anyway, so stop doing it.  (It adds
latency to the FPU pipelines, but in previous versions it helped except
for |x| <= 3pi/4 by simplifying the integer pipelines.)  Use the same
organization as in s_sinf.c and s_tanf.c with some branches reordered.
These changes combined recover most of the performance of the unfixed
version on A64 but still lose 10% on AXP with gcc-3.4 -O1 but not with
gcc-3.3 -O1.
2005-11-30 06:47:18 +00:00
bde
06d8031855 Fixed the hi+lo approximation to log(2). The normal 17+24 bit decomposition
that was used doesn't work normally here, since we want to be able to
multiply `hi' by the exponent of x _exactly_, and the exponent of x has
more than 7 significant bits for most denormal x's, so the multiplication
was not always exact despite a cloned comment claiming that it was.  (The
comment is correct in the double precision case -- with the normal 33+53
bit decomposition the exponent can have 20 significant bits and the extra
bit for denormals is only the 11th.)

Fixing this had little or no effect for denormals (I think because
more precision is inherently lost for denormals than is lost by roundoff
errors in the multiplication).

The fix is to reduce the precision of the decomposition to 16+24 bits.
Due to 2 bugs in the old decomposition and numerical accidents, reducing
the precision actually increased the precision of hi+lo.  The old hi+lo
had about 39 bits instead of at least 41 like it should have had.
There were off-by-1-bit errors in each of hi and lo, apparently due
to mistranslation from the double precision hi and lo.  The correct
16 bit hi happens to give about 19 bits of precision, so the correct
hi+lo gives about 43 bits instead of at least 40.  The end result is
that expf() is now perfectly rounded (to nearest) except in 52561 cases
instead of except in 67027 cases, and the maximum error is 0.5013 ulps
instead of 0.5023 ulps.
2005-11-30 04:56:49 +00:00
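
The fixed decomposition, as I read it from the fdlibm-derived e_expf.c
(shown for the positive sign only):

    static const float
    ln2HI = 6.9314575195e-01,       /* 0x3f317200: ~16 significant bits,
                                     * so k*ln2HI is exact for 8-bit k */
    ln2LO = 1.4286067653e-06;       /* 0x35bfbe8e: low-order correction */
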
bde
e4e1becaf6 Rearranged the polynomial evaluation some more to reduce dependencies.
Instead of echoing the code in a comment, try to describe why we split
up the evaluation in a special way.

The new optimization is mostly to move the evaluation of w = z*z later
so that everything else (except z = x*x) doesn't have to wait for w.
On Athlons, FP multiplication has a latency of 4 cycles so this
optimization saves 4 cycles per call provided no new dependencies are
introduced.  Tweaking the other terms to reduce dependencies saves
a couple more cycles in some cases (more on AXP than on A64; up to 8
cycles out of 56 altogether in some cases).  The previous version had
a similar optimization for s = z*x.  Special optimizations like these
probably have a larger effect than the simple 2-way vectorization
permitted (but not activated by gcc) in the old version, since 2-way
vectorization is not enough and the polynomial's degree is so small
in the float case that non-vectorizable dependencies dominate.

On an AXP, tanf() on uniformly distributed args in [-2pi, 2pi] now
takes 34-55 cycles (was 39-59 cycles).
2005-11-28 11:46:20 +00:00
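
The evaluation order described above ends up looking roughly like this
in k_tanf.c (T[] holds the odd polynomial's coefficients):

    z = x*x;
    r = T[4] + z*T[5];
    t = T[2] + z*T[3];
    w = z*z;                        /* moved late: nothing stalls on it */
    s = z*x;
    u = T[0] + z*T[1];
    r = (x + s*u) + (s*w)*(t + w*r);    /* parentheses control grouping */
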
bde
93dbe6d06f Fixed about 50 million errors of infinity ulps and about 3 million errors
of between 1.0 and 1.8509 ulps for lgammaf(x) with x between -2**-21 and
-2**-70.

As usual, the cutoff for tiny args was not correctly translated to
float precision.  It was 2**-70 but 2**-21 works.  Not as usual, having
a too-small threshold was worse than a pessimization.  It was just a
pessimization for (positive) args between 2**-70 and 2**-21, but for
the first ~50 million (negative) args below -2**-70, the general code
overflowed and gave a result of infinity instead of correct (finite)
results near 70*log(2).  For the remaining ~361 million negative args
above -2**-21, the general code gave almost-acceptable errors (lgamma[f]()
is not very accurate in general) but the pessimization was larger than
for misclassified tiny positive args.

Now the max error for lgammaf(x) with |x| < 2**-21 is 0.7885 ulps, and
speed and accuracy are almost the same for positive and negative args
in this range.  The maximum error overall is still infinity ulps.

A cutoff of 2**-70 is probably wastefully small for the double precision
case.  Smaller cutoffs can be used to reduce the max error to nearly
0.5 ulps for tiny args, but this is useless since the general algorithm
for nearly-tiny args is not nearly that accurate -- it has a max error of
about 1 ulp.
2005-11-28 08:32:15 +00:00
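
A sketch of the repaired cutoff (0x35000000 is 2**-21 in float bits;
for such tiny |x|, lgamma(x) ~= -log(|x|)):

    GET_FLOAT_WORD(hx, x);
    ix = hx & 0x7fffffff;
    if (ix < 0x35000000) {          /* |x| < 2**-21 */
            *signgamp = (hx < 0) ? -1 : 1;
            return (-__ieee754_logf(fabsf(x)));
    }
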
bde
1f04771fa4 Exploit skew-symmetry to avoid an operation: -sin(x-A) = sin(A-x). This
gives a tiny but hopefully always free optimization in the 2 quadrants
to which it applies.  On Athlons, it reduces maximum latency by 4 cycles
in these quadrants but has usually has a smaller effect on total time
(typically ~2 cycles (~5%), but sometimes 8 cycles when the compiler
generates poor code).
2005-11-28 06:15:10 +00:00
bde
9553dd02c7 Try to use the "proximity" (~) operator consistently in comments
(x ~<= a, not x <= ~a).  This got messed up in some places when the
comments were moved from e_rem_pio2f.c.

Added my (non-)copyright.
2005-11-28 05:46:13 +00:00
bde
35eb86d16d Changed spelling of the request-to-inline macro name to match the change
of the function name.

Added my (non-)copyright.

In k_tanf.c, added the first set of redundant parentheses to control
grouping which was claimed to be added in the previous commit.
2005-11-28 05:35:32 +00:00
bde
8fdb019b17 Use only double precision for "kernel" cosf and sinf (except for
returning float).  The functions are renamed from __kernel_{cos,sin}f()
to __kernel_{cos,sin}df() so that misuses of them will cause link errors
and not crashes.

This version is an almost-routine translation with no special optimizations
for accuracy or efficiency.  The not-quite-routine part is that in
__kernel_cosf(), regenerating the minimax polynomial with double
precision coefficients gives a coefficient for the x**2 term that is
not quite -0.5, so the literal 0.5 in the code and the related `hz'
variable need to be modified; also, the special code for reducing the
error in 1.0-x**2*0.5 is no longer needed, so it is convenient to
adjust all the logic for the x**2 term a little.  Note that without
extra precision, it would be very bad to use a coefficient of other
than -0.5 for the x**2 term -- the old version depends on multiplication
by -0.5 being infinitely precise so as not to need even more special
code for reducing the error in 1-x**2*0.5.

This gives an unimportant increase in accuracy, from ~0.8 to ~0.501
ulps.  Almost all of the error is from the final rounding step, since
the choice of the minimax polynomials so that their contribution to the
error is a bit less than 0.5 ulps just happens to give contributions that
are significantly less (~.001 ulps).

On Athlons, for uniformly distributed args in [-2pi, 2pi], this gives
overall speed increases in the 10-20% range, despite giving a speed
decrease of typically 19% (from 31 cycles up to 37) for sinf() on args
in [-pi/4, pi/4].
2005-11-28 04:58:57 +00:00
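
The new kernel's shape (coefficients as I read them from k_cosf.c's
comments; note that C0 is close to, but deliberately not exactly,
-0.5):

    static const double
    C0 = -0.499999997251031003120,
    C1 =  0.0416666233237390631894,
    C2 = -0.00138867637746099294692,
    C3 =  0.0000243904487962774090654;

    float
    __kernel_cosdf_sketch(double x)
    {
            double r, w, z;

            z = x*x;
            w = z*z;
            r = C2 + z*C3;
            return (((1.0 + z*C0) + w*C1) + (w*z)*r);
    }
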
bde
4417000483 Minor cleanups and optimizations:
- Remove dead code that I forgot to remove in the previous commit.

- Calculate the sum of the lower terms of the polynomial (divided by
  x**5) in a single expression (sum of odd terms) + (sum of even terms)
  with parentheses to control grouping.  This is clearer and happens to
  give better instruction scheduling for a tiny optimization (an
  average of about ~0.5 cycles/call on Athlons).

- Calculate the final sum in a single expression with parentheses to
  control grouping too.  Change the grouping from
  first_term + (second_term + sum_of_lower_terms) to
  (first_term + second_term) + sum_of_lower_terms.  Normally the first
  grouping must be used for accuracy, but extra precision makes any
  grouping give a correct result so we can group for efficiency.  This
  is a larger optimization (average 3-4 cycles/call or 5%).

- Use parentheses to indicate that the C order of left to right evaluation
  is what is wanted (for efficiency) in a multiplication too.

The old fdlibm code has several optimizations related to these.  2
involve doing an extra operation that can be done almost in parallel
on some superscalar machines but are pessimizations on sequential
machines.  Others involve statement ordering or expression grouping.
All of these except the ordering for combining the sums of the odd
and even terms seem to be ideal for Athlons, but parallelism is still
limited so all of these optimizations combined together with the ones
in this commit save only ~6-8 cycles (~10%).

On an AXP, tanf() on uniformly distributed args in [-2pi, 2pi] now
takes 39-59 cycles.  I don't know of any more optimizations for tanf()
short of writing it all in asm with very MD instruction scheduling.
Hardware fsin takes 122-138 cycles.  Most of the optimizations for
tanf() don't work very well for tan[l]().  fdlibm tan() now takes
145-365 cycles.
2005-11-24 13:48:40 +00:00
joel
7eed0b9958 s/5.5/6.0/ in HISTORY section.
Discussed with:	ru
2005-11-24 09:25:10 +00:00
bde
caae9bf081 Optimized by eliminating the special case for 0.67434 <= |x| < pi/4.
A single polynomial approximation for tan(x) works in infinite precision
up to |x| < pi/2, but in finite precision, to restrict the accumulated
roundoff error to < 1 ulp, |x| must be restricted to less than about
sqrt(0.5/((1.5+1.5)/3)) ~= 0.707.  We restricted it a bit more to
give a safety margin including some slop for optimizations.  Now that
we use double precision for the calculations, the accumulated roundoff
error is in double-precision ulps so it can easily be made almost 2**29
times smaller than a single-precision ulp.  Near x = pi/4 its maximum
is about 0.5+(1.5+1.5)*x**2/3 ~= 1.117 double-precision ulps.

The minimax polynomial needs to be different to work for the larger
interval.  I didn't increase its degree; the old degree is just large
enough to keep the final error less than 1 ulp and increasing the
degree would be a pessimization.  The maximum error is now ~0.80
ulps instead of ~0.53 ulps.

The speedup from this optimization for uniformly distributed args in
[-2pi, 2pi] is 28-43% on athlons, depending on how badly gcc selected
and scheduled the instructions in the old version.  The old version
has some int-to-float conversions that are apparently difficult to schedule
well, but gcc-3.3 somehow did everything ~10 cycles or ~10% faster than
gcc-3.4, with the difference especially large on AXPs.  On A64s, the
problem seems to be related to documented penalties for moving single
precision data to undead xmm registers.  With this version, the speed
in cycles is almost independent of the athlon and gcc version despite
the large differences in instruction selection to use the FPU on AXPs
and SSE on A64s.
2005-11-24 02:04:26 +00:00
bde
1e3150891d Use only double precision for "kernel" tanf (except for returning float).
This is a minor interface change.  The function is renamed from
__kernel_tanf() to __kernel_tandf() so that misuses of it will cause
link errors and not crashes.

This version is a routine translation with no special optimizations
for accuracy or efficiency.  It gives an unimportant increase in
accuracy, from ~0.9 ulps to 0.5285 ulps.  Almost all of the error is
from the minimax polynomial (~0.03 ulps) and the final rounding step
(< 0.5 ulps).  It gives strange differences in efficiency in the -5
to +10% range, with -O1 fairly consistently becoming faster and -O2
slower on AXP and A64 with gcc-3.3 and gcc-3.4.
2005-11-23 14:27:56 +00:00
bde
89ac9def6a Simplified setting up args for __kernel_rem_pio2(). We already have x
with a 24-bit fraction, so we don't need a loop to split it into up to
3 terms with 24-bit fractions.
2005-11-23 03:03:09 +00:00
bde
67ff03dd57 Quick fix for stack buffer overrun in rev.1.13. Oops. The prec == 1
arg to __kernel_rem_pio2() gives 53-bit (double) precision, not single
precision and/or the array dimension like I thought.  prec == 2 is
used in e_rem_pio2.c for double precision although it is documented
to be for 64-bit (extended) precision, and I just reduced it by 1
thinking that this would give the value suitable for 24-bit (float)
precision.  Reducing it 1 more to the documented value for float
precision doesn't actually work (it gives errors of ~0.75 ulps in the
reduced arg, but errors of much less than 0.5 ulps are needed; the bug
seems to be in kernel_rem_pio2.c).  Keep using a value 1 larger than
the documented value but supply an array large enough to hold the extra
unused result from this.

The bug can also be fixed quickly by increasing init_jk[0] in
k_rem_pio2.c from 2 to 3.  This gives behaviour identical to using
prec == 1 except it doesn't create the extra result.  It isn't clear
how the precision bug affects higher precisions.  113-bit (quad) is
the largest precision, so there is no way to use a large precision
to fix it.
2005-11-23 02:06:06 +00:00
bde
d8a5fc0b49 Mess up the "kernel" float trig function .c files with ifdefs so that
they can be #included in other .c files to give inline functions, and
use them to inline the functions in most callers (not in e_lgammaf_r.c).
__kernel_tanf() is too large and complicated for gcc to inline very well.

On Athlons, this gives a speed increase under favourable pipeline
conditions of about 10% overall (larger for AXP, smaller for A64).
E.g., on AXP, sinf() on uniformly distributed args in [-2Pi, 2Pi]
now takes 30-56 cycles; it used to take 45-61 cycles; hardware fsin
takes 65-129.
2005-11-21 04:57:12 +00:00
bde
d96648954f Use double precision to simplify and optimize a long division.
On athlons, this gives a speedup of 10-20% for tanf() on uniformly
distributed args in [-2Pi, 2Pi].  (It only directly applies for 43%
of the args and gives a 16-20% speedup for these (more for AXP than
A64) and this gives an overall speedup of 10-12% which is all that it
should; however, it gives an overall speedup of 17-20% with gcc-3.3
on AXP-A64 by mysteriously affecting cases where it isn't executed.)

I originally intended to use double precision for all internals of
float trig functions and will probably still do this, but benchmarking
showed that converting to double precision and back is a pessimization
in cases where a simple float precision calculation works, so it may
be optimal to switch precisions only when using extra precision is
much simpler.
2005-11-21 00:38:21 +00:00
bde
01155bb235 Restored a cleanup in rev.1.9 that was lost in rev.1.10. 2005-11-20 20:17:04 +00:00
bde
558fb238b1 Moved all the optimizations for |x| <= 9pi/4 from
__ieee754_rem_pio2f() to its 3 callers and manually inlined them.

On Athlons, with favourable compiler flags and optimizations and
favourable pipeline conditions, this gives a speedup of 30-40 cycles
for cosf(), sinf() and tanf() on the range pi/4 < |x| <= 9pi/4, so
these functions are now significantly faster than the hardware trig
functions in many cases.  E.g., in a benchmark with uniformly distributed
x in [-2pi, 2pi], A64 hardware fcos took 72-129 cycles and cosf() took
37-55 cycles.  Out-of-order execution is needed to get both of these
times.  The optimizations in this commit apparently work more by
removing 1 serialization point than by reducing latency.
2005-11-19 02:38:27 +00:00
bde
63ac8a6c5f Removed an unused declaration which was so old that it wasn't a prototype
and thus just broke building at any nonzero WARNS level.

Fixed nearby style bugs.
2005-11-18 05:03:12 +00:00
ru
928d297eeb -mdoc sweep. 2005-11-17 13:00:00 +00:00
bde
5fa6749138 Minor cleanups:
s_cosf.c and s_sinf.c:
Use a non-bogus magic constant for the threshold of pi/4.  It was 2 ulps
smaller than pi/4 rounded down, but its value is not critical so it should
be the result of natural rounding.

s_cosf.c and s_tanf.c:
Use a literal 0.0 instead of an unnecessary variable initialized to
[(float)]0.0.  Let the function prototype convert to 0.0F.

Improved wording in some comments.

Attempted to improve indentation of comments.
2005-11-17 03:53:22 +00:00
bde
c2a2c2b30d Rearranged the optimizations for special cases to reduce the average
number of branches.

Use a non-bogus magic constant for the threshold of pi/4.  It was 2 ulps
smaller than pi/4 rounded down, but its value is not critical so it should
be the result of natural rounding.  Use "<=" comparisons with rounded-
down thresholds for all small multiples of pi/4.

Cleaned up previous commit:
- use static const variables instead of expressions for multiples of pi/2
  to ensure that they are evaluated at compile time.  gcc currently
  evaluates them at compile time but C99 compilers are not required
  to do so.  We want compile time evaluation for optimization and don't
  care about side effects.
- use M_PI_2 instead of a magic constant for pi/2.  We need magic constants
  related to pi/2 elsewhere but not here since we just want pi/2 rounded
  to double and even prefer it to be rounded in the default rounding mode.
  We can depend on the compiler being C99ish enough to round M_PI_2 correctly
  just as much as we depended on it handling hex constants correctly.  This
  also fixes a harmless rounding error in the hex constant.
- keep using expressions n*<value for pi/2> in the initializers for the
  static const variables.  2*M_PI_2 and 4*M_PI_2 are obviously rounded in
  the same way as the corresponding infinite precision expressions for
  multiples of pi/2, and 3*M_PI_2 happens to be rounded like this, so we
  don't need magic constants for the multiples.
- fixed and/or updated some comments.
2005-11-17 02:20:04 +00:00
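
The cleanup reads roughly like this (as in s_sinf.c; the hex comments
give the doubles the expressions round to):

    /* Small multiples of pi/2 rounded to double precision. */
    static const double
    s1pio2 = 1*M_PI_2,              /* 0x3FF921FB, 0x54442D18 */
    s2pio2 = 2*M_PI_2,              /* 0x400921FB, 0x54442D18 */
    s3pio2 = 3*M_PI_2,              /* 0x4012D97C, 0x7F3321D2 */
    s4pio2 = 4*M_PI_2;              /* 0x401921FB, 0x54442D18 */
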
bde
f63f109c0b Fixed some magic numbers.
The threshold for not being tiny was too small.  Use the usual 2**-12
threshold.  This change is not just an optimization, since the general
code that we fell into has accuracy problems even for tiny x.  Avoiding
it fixes 2*1366 args with errors of more than 1 ulp, with a maximum
error of 1.167 ulps.

The magic number 22 is log(DBL_EPSILON)/2 plus slop.  This is bogus
for float precision.  Use 9 (~log(FLT_EPSILON)/2 plus less slop than
for double precision).  The code for handling the interval
[2**-28, 9_was_22] has accuracy problems even for [9, 22], so this
change happens to fix errors of more than 1 ulp in about 2*17000
cases.  It leaves such errors in about 2*1074000 cases, with a max
error of 1.242 ulps.

The threshold for switching from returning exp(x)/2 to returning
exp(x/2)^2/2 was a little smaller than necessary.  As for coshf(),
This was not quite harmless since the exp(x/2)^2/2 case is inaccurate,
and fixing it avoids accuracy problems in 2*6 cases, leaving problems
in 2*19997 cases.

Fixed naming errors in pseudo-code in comments.
2005-11-13 00:41:46 +00:00
bde
3f7e4f1538 Fixed some magic numbers.
The threshold for not being tiny was confusing and too small.  Use the
usual 2**-12 threshold and simplify the algorithm slightly so that
this threshold works (now use the threshold for sinhf() instead of one
for 1+expm1()).  This is just a small optimization.

The magic number 22 is log(DBL_EPSILON)/2 plus slop.  This is bogus
for float precision.  Use 9 (~log(FLT_EPSILON)/2 plus less slop than
for double precision).

The threshold for switching from returning exp(x)/2 to returning
exp(x/2)^2/2 was a little smaller than necessary.  This was not quite
harmless since the exp(x/2)^2/2 case is inaccurate.  Fixing it happens
to avoid accuracy problems for 2*6 of the 2*151 args that were handled
by the exp(x)/2 case.  This leaves accuracy problems for about 2*19997
args near the overflow threshold (~89); the maximum error there is
2.5029 ulps.

There are also accuracy problems for args in +-[0.5*ln2, 9] -- 2*188885
args with errors of more than 1 ulp, with a maximum error of 1.384 ulps.

Fixed a syntax error and naming errors in pseudo-code in comments.
2005-11-13 00:08:23 +00:00
bde
1bfd712b60 Improved comments for the minimax polynomial.
Removed an unused variable.

Fixed some wrong comments and some nearby misformatting.
2005-11-12 20:06:04 +00:00
bde
fae8bfd4c4 Tweaked the minimax polynomial and improved its comments. 2005-11-12 19:56:35 +00:00
bde
03391287df Improved comments for the minimax polynomial. 2005-11-12 19:54:45 +00:00
bde
6e7cfb2c91 As for the float trig functions, use a minimax polynomial that is
specialized for float precision.  The new polynomial has degree 8
instead of 14, and a maximum error of 2**-34.34 (absolute) instead of
2**-30.66.  This doesn't affect the final error significantly; the
maximum error was and is about 0.8879 ulps on amd64 -O1.

The fdlibm expf() is not used on i386's (the "optimized" asm version
is used), but probably should be since it was already significantly
faster than the asm version on athlons.  The asm version has the
advantage of being more accurate, so keep using it for now.
2005-11-12 18:20:09 +00:00
bde
9f37514a12 As for __kernel_cosf() and __kernel_sinf(), use a fairly optimal minimax
polynomial for __kernel_tanf().  The old one was the double-precision
polynomial with coefficients truncated to float.  Truncation is not
a good way to convert minimax polynomials to lower precision.  Optimize
for efficiency and use the lowest-degree polynomial that gives a
relative error of less than 1 ulp.  It has degree 13 instead of 27,
and happens to be 2.5 times more accurate (in infinite precision) than
the old polynomial (the maximum error is 0.017 ulps instead of 0.041
ulps).

Unlike for cosf and sinf, the old accuracy was close to being inadequate
-- the polynomial for double precision has a max error of 0.014 ulps
and nearly this small an error is needed.  The new accuracy is also a
bit small, but exhaustive checking shows that even the old accuracy
was enough.  The increased accuracy reduces the maximum relative error
in the final result on amd64 -O1 from 0.9588 ulps to 0.9044 ulps.
2005-11-10 17:43:49 +00:00
bde
35f17c1d45 Detach k_rem_pio2f.c from the build since it is now unused. It is a libm
internal so this shouldn't cause version problems.
2005-11-06 17:59:40 +00:00
bde
e016ebc9a1 Use a 53-bit approximation to pi/2 instead of a 33+53 bit one for the
special case pi/4 <= |x| < 3*pi/4.  This gives a tiny optimization (it
saves 2 subtractions, which are scheduled well so they take a whole 1
cycle extra on an AthlonXP), and simplifies the code so that the
following optimization is not so ugly.

Optimize for the range 3*pi/4 < |x| < 9*pi/2 in the same way.  On
Athlon{XP,64} systems, this gives a 25-40% optimization (depending a
lot on CFLAGS) for the cosf() and sinf() consumers on this range.
Relative to i387 hardware fcos and fsin, it makes the software versions
faster in most cases instead of slower in most cases.  The relative
optimization is smaller for tanf() since the inefficient part is elsewhere.

The 53-bit approximation to pi/2 is good enough for pi/4 <= |x| <
3*pi/4 because after losing up to 24 bits to subtraction, we still
have 29 bits of precision and only need 25 bits.  Even with only 5
extra bits, it is possible to get perfectly rounded results starting
with the reduced x, since if x is nearly a multiple of pi/2 then x is
not near a half-way case and if x is not nearly a multiple of pi/2
then we don't lose many bits.  With our intentionally imperfect rounding
we get the same results for cosf(), sinf() and tanf() as without this
optimization.
2005-11-06 17:48:02 +00:00
bde
0ec5232d0c The logb() functions are not just ieee754 "test" functions, but are
standard in C99 and POSIX.1-2001+.  They are also not deprecated, since
apart from being standard they can handle special args slightly better
than the ilogb() functions.

Move their documentation to ilogb.3.  Try to use consistent and improved
wording for both sets of functions.  All of ieee854, C99 and POSIX
have better wording and more details for special args.

Add history for the logb() functions and ilogbl().  Fix history for
ilogb().
2005-11-06 12:18:27 +00:00
bde
ea9959fde3 Moved the optimization for tiny x from __kernel_tan[f](x) to tan[f](x)
so that it can be faster for tiny x and avoided for reduced x.

This improves things a little differently than for cosine and sine.
We still need to reclassify x in the "kernel" functions, but we get
an extra optimization for tiny x, and an overall optimization since
tiny reduced x rarely happens.  We also get optimizations for space
and style.  A large block of poorly duplicated code to fix a special
case is no longer needed.  This supersedes the fixes in k_sin.c revs
1.9 and 1.11 and k_sinf.c 1.8 and 1.10.

Fixed wrong constant for the cutoff for "tiny" in tanf().  It was
2**-28, but should be almost the same as the cutoff in sinf() (2**-12).
The incorrect cutoff protected us from the bugs fixed in k_sinf.c 1.8
and 1.10, except 4 cases of reduced args passed the cutoff and needed
special handling in theory although not in practice.  Now we essentially
use a cutoff of 0 for the case of reduced args, so we now have 0 special
args instead of 4.

This change makes no difference to the results for sinf() (since it
only changes the algorithm for the 4 special args and the results for
those happen not to change), but it changes lots of results for sin().
Exhaustive testing is impossible for sin(), but exhaustive testing
for sinf() (relative to a version with the old algorithm and a fixed
cutoff) shows that the changes in the error are either reductions or
from 0.5-epsilon ulps to 0.5+epsilon ulps.  The new method just uses
some extra terms in approximations so it tends to give more accurate
results, and there are apparently no problems from having extra
accuracy.  On amd64 with -O1, on all float args the error range in ulps
is reduced from (0.500, 0.665] to [0.335, 0.500) in 24168 cases and
increased from 0.500-epsilon to 0.500+epsilon in 24 cases.  Non-
exhaustive testing by ucbtest shows no differences.
2005-11-02 14:01:45 +00:00
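
In the callers, the moved optimization looks roughly like this (sketch
after s_sinf.c; 0x39800000 is 2**-12 in float bits):

    GET_FLOAT_WORD(hx, x);
    ix = hx & 0x7fffffff;
    if (ix <= 0x3f490fda) {                 /* |x| ~<= pi/4 */
            if (ix < 0x39800000)            /* |x| < 2**-12 */
                    if (((int)x) == 0)
                            return (x);     /* x with inexact if x != 0 */
            return (__kernel_sindf(x));
    }
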
bde
728b935c7f Updated the comment about the optimization for tiny x (the previous
commit moved it).  This includes a comment that the "kernel" sine no
longer works on arg -0, so callers must now handle this case.  The kernel
sine still works on all other tiny args; without the optimization it is
just a little slower on these args.  I intended it to keep working on
all tiny args, but that seems to be impossible without losing efficiency
or accuracy.  (sin(x) ~ x * (1 + S1*x**2 + ...) would preserve -0, but
the approximation must be written as x + S1*x**3 + ... for accuracy.)
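
A sketch of the caller-side handling (illustrative, not the committed
code; 0x3e400000 is the high word of 2**-27):

        if (ix < 0x3e400000)            /* |x| < 2**-27 */
                if ((int)x == 0)        /* raise inexact iff x != 0 */
                        return x;       /* preserves the sign of -0 */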
2005-11-02 13:06:49 +00:00
bde
481c63491c Removed dead code for handling tan[f]() on odd multiples of pi/2. This
case never occurs since pi/2 is irrational so no multiple of it can
be represented as a float and we have precise arg reduction so we never
end up with a remainder of 0 in the "kernel" function unless the
original arg is 0.

If this case occurs, then we would now fall through to general code
that returns +-Inf (depending on the sign of the reduced arg) instead
of forcing +Inf.  The correct handling would be to return NaN since
we would have lost so much precision that the correct result can be
anything _except_ +-Inf.

Don't reindent the else clause left over from this, although it was already
bogusly indented ("if (foo) return; else ..." just marches the indentation
to the right), since it will be removed too.

Index: k_tan.c
===================================================================
RCS file: /home/ncvs/src/lib/msun/src/k_tan.c,v
retrieving revision 1.10
diff -r1.10 k_tan.c
88,90c88
< 			if (((ix | low) | (iy + 1)) == 0)
< 				return one / fabs(x);
< 			else {
---
> 			{
2005-11-02 06:45:21 +00:00
bde
d568fc134a Fixed some of the silliness related to rev.1.8. In 1.8, "double" in
a declaration was not translated to "float" although bit fiddling on
double variables was translated.  This resulted in garbage being put
into the low word of one of the doubles instead of non-garbage being
put into the only word of the intended float.  This had no effect on
any result because:
- with doubles, the algorithm for calculating -1/(x+y) is unnecessarily
  complicated.  Just returning -1/((double)x+y) would work, and the
  misdeclaration gave something like that except for messing up some
  low bits with the bit fiddling.
- doubles have plenty of bits to spare so messing up some of the low
  bits is unlikely to matter.
- due to other bugs, the buggy code is reached for a whole 4 args out
  of all 2**32 float args.  The bug fixed by 1.8 only affects a small
  percentage of cases and a small percentage of 4 is 0.  The 4 args
  happen to cause no problems without 1.8, so they are even less likely
  to be affected by the bug in 1.8 than average args; in fact, neither
  1.8 nor this commit makes any difference to the result for these 4
  args (and thus for all args).

Corrections to the log message in 1.8: the bug only applies to tan()
and not tanf(), not because the float type can't represent numbers
large enough to trigger the problem (e.g., the example in the fdlibm-5.3
readme which is > 1.0e269), but because:
- the float type can't represent small enough numbers.  For there to be
  a possible problem, the original arg for tanf() must lie very near an
  odd multiple of pi/2.  Doubles can get nearer in absolute units.  In
  ulps there should be little difference, but ...
- ... the cutoff for "small" numbers is bogus in k_tanf.c.  It is still
  the double value (2**-28).  Since this is 32 times smaller than
  FLT_EPSILON and large float values are not very uniformly distributed,
  only 6 args other than ones that are initially below the cutoff give
  a reduced arg that passes the cutoff (the 4 problem cases mentioned
  above and 2 non-problem cases).

Fixing the cutoff makes the bug affect tanf() and makes it much easier
to detect than for tan().  With a cutoff of 2**-12 on amd64 with -O1, 670102
args pass the cutoff; of these, there are 337604 cases where there
might be an error of >= 1 ulp and 5826 cases where there is such an
error; the maximum error is 1.5382 ulps.

The fix in 1.8 works with the reduced cutoff in all cases despite the
bug in it.  It changes the result in 84492 cases altogether to fix the
5826 broken cases.  Fixing the fix by translating "double" to "float"
changes the result in 42 cases relative to 1.8.  In 24 cases the
(absolute) error is increased and in 18 cases it is reduced, but it
remains less than 1 ulp in all cases.
2005-11-02 05:37:31 +00:00
bde
bce05c8c60 Fixed spelling of remquof() in its prototype. 2005-10-30 12:34:58 +00:00
bde
eb7e930697 Fixed some comments added in rev.1.5.
The log message for 1.5 said that some small (one or two ulp) inaccuracies
were fixed, and a comment implied that the critical change is to switch
the rounding mode to to-nearest, with a switch of the precision to
extended at no extra cost.  Actually, the errors are very large (ucbtest
finds ones of several hundred ulps), and it is the switch of the
precision that is critical.

Another comment was wrong about NaNs being handled sloppily.
2005-10-30 12:21:02 +00:00
bde
26610cfe9b Implement inline functions to give the complex result x+I*y from float
or double args x and y.  x+I*y cannot be used directly yet due to compiler
bugs.

Submitted by:	Steve Kargl <sgk@troutmask.apl.washington.edu>
2005-10-29 17:14:11 +00:00
bde
bbfb40721e Use double precision to simplify and optimize arg reduction for small
and medium size args too: instead of conditionally subtracting a float
17+24, 17+17+24 or 17+17+17+24 bit approximation to pi/2, always
subtract a double 33+53 bit one.  The float version is now closer to
the double version than to old versions of itself -- it uses the same
33+53 bit approximation as the simplest cases in the double version,
and where the float version had to switch to the slow general case at
|x| == 2^7*pi/2, it now switches at |x| == 2^19*pi/2 the same as the
double version.

This speeds up arg reduction by a factor of 2 for |x| between 3*pi/4 and
2^7*pi/4, and by a factor of 7 for |x| between 2^7*pi/4 and 2^19*pi/4.
2005-10-29 16:34:50 +00:00
bde
48aeac9996 Start trying to make the float precision trig functions actually worth
using under FreeBSD.  Before this commit, all float precision functions
except exp2f() were implemented using only float precision, apparently
because Cygnus needed this in 1993 for embedded systems with slow or
inefficient double precision.  For FreeBSD, except possibly on systems
that do floating point entirely in software (very old i386 and now
arm), this just gives a more complicated implementation, many bugs,
and usually worse performance for float precision than for double
precision.  The bugs and worse performance were particularly large in
arg reduction for trig functions.  We want to divide by an approximation
to pi/2 which has as many as 1584 bits, so we should use the widest
type that is efficient and/or easy to use, i.e., double.  Use fdlibm's
__kernel_rem_pio2() to do this as Sun apparently intended.  Cygnus's
k_rem_pio2f.c is now unused.  e_rem_pio2f.c still needs to be separate
from e_rem_pio2.c so that it can be optimized for float args.  Similarly
for long double precision.

This speeds up cosf(x) on large args by a factor of about 2.  Correct
arg reduction on large args is still inherently very slow, so hopefully
these args rarely occur in practice.  There is much more efficiency
to be gained by using double precision to speed up arg reduction on
medium and small float args.
2005-10-29 08:15:29 +00:00
bde
8e62cdabe0 Use fairly optimal minimax polynomials for __kernel_cosf() and
__kernel_sinf().  The old ones were the double-precision polynomials
with coefficients truncated to float.  Truncation is not a good way
to convert minimax polynomials to lower precision.  Optimize for
efficiency and use the lowest-degree polynomials that give a relative
error of less than 1 ulp -- degree 8 instead of 14 for cosf and degree
9 instead of 13 for sinf.  For sinf, the degree 8 polynomial happens
to be 6 times more accurate than the old degree 14 one, but this only
gives a tiny amount of extra accuracy in results -- we just need to
use a degree high enough to give a polynomial whose relative accuracy
in infinite precision (but with float coefficients) is a small fraction
of a float ulp (fdlibm generally uses 1/32 for the small fraction, and
the fraction for our degree 8 polynomial is about 1/600).
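
Schematically, the new kernels evaluate an even polynomial in Horner
form; the C1..C4 coefficients below are placeholders, not the committed
minimax values:

        z = x*x;                /* degree 8 in x = degree 4 in z */
        r = one + z*(C1 + z*(C2 + z*(C3 + z*C4)));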

The maximum relative errors for cosf() and sinf() are now 0.7719 ulps
and 0.7969 ulps, respectively.
2005-10-28 13:36:58 +00:00
bde
96c89ee304 Use a better algorithm for reducing the error in __kernel_cos[f]().
This supersedes the fix for the old algorithm in rev.1.8 of k_cosf.c.

I want this change mainly because it is an optimization.  It helps
make software cos[f](x) and sin[f](x) faster than the i387 hardware
versions for small x.  It is also a simplification, and reduces the
maximum relative error for cosf() and sinf() on machines like amd64
from about 0.87 ulps to about 0.80 ulps.  It was validated for cosf()
and sinf() by exhaustive testing.  Exhaustive testing is not possible
for cos() and sin(), but ucbtest reports a similar reduction for the
worst case found by non-exhaustive testing.  ucbtest's non-exhaustive
testing seems to be good enough to find problems in algorithms but not
maximum relative errors when there are spikes.  E.g., short runs of
it find only 3 ulp error where the i387 hardware cos() has an error
of about 2**40 ulps near pi/2.
2005-10-26 12:36:18 +00:00
bde
d6cdac5f7a More fixes for arg reduction near pi/2 on systems with broken assignment
to floats (mainly i386's).  All errors of more than 1 ulp for float
precision trig functions were supposed to have been fixed; however,
compiling with gcc -O2 uncovered 18250 more such errors for cosf(),
with a maximum error of 1.409 ulps.

Use essentially the same fix as in rev.1.8 of k_rem_pio2f.c (access a
non-volatile variable as a volatile).  Here the -O1 case apparently
worked because the variable is in a 2-element array and it takes -O2
to mess up such a variable by putting it in a register.

The maximum error for cosf() on i386 with gcc -O2 is now 0.5467 (it
is still 0.5650 with gcc -O1).  This shows that -O2 still causes some
extra precision, but the extra precision is now good.

Extra precision is harmful mainly for implementing extra precision in
software.  We want to represent x+y as w+r where both "+" operations
are in infinite precision and r is tiny compared with w.  There is a
standard algorithm for this (Knuth (1981) 4.2.2 Theorem C), and fdlibm
uses this routinely, but the algorithm requires w and r to have the
same precision as x and y.  w is just x+y (calculated in the same
finite precision as x and y), and r is a tiny correction term.  The
i386 gcc bugs tend to give extra precision in w, and then using this
extra precision in the calculation of r results in the correction
mostly staying in w and being missing from r.  There still tends to
be no problem if the result is a simple expression involving w and r
-- modulo spills, w keeps its extra precision and r remains the right
correction for this wrong w.  However, here we want to pass w and r
to extern functions.  Extra precision is not retained in function args,
so w gets fixed up, but the change to the tiny r is tinier, so r almost
remains as a wrong correction for the right w.
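
For reference, the standard algorithm (Knuth's 2Sum) is:

        w = x + y;
        z = w - x;
        r = (x - (w - z)) + (y - z);    /* w + r == x + y exactly */

and it is only exact when every operation above is performed in the
same precision as x and y -- which is just what i386 extra precision
breaks.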
2005-10-25 12:13:37 +00:00
bde
5931c79161 Moved the optimization for tiny x from __kernel_{cos,sin}[f](x) to
{cos,sin}[f](x) so that x doesn't need to be reclassified in the
"kernel" functions to determine if it is tiny (it still needs to be
reclassified in the cosine case for other reasons that will go away).

This optimization is quite large for exponentially distributed x, since
x is tiny for almost half of the domain, but it is a pessimization for
uniformly distributed x since it takes a little time for all cases
but rarely applies.  Arg reduction on exponentially distributed x
rarely gives a tiny x unless the reduction is null, so it is best to
only do the optimization if the initial x is tiny, which is what this
commit arranges.  The immediate result is an average optimization of
1.4% relative to the previous version in a case that doesn't favour
the optimization (double cos(x) on all float x) and a large
pessimization for the relatively unimportant cases of lgamma[f][_r](x)
on tiny, negative, exponentially distributed x.  The optimization should
be recovered for lgamma*() as part of fixing lgamma*()'s low-quality
arg reduction.

Fixed various wrong constants for the cutoff for "tiny".  For cosine,
the cutoff is when x**2/2! == {FLT or DBL}_EPSILON/2.  We round down
to an integral power of 2 (and for cos() reduce the power by another
1) because the exact cutoff doesn't matter and would take more work
to determine.  For sine, the exact cutoff is larger due to the ratio
of terms being x**2/3! instead of x**2/2!, but we use the same cutoff
as for cosine.  We now use a cutoff of 2**-27 for double precision and
2**-12 for single precision.  2**-27 was used in all cases but was
misspelled 2**27 in comments.  Wrong and sloppy cutoffs just cause
missed optimizations (provided the rounding mode is to nearest --
other modes just aren't supported).
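
For concreteness: the quadratic term is below half an ulp of 1 when
x**2/2! < DBL_EPSILON/2 = 2**-53, i.e., when |x| < 2**-26; reducing
the power by one more for cos() gives the 2**-27 used.  For floats,
x**2/2! < FLT_EPSILON/2 = 2**-24 gives |x| < 2**-11.5, and rounding
down to an integral power of 2 gives 2**-12.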
2005-10-24 14:08:36 +00:00
bde
86f27343be Fixed range reduction for large multiples of pi/2 on systems with
broken assignment to floats (e.g., i386 with gcc -O, but not amd64 or
ia64; i386 with gcc -O0 worked accidentally).

Use an unnamed volatile temporary variable to trick gcc -O into clipping
extra precision on assignment.  It's surprising that only 1 place needed
to be changed.
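
The idiom is roughly the following (sketch; the macro name is
illustrative, not part of the commit):

        #define STRICT_ASSIGN(type, lval, rval) do {    \
                volatile type __v = (rval);             \
                (lval) = __v;                           \
        } while (0)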

For tanf() on i386 with gcc -O, the bug caused errors > 1 ulp with a
density of 2.3% for args larger in magnitude than 128*pi/2, with a
maximum error of 1.624 ulps.

After this fix, exhaustive testing shows that range reduction for
floats works as intended assuming that it is within a factor of
about 2^16 of working as intended for doubles.  It provides >= 8
extra bits of precision for all ranges.  On i386:

range                       max error in double/single ulps    extra precision
-----                       -------------------------------    ---------------
0 to 3*pi/4                 0x000d3132  /  0.0016              9+ bits
3*pi/4 to 128*pi/2          0x00160445  /  0.0027              8+
128*pi/2 to +Inf            0x00000030  /  0.00000009          23+
128*pi/2 up, -O0 before fix 0x00000030  /  0.00000009          23+
128*pi/2 up, -O1 before fix 0x10000000  /  0.5                 1

The 23+ bits of extra precision for large multiples corresponds to almost
perfect reduction to a pair of floats (24 extra would be perfect).

After this fix, the maximum relative error (relative to the corresponding
fdlibm double precision function) is < 1 ulp for all basic trig functions
on all 2^32 float args on all machines tested:

          amd64     ia64      i386-O0   i386-O1
	  ------    ------    ------    ------
cosf:     0.8681    0.8681    0.7927    0.5650
sinf:     0.8733    0.8610    0.7849    0.5651
tanf:     0.9708    0.9329    0.9329    0.7035
2005-10-11 07:56:05 +00:00
bde
32945bd185 Fixed range reduction near (but not very near) medium-sized multiples
of pi/2 (1 line) and expand a comment about related magic (many lines).

The bug was essentially the same as for the +-pi/2 case (a mistranslated
mask), but was smaller so it only significantly affected multiples
starting near +-13*pi/2.  At least on amd64, for cosf() on all 2^32
float args, the bug caused 128 errors of >= 1 ulp, with a maximum error
of 1.2393 ulps.
2005-10-10 20:02:02 +00:00
bde
6210e62129 Fix numerous errors of >= 1 ulp for cosf(x) and sinf(x) (1 line)
and add a comment about related magic (many lines).

__kernel_cos[f]() needs a trick to reduce the error to below 1 ulp
when |x| >= 0.3 for the range-reduced x.  Modulo other bugs, naive
code that doesn't use the trick would have an error of >= 1 ulp
in about 0.00006% of cases when |x| >= 0.3 for the unreduced x,
with a maximum relative error of about 1.03 ulps.  Mistranslation
of the trick from the double precision case resulted in errors in
about 0.2% of cases, with a maximum relative error of about 1.3 ulps.

The mistranslation involved not doing implicit masking of the 32-bit
float word corresponding to implicit masking of the lower 32-bit
double word by clearing it.

sinf() uses __kernel_cosf() for half of all cases so its errors from
this bug are similar.  tanf() is not affected.

The error bounds in the above and in my other recent commit messages
are for amd64.  Extra precision for floats on i386's accidentally masks
this bug, but only if k_cosf.c is compiled with -O.  Although the extra
precision helps here, this is accidental and depends on longstanding
gcc precision bugs (not clipping extra precision on assignment...),
and the gcc bugs are mostly avoided by compiling without -O.  I now
develop libm mainly on amd64 systems to simplify error detection and
debugging.
2005-10-09 21:07:23 +00:00
bde
485c06b5bb Oops, the last-minute optimization in rev.1.8 wasn't a good idea. The
17+17+24 bit pi/2 must only be used when subtraction of the first 2
terms in it from the arg is exact.  This happens iff the arg in
bits is one of the 2**17[-1] values on each side of (float)(pi/2).

Revert to the algorithm in rev.1.7 and only fix its threshold for using
the 3-term pi/2.  Use the threshold that maximizes the number of values
for which the 3-term pi/2 is used, subject to not changing the algorithm
for comparing with the threshold.  The 3-term pi/2 ends up being used
for about half of its usable range (about 64K values on each side).
2005-10-09 04:29:08 +00:00
bde
39c85cfe13 Fixed syntax error (a missing brace) in previous commit. 2005-10-08 22:55:36 +00:00
bde
13f78201bb Fixed range reduction near (but not very near) +-pi/2. A bug caused
a maximum error of 2.905 ulps for cosf(), but the algorithm for cosf()
is good for < 1 ulps and happens to give perfect rounding (< 0.5 ulps)
near +-pi/2 except for the bug.  The extra relative errors for tanf()
were similar (slightly larger).  The bug didn't affect sinf() since
sinf'(+-pi/2) is 0.

For range reduction in ~[-3pi/4, -pi/4] and ~[pi/4, 3pi/4] we must
subtract +-pi/2 and the only complication is that this must be done
in extra precision.  We have handy 17+24-bit and 17+17+24-bit
approximations to pi/2.  If we always used the former then we would
lose up to 24 bits of accuracy due to cancelation of leading bits, but
we need to keep at least 24 bits plus a guard digit or 2, and should
keep as many guard bits as efficiency permits.  So we used the
less-precise pi/2 not very near +-pi/2 and switched to using the
more-precise pi/2 very near +-pi/2.  However, we got the threshold for
the switch wrong by allowing 19 bits to cancel, so we ended up with
only 21 or 22 bits of accuracy in some cases, which is even worse than
naively subtracting pi/2 would have done.
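
Schematically, the extra-precision subtraction peels off pi/2 in
chunks (sketch; pio2_1/2/3 stand for the 17-, 17- and 24-bit pieces):

        y = x - pio2_1;         /* exact: the leading bits cancel */
        y = y - pio2_2;
        y = y - pio2_3;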

Exhaustive checking shows that allowing only 17 bits to cancel (min.
accuracy ~24 bits) is sufficient to reduce the maximum error for cosf()
near +-pi/2 to 0.726 ulps, but allowing only 6 bits to cancel (min.
accuracy ~35-bits) happens to give perfect rounding for cosf() at
little extra cost so we prefer that.

We actually (in effect) allow 0 bits to cancel and always use the
17+17+24-bit pi/2 (min. accuracy ~41 bits).  This is simpler and
probably always more efficient too.  Classifying args to avoid using
this pi/2 when it is not needed takes several extra integer operations
and a branch, but just using it takes only 1 FP operation.

The patch also fixes misspelling of 17 as 24 in many comments.

For the double-precision version, the magic numbers include 33+53 bits
for the less-precise pi/2 and (53-32-1 = 20) bits being allowed to
cancel, so there are ~33-20 = 13 guard bits.  This is sufficient except
probably for perfect rounding.  The more-precise pi/2 has 33+33+53
bits and we still waste time classifying args to avoid using it.

The bug is apparently from mistranslation of the magic 32 in 53-32-1.
The number of bits allowed to cancel is not critical and we use 32 for
double precision because it allows efficient classification using a
32-bit comparison.  For float precision, we must use an explicit mask,
and there are fewer bits so there is less margin for error in their
allocation.  The 32 got reduced to 4 but should have been reduced
almost in proportion to the reduction of mantissa bits.
2005-10-08 22:43:55 +00:00
bde
41d865435a Fixed aliasing bugs in TRUNC() by using the fdlibm macros for access
to doubles as bits.  fdlibm-1.1 had similar aliasing bugs, but these
were fixed by NetBSD or Cygnus before a modified version of fdlibm was
imported in 1994.  TRUNC() is only used by tgamma() and some
implementation-detail functions.  The aliasing bugs were detected by
compiling with gcc -O2 but don't seem to have broken tgamma() on i386's
or amd64's.  They broke my modified version of tgamma().

Moved the definition of TRUNC() to mathimpl.h so that it can be fixed
in one place, although the general version is even slower than necessary
because it has to operate on pointers to volatiles to handle its arg
sometimes being volatile.  Inefficiency of the fdlibm macros slows
down libm generally, and tgamma() is a relatively unimportant part of
libm.  The macros act as if on 32-bit words in memory, so they are
hard to optimize to direct actions on 64-bit double registers for
(non-i386) machines where this is possible.  The optimization is too
hard for gcc on amd64's, and declaring variables as volatile makes it
impossible.
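
The fdlibm accessors are union-based, which is what makes them safe
under -O2; a sketch of the pattern (field order is endian-dependent):

        typedef union {
                double value;
                struct {
                        u_int32_t msw;
                        u_int32_t lsw;
                } parts;
        } ieee_double_shape_type;

        #define GET_HIGH_WORD(i, d) do {        \
                ieee_double_shape_type gh_u;    \
                gh_u.value = (d);               \
                (i) = gh_u.parts.msw;           \
        } while (0)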
2005-09-19 11:28:19 +00:00
das
665ea151e9 Add a missing ldexpf() alias for amd64.
Noticed by:	bz@, tjr@
2005-09-12 20:54:00 +00:00
kensmith
f97f77429f Bump the shared library version number of all libraries that have not
been bumped since RELENG_5.

Reviewed by:	ru
Approved by:	re (not needed for commit check but in principle...)
2005-07-22 17:19:05 +00:00
ru
0c80b11e62 Markup nit.
Approved by:	re (blanket)
2005-06-16 21:56:03 +00:00
ru
3f3ef36f49 Fixed compile warning.
Approved by:	re (blanket)
2005-06-16 21:55:45 +00:00
ru
38fc91ca96 Assorted markup fixes.
Approved by:	re
2005-06-15 19:04:04 +00:00
deischen
5d3cf26519 Prevent these functions from using stack outside of their frame.
Reported by:	Marc Olzheim <marcolz at stack dot nl>
OK'd by:	das
2005-05-06 15:44:20 +00:00
stefanf
16dd1d18f5 Revert the last change, the conversion from long double to double can raise
unwanted underflow exceptions.

Pointed out by:	das
2005-04-28 19:45:55 +00:00
stefanf
13322eaf9b Use double additions to raise the inexact exception to work around problems
with long double addition on sparc64.
2005-04-22 09:57:55 +00:00
stefanf
77782516eb Fix raising the inexact exception (FE_INEXACT) if the result differs from the
argument.

Noticed by:	das
2005-04-22 08:30:33 +00:00
ache
60e7539065 Fix truncl.3 MLINKS 2005-04-17 19:57:52 +00:00
das
9c49c2a65a More optimized math functions. 2005-04-16 21:12:55 +00:00
das
a9fd105354 Implement truncl() based on floorl(). 2005-04-16 21:12:47 +00:00
das
15bd306d7a Add roundl(), lroundl(), and llroundl(). 2005-04-08 01:24:08 +00:00
das
4df744a471 These files should include s_lround.c instead of s_lrint.c.
This only matters for efficiency, not for correctness.
2005-04-08 00:52:27 +00:00
das
9977034544 Fix a (coincidentally harmless) bug. 2005-04-08 00:52:16 +00:00
das
bb5d9cb768 Fix a long-standing bug in k_rem_pio2(), which led to large errors when
tanf() was called with big arguments close to multiples of pi/2.

Reported by:	ucbtest via bde
2005-04-05 23:27:47 +00:00
das
1db6f984bc Build exp2(), exp2f(), and related documentation. 2005-04-05 02:57:39 +00:00
das
1324b71e9d Document exp2() and exp2f(), and make other minor tweaks and updates. 2005-04-05 02:57:28 +00:00
das
bf28283937 Implement exp2() and exp2f(). 2005-04-05 02:57:15 +00:00
das
da9b203aaf Implement and document remquo() and remquof(). 2005-03-25 04:40:44 +00:00
das
79b831e3a1 Fix the double rounding problem with subnormals, and
remove the XXX comments, which no longer apply.
2005-03-18 02:27:59 +00:00
das
9f8ee2b273 Add missing prototypes for fma() and fmaf(), and remove an inaccurate
comment.
2005-03-18 01:47:42 +00:00
das
fdf53809bb Make the fenv.h routines work for programs that use SSE for
floating-point arithmetic on i386.  Now I'm going to make excuses
for why this code is kinda scary:

- To avoid breaking the ABI with 5.3-RELEASE, we can't change
  sizeof(fenv_t).  I stuck the saved mxcsr in some discontiguous
  reserved bits in the existing structure.

- Attempting to access the mxcsr on older processors results
  in an illegal instruction exception, so support for SSE must
  be detected at runtime.  (The extra baggage is optimized away
  if either the application or libm is compiled with -msse{,2}.)

I didn't run tests to ensure that this doesn't SIGILL on older 486's
lacking the cpuid instruction or on other processors lacking SSE.
Results from running the fenv regression test on these processors
would be appreciated.  (You'll need to compile the test with
-DNO_STRICT_DFL_ENV.)  If you have an 80386, or if your processor
supports SSE but the kernel didn't enable it, then you're probably out
of luck.

Also, I un-inlined some of the functions that grew larger as a result
of this change, moving them from fenv.h to fenv.c.
2005-03-17 22:21:46 +00:00
das
5b7d321e53 Spell 'fedisableexcept' correctly. 2005-03-16 22:34:14 +00:00
das
bfdcd78bfc Document feenableexcept(), fedisableexcept(), and fegetexcept(). 2005-03-16 19:04:28 +00:00
das
6448887f3b Replace fegetmask() and fesetmask() with feenableexcept(),
fedisableexcept(), and fegetexcept().  These two sets of routines
provide the same functionality.  I implemented the former as an
undocumented internal interface to make the regression test easier to
write.  However, fe(enable|disable|get)except() is already part of
glibc, and I would like to avoid gratuitous differences.  The only
major flaw in the glibc API is that there's no good way to report
errors on processors that don't support all the unmasked exceptions.
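
Typical use (sketch):

        #include <fenv.h>

        /* Returns the previous mask, or -1 if the FPU cannot unmask
           the requested exceptions. */
        if (feenableexcept(FE_DIVBYZERO | FE_INVALID) == -1)
                ;       /* the FPU can't unmask what we asked for */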
2005-03-16 19:03:46 +00:00
das
1e57fa37f5 Replace strong references with weak references. There's no
particularly good reason to do this, except that __strong_reference
does type checking, whereas __weak_reference does not.
On Alpha, the compiler won't accept a 'long double' parameter in
place of a 'double' parameter even though the two types are
identical.
2005-03-07 21:27:37 +00:00
stefanf
1376e3369e Remove an obsolete sentence from a comment. 2005-03-07 20:28:26 +00:00
das
59658f6dc5 - If z is 0, one of x or y is 0, and the other is infinite, raise
an invalid exception and return a NaN.
- If a long double has 113 bits of precision, implement fma in terms
  of simple long double arithmetic instead of complicated double arithmetic.
- If a long double is the same as a double, alias fma as fmal.
2005-03-07 05:02:09 +00:00
das
5cd14bf8eb Document scalbnl and scalblnl. 2005-03-07 05:00:44 +00:00
das
c4cf2622dd Document nextafterl and nexttoward{,f,l}. 2005-03-07 05:00:29 +00:00
das
4e4746e6f5 Add nexttoward to the list of implemented functions, and explicitly
list the four that are still missing.
2005-03-07 04:59:53 +00:00
das
9331bfcabd Document fmal. 2005-03-07 04:59:43 +00:00
das
60d1f35832 Remove ldexp and ldexpf. The former is in libc, and the latter is
identical to scalbnf, which is now aliased as ldexpf.  Note that the
old implementations made the mistake of setting errno and were the
only libm routines to do so.
2005-03-07 04:59:30 +00:00
das
8892e8e916 - Remove s_ldexpf.c (now aliased to scalbn.)
- Add nexttoward{,f,l} and nextafterl.  On all platforms,
  nexttowardl is an alias for nextafterl.
- Add fmal.
- Add man pages for new routines: fmal, nextafterl,
  nexttoward{,f,l}, scalb{,l}nl.

Note that on platforms where long double is the same as double, we
generally just alias the double versions of the routines, since doing
so avoids extra work on the source code level and redundant code in
the binary.  In particular:

		ldbl53		ldbl64/113
fmal       	s_fma.c		s_fmal.c
ldexpl     	s_scalbn.c	s_scalbnl.c
nextafterl 	s_nextafter.c	s_nextafterl.c
nexttoward 	s_nextafter.c	s_nexttoward.c
nexttowardf	s_nexttowardf.c	s_nexttowardf.c
nexttowardl	s_nextafter.c	s_nextafterl.c
scalbnl    	s_scalbn.c	s_scalbnl.c
2005-03-07 04:59:11 +00:00
das
fd680f0398 - Define FP_FAST_FMA for sparc64, since fma() is now implemented using
sparc64's 128-bit long doubles.
- Define FP_FAST_FMAL for ia64.
- Prototypes for fmal, frexpl, ldexpl, nextafterl, nexttoward{,f,l},
  scalblnl, and scalbnl.
2005-03-07 04:58:43 +00:00
das
adcae0d9cf Alias scalbn as ldexpl and scalbnl on platforms where long double is
the same as double.
2005-03-07 04:58:03 +00:00
das
6bde47de78 - Implement scalblnl.
- In scalbln and scalblnf, check the bounds of the second argument.
  This is probably unnecessary, but strictly speaking, we should
  report an error if someone tries to compute scalbln(x, INT_MAX + 1ll).
2005-03-07 04:57:50 +00:00
das
be070dc174 Implement nexttowardf. This is used on both platforms with 11-bit
exponents and platforms with 15-bit exponents for long doubles.
2005-03-07 04:57:38 +00:00
das
60fe3744a1 Implement nexttoward and nextafterl; the latter is also known as
nexttowardl.  These are not needed on machines where long doubles
look like IEEE-754 doubles, so the implementation only supports
the usual long double formats with 15-bit exponents.

Anything bizarre, such as machines where floating-point and integer
data have different endianness, will cause problems.  This is the case
with big endian ia64 according to libc/ia64/_fpmath.h.  Please contact
me if you managed to get a machine running this way.
2005-03-07 04:56:46 +00:00
das
e1ac3a8c05 - Try harder to trick gcc into not optimizing away statements
that are intended to raise underflow and inexact exceptions.
- On systems where long double is the same as double, nextafter
  should be aliased as nexttoward, nexttowardl, and nextafterl.
2005-03-07 04:55:58 +00:00
das
e08a3e75ca Implement frexpl. 2005-03-07 04:54:51 +00:00
das
2474fb3758 Alias frexp as frexpl on platforms where a long double is the same as
a double.
2005-03-07 04:54:39 +00:00
das
c8e0555e08 Implement fmal. 2005-03-07 04:54:20 +00:00
das
8ebc6e4b38 - Define the LDBL_PREC to be the number of significant bits in a long
double's mantissa.
- Add an assembly version of fmal.
2005-03-07 04:54:02 +00:00
das
70073cd00d - Define the LDBL_PREC to be the number of significant bits in a long
double's mantissa.
- Add an assembly version of scalbnl.
2005-03-07 04:53:48 +00:00
das
ed73924a0b Define the LDBL_PREC to be the number of significant bits in a long
double's mantissa.
2005-03-07 04:53:36 +00:00
das
69b60bd975 Add an assembly version of fmal. 2005-03-07 04:53:11 +00:00
das
4a2bef4123 Add scalbnl, also known as as ldexpl. 2005-03-07 04:52:58 +00:00
das
e67e9ee139 Alias scalbnf as ldexpf. The two are identical in binary
floating-point formats.
2005-03-07 04:52:43 +00:00
das
6ea772039c Fix a mistake in the exponent range. 2005-03-06 19:08:18 +00:00
das
062f662fe2 Work around a gcc bug. This fixes feholdexcept() et al. at -O1.
Symptoms of the problem included assembler warnings and
nondeterministic runtime behavior when a fe*() call that affects the
fpsr is closely followed by a floating-point op.

The bug (at least, I think it's a bug) is that gcc does not insert a
break between a volatile asm and a dependent instruction if the
volatile asm came from an inlined function.  Volatile asms seem to be
fine in other circumstances, even without -mvolatile-asm-stop, so
perhaps the compiler adds the stop bits before inlining takes place.
The problem does not occur at -O0 because inlining is disabled, and it
doesn't happen at -O2 because -fschedule-insns2 knows better.
2005-03-05 20:34:45 +00:00
das
ac2f0fe744 Un-document the non-extant exp10() and exp10f() functions.
exp10() was a casualty of the transition away from the VAX.
2005-02-26 08:54:45 +00:00
das
ba363997fb Revert rev 1.8, which causes small (e.g. 2 ulp) errors for some
inputs.  The trouble with replacing two floats with a double is that
the latter has 6 extra bits of precision, which actually hurts
accuracy in many cases.  All of the constants are optimal when float
arithmetic is used, and would need to be recomputed to do this right.

Noticed by:	bde (ucbtest)
2005-02-24 06:32:13 +00:00
das
347e711324 Use hardware instructions for sqrt() and sqrtf(). 2005-02-21 18:27:57 +00:00
das
c082951462 Use double arithmetic instead of simulating it with two floats. This
results in a performance gain on the order of 10% for amd64 (sledge),
ia64 (pluto1), i386+SSE (Pentium 4), and sparc64 (panther), and a
negligible improvement for i386 without SSE.  (The i386 port still
uses the hardware instruction, though.)
2005-02-21 17:44:57 +00:00
das
0ac8896337 Remove the i387 versions of atan(), atan2(), and atan2f().
They are slower than the MI routines on modern hardware,
except for degenerate cases such as the Pentium 4.

PR:		67469
2005-02-21 16:04:23 +00:00
das
967bb5dcb0 Remove i387 versions of asin() and acos(). Although the hardware
instruction was faster on the 486, it's slower than our MI version on
modern processors.

Determined by:	bde
PR:		67469
2005-02-20 22:51:08 +00:00
das
ef7a10667b Remove the float versions of the i387 trig functions obtained from
NetBSD.  They're buggy, giving particularly inaccurate results for inputs larger in
magnitude than 2**63.

Noticed by:	bde
PR:		67469
2005-02-20 22:50:40 +00:00
das
2992840cda Fix a small scripting snafu in the previous revision. 2005-02-04 20:05:39 +00:00
das
24d2516dd7 Remove another vestige of support for a non-IEEE libm. 2005-02-04 18:32:13 +00:00
das
a47af911ba Reduce diffs against vendor source (Sun fdlibm 5.3). 2005-02-04 18:26:06 +00:00
das
9aed1e79d6 Move machine-dependent crud to its own makefile. 2005-02-04 14:33:39 +00:00
das
ec83c7685d Remove wrappers and other cruft intended to support SVID, mistakes in
C90, and other arcana.  Most of these features were never fully
supported or enabled by default.

Ok:	bde, stefanf
2005-02-04 14:08:32 +00:00
ru
de77cf0b40 Typo. 2005-01-28 21:14:16 +00:00
ru
e1ad6e61cc Properly terminate sentence. 2005-01-28 21:13:34 +00:00
das
783f4bf0c2 - Move the functions presently described in in ieee(3) to their own
manpages.  They are not very related, so separating them makes it
  easier to add meaningful cross-references and extend some of the
  descriptions.
- Move the part of math(3) that discusses IEEE 754 to the ieee(3)
  manpage.
2005-01-27 05:46:17 +00:00
cognet
fa9ea53805 Define FE_TONEAREST, FE_TOWARDZERO, FE_UPWARD, FE_DOWNWARD and _ROUND_MASK to
unbreak the build for arm.
2005-01-24 00:35:02 +00:00
das
d7cc82f1f2 Update comment to reflect the code change in the previous revision.
Noticed by:	ceri
2005-01-23 22:56:08 +00:00
das
df543e85ea Many changes, including the following major ones:
- Rearrange the list of functions into categories.
- Remove the ulps column.  It was appropriate for only some
  of the functions in the list, and correct for even fewer
  of them.
- Add some new paragraphs, and remove some old ones about
  NaNs that may do more harm than good.
- Document precisions other than double-precision.
2005-01-23 22:05:33 +00:00
das
e52e82773b If x == y, return y, not x. C99 (though not IEEE 754) requires that
nextafter(+0.0, -0.0) returns -0.0 and nextafter(-0.0, +0.0) returns +0.0.
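
The fix is the one-liner the first sentence describes (sketch):

        if (x == y)
                return (y);     /* the signed zero of the target wins */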
2005-01-23 15:46:22 +00:00
das
d55cbb5c10 Add fma() and fmaf(), which implement a fused multiply-add operation. 2005-01-22 09:53:18 +00:00
ru
ed72feddcb Sort sections. 2005-01-20 09:17:07 +00:00
ru
8f12d81a1c Use the \*(If string provided by mdoc(7), to represent infinity. 2005-01-16 16:49:10 +00:00
ru
9bccb183eb Removed redundant .br call. 2005-01-16 16:46:14 +00:00
das
64168d93e5 amd64 assembly versions of sqrt(), lrint(), and llrint() using SSE2. 2005-01-15 03:32:28 +00:00
das
c7466a47ff Most libm routines depend on the rounding mode and/or set exception
flags, so they are not pure.  Remove the __pure2 annotation from them.
I believe that the following routines and their float and long double
counterparts are the only ones here that can be __pure2:

	copysign is* fabs finite fmax fmin fpclassify ilogb nan signbit

When gcc supports FENV_ACCESS, perhaps there will be a new annotation
that allows the other functions to be considered pure when FENV_ACCESS
is off.

Discussed with:	bde
2005-01-15 02:55:10 +00:00
das
3e538fd2f7 Braino. Revert rev 1.50.
Pointy hat to:	das
2005-01-15 00:37:31 +00:00
das
df8ce09fbd Remove numerous references to VAX floating-point and the setting of
errno, replacing them with a discussion of IEEE exceptions where
appropriate.  Cross-reference fenv(3) whenever exceptions are
mentioned.
2005-01-14 23:28:28 +00:00
das
d6994ae2c3 Set math_errhandling to MATH_ERREXCEPT. Now that we have fenv.h, we
basically support this, subject to gcc's lack of FENV_ACCESS support.
In any case, the previous setting of math_errhandling to 0 is not
allowed by POSIX.
2005-01-14 22:03:27 +00:00
das
f5638fda1e Remove some #if 0'd code. 2005-01-14 21:51:46 +00:00
ru
8ef2048534 Tiny markup nits. 2005-01-14 09:12:05 +00:00
das
4ec986eab3 Mark all inline asms that read the floating-point control or status
registers as volatile.  Instructions that *wrote* to FP state were
already marked volatile, but apparently gcc has license to move
non-volatile asms past volatile asms.  This broke amd64's feupdateenv
at -O2 due to a WAR conflict between fnstsw and fldenv there.
2005-01-14 07:09:23 +00:00
stefanf
17c8f614b7 Fixed too many of "the", and enclose multi-word argument in double quotes.
Obtained from:	ru
2005-01-13 20:33:42 +00:00
das
20067523af Import the subset of J.T. Conklin's single-precision x86-optimized
math routines that appear to be (a) correct and (b) faster than their
MI counterparts on my Pentium 4.

Obtained from:	NetBSD
2005-01-13 18:58:25 +00:00
das
5670c96a33 The isnormal() in rev 1.2 should have been isfinite() so subnormals
round correctly.

Noticed by:	stefanf
2005-01-13 15:43:41 +00:00
das
ed0817dc30 Things that are broken, unneeded, and unused since 1997 belong in the attic. 2005-01-13 15:43:22 +00:00
ru
d9ecb772db Markup nits. 2005-01-13 10:43:01 +00:00
ru
a14720c679 Fixed too many of "the", and enclose multi-word argument in double quotes. 2005-01-13 09:35:47 +00:00
stefanf
1bca40ec75 Implement and document ceill(). 2005-01-13 09:11:41 +00:00
stefanf
8ce754f4eb Bump .Dd for the last commit. 2005-01-13 09:08:16 +00:00
stefanf
86ef5da3d7 Hook up and document floorl(). 2005-01-12 22:16:26 +00:00
stefanf
9704cf1a67 Implement floorl(). 2005-01-12 22:10:46 +00:00
stefanf
2f05f40a83 Whitespace nit. 2005-01-12 22:05:41 +00:00
das
75bc489b6d Add MI implementations of [l]lrint[f]() and [l]lround[f]().
Discussed with:	bde
2005-01-11 23:12:55 +00:00
das
0a78d59d1f Document [l]lrint[f]() and [l]lround[f](). 2005-01-11 23:12:17 +00:00
das
1426450140 Faster lrint() and llrint() implementations for x86. 2005-01-11 23:10:53 +00:00
das
850b1bf882 Mark inline stmxcsr instructions as volatile, since this appears to be
the only way to convince gcc that they read the MXCSR.  The volatile
annotation may be needed elsewhere as well.
2005-01-11 22:10:43 +00:00
ru
5384a04b6a Scheduled mdoc(7) sweep. 2005-01-11 20:50:51 +00:00
ru
47082de5c6 Sanitize the markup, as prompted. 2005-01-11 20:16:03 +00:00
das
0ca0cdb376 GC unused declaration 2004-12-16 20:40:49 +00:00
das
8daeb2e028 Cosmetic changes only:
- style
- remove unused variables
- de-support VAX

Inspired by:	bin/42388
2004-12-16 20:40:37 +00:00
das
62f7d2f10d More updates for math(3):
- Make some minor rearrangements in the introduction.
- Mention the problem with argument reduction on i386.
- Add recently-implemented functions to the table.
- Un-document the error bounds that only apply to the old 4BSD math
  library, and fill in the correct values where I know them.  No
  attempt has been made to document bounds lower than 1 ulp, although
  smaller bounds are usually achievable in round-to-nearest mode.
2004-10-11 20:13:52 +00:00
stefanf
5198725430 Add and document ilogbl(), a long double version of ilogb(). 2004-10-11 18:13:52 +00:00
stefanf
3725fb7eda Use the FP_ILOGB macros from <math.h> rather than hardcoded return values.
Also be prepared for FP_ILOGBNAN != INT_MAX.

Reviewed by:	md5
2004-10-09 17:14:28 +00:00
kensmith
911789fdaa Bump the library version numbers for the following libraries:
/lib/{libm,libreadline}
	/usr/lib/{libhistory,libopie,libpcap}

in preparation for doing the same thing to RELENG_5.  HUGE amounts of
help for determining what to bump provided by kris.

Discussed on:	freebsd-current
Approved by:	re (not required for commit but something like this should be)
2004-10-01 15:38:07 +00:00
das
2e0425d5d9 Further refine some #ifs:
- Simplify the logic by using __GNUC_PREREQ__.
  Suggested by stefanf.
- Make math.h compile with old (pre-8.0) versions of icc.
  Submitted by sf [sic].
2004-09-17 05:15:33 +00:00
stefanf
af9e10f920 Add man pages for the cimag(), conj() and creal() functions. 2004-08-07 23:03:36 +00:00
cognet
d416db42ae Only use rfs and wfs if ARM_HARD_FLOAT is defined, and use stubs if it is not,
in order to unbreak arm make world. The right way to do it with soft floats
will be figured out later.
Discussed with:	das
2004-08-05 14:07:24 +00:00
das
e4fbd5d172 Replace s_isnan.c and s_isnanf.c with the more compact s_isnan.c from
libc.  The externally-visible effect of this is to add __isnanl() to
libm, which means that libm.so.2 can once again link against libc.so.4
when LD_BIND_NOW is set.  This was broken by the addition of fdiml(),
which calls __isnanl().
2004-08-05 01:46:11 +00:00
das
73fe96f0a7 Use isnormal() instead of fpclassify() to avoid dependency on libc.so.5. 2004-08-05 01:44:55 +00:00
kan
d037fe2cca Work around known GCC 3.4.x problem and use ANSI prototype for dremf(). 2004-07-28 05:53:18 +00:00
das
7aef999db6 Fix two bugs in the signbit() macro, which was implemented last year:
- It was added to libc instead of libm.  Hopefully no programs rely
  on this mistake.

- It didn't work properly on large long doubles because its argument
  was converted to type double, resulting in undefined behavior.
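
A sketch of the repaired approach: dispatch on the argument's size so
no conversion takes place:

        #define signbit(x)                                      \
            ((sizeof(x) == sizeof(float)) ? __signbitf(x)       \
            : (sizeof(x) == sizeof(double)) ? __signbit(x)      \
            : __signbitl(x))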
2004-07-19 08:16:10 +00:00
stefanf
b4a34b5b66 Fix minor namespace pollution: The prototypes for f{dim,max,min}(),
nearbyint(), round() and trunc() shouldn't be visible when compiling with
-D_XOPEN_SOURCE=500.
2004-07-17 15:03:52 +00:00
das
a5d1cface5 Tweak the conditions under which certain gcc builtins are used:
- Unlike the builtin relational operators, builtin floating-point
  constants were not available until gcc 3.3, so account for this.[1]

- Apparently some versions of the Intel C Compiler fallaciously define
  __GNUC__ without actually being compatible with the claimed gcc
  version.  Account for this, too.[2]

[1] Noticed by:		Christian Hiris <4711@chello.at>
[2] Submitted by:	Alexander Leidinger <Alexander@Leidinger.net>
2004-07-16 06:21:56 +00:00
das
8a3f24c8d0 Remove the declaration of isnan() from this file. It is no longer
needed as of math.h v1.40, and its prototype is incorrect here.
2004-07-09 10:01:10 +00:00
das
65d8d759b1 Implement the classification macros isfinite(), isinf(), isnan(), and
isnormal() the hard way, rather than relying on fpclassify().  This is
a lose in the sense that we need a total of 12 functions, but it is
necessary for binary compatibility because we have never bumped libm's
major version number.  In particular, isinf(), isnan(), and isnanf()
were BSD libc functions before they were C99 macros, so we can't
reimplement them in terms of fpclassify() without adding a dependency
on libc.so.5.  I have tried to arrange things so that programs that
could be compiled in FreeBSD 4.X will generate the same external
references when compiled in 5.X.  At the same time, the new macros
should remain C99-compliant.

The isinf() and isnan() functions remain in libc for historical
reasons; however, I have moved the functions that implement the macros
isfinite() and isnormal() to libm where they belong.  Moreover,
half a dozen MD versions of isinf() and isnan() have been replaced
with MI versions that work equally well.
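
For example, the float variant needs nothing but the bit pattern
(sketch; GET_FLOAT_WORD is the usual math_private.h accessor):

        int
        __isfinitef(float f)
        {
                int32_t ix;

                GET_FLOAT_WORD(ix, f);
                return ((ix & 0x7f800000) != 0x7f800000);
        }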

Prodded by:	kris
2004-07-09 03:32:40 +00:00
das
5ef7c3d0ff Define the following macros in terms of [gi]cc builtins when the
builtins are available: HUGE_VAL, HUGE_VALF, HUGE_VALL, INFINITY,
and NAN.  These macros now expand to floating-point constant
expressions rather than external references, as required by C99.
Other compilers will retain the historical behavior.  Note that
it is not possible say, e.g.
#define	HUGE_VAL	1.0e9999
because the above may result in diagnostics at translation time
and spurious exceptions at runtime.  Hence the need for compiler
support for these features.

Also use builtins to implement the macros isgreater(),
isgreaterequal(), isless(), islessequal(), islessgreater(),
and isunordered() when such builtins are available.
Although the old macros are correct, the builtin versions
are much faster, and they avoid double-expansion problems.
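
A sketch of the guarded definitions:

        #if __GNUC_PREREQ__(3, 3)
        #define HUGE_VAL        __builtin_huge_val()
        #define INFINITY        __builtin_inff()
        #define NAN             __builtin_nanf("")
        #define isgreater(x, y) __builtin_isgreater((x), (y))
        #endif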
2004-07-09 03:31:09 +00:00
das
370370ec79 Add C99's nearbyint{,f}() functions as wrappers around rint().
These trivial implementations are about 25 times slower than
rint{,f}() on x86 due to the FP environment save/restore.
They should eventually be redone in terms of fegetround() and
bit fiddling.
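
That is, roughly (sketch):

        double
        nearbyint(double x)
        {
                fenv_t env;
                double ret;

                fegetenv(&env);
                ret = rint(x);
                fesetenv(&env); /* discard inexact raised by rint() */
                return (ret);
        }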
2004-07-06 04:46:08 +00:00
ru
57ce50860e Eliminate double whitespace. 2004-07-03 22:30:10 +00:00
ru
01548ace15 Mechanically kill hard sentence breaks. 2004-07-02 23:52:20 +00:00
ru
615a6a246a Markup, grammar, punctuation. 2004-07-01 18:20:57 +00:00
das
86ae148680 Implement and document fdim{,f,l}, fmax{,f,l}, and fmin{,f,l}. 2004-06-30 07:04:01 +00:00
marcel
b6e99841cc s/ARCH/ARCH_SUBDIR/g -- This reduces the chance of possible conflicts
with the user's environment.

Wondered why his cross-builds kept failing: marcel
2004-06-24 00:02:32 +00:00
stefanf
bcffee208f Completely remove s_ilogb.S, as the assembler implementation gives very
little to no speed improvement over the MI version.

Submitted by:	bde
2004-06-20 10:42:23 +00:00
das
2e83c4463a Uncomment some functions that we now support. 2004-06-20 10:39:09 +00:00
das
1dc40d294e Cross-reference round(3) and trunc(3) as appropriate. 2004-06-20 09:27:17 +00:00
das
59cebf2b44 Connect scalbln(), trunc(), and the associated documentation to the build. 2004-06-20 09:27:03 +00:00
das
62b8ef8dc2 Declare scalbln(), scalblnf(), trunc(), and truncf(). 2004-06-20 09:26:41 +00:00
das
a97ec37c72 Implement trunc() and truncf(). 2004-06-20 09:25:43 +00:00
das
dd81b94d1c Add trivial implementations of scalbln() and scalblnf().
These routines are specified in C99 for the sake of
architectures where an int isn't big enough to represent
the full range of floating-point exponents.  However,
even the 128-bit long double format has an exponent smaller
than 15 bits, so for all practical purposes, scalbln() and
scalblnf() are aliases for scalbn() and scalbnf(), respectively.
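
So the implementation can simply clamp and forward (sketch; INT_MAX
and INT_MIN come from <limits.h>):

        double
        scalbln(double x, long n)
        {
                return (scalbn(x, (n > INT_MAX) ? INT_MAX :
                    (n < INT_MIN) ? INT_MIN : (int)n));
        }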
2004-06-20 09:25:27 +00:00
stefanf
c3b1d7dffc Document ilogb()'s return values in terms of the FP_ILOGB* macros. 2004-06-19 09:33:29 +00:00
stefanf
ac3aff3300 Return the same result as the MI version for 0.0, INFINITY and NaN.
Reviewed by:	standards@
2004-06-19 09:30:00 +00:00
stefanf
127bbb4fe3 Our MI implementation of ilogb() returns -INT_MAX for the argument 0.0 rather
than INT_MIN, so adjust FP_ILOGB0 to reflect this.  Use <machine/_limits.h> for
INT_MAX's value while there.

Reviewed by:	standards@
2004-06-19 09:25:21 +00:00
das
5cfbdc1d4a Memory's free, but all the world ain't a VAX anymore. Bring math.3
kicking and screaming into the 1980's.  This change converts most of
the markup from man(7) to mdoc(7) format, and I believe it removes or
updates everything that was flat out wrong.  However, much work is
still needed to sanitize the markup, improve coverage, and reduce
overlap with other manpages.  Some of the sections would better belong
in a philosophy_of_w_kahan.3 manpage, but they are informative and
remain at least as reminders of topics to cover.

Reviewed by:	doc@, trhodes@
2004-06-19 03:25:28 +00:00
das
699d33669c The references to scalbn and scalbnf should be scalb and scalbf.
(The former are actually useful, and ieee_test(3) only documents
functions that aren't.)  Add a sentence describing the domain of
scalb() and scalbf().
2004-06-12 04:40:47 +00:00
das
388fd1cd29 Shift the FPSR contents by the correct amount so feupdateenv() raises
the correct exceptions from the old environment.
2004-06-11 02:35:30 +00:00
das
a19b0e4d1e Insert a missing '~' in feholdexcept(), so that it correctly clears
the exception flags in the mxcsr as well as the x87 FPU.
2004-06-11 02:35:19 +00:00
das
7765c93088 Fix a bug where rintf() rounded the wrong way in round-to-nearest mode
on all inputs of the form x.75, where x is an even integer and
log2(x) = 21.  A similar problem occurred when rounding upward.
The bug involves the following snippet copied from rint():

	i>>=1;
	if((i0&i)!=0) i0 = (i0&(~i))|((0x100000)>>j0);

The constant 0x100000 should be 0x200000.  Apparently this case was
never tested.

It turns out that the bit manipulation is completely superfluous
anyway, so remove it.  (It tries to simulate 90% of the rounding
process that the FPU does anyway.)  Also, the special case of +-0 is
handled twice (in different ways), so remove the second instance.

Throw in some related simplifications from bde:

- Work around a bug where gcc fails to clip to float precision by
  declaring two float variables as volatile.  Previously, we
  tricked gcc into generating correct code by declaring some
  float constants as doubles.

- Remove additional superfluous bit manipulation.

- Minor reorganization.

- Include <sys/types.h> explicitly.

Note that some of the equivalent lines in rint() also appear to be
unnecessary, but I'll defer to the numerical analysts who wrote it,
since I can't test all 2^64 cases.

Discussed with:	bde
2004-06-09 21:24:52 +00:00
das
2e3c47ad48 Include <sys/cdefs.h> earlier to get the various visibility constants.
Previously, we were relying on <sys/_types.h> to include it implicitly.
2004-06-09 10:32:05 +00:00
das
e2928bd733 Add round(3) and roundf(3) and the associated documentation.
PR:		59797
Submitted by:	"Steven G. Kargl" <kargl@troutmask.apl.washington.edu>
Reviewed by:	bde (earlier version, last year)
2004-06-07 08:05:36 +00:00
das
04b52e2cd3 Add fenv.h, fenv.c, and the associated documentation to the libm
build.  To facilitate this, add ${.CURDIR}/${ARCH} to make's search
path unconditionally.

Reviewed by:	standards@
2004-06-06 10:06:57 +00:00
das
535ca6faf5 Add documentation for:
- fenv(3)
- feclearexcept(3), fegetexceptflag(3), feraiseexcept(3),
  fesetexceptflag(3), fetestexcept(3)
- fegetround(3), fesetround(3)
- fegetenv(3), feholdexcept(3), fesetenv(3), feupdateenv(3)

Reviewed by:	standards@
2004-06-06 10:06:26 +00:00
das
53f61273e3 Add an fenv.h implementation for the sparc64 port.
Reviewed by:	standards@
2004-06-06 10:05:57 +00:00
das
31eea9495c Add an fenv.h implementation for the powerpc port.
Reviewed by:	standards@
2004-06-06 10:05:10 +00:00
das
3cbeb05df0 Add an fenv.h implementation for the ia64 port.
Reviewed by:	standards@
2004-06-06 10:04:43 +00:00
das
b1670fc3d8 Add an fenv.h implementation for the i386 port.
Reviewed by:	standards@
2004-06-06 10:04:17 +00:00
das
3994165f74 Add an fenv.h implementation for the arm port.
It does not appear to be possible to cross-build arm from i386 at the
moment, and I have no ARM hardware anyway.  Thus, I'm sure there are
bugs.  I will gladly fix these when the arm port is more mature.

Reviewed by:	standards@
2004-06-06 10:03:59 +00:00
das
0252cb85b9 Add an fenv.h implementation for the amd64 port.
Reviewed by:	standards@
2004-06-06 10:03:25 +00:00
das
0cf0cfc69d Add an fenv.h implementation for the alpha port. All of the standard
features appear to work, subject to the caveat that you tell gcc you
want standard rather than recklessly fast behavior
(-mieee-with-inexact -mfp-rounding-mode=d).

The non-standard feature of delivering a SIGFPE when an application
raises an unmasked exception does not work, presumably due to a kernel
bug.  This isn't so bad given that floating-point exceptions on the
Alpha architecture are not precise, so making them useful in userland
requires a significant amount of wizardry.

Reviewed by:	standards@
2004-06-06 09:58:55 +00:00
bde
f744761f22 Fixed lots of 1 ULP errors caused by a broken approximation for pi/2.
We approximate pi with more than float precision using pi_hi+pi_lo in
the usual way (pi_hi is actually spelled pi in the source code), and
expect (float)0.5*pi_lo to give the low part of the corresponding
approximation for pi/2.  However, the high part for pi/2 (pi_o_2) is
rounded to nearest, which happens to round up, while the high part for
pi was rounded down.  Thus pi_o_2+(float)0.5*pi_lo (in infinite precision)
was a very bad approximation for pi/2 -- the low term has the wrong
sign and increases the error from less than half an ULP to a full ULP.

This fix rounds up instead of down for pi_hi.  Consistently rounding
down instead of up should work, and is the method used in e_acosf.c
and e_asinf.c.  The reason for the difference is that we sometimes
want to return precisely pi/2 in e_atan2f.c, so it is convenient to
have a correctly rounded (to nearest) value for pi/2 in a variable.
e_acosf.c and e_asinf.c also differ in directly approximating pi/2
instead of pi; they multiply by 2.0 instead of dividing by 0.5 to convert
the approximation.

These complications are not directly visible in the double precision
versions because rounding to nearest happens to round down.
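
Numerically (approximate values for illustration): the float nearest
pi is 0x1.921fb6p+1, which is rounded up, so the matching low part
must be negative:

        static const float pi_hi = 3.14159274e+00F;     /* pi rounded up */
        static const float pi_lo = -8.74227766e-08F;    /* ~ pi - pi_hi */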
2004-06-02 17:09:05 +00:00
das
152a4c4166 Port a bugfix from FDLIBM 5.3. The bug really only applies to tan()
and not tanf() because the float type can't represent numbers large enough
to trigger the problem.  However, there seems to be a precedent that
the float versions of the fdlibm routines should mirror their double
counterparts.

Also update to the FDLIBM 5.3 license.

Obtained from:	FDLIBM
Reviewed by:	exhaustive comparison
2004-06-02 04:39:44 +00:00
das
75a66e7e89 Merge a bugfix from FDLIBM 5.3 to ensure that the error in tan()
is always less than 1 ulp.  Also update to the 5.3 license.

Obtained from:	FDLIBM
2004-06-02 04:39:29 +00:00
bde
dbfd4ab6f2 Merged from double precision case (e_pow.c 1.10: sign fixes). 2004-06-01 19:33:30 +00:00
bde
152787c7f1 Fixed the sign of the result in some overflow and underflow cases (ones
where the exponent is an odd integer and the base is negative).

Obtained from:	fdlibm-5.3

Sun finally released a new version of fdlibm just a couple of weeks
ago.  It only fixes 3 bugs (this one, another one in pow() that we
already have (rev.1.9), and one in tan()).  I've learned too much about
powf() lately, so this fix was easy to merge.  The patch is not verbatim,
because our base version has many differences for portability and I
didn't like global renaming of an unrelated variable to keep it separate
from the sign variable.  This patch uses a new variable named sn for
the sign.
2004-06-01 19:28:38 +00:00
bde
719aa077cb Fixed another precision bug in powf(). This one is in the computation
[t=p_l+p_h High].  We multiply t by lg2_h, and want the result to be
exact.  For the bogus float case of the high-low decomposition trick,
we normally discard the lowest 12 bits of the fraction for the high
part, keeping 12 bits of precision.  That was used for t here, but it
doesn't work because for some reason we only discard the lowest 9
bits in the fraction for lg2_h.  Discard another 3 bits of the fraction
for t to compensate.

This bug gave wrong results like:

      powf(0.9999999, -2.9999995) = 1.0000002 (should be 1.0000001)
        hex values: 3F7FFFFF C03FFFFE 3F800002 3F800001

As explained in the log for the previous commit, the bug is normally
masked by doing float calculations in extra precision on i386's, but
is easily detected by ucbtest on systems that don't have accidental
extra precision.

This completes fixing all the bugs in powf() that were routinely found
by ucbtest.
2004-06-01 19:03:31 +00:00
bde
ad1b692494 Fixed 2 bugs in the computation /* t_h=ax+bp[k] High */.
(1) The bit for the 1.0 part of bp[k] was right shifted by 4.  This seems
    to have been caused by a typo in converting e_pow.c to e_powf.c.
(2) The lower 12 bits of ax+bp[k] were not discarded, so t_h was actually
    plain ax+bp[k].  This seems to have been caused by a logic error in
    the conversion.

These bugs gave wrong results like:

    powf(-1.1, 101.0) = -15158.703 (should be -15158.707)
      hex values: BF8CCCCD 42CA0000 C66CDAD0 C66CDAD4

Fixing (1) gives a result wrong in the opposite direction (hex C66CDAD8),
and fixing (2) gives the correct result.

ucbtest has been reporting this particular wrong result on i386 systems
with unpatched libraries for 9 years.  I finally figured out the extent
of the bugs.  On i386's they are normally hidden by extra precision.
We use the trick of representing floats as a sum of 2 floats (one much
smaller) to get extra precision in intermediate calculations without
explicitly using more than float precision.  This trick is just a
pessimization when extra precision is available naturally (as it always
is when dealing with IEEE single precision, so the float precision part
of the library is mostly misimplemented).  (1) and (2) break the trick
in different ways, except on i386's it turns out that the intermediate
calculations are done in enough precision to mask both the bugs and
the limited precision of the float variables (as far as ucbtest can
check).

ucbtest detects the bugs because it forces float precision, but this
is not a normal mode of operation so the bug normally has little effect
on i386's.

On systems that do float arithmetic in float precision, e.g., amd64's,
there is no accidental extra precision and the bugs just give wrong
results.
2004-06-01 18:08:39 +00:00
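
A self-contained sketch of the sum-of-two-floats trick described above
(names and helpers are mine, not msun's): split each factor into a
12-significant-bit head and a tail; the head product is then exact, and
the cross terms recover the bits a plain float multiply would lose.

	#include <stdint.h>
	#include <string.h>

	static float
	head12(float x)			/* keep 12 significant bits */
	{
		uint32_t ix;

		memcpy(&ix, &x, sizeof(ix));
		ix &= 0xfffff000;	/* drop the low 12 fraction bits */
		memcpy(&x, &ix, sizeof(x));
		return (x);
	}

	/* x*y ~= *hi + *lo, where *hi = x_h*y_h is exact in float. */
	static void
	mul2(float x, float y, float *hi, float *lo)
	{
		float x_h = head12(x), x_l = x - x_h;	/* tails are exact */
		float y_h = head12(y), y_l = y - y_h;

		*hi = x_h * y_h;		/* 12 x 12 bits fits in 24: exact */
		*lo = x_h * y_l + x_l * y;	/* correction terms */
	}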
stefanf
46d384e689 Add implementations for cimag{,f,l}, creal{,f,l} and conj{,f,l}. They are
needed for cases where GCC's builtin functions cannot be used and for
compilers that don't know about them.

Approved by:	das (mentor)
2004-05-30 09:21:56 +00:00
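
One portable fallback shape (a sketch, mine; it leans on C99's guarantee
that a complex type has the layout of a two-element array of the
corresponding real type):

	#include <complex.h>

	static double
	my_creal(double complex z)
	{
		return (((double *)&z)[0]);	/* first element: real part */
	}

	static double
	my_cimag(double complex z)
	{
		return (((double *)&z)[1]);	/* second element: imaginary part */
	}

	static double complex
	my_conj(double complex z)
	{
		((double *)&z)[1] = -((double *)&z)[1];
		return (z);
	}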
das
c49ff0a6a1 Remove some kludges designed to ensure that the compiler didn't round
constants the wrong way on the VAX.  Instead, use C99 hexadecimal
floating-point constants, which are guaranteed to be exact on binary
IEEE machines.  (The correct hexadecimal values were already provided
in the source, but not used.)  Also, convert the constants to
lowercase to work around a gcc bug that wasn't fixed until gcc 3.4.0.

Prompted by:	stefanf
2004-05-17 01:04:37 +00:00
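
For example (pairing mine, not from the commit), the classic fdlibm log
constant can be spelled either way; the hexadecimal form is exact by
construction on binary IEEE machines, and lowercase avoids the gcc bug
noted above:

	static const double ln2_hi_dec = 6.93147180369123816490e-01;
	static const double ln2_hi_hex = 0x1.62e42fee00000p-1;	/* same bits */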
stefanf
2a0970f8ce Add an implementation of copysignl(), a long double version of copysign().
Approved by:	das (mentor)
2004-05-07 18:56:31 +00:00
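
Usage, for reference; the sign of the second argument is transferred
verbatim, including the sign of a negative zero:

	#include <math.h>
	#include <stdio.h>

	int
	main(void)
	{
		printf("%Lg\n", copysignl(3.0L, -0.0L));	/* -3: sign of -0.0 copied */
		printf("%Lg\n", copysignl(-3.0L, 1.0L));	/* 3 */
		return (0);
	}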
stefanf
65ef3a8ab7 Add an MLINK for fabsl().
Approved by:	das (mentor)
2004-05-07 17:55:07 +00:00
stefanf
5cf96830e0 The prototypes for cabs() and cabsf() are in <complex.h>. Fix their arguments'
types and describe them briefly.

Reviewed by:	ru, bde
Approved by:	das (mentor)
2004-05-06 13:11:18 +00:00
das
d24349d79f Make sure that symbols are declared in math.h iff the appropriate
namespaces are visible.  Previously, math.h failed to hide some C99-,
XSI-, and BSD-specific symbols in certain compilation environments.

The referenced PR has a nice listing of the appropriate conditions for
making symbols visible in math.h.  The only non-stylistic difference
between the patch in the PR and this commit is that I superfluously
test for __BSD_VISIBLE in a few places to be more explicit about which
symbols have historically been part of the FreeBSD environment.

PR:		65939
Submitted by:	Stefan Farfeleder <stefan@fafoe.narf.at>
2004-04-25 02:35:42 +00:00
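
The guard shape this gives math.h looks roughly like the following
(symbols chosen for illustration: remquo() is C99-only, significand()
is BSD-only):

	#if __ISO_C_VISIBLE >= 1999
	double	remquo(double, double, int *);
	#endif

	#if __BSD_VISIBLE
	double	significand(double);
	#endif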
das
1e626ef3b4 Remove a stale comment referring to values.h, which has never been
part of FreeBSD.

PR:		65939
2004-04-25 02:32:46 +00:00
bde
5c7ee701c7 Initial support for C99's (or is it POSIX.1-2001's?) MATH_ERRNO,
MATH_ERREXCEPT and math_errhandling, so that C99 applications at
least have the possibility of determining that errno is not set for
math functions.  Set math_errhandling to the non-standard-conforming
value of 0 for now to indicate that we don't support either method
of reporting errors.  We intentionally don't support MATH_ERRNO
because errno is a mistake, and we are missing support for
MATH_ERREXCEPT (<fenv.h>, compiler support for <fenv.h>, and
actually setting the exception flags correctly).
2004-03-12 12:02:03 +00:00
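
With math_errhandling set to 0, a conforming probe such as this one
(illustrative, not from the commit) correctly reports that neither
mechanism can be relied on:

	#include <math.h>

	/* Nonzero iff errno is set by math functions here. */
	static int
	errno_reports_math_errors(void)
	{
		return ((math_errhandling & MATH_ERRNO) != 0);
	}

	/* Nonzero iff exception flags are raised instead (or as well). */
	static int
	fenv_reports_math_errors(void)
	{
		return ((math_errhandling & MATH_ERREXCEPT) != 0);
	}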
das
477842ce6b Fix a problem where libm compiled under 5.X would depend on features
that are only in libc.so.5.  This broke some 4.X applications linked
to libm and run under 5.X.

Background:
In C99, isinf() and isnan() cannot be implemented as regular
functions.  We use macros that call libc functions in 5.X, but for
libm-internal use, we need to use the old versions until the next
time libm's major version number is bumped.

Submitted by:	bde
Reported by:	imp, kris
2003-10-27 01:28:07 +00:00
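
The reason they can't be regular functions: C99's isnan() must accept
float, double, and long double alike, which forces a type-dispatching
macro along these lines (a sketch; the my_ names are mine, and x != x
is the classic NaN test):

	static int my_isnanf(float x)		{ return (x != x); }
	static int my_isnand(double x)		{ return (x != x); }
	static int my_isnanl(long double x)	{ return (x != x); }

	#define	my_isnan(x)						\
		((sizeof(x) == sizeof(float)) ? my_isnanf(x)		\
		: (sizeof(x) == sizeof(double)) ? my_isnand(x)		\
		: my_isnanl(x))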
des
a7b0d81550 Better safe than clever.
Submitted by:	das
2003-10-25 19:53:28 +00:00
des
681563239d Document fabsl(3).
Submitted by:	Stefan Farfeleder <stefan@fafoe.narf.at>
2003-10-25 13:45:11 +00:00
des
8c5b85e155 - fabsl.c should be named s_fabsl.c for consistency with libmsun's
documented naming scheme (unfortunately the documentation isn't in the
   tree as far as I can tell); no repocopy is required as there is no
   history to preserve.

 - replace simple and almost-correct implementation with slightly hackish
   but definitely correct implementation (tested on i386, alpha, sparc64)
   which requires pulling in fpmath.h and the MD _fpmath.h from libc.

 - try not to make a mess of the Makefile in the process.

 - enterprising minds are encouraged to implement more C99 long double
   functions.
2003-10-25 09:32:18 +00:00
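
A correct-by-construction stand-in (mine; the committed version instead
clears the sign bit through the machine-dependent bitfields pulled in
from fpmath.h): signbit() handles -0.0L and NaNs where a naive x < 0
test does not, and negation merely flips the sign bit.

	#include <math.h>

	static long double
	my_fabsl(long double x)
	{
		return (signbit(x) ? -x : x);
	}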
des
6cefc48da3 Connect fabsl.c to the build. 2003-10-23 08:23:51 +00:00
des
858c5bdd96 Add prototypes for all long double functions in C99. Leave them all
#if 0'd out, except for fabsl(3) which I've implemented.
2003-10-23 08:23:38 +00:00
des
0c217c1679 Implement fabsl(3), allowing the world to build with -fno-builtin. 2003-10-23 08:20:47 +00:00
gordon
5901302929 Stage 3 of dynamic root support. Make all the libraries needed to run
binaries in /bin and /sbin installed in /lib. Only the versioned files
reside in /lib; the .so symlink continues to live in /usr/lib so the
toolchain doesn't need to be modified.
2003-08-17 08:28:46 +00:00
bde
3342068b97 Fixed some style bugs (misplacement and misformatting of some commented-out
code).
2003-07-23 09:24:44 +00:00
peter
52ebc02e14 Only provide one copy of the math functions. If we provide a MD function,
do not also provide a __generic_XXX version.  This is how we used to
select between the generic and i387 versions at runtime on the i386 platform.

This saves a pile of #defines in the src/math_private.h file to undo the
__generic_XXX renames in some of the *.c files.
2003-07-23 04:53:47 +00:00
peter
3894161bb0 No longer need the internal __get_hw_float() function. 2003-07-23 04:25:04 +00:00
peter
07a2b34dd7 Now that we do not need to do runtime detection for the broken default
fp emulator, stop doing the runtime selection of hardware or emulated
floating point operations on i386.  Note that I have not suppressed the
duplicate compiles yet.

While here, fix the alpha.  It has provided specific copysign/copysignf
functions since the beginning of time, but they have never been used.
2003-07-23 04:23:36 +00:00
mike
272a8dbe20 Fix two misuses of __BSD_VISIBLE.
Submitted by:	bde
Approved by:	re
2003-05-22 17:07:57 +00:00
peter
2d929d2534 AMD64 support (another IEEEFP platform) 2003-04-30 21:06:30 +00:00
das
534d93f661 Fix braino in definition of isfinite().
Noticed by:	marcus
Pointy hat to:	das
2003-04-04 13:27:47 +00:00
ru
ea25d256ea mdoc(7) police: Nits. 2003-03-02 21:04:21 +00:00
imp
c054fa155d - gamma_r, lgamma_r, gammaf_r, and lgammaf_r were protected by _REENTRANT
in math.h; the consensus here was that __BSD_VISIBLE was correct instead.

- gamma_r, lgamma_r, gammaf_r, and lgammaf_r had no documentation in the
  lgamma(3) manpage.

Reviewed by: standards@
Submitted by: Ben Mesander
2003-02-26 13:12:03 +00:00
mike
1998abeb23 o Implement C99 classification macros isfinite(), isinf(), isnan(),
isnormal().  The current isinf() and isnan() are preserved for
  binary compatibility with 5.0, but new programs will use the macros.
o Implement C99 comparison macros isgreater(), isgreaterequal(),
  isless(), islessequal(), islessgreater(), isunordered().

Submitted by:	David Schultz <dschultz@uclink.Berkeley.EDU>
2003-02-12 20:03:41 +00:00
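
Unlike the raw relational operators, the comparison macros never raise
the invalid exception on NaN operands.  A typical use (illustrative):

	#include <math.h>

	/* Quiet "less than": plain false, not an FE_INVALID-raising
	 * comparison, when either operand is a NaN. */
	static int
	quiet_less(double a, double b)
	{
		if (isunordered(a, b))
			return (0);	/* at least one NaN */
		return (isless(a, b));
	}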
mike
b56a102d98 Implement C99's signbit() macro. 2003-02-11 21:56:21 +00:00
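
signbit() recovers information that == cannot, e.g. the sign of a
negative zero (example mine):

	#include <math.h>
	#include <stdio.h>

	int
	main(void)
	{
		printf("%d\n", -0.0 == 0.0);		/* 1: zeros compare equal */
		printf("%d\n", signbit(-0.0) != 0);	/* 1: but the sign bit is set */
		return (0);
	}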
mike
b4e3f2f94a Implement fpclassify():
o Add a MD header private to libc called _fpmath.h; this header
  contains bitfield layouts of MD floating-point types.
o Add a MI header private to libc called fpmath.h; this header
  contains bitfield layouts of MI floating-point types.
o Add private libc variables to lib/libc/$arch/gen/infinity.c for
  storing NaN values.
o Add __double_t and __float_t to <machine/_types.h>, and provide
  double_t and float_t typedefs in <math.h>.
o Add some C99 manifest constants (FP_ILOGB0, FP_ILOGBNAN, HUGE_VALF,
  HUGE_VALL, INFINITY, NAN, and return values for fpclassify()) to
  <math.h> and others (FLT_EVAL_METHOD, DECIMAL_DIG) to <float.h> via
  <machine/float.h>.
o Add C99 macro fpclassify() which calls __fpclassify{d,f,l}() based
  on the size of its argument.  __fpclassifyl() is never called on
  alpha because (sizeof(long double) == sizeof(double)), which is good
  since __fpclassifyl() can't deal with such a small `long double'.

This was developed by David Schultz and myself with input from bde and
fenner.

PR:		23103
Submitted by:	David Schultz <dschultz@uclink.Berkeley.EDU>
		(significant portions)
Reviewed by:	bde, fenner (earlier versions)
2003-02-08 20:37:55 +00:00
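
The resulting dispatch in math.h has roughly this shape (the classifier
names are the ones given in the log; the declarations are a sketch):

	int	__fpclassifyf(float);
	int	__fpclassifyd(double);
	int	__fpclassifyl(long double);

	#define	fpclassify(x)						\
		((sizeof(x) == sizeof(float)) ? __fpclassifyf(x)	\
		: (sizeof(x) == sizeof(double)) ? __fpclassifyd(x)	\
		: __fpclassifyl(x))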
schweikh
d3367c5f5d Correct typos, mostly s/ a / an / where appropriate. Some whitespace cleanup,
especially in troff files.
2003-01-01 18:49:04 +00:00
schweikh
fec6546e12 english(4) police. 2002-12-27 12:15:40 +00:00
archie
d93f84495b Re-apply the previously backed-out commit that fixes the problem where
HUGE_VAL is not properly aligned on some architectures. The previous
fix now works because the two versions of 'math.h' (include/math.h
and lib/msun/src/math.h) have since been merged into one.

PR:	bin/43544
2002-10-31 23:05:20 +00:00
markm
f29af1e793 Remove duplicate declaration. 2002-10-23 17:35:11 +00:00
bde
e4147dd2dc Fixed a last-minute editing error in previous commit. nfs and/or cvs
replaced a 14-byte change in the middle of the file with 14 NULs at EOF
despite or because of aborting the initial commit to pick up the change.
2002-10-01 11:44:35 +00:00
bde
bfd55981c7 Merged all interesting differences between the old math.h and the current
one into the latter and removed the former.

This works around the bug that some broken Makefiles add -I.../src/include
to CFLAGS, resulting in the old math.h being preferred and differences
between the headers possibly being fatal.

The merge mainly involves declaring some functions as __pure2 although
they are not yet all strictly free of side effects.

PR:		43544
2002-10-01 11:34:42 +00:00
archie
207fb06f80 Revert previous commit to unbreak world until we figure out the
right way to do it.
2002-09-20 15:43:26 +00:00
archie
59b24d359e Fix a problem with the definition of HUGE_VAL causing the gcc warning
"cast increases required alignment of target type" on some platforms.

Reviewed by:	bde
2002-09-19 19:47:27 +00:00
bde
4d5cb9b473 e_pow.c:
Fixed pow(x, y) when x is very close to -1.0 and y is a very large odd
integer.  E.g., pow(-1.0 - pow(2.0, -52.0), 1.0 + pow(2.0, 52.0)) was
0.0 instead of being very close to -exp(1.0).

PR:		39236
Submitted by:	Stephen L Moshier <steve@moshier.net>

e_powf.c:
Apply the same patch although it is just cosmetic because odd integers
large enough to cause the problem are too large to be precisely represented
as floats.

MFC after:	1 week
2002-06-17 15:28:59 +00:00
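
The failing case from the log as a standalone reproducer; on a fixed
library the output is approximately -e:

	#include <math.h>
	#include <stdio.h>

	int
	main(void)
	{
		double x = -1.0 - pow(2.0, -52.0);	/* just below -1.0 */
		double y = 1.0 + pow(2.0, 52.0);	/* a huge odd integer */

		printf("%.17g\n", pow(x, y));	/* ~ -2.71828...; was 0.0 */
		return (0);
	}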
alfred
565689ac5b Fix formatting; this is hard to explain, so I'll show one example.
-       float ynf(int n, float x)       /* wrapper ynf */
+float
+ynf(int n, float x)    /* wrapper ynf */

This is because the __STDC__ stuff was indented.

Reviewed by: md5
2002-05-28 18:15:04 +00:00
alfred
0b8481982e Assume __STDC__, remove non-__STDC__ code.
Reviewed by: md5
2002-05-28 17:51:46 +00:00
alfred
1ee311b26d Assume __STDC__, remove non-__STDC__ code.
Submitted by: keramida
2002-05-28 17:03:12 +00:00
benno
e55ebd8f96 Spread the word of PowerPC. 2002-05-21 04:00:47 +00:00
ru
59049318b6 Added new bsd.incs.mk which handles the installation of header files
via INCS.  Implemented INCSLINKS (equivalent to SYMLINKS) to
handle symlinking include files.  Allow for multiple groups of
include files to be installed, with the powerful INCSGROUPS knob.
Documentation to follow.

Added standard `includes' and `incsinstall' targets, use them
in Makefile.inc1.  Headers from the following makefiles were
not installed before (during `includes' in Makefile.inc1):

	kerberos5/lib/libtelnet/Makefile
	lib/libbz2/Makefile
	lib/libdevinfo/Makefile
	lib/libform/Makefile
	lib/libisc/Makefile
	lib/libmenu/Makefile
	lib/libmilter/Makefile
	lib/libpanel/Makefile

Replaced all `beforeinstall' targets for installing includes
with the INCS stuff.

Renamed INCDIR to INCSDIR, for consistency with FILES and SCRIPTS,
and for compatibility with NetBSD.  Similarly for INCOWN, INCGRP,
and INCMODE.

Consistently use INCLUDEDIR instead of /usr/include.

gnu/lib/libstdc++/Makefile and gnu/lib/libsupc++/Makefile changes
were only lightly tested due to the missing contrib/libstdc++-v3.
I fully tested the pre-WIP_GCC31 version of this patch with the
contrib/libstdc++.295 stuff.

These changes have been tested on i386 with the -DNO_WERROR "make
world" and "make release".
2002-05-12 16:01:00 +00:00
bde
755d0bf04f Resurrect Lite1's gamma() as C99's tgamma(). Minimal changes. 2002-03-26 11:59:29 +00:00
bde
4fdf2dbaad Fixed some bugs in the description of plain gamma() (and gammaf()).
Give a more detailed and correct history of when gamma() was actually
the gamma function.
2002-03-26 10:18:20 +00:00
bde
5713462d5a Fixed some minor style bugs. 2002-03-26 09:18:09 +00:00
obrien
fd9d7ac0ed Remove __P() usage. 2002-03-21 23:54:04 +00:00
obrien
c6f1189467 Fix SCM ID's. 2002-03-21 18:06:09 +00:00
obrien
b34beb1e0b We need an frexp() function. 2002-03-01 01:58:20 +00:00
jake
f22e9b26c0 Add ifdef sparc64. 2002-01-02 06:54:18 +00:00
phantom
ba657b6ec7 Fix style bugs (mostly remove 'extern' from function prototypes)
Inspired by: conversation with bde
2001-12-13 17:22:17 +00:00
phantom
c4a3969a7e * remove reference to m68k-dependent sources
* fix comment
2001-12-13 17:18:26 +00:00
ru
1fe81e216a Grammar nit. 2001-11-21 09:25:14 +00:00