Fix bug in jn(3) and jnf(3) that led to -inf results

Explanation by Steve:
jn[f](n,x) for certain ranges of x uses downward recursion to compute
the value of the function.  The recursion sequence that is generated is
proportional to the actual desired value, so a normalization step is
taken.  This normalization is j0[f](x) divided by the zeroth sequence
member.  As Bruce notes, near the zeros of j0[f](x) the computed value
can have giga-ULP inaccuracy.  I found that for the 1st zero of j0f(x)
only the leading decimal digit is correct.  The solution to the issue
is fairly straightforward.  The zeros of j0(x) and j1(x) never coincide,
so as j0(x) approaches a zero, the normalization constant switches to
j1[f](x) divided by the 2nd sequence member.  The expectation is that
j1[f](x) is a more accurately computed value.
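
As an aside (not part of the commit), a minimal self-contained sketch of
the scheme Steve describes might look like the following.  The starting
index, the seed values, and the use of the POSIX j0()/j1()/jn() from
<math.h> are choices made for this illustration only, not what libm does
internally; the fabs(z) >= fabs(w) test mirrors the selection made in
the patch below.

#include <math.h>
#include <stdio.h>

/*
 * Illustration only: Miller-style downward recursion for J_n(x).
 * The generated sequence f_k is only proportional to J_k(x), so it
 * is normalized against whichever of j0(x), j1(x) is larger in
 * magnitude, since their zeros never coincide.
 */
static double
jn_downward(int n, double x)
{
	double fkp1 = 0.0, fk = 1.0;	/* seeds: f_{start+1}, f_{start} */
	double f0 = 0.0, f1 = 0.0, fn = 0.0;
	int k, start = n + 20;		/* crude choice of starting index */

	/* J_{k-1}(x) = (2k/x)*J_k(x) - J_{k+1}(x), run downward in k. */
	for (k = start; k >= 1; k--) {
		double fkm1 = (2.0 * k / x) * fk - fkp1;
		fkp1 = fk;
		fk = fkm1;
		if (k - 1 == n) fn = fk;	/* unnormalized J_n */
		if (k - 1 == 1) f1 = fk;	/* unnormalized J_1 */
		if (k - 1 == 0) f0 = fk;	/* unnormalized J_0 */
	}

	/*
	 * Near a zero of j0(x) the ratio j0(x)/f0 loses nearly all of
	 * its accuracy, so fall back to normalizing with j1(x)/f1.
	 */
	double z = j0(x), w = j1(x);
	return (fabs(z) >= fabs(w)) ? fn * z / f0 : fn * w / f1;
}

int
main(void)
{
	double x;

	/* Compare the sketch against the system jn() (link with -lm). */
	for (x = 1.0; x <= 9.0; x += 2.0)
		printf("x=%g  sketch=%.16g  jn(5,x)=%.16g\n",
		    x, jn_downward(5, x), jn(5, x));
	return (0);
}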

PR:		bin/144306
Submitted by:	Steven G. Kargl <kargl@troutmask.apl.washington.edu>
Reviewed by:	bde
MFC after:	7 days
Ulrich Spörlein  2010-11-13 10:54:10 +00:00
parent f29af3b2ac
commit e61ffaea2a
Notes:	svn2git 2020-12-20 02:59:44 +00:00
	svn path=/head/; revision=215237

2 changed files with 12 additions and 2 deletions


@@ -200,7 +200,12 @@ __ieee754_jn(int n, double x)
                     }
                 }
             }
-            b = (t*__ieee754_j0(x)/b);
+            z = __ieee754_j0(x);
+            w = __ieee754_j1(x);
+            if (fabs(z) >= fabs(w))
+                b = (t*z/b);
+            else
+                b = (t*w/a);
         }
     }
     if(sgn==1) return -b; else return b;


@@ -152,7 +152,12 @@ __ieee754_jnf(int n, float x)
                     }
                 }
             }
-            b = (t*__ieee754_j0f(x)/b);
+            z = __ieee754_j0f(x);
+            w = __ieee754_j1f(x);
+            if (fabsf(z) >= fabsf(w))
+                b = (t*z/b);
+            else
+                b = (t*w/a);
         }
     }
     if(sgn==1) return -b; else return b;