permission), try to continue in FTS_DONTCHDIR mode. Of course this
won't work for long paths, but we can't descend more than one pathname
component beyond the directory anyway if we lack search permission.
Here is a transcript demonstrating the change, where oldls is ls(1)
linked with the old fts(3):
das@VARK:~> mkdir t && touch t/{a,b,c} && chmod u-x t
das@VARK:~> oldls t
a b c
das@VARK:~> oldls -l t
das@VARK:~> \ls t
a b c
das@VARK:~> \ls -l t
ls: a: Permission denied
ls: b: Permission denied
ls: c: Permission denied
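For reference, a minimal fts(3) caller of the kind affected (hypothetical code,
not from the tree); with the change, the children of the unsearchable directory
are still returned, presumably as FTS_NS entries carrying names but no stat data:

#include <sys/types.h>
#include <sys/stat.h>
#include <fts.h>
#include <stdio.h>

int
main(int argc, char *argv[])
{
        char *paths[2] = { argv[1], NULL };
        FTS *f;
        FTSENT *e;

        if (argc != 2 || (f = fts_open(paths, FTS_PHYSICAL, NULL)) == NULL)
                return (1);
        while ((e = fts_read(f)) != NULL)
                /* FTS_NS entries have a name but no stat information. */
                printf("%s (fts_info %d)\n", e->fts_path, e->fts_info);
        fts_close(f);
        return (0);
}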
I had forgotten about this patch until bde reminded me. He reports
using it without problems for over a year.
PR: 45723
writable. Affected callers include fwrite(), put?(), and *printf().
The issue of whether this is the right errno for funopened streams is
unresolved, but that's an obscure case, and some errno is better than
no errno.
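A hedged sketch of the affected situation (the particular errno value is
whatever the change picks; it isn't named here):

#include <stdio.h>

int
main(void)
{
        FILE *fp = fopen("/etc/motd", "r");     /* opened read-only */

        if (fp == NULL)
                return (1);
        if (fwrite("x", 1, 1, fp) != 1)
                perror("fwrite");       /* previously errno could be left stale here */
        fclose(fp);
        return (0);
}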
Discussed with: bde, jkh
distinguish files from dirs (trailing '/' indicated a dir). Since
POSIX.1-1987, this convention is no longer necessary. However, there
are current tar programs that pretend to write POSIX-compliant
archives, yet store directories as "regular files", relying on this
old filename convention to save them. <sigh> So, move the check for
this old convention so it applies to all tar archives, not just those
identified as "old."
Pointed out by: Broken distfile for audio/faad port
it sees a truncated input the first time it gets called.
(In particular, files shorter than 512 bytes cannot be tar archives.)
This allows the top-level archive_read_next_header code to
generate a proper error message for unrecognized file types.
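In outline, the bid step can simply refuse anything too short to hold even one
512-byte header block (a sketch with made-up names, not the actual code):

#include <stddef.h>

/*
 * A tar archive must contain at least one full 512-byte header block,
 * so shorter input cannot possibly be tar; returning a failing bid lets
 * the caller report an unrecognized file type instead.
 */
static int
tar_bid(const void *buf, size_t avail)
{
        (void)buf;
        if (avail < 512)
                return (-1);
        /* ... otherwise inspect the header block and compute a bid ... */
        return (0);
}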
Pointed out by: numerous ports that expect tar to extract non-tar files ;-(
Thanks to: Kris Kennaway
It does not appear to be possible to cross-build arm from i386 at the
moment, and I have no ARM hardware anyway. Thus, I'm sure there are
bugs. I will gladly fix these when the arm port is more mature.
Reviewed by: standards@
features appear to work, subject to the caveat that you tell gcc you
want standard rather than recklessly fast behavior
(-mieee-with-inexact -mfp-rounding-mode=d).
The non-standard feature of delivering a SIGFPE when an application
raises an unmasked exception does not work, presumably due to a kernel
bug. This isn't so bad given that floating-point exceptions on the
Alpha architecture are not precise, so making them useful in userland
requires a significant amount of wizardry.
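For instance, a fragment like the following (built with the gcc flags above)
should now honor the dynamic rounding mode and the exception flags; this is
just an illustration, not a test from the tree:

#include <fenv.h>
#include <stdio.h>

int
main(void)
{
        volatile double one = 1.0, three = 3.0;
        double q;

        feclearexcept(FE_ALL_EXCEPT);
        fesetround(FE_UPWARD);
        q = one / three;        /* rounded upward under the dynamic mode */
        printf("%.20g\n", q);
        if (fetestexcept(FE_INEXACT))
                printf("FE_INEXACT raised\n");
        fesetround(FE_TONEAREST);
        return (0);
}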
Reviewed by: standards@
version called the higher-level archive_read_data and
archive_read_data_skip functions, which screwed up state management of
those functions. This bit of mis-design has existed for a long time,
but became a serious issue with the recent changes to the
archive_read_data APIs, which added more internal state to the
high-level archive_read_data function. Most common symptom was a
failure to correctly read 'L' entries (long filename) from GNU-style
archives, causing the message ": Can't open: No such file or
directory" with an empty filename.
Pointed out by: Numerous port build failures
Thanks to: Kris Kennaway
that as end-of-archive. Otherwise, a short read at this point
generates an error. This accommodates broken tar writers (such as the
generates an error. This accommodates broken tar writers (such as the
one apparently in use at AT&T Labs) that don't even write a single
end-of-archive block.
Note that both star and pdtar behave this way as well.
In contrast, gtar doesn't complain in either case, and as a
result, will generate no warning for a lot of trashed archives.
Pointed out by: shells/ksh93 port (Thanks to Kris Kennaway)
push extract data down into archive_read_extract.c and out
of the library-global archive_private.h; push dir-specific
mode/time fixup down into dir restore function; now that the
fixup list is file-local, I can use somewhat more natural
naming.
Oh, yeah, update a bunch of comments to match current reality.
reflect src/libexec/rtld-elf/rtld.c rev. 1.68 - the globally-loaded
objects (RTLD_GLOBAL) are searched before the local object's DAG's.
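In dlopen(3) terms the ordering reads roughly like this (library names are
made up):

#include <dlfcn.h>

int
main(void)
{
        /*
         * After these calls, a symbol referenced from libb.so is looked
         * up first in objects loaded with RTLD_GLOBAL (liba.so and its
         * dependencies), and only then in libb.so's own DAG.
         */
        void *a = dlopen("liba.so", RTLD_NOW | RTLD_GLOBAL);
        void *b = dlopen("libb.so", RTLD_NOW | RTLD_LOCAL);

        if (a == NULL || b == NULL)
                return (1);
        return (0);
}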
PR: 62770
Submitted by: Kimura Fuyuki <fuyuki@nigredo.org>
We approximate pi with more than float precision using pi_hi+pi_lo in
the usual way (pi_hi is actually spelled pi in the source code), and
expect (float)0.5*pi_lo to give the low part of the corresponding
approximation for pi/2. However, the high part for pi/2 (pi_o_2) is
rounded to nearest, which happens to round up, while the high part for
pi was rounded down. Thus pi_o_2+(float)0.5*pi_lo (in infinite precision)
was a very bad approximation for pi/2 -- the low term has the wrong
sign and increases the error from less than half an ULP to a full ULP.
This fix rounds up instead of down for pi_hi. Consistently rounding
down instead of up should work, and is the method used in e_acosf.c
and e_asinf.c. The reason for the difference is that we sometimes
want to return precisely pi/2 in e_atan2f.c, so it is convenient to
have a correctly rounded (to nearest) value for pi/2 in a variable.
e_acosf.c and e_asinf.c also differ in directly approximating pi/2
instead of pi; they multiply by 2.0 instead of dividing by 0.5 to convert
the approximation.
These complications are not directly visible in the double precision
versions because rounding to nearest happens to round down.
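A small self-contained sketch (my own, not the libm code) makes the effect
visible: it builds both candidate low terms and measures the error of the pi/2
approximation in ulps (one ulp here is 2^-23, since pi/2 lies in [1,2)):

#include <math.h>
#include <stdio.h>

int
main(void)
{
        float pi_up = (float)M_PI;              /* pi to nearest: rounds up */
        float pi_dn = nextafterf(pi_up, 0.0F);  /* the rounded-down alternative */
        float pi_o_2 = (float)(M_PI / 2);       /* pi/2 to nearest: rounds up */
        float lo_dn = (float)(M_PI - pi_dn);    /* low term for rounded-down pi */
        float lo_up = (float)(M_PI - pi_up);    /* low term for rounded-up pi */
        double ulp = 0x1p-23;                   /* ulp of values in [1,2) */

        /* The sums are done in double to stand in for "infinite precision";
           the old pairing is off by about a full ulp, the new one by ~0. */
        printf("old: %.3f ulp\n", (pi_o_2 + 0.5 * lo_dn - M_PI / 2) / ulp);
        printf("new: %.3f ulp\n", (pi_o_2 + 0.5 * lo_up - M_PI / 2) / ulp);
        return (0);
}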
* New read_data_block is both sparse-file aware and uses zero-copy semantics
* Push read_data_block down into specific formats (opens door to
various encoded entry bodies, such as zip or gtar -S)
* Reimplement read_data, read_data_skip, read_data_into_fd in terms
of new read_data_block.
* Update documentation
It's unfortunate that I couldn't just call the new interface
archive_read_data, but I didn't want to upset the API that much.
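A sketch of the intended consumer loop over the new interface
(archive_read_data_block in the installed headers); argument types follow the
description in this log and may differ in detail between library versions:

#include <sys/types.h>
#include <archive.h>
#include <unistd.h>

static int
copy_data(struct archive *a, int fd)
{
        const void *buff;
        size_t size;
        off_t offset;
        int r;

        for (;;) {
                r = archive_read_data_block(a, &buff, &size, &offset);
                if (r == ARCHIVE_EOF)
                        return (ARCHIVE_OK);
                if (r != ARCHIVE_OK)
                        return (r);
                /* Sparse-aware: 'offset' says where this block belongs
                   in the file; holes are simply never returned. */
                if (pwrite(fd, buff, size, offset) < 0)
                        return (ARCHIVE_FATAL);
        }
}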
and not tanf() because the float type can't represent numbers large enough
to trigger the problem. However, there seems to be a precedent that
the float versions of the fdlibm routines should mirror their double
counterparts.
Also update to the FDLIBM 5.3 license.
Obtained from: FDLIBM
Reviewed by: exhaustive comparison
where the exponent is an odd integer and the base is negative).
Obtained from: fdlibm-5.3
Sun finally released a new version of fdlibm just a couple of weeks
ago. It only fixes 3 bugs (this one, another one in pow() that we
already have (rev.1.9), and one in tan()). I've learned too much about
powf() lately, so this fix was easy to merge. The patch is not verbatim,
because our base version has many differences for portability and I
didn't like the global renaming of an unrelated variable to keep it separate
from the sign variable. This patch uses a new variable named sn for
the sign.
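For reference, the sign in question is only this much logic (an illustrative
helper, not the fdlibm code):

#include <math.h>

/*
 * Sign that powf(x, y) must carry for x < 0: defined only when y is an
 * integer, and negative exactly when that integer is odd.
 */
static float
pow_sign(float x, float y)
{
        float ipart;

        if (x >= 0.0F)
                return (1.0F);
        if (modff(y, &ipart) != 0.0F)
                return (nanf(""));      /* negative base, non-integer exponent */
        return (fmodf(ipart, 2.0F) != 0.0F ? -1.0F : 1.0F);
}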
[t=p_l+p_h High]. We multiply t by lg2_h, and want the result to be
exact. For the bogus float case of the high-low decomposition trick,
we normally discard the lowest 12 bits of the fraction for the high
part, keeping 12 bits of precision. That was used for t here, but it
doesn't work because for some reason we only discard the lowest 9
bits in the fraction for lg2_h. Discard another 3 bits of the fraction
for t to compensate (12 bits of t times 15 bits of lg2_h can need 27
significant bits, more than a float's 24; trimming t to 9 bits lets the
product fit exactly).
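The mechanism in isolation looks like this (illustrative helper; the real code
uses the GET_FLOAT_WORD/SET_FLOAT_WORD macros from math_private.h):

#include <stdint.h>
#include <string.h>

/*
 * Clear the low 'n' bits of a float's fraction, as the high-low
 * decomposition does for its high parts; products of such truncated
 * values are exact only while their significant bits together fit in
 * float's 24.
 */
static float
trunc_fraction(float x, int n)
{
        uint32_t ix;

        memcpy(&ix, &x, sizeof ix);             /* like GET_FLOAT_WORD */
        ix &= ~(((uint32_t)1 << n) - 1);
        memcpy(&x, &ix, sizeof x);              /* like SET_FLOAT_WORD */
        return (x);
}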
This bug gave wrong results like:
powf(0.9999999, -2.9999995) = 1.0000002 (should be 1.0000001)
hex values: 3F7FFFFF C03FFFFE 3F800002 3F800001
As explained in the log for the previous commit, the bug is normally
masked by doing float calculations in extra precision on i386's, but
is easily detected by ucbtest on systems that don't have accidental
extra precision.
This completes fixing all the bugs in powf() that were routinely found
by ucbtest.
(1) The bit for the 1.0 part of bp[k] was right shifted by 4. This seems
to have been caused by a typo in converting e_pow.c to e_powf.c.
(2) The lower 12 bits of ax+bp[k] were not discarded, so t_h was actually
plain ax+bp[k]. This seems to have been caused by a logic error in
the conversion.
These bugs gave wrong results like:
powf(-1.1, 101.0) = -15158.703 (should be -15158.707)
hex values: BF8CCCCD 42CA0000 C66CDAD0 C66CDAD4
Fixing (1) gives a result wrong in the opposite direction (hex C66CDAD8),
and fixing (2) gives the correct result.
ucbtest has been reporting this particular wrong result on i386 systems
with unpatched libraries for 9 years. I finally figured out the extent
of the bugs. On i386's they are normally hidden by extra precision.
We use the trick of representing floats as a sum of 2 floats (one much
smaller) to get extra precision in intermediate calculations without
explicitly using more than float precision. This trick is just a
pessimization when extra precision is available naturally (as it always
is when dealing with IEEE single precision, so the float precision part
of the library is mostly misimplemented). (1) and (2) break the trick
in different ways, except on i386's it turns out that the intermediate
calculations are done in enough precision to mask both the bugs and
the limited precision of the float variables (as far as ucbtest can
check).
ucbtest detects the bugs because it forces float precision, but this
is not a normal mode of operation so the bug normally has little effect
on i386's.
On systems that do float arithmetic in float precision, e.g., amd64's,
there is no accidental extra precision and the bugs just give wrong
results.