violated ECMA-119 (ISO9660): allow reserved4 to be 0x20 in PVD.
This allows tar to read FreeBSD distribution ISO images created
with makefs prior to the NetBSD bin/45217 bugfix (up to 9.0-BETA1).
In addition, merge the following important bugfixes from
libarchive's release/2.8 branch:
Revision 2812:
Merge 2811 from trunk: Don't try to verify that compression-level=0
produces larger results than the default compression, since this isn't
true for all versions of liblzma.
Revision 2817:
Merge 2814 from trunk: Fix Issue 121 (mtree parser error)
http://code.google.com/p/libarchive/issues/detail?id=121
Revision 2820:
Fix issue 119.
Change the file location check: instead of verifying that a file's
location does not exceed the volume size in blocks, verify that the
file's content does not extend past the end of the ISO image. This is
a better check than the previous one, even apart from the issue it fixes.
When reading an ISO image generated by an older version of the mkisofs
utility, a file's location can point at the end of the ISO image if its
size is zero and it is the last file in the image, so the location
value may equal the total number of blocks in the image.
http://code.google.com/p/libarchive/issues/detail?id=119
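As a rough illustration only (invented names, not the committed code),
the revised bounds check amounts to:

    #include <stdint.h>

    /* Hypothetical helper: accept a file whose content ends exactly at
     * the end of the volume, which a location-only check rejects for a
     * zero-length file recorded at the last block. */
    static int
    extent_within_volume(uint64_t location, uint64_t size,
        uint64_t volume_blocks, uint64_t logical_block_size)
    {
        uint64_t blocks = (size + logical_block_size - 1) / logical_block_size;

        return (location + blocks <= volume_blocks);
    }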
Revision 2955:
Issue 134: Fix libarchive 2.8 crashing in archive_write_finish() when
the open has failed and we're trying to write Zip format.
http://code.google.com/p/libarchive/issues/detail?id=134
Revision 2958:
Followup on Issue 134:
1) Port test_open_failure to libarchive 2.8 branch to test
the problem reported in Issue 134.
This test also shows that archive_read_open() sometimes
fails to report open errors correctly.
2) Fix the bug in archive_read.c
3) Comment out the tests verifying that close functions are invoked
promptly when open fails; that's fully fixed in libarchive 3.0,
but I don't think it's worth fixing here.
Revision 3484:
Use uintmax_t with %ju
Revision 3487:
Fix issue 163.
Correctly allocate enough memory for the saved input buffer.
http://code.google.com/p/libarchive/issues/detail?id=163
Revision 3542:
Merge 2516, 2536 from trunk: Allow path table offset values of
0 and 18, which are used by some ISO writers.
Reviewed by: kientzle
Approved by: re (kib)
MFC after: 3 days
* Enforce that the option interface can only be used before the archive is opened
* Correctly handle large skips on platforms with 32-bit off_t
* Use int64_t instead of off_t
(incorrect handling of zero-length reads before the copy buffer is
allocated) is masked by the iso9660 taster. Tar and cpio both enable
that taster, so they were protected from the bug; unzip is susceptible.
This both fixes the bug and updates the test harness to exercise
this case.
Submitted by: Ed Schouten (diagnosed the bug and drafted a patch)
MFC after: 7 days
* Split whiny skip function to create a new best-effort skip_lenient()
* Correctly increment the top-level file position only for the top filter
* Simulate skip by reading against the current filter, not the top filter
The latter two bugs aren't currently visible because no existing
filter delegates skip operations.
as the compression name when no other read filter bid. Add some
assertions to various tests to verify that read filters are properly
setting the textual name as well as the compression code.
This is the last phase of the "big decompression refactor" that
puts a lazy reblocking layer between each pair of read filters.
I've also changed the terminology for this area---the two kinds
of objects are now called "read filters" and "read filter bidders"---and
moved ownership of these objects to the archive_read core.
This greatly simplifies implementing new read filters, which
can now use peek/consume I/O semantics both for bidding (arbitrary
look-ahead!) and for reading streams (look-ahead simplifies handling
concatenated streams, for instance).
The first merge here is the overhaul proper; the remainder are small
fixes to correct errors in the initial implementation.
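As a rough sketch of the peek/consume style for bidders (internal names
and signatures recalled from memory and possibly inexact; this is not
the committed code):

    /* Would live inside libarchive next to archive_read_private.h; a
     * bidder can peek arbitrarily far ahead without consuming anything. */
    static int
    example_bid(struct archive_read_filter_bidder *self,
        struct archive_read_filter *filter)
    {
        const unsigned char *p;
        ssize_t avail;

        (void)self;
        p = __archive_read_filter_ahead(filter, 2, &avail);
        if (p == NULL)
            return (0);
        if (p[0] == 0x1f && p[1] == 0x8b)   /* gzip magic */
            return (16);                    /* bid strength in bits */
        return (0);
    }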
This is an attempt to eliminate a lot of redundant
code from the read ("decompression") filters by
changing them to juggle arbitrary-sized blocks
and consolidate reblocking code at a single point
in archive_read.c.
Along the way, I've changed the internal read/consume
API used by the format handlers to a slightly
different style originally suggested by des@. It
does seem to simplify a lot of common cases.
The most dramatic change is, of course, to
archive_read_support_compression_none(), which
has just evaporated into a no-op as the blocking
code this used to hold has all been moved up
a level.
There's at least one more big round of refactoring
yet to come before the individual filters are as
straightforward as I think they should be...
(left over from when the unified read/write structure was copied
to form separate read and write structures) and eliminate the
pointless initialization of a couple of the unused fields.
the number of bytes read is actually not important as long as we have at
least what we ask for. Illustrate its benefits by using it throughout
the ZIP support code, except for the few cases where it doesn't apply.
Approved by: kientzle
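The resulting pattern in the format code looks roughly like this sketch
(internal names such as __archive_read_ahead() recalled from memory):

    /* 'a' is the struct archive_read * handed to the format code. */
    const char *p;

    /* Ask for at least 30 bytes (a ZIP local-file header); the helper
     * may return more, and how much more does not matter. */
    if ((p = __archive_read_ahead(a, 30, NULL)) == NULL)
        return (ARCHIVE_FATAL);     /* truncated input */

    /* ... parse the 30-byte header from p ... */

    /* Consume exactly what was parsed, not whatever happened to be buffered. */
    __archive_read_consume(a, 30);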
that I've been working on but put off committing until after the
RELENG_7 branch, including:
* New manpages: cpio.5 mtree.5
* New archive_entry_strmode()
* New archive_entry_link_resolver() (see the example after this entry)
* New read support: mtree format
* Internal API change: read format auction only runs once
* Running the auction only once allowed simplifying a lot of bid logic.
* Cpio robustness: search for next header after a sync error
* Support device nodes on ISO9660 images
* Eliminate a lot of unnecessary copies for uncompressed archives
* Corrected handling of new GNU --sparse --posix formats
* Correctly handle a zero-byte write to a compressed archive
* Fixed memory leaks
Many of these improvements were motivated by the upcoming bsdcpio
front-end.
There have also been extensive improvements to the libarchive_test
test harness, which I'll commit separately.
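For reference, a small example of the two new archive_entry calls named
above, written against the public API as it later shipped (details may
differ in this snapshot):

    #include <archive.h>
    #include <archive_entry.h>
    #include <stdio.h>

    /* Print an ls-style mode string for an entry. */
    static void
    show_mode(struct archive_entry *entry)
    {
        printf("%s %s\n", archive_entry_strmode(entry),
            archive_entry_pathname(entry));
    }

    /* Deduplicate hardlinks the way a ustar writer would. */
    static void
    linkify_example(struct archive_entry **entry)
    {
        struct archive_entry *sparse = NULL;
        struct archive_entry_linkresolver *res;

        res = archive_entry_linkresolver_new();
        archive_entry_linkresolver_set_strategy(res, ARCHIVE_FORMAT_TAR_USTAR);
        archive_entry_linkify(res, entry, &sparse);
        /* *entry may now describe a hardlink instead of carrying a body. */
        archive_entry_linkresolver_free(res);
    }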
* "compression_program" support uses an external program
* Portability: no longer uses "struct stat" as a primary
data interchange structure internally
* Part of the above: refactor archive_entry to separate
out copy_stat() and stat() functions
* More complete tests for archive_entry
* Finish archive_entry_clone()
* Isolate major()/minor()/makedev() in archive_entry; remove
these from everywhere else.
* Bug fix: properly handle decompression look-ahead at end-of-data
* Bug fixes to 'ar' support
* Fix memory leak in ZIP reader
* Portability: better timegm() emulation in iso9660 reader
* New write_disk flags to suppress auto dir creation and not
overwrite newer files (for future cpio front-end; sketch after this entry)
* Simplify trailing-'/' fixup when writing tar and pax
* Test enhancements: fix various compiler warnings, improve
portability, add lots of new tests.
* Documentation: document new functions, first draft of
libarchive_internals.3
MFC after: 14 days
Thanks to: Joerg Sonnenberger (compression_program)
Thanks to: Kai Wang (ar)
Thanks to: Colin Percival (many small fixes)
Thanks to: Many others who sent me various patches and problem reports.
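A hedged sketch of the two new write_disk flags in use; the spellings
below (ARCHIVE_EXTRACT_NO_AUTODIR, ARCHIVE_EXTRACT_NO_OVERWRITE_NEWER)
are the ones from later public releases and are an assumption for this
snapshot:

    struct archive *ext = archive_write_disk_new();

    /* Don't create missing parent directories, and don't replace files
     * on disk that are newer than the entry being restored. */
    archive_write_disk_set_options(ext,
        ARCHIVE_EXTRACT_TIME |
        ARCHIVE_EXTRACT_NO_AUTODIR |
        ARCHIVE_EXTRACT_NO_OVERWRITE_NEWER);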
implementation, and mark it as deprecated. It will be removed entirely
in libarchive 3.0 (in FreeBSD 8.0?) but there's no reason for anyone to
use it instead of archive_read_data.
Approved by: kientzle
discards it, for use when the compression layer code doesn't know how to
skip data (e.g., everything other than the "none" compressor). This makes
format level code simpler because that code can now assume that the
compression layer always knows how to skip and will always skip exactly
the requested number of bytes.
Discussed with: kientzle (3 months ago)
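The fallback amounts to something like the following sketch (names
invented; the real code lives in the compression layer and reuses its
normal read path):

    #include <stdint.h>
    #include <sys/types.h>

    /* Illustrative callback type: decompress and return up to 'max' bytes. */
    typedef ssize_t read_at_most_fn(void *stream, const void **block, size_t max);

    static int64_t
    skip_by_reading(void *stream, read_at_most_fn *read_at_most, int64_t request)
    {
        int64_t total = 0;
        const void *block;
        ssize_t bytes;

        while (total < request) {
            /* Decompress up to the remaining amount and throw it away. */
            bytes = read_at_most(stream, &block, (size_t)(request - total));
            if (bytes <= 0)
                break;      /* EOF or error: short skip */
            total += bytes;
        }
        return (total);     /* equals request unless the stream ended early */
    }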
* libarchive_test program exercises many of the core features
* Refactored old "read_extract" into new "archive_write_disk", which
uses archive_write methods to put entries onto disk. In particular,
you can now use archive_write_disk to create objects on disk
without having an archive available (see the example after this entry).
* Pushed some security checks from bsdtar down into libarchive, where
they can be better optimized.
* Rearchitected the logic for creating objects on disk to reduce
the number of system calls. Several common cases now use a
minimum number of system calls.
* Virtualized some internal interfaces to provide a clearer separation
of read and write handling and make it simpler to override key
methods.
* New "empty" format reader.
* Corrected return types (this ABI breakage required the "2.0" version bump)
* Many bug fixes.
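A short usage example for archive_write_disk (public API; the destructor
is spelled here as in later releases, so treat that detail as an
assumption):

    #include <archive.h>
    #include <archive_entry.h>
    #include <string.h>

    /* Create a regular file on disk from an in-memory description,
     * with no source archive involved at all. */
    static int
    write_one_file(const char *path, const char *text)
    {
        struct archive *ad = archive_write_disk_new();
        struct archive_entry *ae = archive_entry_new();
        int r;

        archive_write_disk_set_options(ad, ARCHIVE_EXTRACT_TIME);
        archive_entry_set_pathname(ae, path);
        archive_entry_set_filetype(ae, AE_IFREG);
        archive_entry_set_perm(ae, 0644);
        archive_entry_set_size(ae, strlen(text));

        r = archive_write_header(ad, ae);
        if (r == ARCHIVE_OK) {
            archive_write_data(ad, text, strlen(text));
            archive_write_finish_entry(ad);
        }
        archive_entry_free(ae);
        archive_write_close(ad);
        archive_write_free(ad);     /* archive_write_finish() in the 2.x API */
        return (r);
    }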
Fallout from changing the skip API to use off_t instead of size_t: Print
the skip length using %jd and cast to (intmax_t) instead of %d / (int),
and if ARCHIVE_API_VERSION >= 2, allow the client skipper to be called
for requests longer than SSIZE_MAX. [2]
Approved by: kientzle
Pointy hats to: kientzle [1], cperciva [2]
MFC after: 3 days
a vanilla 2-clause BSD license, but somehow some confusing
extra verbiage got copied from somewhere.
Also, update the copyright dates to 2007 for all of the files.
Prompted by: several questions about what those extra words really mean
* Actually use the HAVE_<header>_H macros to conditionally include
system headers. They've been defined for a long time, but only
used in a few places. Now they're used pretty consistently
throughout.
* Fill in a lot of missing casts for conversions from void*.
Although Standard C doesn't require this, some people have been
trying to use C++ compilers with this code, and they do require it.
Bit-for-bit, the compiled object files are identical, except for
one assert() whose line number changed, so I'm pretty confident I
didn't break anything. ;-)
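For illustration, the two kinds of change look roughly like this (a
generic sketch, not a quote of any particular file):

    /* Only include system headers that configure actually found. */
    #ifdef HAVE_SYS_STAT_H
    #include <sys/stat.h>
    #endif
    #ifdef HAVE_UNISTD_H
    #include <unistd.h>
    #endif
    #include <stdlib.h>

    /* Explicit cast from void*: not required by C, but keeps C++ compilers happy. */
    char *buff = (char *)malloc(1024);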
* Expose functions for setting the "skip file" dev/ino information
* Expose functions for setting/querying the block size on reads
* Correctly propagate errors out of archive_read_close/archive_write_close
* Update manpage with information about new functions
increases performance when extracting a single entry from a large
uncompressed archive, especially on slow devices such as USB hard
drives.
Requires a number of changes:
* New archive_read_open2() supports a 'skip' client function (sketch after this entry)
* Old archive_read_open() is implemented as a wrapper now, to
continue supporting the old API/ABI.
* _read_open_fd and _read_open_file sprout new 'skip' functions.
* compression layer gets a new 'skip' operation.
* compression_none passes skip requests through to client.
* compression_{gzip,bzip2,compress} simply ignore skip requests.
Thanks to: Benjamin Lutz, who designed and implemented the whole thing.
I'm just committing it. ;-)
TODO: Need to update the documentation a little bit.
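A hedged sketch of the new entry point with a client skip callback; the
callback's integer type has varied across API versions, so take the
exact signatures below as assumptions:

    #include <archive.h>
    #include <unistd.h>

    /* Illustrative client: a plain file descriptor with a seek-based skip. */
    struct mydata { int fd; char buff[64 * 1024]; };

    static ssize_t
    my_read(struct archive *a, void *client, const void **buff)
    {
        struct mydata *m = client;
        (void)a;
        *buff = m->buff;
        return (read(m->fd, m->buff, sizeof(m->buff)));
    }

    static off_t
    my_skip(struct archive *a, void *client, off_t request)
    {
        struct mydata *m = client;
        (void)a;
        /* Seek forward instead of reading; report how much was skipped. */
        if (lseek(m->fd, request, SEEK_CUR) < 0)
            return (0);     /* can't seek: libarchive reads instead */
        return (request);
    }

    /* ... archive_read_open2(a, m, NULL, my_read, my_skip, NULL); ... */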
routine fails or the first read fails), invoke the client close
routine immediately so the client can clean up. Also, don't store the
client pointers in this case, so that the client close routine can't
accidentally get called more than once.
A minor style fix to archive_read_open_fd.c while I'm here.
PR: 86453
Thanks to: Andrew Turner for reporting this and suggesting a fix.
"HEADER" unless the open is successful. Instead, leave the state as
"NEW." In particular, if archive_read_open() fails, a subsequent call
to archive_read_next_header() will now cause an explicit assertion
failure instead of a silent segmentation fault.
This may need a little more work to fully realize the intention: If
archive_read_open() fails, you should be able to call it again on the
same archive handle to open a different archive (or the same archive
using a different mechanism).
compiling on IRIX and Solaris. Remove the "archive_check_magic" macro
that existed only to provide __func__ to the underlying __archive_check_magic
function.
Thanks to: Darin Broady
MFC after: 14 days
* Handles entries with compressed size >2GB (signed/unsigned cleanup)
* Handles entries with compressed size >4GB ("ZIP64" extension)
* Handles Unix extensions (ctime, atime, mtime, mode, uid, etc)
* Format-specific "skip data" override allows ZIP reader to skip
entries without decompressing them, which makes "tar -t"
a lot faster (see the sketch after this entry).
* Handles "length-at-end" entries generated by, e.g., "zip -r - foo"
Many thanks to: Dan Nelson, who contributed the code and test files for
the first three items above and suggested the fourth.
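The speedup shows up in an ordinary listing loop like this sketch
(public API; 'a' is an already-opened read handle):

    /* List entries without decompressing their bodies; the new skip
     * override lets the ZIP reader seek past each entry instead of
     * inflating it. */
    struct archive_entry *ae;

    while (archive_read_next_header(a, &ae) == ARCHIVE_OK) {
        printf("%s\n", archive_entry_pathname(ae));
        archive_read_data_skip(a);
    }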
simple errx() function.
Improve behavior when bzlib/zlib are missing by detecting and
issuing an error message on attempts to read gzip/bzip2 compressed
archives.
and close it) and "finish" (destroy the object) functions. For backwards
compat and simplicity, have "finish" invoke "close" transparently if needed.
This allows clients to close the archive and check end-of-operation
statistics before destroying the object.
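A hedged example of the resulting call sequence; the statistics call and
the destructor name shown are the 2.x spellings and changed in later
versions:

    /* Flush end-of-archive data and finalize statistics... */
    archive_write_close(a);

    /* ...inspect them while the object still exists... */
    printf("wrote %jd compressed bytes\n",
        (intmax_t)archive_position_compressed(a));

    /* ...then destroy the object; finish calls close itself if needed. */
    archive_write_finish(a);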
push extract data down into archive_read_extract.c and out
of the library-global archive_private.h; push dir-specific
mode/time fixup down into dir restore function; now that the
fixup list is file-local, I can use somewhat more natural
naming.
Oh, yeah, update a bunch of comments to match current reality.