into the debugger on test setup failures (otherwise, the console window
just goes away and you can't see what went wrong). On all platforms,
clean up a stray buffer before exiting.
This is the last phase of the "big decompression refactor" that
puts a lazy reblocking layer between each pair of read filters.
I've also changed the terminology for this area---the two kinds
of objects are now called "read filters" and "read filter bidders"---and
moved ownership of these objects to the archive_read core.
This greatly simplifies implementing new read filters, which
can now use peek/consume I/O semantics both for bidding (arbitrary
look-ahead!) and for reading streams (look-ahead simplifies handling
concatenated streams, for instance).
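As an illustration of that peek/consume style, here is a minimal,
self-contained sketch. It is not libarchive's internal code: the stream
type and the peek_ahead()/consume() helpers are hypothetical stand-ins
that read from a memory buffer, but they show how a bidder can look
arbitrarily far ahead without consuming anything, and how only the
winning filter consumes.

/* Hypothetical peek/consume bidding sketch (not libarchive internals). */
#include <stddef.h>
#include <stdio.h>
#include <sys/types.h>

struct stream {
    const unsigned char *buf;
    size_t len;
    size_t off;    /* bytes consumed so far */
};

/* Peek: expose at least 'min' unconsumed bytes, or return NULL. */
static const void *
peek_ahead(struct stream *s, size_t min, ssize_t *avail)
{
    *avail = (ssize_t)(s->len - s->off);
    if ((size_t)*avail < min)
        return (NULL);
    return (s->buf + s->off);
}

/* Consume: mark 'n' peeked bytes as used; later peeks start after them. */
static void
consume(struct stream *s, size_t n)
{
    s->off += n;
}

/* Bidder: inspect the stream without consuming it; higher bid wins. */
static int
gzip_bidder_bid(struct stream *s)
{
    const unsigned char *p;
    ssize_t avail;
    int bits = 0;

    p = peek_ahead(s, 3, &avail);
    if (p == NULL)
        return (0);
    if (p[0] == 0x1f) bits += 8;   /* gzip magic, byte 1 */
    if (p[1] == 0x8b) bits += 8;   /* gzip magic, byte 2 */
    if (p[2] == 8)    bits += 8;   /* deflate method */
    return (bits);
}

int
main(void)
{
    static const unsigned char gz[] = { 0x1f, 0x8b, 8, 0 };
    struct stream s = { gz, sizeof(gz), 0 };

    printf("gzip bid: %d\n", gzip_bidder_bid(&s));
    consume(&s, 2);    /* only the winning filter consumes */
    printf("gzip bid after consuming 2 bytes: %d\n", gzip_bidder_bid(&s));
    return (0);
}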
The first merge here is the overhaul proper; the remainder are small
fixes to correct errors in the initial implementation.
locale-based failures on systems where the "C" locale is so permissive
that it cannot possibly fail. In particular, this fixes a test
problem on Cygwin.
In archive_write_disk: If archive_write_header() fails to create
the file, that's a failure and should return ARCHIVE_FAILED.
Metadata restore failures still return ARCHIVE_WARN, because
that's non-critical. Fix the test_write_disk_secure test to
verify the correct return code in one case; add test_write_disk_failures
with another very simple test of restore failure.
This should fix cpio coredumping when it tries to restore to
a write-protected directory.
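As a sketch of what this convention means for callers, the hypothetical
copy_entries() loop below distinguishes the cases: ARCHIVE_FAILED means
the entry could not be created (skip it and move on), ARCHIVE_WARN means
only metadata could not be restored (report and continue), and
ARCHIVE_FATAL means give up. The libarchive calls and return constants
are real; the surrounding helper is illustrative only.

#include <archive.h>
#include <archive_entry.h>
#include <stdio.h>

int
copy_entries(struct archive *reader, struct archive *writer)
{
    struct archive_entry *entry;
    int r;

    while ((r = archive_read_next_header(reader, &entry)) == ARCHIVE_OK) {
        r = archive_write_header(writer, entry);
        if (r == ARCHIVE_FATAL)
            return (r);                /* unrecoverable; stop */
        if (r == ARCHIVE_FAILED) {
            /* e.g., a write-protected directory: skip this entry */
            fprintf(stderr, "skipping %s: %s\n",
                archive_entry_pathname(entry),
                archive_error_string(writer));
            continue;
        }
        if (r == ARCHIVE_WARN)         /* metadata only; keep going */
            fprintf(stderr, "warning: %s\n", archive_error_string(writer));
        /* ... copy file data from reader to writer here ... */
        archive_write_finish_entry(writer);
    }
    return (r == ARCHIVE_EOF ? ARCHIVE_OK : r);
}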
Thanks to: Giorgos Keramidas
MFC after: 30 days
end of the compressed stream. This is desirable behavior,
but the implementation here is very broken and causes strange
problems, so disable it for now.
Thanks to Simon L. Nielsen for reporting this problem.
* support for bzip2 file with multiple concatenated bzip2 streams
* support for bzip2 file with junk after bzip2 stream
* support for gzip file with junk after gzip stream
* "fuzz" tester randomly modifies a bunch of input files in order to try
to crash libarchive (this found an amusing hang in the ISO9660 code
when trying to read images that advertised a zero blocksize).
This test is implemented, but commented out for now:
* support for gzip file with multiple concatenated gzip streams
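The sketch below shows the rough shape of that fuzz idea; it is not the
actual test-suite code. It assumes the caller has already loaded a valid
archive into buff/size, uses the 2.x-era archive_read_* names, and only
cares that reading damaged input returns cleanly instead of crashing or
hanging.

#include <archive.h>
#include <archive_entry.h>
#include <stddef.h>
#include <stdlib.h>

void
fuzz_once(unsigned char *buff, size_t size)
{
    struct archive *a;
    struct archive_entry *ae;
    int i;

    /* Damage a handful of random bytes in the input. */
    for (i = 0; i < 100; i++)
        buff[rand() % size] = (unsigned char)rand();

    /* Try to read the result; any outcome is fine as long as we return. */
    a = archive_read_new();
    archive_read_support_compression_all(a);
    archive_read_support_format_all(a);
    if (archive_read_open_memory(a, buff, size) == ARCHIVE_OK) {
        while (archive_read_next_header(a, &ae) == ARCHIVE_OK)
            archive_read_data_skip(a);
    }
    archive_read_finish(a);
}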
This is an attempt to eliminate a lot of redundant
code from the read ("decompression") filters by
changing them to juggle arbitrary-sized blocks
and by consolidating the reblocking code at a
single point in archive_read.c.
Along the way, I've changed the internal read/consume
API used by the format handlers to a slightly
different style originally suggested by des@. It
does seem to simplify a lot of common cases.
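A compact sketch of that read/consume style from a format handler's
point of view (hypothetical names again; the real internal API differs
in detail): the handler asks the core for a contiguous view of at least
N bytes, parses it in place, and then consumes exactly what it used, so
all reblocking stays in the core.

#include <stddef.h>
#include <string.h>
#include <sys/types.h>

struct source {
    const unsigned char *buf;
    size_t len, off;
};

/* Expose at least 'need' contiguous unconsumed bytes, or return NULL. */
static const void *
source_ahead(struct source *s, size_t need, ssize_t *avail)
{
    *avail = (ssize_t)(s->len - s->off);
    return ((size_t)*avail >= need ? s->buf + s->off : NULL);
}

static void
source_consume(struct source *s, size_t n)
{
    s->off += n;
}

/* A format handler reading one 512-byte header record in place. */
int
read_header_record(struct source *s, char name[100])
{
    const unsigned char *h;
    ssize_t avail;

    if ((h = source_ahead(s, 512, &avail)) == NULL)
        return (-1);           /* truncated input */
    memcpy(name, h, 100);      /* parse directly from the core's buffer */
    source_consume(s, 512);    /* hand back exactly what was used */
    return (0);
}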
The most dramatic change is, of course, to
archive_read_support_compression_none(), which
has just evaporated into a no-op, as the blocking
code it used to hold has all been moved up
a level.
There's at least one more big round of refactoring
yet to come before the individual filters are as
straightforward as I think they should be...
* Wrap long declarations to fit 80 chars
* #undef macros that shouldn't be exported (see the sketch after this list)
* Organize the version-dependent conditionals a
bit more consistently
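To illustrate the #undef item above (hypothetical macro and function
names, not the actual contents of archive.h): a header can define a
decoration macro for its own declarations and then #undef it at the
bottom, so the name is not exported to every file that includes the
header.

/* foo.h -- hypothetical public header */
#if defined(_WIN32) && defined(LIBFOO_DLL)
#define FOO_DECL __declspec(dllimport)
#else
#define FOO_DECL
#endif

FOO_DECL int         foo_version_number(void);
FOO_DECL const char *foo_version_string(void);

/* This macro is meaningless outside this header; don't export it. */
#undef FOO_DECL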
Speculative:
* libarchive 3.0 will (eventually) use int64_t
instead of off_t. This is an attempt to avoid
some of the headaches caused by Linux LFS. (I'll
still have to do ugly things for the struct stat
references in archive_entry.h, of course.)
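For context on the off_t concern (illustrative prototypes only, not the
actual libarchive 3.0 API): on 32-bit Linux, off_t is 32 bits unless the
application defines _FILE_OFFSET_BITS=64, so a public function returning
off_t can change ABI depending on how each caller was compiled, while a
fixed-width int64_t stays the same everywhere.

#include <stdint.h>
#include <sys/types.h>

/* ABI depends on the _FILE_OFFSET_BITS setting seen by each caller: */
extern off_t   entry_size_offt(void);

/* ABI is the same everywhere, regardless of LFS compile flags: */
extern int64_t entry_size_fixed(void);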