as the compression name when no other read filter bid. Add some
assertions to various tests to verify that read filters are properly
setting the textual name as well as the compression code.
information to error strings. This caused a lot of unnecessary
duplication in error messages; in particular, there are a few cases
where error messages get copied from one archive object to another
and this would cause the strerror() info to get appended each time.
Restoring POSIX.1e Extended Attributes on FreeBSD, part 1
This implements the basic ability to restore extended attributes
on FreeBSD, including a test suite.
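For reference, the FreeBSD primitive involved looks roughly like this; a
minimal sketch only, with an illustrative wrapper name (restore_one_xattr)
and the user namespace chosen arbitrarily:
    #include <sys/types.h>
    #include <sys/extattr.h>

    /* Restore a single attribute onto an already-created file. */
    static int
    restore_one_xattr(const char *path, const char *name,
        const void *value, size_t size)
    {
        if (extattr_set_file(path, EXTATTR_NAMESPACE_USER,
            name, value, size) < 0)
            return (-1);
        return (0);
    }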
Zip entries that are zero length but stored with deflate. This
is arguably a silly thing to do (deflating a zero-length file actually
makes it bigger) but apparently quite a few Zip writers do this.
This was broken in two places: archive_write_disk disliked being asked
to write data to zero-length files (even if the write was zero-length)
and zip_read_file_header tripped over itself when non-regular files
had compressed bodies.
from libarchive.googlecode.com: Add a new "archive_read_disk" API
that provides the important service of reading metadata from the
disk. In particular, this will make it possible to remove all
knowledge of extended attributes, ACLs, etc, from clients such
as bsdtar and bsdcpio.
Closely related, this API also provides pluggable uid->uname
and gid->gname lookup and caching services similar to
the uname->uid and gname->gid services provided by archive_write_disk.
Remember this is also required for correct ACL management.
Documentation is still pending...
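A sketch of how a client might drive the new API; the calls shown
(archive_read_disk_new, archive_read_disk_set_standard_lookup,
archive_read_disk_entry_from_file, archive_read_free) are the names the
API ended up with in released libarchive and may not match this initial
commit exactly:
    #include <archive.h>
    #include <archive_entry.h>

    /* Populate 'entry' with the on-disk metadata for 'path'. */
    static int
    read_metadata_from_disk(const char *path, struct archive_entry *entry)
    {
        struct archive *a = archive_read_disk_new();
        int r;

        /* Cached uid->uname and gid->gname lookups, as described above. */
        archive_read_disk_set_standard_lookup(a);
        archive_entry_set_pathname(entry, path);
        r = archive_read_disk_entry_from_file(a, entry, -1, NULL);
        archive_read_free(a);
        return (r);
    }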
into the debugger on test setup failures (otherwise, the console window
just goes away and you can't see what went wrong). On all platforms,
clean up a stray buffer before exiting.
This is the last phase of the "big decompression refactor" that
puts a lazy reblocking layer between each pair of read filters.
I've also changed the terminology for this area---the two kinds
of objects are now called "read filters" and "read filter bidders"---and
moved ownership of these objects to the archive_read core.
This greatly simplifies implementing new read filters, which
can now use peek/consume I/O semantics both for bidding (arbitrary
look-ahead!) and for reading streams (look-ahead simplifies handling
concatenated streams, for instance).
The first merge here is the overhaul proper; the remainder are small
fixes to correct errors in the initial implementation.
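To make the bidder idea concrete, here is an illustrative bid function in
the new style; it assumes libarchive's private archive_read_private.h and
the internal __archive_read_filter_ahead() peek call, so treat the details
as approximate:
    #include "archive_read_private.h"   /* internal header (assumed) */

    /* Bid by peeking at upstream bytes without consuming them. */
    static int
    example_gzip_bid(struct archive_read_filter_bidder *self,
        struct archive_read_filter *upstream)
    {
        const unsigned char *p;
        ssize_t avail;

        (void)self; /* unused in this sketch */
        p = __archive_read_filter_ahead(upstream, 2, &avail);
        if (p == NULL)
            return (0);
        /* gzip magic: 0x1f 0x8b; bid the number of bits matched. */
        return ((p[0] == 0x1f && p[1] == 0x8b) ? 16 : 0);
    }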
locale-based failures on systems where the "C" locale is so permissive
that it cannot possibly fail. In particular, this fixes a test
problem on Cygwin.
In archive_write_disk: If archive_write_header() fails to create
the file, that's a failure and should return ARCHIVE_FAILED.
Metadata restore failures still return ARCHIVE_WARN, because
that's non-critical. Fix test_write_disk_secure test to
verify the correct return code in one case; add test_write_disk_failures
to do another very simple test of restore failure.
This should fix cpio coredumping when it tries to restore to
a write-protected directory.
Thanks to: Giorgos Keramidas
MFC after: 30 days
end of the compressed stream. This is desirable behavior,
but the implementation here is very broken and causes strange
problems, so disable it for now.
Thanks to Simon L. Nielsen for reporting this problem.
* support for bzip2 file with multiple concatenated bzip2 streams
* support for bzip2 file with junk after bzip2 stream
* support for gzip file with junk after gzip stream
* "fuzz" tester randomly modifies a bunch of input files in order to try
to crash libarchive (this found an amusing hang in the ISO9660 code
when trying to read images that advertised a zero blocksize).
This test is implemented, but commented out for now:
* support for gzip file with multiple concatenated gzip streams
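The "fuzz" tester above boils down to something like the following sketch
(not the actual test code; it uses current libarchive 3.x entry points
such as archive_read_support_filter_all and archive_read_free):
    #include <archive.h>
    #include <archive_entry.h>
    #include <stdlib.h>
    #include <string.h>

    /* Flip some random bytes in a copy of an archive image and make sure
     * the read path fails gracefully instead of crashing or hanging. */
    static void
    fuzz_once(const char *image, size_t size)
    {
        struct archive *a;
        struct archive_entry *ae;
        char *copy;
        int i;

        if (size == 0 || (copy = malloc(size)) == NULL)
            return;
        memcpy(copy, image, size);
        for (i = 0; i < 100; i++)
            copy[rand() % size] ^= (char)(1 << (rand() % 8));

        a = archive_read_new();
        archive_read_support_filter_all(a);
        archive_read_support_format_all(a);
        if (archive_read_open_memory(a, copy, size) == ARCHIVE_OK)
            while (archive_read_next_header(a, &ae) == ARCHIVE_OK)
                archive_read_data_skip(a);
        archive_read_free(a);
        free(copy);
    }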
This is an attempt to eliminate a lot of redundant
code from the read ("decompression") filters by
changing them to juggle arbitrary-sized blocks
and consolidate reblocking code at a single point
in archive_read.c.
Along the way, I've changed the internal read/consume
API used by the format handlers to a slightly
different style originally suggested by des@. It
does seem to simplify a lot of common cases.
The most dramatic change is, of course, to
archive_read_support_compression_none(), which
has just evaporated into a no-op as the blocking
code this used to hold has all been moved up
a level.
There's at least one more big round of refactoring
yet to come before the individual filters are as
straightforward as I think they should be...
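In the new style a format handler reads a header roughly like this (sketch
only; __archive_read_ahead/__archive_read_consume are internal names from
libarchive's private API, assumed here):
    #include "archive_read_private.h"   /* internal header (assumed) */

    static int
    example_read_header(struct archive_read *a)
    {
        const char *p;
        ssize_t avail;

        /* Peek: request at least 512 bytes without consuming anything. */
        p = __archive_read_ahead(a, 512, &avail);
        if (p == NULL)
            return (ARCHIVE_FATAL);
        /* ... parse the 512-byte header at p ... */
        __archive_read_consume(a, 512);  /* consume only after parsing */
        return (ARCHIVE_OK);
    }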
* Wrap long declarations to fit 80 chars
* #undef macros that shouldn't be exported
* Organize the version-dependent conditionals a
bit more consistently
Speculative:
* libarchive 3.0 will (eventually) use int64_t
instead of off_t. This is an attempt to avoid
some of the headaches caused by Linux LFS. (I'll
still have to do ugly things for the struct stat
references in archive_entry.h, of course.)
If it's not a regular file, don't return any data, even if the size is unknown.
Update the Zip test with a hand-tweaked Zip archive that has a
directory (with length-at-end set), a regular file without
length-at-end set, and a regular file with length-at-end set and a bad
CRC. Update the test code to verify that the file size is unset
for the regular file with length-at-end.
MFC after: 7 days
logic here gets a little complex, but the net effect is that the
SECURE_SYMLINKS flag will prevent us from ever following a symlink.
Without it, we'll only follow symlinks to dirs. bsdtar specifies
SECURE_SYMLINKS by default and suppresses it for -P.
I've also beefed up the write_disk_secure test to verify this
behavior.
PR: bin/126849
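From the client side this is just an extract flag (public API); a small
sketch of what bsdtar effectively does, with the helper name invented here:
    #include <archive.h>

    static struct archive *
    new_extractor(int insecure)   /* nonzero for bsdtar -P */
    {
        struct archive *ext = archive_write_disk_new();
        int flags = ARCHIVE_EXTRACT_TIME | ARCHIVE_EXTRACT_PERM;

        if (!insecure)
            flags |= ARCHIVE_EXTRACT_SECURE_SYMLINKS; /* never follow symlinks */
        archive_write_disk_set_options(ext, flags);
        return (ext);
    }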
unspecified size are "unlimited" (required by Zip reader, which
sometimes does not know the uncompressed size of an entry until it
gets to the end). Also, hardlinks with unspecified (or zero) size do
not overwrite the data on disk nor do they set metadata. This is
compatible with GNU tar and NetBSD pax behavior.
This generalizes the existing set/unset tracking for hardlink/symlink
fields and extends it to cover non-string fields. Eventually, this
will be further extended to cover most fields.
In particular, this is needed to correctly detect when time fields
are missing (for example, reading ustar archives doesn't set atime or
ctime) for proper time restore and is helpful when trying to determine
whether to overwrite data when restoring hardlinks.
This commit updates the tests but not the docs.
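From the client side the new tracking shows up through the *_is_set()
accessors; a minimal sketch using the spellings that shipped
(archive_entry_atime_is_set and friends):
    #include <archive_entry.h>

    /* Copy only the time fields that the source format actually recorded. */
    static void
    copy_recorded_times(struct archive_entry *from, struct archive_entry *to)
    {
        if (archive_entry_atime_is_set(from))
            archive_entry_set_atime(to, archive_entry_atime(from),
                archive_entry_atime_nsec(from));
        if (archive_entry_mtime_is_set(from))
            archive_entry_set_mtime(to, archive_entry_mtime(from),
                archive_entry_mtime_nsec(from));
    }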
Since various 'find' incantations can emit container directories
in various orders, we cannot refuse to update a dir because it's
apparently the same age.
MFC after: 3 days
understand which code paths aren't possible.
This commit eliminates 117 false positive bug reports of the form
"allocate memory; error out if pointer is NULL; use pointer".
schedule a chmod() fixup for directories. In particular, this fixes
sgid handling on systems where the sgid bit is inherited from the
parent directory (which means that the actual mode of the dir
does not match the mode used in the mkdir() system call).
It may be possible to tighten this condition a bit. In
working through this, I also found a few other places where
it looks like we can avoid a redundant syscall or two. I've
commented those here but not yet tried to address them.
file into a separate file (instead of embedding it in the C code)
and use later timestamps (timestamps too close to the Epoch fail
predictably on systems that lack timegm(), whose mktime() doesn't
support dates before the Epoch and which are running in timezones
with negative offsets from GMT). The goal here is to test the ISO
extraction, not the local platform's time support.
operation) and not ARCHIVE_WARN, since we don't actually open the file.
Both bsdtar and bsdcpio will try to copy file contents after an ARCHIVE_WARN,
which will fail loudly.
feedback, but the 2.5 branch is shaping up nicely.)
In addition to many small bug fixes and code improvements:
* Another iteration of versioning; I think I've got it right now.
* Portability: A lot of progress on Windows support (though I'm
not committing all of the Windows support files to FreeBSD CVS)
* Explicit tracking of MBS, WCS, and UTF-8 versions of strings
in archive_entry; the archive_entry routines now correctly return
NULL only when something is unset, and setting NULL properly clears
string values. Most charset conversions have been pushed down to
archive_string.
* Better handling of charset conversion failure when writing or
reading UTF-8 headers in pax archives
* archive_entry_linkify() provides multiple strategies for
hardlink matching to suit different format expectations
* More accurate bzip2 format detection
* Joerg Sonnenberger's extensive improvements to mtree support
* Rough support for self-extracting ZIP archives. Not an ideal
approach, but it works for the archives I've tried.
* New "sparsify" option in archive_write_disk converts blocks of nulls
into seeks.
* Better default behavior for the test harness; it now reports
all failures by default instead of coredumping at the first one.
While we're here, fix a long-standing bug in the handling of write(2)
errors: The API changed from "return # of bytes written" to "return
status code" almost 4 years ago, so instead of returning (-1) we need
to return ARCHIVE_FATAL.
Found by: Coverity Prevent [1]
declaring a variable which points to it. Aside from eliminating a
line of code and one level of unnecessary indirection, this eliminates
a false positive in Coverity.
from the private archive_write structure and fix up all writers to use
the format fields in the base "archive" structure. This error made it
impossible to query the format after setting up a writer because the
write format was stored in an inaccessible place.
"file" is described by multiple "lines" each possibly containing
multiple "keywords." Incorporate some additions from Joerg Sonnenberger
to handle linked files and correctly deal with backing files on disk.
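For illustration, a tiny hand-written mtree fragment showing one file
described by multiple lines, each carrying several keywords (values are
made up):
    /set type=file uid=0 gid=0
    ./README         mode=0644
    ./README         size=1234 time=1199145600.0
    ./bin            type=dir mode=0755
    ./bin/sh         mode=0555 contents=/bin/sh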
Disable the use of PaxHeader.<pid> for the fake pax extension pathname
until I can make the name here settable. Otherwise, tests that try
to compare output to static pre-generated reference files break.
(including pathname, gname, uname) be stored in UTF-8. This usually
doesn't cause problems on FreeBSD because the "C" locale on FreeBSD
can convert any byte to Unicode/wchar_t and from there to UTF-8. In
other locales (including the "C" locale on Linux which is really
ASCII), you can get into trouble with pathnames that cannot be
converted to UTF-8.
Libarchive's pax writer truncated pathnames and other strings at the
first nonconvertible character. (ouch!) Other archivers have worked
around this by storing unconvertible pathnames as raw binary, a
practice which has been sanctioned by the Austin group. However,
libarchive's pax reader would segfault reading headers that weren't
proper UTF-8. (ouch!) Since bsdtar defaults to pax format, this
affects bsdtar rather heavily.
To correctly support the new "hdrcharset" header that is going into
SUS and to handle conversion failures in general, libarchive's pax reader
and writer have been overhauled fairly extensively. They used to do
most of the pax header processing using wchar_t (Unicode); they now do
most of it using char so that common logic applies to either UTF-8 or
"binary" strings.
As a bonus, a number of extraneous conversions to/from wchar_t have
been eliminated, which should speed things up just a tad.
Thanks to: Bjoern Jacke for originally reporting this to me
Thanks to: Joerg Sonnenberger for noting a bad typo in my first draft of this
Thanks to: Gunnar Ritter for getting the standard fixed
MFC after: 5 days
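For reference, a pax extended header is a series of "length keyword=value"
records, where the leading decimal length counts the entire record
including itself, the blank, and the trailing newline; the new keyword is
just one more such record, e.g. (hand-computed, illustrative):
    21 hdrcharset=BINARY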
rely on a deprecated value to set the default. This is also
related to a longer-term goal of setting the default block
size based on format and possibly other factors, which makes
it a bad idea to tie this to a published constant.
new interface. Mark the functions that are going away in
libarchive 3.0.
In particular, archive_version_string() now computes the
string rather than assuming that it will be created by the
build infrastructure. Eventually, this will allow some
simplification of the build infrastructure.
* There are now only two public version identifiers: "number" is
a single integer that combines Major/minor/release in a single
value of the form Mmmmrrr. This is easy to compare against for
checking feature support. "string" is a displayable text string
of the form "libarchive M.mm.rr".
* The number is present both as a macro (version of the installed header)
and a function (version of the shared library). The string form
is available only as a function.
* Retain the older version definitions for now, but mark them all
as deprecated, to disappear in libarchive 3.0 (whenever that happens).
* Rework the various deprecation conditionals to use ARCHIVE_VERSION_NUMBER.
An ancillary goal is to reduce the number of @...@ substitutions that
are required. Someday, I might even be able to avoid build-time
processing of archive.h entirely.
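In use, the Mmmmrrr form looks like this (public API; a minimal sketch):
    #include <archive.h>
    #include <stdio.h>

    int
    main(void)
    {
    #if ARCHIVE_VERSION_NUMBER >= 2004000
        /* Compile-time check: the installed header is at least 2.4.0. */
    #endif
        /* Run-time values come from the shared library actually loaded. */
        printf("number: %d\n", archive_version_number());
        printf("string: %s\n", archive_version_string());
        return (0);
    }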
Remove the entirely pointless symbolic constant
and sizeof(unsigned char). (The constant
here is doubly wrong, since not only does
it obscure a basic format constant, it was
never intended to be a tar-specific value,
so could conceivably be changed at some point
in the future.)
filename table whose size is less than 65536 bytes.
The original intention was to not consume the filename table, so the
client will have a chance to look at it. To achieve that, the library
calls decompressor->read_ahead to read (look ahead) but does not call
decompressor->consume to consume the data; this imposes a limit, since
a read_ahead call can only look ahead at most BUFFER_SIZE (65536) bytes
at a time, and you cannot "look any further" before you consume what
you have already "seen".
This commit turns the GNU/SVR4 filename table into "archive format
data", i.e., the filename table will be consumed by libarchive, so the
65536-byte limit is gone, but the client can no longer access the
content of the filename table.
The 'ar' support test suite is changed accordingly. BSD ar(1) is not
affected by this change since it doesn't look at the filename table.
Reported by: erwin
Discussed with: jkoshy, kientzle
Reviewed by: jkoshy, kientzle
Approved by: jkoshy(mentor), kientzle
uudecode into the main test driver and invoking it just-in-time
within the various tests.
Also, incorporate a number of improvements to the main test support
code that have proven useful on other projects where I've used this
framework.
(left over from when the unified read/write structure was copied
to form separate read and write structures) and eliminate the
pointless initialization of a couple of the unused fields.
Maybe. In the meantime, my workarounds for trying to coax UTC without
timegm() are getting uglier and uglier. Apparently, some systems
don't support setenv()/unsetenv(), so you can't set the TZ env var and
hope thereby to coax mktime() into generating UTC. Without that, I
don't see a really good alternative to just giving up and converting to
localtime with mktime(). (I suppose I should research the Perl library
approach for computing an inverse function to gmtime(); that might
actually be simpler than this growing list of hacks.)
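For the record, one portable inverse-of-gmtime() is plain calendar
arithmetic (days from a civil date); this sketch is illustrative only and
is not the code libarchive adopted:
    #include <time.h>

    /* Days since 1970-01-01 for a proleptic Gregorian date. */
    static long long
    days_from_civil(int y, int m, int d)
    {
        long long era;
        unsigned yoe, doy, doe;

        y -= (m <= 2);
        era = (y >= 0 ? y : y - 399) / 400;
        yoe = (unsigned)(y - era * 400);                      /* [0, 399] */
        doy = (153u * (unsigned)(m + (m > 2 ? -3 : 9)) + 2) / 5
            + (unsigned)d - 1;                                /* [0, 365] */
        doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;          /* [0, 146096] */
        return (era * 146097 + (long long)doe - 719468);
    }

    /* timegm() replacement: UTC struct tm -> seconds since the Epoch. */
    static long long
    portable_timegm(const struct tm *t)
    {
        long long days = days_from_civil(t->tm_year + 1900,
            t->tm_mon + 1, t->tm_mday);
        return (((days * 24 + t->tm_hour) * 60 + t->tm_min) * 60 + t->tm_sec);
    }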
now returns a value, which supports such convenient
constructs as:
if (assert(NULL != foo())) {
}
Also be careful to setlocale("C") for each new test to
avoid locale pollution.
Also a couple of minor portability enhancements.
* If the platform can't restore char nodes, block nodes, or fifos,
don't try; just return an error.
* Include O_BINARY in most open() calls (define O_BINARY to 0 if the
platform doesn't provide a definition already)
* Refactor the ownership restore to more cleanly support platforms
that don't have any form of {l,f,}chown() call.
* Comment a lingering issue with older Unix-like systems that allow
root to hose the filesystem. I don't (yet) have a good solution for
this, but I expect it will require adding more redundant stat()
calls. <sigh>
MFC after: 14 days
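The O_BINARY point above is the usual portability shim; a sketch of the
pattern (not the exact libarchive code):
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>

    /* Platforms without a text/binary distinction don't define O_BINARY. */
    #ifndef O_BINARY
    #define O_BINARY 0
    #endif

    static int
    open_for_restore(const char *path, mode_t mode)
    {
        return (open(path, O_WRONLY | O_CREAT | O_EXCL | O_BINARY, mode));
    }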
global header if nothing else has been written before the closing of
the archive. This changes the behaviour when creating archives without
members: instead of generating a 0-size archive file, an archive with
just the global header (8 bytes in total) will be created. That is
indeed a valid archive by libarchive's definition, so subsequent
operations on this archive will be accepted. In particular, this fixes
the failure caused by the following sequence (which several ports use):
% ar cru libfoo.a # without specifying obj files
% ranlib libfoo.a
Reviewed by: kientzle, jkoshy
Approved by: kientzle
Approved by: jkoshy (mentor)
Reported by: erwin
MFC after: 1 month
obey or ignore the size field on a hardlink entry. In particular,
if we're reading a non-POSIX archive, we should always ignore
the size field.
This should fix both the audio/xmcd port and the math/unixstat port.
Thanks to: Pav Lucistnik for pointing these two ports out to me.
MFC after: 7 days
Even though I believe this is a good change, it does
have the potential to break certain clients, so it's
good to document the reasoning behind the change.
write a new test to exercise the hardlink strategies used
by different archive formats (tar, old cpio, new cpio).
This uncovered two problems, both fixed by this commit:
1) Enforce file size when writing files to disk.
2) When restoring hardlink entries, if they have data associated, go
ahead and open the file so we can write the data.
In particular, this fixes bsdtar/bsdcpio extraction of new cpio
formats where the "original" is empty and the subsequent "hardlink"
entry actually carries the data. It also provides correct behavior
for old cpio archives where hardlinked entries have their bodies
stored multiple times in the archive; the last body should always be
the one that ends up in the final file. The new pax format also
permits (but does not require) hardlinks to carry file data; again,
the last contents should always win.
Note that with any of these, a size of zero on a hardlink simply means
that the hardlink carries no data; it does not mean that the file has
zero size. A non-zero size on a hardlink does provide the file size.
Thanks to: John Baldwin, for reminding me about this long-standing bug
and sending me a simple example archive that prompted this test case.
one part" by simply ignoring the marker at the beginning
of the file. (Zip archivers reserve four bytes at the beginning
of each part of a multi-part archive; if it happens to only
require one part, those four bytes get filled with a placeholder
that can be ignored.)
Thanks to: Marius Nuennerich,
for pointing me to a Zip archive that libarchive couldn't handle
MFC after: 7 days
doesn't need to compensate for this situation.
While here, fix a minor longstanding bug that empty tar archives
(which begin with at least 512 zero bytes) never properly reported
their format. In particular, this fixes the output of:
bsdtar tvvf /dev/zero
And, of course, a new test to verify that libarchive correctly
recognizes the format of such files.
the number of bytes read is actually not important as long as we have at
least what we ask for. Illustrate its benefits by using it throughout
the ZIP support code, except for the few cases where it doesn't apply.
Approved by: kientzle
exercises and verifies the libarchive APIs:
* Improved error reporting; hexdumps are now provided for
many file/memory content differences.
* Overall status more clearly counts "tests" and "assertions"
* Reference files can now be stored on disk instead of having
to be compiled into the test program itself. A couple of
tests have been converted to this more natural structure.
* Several memory leaks corrected so that leaks within libarchive
itself can be more easily detected and diagnosed.
* New test: GNU tar compatibility
* New test: Zip compatibility
* New test: Zero-byte writes to a compressed archive entry
* New test: archive_entry_strmode() format verification
* New test: mtree reader
* New test: write/read of large (2G - 1TB) entries to tar archives
(thanks to recent performance work, this test only requires a few seconds)
* New test: detailed format verification of cpio odc and newc writers
* Many minor additions/improvements to existing tests as well.
that I've been working on but put off committing until after the
RELENG_7 branch, including:
* New manpages: cpio.5 mtree.5
* New archive_entry_strmode()
* New archive_entry_link_resolver()
* New read support: mtree format
* Internal API change: read format auction only runs once
* Running the auction only once allowed simplifying a lot of bid logic.
* Cpio robustness: search for next header after a sync error
* Support device nodes on ISO9660 images
* Eliminate a lot of unnecessary copies for uncompressed archives
* Corrected handling of new GNU --sparse --posix formats
* Correctly handle a zero-byte write to a compressed archive
* Fixed memory leaks
Many of these improvements were motivated by the upcoming bsdcpio
front-end.
There have also been extensive improvements to the libarchive_test
test harness, which I'll commit separately.
a length field of zero; it does not mean the body is empty.
Thanks to: Lapo Luchini for sending me a JAR archive that demonstrated this bug
MFC after: 3 days
This can only happen on 32-bit systems when you're reading
an uncompressed archive and the skip request is an exact
multiple of 4G (e.g., skipping a tar entry with an 8G body).
The symptom is that the read_ahead() ends up returning zero
bytes, and the extraction stops with a premature end-of-file.
Using '1' here is more correct anyway, as it allows read_ahead()
to function opportunistically and minimize copying.
MFC after: 5 days
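The failure mode, reduced to its essence (illustrative only):
    #include <stdint.h>
    #include <stddef.h>

    static void
    illustrate_truncation(void)
    {
        /* An 8G tar entry body produces a skip request of 8 * 2^30 bytes. */
        uint64_t request = 8ULL * 1024 * 1024 * 1024;
        /* When size_t is 32 bits, narrowing an exact multiple of 4G gives
         * 0, so the skip appears to succeed while skipping nothing. */
        size_t narrowed = (size_t)request;

        (void)narrowed;
    }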
In particular, the previous code led to archives that had
non-empty bodies following directory entries. Not a fatal
problem, as bsdtar and GNU cpio are both happy to just skip
this bogus data, but it still shouldn't be there.
MFC after: 3 days
Return EOF immediately if an entry in a ZIP archive has no body.
In particular, the latter issue was causing bsdtar to emit spurious
warnings when extracting directory entries from ZIP archives.
MFC after: 3 days
number of bytes written, even when used to write files to
disk. Extend the test suite to verify the correct return
values for archive_write_data() and archive_write_data_block().
Thanks to: Bruce Mah, for stepping in promptly to back out the
earlier broken version of this fix
Thanks to: Colin Percival, for pointing out the correct fix
MFC after: 5 days
Approved by: re (ksmith)
Pointy hat: \me
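With this fix a client can treat both cases uniformly; a small sketch
against the public API:
    #include <sys/types.h>
    #include <archive.h>

    /* Returns 0 on success, -1 on error or short write. */
    static int
    write_block(struct archive *a, const void *buff, size_t len)
    {
        ssize_t written = archive_write_data(a, buff, len);

        if (written < 0)
            return (-1);
        return ((size_t)written == len ? 0 : -1);
    }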
most noticeably the incorrect extraction of files by bsdtar.
This commit reverts:
src/lib/libarchive/archive_write_disk.c 1.15
src/lib/libarchive/test/test_write_disk.c 1.4
Approved by: re (implicitly)
(when used to restore files to disk) to match:
* The documentation
* The return values of this function when used
to write files into an archive.
Approved by: re (bmah)
Pointy hat: \me
MFC after: 5 days
GNU tar 1.17's implementation of --posix --sparse,
at the cost of losing compatibility with GNU tar 1.16.
Fortunately, the 1.17 implementation actually makes sense,
so the libarchive code is now a bit more straightforward
than before.
Background: GNU tar 1.16 defined a new way to store
sparse files in --posix archives. Unfortunately,
the implementation incorrectly inserted several
blocks of null padding after each such entry.
As a result, non-GNU tar implementations saw the
archive as truncated after any sparse entry.
This was fixed in GNU tar 1.17 at the cost of
losing compatibility with GNU tar 1.16 for this
new format (which is not the default, so hopefully
rarely used). Libarchive recently gained support
for reading the GNU tar 1.16 formats; this commit
updates it to read the GNU tar 1.17 variant instead.
Approved by: re (ksmith for libarchive portion)
Approved by: re (blanket for libarchive_test portion)
MFC after: 5 days
owner restore is not requested. If you ask
for permissions to be restored but not owner,
you will now get no error if suid/sgid bits
cannot be set. (It's a security hole to restore
suid/sgid bits if the owner/group aren't restored.)
This fixes an obscure problem where a simple
"tar -xf" with no other options will sometimes
fail gratuitously because of suid/sgid bits.
This is causing occasional problems for people
using bsdtar as a drop-in replacement for
"that other tar program." ;-)
Note: If you do ask for owner restore, then suid/sgid
restore failures still issue an error. This
only suppresses the error in the case where an
suid/sgid bit restore fails because of an owner
mismatch and owner restore was not requested.
Approved by: re (bmah)
MFC after: 7 days
In particular:
* Include a second entry in all of the test archives (to catch errors
with intermediate padding)
* Test the GNU tar 1.17 version of "posix sparse format 1.0"
instead of the GNU tar 1.16 version (the latter is no longer
supported by GNU tar).
Right now, libarchive fails this test because I originally
implemented the GNU tar 1.16 version of "posix sparse format 1.0".
I'll fix libarchive shortly.
Approved by: re (blanket, libarchive testing)
* Allow libarchive_test to compile on Interix again.
* Track the test name (not just line number) when counting skipped tests.
Thanks to: Joerg Sonnenberger
Approved by: re (blanket; libarchive testing)
couldn't allocate more memory for a string. Change
this so it returns NULL in that case, and update
all of its callers to handle the error. Some of
those callers can now return errors back to the
client instead of calling exit(3).
Approved by: re (bmah)
if there was more than one. In particular, this simplifies
test_tar_filenames.c, which has a tendency to be very noisy otherwise.
Approved by: re (blanket, libarchive testing)
it now verifies that the returned blocks have the correct data
at the correct file offsets, ignoring any null padding that
may exist.
Approved by: re (blanket, libarchive test suite)
behavior with truncated or damaged pax archives. This
tests most of the cases covered by the recent security advisory.
Approved by: re (blanket, libarchive test suite)
archive_read_open_memory.c that tries to test border
cases. In particular, it copies over each returned block
so that formats or decompressors that read past the end
of a returned block will break.
Approved by: re (blanket, libarchive test suite)
tar archives, including a potentially exploitable buffer overflow.
Approved by: re (kensmith, security blanket)
Reviewed by: kientzle
Security: FreeBSD-SA-07:05.libarchive
ARCHIVE_VERSION_STAMP to selectively disable tests that don't
apply to that version; new "skipping()" function reports skipped
tests; modify final summary to report component test failures and
skips.
Note: I don't currently intend to MFC the test suite itself;
anyone interested should just check out and use this version
of the test suite, which should work for any library version.
Approved by: re (Ken Smith, blanket)
of libarchive being used. I've been taking advantage of this
with a recent round of updates to libarchive_test so that it
can test older and newer versions of the library.
Approved by: re (Ken Smith)
skip() callback to skip over data when reading uncompressed
archives. This gets invoked, for example, during tar -t
or tar -x with a filename argument. The revised code
only calls [lf]seek() on regular files, instead of depending
on the kernel to return an error.
Thanks to: bde for explaining the implementation of lseek()
Thanks to: Daniel O'Connor for testing
Approved by: re (Ken Smith)
MFC after: 5 days
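The idea, seen from the client side, is a skip callback along these lines
(sketch; the callback signature follows libarchive 2.x's off_t-based
convention and may differ in other versions):
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <archive.h>

    struct file_data { int fd; };

    /* Return the number of bytes actually skipped; returning 0 lets the
     * library fall back to reading and discarding the data. */
    static off_t
    file_skip(struct archive *a, void *client_data, off_t request)
    {
        struct file_data *f = client_data;
        struct stat st;

        (void)a;
        if (fstat(f->fd, &st) != 0 || !S_ISREG(st.st_mode))
            return (0);
        if (lseek(f->fd, request, SEEK_CUR) < 0)
            return (0);
        return (request);
    }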
assume yes unless seek has previously failed, but I fear I'll have to
avoid seeks under other circumstances. (For instance, tape drives on
FreeBSD seem to return garbage from lseek().) Also, optimize away
zero-byte skips.
- Add and document the KVM and KVM_SUPPORT options that
are needed for the ifmcstats(3) makefile
- Garbage collect unused variables
- Add missing inclusion of bsd.own.mk where needed
Approved by: kan (mentor)
Reviewed by: ru
* "compression_program" support uses an external program
* Portability: no longer uses "struct stat" as a primary
data interchange structure internally
* Part of the above: refactor archive_entry to separate
out copy_stat() and stat() functions
* More complete tests for archive_entry
* Finish archive_entry_clone()
* Isolate major()/minor()/makedev() in archive_entry; remove
these from everywhere else.
* Bug fix: properly handle decompression look-ahead at end-of-data
* Bug fixes to 'ar' support
* Fix memory leak in ZIP reader
* Portability: better timegm() emulation in iso9660 reader
* New write_disk flags to suppress auto dir creation and not
overwrite newer files (for future cpio front-end)
* Simplify trailing-'/' fixup when writing tar and pax
* Test enhancements: fix various compiler warnings, improve
portability, add lots of new tests.
* Documentation: document new functions, first draft of
libarchive_internals.3
MFC after: 14 days
Thanks to: Joerg Sonnenberger (compression_program)
Thanks to: Kai Wang (ar)
Thanks to: Colin Percival (many small fixes)
Thanks to: Many others who sent me various patches and problem reports.
"cache_size * sizeof(struct bucket)". The former is valid in C99 but can
confuse earlier compilers, while the latter is a standard idiom which all
C compilers understand.
Approved by: kientzle
against NULL when it is first allocated) and pointless (we've already
dereferenced the pointer several times).
Found by: Coverity Prevent(tm)
CID: 3204
going to overwrite it with a new value a few lines later.
Visual inspection of the surrounding code indicates that the code does
what it's supposed to do; i.e., the pointless code wasn't supposed to
be doing something other than what it was doing.
CID: 3323
Found by: Coverity Prevent(tm)
occur on the write side of extracting a file to ARCHIVE_WARN errors
when returning them from archive_read_extract.
In bsdtar: Use the return code from archive_read_data_into_fd and
archive_read_extract to determine whether we should continue trying to
extract an archive after one of the entries fails.
This commit makes extracting a truncated tarball complain once about
the archive being truncated, instead of complaining twice (once when
trying to extract an entry, and once when trying to seek to the next
entry).
Discussed with: kientzle
* use "AR_GNU" as the format name instead of AR_SVR4 (it's what
everyone is going to call it anyway)
* Simplify numeric parsing to unsigned (none of the numeric values
should ever be negative); don't run off the end of numeric fields.
* Finish parsing the common header fields before the next I/O request
(which might dump the contents)
* Be smarter about format guessing and trimming filenames.
* Most of the magic values are only used in one place, so just inline them.
* Many more comments.
* Be smarter about handling damaged entries; return something reasonable.
* Call it a "filename table" instead of a "string table"
* Update tests.
Enable selection of 'ar', 'arbsd', and 'argnu' formats by name
(this allows bsdtar to create ar format archives).
The 'ar' writer still needs some work; it should reject
entries that aren't regular files and should probably also
strip leading paths from filenames.
for directories. bsdtar used to add this, but that recently got
lost somehow. So now I'm adding it back in libarchive.
The only odd part of doing this in libarchive: Adding a directory to
a tar archive and then reading it back again can yield a different name.
Add a test case to exercise some boundary conditions with
tar filenames and ensure that trailing slashes are added to
dir names only as necessary.
Thanks to: Oliver Lehmann for bringing this regression to my attention.
conditionally use utime() when utimes() is not available;
allow the most common wide-char functions to be replaced
when local alternatives are lacking.
message in the reader to the error message from the writer if the
error which occurred was in the writer. This avoids error messages
of "Empty error message" when extracting truncated archives.
implementation, and mark it as deprecated. It will be removed entirely
in libarchive 3.0 (in FreeBSD 8.0?) but there's no reason for anyone to
use it instead of archive_read_data.
Approved by: kientzle
skip over the end-of-entry padding instead of reading and discarding
it.
Considering that tar files normally have a block size of 10kB, this
isn't likely to avoid reading any data, but at least it makes the code
simpler and clearer.
discards it, for use when the compression layer code doesn't know how to
skip data (e.g., everything other than the "none" compressor). This makes
format-level code simpler because that code can now assume that the
compression layer always knows how to skip and will always skip exactly
the requested number of bytes.
Discussed with: kientzle (3 months ago)
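The fallback amounts to a loop like the following sketch, written against
generic peek/consume stand-ins rather than the real internal primitives:
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h>

    /* Skip 'request' bytes by repeatedly peeking and consuming.  The
     * caller treats a return value smaller than 'request' as premature
     * end-of-stream. */
    static int64_t
    skip_by_reading(void *stream, int64_t request,
        const void *(*peek)(void *stream, size_t min, ssize_t *avail),
        void (*consume)(void *stream, size_t count))
    {
        int64_t total = 0;

        while (total < request) {
            ssize_t avail;
            size_t take;

            if (peek(stream, 1, &avail) == NULL || avail <= 0)
                break;
            take = (size_t)avail;
            if ((int64_t)take > request - total)
                take = (size_t)(request - total);
            consume(stream, take);
            total += take;
        }
        return (total);
    }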