hardlink table for two reasons: 1. If le->name is set to NULL, the
structure le won't be inserted into the table; 2. Even if le somehow
did manage to get into the table with le->name equal to NULL, we would
die when we dereferenced le->name before we could get to the point of
freeing the entry.
Remove the unnecessary "if (le->name != NULL)" test and just free the
pointer.
Found by: Coverity Prevent
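As a rough illustration of the simplification described above (the
surrounding function is not shown in this excerpt), the change amounts
to:

    /* Before: redundant test; free(NULL) is a no-op per C89/POSIX. */
    if (le->name != NULL)
            free(le->name);

    /* After: rely on free() accepting a NULL pointer. */
    free(le->name);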
While we're here, fix a long-standing bug in the handling of write(2)
errors: The API changed from "return # of bytes written" to "return
status code" almost 4 years ago, so instead of returning (-1) we need
to return ARCHIVE_FATAL.
Found by: Coverity Prevent [1]
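A minimal sketch of the kind of change involved (the function and
variable names here are illustrative, not the actual libarchive
source):

    /* Old convention: pass write(2)'s -1 through as if it were a
     * byte count. */
    if (write(fd, buf, len) < 0)
            return (-1);

    /* Current convention: the function reports a status code. */
    if (write(fd, buf, len) < 0)
            return (ARCHIVE_FATAL);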
declaring a variable which points to it. Aside from eliminating a
line of code and one level of unnecessary indirection, this eliminates
a false positive in Coverity.
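Purely for illustration (the structure involved is not shown in this
excerpt, so the names below are hypothetical), the pattern being
removed looks like:

    /* Before: a pointer variable that exists only as an alias. */
    struct foo *p = &sc->sc_foo;
    p->count = 0;

    /* After: touch the member directly. */
    sc->sc_foo.count = 0;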
the ELF files. This is complicated by the fact that the actual CTF
parsing has to be done in CDDL'd code, so the BSD-licensed code only
knows about the opaque data which it must be able to free.
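One way to picture the ownership split described above (the types and
names here are hypothetical, not the actual sources): the BSD-licensed
side holds only an untyped buffer and its length, enough to release it,
while all interpretation happens behind the CDDL'd interface.

    #include <stdlib.h>

    struct ctf_blob {
            void    *cb_data;       /* opaque CTF section contents */
            size_t   cb_size;       /* kept only for accounting */
    };

    /* The BSD side can free the blob without knowing its layout. */
    static void
    ctf_blob_free(struct ctf_blob *cb)
    {
            free(cb->cb_data);
            cb->cb_data = NULL;
            cb->cb_size = 0;
    }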
to spam all vaps and this won't happen if the frame comes from a station
that is associated to an ap vap (and so has an entry in the table)
Noticed by: Jared Go
Reviewed by: thompsa
on a 7.0 or later system that were created on a pre-5.0 system.
We must ensure that restore zeros out the previously undefined
birthtime and external attribute size fields when reading dump
tapes made by the UFS1 dump program.
The problem is that UFS2 dump carefully zeros out the unused
birthtime and external attribute size fields in the dump header
when dumping UFS1 filesystems, but the UFS1 dump didn't know about
those fields (they were spares) so just left whatever random junk
was in them. So, when restoring one of these pre-UFS2 dumps,
the new restore would eventually trip across a header that had
a non-zero external attribute size and try to extract it. That
consumed several tape blocks which left it totally out of sync
and very unhappy (i.e., the panic). The fix is in the gethead()
function, which modernizes old headers by copying old fields to
their new locations and (with this fix) zeroing out previously
undefined fields.
PR: bin/120881
Reviewed by: David Malone & Scott Lambert
MFC after: 1 week
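A sketch of the idea behind the gethead() change (the field names
below are illustrative, not necessarily those of the real dump header
structure): when an old-format header is modernized, the fields that
UFS1 dump never knew about must be cleared rather than inherited as
random junk.

    /* Modernize an old header: copy the fields the old format
     * actually defined... */
    buf->c_date  = ohead->c_date;
    buf->c_ddate = ohead->c_ddate;
    /* ...and zero the fields UFS1 dump left as uninitialized spares. */
    buf->c_birthtime = 0;
    buf->c_extsize   = 0;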
Ensure that the old fields in dump headers from UFS1 filesystem
tapes are properly copied to their new locations. Move the check
of dumpdate so that it follows the code which ensures that the
appropriate fields have been copied.
PR: bin/118087
Help from: David Malone, Scott Lambert, Javier Martín Rueda
MFC after: 2 weeks
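The ordering point above can be pictured like this (hypothetical
helper names, not the actual restore code): the dumpdate comparison is
only meaningful once the header has been converted, so the conversion
has to come first.

    /* 1. Normalize the header (copy old fields, zero undefined ones). */
    if (converthead(buf) == FAIL)
            return (FAIL);
    /* 2. Only now is the dump date field in its expected location. */
    if (buf->c_ddate != dumpdate)
            return (FAIL);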
Even though singly-linked lists allow items to be removed in constant time
(when the previous element is known), the queue macros don't provide this.
Implement new REMOVE_NEXT() macros. Because the REMOVE() macros also
contain the same code, make them call REMOVE_NEXT().
The OpenBSD version of SLIST_REMOVE_NEXT() needs a reference to the list
head, even though it is unused. We'd better mimic this. The STAILQ version
also needs a reference to the list. This means the prototypes of both
macros are the same.
Approved by: philip (mentor)
PR: kern/121117
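For reference, a sketch of what an SLIST-style REMOVE_NEXT macro can
look like (written from the description above, not copied from the
committed queue.h): it unlinks the element after "elm" in constant
time, and it accepts the list head only to keep the prototype
compatible with OpenBSD's version.

    #define SLIST_REMOVE_NEXT(head, elm, field) do {                \
            SLIST_NEXT(elm, field) =                                \
                SLIST_NEXT(SLIST_NEXT(elm, field), field);          \
    } while (0)

The REMOVE() macros can then, once they have walked the list to find
the predecessor of the element being removed, simply invoke
REMOVE_NEXT(head, prev, field) instead of duplicating the unlinking
code.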
Our current TTY layer uses a set-uid application called ptchown to
change ownership of a PTY slave device. The new TTY layer implements
this functionality through a new ioctl().
By accident I discovered Darwin's TTY layer also uses this approach.
Because of this, they also have a GID_TTY.
Approved by: philip (mentor)
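Conceptually, a ptchown-style set-uid helper does little more than the
following (a simplified sketch using plain POSIX calls; the exact mode
bits and error handling of the real helper may differ):

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <grp.h>
    #include <unistd.h>

    /* fd refers to the slave device handed to the helper. */
    static int
    grant_slave(int fd)
    {
            struct group *gr = getgrnam("tty");     /* i.e., GID_TTY */
            gid_t gid = (gr != NULL) ? gr->gr_gid : (gid_t)-1;

            if (fchown(fd, getuid(), gid) == -1)
                    return (-1);
            /* rw for the owner, w for group tty (write(1), wall(1)). */
            return (fchmod(fd, S_IRUSR | S_IWUSR | S_IWGRP));
    }

With the new TTY layer the same effect is achieved inside the kernel
via an ioctl(), so no set-uid userland helper is needed.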
routines for those modules, rather than in the raw socket code. This
allows each privilege check to occur in exactly one place and avoids
duplicate checks across layers.
MFC after: 3 weeks
Sponsored by: nCircle Network Security, Inc.
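The pattern being described, sketched with a made-up function name and
with PRIV_NETINET_RAW used purely as an example privilege constant
(the actual routines and privileges touched by this change are not
shown in this excerpt):

    /* Sketch only: the module's attach routine performs its own check. */
    static int
    example_attach(struct socket *so, int proto, struct thread *td)
    {
            int error;

            error = priv_check(td, PRIV_NETINET_RAW);   /* example privilege */
            if (error != 0)
                    return (error);
            return (example_attach_common(so, td));     /* hypothetical */
    }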
argument, call mac_socket_check_connect() on that address before
proceeding with the send. Otherwise policies instrumenting the
connect entry point for the purposes of checking destination
addresses will not have the opportunity to check implicit
connect requests.
MFC after: 3 weeks
Sponsored by: nCircle Network Security, Inc.
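In outline (a sketch, not the committed code; the exact argument list
of mac_socket_check_connect() in the tree may differ slightly from what
is shown), the send path gains a check along these lines when an
explicit destination address is supplied:

    #ifdef MAC
            if (addr != NULL) {
                    error = mac_socket_check_connect(td->td_ucred, so, addr);
                    if (error != 0)
                            goto out;
            }
    #endif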
hard limit of 512 pending mutexes in the witness code and
we can easily have 1 million bucket mutexes initialized before
witness is up and running. Bumping the limit from 512 to 1M
is not really an option here...