function, pru_close, to notify protocols that the file descriptor or
other consumer of a socket is closing the socket. pru_abort is now a
notification of close also, and no longer detaches. pru_detach is no
longer used to notify of close, and will be called during socket
tear-down by sofree() when all references to a socket evaporate after
an earlier call to abort or close the socket. This means detach is now
an unconditional teardown of a socket, whereas previously sockets could
persist after detach if the protocol retained a reference.
This facilitates sharing mutexes between layers of the network stack, as
the mutex is required during the checking and removal of references at
the head of sofree(). With this change, pru_detach can now assume that
the mutex will no longer be required by the socket layer after
completion, whereas before this was not necessarily true.
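As a rough illustration of the new lifecycle, consider the following
standalone model (not the kernel code itself; the *_model types and
model_*() functions exist only for this sketch):

    #include <stdlib.h>

    struct socket_model;                    /* forward declaration */

    struct protosw_model {
        void (*pru_close)(struct socket_model *);   /* consumer closing */
        void (*pru_abort)(struct socket_model *);   /* also notify-only */
        void (*pru_detach)(struct socket_model *);  /* final teardown */
    };

    struct socket_model {
        int so_count;                       /* outstanding references */
        struct protosw_model *so_proto;
    };

    /* sofree()-like path: detach runs only once all references drain. */
    static void
    model_sofree(struct socket_model *so)
    {
        if (--so->so_count > 0)
            return;                         /* references remain */
        so->so_proto->pru_detach(so);       /* unconditional teardown */
        free(so);
    }

    /* soclose()-like path: notify the protocol, then drop a reference. */
    static void
    model_soclose(struct socket_model *so)
    {
        so->so_proto->pru_close(so);        /* notification; no detach */
        model_sofree(so);                   /* frees only when refs drain */
    }
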
Reviewed by: gnn
rather than an error. Detaches do not "fail"; they either occur, or
the protocol flags SS_PROTOREF to take ownership of the socket.
soclose() no longer looks at so_pcb to see if it's NULL, relying
entirely on the protocol to decide whether it's time to free the
socket or not using SS_PROTOREF. so_pcb is now entirely owned and
managed by the protocol code. Likewise, no longer test so_pcb in
other socket functions, such as soreceive(), which have no business
digging into protocol internals.
Protocol detach routines no longer try to free the socket on detach;
this is performed in the socket code if the protocol permits it.
In rts_detach(), no longer test for rp != NULL in detach; likewise,
in other protocols that don't permit a NULL so_pcb, reduce the
incidence of testing for it during detach.
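A rough sketch of the resulting free decision (not the actual
sofree(); the reference checks are reduced to the essentials):

    /* Sketch only: the protocol, not so_pcb, decides ownership. */
    static void
    sofree_sketch(struct socket *so)
    {
        if (so->so_state & SS_PROTOREF)
            return;                 /* protocol took ownership */
        if (so->so_count != 0)
            return;                 /* consumers still hold references */
        (*so->so_proto->pr_usrreqs->pru_detach)(so);  /* void; can't fail */
        sodealloc(so);              /* socket layer frees unconditionally */
    }
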
netinet and netinet6 are not fully updated to this change, which
will be in an upcoming commit. In their current state they may leak
memory or panic.
MFC after: 3 months
than an int, as an error here is not meaningful. Modify soabort() to
unconditionally free the socket on the return of pru_abort(), and
modify most protocols to no longer conditionally free the socket,
since the caller will do this.
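The resulting calling convention looks roughly like this (a sketch,
not the committed soabort()):

    /* Sketch: pru_abort() is a void notification; the caller frees. */
    void
    soabort_sketch(struct socket *so)
    {
        (*so->so_proto->pr_usrreqs->pru_abort)(so);
        sofree(so);         /* unconditional; the protocol never frees */
    }
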
This commit likely leaves parts of netinet and netinet6 in a situation
where they may panic or leak memory, as they are not yet fully
updated by this commit. This will be corrected shortly in followup
commits to these components.
MFC after: 3 months
ipxpcb mutex. Contrary to the comment, even in 4.x this was unsafe,
as parallel use of the socket by another process would result in pcb
corruption if the mbuf allocation slept.
MFC after: 1 month
IPXP_DROPPED before continuing, and return EINVAL or ECONNRESET if
it is flagged. It's unclear why each situation should be one or
the other, but it is copied from netinet which has the same bugs.
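The resulting shape of an affected user request routine, roughly (a
sketch; which errno is returned varies by routine as noted above):

    /* Sketch: check IPXP_DROPPED up front instead of a NULL so_pcb. */
    static int
    spx_usrreq_sketch(struct socket *so)
    {
        struct ipxpcb *ipxp = sotoipxpcb(so);

        KASSERT(ipxp != NULL, ("spx_usrreq_sketch: ipxp == NULL"));
        if (ipxp->ipxp_flags & IPXP_DROPPED)
            return (ECONNRESET);    /* or EINVAL, depending on the call */
        /* ... normal processing ... */
        return (0);
    }
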
MFC after: 1 month
as belonging to SPX. This replaces the implicit assumption that the cb
pointer for non-SPX pcb's will be NULL. This isn't required in TCP/IP
as different pcb lists are maintained for different IP protocols; IPX
stores all pcbs on the same global ipxpcb_list.
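Schematically, SPX-only traversals of the shared list now skip
foreign pcbs by flag rather than by a NULL cb pointer (the marker is
shown here as IPXP_SPX, and the traversal helpers are placeholders
for this sketch):

    /* Sketch: walk the shared list, touching only SPX-owned pcbs. */
    for (ipxp = first_ipxpcb(); ipxp != NULL; ipxp = next_ipxpcb(ipxp)) {
        if ((ipxp->ipxp_flags & IPXP_SPX) == 0)
            continue;           /* datagram IPX pcb; cb is not ours */
        /* ... safe to treat ipxp->ipxp_pcb as a struct spxpcb ... */
    }
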
Foot provided by: gnn
MFC after: 1 month
- Introduce invariant that all IPX/SPX sockets will have valid so_pcb
pointers to ipxpcb structures, and that for SPX, the control block
pointer will always be valid. Don't attempt to free the socket or
pcb at various odd points, such as disconnect.
- Add a new ipxpcb flag, IPXP_DROPPED, which will be set in place of
freeing PCB's so that this invariant can be maintained. This flag
is now checked instead of a NULL check in various socket protocol
calls.
- Introduce many assertions that this invariant holds.
- Various pieces of code, such as the SPX timer code, no longer need
to jump through hoops in case it frees a PCB while running.
- Break out ipx_pcbfree() from ipx_pcbdetach(); likewise
spx_pcbdetach(). (A sketch follows this list.)
- Comment on some SMP-related limitations to the SPX code.
- Update copyrights.
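A rough sketch of the detach/free split under this invariant
(simplified; the real routines also manage lists and locks):

    /* Sketch: detach marks the pcb dropped; only pcbfree releases it. */
    static void
    ipx_pcbdetach_sketch(struct ipxpcb *ipxp)
    {
        ipxp->ipxp_flags |= IPXP_DROPPED;   /* mark, do not free */
        /* ... tear down network-level state ... */
    }

    static void
    ipx_pcbfree_sketch(struct ipxpcb *ipxp)
    {
        KASSERT(ipxp->ipxp_flags & IPXP_DROPPED,
            ("ipx_pcbfree_sketch: pcb not dropped"));
        /* ... unlink from lists, destroy the mutex ... */
        free(ipxp, M_PCB);
    }
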
MFC after: 1 month
of its allocations fails. Allocate the ipxp last so as to avoid having
to free it if another allocation goes wrong.
Normalize retrieval of ipxp and cb from socket in spx_sp_attach(), and
add assertions.
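The ordering idea, schematically (calls and error handling are
illustrative only):

    /* Sketch: every fallible allocation precedes the ipxp, so no
     * error path ever needs to free an ipxp. */
    cb = malloc(sizeof(*cb), M_PCB, M_NOWAIT | M_ZERO);
    if (cb == NULL)
        return (ENOBUFS);
    /* ... further allocations, each with its own early return ... */
    error = ipx_pcballoc(so, &ipxpcb_list, td);     /* ipxp last */
    if (error) {
        free(cb, M_PCB);        /* unwind earlier allocations only */
        return (error);
    }
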
MFC after: 1 month
especially reads of spx header structures, which will now be cached
on the stack until they can be copied out after releasing the lock.
Panic if a bad socket option direction is passed in by the caller.
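The copy-then-release pattern, roughly (a sketch; the lock macros and
field names are schematic):

    switch (sopt->sopt_dir) {
    case SOPT_GET:
        IPX_LOCK(ipxp);
        hdr = cb->s_shdr;       /* cache a copy on the stack */
        IPX_UNLOCK(ipxp);
        error = sooptcopyout(sopt, &hdr, sizeof(hdr)); /* may sleep */
        break;
    case SOPT_SET:
        /* ... symmetric: copy in first, then lock and apply ... */
        break;
    default:
        panic("spx_ctloutput: bad socket option direction");
    }
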
MFC after: 1 month
being committed:
- Wrap comments more evenly on right border.
- Clean up braces.
Also, along similar lines:
- Assert some pointers are non-NULL before dereferencing them.
- Remove one assertion that looks, on face value, poor.
MFC after: 1 month
variable on the spx_input() stack. It's not very large, and this will
avoid parallelism issues when spx_input() runs in more than one thread at
a time.
MFC after: 1 month
ipxpcb is NULL or not: in attach it will be, and on detach it won't be.
If for any reason these invariants don't hold true, panicking is a good
idea.
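In other words, something along these lines (a sketch):

    /* Sketch: the attach/detach invariants expressed as assertions. */
    static int
    ipx_attach_sketch(struct socket *so, int proto, struct thread *td)
    {
        KASSERT(sotoipxpcb(so) == NULL, ("ipx_attach: ipxp != NULL"));
        /* ... allocate and install the ipxpcb ... */
        return (0);
    }

    static void
    ipx_detach_sketch(struct socket *so)
    {
        KASSERT(sotoipxpcb(so) != NULL, ("ipx_detach: ipxp == NULL"));
        /* ... tear down the ipxpcb ... */
    }
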
Noticed by: Coverity Prevent analysis tool
MFC after: 3 days
Having an additional MT_HEADER mbuf type is superfluous and redundant
as nothing depends on it. It only adds a layer of confusion. The
distinction between header mbuf's and data mbuf's is solely done
through the m->m_flags M_PKTHDR flag.
Non-native code is not changed in this commit. For compatibility
MT_HEADER is mapped to MT_DATA.
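Code that keyed on the mbuf type should test the flag instead, e.g.:

    /* The packet-header property lives in m_flags, not m_type. */
    if (m->m_flags & M_PKTHDR) {
        /* first mbuf of a packet chain: m->m_pkthdr is valid */
    } else {
        /* ordinary data mbuf */
    }
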
Sponsored by: TCP/IP Optimization Fundraise 2005
following the protocol pru_listen() call to solisten_proto(), so
that it occurs under the socket lock acquisition that also sets
SO_ACCEPTCONN. This requires passing the new backlog parameter
to the protocol, which also allows the protocol to be aware of
changes in queue limit should it wish to do something about the
new queue limit. This continues a move towards the socket layer
acting as a library for the protocol.
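Schematically, a protocol's pru_listen() now looks something like
this (a sketch; PROTO_LOCK stands in for whatever lock the protocol
actually uses):

    static int
    proto_listen_sketch(struct socket *so, int backlog, struct thread *td)
    {
        int error;

        PROTO_LOCK(pcb);        /* placeholder for the protocol's lock */
        SOCK_LOCK(so);
        error = solisten_proto_check(so);
        if (error == 0) {
            /* ... move protocol state to its LISTEN equivalent ... */
            solisten_proto(so, backlog);  /* SO_ACCEPTCONN + queue limit */
        }
        SOCK_UNLOCK(so);
        PROTO_UNLOCK(pcb);
        return (error);
    }
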
Bump __FreeBSD_version due to a change in the in-kernel protocol
interface. This change has been tested with IPv4 and UNIX domain
sockets, but not other protocols.
a socket from a regular socket to a listening socket able to accept new
connections. As part of this state transition, solisten() calls into the
protocol to update protocol-layer state. There were several bugs in this
implementation that could result in a race wherein a TCP SYN received
in the interval between the protocol state transition and the shortly
following socket layer transition would result in a panic in the TCP code,
as the socket would be in the TCPS_LISTEN state but would not yet
have the SO_ACCEPTCONN flag set.
This change does the following:
- Pushes the socket state transition from the socket layer solisten() to
socket "library" routines called from the protocol. This permits
the socket routines to be called while holding the protocol mutexes,
preventing a race exposing the incomplete socket state transition to TCP
after the TCP state transition has completed. The check for a socket
layer state transition is performed by solisten_proto_check(), and the
actual transition is performed by solisten_proto().
- Holds the socket lock for the duration of the socket state test and set,
and over the protocol layer state transition, which is now possible as
the socket lock is acquired by the protocol layer, rather than vice
versa. This prevents additional state related races in the socket
layer.
This permits the dual transition of socket layer and protocol layer state
to occur while holding locks for both layers, making the two changes
atomic with respect to one another. Similar changes are likely required
elsewhere in the socket/protocol code.
Reported by: Peter Holm <peter@holm.cc>
Review and fixes from: emax, Antoine Brodin <antoine.brodin@laposte.net>
Philosophical head nod: gnn
portion of IPX/SPX:
- Protect IPX PCB lists with the IPX PCB list mutex, in particular
when calling PCB and PCB list manipulation routines in ipx_pcb.c.
- Protect both IPX PCB state and SPX PCB state using the IPX PCB
mutex.
- Generally annotate locking, as well as adding liberal use of lock
assertions to document locking requirements.
- Where possible, use unlocked reads when reading integer or smaller
sized socket options on SPX sockets.
- De-spl throughout.
Notes:
- spx_input() expects both the list mutex and PCB mutex to be held
on entry, but will release both on return. Because sonewconn() is
called from spx_input(), it may actually drop one PCB lock and
acquire another during generation of a new connection, meaning the
caller is not in a position to unlock the PCB mutex.
MFC after: 3 weeks
were derived from more complex TCP versions of the same:
- spx_close(), spx_disconnect(), spx_drop(), and spx_usrclosed() all
always free the spxpcb, invalidating the argument, so a return
value is not required to indicate whether they did.
- Annotate that the cb argument to each of these functions is
invalidated via a comment.
- When tearing down a pcb due to sonewconn() having failed, mark the
cb as NULL; later, when deciding whether to store trace information
due to SO_DEBUG, check that cb is not NULL before dereferencing it,
or a NULL pointer dereference may occur.
MFC after: 3 weeks
spx_reass() to increase atomicity across multiple operations on the
socket buffer when iterating over the SPX fragment reassembly list
for the ipxpcb, as well as to reduce the number of locking operations.
record loop for ACK'd data, rather than relying on locking in
sbdroprecord() and sowwakeup(), reducing the number of lock operations
as well as eliminating a possible race against the head of the send
buffer mbuf chain. Use the _locked variants of sbdroprecord() and
sowwakeup().
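The resulting shape, roughly (a sketch; acked_record() is a
placeholder for the ACK bookkeeping):

    /* Sketch: one lock acquisition covers the whole drop loop. */
    SOCKBUF_LOCK(&so->so_snd);
    while ((m = so->so_snd.sb_mb) != NULL && acked_record(m))
        sbdroprecord_locked(&so->so_snd);
    sowwakeup_locked(so);       /* wakes writers and drops the lock */
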
properly handle the case where a connection is disconnected. The
queue(9)-enabled version of this code broke from the inner but not the
outer loop, and so potentially frobbed an ipxpcb flag after the ipxpcb
was freed, which might be picked up later by the malloc debugging
code. Properly break from the loop context and avoid touching the
cb/ipxpcb after free.
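The shape of the bug and the fix, in schematic form (plain C, not the
SPX code; disconnected(), handle_disconnect(), and PROCESSED are
placeholders):

    /* Sketch: leaving only the inner loop is not enough once the pcb
     * has been freed; the outer loop must stop touching it too. */
    dropped = 0;
    for (i = 0; i < outer; i++) {
        for (j = 0; j < inner; j++) {
            if (disconnected(cb)) {
                handle_disconnect(cb);      /* frees cb */
                dropped = 1;
                break;                      /* exits inner loop only */
            }
        }
        if (dropped)
            break;                          /* ... so also exit here */
        cb->flags |= PROCESSED;             /* safe: cb still valid */
    }
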
- ipx_pcbnotify(), which is never called.
- ipx_rtchange(), which is never called, is incompletely implemented, and
also #ifdef notdef.
- spx_fixmtu(), which is never called, is incompletely implemented, and
also #ifdef notdef.
socket from its accept queue when aborting it during a new inbound
connection. Update spx_input() to acquire the accept lock, assert
the condition of the socket on its parent queue, and appropriately
disconnect it from the queue before calling soabort() on it.
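Roughly, the dequeue-then-abort sequence (a sketch using the
accept-queue fields of this era; the surrounding logic is schematic):

    /* Sketch: unhook the nascent socket from its listen parent under
     * the accept lock, then abort it. */
    ACCEPT_LOCK();
    KASSERT(so->so_qstate & SQ_INCOMP,
        ("socket not on incomplete queue"));
    TAILQ_REMOVE(&head->so_incomp, so, so_list);
    head->so_incqlen--;
    so->so_qstate &= ~SQ_INCOMP;
    so->so_head = NULL;
    ACCEPT_UNLOCK();
    soabort(so);
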
flags relating to several aspects of socket functionality. This change
breaks out several bits relating to send and receive operation into a
new per-socket buffer field, sb_state, in order to facilitate locking.
This is required because, in order to provide more granular locking of
sockets, different state fields have different locking properties. The
following fields are moved to sb_state:
SS_CANTRCVMORE (so_state)
SS_CANTSENDMORE (so_state)
SS_RCVATMARK (so_state)
Rename respectively to:
SBS_CANTRCVMORE (so_rcv.sb_state)
SBS_CANTSENDMORE (so_snd.sb_state)
SBS_RCVATMARK (so_rcv.sb_state)
This facilitates locking by isolating fields to be located with other
identically locked fields, and permits greater granularity in socket
locking by avoiding storing fields with different locking semantics in
the same short (avoiding locking conflicts). In the future, we may
wish to coalesce sb_state and sb_flags; for the time being I leave
them separate, and there is no additional memory overhead due to the
packing/alignment of shorts in the socket buffer structure.
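For example, a check that used to read so_state now reads the
buffer's own state field under that buffer's lock (a sketch):

    /* Before: if (so->so_state & SS_CANTRCVMORE) ...
     * After: the flag lives in the receive buffer and is read under
     * the socket buffer lock. */
    SOCKBUF_LOCK(&so->so_rcv);
    if (so->so_rcv.sb_state & SBS_CANTRCVMORE) {
        /* no further data will be appended to so_rcv */
    }
    SOCKBUF_UNLOCK(&so->so_rcv);
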
functions in kern_socket.c.
Rename the "canwait" field to "mflags" and pass M_WAITOK and M_NOWAIT
in from the caller context rather than "1" or "0".
Correct mflags pass into mac_init_socket() from previous commit to not
include M_ZERO.
Submitted by: sam
the MAC label referenced from 'struct socket' in the IPv4 and
IPv6-based protocols. This permits MAC labels to be checked during
network delivery operations without dereferencing inp->inp_socket
to get to so->so_label, which will eventually avoid our having to
grab the socket lock during delivery at the network layer.
This change introduces 'struct inpcb' as a labeled object to the
MAC Framework, along with the normal circus of entry points:
initialization, creation from socket, destruction, as well as a
delivery access control check.
For most policies, the inpcb label will simply be a cache of the
socket label, so a new protocol switch method is introduced,
pr_sosetlabel() to notify protocols that the socket layer label
has been updated so that the cache can be updated while holding
appropriate locks. Most protocols implement this using
pru_sosetlabel_null(), but IPv4/IPv6 protocols using inpcbs use
the worker function in_pcbsosetlabel(), which calls into the
MAC Framework to perform a cache update.
Biba, LOMAC, and MLS implement these entry points, as do the stub
and test policies.
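The cache-update path might look roughly like this (a schematic
sketch only; the MAC Framework entry point is shown with an assumed
name, mac_update_inpcb_from_socket(), and locking is reduced to the
essentials):

    /* Sketch of the in_pcbsosetlabel() idea: refresh the inpcb's
     * cached copy of the socket label while holding both locks. */
    void
    in_pcbsosetlabel_sketch(struct socket *so)
    {
        struct inpcb *inp = sotoinpcb(so);

        INP_LOCK(inp);
        SOCK_LOCK(so);
        mac_update_inpcb_from_socket(so, inp);  /* assumed entry point */
        SOCK_UNLOCK(so);
        INP_UNLOCK(inp);
    }
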
Reviewed by: sam, bms
Obtained from: TrustedBSD Project
Sponsored by: DARPA, Network Associates Laboratories