Commit Graph

157 Commits

Author SHA1 Message Date
Robert Watson
47605579c9 Marginally reformat my copyright statement to remove the spurious ','. 2005-03-10 14:19:31 +00:00
Robert Watson
0daccb9c94 In the current world order, solisten() implements the state transition of
a socket from a regular socket to a listening socket able to accept new
connections.  As part of this state transition, solisten() calls into the
protocol to update protocol-layer state.  There were several bugs in this
implementation that could result in a race: a TCP SYN received in the
interval between the protocol state transition and the shortly following
socket layer transition would cause a panic in the TCP code, as the
socket would be in the TCPS_LISTEN state but would not yet have the
SO_ACCEPTCONN flag set.

This change does the following:

- Pushes the socket state transition from the socket layer solisten()
  to socket "library" routines called from the protocol.  This permits
  the socket routines to be called while holding the protocol mutexes,
  preventing a race exposing the incomplete socket state transition to TCP
  after the TCP state transition has completed.  The check for a socket
  layer state transition is performed by solisten_proto_check(), and the
  actual transition is performed by solisten_proto().

- Holds the socket lock for the duration of the socket state test and set,
  and over the protocol layer state transition, which is now possible as
  the socket lock is acquired by the protocol layer, rather than vice
  versa.  This prevents additional state related races in the socket
  layer.

This permits the dual transition of socket layer and protocol layer state
to occur while holding locks for both layers, making the two changes
atomic with respect to one another.  Similar changes are likely required
elsewhere in the socket/protocol code.

Reported by:		Peter Holm <peter@holm.cc>
Review and fixes from:	emax, Antoine Brodin <antoine.brodin@laposte.net>
Philosophical head nod:	gnn
2005-02-21 21:58:17 +00:00
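A minimal kernel-context sketch of the ordering this change establishes, assuming a hypothetical protocol; the xx* names are illustrative, and only solisten_proto_check() and solisten_proto() are the routines the commit introduces:

```c
/*
 * Sketch: the protocol's listen handler performs the socket-layer check
 * and transition while its own lock is held, so a half-finished state is
 * never visible to the protocol input path.
 */
static int
xx_listen(struct socket *so, struct thread *td)
{
        struct xxpcb *xp = sotoxxpcb(so);       /* hypothetical PCB accessor */
        int error;

        XX_LOCK(xp);                            /* protocol lock first */
        SOCK_LOCK(so);
        error = solisten_proto_check(so);       /* socket-layer state test */
        if (error == 0) {
                xp->xp_flags |= XXF_LISTENING;  /* protocol-layer transition */
                solisten_proto(so);             /* sets SO_ACCEPTCONN under the socket lock */
        }
        SOCK_UNLOCK(so);
        XX_UNLOCK(xp);
        return (error);
}
```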
Robert Watson
66d165347d Mark the IPX netisr as MPSAFE so that inbound IPX traffic is processed
without Giant, and can be directly dispatched in the ithread when
net.isr.enable is turned on.

MFC after:	4 weeks
2005-01-09 07:34:55 +00:00
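For context, a hedged sketch of what such a registration looks like with the 5.x-era netisr API; the exact call in ipx_input.c is not quoted here, and the queue and handler names are from memory:

```c
/*
 * Kernel-context sketch: the NETISR_MPSAFE flag tells the netisr
 * framework this handler may run without Giant and may be dispatched
 * directly from the ithread when net.isr.enable is set.
 */
netisr_register(NETISR_IPX, ipxintr, &ipxintrq, NETISR_MPSAFE);
```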
Robert Watson
e926c0ae48 Recent changes have locked down most of the highly dynamic data
structures in IPX/SPX -- primarily, sequence numbering, PCB lists,
and PCBs for IPX raw sockets, IPX datagram sockets, and IPX/SPX.
As such, remove NET_NEEDS_GIANT() for IPX, and remove the
assertion of Giant in the ipxintr() IPX input path.

Note that IPX/SPX is not fully MPSAFE, and that there are some
problems with IPX/SPX locking that will require some further work.
However, it is now safe enough to run in general without the Giant
lock.

MFC after:	4 weeks
2005-01-09 05:34:37 +00:00
Robert Watson
2082ca5d57 Use the IPX PCB list mutex and IPX PCB mutexes to lock down the SPX
portion of IPX/SPX:

- Protect IPX PCB lists with the IPX PCB list mutex, in particular
  when calling PCB and PCB list manipulation routines in ipx_pcb.c.
- Protect both IPX PCB state and SPX PCB state using the IPX PCB
  mutex.
- Generally annotate locking, as well as adding liberal use of lock
  assertions to document locking requirements.
- Where possible, use unlocked reads when reading integer-sized or
  smaller socket options on SPX sockets.
- De-spl throughout.

Notes:

- spx_input() expects both the list mutex and PCB mutex to be held
  on entry, but will release both on return.  Because sonewconn() is
  called from spx_input(), it may actually drop one PCB lock and
  acquire another during generation of a new connection, meaning the
  caller is not in a position to unlock the PCB mutex.

MFC after:	3 weeks
2005-01-09 05:31:16 +00:00
Robert Watson
2375a5a16a Clean up return handling for a number of SPX-related routines that
were derived from more complex TCP versions of the same:

- spx_close(), spx_disconnect(), spx_drop(), and spx_usrclosed() always
  free the spxpcb, invalidating the argument, so a return value is not
  required to indicate whether they have done so.
- Annotate via a comment that the cb argument to each of these functions
  is invalidated.
- When tearing down a pcb because sonewconn() failed, set cb to NULL;
  later, when deciding whether to store trace information due to
  SO_DEBUG, check that cb is not NULL before dereferencing it, so that a
  NULL pointer dereference cannot occur.

MFC after:	3 weeks
2005-01-09 05:25:02 +00:00
Robert Watson
971365a711 Protect ipx_pexseq with the IPX PCB list mutex.
When processing socket options against IPX PCBs, generally protect
PCB fields using the IPX PCB mutex.  Where possible, use unlocked
reads on integer values to avoid locking overhead.

MFC after:	3 weeks
2005-01-09 05:15:59 +00:00
Robert Watson
0caa61a005 Acquire or assert the IPX PCB list lock or IPX PCB lock during various
protocol methods relating to IPX.  Conditionally acquire the PCB list
lock in the send operation only if the socket requires binding in order
to use the requested address.

Remove spl's generally no longer required during these accesses.

MFC after:	3 weeks
2005-01-09 05:13:14 +00:00
Robert Watson
0c3833b6ba Assert or acquire the IPX PCB list lock or IPX PCB locks throughout
the IPX-related PCB routines.  In general, the list lock is required
to iterate the PCB list, either for read or write; the PCB lock is
required to access or modify a PCB.  To change the binding of a PCB,
both locks must be held.

MFC after:	3 weeks
2005-01-09 05:10:43 +00:00
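A sketch of that discipline, assuming lock macros of the form IPX_LIST_LOCK()/IPX_LOCK() wrapping the mutexes introduced in commit c2b563b532 below; the list head, entry field, and test shown are illustrative:

```c
/*
 * Kernel-context sketch of the locking discipline: the list lock to
 * walk the PCB list, the per-PCB lock to read or modify a PCB, and
 * both locks to change a PCB's binding.
 */
struct ipxpcb *ipxp;

IPX_LIST_LOCK();
LIST_FOREACH(ipxp, &ipxpcb_list, ipxp_list) {
        IPX_LOCK(ipxp);
        if (ipx_nullhost(ipxp->ipxp_faddr)) {   /* PCB state read under the PCB lock */
                IPX_UNLOCK(ipxp);
                continue;
        }
        /* A rebinding would happen here, with both locks still held. */
        IPX_UNLOCK(ipxp);
}
IPX_LIST_UNLOCK();
```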
Robert Watson
31f1a840d9 Hold the IPX PCB mutex around calls to ipx_input() in the IPX input
path.

MFC after:	3 weeks
2005-01-09 05:08:47 +00:00
Robert Watson
992e1a5842 Hold the global IPX PCB list mutex in the IPX input path when walking
the IPX PCB list.

MFC after:	3 weeks
2005-01-09 05:06:19 +00:00
Robert Watson
c2b563b532 Introduce a global mutex, ipxpcb_list_mtx, to protect the global
IPX PCB lists.  Add macros to initialize, destroy, lock, unlock,
and assert the mutex.  Initialize the mutex when IPX is started.

Add per-IPX PCB mutexes, ipxp_mtx in struct ipxpcb, to protect
per-PCB IPX/SPX state.  Add macros to initialize, destroy, lock,
unlock, and assert the mutex.  Initialize the mutex when a new
PCB is allocated; destroy it when the PCB is free'd.

MFC after:	2 weeks
2005-01-09 05:00:41 +00:00
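A hedged sketch of what those macros conventionally look like; the actual definitions in ipx_pcb.h, including the lock name strings, may differ in detail:

```c
/* Kernel-context sketch; mirrors the usual mtx(9) wrapper idiom. */
extern struct mtx ipxpcb_list_mtx;

#define IPX_LIST_LOCK_INIT()    mtx_init(&ipxpcb_list_mtx, "ipx_list", NULL, MTX_DEF)
#define IPX_LIST_LOCK()         mtx_lock(&ipxpcb_list_mtx)
#define IPX_LIST_UNLOCK()       mtx_unlock(&ipxpcb_list_mtx)
#define IPX_LIST_LOCK_ASSERT()  mtx_assert(&ipxpcb_list_mtx, MA_OWNED)

#define IPX_LOCK_INIT(ipxp)     mtx_init(&(ipxp)->ipxp_mtx, "ipxpcb", NULL, MTX_DEF)
#define IPX_LOCK_DESTROY(ipxp)  mtx_destroy(&(ipxp)->ipxp_mtx)
#define IPX_LOCK(ipxp)          mtx_lock(&(ipxp)->ipxp_mtx)
#define IPX_UNLOCK(ipxp)        mtx_unlock(&(ipxp)->ipxp_mtx)
#define IPX_LOCK_ASSERT(ipxp)   mtx_assert(&(ipxp)->ipxp_mtx, MA_OWNED)
```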
Robert Watson
9d98ffa087 In ipx_setsockaddr(), use M_WAITOK instead of M_NOWAIT so that the
call always succeeds, which prevents the caller from returning success
even though the returned *sockaddr is NULL.

MFC after:	2 weeks
2005-01-09 04:47:42 +00:00
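A sketch of the effect; the field handling is illustrative, and the point is only the M_NOWAIT to M_WAITOK flag change:

```c
/*
 * Kernel-context sketch: with M_WAITOK the allocation sleeps instead of
 * failing, so *nam can no longer be NULL on return and the caller cannot
 * report success with a NULL sockaddr.
 */
struct sockaddr_ipx *sipx;

sipx = malloc(sizeof(*sipx), M_SONAME, M_WAITOK | M_ZERO);      /* was M_NOWAIT */
sipx->sipx_family = AF_IPX;
sipx->sipx_len = sizeof(*sipx);
sipx->sipx_addr = ipxp->ipxp_laddr;
*nam = (struct sockaddr *)sipx;
```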
Robert Watson
f7bca2686a Eliminate jump to 'bad' label in order to clean up the ipx_input()
return/unwind path for locking work.

MFC after:	2 weeks
2005-01-09 04:39:16 +00:00
Warner Losh
c398230b64 /* -> /*- for license, minor formatting changes 2005-01-07 01:45:51 +00:00
Robert Watson
62f6bcfbef Garbage collect unused ipx_abort().
Spell NULL right in a KASSERT() panic message.

MFC after:	1 week
2005-01-03 12:54:31 +00:00
Robert Watson
66685810b9 Acquire the socket buffer receive lock in spx_rcvoob() to permit
multiple reads of receive buffer state to be performed atomically.
2005-01-02 15:38:47 +00:00
Robert Watson
19e2d43969 Increase the coverage scope of the receive socket buffer lock in
spx_reass() to increase atomicity across multiple operations on the
socket buffer when iterating over the SPX fragment reassembly list
for the ipxpcb, as well as to reduce the number of locking operations.
2005-01-02 15:36:16 +00:00
Robert Watson
97270cf1b6 Explicitly lock the send socket buffer in spx_reass() to cover the drop
record loop for ACK'd data, rather than relying on locking in
sbdroprecord() and sowwakeup(), reducing the number of lock operations
as well as eliminating a possible race against the head of the send
buffer mbuf chain.  Use the _locked variants of sbdroprecord() and
sowwakeup().
2005-01-02 15:33:13 +00:00
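A sketch of the locking pattern the commit moves to, with the per-record ACK test reduced to a hypothetical predicate:

```c
/*
 * Kernel-context sketch: one explicit lock around the whole drop loop,
 * with the _locked sockbuf variants used inside it.  The
 * record_fully_acked() predicate is hypothetical; spx_reass() checks
 * SPX sequence numbers here.
 */
struct sockbuf *sb = &so->so_snd;

SOCKBUF_LOCK(sb);
while (sb->sb_mb != NULL && record_fully_acked(cb, sb->sb_mb))
        sbdroprecord_locked(sb);
sowwakeup_locked(so);           /* wakes the writer and releases the lock */
```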
Robert Watson
0cdc892230 Restructure ipx_input() return code to match similar code in netinet,
avoiding a goto.
2005-01-02 15:29:29 +00:00
Robert Watson
944731d517 Eliminate XXX comments regarding allocation failures when retrieving
the peer address by using M_WAITOK in ipx_setpeeraddr() to prevent
allocation failure.  The socket reference used to reach these calls
will prevent the ipxpcb from being released prematurely.
2005-01-02 15:25:59 +00:00
Robert Watson
360fb9f83a Use KASSERT() in preference to if()panic(). 2005-01-02 15:19:24 +00:00
Robert Watson
43ae56438e Extern declaration of old 'ipxpcb' list head no longer required. 2005-01-02 15:16:35 +00:00
Robert Watson
928944eeb5 Trim trailing whitespace. 2005-01-02 15:13:59 +00:00
Robert Watson
9e7c226533 Document copyright updates in netipx README as other prior updates have
been documented.
2005-01-02 15:10:02 +00:00
Robert Watson
a3acf5d5a0 Mark 'struct spx' and 'struct spxhdr' as __packed to prevent possible
alignment problems.

MFC after:	3 days
2005-01-02 15:06:47 +00:00
Robert Watson
14fad7b9d6 Improve handling of SPX session timeout, specifically, make sure to
properly handle the case where a connection is disconnected.  The
queue(9)-enabled version of this code broke from the inner but not
outer loop, and so potentially frobbed an ipxpcb flag after the ipxpcb
was free'd, which might be picked up later by the malloc debugging
code.  Properly break from the loop context and avoid touching the
cb/ipxpcb after free.
2005-01-02 14:46:18 +00:00
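A sketch of the control-flow shape of the fix, with illustrative names for the list iteration and timer helper; the point is that once the PCB has been dropped, both loops are left and cb is never touched again:

```c
/*
 * Kernel-context sketch.  LIST_FOREACH_SAFE() saves the next pointer
 * before the body runs, so the walk can continue even when the current
 * PCB is freed; the 'dropped' flag keeps the outer loop from touching
 * the freed cb afterwards.
 */
struct ipxpcb *ipxp, *ipxp_next;
struct spxpcb *cb;
int i, dropped;

LIST_FOREACH_SAFE(ipxp, &ipxpcb_list, ipxp_list, ipxp_next) {
        cb = ipxtospxpcb(ipxp);
        if (cb == NULL)
                continue;
        dropped = 0;
        for (i = 0; i < SPXT_NTIMERS; i++) {
                if (cb->s_timer[i] != 0 && --cb->s_timer[i] == 0 &&
                    spx_timers(cb, i) == NULL) {
                        dropped = 1;    /* connection dropped, PCB freed */
                        break;
                }
        }
        if (dropped)
                continue;               /* do not dereference cb again */
        cb->s_idle++;                   /* PCB still valid here */
}
```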
Robert Watson
16b47e3540 Compare and assign pointers with NULL in preference to 0. 2005-01-02 14:07:05 +00:00
Robert Watson
521a8487f5 Don't cast NULL on return or when passing to another function.
Extend the annotation as to why spx_close() isn't called in spx_reass(),
and mark this code more clearly as broken.
2005-01-02 14:03:47 +00:00
Robert Watson
2fbe8bd709 Mark 'struct ipx', the IPX packet header, as __packed. Otherwise,
recent versions of gcc will insert an extra 16 bits of padding in
the structure, corrupting all IPX packet output.

MFC after:	3 days
2005-01-02 02:30:27 +00:00
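A compilable user-space illustration of the padding problem; the field layout is simplified from the real struct ipx, and __packed in the kernel headers expands to the attribute used here.  The 4-byte-aligned address member after six bytes of header is what draws the extra 16 bits of padding:

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the real netipx structures. */
struct ipx_addr_demo {
        uint32_t x_net;         /* 32-bit network number forces 4-byte alignment */
        uint16_t x_host[3];
        uint16_t x_port;
};

struct ipx_demo {               /* no __packed: the compiler pads after ipx_pt */
        uint16_t ipx_sum;
        uint16_t ipx_len;
        uint8_t  ipx_tc;
        uint8_t  ipx_pt;
        struct ipx_addr_demo ipx_dna;
        struct ipx_addr_demo ipx_sna;
};

struct ipx_demo_packed {        /* __packed preserves the 30-byte wire layout */
        uint16_t ipx_sum;
        uint16_t ipx_len;
        uint8_t  ipx_tc;
        uint8_t  ipx_pt;
        struct ipx_addr_demo ipx_dna;
        struct ipx_addr_demo ipx_sna;
} __attribute__((__packed__));

int
main(void)
{
        printf("unpacked: %zu bytes\n", sizeof(struct ipx_demo));        /* typically 32 */
        printf("packed:   %zu bytes\n", sizeof(struct ipx_demo_packed)); /* 30 */
        return (0);
}
```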
Robert Watson
21275ec3db Use 'NULL' in preference to '0' for pointer comparisons.
MFC after:	2 weeks
2005-01-02 01:51:18 +00:00
Robert Watson
6c56a18747 Use RTFREE() to free route references rather than rtfree(), as rtfree()
expects a locked route reference.  This removes a panic that occurs
when a connected ipxpcb is closed and its route free'd, and may have
been present since the route locking work took place.

MFC after:	2 weeks
2005-01-02 01:47:56 +00:00
Robert Watson
c2b8a29d33 Prefer rtalloc_ign() API to rtalloc() API. 2005-01-02 01:39:38 +00:00
Robert Watson
86c788d323 Move the definition of ipxpcb_lport_cache from ipx_input.c to ipx_pcb.c,
the only source file where it is actually used.
2005-01-01 22:04:03 +00:00
Robert Watson
96979ee67d Marginally reformat copyright statements to remove an excess ','. 2004-12-31 17:05:37 +00:00
Robert Watson
502c374fea Add 'struct ipxpcb' forward declaration to ipx_var.h.  I had this in
the netperf branch, but for some reason its omission when I merged to
CVS didn't trigger a build failure locally.  Presumably driver error.

Pointed out by:	cperciva, tinderbox
2004-12-31 11:54:39 +00:00
Robert Watson
08e044cb89 Use a global variable, ipxpcb_lport_cache, to cache the most recently
used IPX port number, rather than using the global ipxpcb list head.
2004-12-30 17:54:53 +00:00
Robert Watson
80a4dabe7d Convert netipx to use queue(9) doubly-linked lists instead of home-brew
linked lists for ipxpcb's.
2004-12-30 17:49:40 +00:00
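A compilable sketch of the queue(9) idiom the conversion moves to, using a simplified stand-in for the PCB; on FreeBSD, <sys/queue.h> provides these macros in user space as well as in the kernel:

```c
#include <sys/queue.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for struct ipxpcb; the real PCB carries much more. */
struct demo_pcb {
        LIST_ENTRY(demo_pcb) p_list;    /* replaces hand-rolled prev/next pointers */
        int p_port;
};

static LIST_HEAD(, demo_pcb) demo_pcb_list = LIST_HEAD_INITIALIZER(demo_pcb_list);

int
main(void)
{
        struct demo_pcb *p, *tmp;
        int i;

        for (i = 0; i < 3; i++) {
                if ((p = calloc(1, sizeof(*p))) == NULL)
                        return (1);
                p->p_port = 0x4000 + i;
                LIST_INSERT_HEAD(&demo_pcb_list, p, p_list);
        }
        LIST_FOREACH(p, &demo_pcb_list, p_list)
                printf("port 0x%x\n", p->p_port);
        LIST_FOREACH_SAFE(p, &demo_pcb_list, p_list, tmp) {     /* safe while freeing */
                LIST_REMOVE(p, p_list);
                free(p);
        }
        return (0);
}
```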
Robert Watson
ffeb1a497a Garbage collect unused (and incompletely implemented) functions:
- ipx_pcbnotify(), which is never called.
- ipx_rtchange(), which is never called, is incompletely implemented, and
  also #ifdef notdef.
- spx_fixmtu(), which is never called, is incompletely implemented, and
  also #ifdef notdef.
2004-12-30 17:21:07 +00:00
Robert Watson
05b4b08b61 Constify ipx_zeronet, ipx_zerohost, ipx_broadnet, ipx_broadhost.
Remove 'allones' since the values of the broadcast network and
host variables are set statically.
2004-12-30 16:56:07 +00:00
Poul-Henning Kamp
756d52a195 Initialize struct pr_usrreqs in new/sparse style and fill in common
default elements in net_init_domain().

This makes it possible to grep these structures and see any bogosities.
2004-11-08 14:44:54 +00:00
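A hedged sketch of the "sparse" style the commit refers to: C99 designated initializers name only the handlers a protocol implements, so the structures can be grepped and the unset entries given common defaults at domain initialization.  The handler names below are illustrative, not quoted from the commit:

```c
/* Kernel-context sketch; handler names are illustrative. */
static struct pr_usrreqs ipx_usrreqs = {
        .pru_attach =           ipx_attach,
        .pru_bind =             ipx_bind,
        .pru_connect =          ipx_connect,
        .pru_detach =           ipx_detach,
        .pru_disconnect =       ipx_disconnect,
        .pru_peeraddr =         ipx_peeraddr,
        .pru_send =             ipx_send,
        .pru_shutdown =         ipx_shutdown,
        .pru_sockaddr =         ipx_sockaddr,
        /* Entries left out stay NULL and are filled in by net_init_domain(). */
};
```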
Robert Watson
81158452be Push acquisition of the accept mutex out of sofree() into the caller
(sorele()/sotryfree()):

- This permits the caller to acquire the accept mutex before the socket
  mutex, avoiding sofree() having to drop the socket mutex and re-order,
  which could lead to races permitting more than one thread to enter
  sofree() after a socket is ready to be free'd.

- This also covers clearing of the so_pcb weak socket reference from
  the protocol to the socket, preventing races in clearing and
  evaluation of the reference such that sofree() might be called more
  than once on the same socket.

This appears to close a race I was able to easily trigger by repeatedly
opening and resetting TCP connections to a host, in which the
tcp_close() code called as a result of the RST raced with the close()
of the accepted socket in the user process, resulting in simultaneous
attempts to de-allocate the same socket.  The new locking increases
the overhead for operations that may potentially free the socket, so we
will want to revise the synchronization strategy here as we normalize
the reference counting model for sockets.  The use of the accept mutex
in freeing of sockets that are not listen sockets is primarily
motivated by the potential need to remove the socket from the
incomplete connection queue on its parent (listen) socket, so cleaning
up the reference model here may allow us to substantially weaken the
synchronization requirements.

RELENG_5_3 candidate.

MFC after:	3 days
Reviewed by:	dwhite
Discussed with:	gnn, dwhite, green
Reported by:	Marc UBM Bocklet <ubm at u-boot-man dot de>
Reported by:	Vlad <marchenko at gmail dot com>
2004-10-18 22:19:43 +00:00
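A heavily simplified sketch of the ordering this establishes, assuming the ACCEPT_LOCK()/SOCK_LOCK() macros of that era; the real sorele()/sotryfree() macros carry more bookkeeping than shown:

```c
/*
 * Kernel-context sketch: the caller takes the accept mutex before the
 * socket mutex, so the reference drop and any resulting sofree() happen
 * without sofree() having to drop and re-take locks, which is what let
 * two threads race into freeing the same socket.
 */
ACCEPT_LOCK();
SOCK_LOCK(so);
if (--so->so_count == 0)
        sofree(so);             /* entered with both locks held; consumes them */
else {
        SOCK_UNLOCK(so);
        ACCEPT_UNLOCK();
}
```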
Robert Watson
98f6a62499 Mark Netgraph TTY, KAME IPSEC, and IPX/SPX as requiring Giant for correct
operation using NET_NEEDS_GIANT().  This will result in a boot-time
restoration of Giant-enabled network operation, or a run-time warning on
dynamic load (applicable only to the Netgraph component).  Additional
components will likely need to be marked with this in the future.
2004-08-28 15:24:53 +00:00
Alexander Kabaev
766f8c9247 Avoid casts as lvalues. Declare the local variable as u_char * instead
of declaring it as u_short * and casting it back to u_char * all over the place.
2004-07-28 06:58:23 +00:00
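A compilable before/after illustration of the cast-as-lvalue pattern being removed; the variable names are illustrative, and modern gcc rejects the old form:

```c
#include <sys/types.h>
#include <stdio.h>

int
main(void)
{
        u_char buf[4] = { 0x11, 0x22, 0x33, 0x44 };

        /*
         * Old pattern (a gcc extension, rejected by modern compilers):
         * declare the cursor as u_short * and step it bytewise through a
         * cast used as an lvalue:
         *
         *      u_short *p = (u_short *)buf;
         *      ((u_char *)p)++;
         *
         * New pattern: declare the cursor with the type it is actually
         * used as, and cast only where a u_short access is needed.
         */
        u_char *p = buf;

        p++;                            /* plain bytewise advance */
        printf("0x%02x\n", *p);         /* prints 0x22 */
        return (0);
}
```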
Robert Watson
ab89ee6253 Constify 'spx_backoff'. 2004-07-12 19:35:29 +00:00
Robert Watson
613a4366cb Acquire the receive socket buffer lock when modifying out-of-band
data fields of the socket in SPX.
2004-06-24 04:29:53 +00:00
Bruce M Simpson
100ecbae22 Improve source-code compatibility with Linux applications using the
IPX stack.

PR:		kern/65217
Submitted by:	Radim Kolar
2004-06-22 21:46:49 +00:00
Robert Watson
cce9682e55 It's now the responsibility of the consumer of soabort() to remove a
socket from its accept queue when aborting it during a new inbound
connection.  Update spx_input() to acquire the accept lock, assert
the condition of the socket on its parent queue, and appropriately
disconnect it from the queue before calling soabort() on it.
2004-06-20 21:47:12 +00:00
Robert Watson
7721f5d760 Grab the socket buffer send or receive mutex when performing a
read-modify-write on the sb_state field.  This commit catches only
the "easy" ones where it doesn't interact with as yet unmerged
locking.
2004-06-15 03:51:44 +00:00
Robert Watson
c0b99ffa02 The socket field so_state is used to hold a variety of socket related
flags relating to several aspects of socket functionality.  This change
breaks out several bits relating to send and receive operation into a
new per-socket buffer field, sb_state, in order to facilitate locking.
This is required because, in order to provide more granular locking of
sockets, different state fields have different locking properties.  The
following fields are moved to sb_state:

  SS_CANTRCVMORE            (so_state)
  SS_CANTSENDMORE           (so_state)
  SS_RCVATMARK              (so_state)

Rename respectively to:

  SBS_CANTRCVMORE           (so_rcv.sb_state)
  SBS_CANTSENDMORE          (so_snd.sb_state)
  SBS_RCVATMARK             (so_rcv.sb_state)

This facilitates locking by isolating fields to be located with other
identically locked fields, and permits greater granularity in socket
locking by avoiding storing fields with different locking semantics in
the same short (avoiding locking conflicts).  In the future, we may
wish to coalesce sb_state and sb_flags; for the time being I leave
them separate and there is no additional memory overhead due to the
packing/alignment of shorts in the socket buffer structure.
2004-06-14 18:16:22 +00:00
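A sketch of what the rename looks like at a typical use site; the surrounding logic is illustrative, and the new test is made under the receive buffer lock that now protects sb_state:

```c
/* Kernel-context sketch of a converted check. */
int done;

/* Before: the flag lived in so_state on the socket itself. */
if (so->so_state & SS_CANTRCVMORE)
        return (0);

/* After: the flag lives in so_rcv.sb_state, read under its own lock. */
SOCKBUF_LOCK(&so->so_rcv);
done = (so->so_rcv.sb_state & SBS_CANTRCVMORE) != 0;
SOCKBUF_UNLOCK(&so->so_rcv);
if (done)
        return (0);
```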