b2e60773c6
KTLS adds support for in-kernel framing and encryption of Transport Layer Security (1.0-1.2) data on TCP sockets. KTLS only supports offload of TLS for transmitted data. Key negotiation must still be performed in userland. Once completed, transmit session keys for a connection are provided to the kernel via a new TCP_TXTLS_ENABLE socket option. All subsequent data transmitted on the socket is placed into TLS frames and encrypted using the supplied keys.

Any data written to a KTLS-enabled socket via write(2), aio_write(2), or sendfile(2) is assumed to be application data and is encoded in TLS frames with an application data type. Individual records can be sent with a custom type (e.g. handshake messages) via sendmsg(2) with a new control message (TLS_SET_RECORD_TYPE) specifying the record type.

At present, rekeying is not supported though the in-kernel framework should support rekeying.

KTLS makes use of the recently added unmapped mbufs to store TLS frames in the socket buffer. Each TLS frame is described by a single ext_pgs mbuf. The ext_pgs structure contains the header of the TLS record (and trailer for encrypted records) as well as references to the associated TLS session.

KTLS supports two primary methods of encrypting TLS frames: software TLS and ifnet TLS.

Software TLS marks mbufs holding socket data as not ready via M_NOTREADY, similar to sendfile(2), when TLS framing information is added to an unmapped mbuf in ktls_frame(). ktls_enqueue() is then called to schedule TLS frames for encryption. In the sendfile(2) case, sendfile_iodone() calls ktls_enqueue() instead of pru_ready(), leaving the mbufs marked M_NOTREADY until encryption is completed. For other writes (vn_sendfile when pages are available, write(2), etc.), the PRUS_NOTREADY flag is set when invoking pru_send() along with invoking ktls_enqueue().

A pool of worker threads (the "KTLS" kernel process) encrypts TLS frames queued via ktls_enqueue(). Each TLS frame is temporarily mapped using the direct map and passed to a software encryption backend to perform the actual encryption. (Note: The use of PHYS_TO_DMAP could be replaced with sf_bufs if someone wished to make this work on architectures without a direct map.)

KTLS supports pluggable software encryption backends. Internally, Netflix uses proprietary pure-software backends. This commit includes a simple backend in a new ktls_ocf.ko module that uses the kernel's OpenCrypto framework to provide AES-GCM encryption of TLS frames. As a result, software TLS is now a bit of a misnomer as it can make use of hardware crypto accelerators.

Once software encryption has finished, the TLS frame mbufs are marked ready via pru_ready(). At this point, the encrypted data appears as regular payload to the TCP stack stored in unmapped mbufs.

ifnet TLS permits a NIC to offload the TLS encryption and TCP segmentation. In this mode, a new send tag type (IF_SND_TAG_TYPE_TLS) is allocated on the interface a socket is routed over and associated with a TLS session. TLS records for a TLS session using ifnet TLS are not marked M_NOTREADY but are passed down the stack unencrypted. The ip_output_send() and ip6_output_send() helper functions that apply send tags to outbound IP packets verify that the send tag of the TLS record matches the outbound interface. If so, the packet is tagged with the TLS send tag and sent to the interface. The NIC device driver must recognize packets with the TLS send tag and schedule them for TLS encryption and TCP segmentation.

If the outbound interface does not match the interface in the TLS send tag, the packet is dropped. In addition, a task is scheduled to refresh the TLS send tag for the TLS session. If a new TLS send tag cannot be allocated, the connection is dropped. If a new TLS send tag is allocated, however, subsequent packets will be tagged with the correct TLS send tag. (This latter case has been tested by configuring both ports of a Chelsio T6 in a lagg and failing over from one port to another. As the connections migrated to the new port, new TLS send tags were allocated for the new port and connections resumed without being dropped.)

ifnet TLS can be enabled and disabled on supported network interfaces via new '[-]txtls[46]' options to ifconfig(8). ifnet TLS is supported across both vlan devices and lagg interfaces using failover or lacp with flowid enabled.

Applications may request the current KTLS mode of a connection via a new TCP_TXTLS_MODE socket option. They can also use this socket option to toggle between software and ifnet TLS modes.

In addition, a testing tool is available in tools/tools/switch_tls. This is modeled on tcpdrop and uses similar syntax. However, instead of dropping connections, -s is used to force KTLS connections to switch to software TLS and -i is used to switch to ifnet TLS.

Various sysctls and counters are available under the kern.ipc.tls sysctl node. The kern.ipc.tls.enable node must be set to true to enable KTLS (it is off by default). The use of unmapped mbufs must also be enabled via kern.ipc.mb_use_ext_pgs to enable KTLS.

KTLS is enabled via the KERN_TLS kernel option.

This patch is the culmination of years of work by several folks including Scott Long and Randall Stewart for the original design and implementation; Drew Gallatin for several optimizations including the use of ext_pgs mbufs, the M_NOTREADY mechanism for TLS records awaiting software encryption, and pluggable software crypto backends; and John Baldwin for modifications to support hardware TLS offload.

Reviewed by:	gallatin, hselasky, rrs
Obtained from:	Netflix
Sponsored by:	Netflix, Chelsio Communications
Differential Revision:	https://reviews.freebsd.org/D21277
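As an illustration of the userland API described above, the sketch below shows the general shape of handing TLS 1.2 AES-128-GCM transmit keys to the kernel with TCP_TXTLS_ENABLE and then sending one record with a non-default record type via a TLS_SET_RECORD_TYPE control message. It is not part of this commit: the helper names are hypothetical, and the struct tls_enable fields and constants (CRYPTO_AES_NIST_GCM_16, TLS_MAJOR_VER_ONE, TLS_MINOR_VER_TWO) are assumed to match the sys/ktls.h and crypto/cryptodev.h headers of this era, so check the headers actually installed.

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/ktls.h>

#include <netinet/in.h>
#include <netinet/tcp.h>

#include <crypto/cryptodev.h>

#include <string.h>

/*
 * Hand TLS 1.2 AES-128-GCM transmit keys (negotiated in userland) to the
 * kernel.  After this succeeds, plain write(2)/sendfile(2) data is framed
 * and encrypted as TLS application data records.
 */
static int
enable_tx_ktls(int s, uint8_t *key, int keylen, uint8_t *iv, int ivlen)
{
	struct tls_enable en;

	memset(&en, 0, sizeof(en));
	en.cipher_algorithm = CRYPTO_AES_NIST_GCM_16;
	en.cipher_key = key;
	en.cipher_key_len = keylen;
	en.iv = iv;			/* implicit part of the nonce */
	en.iv_len = ivlen;
	en.tls_vmajor = TLS_MAJOR_VER_ONE;
	en.tls_vminor = TLS_MINOR_VER_TWO;
	return (setsockopt(s, IPPROTO_TCP, TCP_TXTLS_ENABLE, &en, sizeof(en)));
}

/*
 * Send one record with an explicit record type (e.g. a post-handshake
 * handshake message) using the TLS_SET_RECORD_TYPE control message.
 */
static ssize_t
send_tls_record(int s, const void *buf, size_t len, uint8_t record_type)
{
	union {
		struct cmsghdr hdr;
		char buf[CMSG_SPACE(sizeof(uint8_t))];
	} cmsgbuf;
	struct msghdr msg;
	struct cmsghdr *cmsg;
	struct iovec iov;

	memset(&msg, 0, sizeof(msg));
	iov.iov_base = __DECONST(void *, buf);
	iov.iov_len = len;
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cmsgbuf.buf;
	msg.msg_controllen = sizeof(cmsgbuf.buf);
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = IPPROTO_TCP;
	cmsg->cmsg_type = TLS_SET_RECORD_TYPE;
	cmsg->cmsg_len = CMSG_LEN(sizeof(record_type));
	*(uint8_t *)CMSG_DATA(cmsg) = record_type;
	return (sendmsg(s, &msg, 0));
}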
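The TCP_TXTLS_MODE socket option mentioned above can be used both to query and to switch the offload mode of an established KTLS connection. The following sketch is likewise illustrative only; the mode constants (TCP_TLS_MODE_SW, TCP_TLS_MODE_IFNET) are assumed to come from netinet/tcp.h as of this change, and promote_to_ifnet_tls() is a hypothetical helper, not code from this commit.

#include <sys/types.h>
#include <sys/socket.h>

#include <netinet/in.h>
#include <netinet/tcp.h>

#include <stdio.h>

/*
 * Report which KTLS transmit mode a connection is using and, if it is
 * currently using software encryption, ask the kernel to move it to
 * ifnet (NIC) TLS.  The switch can fail, for example if the outbound
 * interface cannot allocate a TLS send tag.
 */
static int
promote_to_ifnet_tls(int s)
{
	int mode;
	socklen_t len = sizeof(mode);

	if (getsockopt(s, IPPROTO_TCP, TCP_TXTLS_MODE, &mode, &len) == -1)
		return (-1);
	printf("current TX TLS mode: %d\n", mode);
	if (mode != TCP_TLS_MODE_SW)
		return (0);
	mode = TCP_TLS_MODE_IFNET;
	return (setsockopt(s, IPPROTO_TCP, TCP_TXTLS_MODE, &mode,
	    sizeof(mode)));
}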
/*-
 * SPDX-License-Identifier: BSD-3-Clause
 *
 * Copyright (c) 1982, 1986, 1988, 1990, 1993
 *	The Regents of the University of California.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)uipc_socket2.c	8.1 (Berkeley) 6/10/93
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_kern_tls.h"
#include "opt_param.h"

#include <sys/param.h>
#include <sys/aio.h> /* for aio_swake proto */
#include <sys/kernel.h>
#include <sys/ktls.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mbuf.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/protosw.h>
#include <sys/resourcevar.h>
#include <sys/signalvar.h>
#include <sys/socket.h>
#include <sys/socketvar.h>
#include <sys/sx.h>
#include <sys/sysctl.h>

/*
 * Function pointer set by the AIO routines so that the socket buffer code
 * can call back into the AIO module if it is loaded.
 */
void	(*aio_swake)(struct socket *, struct sockbuf *);

/*
 * Primitive routines for operating on socket buffers
 */

u_long	sb_max = SB_MAX;
u_long sb_max_adj =
       (quad_t)SB_MAX * MCLBYTES / (MSIZE + MCLBYTES); /* adjusted sb_max */

static	u_long sb_efficiency = 8;	/* parameter for sbreserve() */

static struct mbuf	*sbcut_internal(struct sockbuf *sb, int len);
static void	sbflush_internal(struct sockbuf *sb);

/*
 * Our own version of m_clrprotoflags(), that can preserve M_NOTREADY.
 */
static void
sbm_clrprotoflags(struct mbuf *m, int flags)
{
	int mask;

	mask = ~M_PROTOFLAGS;
	if (flags & PRUS_NOTREADY)
		mask |= M_NOTREADY;
	while (m) {
		m->m_flags &= mask;
		m = m->m_next;
	}
}

/*
 * Compress M_NOTREADY mbufs after they have been readied by sbready().
 *
 * sbcompress() skips M_NOTREADY mbufs since the data is not available to
 * be copied at the time of sbcompress().  This function combines small
 * mbufs similar to sbcompress() once mbufs are ready.  'm0' is the first
 * mbuf sbready() marked ready, and 'end' is the first mbuf still not
 * ready.
 */
static void
sbready_compress(struct sockbuf *sb, struct mbuf *m0, struct mbuf *end)
{
	struct mbuf *m, *n;
	int ext_size;

	SOCKBUF_LOCK_ASSERT(sb);

	if ((sb->sb_flags & SB_NOCOALESCE) != 0)
		return;

	for (m = m0; m != end; m = m->m_next) {
		MPASS((m->m_flags & M_NOTREADY) == 0);

		/* Compress small unmapped mbufs into plain mbufs. */
		if ((m->m_flags & M_NOMAP) && m->m_len <= MLEN &&
		    !mbuf_has_tls_session(m)) {
			MPASS(m->m_flags & M_EXT);
			ext_size = m->m_ext.ext_size;
			if (mb_unmapped_compress(m) == 0) {
				sb->sb_mbcnt -= ext_size;
				sb->sb_ccnt -= 1;
			}
		}

		/*
		 * NB: In sbcompress(), 'n' is the last mbuf in the
		 * socket buffer and 'm' is the new mbuf being copied
		 * into the trailing space of 'n'.  Here, the roles
		 * are reversed and 'n' is the next mbuf after 'm'
		 * that is being copied into the trailing space of
		 * 'm'.
		 */
		n = m->m_next;
		while ((n != NULL) && (n != end) && (m->m_flags & M_EOR) == 0 &&
		    M_WRITABLE(m) &&
		    (m->m_flags & M_NOMAP) == 0 &&
		    !mbuf_has_tls_session(n) &&
		    !mbuf_has_tls_session(m) &&
		    n->m_len <= MCLBYTES / 4 && /* XXX: Don't copy too much */
		    n->m_len <= M_TRAILINGSPACE(m) &&
		    m->m_type == n->m_type) {
			KASSERT(sb->sb_lastrecord != n,
		    ("%s: merging start of record (%p) into previous mbuf (%p)",
			    __func__, n, m));
			m_copydata(n, 0, n->m_len, mtodo(m, m->m_len));
			m->m_len += n->m_len;
			m->m_next = n->m_next;
			m->m_flags |= n->m_flags & M_EOR;
			if (sb->sb_mbtail == n)
				sb->sb_mbtail = m;

			sb->sb_mbcnt -= MSIZE;
			sb->sb_mcnt -= 1;
			if (n->m_flags & M_EXT) {
				sb->sb_mbcnt -= n->m_ext.ext_size;
				sb->sb_ccnt -= 1;
			}
			m_free(n);
			n = m->m_next;
		}
	}
	SBLASTRECORDCHK(sb);
	SBLASTMBUFCHK(sb);
}

/*
 * Mark ready "count" units of I/O starting with "m".  Most mbufs
 * count as a single unit of I/O except for EXT_PGS-backed mbufs which
 * can be backed by multiple pages.
 */
int
sbready(struct sockbuf *sb, struct mbuf *m0, int count)
{
	struct mbuf *m;
	u_int blocker;

	SOCKBUF_LOCK_ASSERT(sb);
	KASSERT(sb->sb_fnrdy != NULL, ("%s: sb %p NULL fnrdy", __func__, sb));
	KASSERT(count > 0, ("%s: invalid count %d", __func__, count));

	m = m0;
	blocker = (sb->sb_fnrdy == m) ? M_BLOCKED : 0;

	while (count > 0) {
		KASSERT(m->m_flags & M_NOTREADY,
		    ("%s: m %p !M_NOTREADY", __func__, m));
		if ((m->m_flags & M_EXT) != 0 &&
		    m->m_ext.ext_type == EXT_PGS) {
			if (count < m->m_ext.ext_pgs->nrdy) {
				m->m_ext.ext_pgs->nrdy -= count;
				count = 0;
				break;
			}
			count -= m->m_ext.ext_pgs->nrdy;
			m->m_ext.ext_pgs->nrdy = 0;
		} else
			count--;

		m->m_flags &= ~(M_NOTREADY | blocker);
		if (blocker)
			sb->sb_acc += m->m_len;
		m = m->m_next;
	}

	/*
	 * If the first mbuf is still not fully ready because only
	 * some of its backing pages were readied, no further progress
	 * can be made.
	 */
	if (m0 == m) {
		MPASS(m->m_flags & M_NOTREADY);
		return (EINPROGRESS);
	}

	if (!blocker) {
		sbready_compress(sb, m0, m);
		return (EINPROGRESS);
	}

	/* This one was blocking all the queue. */
	for (; m && (m->m_flags & M_NOTREADY) == 0; m = m->m_next) {
		KASSERT(m->m_flags & M_BLOCKED,
		    ("%s: m %p !M_BLOCKED", __func__, m));
		m->m_flags &= ~M_BLOCKED;
		sb->sb_acc += m->m_len;
	}

	sb->sb_fnrdy = m;
	sbready_compress(sb, m0, m);

	return (0);
}

/*
 * Adjust sockbuf state reflecting allocation of m.
 */
void
sballoc(struct sockbuf *sb, struct mbuf *m)
{

	SOCKBUF_LOCK_ASSERT(sb);

	sb->sb_ccc += m->m_len;

	if (sb->sb_fnrdy == NULL) {
		if (m->m_flags & M_NOTREADY)
			sb->sb_fnrdy = m;
		else
			sb->sb_acc += m->m_len;
	} else
		m->m_flags |= M_BLOCKED;

	if (m->m_type != MT_DATA && m->m_type != MT_OOBDATA)
		sb->sb_ctl += m->m_len;

	sb->sb_mbcnt += MSIZE;
	sb->sb_mcnt += 1;

	if (m->m_flags & M_EXT) {
		sb->sb_mbcnt += m->m_ext.ext_size;
		sb->sb_ccnt += 1;
	}
}

/*
 * Adjust sockbuf state reflecting freeing of m.
 */
void
sbfree(struct sockbuf *sb, struct mbuf *m)
{

#if 0	/* XXX: not yet: soclose() call path comes here w/o lock. */
	SOCKBUF_LOCK_ASSERT(sb);
#endif

	sb->sb_ccc -= m->m_len;

	if (!(m->m_flags & M_NOTAVAIL))
		sb->sb_acc -= m->m_len;

	if (m == sb->sb_fnrdy) {
		struct mbuf *n;

		KASSERT(m->m_flags & M_NOTREADY,
		    ("%s: m %p !M_NOTREADY", __func__, m));

		n = m->m_next;
		while (n != NULL && !(n->m_flags & M_NOTREADY)) {
			n->m_flags &= ~M_BLOCKED;
			sb->sb_acc += n->m_len;
			n = n->m_next;
		}
		sb->sb_fnrdy = n;
	}

	if (m->m_type != MT_DATA && m->m_type != MT_OOBDATA)
		sb->sb_ctl -= m->m_len;

	sb->sb_mbcnt -= MSIZE;
	sb->sb_mcnt -= 1;
	if (m->m_flags & M_EXT) {
		sb->sb_mbcnt -= m->m_ext.ext_size;
		sb->sb_ccnt -= 1;
	}

	if (sb->sb_sndptr == m) {
		sb->sb_sndptr = NULL;
		sb->sb_sndptroff = 0;
	}
	if (sb->sb_sndptroff != 0)
		sb->sb_sndptroff -= m->m_len;
}

/*
 * Socantsendmore indicates that no more data will be sent on the socket; it
 * would normally be applied to a socket when the user informs the system
 * that no more data is to be sent, by the protocol code (in case
 * PRU_SHUTDOWN).  Socantrcvmore indicates that no more data will be
 * received, and will normally be applied to the socket by a protocol when it
 * detects that the peer will send no more data.  Data queued for reading in
 * the socket may yet be read.
 */
void
socantsendmore_locked(struct socket *so)
{

	SOCKBUF_LOCK_ASSERT(&so->so_snd);

	so->so_snd.sb_state |= SBS_CANTSENDMORE;
	sowwakeup_locked(so);
	mtx_assert(SOCKBUF_MTX(&so->so_snd), MA_NOTOWNED);
}

void
socantsendmore(struct socket *so)
{

	SOCKBUF_LOCK(&so->so_snd);
	socantsendmore_locked(so);
	mtx_assert(SOCKBUF_MTX(&so->so_snd), MA_NOTOWNED);
}

void
socantrcvmore_locked(struct socket *so)
{

	SOCKBUF_LOCK_ASSERT(&so->so_rcv);

	so->so_rcv.sb_state |= SBS_CANTRCVMORE;
	sorwakeup_locked(so);
	mtx_assert(SOCKBUF_MTX(&so->so_rcv), MA_NOTOWNED);
}

void
socantrcvmore(struct socket *so)
{

	SOCKBUF_LOCK(&so->so_rcv);
	socantrcvmore_locked(so);
	mtx_assert(SOCKBUF_MTX(&so->so_rcv), MA_NOTOWNED);
}

/*
 * Wait for data to arrive at/drain from a socket buffer.
 */
int
sbwait(struct sockbuf *sb)
{

	SOCKBUF_LOCK_ASSERT(sb);

	sb->sb_flags |= SB_WAIT;
	return (msleep_sbt(&sb->sb_acc, &sb->sb_mtx,
	    (sb->sb_flags & SB_NOINTR) ? PSOCK : PSOCK | PCATCH, "sbwait",
	    sb->sb_timeo, 0, 0));
}

int
sblock(struct sockbuf *sb, int flags)
{

	KASSERT((flags & SBL_VALID) == flags,
	    ("sblock: flags invalid (0x%x)", flags));

	if (flags & SBL_WAIT) {
		if ((sb->sb_flags & SB_NOINTR) ||
		    (flags & SBL_NOINTR)) {
			sx_xlock(&sb->sb_sx);
			return (0);
		}
		return (sx_xlock_sig(&sb->sb_sx));
	} else {
		if (sx_try_xlock(&sb->sb_sx) == 0)
			return (EWOULDBLOCK);
		return (0);
	}
}

void
sbunlock(struct sockbuf *sb)
{

	sx_xunlock(&sb->sb_sx);
}

/*
 * Wakeup processes waiting on a socket buffer.  Do asynchronous notification
 * via SIGIO if the socket has the SS_ASYNC flag set.
 *
 * Called with the socket buffer lock held; will release the lock by the end
 * of the function.  This allows the caller to acquire the socket buffer lock
 * while testing for the need for various sorts of wakeup and hold it through
 * to the point where it's no longer required.  We currently hold the lock
 * through calls out to other subsystems (with the exception of kqueue), and
 * then release it to avoid lock order issues.  It's not clear that's
 * correct.
 */
void
sowakeup(struct socket *so, struct sockbuf *sb)
{
	int ret;

	SOCKBUF_LOCK_ASSERT(sb);

	selwakeuppri(sb->sb_sel, PSOCK);
	if (!SEL_WAITING(sb->sb_sel))
		sb->sb_flags &= ~SB_SEL;
	if (sb->sb_flags & SB_WAIT) {
		sb->sb_flags &= ~SB_WAIT;
		wakeup(&sb->sb_acc);
	}
	KNOTE_LOCKED(&sb->sb_sel->si_note, 0);
	if (sb->sb_upcall != NULL) {
		ret = sb->sb_upcall(so, sb->sb_upcallarg, M_NOWAIT);
		if (ret == SU_ISCONNECTED) {
			KASSERT(sb == &so->so_rcv,
			    ("SO_SND upcall returned SU_ISCONNECTED"));
			soupcall_clear(so, SO_RCV);
		}
	} else
		ret = SU_OK;
	if (sb->sb_flags & SB_AIO)
		sowakeup_aio(so, sb);
	SOCKBUF_UNLOCK(sb);
	if (ret == SU_ISCONNECTED)
		soisconnected(so);
	if ((so->so_state & SS_ASYNC) && so->so_sigio != NULL)
		pgsigio(&so->so_sigio, SIGIO, 0);
	mtx_assert(SOCKBUF_MTX(sb), MA_NOTOWNED);
}

/*
 * Socket buffer (struct sockbuf) utility routines.
 *
 * Each socket contains two socket buffers: one for sending data and one for
 * receiving data.  Each buffer contains a queue of mbufs, information about
 * the number of mbufs and amount of data in the queue, and other fields
 * allowing select() statements and notification on data availability to be
 * implemented.
 *
 * Data stored in a socket buffer is maintained as a list of records.  Each
 * record is a list of mbufs chained together with the m_next field.  Records
 * are chained together with the m_nextpkt field.  The upper level routine
 * soreceive() expects the following conventions to be observed when placing
 * information in the receive buffer:
 *
 * 1. If the protocol requires each message be preceded by the sender's name,
 *    then a record containing that name must be present before any
 *    associated data (mbuf's must be of type MT_SONAME).
 * 2. If the protocol supports the exchange of ``access rights'' (really just
 *    additional data associated with the message), and there are ``rights''
 *    to be received, then a record containing this data should be present
 *    (mbuf's must be of type MT_RIGHTS).
 * 3. If a name or rights record exists, then it must be followed by a data
 *    record, perhaps of zero length.
 *
 * Before using a new socket structure it is first necessary to reserve
 * buffer space to the socket, by calling sbreserve().  This should commit
 * some of the available buffer space in the system buffer pool for the
 * socket (currently, it does nothing but enforce limits).  The space should
 * be released by calling sbrelease() when the socket is destroyed.
 */
int
soreserve(struct socket *so, u_long sndcc, u_long rcvcc)
{
	struct thread *td = curthread;

	SOCKBUF_LOCK(&so->so_snd);
	SOCKBUF_LOCK(&so->so_rcv);
	if (sbreserve_locked(&so->so_snd, sndcc, so, td) == 0)
		goto bad;
	if (sbreserve_locked(&so->so_rcv, rcvcc, so, td) == 0)
		goto bad2;
	if (so->so_rcv.sb_lowat == 0)
		so->so_rcv.sb_lowat = 1;
	if (so->so_snd.sb_lowat == 0)
		so->so_snd.sb_lowat = MCLBYTES;
	if (so->so_snd.sb_lowat > so->so_snd.sb_hiwat)
		so->so_snd.sb_lowat = so->so_snd.sb_hiwat;
	SOCKBUF_UNLOCK(&so->so_rcv);
	SOCKBUF_UNLOCK(&so->so_snd);
	return (0);
bad2:
	sbrelease_locked(&so->so_snd, so);
bad:
	SOCKBUF_UNLOCK(&so->so_rcv);
	SOCKBUF_UNLOCK(&so->so_snd);
	return (ENOBUFS);
}

static int
sysctl_handle_sb_max(SYSCTL_HANDLER_ARGS)
{
	int error = 0;
	u_long tmp_sb_max = sb_max;

	error = sysctl_handle_long(oidp, &tmp_sb_max, arg2, req);
	if (error || !req->newptr)
		return (error);
	if (tmp_sb_max < MSIZE + MCLBYTES)
		return (EINVAL);
	sb_max = tmp_sb_max;
	sb_max_adj = (u_quad_t)sb_max * MCLBYTES / (MSIZE + MCLBYTES);
	return (0);
}

/*
 * Allot mbufs to a sockbuf.  Attempt to scale mbmax so that mbcnt doesn't
 * become limiting if buffering efficiency is near the normal case.
 */
int
sbreserve_locked(struct sockbuf *sb, u_long cc, struct socket *so,
    struct thread *td)
{
	rlim_t sbsize_limit;

	SOCKBUF_LOCK_ASSERT(sb);

	/*
	 * When a thread is passed, we take into account the thread's socket
	 * buffer size limit.  The caller will generally pass curthread, but
	 * in the TCP input path, NULL will be passed to indicate that no
	 * appropriate thread resource limits are available.  In that case,
	 * we don't apply a process limit.
	 */
	if (cc > sb_max_adj)
		return (0);
	if (td != NULL) {
		sbsize_limit = lim_cur(td, RLIMIT_SBSIZE);
	} else
		sbsize_limit = RLIM_INFINITY;
	if (!chgsbsize(so->so_cred->cr_uidinfo, &sb->sb_hiwat, cc,
	    sbsize_limit))
		return (0);
	sb->sb_mbmax = min(cc * sb_efficiency, sb_max);
	if (sb->sb_lowat > sb->sb_hiwat)
		sb->sb_lowat = sb->sb_hiwat;
	return (1);
}

int
sbsetopt(struct socket *so, int cmd, u_long cc)
{
	struct sockbuf *sb;
	short *flags;
	u_int *hiwat, *lowat;
	int error;

	sb = NULL;
	SOCK_LOCK(so);
	if (SOLISTENING(so)) {
		switch (cmd) {
		case SO_SNDLOWAT:
		case SO_SNDBUF:
			lowat = &so->sol_sbsnd_lowat;
			hiwat = &so->sol_sbsnd_hiwat;
			flags = &so->sol_sbsnd_flags;
			break;
		case SO_RCVLOWAT:
		case SO_RCVBUF:
			lowat = &so->sol_sbrcv_lowat;
			hiwat = &so->sol_sbrcv_hiwat;
			flags = &so->sol_sbrcv_flags;
			break;
		}
	} else {
		switch (cmd) {
		case SO_SNDLOWAT:
		case SO_SNDBUF:
			sb = &so->so_snd;
			break;
		case SO_RCVLOWAT:
		case SO_RCVBUF:
			sb = &so->so_rcv;
			break;
		}
		flags = &sb->sb_flags;
		hiwat = &sb->sb_hiwat;
		lowat = &sb->sb_lowat;
		SOCKBUF_LOCK(sb);
	}

	error = 0;
	switch (cmd) {
	case SO_SNDBUF:
	case SO_RCVBUF:
		if (SOLISTENING(so)) {
			if (cc > sb_max_adj) {
				error = ENOBUFS;
				break;
			}
			*hiwat = cc;
			if (*lowat > *hiwat)
				*lowat = *hiwat;
		} else {
			if (!sbreserve_locked(sb, cc, so, curthread))
				error = ENOBUFS;
		}
		if (error == 0)
			*flags &= ~SB_AUTOSIZE;
		break;
	case SO_SNDLOWAT:
	case SO_RCVLOWAT:
		/*
		 * Make sure the low-water is never greater than the
		 * high-water.
		 */
		*lowat = (cc > *hiwat) ? *hiwat : cc;
		break;
	}

	if (!SOLISTENING(so))
		SOCKBUF_UNLOCK(sb);
	SOCK_UNLOCK(so);
	return (error);
}

/*
 * Free mbufs held by a socket, and reserved mbuf space.
 */
void
sbrelease_internal(struct sockbuf *sb, struct socket *so)
{

	sbflush_internal(sb);
	(void)chgsbsize(so->so_cred->cr_uidinfo, &sb->sb_hiwat, 0,
	    RLIM_INFINITY);
	sb->sb_mbmax = 0;
}

void
sbrelease_locked(struct sockbuf *sb, struct socket *so)
{

	SOCKBUF_LOCK_ASSERT(sb);

	sbrelease_internal(sb, so);
}

void
sbrelease(struct sockbuf *sb, struct socket *so)
{

	SOCKBUF_LOCK(sb);
	sbrelease_locked(sb, so);
	SOCKBUF_UNLOCK(sb);
}

void
sbdestroy(struct sockbuf *sb, struct socket *so)
{

	sbrelease_internal(sb, so);
#ifdef KERN_TLS
	if (sb->sb_tls_info != NULL)
		ktls_free(sb->sb_tls_info);
	sb->sb_tls_info = NULL;
#endif
}

/*
 * Routines to add and remove data from an mbuf queue.
 *
 * The routines sbappend() or sbappendrecord() are normally called to append
 * new mbufs to a socket buffer, after checking that adequate space is
 * available, comparing the function sbspace() with the amount of data to be
 * added.  sbappendrecord() differs from sbappend() in that data supplied is
 * treated as the beginning of a new record.  To place a sender's address,
 * optional access rights, and data in a socket receive buffer,
 * sbappendaddr() should be used.  To place access rights and data in a
 * socket receive buffer, sbappendrights() should be used.  In either case,
 * the new data begins a new record.  Note that unlike sbappend() and
 * sbappendrecord(), these routines check for the caller that there will be
 * enough space to store the data.  Each fails if there is not enough space,
 * or if it cannot find mbufs to store additional information in.
 *
 * Reliable protocols may use the socket send buffer to hold data awaiting
 * acknowledgement.  Data is normally copied from a socket send buffer in a
 * protocol with m_copy for output to a peer, and then removing the data from
 * the socket buffer with sbdrop() or sbdroprecord() when the data is
 * acknowledged by the peer.
 */
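/*
 * Illustrative sketch (not compiled): a datagram protocol typically
 * delivers a received packet to a socket with the routines above in
 * roughly the following way, modeled loosely on what UDP does.  The
 * helper name and the 'sa'/'m'/'control' variables below are
 * hypothetical and only show the calling pattern.
 *
 *	static void
 *	proto_deliver(struct socket *so, struct sockaddr *sa, struct mbuf *m,
 *	    struct mbuf *control)
 *	{
 *
 *		SOCKBUF_LOCK(&so->so_rcv);
 *		if (sbappendaddr_locked(&so->so_rcv, sa, m, control) == 0) {
 *			SOCKBUF_UNLOCK(&so->so_rcv);
 *			m_freem(m);			(no space or no mbufs)
 *			if (control != NULL)
 *				m_freem(control);
 *		} else
 *			sorwakeup_locked(so);		(drops the sockbuf lock)
 *	}
 *
 * sbappendaddr_locked() checks sbspace() itself, so the caller only has
 * to handle the failure return; stream protocols instead append with
 * sbappend()/sbappendstream() and later trim acknowledged data with
 * sbdrop().
 */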
#ifdef SOCKBUF_DEBUG
void
sblastrecordchk(struct sockbuf *sb, const char *file, int line)
{
	struct mbuf *m = sb->sb_mb;

	SOCKBUF_LOCK_ASSERT(sb);

	while (m && m->m_nextpkt)
		m = m->m_nextpkt;

	if (m != sb->sb_lastrecord) {
		printf("%s: sb_mb %p sb_lastrecord %p last %p\n",
			__func__, sb->sb_mb, sb->sb_lastrecord, m);
		printf("packet chain:\n");
		for (m = sb->sb_mb; m != NULL; m = m->m_nextpkt)
			printf("\t%p\n", m);
		panic("%s from %s:%u", __func__, file, line);
	}
}

void
sblastmbufchk(struct sockbuf *sb, const char *file, int line)
{
	struct mbuf *m = sb->sb_mb;
	struct mbuf *n;

	SOCKBUF_LOCK_ASSERT(sb);

	while (m && m->m_nextpkt)
		m = m->m_nextpkt;

	while (m && m->m_next)
		m = m->m_next;

	if (m != sb->sb_mbtail) {
		printf("%s: sb_mb %p sb_mbtail %p last %p\n",
			__func__, sb->sb_mb, sb->sb_mbtail, m);
		printf("packet tree:\n");
		for (m = sb->sb_mb; m != NULL; m = m->m_nextpkt) {
			printf("\t");
			for (n = m; n != NULL; n = n->m_next)
				printf("%p ", n);
			printf("\n");
		}
		panic("%s from %s:%u", __func__, file, line);
	}
}
#endif /* SOCKBUF_DEBUG */

#define SBLINKRECORD(sb, m0) do {					\
	SOCKBUF_LOCK_ASSERT(sb);					\
	if ((sb)->sb_lastrecord != NULL)				\
		(sb)->sb_lastrecord->m_nextpkt = (m0);			\
	else								\
		(sb)->sb_mb = (m0);					\
	(sb)->sb_lastrecord = (m0);					\
} while (/*CONSTCOND*/0)

/*
 * Append mbuf chain m to the last record in the socket buffer sb.  The
 * additional space associated the mbuf chain is recorded in sb.  Empty mbufs
 * are discarded and mbufs are compacted where possible.
 */
void
sbappend_locked(struct sockbuf *sb, struct mbuf *m, int flags)
{
	struct mbuf *n;

	SOCKBUF_LOCK_ASSERT(sb);

	if (m == NULL)
		return;
	sbm_clrprotoflags(m, flags);
	SBLASTRECORDCHK(sb);
	n = sb->sb_mb;
	if (n) {
		while (n->m_nextpkt)
			n = n->m_nextpkt;
		do {
			if (n->m_flags & M_EOR) {
				sbappendrecord_locked(sb, m); /* XXXXXX!!!! */
				return;
			}
		} while (n->m_next && (n = n->m_next));
	} else {
		/*
		 * XXX Would like to simply use sb_mbtail here, but
		 * XXX I need to verify that I won't miss an EOR that
		 * XXX way.
		 */
		if ((n = sb->sb_lastrecord) != NULL) {
			do {
				if (n->m_flags & M_EOR) {
					sbappendrecord_locked(sb, m); /* XXXXXX!!!! */
					return;
				}
			} while (n->m_next && (n = n->m_next));
		} else {
			/*
			 * If this is the first record in the socket buffer,
			 * it's also the last record.
			 */
			sb->sb_lastrecord = m;
		}
	}
	sbcompress(sb, m, n);
	SBLASTRECORDCHK(sb);
}

/*
 * Append mbuf chain m to the last record in the socket buffer sb.  The
 * additional space associated the mbuf chain is recorded in sb.  Empty mbufs
 * are discarded and mbufs are compacted where possible.
 */
void
sbappend(struct sockbuf *sb, struct mbuf *m, int flags)
{

	SOCKBUF_LOCK(sb);
	sbappend_locked(sb, m, flags);
	SOCKBUF_UNLOCK(sb);
}

/*
 * This version of sbappend() should only be used when the caller absolutely
 * knows that there will never be more than one record in the socket buffer,
 * that is, a stream protocol (such as TCP).
 */
void
sbappendstream_locked(struct sockbuf *sb, struct mbuf *m, int flags)
{
	SOCKBUF_LOCK_ASSERT(sb);

	KASSERT(m->m_nextpkt == NULL,("sbappendstream 0"));
	KASSERT(sb->sb_mb == sb->sb_lastrecord,("sbappendstream 1"));

	SBLASTMBUFCHK(sb);

#ifdef KERN_TLS
	if (sb->sb_tls_info != NULL)
		ktls_seq(sb, m);
#endif

	/* Remove all packet headers and mbuf tags to get a pure data chain. */
	m_demote(m, 1, flags & PRUS_NOTREADY ? M_NOTREADY : 0);

	sbcompress(sb, m, sb->sb_mbtail);

	sb->sb_lastrecord = sb->sb_mb;
	SBLASTRECORDCHK(sb);
}

/*
 * This version of sbappend() should only be used when the caller absolutely
 * knows that there will never be more than one record in the socket buffer,
 * that is, a stream protocol (such as TCP).
 */
void
sbappendstream(struct sockbuf *sb, struct mbuf *m, int flags)
{

	SOCKBUF_LOCK(sb);
	sbappendstream_locked(sb, m, flags);
	SOCKBUF_UNLOCK(sb);
}

#ifdef SOCKBUF_DEBUG
void
sbcheck(struct sockbuf *sb, const char *file, int line)
{
	struct mbuf *m, *n, *fnrdy;
	u_long acc, ccc, mbcnt;

	SOCKBUF_LOCK_ASSERT(sb);

	acc = ccc = mbcnt = 0;
	fnrdy = NULL;

	for (m = sb->sb_mb; m; m = n) {
	    n = m->m_nextpkt;
	    for (; m; m = m->m_next) {
		if (m->m_len == 0) {
			printf("sb %p empty mbuf %p\n", sb, m);
			goto fail;
		}
		if ((m->m_flags & M_NOTREADY) && fnrdy == NULL) {
			if (m != sb->sb_fnrdy) {
				printf("sb %p: fnrdy %p != m %p\n",
				    sb, sb->sb_fnrdy, m);
				goto fail;
			}
			fnrdy = m;
		}
		if (fnrdy) {
			if (!(m->m_flags & M_NOTAVAIL)) {
				printf("sb %p: fnrdy %p, m %p is avail\n",
				    sb, sb->sb_fnrdy, m);
				goto fail;
			}
		} else
			acc += m->m_len;
		ccc += m->m_len;
		mbcnt += MSIZE;
		if (m->m_flags & M_EXT) /*XXX*/ /* pretty sure this is bogus */
			mbcnt += m->m_ext.ext_size;
	    }
	}
	if (acc != sb->sb_acc || ccc != sb->sb_ccc || mbcnt != sb->sb_mbcnt) {
		printf("acc %ld/%u ccc %ld/%u mbcnt %ld/%u\n",
		    acc, sb->sb_acc, ccc, sb->sb_ccc, mbcnt, sb->sb_mbcnt);
		goto fail;
	}
	return;
fail:
	panic("%s from %s:%u", __func__, file, line);
}
#endif

/*
 * As above, except the mbuf chain begins a new record.
 */
void
sbappendrecord_locked(struct sockbuf *sb, struct mbuf *m0)
{
	struct mbuf *m;

	SOCKBUF_LOCK_ASSERT(sb);

	if (m0 == NULL)
		return;
	m_clrprotoflags(m0);
	/*
	 * Put the first mbuf on the queue.  Note this permits zero length
	 * records.
	 */
	sballoc(sb, m0);
	SBLASTRECORDCHK(sb);
	SBLINKRECORD(sb, m0);
	sb->sb_mbtail = m0;
	m = m0->m_next;
	m0->m_next = 0;
	if (m && (m0->m_flags & M_EOR)) {
		m0->m_flags &= ~M_EOR;
		m->m_flags |= M_EOR;
	}
	/* always call sbcompress() so it can do SBLASTMBUFCHK() */
	sbcompress(sb, m, m0);
}

/*
 * As above, except the mbuf chain begins a new record.
 */
void
sbappendrecord(struct sockbuf *sb, struct mbuf *m0)
{

	SOCKBUF_LOCK(sb);
	sbappendrecord_locked(sb, m0);
	SOCKBUF_UNLOCK(sb);
}

/* Helper routine that appends data, control, and address to a sockbuf. */
static int
sbappendaddr_locked_internal(struct sockbuf *sb, const struct sockaddr *asa,
    struct mbuf *m0, struct mbuf *control, struct mbuf *ctrl_last)
{
	struct mbuf *m, *n, *nlast;
#if MSIZE <= 256
	if (asa->sa_len > MLEN)
		return (0);
#endif
	m = m_get(M_NOWAIT, MT_SONAME);
	if (m == NULL)
		return (0);
	m->m_len = asa->sa_len;
	bcopy(asa, mtod(m, caddr_t), asa->sa_len);
	if (m0) {
		m_clrprotoflags(m0);
		m_tag_delete_chain(m0, NULL);
		/*
		 * Clear some persistent info from pkthdr.
		 * We don't use m_demote(), because some netgraph consumers
		 * expect M_PKTHDR presence.
		 */
		m0->m_pkthdr.rcvif = NULL;
		m0->m_pkthdr.flowid = 0;
		m0->m_pkthdr.csum_flags = 0;
		m0->m_pkthdr.fibnum = 0;
		m0->m_pkthdr.rsstype = 0;
	}
	if (ctrl_last)
		ctrl_last->m_next = m0;	/* concatenate data to control */
	else
		control = m0;
	m->m_next = control;
	for (n = m; n->m_next != NULL; n = n->m_next)
		sballoc(sb, n);
	sballoc(sb, n);
	nlast = n;
	SBLINKRECORD(sb, m);

	sb->sb_mbtail = nlast;
	SBLASTMBUFCHK(sb);

	SBLASTRECORDCHK(sb);
	return (1);
}

/*
 * Append address and data, and optionally, control (ancillary) data to the
 * receive queue of a socket.  If present, m0 must include a packet header
 * with total length.  Returns 0 if no space in sockbuf or insufficient
 * mbufs.
 */
int
sbappendaddr_locked(struct sockbuf *sb, const struct sockaddr *asa,
    struct mbuf *m0, struct mbuf *control)
{
	struct mbuf *ctrl_last;
	int space = asa->sa_len;

	SOCKBUF_LOCK_ASSERT(sb);

	if (m0 && (m0->m_flags & M_PKTHDR) == 0)
		panic("sbappendaddr_locked");
	if (m0)
		space += m0->m_pkthdr.len;
	space += m_length(control, &ctrl_last);

	if (space > sbspace(sb))
		return (0);
	return (sbappendaddr_locked_internal(sb, asa, m0, control, ctrl_last));
}

/*
 * Append address and data, and optionally, control (ancillary) data to the
 * receive queue of a socket.  If present, m0 must include a packet header
 * with total length.  Returns 0 if insufficient mbufs.  Does not validate space
 * on the receiving sockbuf.
 */
int
sbappendaddr_nospacecheck_locked(struct sockbuf *sb, const struct sockaddr *asa,
    struct mbuf *m0, struct mbuf *control)
{
	struct mbuf *ctrl_last;

	SOCKBUF_LOCK_ASSERT(sb);

	ctrl_last = (control == NULL) ? NULL : m_last(control);
	return (sbappendaddr_locked_internal(sb, asa, m0, control, ctrl_last));
}

/*
 * Append address and data, and optionally, control (ancillary) data to the
 * receive queue of a socket.  If present, m0 must include a packet header
 * with total length.  Returns 0 if no space in sockbuf or insufficient
 * mbufs.
 */
int
sbappendaddr(struct sockbuf *sb, const struct sockaddr *asa,
    struct mbuf *m0, struct mbuf *control)
{
	int retval;

	SOCKBUF_LOCK(sb);
	retval = sbappendaddr_locked(sb, asa, m0, control);
	SOCKBUF_UNLOCK(sb);
	return (retval);
}

void
sbappendcontrol_locked(struct sockbuf *sb, struct mbuf *m0,
    struct mbuf *control)
{
	struct mbuf *m, *mlast;

	m_clrprotoflags(m0);
	m_last(control)->m_next = m0;

	SBLASTRECORDCHK(sb);

	for (m = control; m->m_next; m = m->m_next)
		sballoc(sb, m);
	sballoc(sb, m);
	mlast = m;
	SBLINKRECORD(sb, control);

	sb->sb_mbtail = mlast;
	SBLASTMBUFCHK(sb);

	SBLASTRECORDCHK(sb);
}

void
sbappendcontrol(struct sockbuf *sb, struct mbuf *m0, struct mbuf *control)
{

	SOCKBUF_LOCK(sb);
	sbappendcontrol_locked(sb, m0, control);
	SOCKBUF_UNLOCK(sb);
}

/*
 * Append the data in mbuf chain (m) into the socket buffer sb following mbuf
 * (n).  If (n) is NULL, the buffer is presumed empty.
 *
 * When the data is compressed, mbufs in the chain may be handled in one of
 * three ways:
 *
 * (1) The mbuf may simply be dropped, if it contributes nothing (no data, no
 *     record boundary, and no change in data type).
 *
 * (2) The mbuf may be coalesced -- i.e., data in the mbuf may be copied into
 *     an mbuf already in the socket buffer.  This can occur if an
 *     appropriate mbuf exists, there is room, both mbufs are not marked as
 *     not ready, and no merging of data types will occur.
 *
 * (3) The mbuf may be appended to the end of the existing mbuf chain.
 *
 * If any of the new mbufs is marked as M_EOR, mark the last mbuf appended as
 * end-of-record.
 */
void
sbcompress(struct sockbuf *sb, struct mbuf *m, struct mbuf *n)
{
	int eor = 0;
	struct mbuf *o;

	SOCKBUF_LOCK_ASSERT(sb);

	while (m) {
		eor |= m->m_flags & M_EOR;
		if (m->m_len == 0 &&
		    (eor == 0 ||
		     (((o = m->m_next) || (o = n)) &&
		      o->m_type == m->m_type))) {
			if (sb->sb_lastrecord == m)
				sb->sb_lastrecord = m->m_next;
			m = m_free(m);
			continue;
		}
		if (n && (n->m_flags & M_EOR) == 0 &&
		    M_WRITABLE(n) &&
		    ((sb->sb_flags & SB_NOCOALESCE) == 0) &&
		    !(m->m_flags & M_NOTREADY) &&
		    !(n->m_flags & (M_NOTREADY | M_NOMAP)) &&
		    !mbuf_has_tls_session(m) &&
		    !mbuf_has_tls_session(n) &&
		    m->m_len <= MCLBYTES / 4 && /* XXX: Don't copy too much */
		    m->m_len <= M_TRAILINGSPACE(n) &&
		    n->m_type == m->m_type) {
			m_copydata(m, 0, m->m_len, mtodo(n, n->m_len));
			n->m_len += m->m_len;
			sb->sb_ccc += m->m_len;
			if (sb->sb_fnrdy == NULL)
				sb->sb_acc += m->m_len;
			if (m->m_type != MT_DATA && m->m_type != MT_OOBDATA)
				/* XXX: Probably don't need.*/
				sb->sb_ctl += m->m_len;
			m = m_free(m);
			continue;
		}
		if (m->m_len <= MLEN && (m->m_flags & M_NOMAP) &&
		    (m->m_flags & M_NOTREADY) == 0 &&
		    !mbuf_has_tls_session(m))
			(void)mb_unmapped_compress(m);
		if (n)
			n->m_next = m;
		else
			sb->sb_mb = m;
		sb->sb_mbtail = m;
		sballoc(sb, m);
		n = m;
		m->m_flags &= ~M_EOR;
		m = m->m_next;
		n->m_next = 0;
	}
	if (eor) {
		KASSERT(n != NULL, ("sbcompress: eor && n == NULL"));
		n->m_flags |= eor;
	}
	SBLASTMBUFCHK(sb);
}

/*
 * Free all mbufs in a sockbuf.  Check that all resources are reclaimed.
 */
static void
sbflush_internal(struct sockbuf *sb)
{

	while (sb->sb_mbcnt) {
		/*
		 * Don't call sbcut(sb, 0) if the leading mbuf is non-empty:
		 * we would loop forever. Panic instead.
		 */
		if (sb->sb_ccc == 0 && (sb->sb_mb == NULL || sb->sb_mb->m_len))
			break;
		m_freem(sbcut_internal(sb, (int)sb->sb_ccc));
	}
	KASSERT(sb->sb_ccc == 0 && sb->sb_mb == 0 && sb->sb_mbcnt == 0,
	    ("%s: ccc %u mb %p mbcnt %u", __func__,
	    sb->sb_ccc, (void *)sb->sb_mb, sb->sb_mbcnt));
}

void
sbflush_locked(struct sockbuf *sb)
{

	SOCKBUF_LOCK_ASSERT(sb);
	sbflush_internal(sb);
}

void
sbflush(struct sockbuf *sb)
{

	SOCKBUF_LOCK(sb);
	sbflush_locked(sb);
	SOCKBUF_UNLOCK(sb);
}

/*
 * Cut data from (the front of) a sockbuf.
 */
static struct mbuf *
sbcut_internal(struct sockbuf *sb, int len)
{
	struct mbuf *m, *next, *mfree;

	KASSERT(len >= 0, ("%s: len is %d but it is supposed to be >= 0",
	    __func__, len));
	KASSERT(len <= sb->sb_ccc, ("%s: len: %d is > ccc: %u",
	    __func__, len, sb->sb_ccc));

	next = (m = sb->sb_mb) ? m->m_nextpkt : 0;
	mfree = NULL;

	while (len > 0) {
		if (m == NULL) {
			KASSERT(next, ("%s: no next, len %d", __func__, len));
			m = next;
			next = m->m_nextpkt;
		}
		if (m->m_len > len) {
			KASSERT(!(m->m_flags & M_NOTAVAIL),
			    ("%s: m %p M_NOTAVAIL", __func__, m));
			m->m_len -= len;
			m->m_data += len;
			sb->sb_ccc -= len;
			sb->sb_acc -= len;
			if (sb->sb_sndptroff != 0)
				sb->sb_sndptroff -= len;
			if (m->m_type != MT_DATA && m->m_type != MT_OOBDATA)
				sb->sb_ctl -= len;
			break;
		}
		len -= m->m_len;
		sbfree(sb, m);
		/*
		 * Do not put M_NOTREADY buffers to the free list, they
		 * are referenced from outside.
		 */
		if (m->m_flags & M_NOTREADY)
			m = m->m_next;
		else {
			struct mbuf *n;

			n = m->m_next;
			m->m_next = mfree;
			mfree = m;
			m = n;
		}
	}
	/*
	 * Free any zero-length mbufs from the buffer.
	 * For SOCK_DGRAM sockets such mbufs represent empty records.
	 * XXX: For SOCK_STREAM sockets such mbufs can appear in the buffer,
	 * when sosend_generic() needs to send only control data.
	 */
	while (m && m->m_len == 0) {
		struct mbuf *n;

		sbfree(sb, m);
		n = m->m_next;
		m->m_next = mfree;
		mfree = m;
		m = n;
	}
	if (m) {
		sb->sb_mb = m;
		m->m_nextpkt = next;
	} else
		sb->sb_mb = next;
	/*
	 * First part is an inline SB_EMPTY_FIXUP().  Second part makes sure
	 * sb_lastrecord is up-to-date if we dropped part of the last record.
	 */
	m = sb->sb_mb;
	if (m == NULL) {
		sb->sb_mbtail = NULL;
		sb->sb_lastrecord = NULL;
	} else if (m->m_nextpkt == NULL) {
		sb->sb_lastrecord = m;
	}

	return (mfree);
}

/*
 * Drop data from (the front of) a sockbuf.
 */
void
sbdrop_locked(struct sockbuf *sb, int len)
{

	SOCKBUF_LOCK_ASSERT(sb);
	m_freem(sbcut_internal(sb, len));
}

/*
 * Drop data from (the front of) a sockbuf,
 * and return it to caller.
 */
struct mbuf *
sbcut_locked(struct sockbuf *sb, int len)
{

	SOCKBUF_LOCK_ASSERT(sb);
	return (sbcut_internal(sb, len));
}

void
sbdrop(struct sockbuf *sb, int len)
{
	struct mbuf *mfree;

	SOCKBUF_LOCK(sb);
	mfree = sbcut_internal(sb, len);
	SOCKBUF_UNLOCK(sb);

	m_freem(mfree);
}

struct mbuf *
sbsndptr_noadv(struct sockbuf *sb, uint32_t off, uint32_t *moff)
{
	struct mbuf *m;

	KASSERT(sb->sb_mb != NULL, ("%s: sb_mb is NULL", __func__));
	if (sb->sb_sndptr == NULL || sb->sb_sndptroff > off) {
		*moff = off;
		if (sb->sb_sndptr == NULL) {
			sb->sb_sndptr = sb->sb_mb;
			sb->sb_sndptroff = 0;
		}
		return (sb->sb_mb);
	} else {
		m = sb->sb_sndptr;
		off -= sb->sb_sndptroff;
	}
	*moff = off;
	return (m);
}

void
sbsndptr_adv(struct sockbuf *sb, struct mbuf *mb, uint32_t len)
{
	/*
	 * A small copy was done, advance forward the sb_sbsndptr to cover
	 * it.
	 */
	struct mbuf *m;

	if (mb != sb->sb_sndptr) {
		/* Did not copyout at the same mbuf */
		return;
	}
	m = mb;
	while (m && (len > 0)) {
		if (len >= m->m_len) {
			len -= m->m_len;
			if (m->m_next) {
				sb->sb_sndptroff += m->m_len;
				sb->sb_sndptr = m->m_next;
			}
			m = m->m_next;
		} else {
			len = 0;
		}
	}
}

/*
 * Return the first mbuf and the mbuf data offset for the provided
 * send offset without changing the "sb_sndptroff" field.
 */
struct mbuf *
sbsndmbuf(struct sockbuf *sb, u_int off, u_int *moff)
{
	struct mbuf *m;

	KASSERT(sb->sb_mb != NULL, ("%s: sb_mb is NULL", __func__));

	/*
	 * If the "off" is below the stored offset, which happens on
	 * retransmits, just use "sb_mb":
	 */
	if (sb->sb_sndptr == NULL || sb->sb_sndptroff > off) {
		m = sb->sb_mb;
	} else {
		m = sb->sb_sndptr;
		off -= sb->sb_sndptroff;
	}
	while (off > 0 && m != NULL) {
		if (off < m->m_len)
			break;
		off -= m->m_len;
		m = m->m_next;
	}
	*moff = off;
	return (m);
}

/*
 * Drop a record off the front of a sockbuf and move the next record to the
 * front.
 */
void
sbdroprecord_locked(struct sockbuf *sb)
{
	struct mbuf *m;

	SOCKBUF_LOCK_ASSERT(sb);

	m = sb->sb_mb;
	if (m) {
		sb->sb_mb = m->m_nextpkt;
		do {
			sbfree(sb, m);
			m = m_free(m);
		} while (m);
	}
	SB_EMPTY_FIXUP(sb);
}

/*
 * Drop a record off the front of a sockbuf and move the next record to the
 * front.
 */
void
sbdroprecord(struct sockbuf *sb)
{

	SOCKBUF_LOCK(sb);
	sbdroprecord_locked(sb);
	SOCKBUF_UNLOCK(sb);
}

/*
 * Create a "control" mbuf containing the specified data with the specified
 * type for presentation on a socket buffer.
 */
struct mbuf *
sbcreatecontrol(caddr_t p, int size, int type, int level)
{
	struct cmsghdr *cp;
	struct mbuf *m;

	if (CMSG_SPACE((u_int)size) > MCLBYTES)
		return ((struct mbuf *) NULL);
	if (CMSG_SPACE((u_int)size) > MLEN)
		m = m_getcl(M_NOWAIT, MT_CONTROL, 0);
	else
		m = m_get(M_NOWAIT, MT_CONTROL);
	if (m == NULL)
		return ((struct mbuf *) NULL);
	cp = mtod(m, struct cmsghdr *);
	m->m_len = 0;
	KASSERT(CMSG_SPACE((u_int)size) <= M_TRAILINGSPACE(m),
	    ("sbcreatecontrol: short mbuf"));
	/*
	 * Don't leave the padding between the msg header and the
	 * cmsg data and the padding after the cmsg data un-initialized.
	 */
	bzero(cp, CMSG_SPACE((u_int)size));
	if (p != NULL)
		(void)memcpy(CMSG_DATA(cp), p, size);
	m->m_len = CMSG_SPACE(size);
	cp->cmsg_len = CMSG_LEN(size);
	cp->cmsg_level = level;
	cp->cmsg_type = type;
	return (m);
}
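
/*
 * Illustrative sketch (not compiled): a typical use of sbcreatecontrol()
 * is to package per-packet metadata as ancillary data before it is
 * appended to a receive buffer together with the datagram, for example a
 * receive timestamp:
 *
 *	struct timeval tv;
 *	struct mbuf *control;
 *
 *	microtime(&tv);
 *	control = sbcreatecontrol((caddr_t)&tv, sizeof(tv),
 *	    SCM_TIMESTAMP, SOL_SOCKET);
 *
 * The returned mbuf (NULL if no mbuf could be allocated or the data is
 * too large) is then passed as the 'control' argument to sbappendaddr()
 * or sbappendcontrol().
 */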

/*
 * This does the same for socket buffers that sotoxsocket does for sockets:
 * generate an user-format data structure describing the socket buffer.  Note
 * that the xsockbuf structure, since it is always embedded in a socket, does
 * not include a self pointer nor a length.  We make this entry point public
 * in case some other mechanism needs it.
 */
void
sbtoxsockbuf(struct sockbuf *sb, struct xsockbuf *xsb)
{

	xsb->sb_cc = sb->sb_ccc;
	xsb->sb_hiwat = sb->sb_hiwat;
	xsb->sb_mbcnt = sb->sb_mbcnt;
	xsb->sb_mcnt = sb->sb_mcnt;
	xsb->sb_ccnt = sb->sb_ccnt;
	xsb->sb_mbmax = sb->sb_mbmax;
	xsb->sb_lowat = sb->sb_lowat;
	xsb->sb_flags = sb->sb_flags;
	xsb->sb_timeo = sb->sb_timeo;
}

/* This takes the place of kern.maxsockbuf, which moved to kern.ipc. */
static int dummy;
SYSCTL_INT(_kern, KERN_DUMMY, dummy, CTLFLAG_RW, &dummy, 0, "");
SYSCTL_OID(_kern_ipc, KIPC_MAXSOCKBUF, maxsockbuf, CTLTYPE_ULONG|CTLFLAG_RW,
    &sb_max, 0, sysctl_handle_sb_max, "LU", "Maximum socket buffer size");
SYSCTL_ULONG(_kern_ipc, KIPC_SOCKBUF_WASTE, sockbuf_waste_factor, CTLFLAG_RW,
    &sb_efficiency, 0, "Socket buffer size waste factor");