/*-
 * SPDX-License-Identifier: BSD-3-Clause
 *
 * Copyright (c) 1982, 1986, 1993, 1994, 1995
 *	The Regents of the University of California.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)tcp_var.h	8.4 (Berkeley) 5/24/95
 * $FreeBSD$
 */

#ifndef _NETINET_TCP_VAR_H_
#define _NETINET_TCP_VAR_H_

#include <netinet/tcp.h>
#include <netinet/tcp_fsm.h>

#ifdef _KERNEL
#include <net/vnet.h>
#include <sys/mbuf.h>
#endif

#define TCP_END_BYTE_INFO 8	/* Bytes that make up the "end information array" */
/* Types of ending byte info */
#define TCP_EI_EMPTY_SLOT		0
#define TCP_EI_STATUS_CLIENT_FIN	0x1
#define TCP_EI_STATUS_CLIENT_RST	0x2
#define TCP_EI_STATUS_SERVER_FIN	0x3
#define TCP_EI_STATUS_SERVER_RST	0x4
#define TCP_EI_STATUS_RETRAN		0x5
#define TCP_EI_STATUS_PROGRESS		0x6
#define TCP_EI_STATUS_PERSIST_MAX	0x7
#define TCP_EI_STATUS_KEEP_MAX		0x8
#define TCP_EI_STATUS_DATA_A_CLOSE	0x9
#define TCP_EI_STATUS_RST_IN_FRONT	0xa
#define TCP_EI_STATUS_2MSL		0xb
#define TCP_EI_STATUS_MAX_VALUE		0xb

/************************************************/
/* Status bits we track to ensure no duplicates.
 * The bits here are not used by the code, only
 * for human representation.  To check a bit, we
 * take 1 and shift it left by (value - 1), for
 * status values 0x1 through 0xb.
 */
/************************************************/
#define TCP_EI_BITS_CLIENT_FIN	0x001
#define TCP_EI_BITS_CLIENT_RST	0x002
#define TCP_EI_BITS_SERVER_FIN	0x004
#define TCP_EI_BITS_SERVER_RST	0x008
#define TCP_EI_BITS_RETRAN	0x010
#define TCP_EI_BITS_PROGRESS	0x020
#define TCP_EI_BITS_PRESIST_MAX	0x040
#define TCP_EI_BITS_KEEP_MAX	0x080
#define TCP_EI_BITS_DATA_A_CLO	0x100
#define TCP_EI_BITS_RST_IN_FR	0x200	/* a front state reset */
#define TCP_EI_BITS_2MS_TIMER	0x400	/* 2 MSL timer expired */

#if defined(_KERNEL) || defined(_WANT_TCPCB)
/* TCP segment queue entry */
struct tseg_qent {
	TAILQ_ENTRY(tseg_qent) tqe_q;
	struct mbuf	*tqe_m;		/* mbuf contains packet */
	struct mbuf	*tqe_last;	/* last mbuf in chain */
	tcp_seq		tqe_start;	/* TCP sequence number start */
	int		tqe_len;	/* TCP segment data length */
	uint32_t	tqe_flags;	/* The flags from th->th_flags */
	uint32_t	tqe_mbuf_cnt;	/* Count of mbuf overhead */
};
TAILQ_HEAD(tsegqe_head, tseg_qent);

struct sackblk {
	tcp_seq start;		/* start seq no. of sack block */
	tcp_seq end;		/* end seq no. */
};

struct sackhole {
	tcp_seq start;		/* start seq no. of hole */
	tcp_seq end;		/* end seq no. */
	tcp_seq rxmit;		/* next seq. no in hole to be retransmitted */
	TAILQ_ENTRY(sackhole) scblink;	/* scoreboard linkage */
};

struct sackhint {
	struct sackhole	*nexthole;
	int32_t		sack_bytes_rexmit;
	tcp_seq		last_sack_ack;	/* Most recent/largest sacked ack */

	int32_t		delivered_data;	/* Newly acked data from last SACK */

	int32_t		sacked_bytes;	/* Total sacked bytes reported by the
					 * receiver via sack option
					 */
	uint32_t	recover_fs;	/* Flight Size at the start of Loss recovery */
	uint32_t	prr_delivered;	/* Total bytes delivered using PRR */
	uint32_t	prr_out;	/* Bytes sent during IN_RECOVERY */
};

#define SEGQ_EMPTY(tp) TAILQ_EMPTY(&(tp)->t_segq)

STAILQ_HEAD(tcp_log_stailq, tcp_log_mem);

/*
 * TCP control block, one per TCP connection; fields:
 * Organized for 64 byte cacheline efficiency based
 * on common tcp_input/tcp_output processing.
 */
struct tcpcb {
	/* Cache line 1 */
	struct inpcb *t_inpcb;	/* back pointer to internet pcb */
	struct tcp_function_block *t_fb;/* TCP function call block */
	void	*t_fb_ptr;	/* Pointer to t_fb specific data */
	uint32_t t_maxseg:24,	/* maximum segment size */
		t_logstate:8;	/* State of "black box" logging */
	uint32_t t_port:16,	/* Tunneling (over udp) port */
		t_state:4,	/* state of this connection */
		t_idle_reduce : 1,
		t_delayed_ack: 7,	/* Delayed ack variable */
		t_fin_is_rst: 1,	/* Are fin's treated as resets */
		t_log_state_set: 1,
		bits_spare : 2;
	u_int	t_flags;
	tcp_seq	snd_una;	/* sent but unacknowledged */
|
Improved connection establishment performance by doing local port lookups via
a hashed port list. In the new scheme, in_pcblookup() goes away and is
replaced by a new routine, in_pcblookup_local() for doing the local port
check. Note that this implementation is space inefficient in that the PCB
struct is now too large to fit into 128 bytes. I might deal with this in the
future by using the new zone allocator, but I wanted these changes to be
extensively tested in their current form first.
Also:
1) Fixed off-by-one errors in the port lookup loops in in_pcbbind().
2) Got rid of some unneeded rehashing. Adding a new routine, in_pcbinshash()
to do the initialial hash insertion.
3) Renamed in_pcblookuphash() to in_pcblookup_hash() for easier readability.
4) Added a new routine, in_pcbremlists() to remove the PCB from the various
hash lists.
5) Added/deleted comments where appropriate.
6) Removed unnecessary splnet() locking. In general, the PCB functions should
be called at splnet()...there are unfortunately a few exceptions, however.
7) Reorganized a few structs for better cache line behavior.
8) Killed my TCP_ACK_HACK kludge. It may come back in a different form in
the future, however.
These changes have been tested on wcarchive for more than a month. In tests
done here, connection establishment overhead is reduced by more than 50
times, thus getting rid of one of the major networking scalability problems.
Still to do: make tcp_fastimo/tcp_slowtimo scale well for systems with a
large number of connections. tcp_fastimo is easy; tcp_slowtimo is difficult.
WARNING: Anything that knows about inpcb and tcpcb structs will have to be
recompiled; at the very least, this includes netstat(1).
1998-01-27 09:15:13 +00:00
|
|
|
tcp_seq snd_max; /* highest sequence number sent;
|
|
|
|
* used to recognize retransmits
|
|
|
|
*/
|
1994-05-24 10:09:53 +00:00
|
|
|
tcp_seq snd_nxt; /* send next */
|
|
|
|
tcp_seq snd_up; /* send urgent pointer */
|
2018-04-26 21:41:16 +00:00
|
|
|
uint32_t snd_wnd; /* send window */
|
|
|
|
uint32_t snd_cwnd; /* congestion-controlled window */
|
2018-06-07 18:18:13 +00:00
|
|
|
uint32_t t_peakrate_thr; /* pre-calculated peak rate threshold */
|
2018-04-26 21:41:16 +00:00
|
|
|
/* Cache line 2 */
|
|
|
|
u_int32_t ts_offset; /* our timestamp offset */
|
|
|
|
u_int32_t rfbuf_ts; /* recv buffer autoscaling timestamp */
|
|
|
|
int rcv_numsacks; /* # distinct sack blks present */
|
|
|
|
u_int t_tsomax; /* TSO total burst length limit in bytes */
|
|
|
|
u_int t_tsomaxsegcount; /* TSO maximum segment count */
|
|
|
|
u_int t_tsomaxsegsize; /* TSO maximum segment size in bytes */
|
Improved connection establishment performance by doing local port lookups via
a hashed port list. In the new scheme, in_pcblookup() goes away and is
replaced by a new routine, in_pcblookup_local() for doing the local port
check. Note that this implementation is space inefficient in that the PCB
struct is now too large to fit into 128 bytes. I might deal with this in the
future by using the new zone allocator, but I wanted these changes to be
extensively tested in their current form first.
Also:
1) Fixed off-by-one errors in the port lookup loops in in_pcbbind().
2) Got rid of some unneeded rehashing. Adding a new routine, in_pcbinshash()
to do the initialial hash insertion.
3) Renamed in_pcblookuphash() to in_pcblookup_hash() for easier readability.
4) Added a new routine, in_pcbremlists() to remove the PCB from the various
hash lists.
5) Added/deleted comments where appropriate.
6) Removed unnecessary splnet() locking. In general, the PCB functions should
be called at splnet()...there are unfortunately a few exceptions, however.
7) Reorganized a few structs for better cache line behavior.
8) Killed my TCP_ACK_HACK kludge. It may come back in a different form in
the future, however.
These changes have been tested on wcarchive for more than a month. In tests
done here, connection establishment overhead is reduced by more than 50
times, thus getting rid of one of the major networking scalability problems.
Still to do: make tcp_fastimo/tcp_slowtimo scale well for systems with a
large number of connections. tcp_fastimo is easy; tcp_slowtimo is difficult.
WARNING: Anything that knows about inpcb and tcpcb structs will have to be
recompiled; at the very least, this includes netstat(1).
1998-01-27 09:15:13 +00:00
|
|
|
tcp_seq rcv_nxt; /* receive next */
	tcp_seq	rcv_adv;	/* advertised window */
	uint32_t rcv_wnd;	/* receive window */
	u_int	t_flags2;	/* More tcpcb flags storage */
	int	t_srtt;		/* smoothed round-trip time */
	int	t_rttvar;	/* variance in round-trip time */
	u_int32_t ts_recent;	/* timestamp echo data */
	u_char	snd_scale;	/* window scaling for send window */
	u_char	rcv_scale;	/* window scaling for recv window */
	u_char	snd_limited;	/* segments limited transmitted */
	u_char	request_r_scale; /* pending window scaling */
	tcp_seq	last_ack_sent;
	u_int	t_rcvtime;	/* inactivity time */
	/* Cache line 3 */
tcp_seq rcv_up; /* receive urgent pointer */
	int	t_segqlen;	/* segment reassembly queue length */
	uint32_t t_segqmbuflen;	/* Count of mbuf bytes on all entries */
	struct	tsegqe_head t_segq;	/* segment reassembly queue */
	struct mbuf	*t_in_pkt;
	struct mbuf	*t_tail_pkt;
	struct tcp_timer *t_timers;	/* All the TCP timers in one struct */
	struct	vnet *t_vnet;	/* back pointer to parent vnet */
	uint32_t snd_ssthresh;	/* snd_cwnd size threshold for
				 * slow start exponential to
				 * linear switch
				 */
	tcp_seq	snd_wl1;	/* window update seg seq number */
	/* Cache line 4 */
	tcp_seq	snd_wl2;	/* window update seg ack number */

	tcp_seq	irs;		/* initial receive sequence number */
	tcp_seq	iss;		/* initial send sequence number */
	u_int	t_acktime;	/* RACK and BBR incoming new data was acked */
	u_int	t_sndtime;	/* time last data was sent */
	u_int	ts_recent_age;	/* when last updated */
	tcp_seq	snd_recover;	/* for use in NewReno Fast Recovery */
	uint16_t cl4_spare;	/* Spare to adjust CL 4 */
	char	t_oobflags;	/* have some */
	char	t_iobc;		/* input character */
	int	t_rxtcur;	/* current retransmit value (ticks) */

int t_rxtshift; /* log(2) of rexmt exp. backoff */
	u_int	t_rtttime;	/* RTT measurement start time */

tcp_seq t_rtseq; /* sequence number being timed */
	u_int	t_starttime;	/* time connection was established */
	u_int	t_fbyte_in;	/* ticks time when first byte queued in */
	u_int	t_fbyte_out;	/* ticks time when first byte queued out */

u_int t_pmtud_saved_maxseg; /* pre-blackhole MSS */
	int	t_blackhole_enter; /* when to enter blackhole detection */
	int	t_blackhole_exit; /* when to exit blackhole detection */
	u_int	t_rttmin;	/* minimum rtt allowed */

u_int t_rttbest; /* best rtt we've seen */
int t_softerror; /* possible error not yet reported */
	uint32_t max_sndwnd;	/* largest window peer has offered */
	/* Cache line 5 */
	uint32_t snd_cwnd_prev;	/* cwnd prior to retransmit */
	uint32_t snd_ssthresh_prev; /* ssthresh prior to retransmit */
	tcp_seq	snd_recover_prev; /* snd_recover prior to retransmit */
	int	t_sndzerowin;	/* zero-window updates sent */
	u_long	t_rttupdated;	/* number of times rtt sampled */
	int	snd_numholes;	/* number of holes seen by sender */
	u_int	t_badrxtwin;	/* window for retransmit recovery */
	TAILQ_HEAD(sackhole_head, sackhole) snd_holes;
				/* SACK scoreboard (sorted) */
	tcp_seq	snd_fack;	/* last seq number(+1) sack'd by rcv'r*/
	struct sackblk sackblks[MAX_SACK_BLKS]; /* seq nos. of sack blocks */
	struct sackhint	sackhint;	/* SACK scoreboard hint */
|
2006-02-16 19:38:07 +00:00
|
|
|
	int	t_rttlow;		/* smallest observed RTT */
	int	rfbuf_cnt;		/* recv buffer autoscaling byte count */
	struct toedev	*tod;		/* toedev handling this connection */
	int	t_sndrexmitpack;	/* retransmit packets sent */
	int	t_rcvoopack;		/* out-of-order packets received */
	void	*t_toe;			/* TOE pcb pointer */
	struct cc_algo	*cc_algo;	/* congestion control algorithm */
	struct cc_var	*ccv;		/* congestion control specific vars */
	struct osd	*osd;		/* storage for Khelp module data */
	int	t_bytes_acked;		/* # bytes acked during current RTT */
	u_int	t_maxunacktime;
	u_int	t_keepinit;		/* time to establish connection */
	u_int	t_keepidle;		/* time before keepalive probes begin */
	u_int	t_keepintvl;		/* interval between keepalives */
	u_int	t_keepcnt;		/* number of keepalives before close */
	int	t_dupacks;		/* consecutive dup acks recd */
	int	t_lognum;		/* Number of log entries */
	int	t_loglimit;		/* Maximum number of log entries */
	int64_t	t_pacing_rate;		/* bytes / sec, -1 => unlimited */
	struct tcp_log_stailq t_logs;	/* Log buffer */
	struct tcp_log_id_node *t_lin;
	struct tcp_log_id_bucket *t_lib;
	const char *t_output_caller;	/* Function that called tcp_output */
	struct statsblob *t_stats;	/* Per-connection stats */
	uint32_t t_logsn;		/* Log "serial number" */
	uint32_t gput_ts;		/* Time goodput measurement started */
	tcp_seq	gput_seq;		/* Outbound measurement seq */
	tcp_seq	gput_ack;		/* Inbound measurement ack */
	int32_t	t_stats_gput_prev;	/* XXXLAS: Prev gput measurement */
	uint8_t	t_tfo_client_cookie_len; /* TCP Fast Open client cookie length */
	uint32_t t_end_info_status;	/* Status flag of end info */
	unsigned int *t_tfo_pending;	/* TCP Fast Open server pending counter */
	union {
		uint8_t	client[TCP_FASTOPEN_MAX_COOKIE_LEN];
		uint64_t server;
	} t_tfo_cookie;			/* TCP Fast Open cookie to send */
	union {
		uint8_t	t_end_info_bytes[TCP_END_BYTE_INFO];
		uint64_t t_end_info;
	};
/*
 * Commit note (2017-03-21): Hide struct inpcb, struct tcpcb from the
 * userland.  This is a painful change, but it is needed.  On the one hand,
 * we avoid modifying them, and this slows down some ideas; on the other
 * hand, we still eventually modify them, and tools like netstat(1) never
 * work on the next version of FreeBSD.  We maintain a ton of spares in
 * them, and we already got some ifdef hell at the end of tcpcb.
 * Details:
 * - Hide struct inpcb, struct tcpcb under _KERNEL || _WANT_FOO.
 * - Make struct xinpcb, struct xtcpcb pure API structures, not including
 *   kernel structures inpcb and tcpcb inside.  Export into these
 *   structures the fields from inpcb and tcpcb that are known to be used,
 *   and put there a ton of spare space.
 * - Make kernel and userland utilities compilable after these changes.
 * - Bump __FreeBSD_version.
 * Reviewed by:	rrs, gnn
 * Differential Revision:	D10018
 */
#ifdef TCPPCAP
/*
 * Commit note (2015-10-14): There are times when it would be really nice
 * to have a record of the last few packets and/or state transitions from
 * each TCP socket.  That would help with narrowing down certain problems
 * we see in the field that are hard to reproduce without understanding
 * the history of how we got into a certain state.  This change provides
 * just that: it saves copies of the last N packets in a list in the
 * tcpcb, and frees the list when the tcpcb is destroyed.  This is likely
 * more performance-friendly than saving copies of the tcpcb, and with the
 * packets you should be able to reverse-engineer what happened to it.
 *
 * To enable the feature, compile a kernel with the TCPPCAP option.  Even
 * then, the feature defaults to being deactivated; activate it by setting
 * a positive value for the number of captured packets, either globally or
 * per socket (via a setsockopt call).  There is no way to get the packets
 * out of the kernel other than using kmem or getting a coredump, which
 * helps with the legal/privacy concerns around such a feature; a future
 * effort could export them in PCAP format.  The main performance concern
 * is the number of mbufs used on systems with many sockets: saving five
 * packets per direction per socket across 3,000 sockets consumes at least
 * 30,000 mbufs just to keep these packets, so the number of clusters (not
 * mbufs) that can be used for this feature is limited.  Testing at low
 * scale showed no mbuf leaks and unchanged peak mbuf usage with and
 * without the feature.
 * Submitted by:	Jonathan Looney <jlooney at juniper dot net>
 * Reviewed by:	gnn, hiren
 * Differential Revision:	D3100
 */
	struct mbufq t_inpkts;		/* List of saved input packets. */
	struct mbufq t_outpkts;		/* List of saved output packets. */
#endif
};
#endif /* _KERNEL || _WANT_TCPCB */

#ifdef _KERNEL
struct tcptemp {
	u_char	tt_ipgen[40]; /* the size must be of max ip header, now IPv6 */
	struct	tcphdr tt_t;
};

/* Minimum map entries limit value, if set */
#define TCP_MIN_MAP_ENTRIES_LIMIT	128

/*
 * TODO: We still need to brave plowing in to tcp_input() and the
 * pru_usrreq() block.  Right now these go to the old standards, which are
 * somewhat OK, but in the long term they may need to be changed.  If we
 * do tackle tcp_input(), then we need to get rid of the tcp_do_segment()
 * function below.
 */
/* Flags for tcp functions */
#define	TCP_FUNC_BEING_REMOVED	0x01	/* Can no longer be referenced */

/*
 * If defining the optional tcp_timers, in the tfb_tcp_timer_stop call
 * you must use the callout_async_drain() function with the
 * tcp_timer_discard callback.  You should check the return of
 * callout_async_drain() and if 0 increment tt_draincnt.  Since the timer
 * sub-system does not know your callbacks, you must provide a stop_all
 * function that loops through and calls tcp_timer_stop() with each of
 * your defined timers.
 *
 * Adding a tfb_tcp_handoff_ok function allows the socket option that
 * changes stacks to query you even if the connection is in a later
 * stage.  You return 0 to say you can take over and run your stack, or
 * non-zero (an error number) to say you can't.  If the function is
 * undefined, the stack can only be changed in the early states (before
 * connect or listen).
 *
 * tfb_tcp_fb_fini is changed to add a flag to tell the old stack whether
 * the tcb is being destroyed or not.  A one in the flag means the TCB is
 * being destroyed; a zero indicates it is transitioning to another stack
 * (via socket option).
 */
struct tcp_function_block {
	char	tfb_tcp_block_name[TCP_FUNCTION_NAME_LEN_MAX];
	int	(*tfb_tcp_output)(struct tcpcb *);
	int	(*tfb_tcp_output_wtime)(struct tcpcb *, const struct timeval *);
	void	(*tfb_tcp_do_segment)(struct mbuf *, struct tcphdr *,
		    struct socket *, struct tcpcb *, int, int, uint8_t);
	int	(*tfb_do_queued_segments)(struct socket *, struct tcpcb *, int);
	int	(*tfb_do_segment_nounlock)(struct mbuf *, struct tcphdr *,
		    struct socket *, struct tcpcb *, int, int, uint8_t,
		    int, struct timeval *);
	void	(*tfb_tcp_hpts_do_segment)(struct mbuf *, struct tcphdr *,
		    struct socket *, struct tcpcb *, int, int, uint8_t,
		    int, struct timeval *);
	int	(*tfb_tcp_ctloutput)(struct socket *so, struct sockopt *sopt,
		    struct inpcb *inp, struct tcpcb *tp);
	/* Optional memory allocation/free routine */
	int	(*tfb_tcp_fb_init)(struct tcpcb *);
	void	(*tfb_tcp_fb_fini)(struct tcpcb *, int);
	/* Optional timers, must define all if you define one */
	int	(*tfb_tcp_timer_stop_all)(struct tcpcb *);
	void	(*tfb_tcp_timer_activate)(struct tcpcb *, uint32_t, u_int);
	int	(*tfb_tcp_timer_active)(struct tcpcb *, uint32_t);
	void	(*tfb_tcp_timer_stop)(struct tcpcb *, uint32_t);
	void	(*tfb_tcp_rexmit_tmr)(struct tcpcb *);
	int	(*tfb_tcp_handoff_ok)(struct tcpcb *);
	void	(*tfb_tcp_mtu_chg)(struct tcpcb *);
	int	(*tfb_pru_options)(struct tcpcb *, int);
	volatile uint32_t tfb_refcnt;
	uint32_t tfb_flags;
	uint8_t	tfb_id;
};

struct tcp_function {
	TAILQ_ENTRY(tcp_function) tf_next;
	char	tf_name[TCP_FUNCTION_NAME_LEN_MAX];
	struct	tcp_function_block *tf_fb;
};

TAILQ_HEAD(tcp_funchead, tcp_function);
#endif /* _KERNEL */

/*
 * Flags and utility macros for the t_flags field.
 */
#define	TF_ACKNOW	0x00000001	/* ack peer immediately */
#define	TF_DELACK	0x00000002	/* ack, but try to delay it */
#define	TF_NODELAY	0x00000004	/* don't delay packets to coalesce */
#define	TF_NOOPT	0x00000008	/* don't use tcp options */
#define	TF_SENTFIN	0x00000010	/* have sent FIN */
#define	TF_REQ_SCALE	0x00000020	/* have/will request window scaling */
#define	TF_RCVD_SCALE	0x00000040	/* other side has requested scaling */
#define	TF_REQ_TSTMP	0x00000080	/* have/will request timestamps */
#define	TF_RCVD_TSTMP	0x00000100	/* a timestamp was received in SYN */
#define	TF_SACK_PERMIT	0x00000200	/* other side said I could SACK */
#define	TF_NEEDSYN	0x00000400	/* send SYN (implicit state) */
#define	TF_NEEDFIN	0x00000800	/* send FIN (implicit state) */
#define	TF_NOPUSH	0x00001000	/* don't push */
#define	TF_PREVVALID	0x00002000	/* saved values for bad rxmit valid */
#define	TF_WAKESOR	0x00004000	/* wake up receive socket */
#define	TF_GPUTINPROG	0x00008000	/* Goodput measurement in progress */
#define	TF_MORETOCOME	0x00010000	/* More data to be appended to sock */
#define	TF_LQ_OVERFLOW	0x00020000	/* listen queue overflow */
#define	TF_LASTIDLE	0x00040000	/* connection was previously idle */
#define	TF_RXWIN0SENT	0x00080000	/* sent a receiver win 0 in response */
#define	TF_FASTRECOVERY	0x00100000	/* in NewReno Fast Recovery */
#define	TF_WASFRECOVERY	0x00200000	/* was in NewReno Fast Recovery */
#define	TF_SIGNATURE	0x00400000	/* require MD5 digests (RFC2385) */
#define	TF_FORCEDATA	0x00800000	/* force out a byte */
#define	TF_TSO		0x01000000	/* TSO enabled on this connection */
#define	TF_TOE		0x02000000	/* this connection is offloaded */
#define	TF_WAKESOW	0x04000000	/* wake up send socket */
#define	TF_UNUSED1	0x08000000	/* unused */
#define	TF_UNUSED2	0x10000000	/* unused */
#define	TF_CONGRECOVERY	0x20000000	/* congestion recovery mode */
#define	TF_WASCRECOVERY	0x40000000	/* was in congestion recovery */
#define	TF_FASTOPEN	0x80000000	/* TCP Fast Open indication */

#define	IN_FASTRECOVERY(t_flags)	(t_flags & TF_FASTRECOVERY)
#define	ENTER_FASTRECOVERY(t_flags)	t_flags |= TF_FASTRECOVERY
#define	EXIT_FASTRECOVERY(t_flags)	t_flags &= ~TF_FASTRECOVERY

#define	IN_CONGRECOVERY(t_flags)	(t_flags & TF_CONGRECOVERY)
#define	ENTER_CONGRECOVERY(t_flags)	t_flags |= TF_CONGRECOVERY
#define	EXIT_CONGRECOVERY(t_flags)	t_flags &= ~TF_CONGRECOVERY

#define	IN_RECOVERY(t_flags) (t_flags & (TF_CONGRECOVERY | TF_FASTRECOVERY))
#define	ENTER_RECOVERY(t_flags) t_flags |= (TF_CONGRECOVERY | TF_FASTRECOVERY)
#define	EXIT_RECOVERY(t_flags) t_flags &= ~(TF_CONGRECOVERY | TF_FASTRECOVERY)

#if defined(_KERNEL) && !defined(TCP_RFC7413)
#define	IS_FASTOPEN(t_flags)		(false)
#else
#define	IS_FASTOPEN(t_flags)		(t_flags & TF_FASTOPEN)
#endif

#define	BYTES_THIS_ACK(tp, th)	(th->th_ack - tp->snd_una)

/*
 * Flags for the t_oobflags field.
 */
#define	TCPOOB_HAVEDATA	0x01
#define	TCPOOB_HADDATA	0x02

/*
 * Flags for the extended TCP flags field, t_flags2.
 */
#define	TF2_PLPMTU_BLACKHOLE	0x00000001 /* Possible PLPMTUD Black Hole. */
#define	TF2_PLPMTU_PMTUD	0x00000002 /* Allowed to attempt PLPMTUD. */
#define	TF2_PLPMTU_MAXSEGSNT	0x00000004 /* Last seg sent was full seg. */
#define	TF2_LOG_AUTO		0x00000008 /* Session is auto-logging. */
#define	TF2_DROP_AF_DATA	0x00000010 /* Drop after all data ack'd */
#define	TF2_ECN_PERMIT		0x00000020 /* connection ECN-ready */
#define	TF2_ECN_SND_CWR		0x00000040 /* ECN CWR in queue */
#define	TF2_ECN_SND_ECE		0x00000080 /* ECN ECE in queue */
#define	TF2_ACE_PERMIT		0x00000100 /* Accurate ECN mode */
#define	TF2_FBYTES_COMPLETE	0x00000400 /* We have first bytes in and out */

/*
 * Structure to hold TCP options that are only used during segment
 * processing (in tcp_input), but not held in the tcpcb.
 * It's basically used to reduce the number of parameters
 * to tcp_dooptions and tcp_addoptions.
 * The binary order of the to_flags is relevant for packing of the
 * options in tcp_addoptions.
 */
struct tcpopt {
	u_int32_t	to_flags;	/* which options are present */
#define	TOF_MSS		0x0001		/* maximum segment size */
#define	TOF_SCALE	0x0002		/* window scaling */
#define	TOF_SACKPERM	0x0004		/* SACK permitted */
#define	TOF_TS		0x0010		/* timestamp */
#define	TOF_SIGNATURE	0x0040		/* TCP-MD5 signature option (RFC2385) */
#define	TOF_SACK	0x0080		/* Peer sent SACK option */
#define	TOF_FASTOPEN	0x0100		/* TCP Fast Open (TFO) cookie */
#define	TOF_MAXOPT	0x0200
	u_int32_t	to_tsval;	/* new timestamp */
	u_int32_t	to_tsecr;	/* reflected timestamp */
	u_char		*to_sacks;	/* pointer to the first SACK blocks */
	u_char		*to_signature;	/* pointer to the TCP-MD5 signature */
	u_int8_t	*to_tfo_cookie;	/* pointer to the TFO cookie */
	u_int16_t	to_mss;		/* maximum segment size */
	u_int8_t	to_wscale;	/* window scaling */
	u_int8_t	to_nsacks;	/* number of SACK blocks */
	u_int8_t	to_tfo_len;	/* TFO cookie length */
	u_int32_t	to_spare;	/* UTO */
};

/*
 * Flags for tcp_dooptions.
 */
#define	TO_SYN		0x01		/* parse SYN-only options */

struct hc_metrics_lite {	/* must stay in sync with hc_metrics */
	uint32_t	rmx_mtu;	/* MTU for this path */
	uint32_t	rmx_ssthresh;	/* outbound gateway buffer limit */
	uint32_t	rmx_rtt;	/* estimated round trip time */
	uint32_t	rmx_rttvar;	/* estimated rtt variance */
	uint32_t	rmx_cwnd;	/* congestion window */
	uint32_t	rmx_sendpipe;	/* outbound delay-bandwidth product */
	uint32_t	rmx_recvpipe;	/* inbound delay-bandwidth product */
};

/*
 * Used by tcp_maxmtu() to communicate interface specific features
 * and limits at the time of connection setup.
 */
struct tcp_ifcap {
	int	ifcap;
	u_int	tsomax;
	u_int	tsomaxsegcount;
	u_int	tsomaxsegsize;
};

#ifndef _NETINET_IN_PCB_H_
struct in_conninfo;
#endif /* _NETINET_IN_PCB_H_ */

struct tcptw {
	struct inpcb	*tw_inpcb;	/* XXX back pointer to internet pcb */
	tcp_seq		snd_nxt;
	tcp_seq		rcv_nxt;
	tcp_seq		iss;
	tcp_seq		irs;
	u_short		last_win;	/* cached window value */
	short		tw_so_options;	/* copy of so_options */
	struct ucred	*tw_cred;	/* user credentials */
	u_int32_t	t_recent;
	u_int32_t	ts_offset;	/* our timestamp offset */
	u_int		t_starttime;
	int		tw_time;
	TAILQ_ENTRY(tcptw) tw_2msl;
	void		*tw_pspare;	/* TCP_SIGNATURE */
	u_int		*tw_spare;	/* TCP_SIGNATURE */
};

#define	intotcpcb(ip)	((struct tcpcb *)(ip)->inp_ppcb)
#define	intotw(ip)	((struct tcptw *)(ip)->inp_ppcb)
#define	sototcpcb(so)	(intotcpcb(sotoinpcb(so)))

/*
 * The smoothed round-trip time and estimated variance
 * are stored as fixed point numbers scaled by the values below.
 * For convenience, these scales are also used in smoothing the average
 * (smoothed = (1/scale)sample + ((scale-1)/scale)smoothed).
 * With these scales, srtt has 3 bits to the right of the binary point,
 * and thus an "ALPHA" of 0.875.  rttvar has 2 bits to the right of the
 * binary point, and is smoothed with an ALPHA of 0.75.
 */
#define	TCP_RTT_SCALE		32	/* multiplier for srtt; 3 bits frac. */
#define	TCP_RTT_SHIFT		5	/* shift for srtt; 3 bits frac. */
#define	TCP_RTTVAR_SCALE	16	/* multiplier for rttvar; 2 bits */
#define	TCP_RTTVAR_SHIFT	4	/* shift for rttvar; 2 bits */
#define	TCP_DELTA_SHIFT		2	/* see tcp_input.c */

/*
 * The initial retransmission should happen at rtt + 4 * rttvar.
 * Because of the way we do the smoothing, srtt and rttvar
 * will each average +1/2 tick of bias.  When we compute
 * the retransmit timer, we want 1/2 tick of rounding and
 * 1 extra tick because of +-1/2 tick uncertainty in the
 * firing of the timer.  The bias will give us exactly the
 * 1.5 tick we need.  But, because the bias is
 * statistical, we have to test that we don't drop below
 * the minimum feasible timer (which is 2 ticks).
 * This version of the macro adapted from a paper by Lawrence
 * Brakmo and Larry Peterson which outlines a problem caused
 * by insufficient precision in the original implementation,
 * which results in inappropriately large RTO values for very
 * fast networks.
 */
#define	TCP_REXMTVAL(tp) \
	max((tp)->t_rttmin, (((tp)->t_srtt >> (TCP_RTT_SHIFT - TCP_DELTA_SHIFT)) \
	  + (tp)->t_rttvar) >> TCP_DELTA_SHIFT)

/*
 * TCP statistics.
 * Many of these should be kept per connection,
 * but that's inconvenient at the moment.
 */
struct tcpstat {
	uint64_t tcps_connattempt;	/* connections initiated */
	uint64_t tcps_accepts;		/* connections accepted */
	uint64_t tcps_connects;		/* connections established */
	uint64_t tcps_drops;		/* connections dropped */
	uint64_t tcps_conndrops;	/* embryonic connections dropped */
	uint64_t tcps_minmssdrops;	/* average minmss too low drops */
	uint64_t tcps_closed;		/* conn. closed (includes drops) */
	uint64_t tcps_segstimed;	/* segs where we tried to get rtt */
	uint64_t tcps_rttupdated;	/* times we succeeded */
	uint64_t tcps_delack;		/* delayed acks sent */
	uint64_t tcps_timeoutdrop;	/* conn. dropped in rxmt timeout */
	uint64_t tcps_rexmttimeo;	/* retransmit timeouts */
	uint64_t tcps_persisttimeo;	/* persist timeouts */
	uint64_t tcps_keeptimeo;	/* keepalive timeouts */
	uint64_t tcps_keepprobe;	/* keepalive probes sent */
	uint64_t tcps_keepdrops;	/* connections dropped in keepalive */

	uint64_t tcps_sndtotal;		/* total packets sent */
	uint64_t tcps_sndpack;		/* data packets sent */
	uint64_t tcps_sndbyte;		/* data bytes sent */
	uint64_t tcps_sndrexmitpack;	/* data packets retransmitted */
	uint64_t tcps_sndrexmitbyte;	/* data bytes retransmitted */
	uint64_t tcps_sndrexmitbad;	/* unnecessary packet retransmissions */
	uint64_t tcps_sndacks;		/* ack-only packets sent */
	uint64_t tcps_sndprobe;		/* window probes sent */
	uint64_t tcps_sndurg;		/* packets sent with URG only */
	uint64_t tcps_sndwinup;		/* window update-only packets sent */
	uint64_t tcps_sndctrl;		/* control (SYN|FIN|RST) packets sent */

	uint64_t tcps_rcvtotal;		/* total packets received */
	uint64_t tcps_rcvpack;		/* packets received in sequence */
	uint64_t tcps_rcvbyte;		/* bytes received in sequence */
	uint64_t tcps_rcvbadsum;	/* packets received with cksum errs */
	uint64_t tcps_rcvbadoff;	/* packets received with bad offset */
	uint64_t tcps_rcvreassfull;	/* packets dropped for no reass space */
	uint64_t tcps_rcvshort;		/* packets received too short */
	uint64_t tcps_rcvduppack;	/* duplicate-only packets received */
	uint64_t tcps_rcvdupbyte;	/* duplicate-only bytes received */
	uint64_t tcps_rcvpartduppack;	/* packets with some duplicate data */
	uint64_t tcps_rcvpartdupbyte;	/* dup. bytes in part-dup. packets */
	uint64_t tcps_rcvoopack;	/* out-of-order packets received */
	uint64_t tcps_rcvoobyte;	/* out-of-order bytes received */
	uint64_t tcps_rcvpackafterwin;	/* packets with data after window */
	uint64_t tcps_rcvbyteafterwin;	/* bytes rcvd after window */
	uint64_t tcps_rcvafterclose;	/* packets rcvd after "close" */
	uint64_t tcps_rcvwinprobe;	/* rcvd window probe packets */
	uint64_t tcps_rcvdupack;	/* rcvd duplicate acks */
	uint64_t tcps_rcvacktoomuch;	/* rcvd acks for unsent data */
	uint64_t tcps_rcvackpack;	/* rcvd ack packets */
	uint64_t tcps_rcvackbyte;	/* bytes acked by rcvd acks */
	uint64_t tcps_rcvwinupd;	/* rcvd window update packets */
	uint64_t tcps_pawsdrop;		/* segments dropped due to PAWS */
	uint64_t tcps_predack;		/* times hdr predict ok for acks */
	uint64_t tcps_preddat;		/* times hdr predict ok for data pkts */
	uint64_t tcps_pcbcachemiss;
	uint64_t tcps_cachedrtt;	/* times cached RTT in route updated */
	uint64_t tcps_cachedrttvar;	/* times cached rttvar updated */
	uint64_t tcps_cachedssthresh;	/* times cached ssthresh updated */
	uint64_t tcps_usedrtt;		/* times RTT initialized from route */
	uint64_t tcps_usedrttvar;	/* times RTTVAR initialized from rt */
	uint64_t tcps_usedssthresh;	/* times ssthresh initialized from rt*/
	uint64_t tcps_persistdrop;	/* timeout in persist state */
	uint64_t tcps_badsyn;		/* bogus SYN, e.g. premature ACK */
	uint64_t tcps_mturesent;	/* resends due to MTU discovery */
	uint64_t tcps_listendrop;	/* listen queue overflows */
	uint64_t tcps_badrst;		/* ignored RSTs in the window */

	uint64_t tcps_sc_added;		/* entry added to syncache */
	uint64_t tcps_sc_retransmitted;	/* syncache entry was retransmitted */
	uint64_t tcps_sc_dupsyn;	/* duplicate SYN packet */
	uint64_t tcps_sc_dropped;	/* could not reply to packet */
	uint64_t tcps_sc_completed;	/* successful extraction of entry */
	uint64_t tcps_sc_bucketoverflow;/* syncache per-bucket limit hit */
	uint64_t tcps_sc_cacheoverflow;	/* syncache cache limit hit */
	uint64_t tcps_sc_reset;		/* RST removed entry from syncache */
	uint64_t tcps_sc_stale;		/* timed out or listen socket gone */
	uint64_t tcps_sc_aborted;	/* syncache entry aborted */
	uint64_t tcps_sc_badack;	/* removed due to bad ACK */
	uint64_t tcps_sc_unreach;	/* ICMP unreachable received */
	uint64_t tcps_sc_zonefail;	/* zalloc() failed */
	uint64_t tcps_sc_sendcookie;	/* SYN cookie sent */
	uint64_t tcps_sc_recvcookie;	/* SYN cookie received */

	uint64_t tcps_hc_added;		/* entry added to hostcache */
	uint64_t tcps_hc_bucketoverflow;/* hostcache per bucket limit hit */

	uint64_t tcps_finwait2_drops;	/* Drop FIN_WAIT_2 connection after time limit */

	/* SACK related stats */
	uint64_t tcps_sack_recovery_episode; /* SACK recovery episodes */
	uint64_t tcps_sack_rexmits;	/* SACK rexmit segments */
|
|
|
|
uint64_t tcps_sack_rexmit_bytes; /* SACK rexmit bytes */
|
|
|
|
uint64_t tcps_sack_rcv_blocks; /* SACK blocks (options) received */
|
|
|
|
uint64_t tcps_sack_send_blocks; /* SACK blocks (options) sent */
|
|
|
|
uint64_t tcps_sack_sboverflow; /* times scoreboard overflowed */
|
2020-02-12 13:31:36 +00:00
|
|
|
|
2008-07-31 15:10:09 +00:00
|
|
|
/* ECN related stats */
|
2013-04-08 19:57:21 +00:00
|
|
|
uint64_t tcps_ecn_ce; /* ECN Congestion Experienced */
|
|
|
|
uint64_t tcps_ecn_ect0; /* ECN Capable Transport */
|
|
|
|
uint64_t tcps_ecn_ect1; /* ECN Capable Transport */
|
|
|
|
uint64_t tcps_ecn_shs; /* ECN successful handshakes */
|
|
|
|
uint64_t tcps_ecn_rcwnd; /* # times ECN reduced the cwnd */
|
2009-07-12 09:14:28 +00:00
|
|
|
|
2011-04-25 17:13:40 +00:00
|
|
|
/* TCP_SIGNATURE related stats */
|
2013-04-08 19:57:21 +00:00
|
|
|
uint64_t tcps_sig_rcvgoodsig; /* Total matching signature received */
|
|
|
|
uint64_t tcps_sig_rcvbadsig; /* Total bad signature received */
|
2017-02-06 08:49:57 +00:00
|
|
|
uint64_t tcps_sig_err_buildsig; /* Failed to make signature */
|
2013-04-08 19:57:21 +00:00
|
|
|
uint64_t tcps_sig_err_sigopt; /* No signature expected by socket */
|
|
|
|
uint64_t tcps_sig_err_nosigopt; /* No signature provided by segment */
|
2011-04-25 17:13:40 +00:00
|
|
|
|
2017-08-25 19:41:38 +00:00
|
|
|
/* Path MTU Discovery Black Hole Detection related stats */
|
|
|
|
uint64_t tcps_pmtud_blackhole_activated; /* Black Hole Count */
|
|
|
|
uint64_t tcps_pmtud_blackhole_activated_min_mss; /* BH at min MSS Count */
|
|
|
|
uint64_t tcps_pmtud_blackhole_failed; /* Black Hole Failure Count */
|
|
|
|
|
2013-04-08 19:57:21 +00:00
|
|
|
uint64_t _pad[12]; /* 6 UTO, 6 TBD */
|
1994-05-24 10:09:53 +00:00
|
|
|
};

#define	tcps_rcvmemdrop	tcps_rcvreassfull	/* compat */

#ifdef _KERNEL
#define	TI_UNLOCKED	1
#define	TI_RLOCKED	2

#include <sys/counter.h>

VNET_PCPUSTAT_DECLARE(struct tcpstat, tcpstat);	/* tcp statistics */

/*
 * In-kernel consumers can use these accessor macros directly to update
 * stats.
 */
#define	TCPSTAT_ADD(name, val)	\
    VNET_PCPUSTAT_ADD(struct tcpstat, tcpstat, name, (val))
#define	TCPSTAT_INC(name)	TCPSTAT_ADD(name, 1)

/*
 * Kernel module consumers must use this accessor macro.
 */
void	kmod_tcpstat_add(int statnum, int val);
#define	KMOD_TCPSTAT_ADD(name, val)					\
    kmod_tcpstat_add(offsetof(struct tcpstat, name) / sizeof(uint64_t), val)
#define	KMOD_TCPSTAT_INC(name)	KMOD_TCPSTAT_ADD(name, 1)
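
/*
 * Example (a sketch with hypothetical call sites, not part of this header):
 * in-kernel consumers bump the per-CPU statistics directly, e.g.
 *
 *	TCPSTAT_INC(tcps_rcvtotal);
 *	TCPSTAT_ADD(tcps_rcvbyte, tlen);
 *
 * whereas a kernel module must go through the kmod accessor, which maps the
 * field offset to a statistic index:
 *
 *	KMOD_TCPSTAT_INC(tcps_rcvtotal);
 */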

/*
 * Running TCP connection count by state.
 */
VNET_DECLARE(counter_u64_t, tcps_states[TCP_NSTATES]);
#define	V_tcps_states	VNET(tcps_states)
#define	TCPSTATES_INC(state)	counter_u64_add(V_tcps_states[state], 1)
#define	TCPSTATES_DEC(state)	counter_u64_add(V_tcps_states[state], -1)
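
/*
 * Example (a sketch with a hypothetical call site, not part of this header):
 * the per-state counters stay consistent by pairing a decrement of the old
 * state with an increment of the new one at each transition, e.g.
 *
 *	TCPSTATES_DEC(tp->t_state);
 *	tp->t_state = TCPS_ESTABLISHED;
 *	TCPSTATES_INC(TCPS_ESTABLISHED);
 */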

/*
 * TCP specific helper hook point identifiers.
 */
#define	HHOOK_TCP_EST_IN	0
#define	HHOOK_TCP_EST_OUT	1
#define	HHOOK_TCP_LAST		HHOOK_TCP_EST_OUT

struct tcp_hhook_data {
	struct tcpcb	*tp;
	struct tcphdr	*th;
	struct tcpopt	*to;
	uint32_t	len;
	int		tso;
	tcp_seq		curack;
};

#ifdef TCP_HHOOK
void	hhook_run_tcp_est_out(struct tcpcb *tp,
	struct tcphdr *th, struct tcpopt *to,
	uint32_t len, int tso);
#endif
#endif

/*
 * TCB structure exported to user-land via sysctl(3).
 *
 * Fields prefixed with "xt_" are unique to the export structure, and fields
 * with "t_" or other prefixes match corresponding fields of 'struct tcpcb'.
 *
 * Legend:
 * (s) - used by userland utilities in src
 * (p) - used by utilities in ports
 * (3) - is known to be used by third party software not in ports
 * (n) - no known usage
 *
 * Evil hack: declare only if in_pcb.h and sys/socketvar.h have been
 * included.  Not all of our clients do.
 */
#if defined(_NETINET_IN_PCB_H_) && defined(_SYS_SOCKETVAR_H_)
struct xtcpcb {
	size_t		xt_len;		/* length of this structure */
	struct xinpcb	xt_inp;
	char		xt_stack[TCP_FUNCTION_NAME_LEN_MAX];	/* (s) */
	char		xt_logid[TCP_LOG_ID_LEN];	/* (s) */
	char		xt_cc[TCP_CA_NAME_MAX];	/* (s) */
	int64_t		spare64[6];
	int32_t		t_state;		/* (s,p) */
	uint32_t	t_flags;		/* (s,p) */
	int32_t		t_sndzerowin;		/* (s) */
	int32_t		t_sndrexmitpack;	/* (s) */
	int32_t		t_rcvoopack;		/* (s) */
	int32_t		t_rcvtime;		/* (s) */
	int32_t		tt_rexmt;		/* (s) */
	int32_t		tt_persist;		/* (s) */
	int32_t		tt_keep;		/* (s) */
	int32_t		tt_2msl;		/* (s) */
	int32_t		tt_delack;		/* (s) */
	int32_t		t_logstate;		/* (3) */
	uint32_t	t_snd_cwnd;		/* (s) */
	uint32_t	t_snd_ssthresh;		/* (s) */
	uint32_t	t_maxseg;		/* (s) */
	uint32_t	t_rcv_wnd;		/* (s) */
	uint32_t	t_snd_wnd;		/* (s) */
	uint32_t	xt_ecn;			/* (s) */
	int32_t		spare32[26];
} __aligned(8);

#ifdef _KERNEL
void	tcp_inptoxtp(const struct inpcb *, struct xtcpcb *);
#endif
#endif

/*
 * TCP function information (name-to-id mapping, aliases, and refcnt)
 * exported to user-land via sysctl(3).
 */
struct tcp_function_info {
	uint32_t	tfi_refcnt;
	uint8_t		tfi_id;
	char		tfi_name[TCP_FUNCTION_NAME_LEN_MAX];
	char		tfi_alias[TCP_FUNCTION_NAME_LEN_MAX];
};

/*
 * Identifiers for TCP sysctl nodes
 */
#define	TCPCTL_DO_RFC1323	1	/* use RFC-1323 extensions */
#define	TCPCTL_MSSDFLT		3	/* MSS default */
#define	TCPCTL_STATS		4	/* statistics */
#define	TCPCTL_RTTDFLT		5	/* default RTT estimate */
#define	TCPCTL_KEEPIDLE		6	/* keepalive idle timer */
#define	TCPCTL_KEEPINTVL	7	/* interval to send keepalives */
#define	TCPCTL_SENDSPACE	8	/* send buffer space */
#define	TCPCTL_RECVSPACE	9	/* receive buffer space */
#define	TCPCTL_KEEPINIT		10	/* timeout for establishing syn */
#define	TCPCTL_PCBLIST		11	/* list of all outstanding PCBs */
#define	TCPCTL_DELACKTIME	12	/* time before sending delayed ACK */
#define	TCPCTL_V6MSSDFLT	13	/* MSS default for IPv6 */
#define	TCPCTL_SACK		14	/* Selective Acknowledgement,rfc 2018 */
#define	TCPCTL_DROP		15	/* drop tcp connection */
#define	TCPCTL_STATES		16	/* connection counts by TCP state */

#ifdef _KERNEL
#ifdef SYSCTL_DECL
SYSCTL_DECL(_net_inet_tcp);
SYSCTL_DECL(_net_inet_tcp_sack);
MALLOC_DECLARE(M_TCPLOG);
#endif

VNET_DECLARE(int, tcp_log_in_vain);
#define	V_tcp_log_in_vain	VNET(tcp_log_in_vain)

/*
 * Global TCP tunables shared between different stacks.
 * Please keep the list sorted.
 */
VNET_DECLARE(int, drop_synfin);
VNET_DECLARE(int, path_mtu_discovery);
VNET_DECLARE(int, tcp_abc_l_var);
VNET_DECLARE(int, tcp_autorcvbuf_max);
VNET_DECLARE(int, tcp_autosndbuf_inc);
VNET_DECLARE(int, tcp_autosndbuf_max);
VNET_DECLARE(int, tcp_delack_enabled);
VNET_DECLARE(int, tcp_do_autorcvbuf);
VNET_DECLARE(int, tcp_do_autosndbuf);
VNET_DECLARE(int, tcp_do_ecn);
VNET_DECLARE(int, tcp_do_newcwv);
VNET_DECLARE(int, tcp_do_rfc1323);
VNET_DECLARE(int, tcp_tolerate_missing_ts);
VNET_DECLARE(int, tcp_do_rfc3042);
VNET_DECLARE(int, tcp_do_rfc3390);
VNET_DECLARE(int, tcp_do_rfc3465);
VNET_DECLARE(int, tcp_do_rfc6675_pipe);
VNET_DECLARE(int, tcp_do_sack);
VNET_DECLARE(int, tcp_do_tso);
VNET_DECLARE(int, tcp_ecn_maxretries);
VNET_DECLARE(int, tcp_initcwnd_segments);
VNET_DECLARE(int, tcp_insecure_rst);
VNET_DECLARE(int, tcp_insecure_syn);
VNET_DECLARE(uint32_t, tcp_map_entries_limit);
VNET_DECLARE(uint32_t, tcp_map_split_limit);
VNET_DECLARE(int, tcp_minmss);
VNET_DECLARE(int, tcp_mssdflt);
#ifdef STATS
VNET_DECLARE(int, tcp_perconn_stats_dflt_tpl);
VNET_DECLARE(int, tcp_perconn_stats_enable);
#endif /* STATS */
VNET_DECLARE(int, tcp_recvspace);
VNET_DECLARE(int, tcp_sack_globalholes);
VNET_DECLARE(int, tcp_sack_globalmaxholes);
VNET_DECLARE(int, tcp_sack_maxholes);
VNET_DECLARE(int, tcp_sc_rst_sock_fail);
VNET_DECLARE(int, tcp_sendspace);
VNET_DECLARE(struct inpcbhead, tcb);
VNET_DECLARE(struct inpcbinfo, tcbinfo);

#define	V_tcp_do_prr		VNET(tcp_do_prr)
#define	V_tcp_do_prr_conservative VNET(tcp_do_prr_conservative)
#define	V_tcp_do_newcwv		VNET(tcp_do_newcwv)
#define	V_drop_synfin		VNET(drop_synfin)
#define	V_path_mtu_discovery	VNET(path_mtu_discovery)
#define	V_tcb			VNET(tcb)
#define	V_tcbinfo		VNET(tcbinfo)
#define	V_tcp_abc_l_var		VNET(tcp_abc_l_var)
#define	V_tcp_autorcvbuf_max	VNET(tcp_autorcvbuf_max)
#define	V_tcp_autosndbuf_inc	VNET(tcp_autosndbuf_inc)
#define	V_tcp_autosndbuf_max	VNET(tcp_autosndbuf_max)
#define	V_tcp_delack_enabled	VNET(tcp_delack_enabled)
#define	V_tcp_do_autorcvbuf	VNET(tcp_do_autorcvbuf)
#define	V_tcp_do_autosndbuf	VNET(tcp_do_autosndbuf)
#define	V_tcp_do_ecn		VNET(tcp_do_ecn)
#define	V_tcp_do_rfc1323	VNET(tcp_do_rfc1323)
#define	V_tcp_tolerate_missing_ts	VNET(tcp_tolerate_missing_ts)
#define	V_tcp_ts_offset_per_conn	VNET(tcp_ts_offset_per_conn)
#define	V_tcp_do_rfc3042	VNET(tcp_do_rfc3042)
#define	V_tcp_do_rfc3390	VNET(tcp_do_rfc3390)
#define	V_tcp_do_rfc3465	VNET(tcp_do_rfc3465)
#define	V_tcp_do_rfc6675_pipe	VNET(tcp_do_rfc6675_pipe)
#define	V_tcp_do_sack		VNET(tcp_do_sack)
#define	V_tcp_do_tso		VNET(tcp_do_tso)
#define	V_tcp_ecn_maxretries	VNET(tcp_ecn_maxretries)
#define	V_tcp_initcwnd_segments	VNET(tcp_initcwnd_segments)
#define	V_tcp_insecure_rst	VNET(tcp_insecure_rst)
#define	V_tcp_insecure_syn	VNET(tcp_insecure_syn)
#define	V_tcp_map_entries_limit	VNET(tcp_map_entries_limit)
#define	V_tcp_map_split_limit	VNET(tcp_map_split_limit)
#define	V_tcp_minmss		VNET(tcp_minmss)
#define	V_tcp_mssdflt		VNET(tcp_mssdflt)
#ifdef STATS
#define	V_tcp_perconn_stats_dflt_tpl	VNET(tcp_perconn_stats_dflt_tpl)
#define	V_tcp_perconn_stats_enable	VNET(tcp_perconn_stats_enable)
#endif /* STATS */
#define	V_tcp_recvspace		VNET(tcp_recvspace)
#define	V_tcp_sack_globalholes	VNET(tcp_sack_globalholes)
#define	V_tcp_sack_globalmaxholes	VNET(tcp_sack_globalmaxholes)
#define	V_tcp_sack_maxholes	VNET(tcp_sack_maxholes)
#define	V_tcp_sc_rst_sock_fail	VNET(tcp_sc_rst_sock_fail)
#define	V_tcp_sendspace		VNET(tcp_sendspace)
#define	V_tcp_udp_tunneling_overhead	VNET(tcp_udp_tunneling_overhead)
#define	V_tcp_udp_tunneling_port	VNET(tcp_udp_tunneling_port)

#ifdef TCP_HHOOK
VNET_DECLARE(struct hhook_head *, tcp_hhh[HHOOK_TCP_LAST + 1]);
#define	V_tcp_hhh		VNET(tcp_hhh)
#endif

int	 tcp_addoptions(struct tcpopt *, u_char *);
int	 tcp_ccalgounload(struct cc_algo *unload_algo);
struct tcpcb *
	 tcp_close(struct tcpcb *);
void	 tcp_discardcb(struct tcpcb *);
void	 tcp_twstart(struct tcpcb *);
void	 tcp_twclose(struct tcptw *, int);
void	 tcp_ctlinput(int, struct sockaddr *, void *);
int	 tcp_ctloutput(struct socket *, struct sockopt *);
struct tcpcb *
	 tcp_drop(struct tcpcb *, int);
void	 tcp_drain(void);
void	 tcp_init(void);
void	 tcp_fini(void *);
char	*tcp_log_addrs(struct in_conninfo *, struct tcphdr *, void *,
	    const void *);
char	*tcp_log_vain(struct in_conninfo *, struct tcphdr *, void *,
	    const void *);
int	 tcp_reass(struct tcpcb *, struct tcphdr *, tcp_seq *, int *,
	    struct mbuf *);
void	 tcp_reass_global_init(void);
void	 tcp_reass_flush(struct tcpcb *);
void	 tcp_dooptions(struct tcpopt *, u_char *, int, int);
void	 tcp_dropwithreset(struct mbuf *, struct tcphdr *,
	    struct tcpcb *, int, int);
void	 tcp_pulloutofband(struct socket *,
	    struct tcphdr *, struct mbuf *, int);
void	 tcp_xmit_timer(struct tcpcb *, int);
void	 tcp_newreno_partial_ack(struct tcpcb *, struct tcphdr *);
void	 cc_ack_received(struct tcpcb *tp, struct tcphdr *th,
	    uint16_t nsegs, uint16_t type);
void	 cc_conn_init(struct tcpcb *tp);
void	 cc_post_recovery(struct tcpcb *tp, struct tcphdr *th);
void	 cc_ecnpkt_handler(struct tcpcb *tp, struct tcphdr *th, uint8_t iptos);
void	 cc_cong_signal(struct tcpcb *tp, struct tcphdr *th, uint32_t type);
option to their kernel configuration.
Reviewed by: rrs, lstewart, hiren (previous version), sjg (makefiles only)
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D8185
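The idea behind the option can be sketched in a few lines of userspace C: when the macro is defined, each established-state input path performs a runtime check for a registered hook; when it is not, the call site compiles away to nothing. This is only an illustration of the compile-time guard, not the real hhook(9) API; the names `run_est_in_hook` and `est_in_hook` are hypothetical.

```c
#include <assert.h>
#include <stddef.h>

#define TCP_HHOOK		/* comment out to exclude hook support */

static int est_in_events;	/* counts hook invocations (for the demo) */

#ifdef TCP_HHOOK
static void (*est_in_hook)(void);	/* registered by a module */

static void
run_est_in_hook(void)
{
	/* This per-segment NULL check is the overhead the option avoids. */
	if (est_in_hook != NULL)
		est_in_hook();
}
#else
#define	run_est_in_hook()	do { } while (0)	/* compiles away */
#endif

static void
count_event(void)
{
	est_in_events++;
}

static void
tcp_established_input(void)
{
	/* ... segment processing on an ESTABLISHED session ... */
	run_est_in_hook();
}
```

With `TCP_HHOOK` defined, registering `count_event` and then running `tcp_established_input()` bumps the counter once; without it, the hook point costs nothing.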
2016-10-12 02:16:42 +00:00
|
|
|
#ifdef TCP_HHOOK
|
2015-12-16 00:56:45 +00:00
|
|
|
void hhook_run_tcp_est_in(struct tcpcb *tp,
|
|
|
|
struct tcphdr *th, struct tcpopt *to);
|
2016-10-12 02:16:42 +00:00
|
|
|
#endif
|
2015-12-16 00:56:45 +00:00
|
|
|
|
2014-08-08 01:57:15 +00:00
|
|
|
int tcp_input(struct mbuf **, int *, int);
|
2017-04-10 08:19:35 +00:00
|
|
|
int tcp_autorcvbuf(struct mbuf *, struct tcphdr *, struct socket *,
|
|
|
|
struct tcpcb *, int);
|
2020-11-08 18:47:05 +00:00
|
|
|
void tcp_handle_wakeup(struct tcpcb *, struct socket *);
|
2015-12-16 00:56:45 +00:00
|
|
|
void tcp_do_segment(struct mbuf *, struct tcphdr *,
|
2018-07-04 02:47:16 +00:00
|
|
|
struct socket *, struct tcpcb *, int, int, uint8_t);
|
2015-12-16 00:56:45 +00:00
|
|
|
|
|
|
|
int register_tcp_functions(struct tcp_function_block *blk, int wait);
|
2017-06-08 20:41:28 +00:00
|
|
|
int register_tcp_functions_as_names(struct tcp_function_block *blk,
|
|
|
|
int wait, const char *names[], int *num_names);
|
|
|
|
int register_tcp_functions_as_name(struct tcp_function_block *blk,
|
|
|
|
const char *name, int wait);
|
2018-04-19 13:37:59 +00:00
|
|
|
int deregister_tcp_functions(struct tcp_function_block *blk, bool quiesce,
|
|
|
|
bool force);
|
2015-12-16 00:56:45 +00:00
|
|
|
struct tcp_function_block *find_and_ref_tcp_functions(struct tcp_function_set *fs);
|
2018-04-19 13:37:59 +00:00
|
|
|
void tcp_switch_back_to_default(struct tcpcb *tp);
|
|
|
|
struct tcp_function_block *
|
|
|
|
find_and_ref_tcp_fb(struct tcp_function_block *fs);
|
2015-12-16 00:56:45 +00:00
|
|
|
int tcp_default_ctloutput(struct socket *so, struct sockopt *sopt, struct inpcb *inp, struct tcpcb *tp);
|
|
|
|
|
2019-09-06 18:29:48 +00:00
|
|
|
extern counter_u64_t tcp_inp_lro_direct_queue;
|
|
|
|
extern counter_u64_t tcp_inp_lro_wokeup_queue;
|
|
|
|
extern counter_u64_t tcp_inp_lro_compressed;
|
|
|
|
extern counter_u64_t tcp_inp_lro_single_push;
|
|
|
|
extern counter_u64_t tcp_inp_lro_locks_taken;
|
|
|
|
extern counter_u64_t tcp_inp_lro_sack_wake;
|
2021-01-27 17:09:32 +00:00
|
|
|
extern counter_u64_t tcp_extra_mbuf;
|
|
|
|
extern counter_u64_t tcp_would_have_but;
|
|
|
|
extern counter_u64_t tcp_comp_total;
|
|
|
|
extern counter_u64_t tcp_uncomp_total;
|
|
|
|
extern counter_u64_t tcp_csum_hardware;
|
|
|
|
extern counter_u64_t tcp_csum_hardware_w_ph;
|
|
|
|
extern counter_u64_t tcp_csum_software;
|
2019-09-06 18:29:48 +00:00
|
|
|
|
2019-12-17 16:08:07 +00:00
|
|
|
#ifdef NETFLIX_EXP_DETECTION
|
|
|
|
/* Various SACK attack thresholds */
|
|
|
|
extern int32_t tcp_force_detection;
|
|
|
|
extern int32_t tcp_sack_to_ack_thresh;
|
|
|
|
extern int32_t tcp_sack_to_move_thresh;
|
|
|
|
extern int32_t tcp_restoral_thresh;
|
|
|
|
extern int32_t tcp_sad_decay_val;
|
|
|
|
extern int32_t tcp_sad_pacing_interval;
|
|
|
|
extern int32_t tcp_sad_low_pps;
|
|
|
|
extern int32_t tcp_map_minimum;
|
|
|
|
extern int32_t tcp_attack_on_turns_on_logging;
|
|
|
|
#endif
|
|
|
|
|
2016-10-06 16:28:34 +00:00
|
|
|
uint32_t tcp_maxmtu(struct in_conninfo *, struct tcp_ifcap *);
|
|
|
|
uint32_t tcp_maxmtu6(struct in_conninfo *, struct tcp_ifcap *);
|
2016-01-07 00:14:42 +00:00
|
|
|
u_int tcp_maxseg(const struct tcpcb *);
|
2012-04-16 13:49:03 +00:00
|
|
|
void tcp_mss_update(struct tcpcb *, int, int, struct hc_metrics_lite *,
|
2013-06-03 12:55:13 +00:00
|
|
|
struct tcp_ifcap *);
|
2002-03-19 21:25:46 +00:00
|
|
|
void tcp_mss(struct tcpcb *, int);
|
2003-11-20 20:07:39 +00:00
|
|
|
int tcp_mssopt(struct in_conninfo *);
|
2004-08-16 18:32:07 +00:00
|
|
|
struct inpcb *
|
2002-06-14 08:35:21 +00:00
|
|
|
tcp_drop_syn_sent(struct inpcb *, int);
|
1994-05-24 10:09:53 +00:00
|
|
|
struct tcpcb *
|
2002-03-19 21:25:46 +00:00
|
|
|
tcp_newtcpcb(struct inpcb *);
|
|
|
|
int tcp_output(struct tcpcb *);
|
2013-08-25 21:54:41 +00:00
|
|
|
void tcp_state_change(struct tcpcb *, int);
|
2002-03-19 21:25:46 +00:00
|
|
|
void tcp_respond(struct tcpcb *, void *,
|
|
|
|
struct tcphdr *, struct mbuf *, tcp_seq, tcp_seq, int);
|
2007-05-13 22:16:13 +00:00
|
|
|
void tcp_tw_init(void);
|
Introduce an infrastructure for dismantling vnet instances.
Vnet modules and protocol domains may now register destructor
functions to clean up and release per-module state. The destructor
mechanisms can be triggered by invoking "vimage -d", or a future
equivalent command which will be provided via the new jail framework.
While this patch introduces numerous placeholder destructor functions,
many of those are currently incomplete, thus leaking memory or (even
worse) failing to stop all running timers. Many of such issues are
already known and will be incrementaly fixed over the next weeks in
smaller incremental commits.
Apart from introducing new fields in structs ifnet, domain, protosw
and vnet_net, which requires the kernel and modules to be rebuilt, this
change should have no impact on builds without options VIMAGE, since vnet
destructors can only be called in VIMAGE kernels. Moreover,
destructor functions should in general be compiled in only in
options VIMAGE builds, except for kernel modules which can be safely
kldunloaded at run time.
Bump __FreeBSD_version to 800097.
Reviewed by: bz, julian
Approved by: rwatson, kib (re), julian (mentor)
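The registration pattern the commit describes — modules hand the framework a destructor, and teardown runs them in reverse registration order so later modules are undone first — can be sketched in plain C. The names `vnet_register_dtor`/`vnet_run_dtors` and the fixed-size table are hypothetical, assumed only for this illustration.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_DTORS 8

static void (*dtors[MAX_DTORS])(void);
static int ndtors;

static int
vnet_register_dtor(void (*fn)(void))
{
	if (ndtors == MAX_DTORS)
		return (-1);		/* table full */
	dtors[ndtors++] = fn;
	return (0);
}

static void
vnet_run_dtors(void)
{
	/* LIFO: tear down in the reverse of registration order. */
	while (ndtors > 0)
		dtors[--ndtors]();
}

/* Two sample module destructors that record the order they ran in. */
static int teardown_trace;

static void
mod_a_dtor(void)
{
	teardown_trace = teardown_trace * 10 + 1;
}

static void
mod_b_dtor(void)
{
	teardown_trace = teardown_trace * 10 + 2;
}
```

Registering `mod_a_dtor` then `mod_b_dtor` and running the destructors yields a trace of 21: B (registered last) is torn down first, then A.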
2009-06-08 17:15:40 +00:00
|
|
|
#ifdef VIMAGE
|
|
|
|
void tcp_tw_destroy(void);
|
|
|
|
#endif
|
2007-05-13 22:16:13 +00:00
|
|
|
void tcp_tw_zone_change(void);
|
2007-05-16 17:14:25 +00:00
|
|
|
int tcp_twcheck(struct inpcb *, struct tcpopt *, struct tcphdr *,
|
|
|
|
struct mbuf *, int);
|
2002-03-19 21:25:46 +00:00
|
|
|
void tcp_setpersist(struct tcpcb *);
|
|
|
|
void tcp_slowtimo(void);
|
2000-01-09 19:17:30 +00:00
|
|
|
struct tcptemp *
|
2003-02-19 22:18:06 +00:00
|
|
|
tcpip_maketemplate(struct inpcb *);
|
|
|
|
void tcpip_fillheaders(struct inpcb *, void *, void *);
|
2015-04-16 10:00:06 +00:00
|
|
|
void tcp_timer_activate(struct tcpcb *, uint32_t, u_int);
|
2018-06-07 18:18:13 +00:00
|
|
|
int tcp_timer_suspend(struct tcpcb *, uint32_t);
|
|
|
|
void tcp_timers_unsuspend(struct tcpcb *, uint32_t);
|
2015-04-16 10:00:06 +00:00
|
|
|
int tcp_timer_active(struct tcpcb *, uint32_t);
|
|
|
|
void tcp_timer_stop(struct tcpcb *, uint32_t);
|
2007-05-04 23:43:18 +00:00
|
|
|
void tcp_trace(short, short, struct tcpcb *, void *, struct tcphdr *, int);
|
2018-06-07 18:18:13 +00:00
|
|
|
int inp_to_cpuid(struct inpcb *inp);
|
2003-11-20 20:07:39 +00:00
|
|
|
/*
|
|
|
|
* All tcp_hc_* functions are IPv4 and IPv6 (via in_conninfo)
|
|
|
|
*/
|
|
|
|
void tcp_hc_init(void);
|
2009-06-08 17:15:40 +00:00
|
|
|
#ifdef VIMAGE
|
|
|
|
void tcp_hc_destroy(void);
|
|
|
|
#endif
|
2003-11-20 20:07:39 +00:00
|
|
|
void tcp_hc_get(struct in_conninfo *, struct hc_metrics_lite *);
|
2016-10-06 16:28:34 +00:00
|
|
|
uint32_t tcp_hc_getmtu(struct in_conninfo *);
|
|
|
|
void tcp_hc_updatemtu(struct in_conninfo *, uint32_t);
|
2003-11-20 20:07:39 +00:00
|
|
|
void tcp_hc_update(struct in_conninfo *, struct hc_metrics_lite *);
|
1995-07-10 15:39:16 +00:00
|
|
|
|
1996-07-11 16:32:50 +00:00
|
|
|
extern struct pr_usrreqs tcp_usrreqs;
|
2018-08-19 14:56:10 +00:00
|
|
|
|
|
|
|
uint32_t tcp_new_ts_offset(struct in_conninfo *);
|
|
|
|
tcp_seq tcp_new_isn(struct in_conninfo *);
|
1995-07-10 15:39:16 +00:00
|
|
|
|
One of the ways to detect loss is to count duplicate acks coming back from the
other end until the count reaches a predetermined threshold, which is 3 for us
right now.
Once that happens, we trigger fast-retransmit to do loss recovery.
Main problem with the current implementation is that we don't honor SACK
information well to detect whether an incoming ack is a dupack or not. RFC6675
has latest recommendations for that. According to it, dupack is a segment that
arrives carrying a SACK block that identifies previously unknown information
between snd_una and snd_max even if it carries new data, changes the advertised
window, or moves the cumulative acknowledgment point.
With the prevalence of Selective ACK (SACK) these days, improper handling can
lead to delayed loss recovery.
With the fix, new behavior looks like following:
0) th_ack < snd_una --> ignore
Old acks are ignored.
1) th_ack == snd_una, !sack_changed --> ignore
Acks with SACK enabled but without any new SACK info in them are ignored.
2) th_ack == snd_una, window == old_window --> increment
Increment on a good dupack.
3) th_ack == snd_una, window != old_window, sack_changed --> increment
When SACK is enabled, it's okay to have the advertised window change if the ack has
new SACK info.
4) th_ack > snd_una --> reset to 0
Reset to 0 when left edge moves.
5) th_ack > snd_una, sack_changed --> increment
Increment if left edge moves but there is new SACK info.
Here, sack_changed is the indicator that incoming ack has previously unknown
SACK info in it.
Note: This fix is not fully compliant to RFC6675. That may require a few
changes to the current implementation in order to keep a per-sackhole dupack counter
and change to the way we mark/handle sack holes.
PR: 203663
Reviewed by: jtl
MFC after: 3 weeks
Sponsored by: Limelight Networks
Differential Revision: https://reviews.freebsd.org/D4225
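The numbered rules above translate almost mechanically into a classification routine. The sketch below is a userspace illustration of that decision table, not the kernel code: `struct dupack_state` and `dupack_update()` are hypothetical names, and plain unsigned comparisons stand in for the kernel's wraparound-safe SEQ_LT/SEQ_GT macros.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct dupack_state {
	uint32_t snd_una;	/* oldest unacknowledged sequence number */
	uint32_t old_window;	/* previously advertised window */
	int	 dupacks;	/* consecutive duplicate ACK counter */
};

static void
dupack_update(struct dupack_state *s, uint32_t th_ack, uint32_t window,
    bool sack_enabled, bool sack_changed)
{
	if (th_ack < s->snd_una)
		return;			/* 0) old ack: ignore */
	if (th_ack > s->snd_una) {
		/* 4)/5) left edge moved: reset, unless new SACK info */
		s->dupacks = sack_changed ? s->dupacks + 1 : 0;
		s->snd_una = th_ack;
	} else if (sack_enabled && !sack_changed) {
		;			/* 1) no new SACK info: ignore */
	} else if (window == s->old_window || sack_changed) {
		s->dupacks++;		/* 2)/3) good dupack */
	}
	s->old_window = window;
}
```

Feeding it an old ack leaves the counter alone; a same-`snd_una` ack with new SACK info increments it even if the window moved (rule 3); an ack that advances the left edge without new SACK info resets it to zero (rule 4).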
2015-12-08 21:21:48 +00:00
|
|
|
int tcp_sack_doack(struct tcpcb *, struct tcpopt *, tcp_seq);
|
2019-09-02 19:04:02 +00:00
|
|
|
void tcp_update_dsack_list(struct tcpcb *, tcp_seq, tcp_seq);
|
2005-02-17 23:04:56 +00:00
|
|
|
void tcp_update_sack_list(struct tcpcb *tp, tcp_seq rcv_laststart, tcp_seq rcv_lastend);
|
2019-07-14 16:05:47 +00:00
|
|
|
void tcp_clean_dsack_blocks(struct tcpcb *tp);
|
2004-06-23 21:04:37 +00:00
|
|
|
void tcp_clean_sackreport(struct tcpcb *tp);
|
|
|
|
void tcp_sack_adjust(struct tcpcb *tp);
|
2004-10-05 18:36:24 +00:00
|
|
|
struct sackhole *tcp_sack_output(struct tcpcb *tp, int *sack_bytes_rexmt);
|
2021-03-25 22:58:46 +00:00
|
|
|
void tcp_do_prr_ack(struct tcpcb *, struct tcphdr *);
|
2004-06-23 21:04:37 +00:00
|
|
|
void tcp_sack_partialack(struct tcpcb *, struct tcphdr *);
|
|
|
|
void tcp_free_sackholes(struct tcpcb *tp);
|
|
|
|
int tcp_newreno(struct tcpcb *, struct tcphdr *);
|
2015-10-28 22:57:51 +00:00
|
|
|
int tcp_compute_pipe(struct tcpcb *);
|
2019-01-25 13:57:09 +00:00
|
|
|
uint32_t tcp_compute_initwnd(uint32_t);
|
2017-12-07 22:36:58 +00:00
|
|
|
void tcp_sndbuf_autoscale(struct tcpcb *, struct socket *, uint32_t);
|
2019-12-02 20:58:04 +00:00
|
|
|
int tcp_stats_sample_rollthedice(struct tcpcb *tp, void *seed_bytes,
|
|
|
|
size_t seed_len);
|
2018-06-07 18:18:13 +00:00
|
|
|
struct mbuf *
|
|
|
|
tcp_m_copym(struct mbuf *m, int32_t off0, int32_t *plen,
|
Add kernel-side support for in-kernel TLS.
KTLS adds support for in-kernel framing and encryption of Transport
Layer Security (1.0-1.2) data on TCP sockets. KTLS only supports
offload of TLS for transmitted data. Key negotiation must still be
performed in userland. Once completed, transmit session keys for a
connection are provided to the kernel via a new TCP_TXTLS_ENABLE
socket option. All subsequent data transmitted on the socket is
placed into TLS frames and encrypted using the supplied keys.
Any data written to a KTLS-enabled socket via write(2), aio_write(2),
or sendfile(2) is assumed to be application data and is encoded in TLS
frames with an application data type. Individual records can be sent
with a custom type (e.g. handshake messages) via sendmsg(2) with a new
control message (TLS_SET_RECORD_TYPE) specifying the record type.
At present, rekeying is not supported though the in-kernel framework
should support rekeying.
KTLS makes use of the recently added unmapped mbufs to store TLS
frames in the socket buffer. Each TLS frame is described by a single
ext_pgs mbuf. The ext_pgs structure contains the header of the TLS
record (and trailer for encrypted records) as well as references to
the associated TLS session.
KTLS supports two primary methods of encrypting TLS frames: software
TLS and ifnet TLS.
Software TLS marks mbufs holding socket data as not ready via
M_NOTREADY similar to sendfile(2) when TLS framing information is
added to an unmapped mbuf in ktls_frame(). ktls_enqueue() is then
called to schedule TLS frames for encryption. In the case of
sendfile(2), sendfile_iodone() calls ktls_enqueue() instead of
pru_ready(), leaving the mbufs marked M_NOTREADY until encryption is
completed. For other
writes (vn_sendfile when pages are available, write(2), etc.), the
PRUS_NOTREADY is set when invoking pru_send() along with invoking
ktls_enqueue().
A pool of worker threads (the "KTLS" kernel process) encrypts TLS
frames queued via ktls_enqueue(). Each TLS frame is temporarily
mapped using the direct map and passed to a software encryption
backend to perform the actual encryption.
(Note: The use of PHYS_TO_DMAP could be replaced with sf_bufs if
someone wished to make this work on architectures without a direct
map.)
KTLS supports pluggable software encryption backends. Internally,
Netflix uses proprietary pure-software backends. This commit includes
a simple backend in a new ktls_ocf.ko module that uses the kernel's
OpenCrypto framework to provide AES-GCM encryption of TLS frames. As
a result, software TLS is now a bit of a misnomer as it can make use
of hardware crypto accelerators.
Once software encryption has finished, the TLS frame mbufs are marked
ready via pru_ready(). At this point, the encrypted data appears as
regular payload to the TCP stack stored in unmapped mbufs.
ifnet TLS permits a NIC to offload the TLS encryption and TCP
segmentation. In this mode, a new send tag type (IF_SND_TAG_TYPE_TLS)
is allocated on the interface a socket is routed over and associated
with a TLS session. TLS records for a TLS session using ifnet TLS are
not marked M_NOTREADY but are passed down the stack unencrypted. The
ip_output_send() and ip6_output_send() helper functions that apply
send tags to outbound IP packets verify that the send tag of the TLS
record matches the outbound interface. If so, the packet is tagged
with the TLS send tag and sent to the interface. The NIC device
driver must recognize packets with the TLS send tag and schedule them
for TLS encryption and TCP segmentation. If the outbound
interface does not match the interface in the TLS send tag, the packet
is dropped. In addition, a task is scheduled to refresh the TLS send
tag for the TLS session. If a new TLS send tag cannot be allocated,
the connection is dropped. If a new TLS send tag is allocated,
however, subsequent packets will be tagged with the correct TLS send
tag. (This latter case has been tested by configuring both ports of a
Chelsio T6 in a lagg and failing over from one port to another. As
the connections migrated to the new port, new TLS send tags were
allocated for the new port and connections resumed without being
dropped.)
ifnet TLS can be enabled and disabled on supported network interfaces
via new '[-]txtls[46]' options to ifconfig(8). ifnet TLS is supported
across both vlan devices and lagg interfaces using failover, lacp with
flowid enabled, or lacp with flowid disabled.
Applications may request the current KTLS mode of a connection via a
new TCP_TXTLS_MODE socket option. They can also use this socket
option to toggle between software and ifnet TLS modes.
In addition, a testing tool is available in tools/tools/switch_tls.
This is modeled on tcpdrop and uses similar syntax. However, instead
of dropping connections, -s is used to force KTLS connections to
switch to software TLS and -i is used to switch to ifnet TLS.
Various sysctls and counters are available under the kern.ipc.tls
sysctl node. The kern.ipc.tls.enable node must be set to true to
enable KTLS (it is off by default). The use of unmapped mbufs must
also be enabled via kern.ipc.mb_use_ext_pgs to enable KTLS.
KTLS is enabled via the KERN_TLS kernel option.
This patch is the culmination of years of work by several folks
including Scott Long and Randall Stewart for the original design and
implementation; Drew Gallatin for several optimizations including the
use of ext_pgs mbufs, the M_NOTREADY mechanism for TLS records
awaiting software encryption, and pluggable software crypto backends;
and John Baldwin for modifications to support hardware TLS offload.
Reviewed by: gallatin, hselasky, rrs
Obtained from: Netflix
Sponsored by: Netflix, Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D21277
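The framing step the commit describes boils down to prepending a 5-byte TLS record header (content type, protocol version, payload length, all per RFC 5246) to each chunk of application data before encryption. The sketch below shows just that header encoding in userspace; `tls_frame_header()` is a hypothetical helper for illustration, not part of the kernel KTLS API.

```c
#include <assert.h>
#include <stdint.h>

#define TLS_RLTYPE_APP	23	/* application data record type */
#define TLS_VERSION_12	0x0303	/* TLS 1.2 on the wire */

/*
 * Encode a TLS record header: 1 byte type, 2 bytes version, 2 bytes
 * payload length, with the multi-byte fields in network (big-endian)
 * byte order.
 */
static void
tls_frame_header(uint8_t hdr[5], uint8_t type, uint16_t version,
    uint16_t payload_len)
{
	hdr[0] = type;
	hdr[1] = (uint8_t)(version >> 8);	/* version, high byte */
	hdr[2] = (uint8_t)version;		/* version, low byte */
	hdr[3] = (uint8_t)(payload_len >> 8);	/* length, high byte */
	hdr[4] = (uint8_t)payload_len;		/* length, low byte */
}
```

Framing a 4096-byte chunk of application data produces the bytes 23, 0x03, 0x03, 0x10, 0x00 — the header that, in KTLS, lives in the ext_pgs mbuf describing the record.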
2019-08-27 00:01:56 +00:00
|
|
|
int32_t seglimit, int32_t segsize, struct sockbuf *sb, bool hw_tls);
|
2018-06-07 18:18:13 +00:00
|
|
|
|
2019-12-02 20:58:04 +00:00
|
|
|
int tcp_stats_init(void);
|
2020-04-27 16:30:29 +00:00
|
|
|
void tcp_log_end_status(struct tcpcb *tp, uint8_t status);
|
2004-06-23 21:04:37 +00:00
|
|
|
|
2014-05-23 20:15:01 +00:00
|
|
|
static inline void
|
|
|
|
tcp_fields_to_host(struct tcphdr *th)
|
|
|
|
{
|
|
|
|
|
|
|
|
th->th_seq = ntohl(th->th_seq);
|
|
|
|
th->th_ack = ntohl(th->th_ack);
|
|
|
|
th->th_win = ntohs(th->th_win);
|
|
|
|
th->th_urp = ntohs(th->th_urp);
|
|
|
|
}
|
|
|
|
|
2017-02-10 17:46:26 +00:00
|
|
|
static inline void
|
|
|
|
tcp_fields_to_net(struct tcphdr *th)
|
|
|
|
{
|
|
|
|
|
|
|
|
th->th_seq = htonl(th->th_seq);
|
|
|
|
th->th_ack = htonl(th->th_ack);
|
|
|
|
th->th_win = htons(th->th_win);
|
|
|
|
th->th_urp = htons(th->th_urp);
|
|
|
|
}
|
1999-12-29 04:46:21 +00:00
|
|
|
#endif /* _KERNEL */
|
1994-08-21 05:27:42 +00:00
|
|
|
|
1995-02-14 02:35:19 +00:00
|
|
|
#endif /* _NETINET_TCP_VAR_H_ */
|