2005-01-07 01:45:51 +00:00
|
|
|
/*-
|
2017-11-20 19:43:44 +00:00
|
|
|
* SPDX-License-Identifier: BSD-3-Clause
|
|
|
|
*
|
1997-01-03 19:50:26 +00:00
|
|
|
* Copyright (c) 1982, 1986, 1989, 1993
|
|
|
|
* The Regents of the University of California. All rights reserved.
|
|
|
|
*
|
|
|
|
* Redistribution and use in source and binary forms, with or without
|
|
|
|
* modification, are permitted provided that the following conditions
|
|
|
|
* are met:
|
|
|
|
* 1. Redistributions of source code must retain the above copyright
|
|
|
|
* notice, this list of conditions and the following disclaimer.
|
|
|
|
* 2. Redistributions in binary form must reproduce the above copyright
|
|
|
|
* notice, this list of conditions and the following disclaimer in the
|
|
|
|
* documentation and/or other materials provided with the distribution.
|
2017-02-28 23:42:47 +00:00
|
|
|
* 3. Neither the name of the University nor the names of its contributors
|
1997-01-03 19:50:26 +00:00
|
|
|
* may be used to endorse or promote products derived from this software
|
|
|
|
* without specific prior written permission.
|
|
|
|
*
|
|
|
|
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
|
|
|
|
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
|
|
|
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
|
|
|
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
|
|
|
|
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
|
|
|
|
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
|
|
|
|
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
|
|
|
|
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
|
|
|
|
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
|
|
|
|
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
|
|
|
|
* SUCH DAMAGE.
|
|
|
|
*
|
|
|
|
* From: @(#)if.h 8.1 (Berkeley) 6/10/93
|
1999-08-28 01:08:13 +00:00
|
|
|
* $FreeBSD$
|
1997-01-03 19:50:26 +00:00
|
|
|
*/
|
|
|
|
|
|
|
|
#ifndef _NET_IF_VAR_H_
|
|
|
|
#define _NET_IF_VAR_H_
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Structures defining a network interface, providing a packet
|
|
|
|
* transport mechanism (ala level 0 of the PUP protocols).
|
|
|
|
*
|
|
|
|
* Each interface accepts output datagrams of a specified maximum
|
|
|
|
* length, and provides higher level routines with input datagrams
|
|
|
|
* received from its medium.
|
|
|
|
*
|
|
|
|
* Output occurs when the routine if_output is called, with three parameters:
|
2020-04-17 17:05:58 +00:00
|
|
|
* (*ifp->if_output)(ifp, m, dst, ro)
|
1997-01-03 19:50:26 +00:00
|
|
|
* Here m is the mbuf chain to be sent and dst is the destination address.
|
|
|
|
* The output routine encapsulates the supplied datagram if necessary,
|
|
|
|
* and then transmits it on its medium.
|
|
|
|
*
|
|
|
|
* On input, each interface unwraps the data received by it, and either
|
2003-01-01 18:49:04 +00:00
|
|
|
* places it on the input queue of an internetwork datagram routine
|
1997-01-03 19:50:26 +00:00
|
|
|
* and posts the associated software interrupt, or passes the datagram to a raw
|
|
|
|
* packet input routine.
|
|
|
|
*
|
|
|
|
* Routines exist for locating interfaces by their addresses
|
2003-01-01 18:49:04 +00:00
|
|
|
* or for locating an interface on a certain network, as well as more general
|
1997-01-03 19:50:26 +00:00
|
|
|
* routing and gateway routines maintaining information used to locate
|
|
|
|
* interfaces. These routines live in the files if.c and route.c
|
|
|
|
*/
|
|
|
|
|
2013-10-28 08:03:40 +00:00
|
|
|
struct rtentry; /* ifa_rtrequest */
|
1997-01-03 19:50:26 +00:00
|
|
|
struct socket;
|
2005-02-22 13:04:05 +00:00
|
|
|
struct carp_if;
|
A major overhaul of the CARP implementation. The ip_carp.c was started
from scratch, copying needed functionality from the old implemenation
on demand, with a thorough review of all code. The main change is that
interface layer has been removed from the CARP. Now redundant addresses
are configured exactly on the interfaces, they run on.
The CARP configuration itself is, as before, configured and read via
SIOCSVH/SIOCGVH ioctls. A new prefix created with SIOCAIFADDR or
SIOCAIFADDR_IN6 may now be configured to a particular virtual host id,
which makes the prefix redundant.
ifconfig(8) semantics has been changed too: now one doesn't need
to clone carpXX interface, he/she should directly configure a vhid
on a Ethernet interface.
To supply vhid data from the kernel to an application the getifaddrs(8)
function had been changed to pass ifam_data with each address. [1]
The new implementation definitely closes all PRs related to carp(4)
being an interface, and may close several others. It also allows
to run a single redundant IP per interface.
Big thanks to Bjoern Zeeb for his help with inet6 part of patch, for
idea on using ifam_data and for several rounds of reviewing!
PR: kern/117000, kern/126945, kern/126714, kern/120130, kern/117448
Reviewed by: bz
Submitted by: bz [1]
2011-12-16 12:16:56 +00:00
|
|
|
struct carp_softc;
|
Merge the //depot/user/yar/vlan branch into CVS. It contains some collective
work by yar, thompsa and myself. The checksum offloading part also involves
work done by Mihail Balikov.
The most important changes:
o Instead of global linked list of all vlan softc use a per-trunk
hash. The size of hash is dynamically adjusted, depending on
number of entries. This changes struct ifnet, replacing counter
of vlans with a pointer to trunk structure. This change is an
improvement for setups with big number of VLANs, several interfaces
and several CPUs. It is a small regression for a setup with a single
VLAN interface.
An alternative to dynamic hash is a per-trunk static array with
4096 entries, which is a compile time option - VLAN_ARRAY. In my
experiments the array is not an improvement, probably because such
a big trunk structure doesn't fit into CPU cache.
o Introduce an UMA zone for VLAN tags. Since drivers depend on it,
the zone is declared in kern_mbuf.c, not in optional vlan(4) driver.
This change is a big improvement for any setup utilizing vlan(4).
o Use rwlock(9) instead of mutex(9) for locking. We are the first
ones to do this! :)
o Some drivers can do hardware VLAN tagging + hardware checksum
offloading. Add an infrastructure for this. Whenever vlan(4) is
attached to a parent or parent configuration is changed, the flags
on vlan(4) interface are updated.
In collaboration with: yar, thompsa
In collaboration with: Mihail Balikov <mihail.balikov interbgc.com>
2006-01-30 13:45:15 +00:00
|
|
|
struct ifvlantrunk;
|
2013-10-28 08:03:40 +00:00
|
|
|
struct route; /* if_output */
|
Introduce an infrastructure for dismantling vnet instances.
Vnet modules and protocol domains may now register destructor
functions to clean up and release per-module state. The destructor
mechanisms can be triggered by invoking "vimage -d", or a future
equivalent command which will be provided via the new jail framework.
While this patch introduces numerous placeholder destructor functions,
many of those are currently incomplete, thus leaking memory or (even
worse) failing to stop all running timers. Many of such issues are
already known and will be incrementaly fixed over the next weeks in
smaller incremental commits.
Apart from introducing new fields in structs ifnet, domain, protosw
and vnet_net, which requires the kernel and modules to be rebuilt, this
change should have no impact on nooptions VIMAGE builds, since vnet
destructors can only be called in VIMAGE kernels. Moreover,
destructor functions should be in general compiled in only in
options VIMAGE builds, except for kernel modules which can be safely
kldunloaded at run time.
Bump __FreeBSD_version to 800097.
Reviewed by: bz, julian
Approved by: rwatson, kib (re), julian (mentor)
2009-06-08 17:15:40 +00:00
|
|
|
struct vnet;
|
2014-06-02 17:54:39 +00:00
|
|
|
struct ifmedia;
|
2014-08-31 11:33:19 +00:00
|
|
|
struct netmap_adapter;
|
2019-10-17 16:23:03 +00:00
|
|
|
struct debugnet_methods;
|
1997-01-03 19:50:26 +00:00
|
|
|
|
2000-11-26 21:47:01 +00:00
|
|
|
#ifdef _KERNEL
|
Extract eventfilter declarations to sys/_eventfilter.h
This allows replacing "sys/eventfilter.h" includes with "sys/_eventfilter.h"
in other header files (e.g., sys/{bus,conf,cpu}.h) and reduces header
pollution substantially.
EVENTHANDLER_DECLARE and EVENTHANDLER_LIST_DECLAREs were moved out of .c
files into appropriate headers (e.g., sys/proc.h, powernv/opal.h).
As a side effect of reduced header pollution, many .c files and headers no
longer contain needed definitions. The remainder of the patch addresses
adding appropriate includes to fix those files.
LOCK_DEBUG and LOCK_FILE_LINE_ARG are moved to sys/_lock.h, as required by
sys/mutex.h since r326106 (but silently protected by header pollution prior
to this change).
No functional change (intended). Of course, any out of tree modules that
relied on header pollution for sys/eventhandler.h, sys/lock.h, or
sys/mutex.h inclusion need to be fixed. __FreeBSD_version has been bumped.
2019-05-20 00:38:23 +00:00
|
|
|
#include <sys/_eventhandler.h>
|
2013-10-28 08:03:40 +00:00
|
|
|
#include <sys/mbuf.h> /* ifqueue only? */
|
2008-12-17 04:00:43 +00:00
|
|
|
#include <sys/buf_ring.h>
|
Build on Jeff Roberson's linker-set based dynamic per-CPU allocator
(DPCPU), as suggested by Peter Wemm, and implement a new per-virtual
network stack memory allocator. Modify vnet to use the allocator
instead of monolithic global container structures (vinet, ...). This
change solves many binary compatibility problems associated with
VIMAGE, and restores ELF symbols for virtualized global variables.
Each virtualized global variable exists as a "reference copy", and also
once per virtual network stack. Virtualized global variables are
tagged at compile-time, placing the in a special linker set, which is
loaded into a contiguous region of kernel memory. Virtualized global
variables in the base kernel are linked as normal, but those in modules
are copied and relocated to a reserved portion of the kernel's vnet
region with the help of a the kernel linker.
Virtualized global variables exist in per-vnet memory set up when the
network stack instance is created, and are initialized statically from
the reference copy. Run-time access occurs via an accessor macro, which
converts from the current vnet and requested symbol to a per-vnet
address. When "options VIMAGE" is not compiled into the kernel, normal
global ELF symbols will be used instead and indirection is avoided.
This change restores static initialization for network stack global
variables, restores support for non-global symbols and types, eliminates
the need for many subsystem constructors, eliminates large per-subsystem
structures that caused many binary compatibility issues both for
monitoring applications (netstat) and kernel modules, removes the
per-function INIT_VNET_*() macros throughout the stack, eliminates the
need for vnet_symmap ksym(2) munging, and eliminates duplicate
definitions of virtualized globals under VIMAGE_GLOBALS.
Bump __FreeBSD_version and update UPDATING.
Portions submitted by: bz
Reviewed by: bz, zec
Discussed with: gnn, jamie, jeff, jhb, julian, sam
Suggested by: peter
Approved by: re (kensmith)
2009-07-14 22:48:30 +00:00
|
|
|
#include <net/vnet.h>
|
2000-11-26 21:47:01 +00:00
|
|
|
#endif /* _KERNEL */
|
2018-05-21 01:53:23 +00:00
|
|
|
#include <sys/ck.h>
|
2013-10-15 11:37:57 +00:00
|
|
|
#include <sys/counter.h>
|
2018-05-21 01:53:23 +00:00
|
|
|
#include <sys/epoch.h>
|
2001-03-28 09:17:56 +00:00
|
|
|
#include <sys/lock.h> /* XXX */
|
2013-10-28 08:03:40 +00:00
|
|
|
#include <sys/mutex.h> /* struct ifqueue */
|
2008-12-17 00:11:56 +00:00
|
|
|
#include <sys/rwlock.h> /* XXX */
|
2009-08-23 20:40:19 +00:00
|
|
|
#include <sys/sx.h> /* XXX */
|
2013-10-28 08:03:40 +00:00
|
|
|
#include <sys/_task.h> /* if_link_task */
|
2003-10-31 18:32:15 +00:00
|
|
|
#define IF_DUNIT_NONE -1
|
|
|
|
|
2015-04-16 20:22:40 +00:00
|
|
|
#include <net/altq/if_altq.h>
|
2004-06-13 17:29:10 +00:00
|
|
|
|
2018-05-23 21:02:14 +00:00
|
|
|
CK_STAILQ_HEAD(ifnethead, ifnet); /* we use TAILQs so that the order of */
|
2018-05-18 20:13:34 +00:00
|
|
|
CK_STAILQ_HEAD(ifaddrhead, ifaddr); /* instantiation is preserved in the list */
|
|
|
|
CK_STAILQ_HEAD(ifmultihead, ifmultiaddr);
|
2018-05-23 21:02:14 +00:00
|
|
|
CK_STAILQ_HEAD(ifgrouphead, ifg_group);
|
1997-01-03 19:50:26 +00:00
|
|
|
|
2012-09-04 22:17:33 +00:00
|
|
|
#ifdef _KERNEL
|
New pfil(9) KPI together with newborn pfil API and control utility.
The KPI have been reviewed and cleansed of features that were planned
back 20 years ago and never implemented. The pfil(9) internals have
been made opaque to protocols with only returned types and function
declarations exposed. The KPI is made more strict, but at the same time
more extensible, as kernel uses same command structures that userland
ioctl uses.
In nutshell [KA]PI is about declaring filtering points, declaring
filters and linking and unlinking them together.
New [KA]PI makes it possible to reconfigure pfil(9) configuration:
change order of hooks, rehook filter from one filtering point to a
different one, disconnect a hook on output leaving it on input only,
prepend/append a filter to existing list of filters.
Now it possible for a single packet filter to provide multiple rulesets
that may be linked to different points. Think of per-interface ACLs in
Cisco or Juniper. None of existing packet filters yet support that,
however limited usage is already possible, e.g. default ruleset can
be moved to single interface, as soon as interface would pride their
filtering points.
Another future feature is possiblity to create pfil heads, that provide
not an mbuf pointer but just a memory pointer with length. That would
allow filtering at very early stages of a packet lifecycle, e.g. when
packet has just been received by a NIC and no mbuf was yet allocated.
Differential Revision: https://reviews.freebsd.org/D18951
2019-01-31 23:01:03 +00:00
|
|
|
VNET_DECLARE(struct pfil_head *, link_pfil_head);
|
|
|
|
#define V_link_pfil_head VNET(link_pfil_head)
|
|
|
|
#define PFIL_ETHER_NAME "ethernet"
|
2015-11-25 07:31:59 +00:00
|
|
|
|
|
|
|
#define HHOOK_IPSEC_INET 0
|
|
|
|
#define HHOOK_IPSEC_INET6 1
|
|
|
|
#define HHOOK_IPSEC_COUNT 2
|
|
|
|
VNET_DECLARE(struct hhook_head *, ipsec_hhh_in[HHOOK_IPSEC_COUNT]);
|
|
|
|
VNET_DECLARE(struct hhook_head *, ipsec_hhh_out[HHOOK_IPSEC_COUNT]);
|
|
|
|
#define V_ipsec_hhh_in VNET(ipsec_hhh_in)
|
|
|
|
#define V_ipsec_hhh_out VNET(ipsec_hhh_out)
|
2012-09-04 22:17:33 +00:00
|
|
|
#endif /* _KERNEL */
|
2012-09-04 19:43:26 +00:00
|
|
|
|
2014-08-31 06:46:21 +00:00
|
|
|
typedef enum {
|
2014-09-28 08:57:07 +00:00
|
|
|
IFCOUNTER_IPACKETS = 0,
|
2014-08-31 06:46:21 +00:00
|
|
|
IFCOUNTER_IERRORS,
|
|
|
|
IFCOUNTER_OPACKETS,
|
|
|
|
IFCOUNTER_OERRORS,
|
|
|
|
IFCOUNTER_COLLISIONS,
|
|
|
|
IFCOUNTER_IBYTES,
|
|
|
|
IFCOUNTER_OBYTES,
|
|
|
|
IFCOUNTER_IMCASTS,
|
|
|
|
IFCOUNTER_OMCASTS,
|
|
|
|
IFCOUNTER_IQDROPS,
|
|
|
|
IFCOUNTER_OQDROPS,
|
|
|
|
IFCOUNTER_NOPROTO,
|
2014-09-28 08:57:07 +00:00
|
|
|
IFCOUNTERS /* Array size. */
|
2014-09-18 14:47:13 +00:00
|
|
|
} ift_counter;
|
2014-08-31 06:46:21 +00:00
|
|
|
|
2014-08-31 12:48:13 +00:00
|
|
|
typedef struct ifnet * if_t;
|
2014-06-02 17:54:39 +00:00
|
|
|
|
2014-08-31 12:48:13 +00:00
|
|
|
typedef void (*if_start_fn_t)(if_t);
|
|
|
|
typedef int (*if_ioctl_fn_t)(if_t, u_long, caddr_t);
|
|
|
|
typedef void (*if_init_fn_t)(void *);
|
|
|
|
typedef void (*if_qflush_fn_t)(if_t);
|
|
|
|
typedef int (*if_transmit_fn_t)(if_t, struct mbuf *);
|
2014-09-18 14:47:13 +00:00
|
|
|
typedef uint64_t (*if_get_counter_t)(if_t, ift_counter);
|
2014-06-02 17:54:39 +00:00
|
|
|
|
2014-09-22 08:27:27 +00:00
|
|
|
struct ifnet_hw_tsomax {
|
|
|
|
u_int tsomaxbytes; /* TSO total burst length limit in bytes */
|
|
|
|
u_int tsomaxsegcount; /* TSO maximum segment count */
|
|
|
|
u_int tsomaxsegsize; /* TSO maximum segment size in bytes */
|
|
|
|
};
|
|
|
|
|
2015-12-31 05:03:27 +00:00
|
|
|
/* Interface encap request types */
|
|
|
|
typedef enum {
|
|
|
|
IFENCAP_LL = 1 /* pre-calculate link-layer header */
|
|
|
|
} ife_type;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The structure below allows to request various pre-calculated L2/L3 headers
|
|
|
|
* for different media. Requests varies by type (rtype field).
|
|
|
|
*
|
|
|
|
* IFENCAP_LL type: pre-calculates link header based on address family
|
|
|
|
* and destination lladdr.
|
|
|
|
*
|
|
|
|
* Input data fields:
|
|
|
|
* buf: pointer to destination buffer
|
|
|
|
* bufsize: buffer size
|
|
|
|
* flags: IFENCAP_FLAG_BROADCAST if destination is broadcast
|
|
|
|
* family: address family defined by AF_ constant.
|
|
|
|
* lladdr: pointer to link-layer address
|
|
|
|
* lladdr_len: length of link-layer address
|
|
|
|
* hdata: pointer to L3 header (optional, used for ARP requests).
|
|
|
|
* Output data fields:
|
|
|
|
* buf: encap data is stored here
|
|
|
|
* bufsize: resulting encap length is stored here
|
|
|
|
* lladdr_off: offset of link-layer address from encap hdr start
|
|
|
|
* hdata: L3 header may be altered if necessary
|
|
|
|
*/
|
|
|
|
|
|
|
|
struct if_encap_req {
|
|
|
|
u_char *buf; /* Destination buffer (w) */
|
|
|
|
size_t bufsize; /* size of provided buffer (r) */
|
|
|
|
ife_type rtype; /* request type (r) */
|
|
|
|
uint32_t flags; /* Request flags (r) */
|
|
|
|
int family; /* Address family AF_* (r) */
|
|
|
|
int lladdr_off; /* offset from header start (w) */
|
|
|
|
int lladdr_len; /* lladdr length (r) */
|
|
|
|
char *lladdr; /* link-level address pointer (r) */
|
|
|
|
char *hdata; /* Upper layer header data (rw) */
|
|
|
|
};
|
|
|
|
|
|
|
|
#define IFENCAP_FLAG_BROADCAST 0x02 /* Destination is broadcast */
|
|
|
|
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
/*
|
|
|
|
* Network interface send tag support. The storage of "struct
|
|
|
|
* m_snd_tag" comes from the network driver and it is free to allocate
|
|
|
|
* as much additional space as it wants for its own use.
|
|
|
|
*/
|
Add kernel-side support for in-kernel TLS.
KTLS adds support for in-kernel framing and encryption of Transport
Layer Security (1.0-1.2) data on TCP sockets. KTLS only supports
offload of TLS for transmitted data. Key negotation must still be
performed in userland. Once completed, transmit session keys for a
connection are provided to the kernel via a new TCP_TXTLS_ENABLE
socket option. All subsequent data transmitted on the socket is
placed into TLS frames and encrypted using the supplied keys.
Any data written to a KTLS-enabled socket via write(2), aio_write(2),
or sendfile(2) is assumed to be application data and is encoded in TLS
frames with an application data type. Individual records can be sent
with a custom type (e.g. handshake messages) via sendmsg(2) with a new
control message (TLS_SET_RECORD_TYPE) specifying the record type.
At present, rekeying is not supported though the in-kernel framework
should support rekeying.
KTLS makes use of the recently added unmapped mbufs to store TLS
frames in the socket buffer. Each TLS frame is described by a single
ext_pgs mbuf. The ext_pgs structure contains the header of the TLS
record (and trailer for encrypted records) as well as references to
the associated TLS session.
KTLS supports two primary methods of encrypting TLS frames: software
TLS and ifnet TLS.
Software TLS marks mbufs holding socket data as not ready via
M_NOTREADY similar to sendfile(2) when TLS framing information is
added to an unmapped mbuf in ktls_frame(). ktls_enqueue() is then
called to schedule TLS frames for encryption. In the case of
sendfile_iodone() calls ktls_enqueue() instead of pru_ready() leaving
the mbufs marked M_NOTREADY until encryption is completed. For other
writes (vn_sendfile when pages are available, write(2), etc.), the
PRUS_NOTREADY is set when invoking pru_send() along with invoking
ktls_enqueue().
A pool of worker threads (the "KTLS" kernel process) encrypts TLS
frames queued via ktls_enqueue(). Each TLS frame is temporarily
mapped using the direct map and passed to a software encryption
backend to perform the actual encryption.
(Note: The use of PHYS_TO_DMAP could be replaced with sf_bufs if
someone wished to make this work on architectures without a direct
map.)
KTLS supports pluggable software encryption backends. Internally,
Netflix uses proprietary pure-software backends. This commit includes
a simple backend in a new ktls_ocf.ko module that uses the kernel's
OpenCrypto framework to provide AES-GCM encryption of TLS frames. As
a result, software TLS is now a bit of a misnomer as it can make use
of hardware crypto accelerators.
Once software encryption has finished, the TLS frame mbufs are marked
ready via pru_ready(). At this point, the encrypted data appears as
regular payload to the TCP stack stored in unmapped mbufs.
ifnet TLS permits a NIC to offload the TLS encryption and TCP
segmentation. In this mode, a new send tag type (IF_SND_TAG_TYPE_TLS)
is allocated on the interface a socket is routed over and associated
with a TLS session. TLS records for a TLS session using ifnet TLS are
not marked M_NOTREADY but are passed down the stack unencrypted. The
ip_output_send() and ip6_output_send() helper functions that apply
send tags to outbound IP packets verify that the send tag of the TLS
record matches the outbound interface. If so, the packet is tagged
with the TLS send tag and sent to the interface. The NIC device
driver must recognize packets with the TLS send tag and schedule them
for TLS encryption and TCP segmentation. If the the outbound
interface does not match the interface in the TLS send tag, the packet
is dropped. In addition, a task is scheduled to refresh the TLS send
tag for the TLS session. If a new TLS send tag cannot be allocated,
the connection is dropped. If a new TLS send tag is allocated,
however, subsequent packets will be tagged with the correct TLS send
tag. (This latter case has been tested by configuring both ports of a
Chelsio T6 in a lagg and failing over from one port to another. As
the connections migrated to the new port, new TLS send tags were
allocated for the new port and connections resumed without being
dropped.)
ifnet TLS can be enabled and disabled on supported network interfaces
via new '[-]txtls[46]' options to ifconfig(8). ifnet TLS is supported
across both vlan devices and lagg interfaces using failover, lacp with
flowid enabled, or lacp with flowid enabled.
Applications may request the current KTLS mode of a connection via a
new TCP_TXTLS_MODE socket option. They can also use this socket
option to toggle between software and ifnet TLS modes.
In addition, a testing tool is available in tools/tools/switch_tls.
This is modeled on tcpdrop and uses similar syntax. However, instead
of dropping connections, -s is used to force KTLS connections to
switch to software TLS and -i is used to switch to ifnet TLS.
Various sysctls and counters are available under the kern.ipc.tls
sysctl node. The kern.ipc.tls.enable node must be set to true to
enable KTLS (it is off by default). The use of unmapped mbufs must
also be enabled via kern.ipc.mb_use_ext_pgs to enable KTLS.
KTLS is enabled via the KERN_TLS kernel option.
This patch is the culmination of years of work by several folks
including Scott Long and Randall Stewart for the original design and
implementation; Drew Gallatin for several optimizations including the
use of ext_pgs mbufs, the M_NOTREADY mechanism for TLS records
awaiting software encryption, and pluggable software crypto backends;
and John Baldwin for modifications to support hardware TLS offload.
Reviewed by: gallatin, hselasky, rrs
Obtained from: Netflix
Sponsored by: Netflix, Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D21277
2019-08-27 00:01:56 +00:00
|
|
|
struct ktls_session;
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
struct m_snd_tag;
|
|
|
|
|
|
|
|
#define IF_SND_TAG_TYPE_RATE_LIMIT 0
|
2017-09-06 13:56:18 +00:00
|
|
|
#define IF_SND_TAG_TYPE_UNLIMITED 1
|
Add kernel-side support for in-kernel TLS.
KTLS adds support for in-kernel framing and encryption of Transport
Layer Security (1.0-1.2) data on TCP sockets. KTLS only supports
offload of TLS for transmitted data. Key negotation must still be
performed in userland. Once completed, transmit session keys for a
connection are provided to the kernel via a new TCP_TXTLS_ENABLE
socket option. All subsequent data transmitted on the socket is
placed into TLS frames and encrypted using the supplied keys.
Any data written to a KTLS-enabled socket via write(2), aio_write(2),
or sendfile(2) is assumed to be application data and is encoded in TLS
frames with an application data type. Individual records can be sent
with a custom type (e.g. handshake messages) via sendmsg(2) with a new
control message (TLS_SET_RECORD_TYPE) specifying the record type.
At present, rekeying is not supported though the in-kernel framework
should support rekeying.
KTLS makes use of the recently added unmapped mbufs to store TLS
frames in the socket buffer. Each TLS frame is described by a single
ext_pgs mbuf. The ext_pgs structure contains the header of the TLS
record (and trailer for encrypted records) as well as references to
the associated TLS session.
KTLS supports two primary methods of encrypting TLS frames: software
TLS and ifnet TLS.
Software TLS marks mbufs holding socket data as not ready via
M_NOTREADY similar to sendfile(2) when TLS framing information is
added to an unmapped mbuf in ktls_frame(). ktls_enqueue() is then
called to schedule TLS frames for encryption. In the case of
sendfile_iodone() calls ktls_enqueue() instead of pru_ready() leaving
the mbufs marked M_NOTREADY until encryption is completed. For other
writes (vn_sendfile when pages are available, write(2), etc.), the
PRUS_NOTREADY is set when invoking pru_send() along with invoking
ktls_enqueue().
A pool of worker threads (the "KTLS" kernel process) encrypts TLS
frames queued via ktls_enqueue(). Each TLS frame is temporarily
mapped using the direct map and passed to a software encryption
backend to perform the actual encryption.
(Note: The use of PHYS_TO_DMAP could be replaced with sf_bufs if
someone wished to make this work on architectures without a direct
map.)
KTLS supports pluggable software encryption backends. Internally,
Netflix uses proprietary pure-software backends. This commit includes
a simple backend in a new ktls_ocf.ko module that uses the kernel's
OpenCrypto framework to provide AES-GCM encryption of TLS frames. As
a result, software TLS is now a bit of a misnomer as it can make use
of hardware crypto accelerators.
Once software encryption has finished, the TLS frame mbufs are marked
ready via pru_ready(). At this point, the encrypted data appears as
regular payload to the TCP stack stored in unmapped mbufs.
ifnet TLS permits a NIC to offload the TLS encryption and TCP
segmentation. In this mode, a new send tag type (IF_SND_TAG_TYPE_TLS)
is allocated on the interface a socket is routed over and associated
with a TLS session. TLS records for a TLS session using ifnet TLS are
not marked M_NOTREADY but are passed down the stack unencrypted. The
ip_output_send() and ip6_output_send() helper functions that apply
send tags to outbound IP packets verify that the send tag of the TLS
record matches the outbound interface. If so, the packet is tagged
with the TLS send tag and sent to the interface. The NIC device
driver must recognize packets with the TLS send tag and schedule them
for TLS encryption and TCP segmentation. If the the outbound
interface does not match the interface in the TLS send tag, the packet
is dropped. In addition, a task is scheduled to refresh the TLS send
tag for the TLS session. If a new TLS send tag cannot be allocated,
the connection is dropped. If a new TLS send tag is allocated,
however, subsequent packets will be tagged with the correct TLS send
tag. (This latter case has been tested by configuring both ports of a
Chelsio T6 in a lagg and failing over from one port to another. As
the connections migrated to the new port, new TLS send tags were
allocated for the new port and connections resumed without being
dropped.)
ifnet TLS can be enabled and disabled on supported network interfaces
via new '[-]txtls[46]' options to ifconfig(8). ifnet TLS is supported
across both vlan devices and lagg interfaces using failover, lacp with
flowid enabled, or lacp with flowid enabled.
Applications may request the current KTLS mode of a connection via a
new TCP_TXTLS_MODE socket option. They can also use this socket
option to toggle between software and ifnet TLS modes.
In addition, a testing tool is available in tools/tools/switch_tls.
This is modeled on tcpdrop and uses similar syntax. However, instead
of dropping connections, -s is used to force KTLS connections to
switch to software TLS and -i is used to switch to ifnet TLS.
Various sysctls and counters are available under the kern.ipc.tls
sysctl node. The kern.ipc.tls.enable node must be set to true to
enable KTLS (it is off by default). The use of unmapped mbufs must
also be enabled via kern.ipc.mb_use_ext_pgs to enable KTLS.
KTLS is enabled via the KERN_TLS kernel option.
This patch is the culmination of years of work by several folks
including Scott Long and Randall Stewart for the original design and
implementation; Drew Gallatin for several optimizations including the
use of ext_pgs mbufs, the M_NOTREADY mechanism for TLS records
awaiting software encryption, and pluggable software crypto backends;
and John Baldwin for modifications to support hardware TLS offload.
Reviewed by: gallatin, hselasky, rrs
Obtained from: Netflix
Sponsored by: Netflix, Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D21277
2019-08-27 00:01:56 +00:00
|
|
|
#define IF_SND_TAG_TYPE_TLS 2
|
2020-10-29 00:23:16 +00:00
|
|
|
#define IF_SND_TAG_TYPE_TLS_RATE_LIMIT 3
|
|
|
|
#define IF_SND_TAG_TYPE_MAX 4
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
|
|
|
|
struct if_snd_tag_alloc_header {
|
|
|
|
uint32_t type; /* send tag type, see IF_SND_TAG_XXX */
|
|
|
|
uint32_t flowid; /* mbuf hash value */
|
|
|
|
uint32_t flowtype; /* mbuf hash type */
|
2020-03-09 13:44:51 +00:00
|
|
|
uint8_t numa_domain; /* numa domain of associated inp */
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
struct if_snd_tag_alloc_rate_limit {
|
|
|
|
struct if_snd_tag_alloc_header hdr;
|
|
|
|
uint64_t max_rate; /* in bytes/s */
|
2019-08-01 14:17:31 +00:00
|
|
|
uint32_t flags; /* M_NOWAIT or M_WAITOK */
|
|
|
|
uint32_t reserved; /* alignment */
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
};
|
|
|
|
|
Add kernel-side support for in-kernel TLS.
KTLS adds support for in-kernel framing and encryption of Transport
Layer Security (1.0-1.2) data on TCP sockets. KTLS only supports
offload of TLS for transmitted data. Key negotation must still be
performed in userland. Once completed, transmit session keys for a
connection are provided to the kernel via a new TCP_TXTLS_ENABLE
socket option. All subsequent data transmitted on the socket is
placed into TLS frames and encrypted using the supplied keys.
Any data written to a KTLS-enabled socket via write(2), aio_write(2),
or sendfile(2) is assumed to be application data and is encoded in TLS
frames with an application data type. Individual records can be sent
with a custom type (e.g. handshake messages) via sendmsg(2) with a new
control message (TLS_SET_RECORD_TYPE) specifying the record type.
At present, rekeying is not supported though the in-kernel framework
should support rekeying.
KTLS makes use of the recently added unmapped mbufs to store TLS
frames in the socket buffer. Each TLS frame is described by a single
ext_pgs mbuf. The ext_pgs structure contains the header of the TLS
record (and trailer for encrypted records) as well as references to
the associated TLS session.
KTLS supports two primary methods of encrypting TLS frames: software
TLS and ifnet TLS.
Software TLS marks mbufs holding socket data as not ready via
M_NOTREADY similar to sendfile(2) when TLS framing information is
added to an unmapped mbuf in ktls_frame(). ktls_enqueue() is then
called to schedule TLS frames for encryption. In the case of
sendfile_iodone() calls ktls_enqueue() instead of pru_ready() leaving
the mbufs marked M_NOTREADY until encryption is completed. For other
writes (vn_sendfile when pages are available, write(2), etc.), the
PRUS_NOTREADY is set when invoking pru_send() along with invoking
ktls_enqueue().
A pool of worker threads (the "KTLS" kernel process) encrypts TLS
frames queued via ktls_enqueue(). Each TLS frame is temporarily
mapped using the direct map and passed to a software encryption
backend to perform the actual encryption.
(Note: The use of PHYS_TO_DMAP could be replaced with sf_bufs if
someone wished to make this work on architectures without a direct
map.)
KTLS supports pluggable software encryption backends. Internally,
Netflix uses proprietary pure-software backends. This commit includes
a simple backend in a new ktls_ocf.ko module that uses the kernel's
OpenCrypto framework to provide AES-GCM encryption of TLS frames. As
a result, software TLS is now a bit of a misnomer as it can make use
of hardware crypto accelerators.
Once software encryption has finished, the TLS frame mbufs are marked
ready via pru_ready(). At this point, the encrypted data appears as
regular payload to the TCP stack stored in unmapped mbufs.
ifnet TLS permits a NIC to offload the TLS encryption and TCP
segmentation. In this mode, a new send tag type (IF_SND_TAG_TYPE_TLS)
is allocated on the interface a socket is routed over and associated
with a TLS session. TLS records for a TLS session using ifnet TLS are
not marked M_NOTREADY but are passed down the stack unencrypted. The
ip_output_send() and ip6_output_send() helper functions that apply
send tags to outbound IP packets verify that the send tag of the TLS
record matches the outbound interface. If so, the packet is tagged
with the TLS send tag and sent to the interface. The NIC device
driver must recognize packets with the TLS send tag and schedule them
for TLS encryption and TCP segmentation. If the the outbound
interface does not match the interface in the TLS send tag, the packet
is dropped. In addition, a task is scheduled to refresh the TLS send
tag for the TLS session. If a new TLS send tag cannot be allocated,
the connection is dropped. If a new TLS send tag is allocated,
however, subsequent packets will be tagged with the correct TLS send
tag. (This latter case has been tested by configuring both ports of a
Chelsio T6 in a lagg and failing over from one port to another. As
the connections migrated to the new port, new TLS send tags were
allocated for the new port and connections resumed without being
dropped.)
ifnet TLS can be enabled and disabled on supported network interfaces
via new '[-]txtls[46]' options to ifconfig(8). ifnet TLS is supported
across both vlan devices and lagg interfaces using failover, lacp with
flowid enabled, or lacp with flowid enabled.
Applications may request the current KTLS mode of a connection via a
new TCP_TXTLS_MODE socket option. They can also use this socket
option to toggle between software and ifnet TLS modes.
In addition, a testing tool is available in tools/tools/switch_tls.
This is modeled on tcpdrop and uses similar syntax. However, instead
of dropping connections, -s is used to force KTLS connections to
switch to software TLS and -i is used to switch to ifnet TLS.
Various sysctls and counters are available under the kern.ipc.tls
sysctl node. The kern.ipc.tls.enable node must be set to true to
enable KTLS (it is off by default). The use of unmapped mbufs must
also be enabled via kern.ipc.mb_use_ext_pgs to enable KTLS.
KTLS is enabled via the KERN_TLS kernel option.
This patch is the culmination of years of work by several folks
including Scott Long and Randall Stewart for the original design and
implementation; Drew Gallatin for several optimizations including the
use of ext_pgs mbufs, the M_NOTREADY mechanism for TLS records
awaiting software encryption, and pluggable software crypto backends;
and John Baldwin for modifications to support hardware TLS offload.
Reviewed by: gallatin, hselasky, rrs
Obtained from: Netflix
Sponsored by: Netflix, Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D21277
2019-08-27 00:01:56 +00:00
|
|
|
struct if_snd_tag_alloc_tls {
|
|
|
|
struct if_snd_tag_alloc_header hdr;
|
|
|
|
struct inpcb *inp;
|
|
|
|
const struct ktls_session *tls;
|
|
|
|
};
|
|
|
|
|
2020-10-29 00:23:16 +00:00
|
|
|
struct if_snd_tag_alloc_tls_rate_limit {
|
|
|
|
struct if_snd_tag_alloc_header hdr;
|
|
|
|
struct inpcb *inp;
|
|
|
|
const struct ktls_session *tls;
|
|
|
|
uint64_t max_rate; /* in bytes/s */
|
|
|
|
};
|
|
|
|
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
struct if_snd_tag_rate_limit_params {
|
|
|
|
uint64_t max_rate; /* in bytes/s */
|
2017-09-06 13:56:18 +00:00
|
|
|
uint32_t queue_level; /* 0 (empty) .. 65535 (full) */
|
|
|
|
#define IF_SND_QUEUE_LEVEL_MIN 0
|
|
|
|
#define IF_SND_QUEUE_LEVEL_MAX 65535
|
2019-08-01 14:17:31 +00:00
|
|
|
uint32_t flags; /* M_NOWAIT or M_WAITOK */
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
union if_snd_tag_alloc_params {
|
|
|
|
struct if_snd_tag_alloc_header hdr;
|
|
|
|
struct if_snd_tag_alloc_rate_limit rate_limit;
|
2017-09-06 13:56:18 +00:00
|
|
|
struct if_snd_tag_alloc_rate_limit unlimited;
|
Add kernel-side support for in-kernel TLS.
KTLS adds support for in-kernel framing and encryption of Transport
Layer Security (1.0-1.2) data on TCP sockets. KTLS only supports
offload of TLS for transmitted data. Key negotation must still be
performed in userland. Once completed, transmit session keys for a
connection are provided to the kernel via a new TCP_TXTLS_ENABLE
socket option. All subsequent data transmitted on the socket is
placed into TLS frames and encrypted using the supplied keys.
Any data written to a KTLS-enabled socket via write(2), aio_write(2),
or sendfile(2) is assumed to be application data and is encoded in TLS
frames with an application data type. Individual records can be sent
with a custom type (e.g. handshake messages) via sendmsg(2) with a new
control message (TLS_SET_RECORD_TYPE) specifying the record type.
At present, rekeying is not supported though the in-kernel framework
should support rekeying.
KTLS makes use of the recently added unmapped mbufs to store TLS
frames in the socket buffer. Each TLS frame is described by a single
ext_pgs mbuf. The ext_pgs structure contains the header of the TLS
record (and trailer for encrypted records) as well as references to
the associated TLS session.
KTLS supports two primary methods of encrypting TLS frames: software
TLS and ifnet TLS.
Software TLS marks mbufs holding socket data as not ready via
M_NOTREADY similar to sendfile(2) when TLS framing information is
added to an unmapped mbuf in ktls_frame(). ktls_enqueue() is then
called to schedule TLS frames for encryption. In the case of
sendfile_iodone() calls ktls_enqueue() instead of pru_ready() leaving
the mbufs marked M_NOTREADY until encryption is completed. For other
writes (vn_sendfile when pages are available, write(2), etc.), the
PRUS_NOTREADY is set when invoking pru_send() along with invoking
ktls_enqueue().
A pool of worker threads (the "KTLS" kernel process) encrypts TLS
frames queued via ktls_enqueue(). Each TLS frame is temporarily
mapped using the direct map and passed to a software encryption
backend to perform the actual encryption.
(Note: The use of PHYS_TO_DMAP could be replaced with sf_bufs if
someone wished to make this work on architectures without a direct
map.)
KTLS supports pluggable software encryption backends. Internally,
Netflix uses proprietary pure-software backends. This commit includes
a simple backend in a new ktls_ocf.ko module that uses the kernel's
OpenCrypto framework to provide AES-GCM encryption of TLS frames. As
a result, software TLS is now a bit of a misnomer as it can make use
of hardware crypto accelerators.
Once software encryption has finished, the TLS frame mbufs are marked
ready via pru_ready(). At this point, the encrypted data appears as
regular payload to the TCP stack stored in unmapped mbufs.
ifnet TLS permits a NIC to offload the TLS encryption and TCP
segmentation. In this mode, a new send tag type (IF_SND_TAG_TYPE_TLS)
is allocated on the interface a socket is routed over and associated
with a TLS session. TLS records for a TLS session using ifnet TLS are
not marked M_NOTREADY but are passed down the stack unencrypted. The
ip_output_send() and ip6_output_send() helper functions that apply
send tags to outbound IP packets verify that the send tag of the TLS
record matches the outbound interface. If so, the packet is tagged
with the TLS send tag and sent to the interface. The NIC device
driver must recognize packets with the TLS send tag and schedule them
for TLS encryption and TCP segmentation. If the the outbound
interface does not match the interface in the TLS send tag, the packet
is dropped. In addition, a task is scheduled to refresh the TLS send
tag for the TLS session. If a new TLS send tag cannot be allocated,
the connection is dropped. If a new TLS send tag is allocated,
however, subsequent packets will be tagged with the correct TLS send
tag. (This latter case has been tested by configuring both ports of a
Chelsio T6 in a lagg and failing over from one port to another. As
the connections migrated to the new port, new TLS send tags were
allocated for the new port and connections resumed without being
dropped.)
ifnet TLS can be enabled and disabled on supported network interfaces
via new '[-]txtls[46]' options to ifconfig(8). ifnet TLS is supported
across both vlan devices and lagg interfaces using failover, lacp with
flowid enabled, or lacp with flowid enabled.
Applications may request the current KTLS mode of a connection via a
new TCP_TXTLS_MODE socket option. They can also use this socket
option to toggle between software and ifnet TLS modes.
In addition, a testing tool is available in tools/tools/switch_tls.
This is modeled on tcpdrop and uses similar syntax. However, instead
of dropping connections, -s is used to force KTLS connections to
switch to software TLS and -i is used to switch to ifnet TLS.
Various sysctls and counters are available under the kern.ipc.tls
sysctl node. The kern.ipc.tls.enable node must be set to true to
enable KTLS (it is off by default). The use of unmapped mbufs must
also be enabled via kern.ipc.mb_use_ext_pgs to enable KTLS.
KTLS is enabled via the KERN_TLS kernel option.
This patch is the culmination of years of work by several folks
including Scott Long and Randall Stewart for the original design and
implementation; Drew Gallatin for several optimizations including the
use of ext_pgs mbufs, the M_NOTREADY mechanism for TLS records
awaiting software encryption, and pluggable software crypto backends;
and John Baldwin for modifications to support hardware TLS offload.
Reviewed by: gallatin, hselasky, rrs
Obtained from: Netflix
Sponsored by: Netflix, Chelsio Communications
Differential Revision: https://reviews.freebsd.org/D21277
2019-08-27 00:01:56 +00:00
|
|
|
struct if_snd_tag_alloc_tls tls;
|
2020-10-29 00:23:16 +00:00
|
|
|
struct if_snd_tag_alloc_tls_rate_limit tls_rate_limit;
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
union if_snd_tag_modify_params {
|
|
|
|
struct if_snd_tag_rate_limit_params rate_limit;
|
2017-09-06 13:56:18 +00:00
|
|
|
struct if_snd_tag_rate_limit_params unlimited;
|
2020-10-29 00:23:16 +00:00
|
|
|
struct if_snd_tag_rate_limit_params tls_rate_limit;
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
union if_snd_tag_query_params {
|
|
|
|
struct if_snd_tag_rate_limit_params rate_limit;
|
2017-09-06 13:56:18 +00:00
|
|
|
struct if_snd_tag_rate_limit_params unlimited;
|
2020-10-29 00:23:16 +00:00
|
|
|
struct if_snd_tag_rate_limit_params tls_rate_limit;
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
};
|
|
|
|
|
2019-08-01 14:17:31 +00:00
|
|
|
/* Query return flags */
|
|
|
|
#define RT_NOSUPPORT 0x00000000 /* Not supported */
|
|
|
|
#define RT_IS_INDIRECT 0x00000001 /*
|
|
|
|
* Interface like a lagg, select
|
|
|
|
* the actual interface for
|
|
|
|
* capabilities.
|
|
|
|
*/
|
|
|
|
#define RT_IS_SELECTABLE 0x00000002 /*
|
|
|
|
* No rate table, you select
|
|
|
|
* rates and the first
|
|
|
|
* number_of_rates are created.
|
|
|
|
*/
|
|
|
|
#define RT_IS_FIXED_TABLE 0x00000004 /* A fixed table is attached */
|
|
|
|
#define RT_IS_UNUSABLE 0x00000008 /* It is not usable for this */
|
2020-02-26 13:48:33 +00:00
|
|
|
#define RT_IS_SETUP_REQ 0x00000010 /* The interface setup must be called before use */
|
2019-08-01 14:17:31 +00:00
|
|
|
|
|
|
|
struct if_ratelimit_query_results {
|
|
|
|
const uint64_t *rate_table; /* Pointer to table if present */
|
|
|
|
uint32_t flags; /* Flags indicating results */
|
|
|
|
uint32_t max_flows; /* Max flows using, 0=unlimited */
|
|
|
|
uint32_t number_of_rates; /* How many unique rates can be created */
|
|
|
|
uint32_t min_segment_burst; /* The amount the adapter bursts at each send */
|
|
|
|
};
|
|
|
|
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
typedef int (if_snd_tag_alloc_t)(struct ifnet *, union if_snd_tag_alloc_params *,
|
|
|
|
struct m_snd_tag **);
|
|
|
|
typedef int (if_snd_tag_modify_t)(struct m_snd_tag *, union if_snd_tag_modify_params *);
|
|
|
|
typedef int (if_snd_tag_query_t)(struct m_snd_tag *, union if_snd_tag_query_params *);
|
|
|
|
typedef void (if_snd_tag_free_t)(struct m_snd_tag *);
|
2021-01-26 16:54:42 +00:00
|
|
|
typedef struct m_snd_tag *(if_next_send_tag_t)(struct m_snd_tag *);
|
2019-08-01 14:17:31 +00:00
|
|
|
typedef void (if_ratelimit_query_t)(struct ifnet *,
|
|
|
|
struct if_ratelimit_query_results *);
|
2020-02-26 13:48:33 +00:00
|
|
|
typedef int (if_ratelimit_setup_t)(struct ifnet *, uint64_t, uint32_t);
|
2015-12-31 05:03:27 +00:00
|
|
|
|
1997-01-03 19:50:26 +00:00
|
|
|
/*
|
|
|
|
* Structure defining a network interface.
|
|
|
|
*/
|
|
|
|
struct ifnet {
|
2013-10-31 15:46:10 +00:00
|
|
|
/* General book keeping of interface lists. */
|
2018-05-24 23:21:23 +00:00
|
|
|
CK_STAILQ_ENTRY(ifnet) if_link; /* all struct ifnets are chained (CK_) */
|
2013-10-31 15:46:10 +00:00
|
|
|
LIST_ENTRY(ifnet) if_clones; /* interfaces of a cloner */
|
2018-05-24 23:21:23 +00:00
|
|
|
CK_STAILQ_HEAD(, ifg_list) if_groups; /* linked list of groups per if (CK_) */
|
2013-10-31 15:46:10 +00:00
|
|
|
/* protected by if_addr_lock */
|
|
|
|
u_char if_alloctype; /* if_type at time of allocation */
|
2019-04-22 19:24:21 +00:00
|
|
|
uint8_t if_numa_domain; /* NUMA domain of device */
|
2013-10-31 15:46:10 +00:00
|
|
|
/* Driver and protocol specific information that remains stable. */
|
1997-01-03 19:50:26 +00:00
|
|
|
void *if_softc; /* pointer to driver state */
|
2013-10-31 15:46:10 +00:00
|
|
|
void *if_llsoftc; /* link layer softc */
|
2005-06-10 16:49:24 +00:00
|
|
|
void *if_l2com; /* pointer to protocol bits */
|
2003-10-31 18:32:15 +00:00
|
|
|
const char *if_dname; /* driver name */
|
|
|
|
int if_dunit; /* unit or IF_DUNIT_NONE */
|
2013-10-31 15:46:10 +00:00
|
|
|
u_short if_index; /* numeric abbreviation for this if */
|
|
|
|
short if_index_reserved; /* spare space to grow if_index */
|
|
|
|
char if_xname[IFNAMSIZ]; /* external name (name + unit) */
|
|
|
|
char *if_description; /* interface description */
|
|
|
|
|
|
|
|
/* Variable fields that are touched by the stack and drivers. */
|
|
|
|
int if_flags; /* up/down, broadcast, etc. */
|
2014-08-31 13:30:54 +00:00
|
|
|
int if_drv_flags; /* driver-managed status flags */
|
2013-10-31 15:46:10 +00:00
|
|
|
int if_capabilities; /* interface features & capabilities */
|
|
|
|
int if_capenable; /* enabled features & capabilities */
|
|
|
|
void *if_linkmib; /* link-type-specific MIB data */
|
|
|
|
size_t if_linkmiblen; /* length of above data */
|
Start to address a number of races relating to use of ifnet pointers
after the corresponding interface has been destroyed:
(1) Add an ifnet refcount, ifp->if_refcount. Initialize it to 1 in
if_alloc(), and modify if_free_type() to decrement and check the
refcount.
(2) Add new if_ref() and if_rele() interfaces to allow kernel code
walking global interface lists to release IFNET_[RW]LOCK() yet
keep the ifnet stable. Currently, if_rele() is a no-op wrapper
around if_free(), but this may change in the future.
(3) Add new ifnet field, if_alloctype, which caches the type passed
to if_alloc(), but unlike if_type, won't be changed by drivers.
This allows asynchronous free's of the interface after the
driver has released it to still use the right type. Use that
instead of the type passed to if_free_type(), but assert that
they are the same (might have to rethink this if that doesn't
work out).
(4) Add a new ifnet_byindex_ref(), which looks up an interface by
index and returns a reference rather than a pointer to it.
(5) Fix if_alloc() to fully initialize the if_addr_mtx before hooking
up the ifnet to global lists.
(6) Modify sysctls in if_mib.c to use ifnet_byindex_ref() and release
the ifnet when done.
When this change is MFC'd, it will need to replace if_ispare fields
rather than adding new fields in order to avoid breaking the binary
interface. Once this change is MFC'd, if_free_type() should be
removed, as its 'type' argument is now optional.
This refcount is not appropriate for counting mbuf pkthdr references,
and also not for counting entry into the device driver via ifnet
function pointers. An rmlock may be appropriate for the latter.
Rather, this is about ensuring data structure stability when reaching
an ifnet via global ifnet lists and tables followed by copy in or out
of userspace.
MFC after: 3 weeks
Reported by: mdtancsa
Reviewed by: brooks
2009-04-21 22:43:32 +00:00
|
|
|
u_int if_refcount; /* reference count */
|
2014-08-31 06:46:21 +00:00
|
|
|
|
|
|
|
/* These fields are shared with struct if_data. */
|
|
|
|
uint8_t if_type; /* ethernet, tokenring, etc */
|
|
|
|
uint8_t if_addrlen; /* media address length */
|
|
|
|
uint8_t if_hdrlen; /* media header length */
|
|
|
|
uint8_t if_link_state; /* current link state */
|
|
|
|
uint32_t if_mtu; /* maximum transmission unit */
|
|
|
|
uint32_t if_metric; /* routing metric (external only) */
|
|
|
|
uint64_t if_baudrate; /* linespeed */
|
|
|
|
uint64_t if_hwassist; /* HW offload capabilities, see IFCAP */
|
|
|
|
time_t if_epoch; /* uptime at attach or stat reset */
|
|
|
|
struct timeval if_lastchange; /* time of last administrative change */
|
|
|
|
|
2013-10-31 15:46:10 +00:00
|
|
|
struct ifaltq if_snd; /* output queue (includes altq) */
|
|
|
|
struct task if_linktask; /* task for link change events */
|
2019-10-29 17:36:06 +00:00
|
|
|
struct task if_addmultitask; /* task for SIOCADDMULTI */
|
2013-10-31 15:46:10 +00:00
|
|
|
|
|
|
|
/* Addresses of different protocol families assigned to this if. */
|
2018-05-18 20:13:34 +00:00
|
|
|
struct mtx if_addr_lock; /* lock to protect address lists */
|
2004-04-15 19:45:59 +00:00
|
|
|
/*
|
|
|
|
* if_addrhead is the list of all addresses associated to
|
2004-04-16 10:28:54 +00:00
|
|
|
* an interface.
|
|
|
|
* Some code in the kernel assumes that first element
|
|
|
|
* of the list has type AF_LINK, and contains sockaddr_dl
|
|
|
|
* addresses which store the link-level address and the name
|
2004-04-15 19:45:59 +00:00
|
|
|
* of the interface.
|
2004-04-16 10:28:54 +00:00
|
|
|
* However, access to the AF_LINK address through this
|
2005-11-11 16:04:59 +00:00
|
|
|
* field is deprecated. Use if_addr or ifaddr_byindex() instead.
|
2004-04-15 19:45:59 +00:00
|
|
|
*/
|
2013-10-31 15:46:10 +00:00
|
|
|
struct ifaddrhead if_addrhead; /* linked list of addresses per if */
|
1997-01-07 19:15:32 +00:00
|
|
|
struct ifmultihead if_multiaddrs; /* multicast addresses configured */
|
|
|
|
int if_amcount; /* number of all-multicast requests */
|
2013-10-31 15:46:10 +00:00
|
|
|
struct ifaddr *if_addr; /* pointer to link-level address */
|
2017-05-10 22:13:47 +00:00
|
|
|
void *if_hw_addr; /* hardware link-level address */
|
2013-10-31 15:46:10 +00:00
|
|
|
const u_int8_t *if_broadcastaddr; /* linklevel broadcast bytestring */
|
2018-05-23 21:02:14 +00:00
|
|
|
struct mtx if_afdata_lock;
|
2013-10-31 15:46:10 +00:00
|
|
|
void *if_afdata[AF_MAX];
|
|
|
|
int if_afdata_initialized;
|
|
|
|
|
|
|
|
/* Additional features hung off the interface. */
|
|
|
|
u_int if_fib; /* interface FIB */
|
|
|
|
struct vnet *if_vnet; /* pointer to network stack instance */
|
|
|
|
struct vnet *if_home_vnet; /* where this ifnet originates from */
|
|
|
|
struct ifvlantrunk *if_vlantrunk; /* pointer to 802.1q data */
|
|
|
|
struct bpf_if *if_bpf; /* packet filter structure */
|
|
|
|
int if_pcount; /* number of promiscuous listeners */
|
|
|
|
void *if_bridge; /* bridge glue */
|
|
|
|
void *if_lagg; /* lagg glue */
|
|
|
|
void *if_pf_kif; /* pf glue */
|
|
|
|
struct carp_if *if_carp; /* carp interface structure */
|
|
|
|
struct label *if_label; /* interface MAC label */
|
2014-08-31 11:33:19 +00:00
|
|
|
struct netmap_adapter *if_netmap; /* netmap(4) softc */
|
2013-10-31 15:46:10 +00:00
|
|
|
|
|
|
|
/* Various procedures of the layer2 encapsulation and drivers. */
|
1997-01-03 19:50:26 +00:00
|
|
|
int (*if_output) /* output routine (enqueue) */
|
2013-04-26 12:50:32 +00:00
|
|
|
(struct ifnet *, struct mbuf *, const struct sockaddr *,
|
2009-04-16 20:30:28 +00:00
|
|
|
struct route *);
|
2002-11-14 23:36:28 +00:00
|
|
|
void (*if_input) /* input routine (from h/w driver) */
|
|
|
|
(struct ifnet *, struct mbuf *);
|
2018-05-11 05:00:40 +00:00
|
|
|
struct mbuf *(*if_bridge_input)(struct ifnet *, struct mbuf *);
|
|
|
|
int (*if_bridge_output)(struct ifnet *, struct mbuf *, struct sockaddr *,
|
|
|
|
struct rtentry *);
|
|
|
|
void (*if_bridge_linkstate)(struct ifnet *ifp);
|
2014-06-02 17:54:39 +00:00
|
|
|
if_start_fn_t if_start; /* initiate output routine */
|
|
|
|
if_ioctl_fn_t if_ioctl; /* ioctl routine */
|
|
|
|
if_init_fn_t if_init; /* Init routine */
|
1997-01-07 19:15:32 +00:00
|
|
|
int (*if_resolvemulti) /* validate/resolve multicast */
|
2002-03-19 21:54:18 +00:00
|
|
|
(struct ifnet *, struct sockaddr **, struct sockaddr *);
|
2020-02-26 13:48:33 +00:00
|
|
|
if_qflush_fn_t if_qflush; /* flush any queue */
|
2014-06-02 17:54:39 +00:00
|
|
|
if_transmit_fn_t if_transmit; /* initiate output routine */
|
|
|
|
|
Introduce an infrastructure for dismantling vnet instances.
Vnet modules and protocol domains may now register destructor
functions to clean up and release per-module state. The destructor
mechanisms can be triggered by invoking "vimage -d", or a future
equivalent command which will be provided via the new jail framework.
While this patch introduces numerous placeholder destructor functions,
many of those are currently incomplete, thus leaking memory or (even
worse) failing to stop all running timers. Many of such issues are
already known and will be incrementaly fixed over the next weeks in
smaller incremental commits.
Apart from introducing new fields in structs ifnet, domain, protosw
and vnet_net, which requires the kernel and modules to be rebuilt, this
change should have no impact on nooptions VIMAGE builds, since vnet
destructors can only be called in VIMAGE kernels. Moreover,
destructor functions should be in general compiled in only in
options VIMAGE builds, except for kernel modules which can be safely
kldunloaded at run time.
Bump __FreeBSD_version to 800097.
Reviewed by: bz, julian
Approved by: rwatson, kib (re), julian (mentor)
2009-06-08 17:15:40 +00:00
|
|
|
void (*if_reassign) /* reassign to vnet routine */
|
|
|
|
(struct ifnet *, struct vnet *, char *);
|
2014-08-31 06:46:21 +00:00
|
|
|
if_get_counter_t if_get_counter; /* get counter values */
|
2015-12-31 05:03:27 +00:00
|
|
|
int (*if_requestencap) /* make link header from request */
|
|
|
|
(struct ifnet *, struct if_encap_req *);
|
2009-03-01 12:42:54 +00:00
|
|
|
|
2014-09-28 08:57:07 +00:00
|
|
|
/* Statistics. */
|
|
|
|
counter_u64_t if_counters[IFCOUNTERS];
|
|
|
|
|
2013-10-31 15:46:10 +00:00
|
|
|
/* Stuff that's only temporary and doesn't belong here. */
|
2014-08-31 06:46:21 +00:00
|
|
|
|
2014-11-03 13:02:58 +00:00
|
|
|
/*
|
2014-11-11 12:05:59 +00:00
|
|
|
* Network adapter TSO limits:
|
|
|
|
* ===========================
|
|
|
|
*
|
|
|
|
* If the "if_hw_tsomax" field is zero the maximum segment
|
|
|
|
* length limit does not apply. If the "if_hw_tsomaxsegcount"
|
|
|
|
* or the "if_hw_tsomaxsegsize" field is zero the TSO segment
|
|
|
|
* count limit does not apply. If all three fields are zero,
|
|
|
|
* there is no TSO limit.
|
|
|
|
*
|
2015-09-14 08:36:22 +00:00
|
|
|
* NOTE: The TSO limits should reflect the values used in the
|
|
|
|
* BUSDMA tag a network adapter is using to load a mbuf chain
|
|
|
|
* for transmission. The TCP/IP network stack will subtract
|
|
|
|
* space for all linklevel and protocol level headers and
|
|
|
|
* ensure that the full mbuf chain passed to the network
|
|
|
|
* adapter fits within the given limits.
|
2014-11-03 13:02:58 +00:00
|
|
|
*/
|
2014-11-11 12:05:59 +00:00
|
|
|
u_int if_hw_tsomax; /* TSO maximum size in bytes */
|
2014-11-03 13:02:58 +00:00
|
|
|
u_int if_hw_tsomaxsegcount; /* TSO maximum segment count */
|
|
|
|
u_int if_hw_tsomaxsegsize; /* TSO maximum segment size in bytes */
|
2014-09-22 08:27:27 +00:00
|
|
|
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
/*
|
|
|
|
* Network adapter send tag support:
|
|
|
|
*/
|
|
|
|
if_snd_tag_alloc_t *if_snd_tag_alloc;
|
|
|
|
if_snd_tag_modify_t *if_snd_tag_modify;
|
|
|
|
if_snd_tag_query_t *if_snd_tag_query;
|
|
|
|
if_snd_tag_free_t *if_snd_tag_free;
|
2021-01-26 16:54:42 +00:00
|
|
|
if_next_send_tag_t *if_next_snd_tag;
|
2019-08-01 14:17:31 +00:00
|
|
|
if_ratelimit_query_t *if_ratelimit_query;
|
2020-02-26 13:48:33 +00:00
|
|
|
if_ratelimit_setup_t *if_ratelimit_setup;
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
|
2018-03-27 15:29:32 +00:00
|
|
|
/* Ethernet PCP */
|
|
|
|
uint8_t if_pcp;
|
|
|
|
|
2018-05-06 00:38:29 +00:00
|
|
|
/*
|
2019-10-17 16:23:03 +00:00
|
|
|
* Debugnet (Netdump) hooks to be called while in db/panic.
|
2018-05-06 00:38:29 +00:00
|
|
|
*/
|
2019-10-17 16:23:03 +00:00
|
|
|
struct debugnet_methods *if_debugnet_methods;
|
2018-05-23 21:02:14 +00:00
|
|
|
struct epoch_context if_epoch_ctx;
|
2018-05-06 00:38:29 +00:00
|
|
|
|
2014-08-31 06:46:21 +00:00
|
|
|
/*
|
|
|
|
* Spare fields to be added before branching a stable branch, so
|
|
|
|
* that structure can be enhanced without changing the kernel
|
|
|
|
* binary interface.
|
2009-03-01 12:42:54 +00:00
|
|
|
*/
|
Implement kernel support for hardware rate limited sockets.
- Add RATELIMIT kernel configuration keyword which must be set to
enable the new functionality.
- Add support for hardware driven, Receive Side Scaling, RSS aware, rate
limited sendqueues and expose the functionality through the already
established SO_MAX_PACING_RATE setsockopt(). The API support rates in
the range from 1 to 4Gbytes/s which are suitable for regular TCP and
UDP streams. The setsockopt(2) manual page has been updated.
- Add rate limit function callback API to "struct ifnet" which supports
the following operations: if_snd_tag_alloc(), if_snd_tag_modify(),
if_snd_tag_query() and if_snd_tag_free().
- Add support to ifconfig to view, set and clear the IFCAP_TXRTLMT
flag, which tells if a network driver supports rate limiting or not.
- This patch also adds support for rate limiting through VLAN and LAGG
intermediate network devices.
- How rate limiting works:
1) The userspace application calls setsockopt() after accepting or
making a new connection to set the rate which is then stored in the
socket structure in the kernel. Later on when packets are transmitted
a check is made in the transmit path for rate changes. A rate change
implies a non-blocking ifp->if_snd_tag_alloc() call will be made to the
destination network interface, which then sets up a custom sendqueue
with the given rate limitation parameter. A "struct m_snd_tag" pointer is
returned which serves as a "snd_tag" hint in the m_pkthdr for the
subsequently transmitted mbufs.
2) When the network driver sees the "m->m_pkthdr.snd_tag" different
from NULL, it will move the packets into a designated rate limited sendqueue
given by the snd_tag pointer. It is up to the individual drivers how the rate
limited traffic will be rate limited.
3) Route changes are detected by the NIC drivers in the ifp->if_transmit()
routine when the ifnet pointer in the incoming snd_tag mismatches the
one of the network interface. The network adapter frees the mbuf and
returns EAGAIN which causes the ip_output() to release and clear the send
tag. Upon next ip_output() a new "snd_tag" will be tried allocated.
4) When the PCB is detached the custom sendqueue will be released by a
non-blocking ifp->if_snd_tag_free() call to the currently bound network
interface.
Reviewed by: wblock (manpages), adrian, gallatin, scottl (network)
Differential Revision: https://reviews.freebsd.org/D3687
Sponsored by: Mellanox Technologies
MFC after: 3 months
2017-01-18 13:31:17 +00:00
|
|
|
int if_ispare[4]; /* general use */
|
1997-01-03 19:50:26 +00:00
|
|
|
};
|
Lock down the network interface queues. The queue mutex must be obtained
before adding/removing packets from the queue. Also, the if_obytes and
if_omcasts fields should only be manipulated under protection of the mutex.
IF_ENQUEUE, IF_PREPEND, and IF_DEQUEUE perform all necessary locking on
the queue. An IF_LOCK macro is provided, as well as the old (mutex-less)
versions of the macros in the form _IF_ENQUEUE, _IF_QFULL, for code which
needs them, but their use is discouraged.
Two new macros are introduced: IF_DRAIN() to drain a queue, and IF_HANDOFF,
which takes care of locking/enqueue, and also statistics updating/start
if necessary.
2000-11-25 07:35:38 +00:00
|
|
|
|
1999-11-22 02:45:11 +00:00
|
|
|
/* for compatibility with other BSDs */
|
2006-08-04 21:27:40 +00:00
|
|
|
#define if_name(ifp) ((ifp)->if_xname)
|
1999-11-22 02:45:11 +00:00
|
|
|
|
2019-04-22 19:24:21 +00:00
|
|
|
#define IF_NODOM 255
|
2005-08-02 17:43:35 +00:00
|
|
|
/*
|
|
|
|
* Locks for address lists on the network interface.
|
|
|
|
*/
|
2018-05-18 20:13:34 +00:00
|
|
|
#define IF_ADDR_LOCK_INIT(if) mtx_init(&(if)->if_addr_lock, "if_addr_lock", NULL, MTX_DEF)
|
|
|
|
#define IF_ADDR_LOCK_DESTROY(if) mtx_destroy(&(if)->if_addr_lock)
|
|
|
|
|
|
|
|
#define IF_ADDR_WLOCK(if) mtx_lock(&(if)->if_addr_lock)
|
|
|
|
#define IF_ADDR_WUNLOCK(if) mtx_unlock(&(if)->if_addr_lock)
|
2018-07-04 02:47:16 +00:00
|
|
|
#define IF_ADDR_LOCK_ASSERT(if) MPASS(in_epoch(net_epoch_preempt) || mtx_owned(&(if)->if_addr_lock))
|
2018-05-18 20:13:34 +00:00
|
|
|
#define IF_ADDR_WLOCK_ASSERT(if) mtx_assert(&(if)->if_addr_lock, MA_OWNED)
|
2005-08-02 17:43:35 +00:00
|
|
|
|
1999-12-29 04:46:21 +00:00
|
|
|
#ifdef _KERNEL
|
2010-01-18 20:34:00 +00:00
|
|
|
/* interface link layer address change event */
|
|
|
|
typedef void (*iflladdr_event_handler_t)(void *, struct ifnet *);
|
|
|
|
EVENTHANDLER_DECLARE(iflladdr_event, iflladdr_event_handler_t);
|
2004-02-26 04:27:55 +00:00
|
|
|
/* interface address change event */
|
|
|
|
typedef void (*ifaddr_event_handler_t)(void *, struct ifnet *);
|
|
|
|
EVENTHANDLER_DECLARE(ifaddr_event, ifaddr_event_handler_t);
|
2018-10-21 15:02:06 +00:00
|
|
|
typedef void (*ifaddr_event_ext_handler_t)(void *, struct ifnet *,
|
|
|
|
struct ifaddr *, int);
|
|
|
|
EVENTHANDLER_DECLARE(ifaddr_event_ext, ifaddr_event_ext_handler_t);
|
|
|
|
#define IFADDR_EVENT_ADD 0
|
|
|
|
#define IFADDR_EVENT_DEL 1
|
2004-02-26 04:27:55 +00:00
|
|
|
/* new interface arrival event */
|
|
|
|
typedef void (*ifnet_arrival_event_handler_t)(void *, struct ifnet *);
|
|
|
|
EVENTHANDLER_DECLARE(ifnet_arrival_event, ifnet_arrival_event_handler_t);
|
|
|
|
/* interface departure event */
|
|
|
|
typedef void (*ifnet_departure_event_handler_t)(void *, struct ifnet *);
|
|
|
|
EVENTHANDLER_DECLARE(ifnet_departure_event, ifnet_departure_event_handler_t);
|
2011-03-21 09:40:01 +00:00
|
|
|
/* Interface link state change event */
|
|
|
|
typedef void (*ifnet_link_event_handler_t)(void *, struct ifnet *, int);
|
|
|
|
EVENTHANDLER_DECLARE(ifnet_link_event, ifnet_link_event_handler_t);
|
2017-01-28 07:26:42 +00:00
|
|
|
/* Interface up/down event */
|
|
|
|
#define IFNET_EVENT_UP 0
|
|
|
|
#define IFNET_EVENT_DOWN 1
|
2018-04-26 08:58:27 +00:00
|
|
|
#define IFNET_EVENT_PCP 2 /* priority code point, PCP */
|
|
|
|
|
2017-01-28 07:26:42 +00:00
|
|
|
typedef void (*ifnet_event_fn)(void *, struct ifnet *ifp, int event);
|
|
|
|
EVENTHANDLER_DECLARE(ifnet_event, ifnet_event_fn);
|
2004-02-26 04:27:55 +00:00
|
|
|
|
2006-06-19 22:20:45 +00:00
|
|
|
/*
|
|
|
|
* interface groups
|
|
|
|
*/
|
|
|
|
struct ifg_group {
|
|
|
|
char ifg_group[IFNAMSIZ];
|
|
|
|
u_int ifg_refcnt;
|
|
|
|
void *ifg_pf_kif;
|
2018-05-24 23:21:23 +00:00
|
|
|
CK_STAILQ_HEAD(, ifg_member) ifg_members; /* (CK_) */
|
|
|
|
CK_STAILQ_ENTRY(ifg_group) ifg_next; /* (CK_) */
|
2006-06-19 22:20:45 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
struct ifg_member {
|
2018-05-24 23:21:23 +00:00
|
|
|
CK_STAILQ_ENTRY(ifg_member) ifgm_next; /* (CK_) */
|
2006-06-19 22:20:45 +00:00
|
|
|
struct ifnet *ifgm_ifp;
|
|
|
|
};
|
|
|
|
|
|
|
|
struct ifg_list {
|
|
|
|
struct ifg_group *ifgl_group;
|
2018-05-24 23:21:23 +00:00
|
|
|
CK_STAILQ_ENTRY(ifg_list) ifgl_next; /* (CK_) */
|
2006-06-19 22:20:45 +00:00
|
|
|
};
|
|
|
|
|
2013-10-28 20:32:05 +00:00
|
|
|
#ifdef _SYS_EVENTHANDLER_H_
|
2006-06-19 22:20:45 +00:00
|
|
|
/* group attach event */
|
|
|
|
typedef void (*group_attach_event_handler_t)(void *, struct ifg_group *);
|
|
|
|
EVENTHANDLER_DECLARE(group_attach_event, group_attach_event_handler_t);
|
|
|
|
/* group detach event */
|
|
|
|
typedef void (*group_detach_event_handler_t)(void *, struct ifg_group *);
|
|
|
|
EVENTHANDLER_DECLARE(group_detach_event, group_detach_event_handler_t);
|
|
|
|
/* group change event */
|
|
|
|
typedef void (*group_change_event_handler_t)(void *, const char *);
|
|
|
|
EVENTHANDLER_DECLARE(group_change_event, group_change_event_handler_t);
|
2013-10-28 20:32:05 +00:00
|
|
|
#endif /* _SYS_EVENTHANDLER_H_ */
|
2006-06-19 22:20:45 +00:00
|
|
|
|
2003-10-24 16:57:59 +00:00
|
|
|
#define IF_AFDATA_LOCK_INIT(ifp) \
|
2018-05-23 21:02:14 +00:00
|
|
|
mtx_init(&(ifp)->if_afdata_lock, "if_afdata", NULL, MTX_DEF)
|
2003-10-24 16:57:59 +00:00
|
|
|
|
2018-05-23 21:02:14 +00:00
|
|
|
#define IF_AFDATA_WLOCK(ifp) mtx_lock(&(ifp)->if_afdata_lock)
|
|
|
|
#define IF_AFDATA_WUNLOCK(ifp) mtx_unlock(&(ifp)->if_afdata_lock)
|
2008-12-17 00:11:56 +00:00
|
|
|
#define IF_AFDATA_LOCK(ifp) IF_AFDATA_WLOCK(ifp)
|
|
|
|
#define IF_AFDATA_UNLOCK(ifp) IF_AFDATA_WUNLOCK(ifp)
|
2018-05-23 21:02:14 +00:00
|
|
|
#define IF_AFDATA_TRYLOCK(ifp) mtx_trylock(&(ifp)->if_afdata_lock)
|
|
|
|
#define IF_AFDATA_DESTROY(ifp) mtx_destroy(&(ifp)->if_afdata_lock)
|
2008-12-17 00:11:56 +00:00
|
|
|
|
2018-07-04 02:47:16 +00:00
|
|
|
#define IF_AFDATA_LOCK_ASSERT(ifp) MPASS(in_epoch(net_epoch_preempt) || mtx_owned(&(ifp)->if_afdata_lock))
|
2018-05-23 21:02:14 +00:00
|
|
|
#define IF_AFDATA_WLOCK_ASSERT(ifp) mtx_assert(&(ifp)->if_afdata_lock, MA_OWNED)
|
|
|
|
#define IF_AFDATA_UNLOCK_ASSERT(ifp) mtx_assert(&(ifp)->if_afdata_lock, MA_NOTOWNED)
|
This main goals of this project are:
1. separating L2 tables (ARP, NDP) from the L3 routing tables
2. removing as much locking dependencies among these layers as
possible to allow for some parallelism in the search operations
3. simplify the logic in the routing code,
The most notable end result is the obsolescent of the route
cloning (RTF_CLONING) concept, which translated into code reduction
in both IPv4 ARP and IPv6 NDP related modules, and size reduction in
struct rtentry{}. The change in design obsoletes the semantics of
RTF_CLONING, RTF_WASCLONE and RTF_LLINFO routing flags. The userland
applications such as "arp" and "ndp" have been modified to reflect
those changes. The output from "netstat -r" shows only the routing
entries.
Quite a few developers have contributed to this project in the
past: Glebius Smirnoff, Luigi Rizzo, Alessandro Cerri, and
Andre Oppermann. And most recently:
- Kip Macy revised the locking code completely, thus completing
the last piece of the puzzle, Kip has also been conducting
active functional testing
- Sam Leffler has helped me improving/refactoring the code, and
provided valuable reviews
- Julian Elischer setup the perforce tree for me and has helped
me maintaining that branch before the svn conversion
2008-12-15 06:10:57 +00:00
|
|
|
|
1999-08-06 13:53:03 +00:00
|
|
|
/*
|
|
|
|
* 72 was chosen below because it is the size of a TCP/IP
|
|
|
|
* header (40) + the minimum mss (32).
|
|
|
|
*/
|
|
|
|
#define IF_MINMTU 72
|
|
|
|
#define IF_MAXMTU 65535
|
|
|
|
|
2012-06-19 07:34:13 +00:00
|
|
|
#define TOEDEV(ifp) ((ifp)->if_llsoftc)
|
|
|
|
|
1997-01-03 19:50:26 +00:00
|
|
|
/*
|
|
|
|
* The ifaddr structure contains information about one address
|
|
|
|
* of an interface. They are maintained by the different address families,
|
|
|
|
* are allocated and attached when an address is set, and are linked
|
|
|
|
* together so all addresses for an interface can be located.
|
2004-04-15 19:45:59 +00:00
|
|
|
*
|
|
|
|
* NOTE: a 'struct ifaddr' is always at the beginning of a larger
|
|
|
|
* chunk of malloc'ed memory, where we store the three addresses
|
|
|
|
* (ifa_addr, ifa_dstaddr and ifa_netmask) referenced here.
|
1997-01-03 19:50:26 +00:00
|
|
|
*/
|
|
|
|
struct ifaddr {
|
|
|
|
struct sockaddr *ifa_addr; /* address of interface */
|
|
|
|
struct sockaddr *ifa_dstaddr; /* other end of p-to-p link */
|
|
|
|
#define ifa_broadaddr ifa_dstaddr /* broadcast address interface */
|
|
|
|
struct sockaddr *ifa_netmask; /* used to determine subnet */
|
|
|
|
struct ifnet *ifa_ifp; /* back-pointer to interface */
|
A major overhaul of the CARP implementation. The ip_carp.c was started
from scratch, copying needed functionality from the old implemenation
on demand, with a thorough review of all code. The main change is that
interface layer has been removed from the CARP. Now redundant addresses
are configured exactly on the interfaces, they run on.
The CARP configuration itself is, as before, configured and read via
SIOCSVH/SIOCGVH ioctls. A new prefix created with SIOCAIFADDR or
SIOCAIFADDR_IN6 may now be configured to a particular virtual host id,
which makes the prefix redundant.
ifconfig(8) semantics has been changed too: now one doesn't need
to clone carpXX interface, he/she should directly configure a vhid
on a Ethernet interface.
To supply vhid data from the kernel to an application the getifaddrs(8)
function had been changed to pass ifam_data with each address. [1]
The new implementation definitely closes all PRs related to carp(4)
being an interface, and may close several others. It also allows
to run a single redundant IP per interface.
Big thanks to Bjoern Zeeb for his help with inet6 part of patch, for
idea on using ifam_data and for several rounds of reviewing!
PR: kern/117000, kern/126945, kern/126714, kern/120130, kern/117448
Reviewed by: bz
Submitted by: bz [1]
2011-12-16 12:16:56 +00:00
|
|
|
struct carp_softc *ifa_carp; /* pointer to CARP data */
|
2018-05-21 01:53:23 +00:00
|
|
|
CK_STAILQ_ENTRY(ifaddr) ifa_link; /* queue macro glue */
|
1997-01-03 19:50:26 +00:00
|
|
|
u_short ifa_flags; /* mostly rt_flags for cloning */
|
2015-02-19 23:16:10 +00:00
|
|
|
#define IFA_ROUTE RTF_UP /* route installed */
|
|
|
|
#define IFA_RTSELF RTF_HOST /* loopback route to self installed */
|
1999-05-16 17:09:20 +00:00
|
|
|
u_int ifa_refcnt; /* references to this structure */
|
2013-10-15 11:37:57 +00:00
|
|
|
|
|
|
|
counter_u64_t ifa_ipackets;
|
2020-02-26 13:48:33 +00:00
|
|
|
counter_u64_t ifa_opackets;
|
2013-10-15 11:37:57 +00:00
|
|
|
counter_u64_t ifa_ibytes;
|
|
|
|
counter_u64_t ifa_obytes;
|
2018-05-18 20:13:34 +00:00
|
|
|
struct epoch_context ifa_epoch_ctx;
|
1997-01-03 19:50:26 +00:00
|
|
|
};
|
|
|
|
|
2013-10-15 10:31:42 +00:00
|
|
|
struct ifaddr * ifa_alloc(size_t size, int flags);
|
2009-06-21 19:30:33 +00:00
|
|
|
void ifa_free(struct ifaddr *ifa);
|
|
|
|
void ifa_ref(struct ifaddr *ifa);
|
2021-03-30 14:03:28 +00:00
|
|
|
int __result_use_check ifa_try_ref(struct ifaddr *ifa);
|
2002-12-18 11:46:59 +00:00
|
|
|
|
1997-01-07 19:15:32 +00:00
|
|
|
/*
|
|
|
|
* Multicast address structure. This is analogous to the ifaddr
|
|
|
|
* structure except that it keeps track of multicast addresses.
|
|
|
|
*/
|
2018-08-15 20:23:08 +00:00
|
|
|
#define IFMA_F_ENQUEUED 0x1
|
1997-01-07 19:15:32 +00:00
|
|
|
struct ifmultiaddr {
|
2018-05-21 01:53:23 +00:00
|
|
|
CK_STAILQ_ENTRY(ifmultiaddr) ifma_link; /* queue macro glue */
|
1997-01-08 13:20:25 +00:00
|
|
|
struct sockaddr *ifma_addr; /* address this membership is for */
|
|
|
|
struct sockaddr *ifma_lladdr; /* link-layer translation, if any */
|
|
|
|
struct ifnet *ifma_ifp; /* back-pointer to interface */
|
|
|
|
u_int ifma_refcount; /* reference count */
|
2018-08-15 20:23:08 +00:00
|
|
|
int ifma_flags;
|
1997-01-08 13:20:25 +00:00
|
|
|
void *ifma_protospec; /* protocol-specific state, if any */
|
2007-03-20 00:36:10 +00:00
|
|
|
struct ifmultiaddr *ifma_llifma; /* pointer to ifma for ifma_lladdr */
|
2018-05-18 20:13:34 +00:00
|
|
|
struct epoch_context ifma_epoch_ctx;
|
1997-01-07 19:15:32 +00:00
|
|
|
};
|
|
|
|
|
2009-08-23 20:40:19 +00:00
|
|
|
extern struct sx ifnet_sxlock;
|
|
|
|
|
2020-11-25 10:56:38 +00:00
|
|
|
#define IFNET_WLOCK() sx_xlock(&ifnet_sxlock)
|
|
|
|
#define IFNET_WUNLOCK() sx_xunlock(&ifnet_sxlock)
|
|
|
|
#define IFNET_RLOCK_ASSERT() sx_assert(&ifnet_sxlock, SA_SLOCKED)
|
|
|
|
#define IFNET_WLOCK_ASSERT() sx_assert(&ifnet_sxlock, SA_XLOCKED)
|
2009-08-23 20:40:19 +00:00
|
|
|
#define IFNET_RLOCK() sx_slock(&ifnet_sxlock)
|
|
|
|
#define IFNET_RUNLOCK() sx_sunlock(&ifnet_sxlock)
|
2002-12-22 05:35:03 +00:00
|
|
|
|
Start to address a number of races relating to use of ifnet pointers
after the corresponding interface has been destroyed:
(1) Add an ifnet refcount, ifp->if_refcount. Initialize it to 1 in
if_alloc(), and modify if_free_type() to decrement and check the
refcount.
(2) Add new if_ref() and if_rele() interfaces to allow kernel code
walking global interface lists to release IFNET_[RW]LOCK() yet
keep the ifnet stable. Currently, if_rele() is a no-op wrapper
around if_free(), but this may change in the future.
(3) Add new ifnet field, if_alloctype, which caches the type passed
to if_alloc(), but unlike if_type, won't be changed by drivers.
This allows asynchronous free's of the interface after the
driver has released it to still use the right type. Use that
instead of the type passed to if_free_type(), but assert that
they are the same (might have to rethink this if that doesn't
work out).
(4) Add a new ifnet_byindex_ref(), which looks up an interface by
index and returns a reference rather than a pointer to it.
(5) Fix if_alloc() to fully initialize the if_addr_mtx before hooking
up the ifnet to global lists.
(6) Modify sysctls in if_mib.c to use ifnet_byindex_ref() and release
the ifnet when done.
When this change is MFC'd, it will need to replace if_ispare fields
rather than adding new fields in order to avoid breaking the binary
interface. Once this change is MFC'd, if_free_type() should be
removed, as its 'type' argument is now optional.
This refcount is not appropriate for counting mbuf pkthdr references,
and also not for counting entry into the device driver via ifnet
function pointers. An rmlock may be appropriate for the latter.
Rather, this is about ensuring data structure stability when reaching
an ifnet via global ifnet lists and tables followed by copy in or out
of userspace.
MFC after: 3 weeks
Reported by: mdtancsa
Reviewed by: brooks
2009-04-21 22:43:32 +00:00
|
|
|
/*
|
|
|
|
* Look up an ifnet given its index; the _ref variant also acquires a
|
|
|
|
* reference that must be freed using if_rele(). It is almost always a bug
|
2016-10-19 02:24:57 +00:00
|
|
|
* to call ifnet_byindex() instead of ifnet_byindex_ref().
|
Start to address a number of races relating to use of ifnet pointers
after the corresponding interface has been destroyed:
(1) Add an ifnet refcount, ifp->if_refcount. Initialize it to 1 in
if_alloc(), and modify if_free_type() to decrement and check the
refcount.
(2) Add new if_ref() and if_rele() interfaces to allow kernel code
walking global interface lists to release IFNET_[RW]LOCK() yet
keep the ifnet stable. Currently, if_rele() is a no-op wrapper
around if_free(), but this may change in the future.
(3) Add new ifnet field, if_alloctype, which caches the type passed
to if_alloc(), but unlike if_type, won't be changed by drivers.
This allows asynchronous free's of the interface after the
driver has released it to still use the right type. Use that
instead of the type passed to if_free_type(), but assert that
they are the same (might have to rethink this if that doesn't
work out).
(4) Add a new ifnet_byindex_ref(), which looks up an interface by
index and returns a reference rather than a pointer to it.
(5) Fix if_alloc() to fully initialize the if_addr_mtx before hooking
up the ifnet to global lists.
(6) Modify sysctls in if_mib.c to use ifnet_byindex_ref() and release
the ifnet when done.
When this change is MFC'd, it will need to replace if_ispare fields
rather than adding new fields in order to avoid breaking the binary
interface. Once this change is MFC'd, if_free_type() should be
removed, as its 'type' argument is now optional.
This refcount is not appropriate for counting mbuf pkthdr references,
and also not for counting entry into the device driver via ifnet
function pointers. An rmlock may be appropriate for the latter.
Rather, this is about ensuring data structure stability when reaching
an ifnet via global ifnet lists and tables followed by copy in or out
of userspace.
MFC after: 3 weeks
Reported by: mdtancsa
Reviewed by: brooks
2009-04-21 22:43:32 +00:00
|
|
|
*/
|
Introduce locking around use of ifindex_table, whose use was previously
unsynchronized. While races were extremely rare, we've now had a
couple of reports of panics in environments involving large numbers of
IPSEC tunnels being added very quickly on an active system.
- Add accessor functions ifnet_byindex(), ifaddr_byindex(),
ifdev_byindex() to replace existing accessor macros. These functions
now acquire the ifnet lock before derefencing the table.
- Add IFNET_WLOCK_ASSERT().
- Add static accessor functions ifnet_setbyindex(), ifdev_setbyindex(),
which set values in the table either asserting of acquiring the ifnet
lock.
- Use accessor functions throughout if.c to modify and read
ifindex_table.
- Rework ifnet attach/detach to lock around ifindex_table modification.
Note that these changes simply close races around use of ifindex_table,
and make no attempt to solve the probem of disappearing ifnets. Further
refinement of this work, including with respect to ifindex_table
resizing, is still required.
In a future change, the ifnet lock should be converted from a mutex to an
rwlock in order to reduce contention.
Reviewed and tested by: brooks
2008-06-26 23:05:28 +00:00
|
|
|
struct ifnet *ifnet_byindex(u_short idx);
|
Start to address a number of races relating to use of ifnet pointers
after the corresponding interface has been destroyed:
(1) Add an ifnet refcount, ifp->if_refcount. Initialize it to 1 in
if_alloc(), and modify if_free_type() to decrement and check the
refcount.
(2) Add new if_ref() and if_rele() interfaces to allow kernel code
walking global interface lists to release IFNET_[RW]LOCK() yet
keep the ifnet stable. Currently, if_rele() is a no-op wrapper
around if_free(), but this may change in the future.
(3) Add new ifnet field, if_alloctype, which caches the type passed
to if_alloc(), but unlike if_type, won't be changed by drivers.
This allows asynchronous free's of the interface after the
driver has released it to still use the right type. Use that
instead of the type passed to if_free_type(), but assert that
they are the same (might have to rethink this if that doesn't
work out).
(4) Add a new ifnet_byindex_ref(), which looks up an interface by
index and returns a reference rather than a pointer to it.
(5) Fix if_alloc() to fully initialize the if_addr_mtx before hooking
up the ifnet to global lists.
(6) Modify sysctls in if_mib.c to use ifnet_byindex_ref() and release
the ifnet when done.
When this change is MFC'd, it will need to replace if_ispare fields
rather than adding new fields in order to avoid breaking the binary
interface. Once this change is MFC'd, if_free_type() should be
removed, as its 'type' argument is now optional.
This refcount is not appropriate for counting mbuf pkthdr references,
and also not for counting entry into the device driver via ifnet
function pointers. An rmlock may be appropriate for the latter.
Rather, this is about ensuring data structure stability when reaching
an ifnet via global ifnet lists and tables followed by copy in or out
of userspace.
MFC after: 3 weeks
Reported by: mdtancsa
Reviewed by: brooks
2009-04-21 22:43:32 +00:00
|
|
|
struct ifnet *ifnet_byindex_ref(u_short idx);
|
2008-08-20 03:14:48 +00:00
|
|
|
|
2004-04-16 10:28:54 +00:00
|
|
|
/*
|
|
|
|
* Given the index, ifaddr_byindex() returns the one and only
|
|
|
|
* link-level ifaddr for the interface. You are not supposed to use
|
|
|
|
* it to traverse the list of addresses associated to the interface.
|
|
|
|
*/
|
Introduce locking around use of ifindex_table, whose use was previously
unsynchronized. While races were extremely rare, we've now had a
couple of reports of panics in environments involving large numbers of
IPSEC tunnels being added very quickly on an active system.
- Add accessor functions ifnet_byindex(), ifaddr_byindex(),
ifdev_byindex() to replace existing accessor macros. These functions
now acquire the ifnet lock before derefencing the table.
- Add IFNET_WLOCK_ASSERT().
- Add static accessor functions ifnet_setbyindex(), ifdev_setbyindex(),
which set values in the table either asserting of acquiring the ifnet
lock.
- Use accessor functions throughout if.c to modify and read
ifindex_table.
- Rework ifnet attach/detach to lock around ifindex_table modification.
Note that these changes simply close races around use of ifindex_table,
and make no attempt to solve the probem of disappearing ifnets. Further
refinement of this work, including with respect to ifindex_table
resizing, is still required.
In a future change, the ifnet lock should be converted from a mutex to an
rwlock in order to reduce contention.
Reviewed and tested by: brooks
2008-06-26 23:05:28 +00:00
|
|
|
struct ifaddr *ifaddr_byindex(u_short idx);
|
2001-09-06 02:40:43 +00:00
|
|
|
|
Build on Jeff Roberson's linker-set based dynamic per-CPU allocator
(DPCPU), as suggested by Peter Wemm, and implement a new per-virtual
network stack memory allocator. Modify vnet to use the allocator
instead of monolithic global container structures (vinet, ...). This
change solves many binary compatibility problems associated with
VIMAGE, and restores ELF symbols for virtualized global variables.
Each virtualized global variable exists as a "reference copy", and also
once per virtual network stack. Virtualized global variables are
tagged at compile-time, placing the in a special linker set, which is
loaded into a contiguous region of kernel memory. Virtualized global
variables in the base kernel are linked as normal, but those in modules
are copied and relocated to a reserved portion of the kernel's vnet
region with the help of a the kernel linker.
Virtualized global variables exist in per-vnet memory set up when the
network stack instance is created, and are initialized statically from
the reference copy. Run-time access occurs via an accessor macro, which
converts from the current vnet and requested symbol to a per-vnet
address. When "options VIMAGE" is not compiled into the kernel, normal
global ELF symbols will be used instead and indirection is avoided.
This change restores static initialization for network stack global
variables, restores support for non-global symbols and types, eliminates
the need for many subsystem constructors, eliminates large per-subsystem
structures that caused many binary compatibility issues both for
monitoring applications (netstat) and kernel modules, removes the
per-function INIT_VNET_*() macros throughout the stack, eliminates the
need for vnet_symmap ksym(2) munging, and eliminates duplicate
definitions of virtualized globals under VIMAGE_GLOBALS.
Bump __FreeBSD_version and update UPDATING.
Portions submitted by: bz
Reviewed by: bz, zec
Discussed with: gnn, jamie, jeff, jhb, julian, sam
Suggested by: peter
Approved by: re (kensmith)
2009-07-14 22:48:30 +00:00
|
|
|
VNET_DECLARE(struct ifnethead, ifnet);
|
|
|
|
VNET_DECLARE(struct ifgrouphead, ifg_head);
|
|
|
|
VNET_DECLARE(int, if_index);
|
|
|
|
VNET_DECLARE(struct ifnet *, loif); /* first loopback interface */
|
|
|
|
|
2009-07-16 21:13:04 +00:00
|
|
|
#define V_ifnet VNET(ifnet)
|
|
|
|
#define V_ifg_head VNET(ifg_head)
|
|
|
|
#define V_if_index VNET(if_index)
|
|
|
|
#define V_loif VNET(loif)
|
Build on Jeff Roberson's linker-set based dynamic per-CPU allocator
(DPCPU), as suggested by Peter Wemm, and implement a new per-virtual
network stack memory allocator. Modify vnet to use the allocator
instead of monolithic global container structures (vinet, ...). This
change solves many binary compatibility problems associated with
VIMAGE, and restores ELF symbols for virtualized global variables.
Each virtualized global variable exists as a "reference copy", and also
once per virtual network stack. Virtualized global variables are
tagged at compile-time, placing the in a special linker set, which is
loaded into a contiguous region of kernel memory. Virtualized global
variables in the base kernel are linked as normal, but those in modules
are copied and relocated to a reserved portion of the kernel's vnet
region with the help of a the kernel linker.
Virtualized global variables exist in per-vnet memory set up when the
network stack instance is created, and are initialized statically from
the reference copy. Run-time access occurs via an accessor macro, which
converts from the current vnet and requested symbol to a per-vnet
address. When "options VIMAGE" is not compiled into the kernel, normal
global ELF symbols will be used instead and indirection is avoided.
This change restores static initialization for network stack global
variables, restores support for non-global symbols and types, eliminates
the need for many subsystem constructors, eliminates large per-subsystem
structures that caused many binary compatibility issues both for
monitoring applications (netstat) and kernel modules, removes the
per-function INIT_VNET_*() macros throughout the stack, eliminates the
need for vnet_symmap ksym(2) munging, and eliminates duplicate
definitions of virtualized globals under VIMAGE_GLOBALS.
Bump __FreeBSD_version and update UPDATING.
Portions submitted by: bz
Reviewed by: bz, zec
Discussed with: gnn, jamie, jeff, jhb, julian, sam
Suggested by: peter
Approved by: re (kensmith)
2009-07-14 22:48:30 +00:00
|
|
|
|
2018-05-06 20:34:13 +00:00
|
|
|
#ifdef MCAST_VERBOSE
|
|
|
|
#define MCDPRINTF printf
|
|
|
|
#else
|
|
|
|
#define MCDPRINTF(...)
|
|
|
|
#endif
|
|
|
|
|
2006-06-19 22:20:45 +00:00
|
|
|
int if_addgroup(struct ifnet *, const char *);
|
|
|
|
int if_delgroup(struct ifnet *, const char *);
|
2002-03-19 21:54:18 +00:00
|
|
|
int if_addmulti(struct ifnet *, struct sockaddr *, struct ifmultiaddr **);
|
|
|
|
int if_allmulti(struct ifnet *, int);
|
2005-06-10 16:49:24 +00:00
|
|
|
struct ifnet* if_alloc(u_char);
|
2019-04-22 19:24:21 +00:00
|
|
|
struct ifnet* if_alloc_dev(u_char, device_t dev);
|
|
|
|
struct ifnet* if_alloc_domain(u_char, int numa_domain);
|
2002-03-19 21:54:18 +00:00
|
|
|
void if_attach(struct ifnet *);
|
2009-04-23 11:51:53 +00:00
|
|
|
void if_dead(struct ifnet *);
|
2002-03-19 21:54:18 +00:00
|
|
|
int if_delmulti(struct ifnet *, struct sockaddr *);
|
2007-03-20 00:36:10 +00:00
|
|
|
void if_delmulti_ifma(struct ifmultiaddr *);
|
2018-05-06 20:34:13 +00:00
|
|
|
void if_delmulti_ifma_flags(struct ifmultiaddr *, int flags);
|
2002-03-19 21:54:18 +00:00
|
|
|
void if_detach(struct ifnet *);
|
2005-05-25 13:52:03 +00:00
|
|
|
void if_purgeaddrs(struct ifnet *);
|
2010-01-24 16:17:58 +00:00
|
|
|
void if_delallmulti(struct ifnet *);
|
2002-03-19 21:54:18 +00:00
|
|
|
void if_down(struct ifnet *);
|
2007-03-20 03:15:43 +00:00
|
|
|
struct ifmultiaddr *
|
2015-09-05 05:33:20 +00:00
|
|
|
if_findmulti(struct ifnet *, const struct sockaddr *);
|
2018-05-06 20:34:13 +00:00
|
|
|
void if_freemulti(struct ifmultiaddr *ifma);
|
2005-06-10 16:49:24 +00:00
|
|
|
void if_free(struct ifnet *);
|
2003-10-31 18:32:15 +00:00
|
|
|
void if_initname(struct ifnet *, const char *, int);
|
2004-12-08 05:45:59 +00:00
|
|
|
void if_link_state_change(struct ifnet *, int);
|
2002-09-24 17:35:08 +00:00
|
|
|
int if_printf(struct ifnet *, const char *, ...) __printflike(2, 3);
|
2021-03-21 18:49:05 +00:00
|
|
|
int if_log(struct ifnet *, int, const char *, ...) __printflike(3, 4);
|
Start to address a number of races relating to use of ifnet pointers
after the corresponding interface has been destroyed:
(1) Add an ifnet refcount, ifp->if_refcount. Initialize it to 1 in
if_alloc(), and modify if_free_type() to decrement and check the
refcount.
(2) Add new if_ref() and if_rele() interfaces to allow kernel code
walking global interface lists to release IFNET_[RW]LOCK() yet
keep the ifnet stable. Currently, if_rele() is a no-op wrapper
around if_free(), but this may change in the future.
(3) Add new ifnet field, if_alloctype, which caches the type passed
to if_alloc(), but unlike if_type, won't be changed by drivers.
This allows asynchronous free's of the interface after the
driver has released it to still use the right type. Use that
instead of the type passed to if_free_type(), but assert that
they are the same (might have to rethink this if that doesn't
work out).
(4) Add a new ifnet_byindex_ref(), which looks up an interface by
index and returns a reference rather than a pointer to it.
(5) Fix if_alloc() to fully initialize the if_addr_mtx before hooking
up the ifnet to global lists.
(6) Modify sysctls in if_mib.c to use ifnet_byindex_ref() and release
the ifnet when done.
When this change is MFC'd, it will need to replace if_ispare fields
rather than adding new fields in order to avoid breaking the binary
interface. Once this change is MFC'd, if_free_type() should be
removed, as its 'type' argument is now optional.
This refcount is not appropriate for counting mbuf pkthdr references,
and also not for counting entry into the device driver via ifnet
function pointers. An rmlock may be appropriate for the latter.
Rather, this is about ensuring data structure stability when reaching
an ifnet via global ifnet lists and tables followed by copy in or out
of userspace.
MFC after: 3 weeks
Reported by: mdtancsa
Reviewed by: brooks
2009-04-21 22:43:32 +00:00
|
|
|
void if_ref(struct ifnet *);
|
|
|
|
void if_rele(struct ifnet *);
|
2021-03-30 14:03:28 +00:00
|
|
|
bool __result_use_check if_try_ref(struct ifnet *);
|
2002-03-19 21:54:18 +00:00
|
|
|
int if_setlladdr(struct ifnet *, const u_char *, int);
|
2018-07-09 11:03:28 +00:00
|
|
|
int if_tunnel_check_nesting(struct ifnet *, struct mbuf *, uint32_t, int);
|
2002-03-19 21:54:18 +00:00
|
|
|
void if_up(struct ifnet *);
|
|
|
|
int ifioctl(struct socket *, u_long, caddr_t, struct thread *);
|
|
|
|
int ifpromisc(struct ifnet *, int);
|
|
|
|
struct ifnet *ifunit(const char *);
|
2009-04-23 13:08:47 +00:00
|
|
|
struct ifnet *ifunit_ref(const char *);
|
2002-03-19 21:54:18 +00:00
|
|
|
|
2009-09-15 19:18:34 +00:00
|
|
|
int ifa_add_loopback_route(struct ifaddr *, struct sockaddr *);
|
|
|
|
int ifa_del_loopback_route(struct ifaddr *, struct sockaddr *);
|
2015-09-16 06:23:15 +00:00
|
|
|
int ifa_switch_loopback_route(struct ifaddr *, struct sockaddr *);
|
2009-09-15 19:18:34 +00:00
|
|
|
|
2015-09-05 05:33:20 +00:00
|
|
|
struct ifaddr *ifa_ifwithaddr(const struct sockaddr *);
|
|
|
|
int ifa_ifwithaddr_check(const struct sockaddr *);
|
|
|
|
struct ifaddr *ifa_ifwithbroadaddr(const struct sockaddr *, int);
|
|
|
|
struct ifaddr *ifa_ifwithdstaddr(const struct sockaddr *, int);
|
|
|
|
struct ifaddr *ifa_ifwithnet(const struct sockaddr *, int, int);
|
2020-05-23 19:06:57 +00:00
|
|
|
struct ifaddr *ifa_ifwithroute(int, const struct sockaddr *,
|
|
|
|
const struct sockaddr *, u_int);
|
2015-09-05 05:33:20 +00:00
|
|
|
struct ifaddr *ifaof_ifpforaddr(const struct sockaddr *, struct ifnet *);
|
2013-02-11 10:58:22 +00:00
|
|
|
int ifa_preferred(struct ifaddr *, struct ifaddr *);
|
2002-03-19 21:54:18 +00:00
|
|
|
|
|
|
|
int if_simloop(struct ifnet *ifp, struct mbuf *m, int af, int hlen);
|
|
|
|
|
2005-06-10 16:49:24 +00:00
|
|
|
typedef void *if_com_alloc_t(u_char type, struct ifnet *ifp);
|
|
|
|
typedef void if_com_free_t(void *com, u_char type);
|
|
|
|
void if_register_com_alloc(u_char type, if_com_alloc_t *a, if_com_free_t *f);
|
|
|
|
void if_deregister_com_alloc(u_char type);
|
2014-08-31 06:46:21 +00:00
|
|
|
void if_data_copy(struct ifnet *, struct if_data *);
|
2014-09-18 14:47:13 +00:00
|
|
|
uint64_t if_get_counter_default(struct ifnet *, ift_counter);
|
|
|
|
void if_inc_counter(struct ifnet *, ift_counter, int64_t);
|
2005-06-10 16:49:24 +00:00
|
|
|
|
2001-10-14 20:17:53 +00:00
|
|
|
#define IF_LLADDR(ifp) \
|
2005-11-11 16:04:59 +00:00
|
|
|
LLADDR((struct sockaddr_dl *)((ifp)->if_addr->ifa_addr))
|
2001-10-14 20:17:53 +00:00
|
|
|
|
2014-06-02 17:54:39 +00:00
|
|
|
uint64_t if_setbaudrate(if_t ifp, uint64_t baudrate);
|
|
|
|
uint64_t if_getbaudrate(if_t ifp);
|
|
|
|
int if_setcapabilities(if_t ifp, int capabilities);
|
|
|
|
int if_setcapabilitiesbit(if_t ifp, int setbit, int clearbit);
|
|
|
|
int if_getcapabilities(if_t ifp);
|
|
|
|
int if_togglecapenable(if_t ifp, int togglecap);
|
|
|
|
int if_setcapenable(if_t ifp, int capenable);
|
|
|
|
int if_setcapenablebit(if_t ifp, int setcap, int clearcap);
|
|
|
|
int if_getcapenable(if_t ifp);
|
|
|
|
const char *if_getdname(if_t ifp);
|
|
|
|
int if_setdev(if_t ifp, void *dev);
|
|
|
|
int if_setdrvflagbits(if_t ifp, int if_setflags, int clear_flags);
|
|
|
|
int if_getdrvflags(if_t ifp);
|
|
|
|
int if_setdrvflags(if_t ifp, int flags);
|
|
|
|
int if_clearhwassist(if_t ifp);
|
|
|
|
int if_sethwassistbits(if_t ifp, int toset, int toclear);
|
|
|
|
int if_sethwassist(if_t ifp, int hwassist_bit);
|
|
|
|
int if_gethwassist(if_t ifp);
|
|
|
|
int if_setsoftc(if_t ifp, void *softc);
|
|
|
|
void *if_getsoftc(if_t ifp);
|
|
|
|
int if_setflags(if_t ifp, int flags);
|
2017-05-10 22:13:47 +00:00
|
|
|
int if_gethwaddr(if_t ifp, struct ifreq *);
|
2014-06-02 17:54:39 +00:00
|
|
|
int if_setmtu(if_t ifp, int mtu);
|
|
|
|
int if_getmtu(if_t ifp);
|
Make checks for rt_mtu generic:
Some virtual if drivers has (ab)used ifa ifa_rtrequest hook to enforce
route MTU to be not bigger that interface MTU. While ifa_rtrequest hooking
might be an option in some situation, it is not feasible to do MTU checks
there: generic (or per-domain) routing code is perfectly capable of doing
this.
We currrently have 3 places where MTU is altered:
1) route addition.
In this case domain overrides radix _addroute callback (in[6]_addroute)
and all necessary checks/fixes are/can be done there.
2) route change (especially, GW change).
In this case, there are no explicit per-domain calls, but one can
override rte by setting ifa_rtrequest hook to domain handler
(inet6 does this).
3) ifconfig ifaceX mtu YYYY
In this case, we have no callbacks, but ip[6]_output performes runtime
checks and decreases rt_mtu if necessary.
Generally, the goals are to be able to handle all MTU changes in
control plane, not in runtime part, and properly deal with increased
interface MTU.
This commit changes the following:
* removes hooks setting MTU from drivers side
* adds proper per-doman MTU checks for case 1)
* adds generic MTU check for case 2)
* The latter is done by using new dom_ifmtu callback since
if_mtu denotes L3 interface MTU, e.g. maximum trasmitted _packet_ size.
However, IPv6 mtu might be different from if_mtu one (e.g. default 1280)
for some cases, so we need an abstract way to know maximum MTU size
for given interface and domain.
* moves rt_setmetrics() before MTU/ifa_rtrequest hooks since it copies
user-supplied data which must be checked.
* removes RT_LOCK_ASSERT() from other ifa_rtrequest hooks to be able to
use this functions on new non-inserted rte.
More changes will follow soon.
MFC after: 1 month
Sponsored by: Yandex LLC
2014-11-06 13:13:09 +00:00
|
|
|
int if_getmtu_family(if_t ifp, int family);
|
2014-06-02 17:54:39 +00:00
|
|
|
int if_setflagbits(if_t ifp, int set, int clear);
|
|
|
|
int if_getflags(if_t ifp);
|
|
|
|
int if_sendq_empty(if_t ifp);
|
|
|
|
int if_setsendqready(if_t ifp);
|
|
|
|
int if_setsendqlen(if_t ifp, int tx_desc_count);
|
2017-01-31 16:12:31 +00:00
|
|
|
int if_sethwtsomax(if_t ifp, u_int if_hw_tsomax);
|
|
|
|
int if_sethwtsomaxsegcount(if_t ifp, u_int if_hw_tsomaxsegcount);
|
|
|
|
int if_sethwtsomaxsegsize(if_t ifp, u_int if_hw_tsomaxsegsize);
|
|
|
|
u_int if_gethwtsomax(if_t ifp);
|
|
|
|
u_int if_gethwtsomaxsegcount(if_t ifp);
|
|
|
|
u_int if_gethwtsomaxsegsize(if_t ifp);
|
2014-06-02 17:54:39 +00:00
|
|
|
int if_input(if_t ifp, struct mbuf* sendmp);
|
|
|
|
int if_sendq_prepend(if_t ifp, struct mbuf *m);
|
|
|
|
struct mbuf *if_dequeue(if_t ifp);
|
|
|
|
int if_setifheaderlen(if_t ifp, int len);
|
|
|
|
void if_setrcvif(struct mbuf *m, if_t ifp);
|
|
|
|
void if_setvtag(struct mbuf *m, u_int16_t tag);
|
|
|
|
u_int16_t if_getvtag(struct mbuf *m);
|
|
|
|
int if_vlantrunkinuse(if_t ifp);
|
|
|
|
caddr_t if_getlladdr(if_t ifp);
|
|
|
|
void *if_gethandle(u_char);
|
|
|
|
void if_bpfmtap(if_t ifp, struct mbuf *m);
|
|
|
|
void if_etherbpfmtap(if_t ifp, struct mbuf *m);
|
|
|
|
void if_vlancap(if_t ifp);
|
|
|
|
|
2019-10-10 23:42:55 +00:00
|
|
|
/*
|
|
|
|
* Traversing through interface address lists.
|
|
|
|
*/
|
|
|
|
struct sockaddr_dl;
|
|
|
|
typedef u_int iflladdr_cb_t(void *, struct sockaddr_dl *, u_int);
|
|
|
|
u_int if_foreach_lladdr(if_t, iflladdr_cb_t, void *);
|
|
|
|
u_int if_foreach_llmaddr(if_t, iflladdr_cb_t, void *);
|
2019-10-10 23:44:56 +00:00
|
|
|
u_int if_lladdr_count(if_t);
|
|
|
|
u_int if_llmaddr_count(if_t);
|
2019-10-10 23:42:55 +00:00
|
|
|
|
2014-06-02 17:54:39 +00:00
|
|
|
int if_getamcount(if_t ifp);
|
|
|
|
struct ifaddr * if_getifaddr(if_t ifp);
|
|
|
|
|
|
|
|
/* Functions */
|
|
|
|
void if_setinitfn(if_t ifp, void (*)(void *));
|
2014-08-31 12:48:13 +00:00
|
|
|
void if_setioctlfn(if_t ifp, int (*)(if_t, u_long, caddr_t));
|
|
|
|
void if_setstartfn(if_t ifp, void (*)(if_t));
|
2014-06-02 17:54:39 +00:00
|
|
|
void if_settransmitfn(if_t ifp, if_transmit_fn_t);
|
|
|
|
void if_setqflushfn(if_t ifp, if_qflush_fn_t);
|
2014-09-18 14:38:28 +00:00
|
|
|
void if_setgetcounterfn(if_t ifp, if_get_counter_t);
|
2020-02-26 13:48:33 +00:00
|
|
|
|
2014-06-02 17:54:39 +00:00
|
|
|
/* Revisit the below. These are inline functions originally */
|
|
|
|
int drbr_inuse_drv(if_t ifp, struct buf_ring *br);
|
|
|
|
struct mbuf* drbr_dequeue_drv(if_t ifp, struct buf_ring *br);
|
|
|
|
int drbr_needs_enqueue_drv(if_t ifp, struct buf_ring *br);
|
|
|
|
int drbr_enqueue_drv(if_t ifp, struct buf_ring *br, struct mbuf *m);
|
|
|
|
|
2014-09-22 08:27:27 +00:00
|
|
|
/* TSO */
|
|
|
|
void if_hw_tsomax_common(if_t ifp, struct ifnet_hw_tsomax *);
|
|
|
|
int if_hw_tsomax_update(if_t ifp, struct ifnet_hw_tsomax *);
|
|
|
|
|
2018-03-30 18:50:13 +00:00
|
|
|
/* accessors for struct ifreq */
|
|
|
|
void *ifr_data_get_ptr(void *ifrp);
|
2020-03-03 18:05:11 +00:00
|
|
|
void *ifr_buffer_get_buffer(void *data);
|
|
|
|
size_t ifr_buffer_get_length(void *data);
|
2018-03-30 18:50:13 +00:00
|
|
|
|
2018-09-29 13:01:23 +00:00
|
|
|
int ifhwioctl(u_long, struct ifnet *, caddr_t, struct thread *);
|
|
|
|
|
2014-09-28 14:05:18 +00:00
|
|
|
#ifdef DEVICE_POLLING
|
|
|
|
enum poll_cmd { POLL_ONLY, POLL_AND_CHECK_STATUS };
|
|
|
|
|
|
|
|
typedef int poll_handler_t(if_t ifp, enum poll_cmd cmd, int count);
|
|
|
|
int ether_poll_register(poll_handler_t *h, if_t ifp);
|
|
|
|
int ether_poll_deregister(if_t ifp);
|
|
|
|
#endif /* DEVICE_POLLING */
|
|
|
|
|
1999-12-29 04:46:21 +00:00
|
|
|
#endif /* _KERNEL */
|
2014-09-28 17:09:40 +00:00
|
|
|
|
|
|
|
#include <net/ifq.h> /* XXXAO: temporary unconditional include */
|
|
|
|
|
1997-01-03 19:50:26 +00:00
|
|
|
#endif /* !_NET_IF_VAR_H_ */
|