NIC KTLS for Chelsio T6 adapters.

This adds support for ifnet (NIC) KTLS using Chelsio T6 adapters.
Unlike the TOE-based KTLS in r353328, NIC TLS works with non-TOE
connections.

NIC KTLS on T6 is not able to use the normal TSO (LSO) path to segment
the encrypted TLS frames output by the crypto engine.  Instead, the
TOE is placed into a special setup to permit "dummy" connections to be
associated with regular sockets using KTLS.  This permits using the
TOE to segment the encrypted TLS records.  However, this approach does
have some limitations:

1) Regular TOE sockets cannot be used when the TOE is in this special
   mode.  One can use either TOE and TOE-based KTLS or NIC KTLS, but
   not both at the same time.

2) In NIC KTLS mode, the TOE is only able to accept a per-connection
   timestamp offset that varies in the upper 4 bits.  Put another way,
   only connections whose timestamp offset has the 28 lower bits
   cleared can use NIC KTLS and generate correct timestamps.  The
   driver will refuse to enable NIC KTLS on connections with a
   timestamp offset with any of the lower 28 bits set.  To use NIC
   KTLS, users can either disable TCP timestamps by setting the
   net.inet.tcp.rfc1323 sysctl to 0, or apply a local patch to the
   tcp_new_ts_offset() function to clear the lower 28 bits of the
   generated offset.

3) Because the TCP segmentation relies on fields mirrored in a TCB in
   the TOE, not all fields in a TCP packet can be sent in the TCP
   segments generated from a TLS record.  Specifically, for packets
   containing TCP options other than timestamps, the driver will
   inject an "empty" TCP packet holding the requested options (e.g. a
   SACK scoreboard) along with the segments from the TLS record.
   These empty TCP packets are counted by the
   dev.cc.N.txq.M.kern_tls_options sysctls.

Unlike TOE TLS which is able to buffer encrypted TLS records in
on-card memory to handle retransmits, NIC KTLS must re-encrypt TLS
records for retransmit requests as well as non-retransmit requests
that do not include the start of a TLS record but do include the
trailer.  The T6 NIC KTLS code tries to optimize some of the cases
for requests to transmit partial TLS records.  In particular, it
attempts to minimize the number of "waste" bytes that must be given
as input to the crypto engine but are not needed on the wire to
satisfy the mbufs sent from the TCP stack down to the driver.

TCP packets for TLS requests are broken down into the following
classes (with associated counters):

- Mbufs that send an entire TLS record in full do not have any waste
  bytes (dev.cc.N.txq.M.kern_tls_full).

- Mbufs that send a short TLS record that ends before the end of the
  trailer (dev.cc.N.txq.M.kern_tls_short).  For sockets using AES-CBC,
  encryption must always start at the beginning of the record, so if
  the mbuf starts at an offset into the TLS record, the offset bytes
  will be "waste" bytes.  For sockets using AES-GCM, encryption can
  start at the 16-byte block boundary preceding the starting offset,
  capping the waste at 15 bytes.

- Mbufs that send a partial TLS record that has a non-zero starting
  offset but ends at the end of the trailer
  (dev.cc.N.txq.M.kern_tls_partial).  In order to compute the
  authentication hash stored in the trailer, the entire TLS record
  must be sent as input to the crypto engine, so the bytes before the
  offset are always "waste" bytes.

In addition, other per-txq sysctls are provided:

- dev.cc.N.txq.M.kern_tls_cbc: Count of sockets sent via this txq
  using AES-CBC.

- dev.cc.N.txq.M.kern_tls_gcm: Count of sockets sent via this txq
  using AES-GCM.

- dev.cc.N.txq.M.kern_tls_fin: Count of empty FIN-only packets sent to
  compensate for the TOE engine not being able to set FIN on the last
  segment of a TLS record if the TLS record mbuf had FIN set.

- dev.cc.N.txq.M.kern_tls_records: Count of TLS records sent via this
  txq including full, short, and partial records.

- dev.cc.N.txq.M.kern_tls_octets: Count of non-waste bytes (TLS header
  and payload) sent for TLS record requests.

- dev.cc.N.txq.M.kern_tls_waste: Count of waste bytes sent for TLS
  record requests.

To enable NIC KTLS with T6, set the following tunables prior to
loading the cxgbe(4) driver:

hw.cxgbe.config_file=kern_tls
hw.cxgbe.kern_tls=1

Reviewed by:	np
Sponsored by:	Chelsio Communications
Differential Revision:	https://reviews.freebsd.org/D21962
This commit is contained in:
John Baldwin 2019-11-21 19:30:31 +00:00
parent e3c42ad809
commit bddf73433e
15 changed files with 3118 additions and 34 deletions


@ -1423,6 +1423,8 @@ dev/cxgbe/common/t4_hw.c optional cxgbe pci \
compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/common/t4vf_hw.c optional cxgbev pci \
compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/crypto/t4_kern_tls.c optional cxgbe pci kern_tls \
compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/crypto/t4_keyctx.c optional cxgbe pci \
compile-with "${NORMAL_C} -I$S/dev/cxgbe"
dev/cxgbe/cudbg/cudbg_common.c optional cxgbe \


@ -35,6 +35,7 @@
#include <sys/kernel.h>
#include <sys/bus.h>
#include <sys/counter.h>
#include <sys/rman.h>
#include <sys/types.h>
#include <sys/lock.h>
@ -158,6 +159,7 @@ enum {
ADAP_ERR = (1 << 5),
BUF_PACKING_OK = (1 << 6),
IS_VF = (1 << 7),
KERN_TLS_OK = (1 << 8),
CXGBE_BUSY = (1 << 9),
@ -380,7 +382,7 @@ enum {
CPL_COOKIE_TOM,
CPL_COOKIE_HASHFILTER,
CPL_COOKIE_ETHOFLD,
CPL_COOKIE_AVAILABLE3,
CPL_COOKIE_KERN_TLS,
NUM_CPL_COOKIES = 8 /* Limited by M_COOKIE. Do not increase. */
};
@ -582,8 +584,25 @@ struct sge_txq {
uint64_t txpkts0_pkts; /* # of frames in type0 coalesced tx WRs */
uint64_t txpkts1_pkts; /* # of frames in type1 coalesced tx WRs */
uint64_t raw_wrs; /* # of raw work requests (alloc_wr_mbuf) */
uint64_t tls_wrs; /* # of TLS work requests */
uint64_t kern_tls_records;
uint64_t kern_tls_short;
uint64_t kern_tls_partial;
uint64_t kern_tls_full;
uint64_t kern_tls_octets;
uint64_t kern_tls_waste;
uint64_t kern_tls_options;
uint64_t kern_tls_header;
uint64_t kern_tls_fin;
uint64_t kern_tls_fin_short;
uint64_t kern_tls_cbc;
uint64_t kern_tls_gcm;
/* stats for not-that-common events */
/* Optional scratch space for constructing work requests. */
uint8_t ss[SGE_MAX_WR_LEN] __aligned(16);
} __aligned(CACHE_LINE_SIZE);
/* rxq: SGE ingress queue + SGE free list + miscellaneous items */
@ -840,6 +859,7 @@ struct adapter {
struct smt_data *smt; /* Source MAC Table */
struct tid_info tids;
vmem_t *key_map;
struct tls_tunables tlst;
uint8_t doorbells;
int offload_map; /* ports with IFCAP_TOE enabled */
@ -897,6 +917,8 @@ struct adapter {
int last_op_flags;
int swintr;
struct callout ktls_tick;
};
#define ADAPTER_LOCK(sc) mtx_lock(&(sc)->sc_lock)
@ -1169,6 +1191,18 @@ void cxgbe_media_status(struct ifnet *, struct ifmediareq *);
bool t4_os_dump_cimla(struct adapter *, int, bool);
void t4_os_dump_devlog(struct adapter *);
#ifdef KERN_TLS
/* t4_kern_tls.c */
int cxgbe_tls_tag_alloc(struct ifnet *, union if_snd_tag_alloc_params *,
struct m_snd_tag **);
void cxgbe_tls_tag_free(struct m_snd_tag *);
void t6_ktls_modload(void);
void t6_ktls_modunload(void);
int t6_ktls_try(struct ifnet *, struct socket *, struct ktls_session *);
int t6_ktls_parse_pkt(struct mbuf *, int *, int *);
int t6_ktls_write_wr(struct sge_txq *, void *, struct mbuf *, u_int, u_int);
#endif
/* t4_keyctx.c */
struct auth_hash;
union authctx;


@ -1158,6 +1158,17 @@ struct cpl_tx_data {
__be32 flags;
};
/* cpl_tx_data.len fields */
#define S_TX_DATA_MSS 16
#define M_TX_DATA_MSS 0xFFFF
#define V_TX_DATA_MSS(x) ((x) << S_TX_DATA_MSS)
#define G_TX_DATA_MSS(x) (((x) >> S_TX_DATA_MSS) & M_TX_DATA_MSS)
#define S_TX_LENGTH 0
#define M_TX_LENGTH 0xFFFF
#define V_TX_LENGTH(x) ((x) << S_TX_LENGTH)
#define G_TX_LENGTH(x) (((x) >> S_TX_LENGTH) & M_TX_LENGTH)
/* cpl_tx_data.flags fields */
#define S_TX_PROXY 5
#define V_TX_PROXY(x) ((x) << S_TX_PROXY)
@ -1205,6 +1216,14 @@ struct cpl_tx_data {
#define V_T6_TX_FORCE(x) ((x) << S_T6_TX_FORCE)
#define F_T6_TX_FORCE V_T6_TX_FORCE(1U)
#define S_TX_BYPASS 21
#define V_TX_BYPASS(x) ((x) << S_TX_BYPASS)
#define F_TX_BYPASS V_TX_BYPASS(1U)
#define S_TX_PUSH 22
#define V_TX_PUSH(x) ((x) << S_TX_PUSH)
#define F_TX_PUSH V_TX_PUSH(1U)
/* additional tx_data_wr.flags fields */
#define S_TX_CPU_IDX 0
#define M_TX_CPU_IDX 0x3F


@ -22617,6 +22617,10 @@
#define V_TXPDUSIZEADJ(x) ((x) << S_TXPDUSIZEADJ)
#define G_TXPDUSIZEADJ(x) (((x) >> S_TXPDUSIZEADJ) & M_TXPDUSIZEADJ)
#define S_ENABLECBYP 21
#define V_ENABLECBYP(x) ((x) << S_ENABLECBYP)
#define F_ENABLECBYP V_ENABLECBYP(1U)
#define S_LIMITEDTRANSMIT 20
#define M_LIMITEDTRANSMIT 0xfU
#define V_LIMITEDTRANSMIT(x) ((x) << S_LIMITEDTRANSMIT)


@ -753,6 +753,9 @@
#define S_TF_CCTRL_RFR 62
#define V_TF_CCTRL_RFR(x) ((__u64)(x) << S_TF_CCTRL_RFR)
#define S_TF_CORE_BYPASS 63
#define V_TF_CORE_BYPASS(x) ((__u64)(x) << S_TF_CORE_BYPASS)
#define S_TF_DDP_INDICATE_OUT 16
#define V_TF_DDP_INDICATE_OUT(x) ((x) << S_TF_DDP_INDICATE_OUT)

File diff suppressed because it is too large.


@ -0,0 +1,278 @@
# Firmware configuration file.
#
# Global limits (some are hardware limits, others are due to the firmware).
# nvi = 128 virtual interfaces
# niqflint = 1023 ingress queues with freelists and/or interrupts
# nethctrl = 64K Ethernet or ctrl egress queues
# neq = 64K egress queues of all kinds, including freelists
# nexactf = 512 MPS TCAM entries, can oversubscribe.
[global]
rss_glb_config_mode = basicvirtual
rss_glb_config_options = tnlmapen,hashtoeplitz,tnlalllkp
# PL_TIMEOUT register
pl_timeout_value = 200 # the timeout value in units of us
sge_timer_value = 1, 5, 10, 50, 100, 200 # SGE_TIMER_VALUE* in usecs
reg[0x10c4] = 0x20000000/0x20000000 # GK_CONTROL, enable 5th thread
reg[0x7dc0] = 0x0e2f8849 # TP_SHIFT_CNT
#Tick granularities in kbps
tsch_ticks = 100000, 10000, 1000, 10
filterMode = fragmentation, mpshittype, protocol, vlan, port, fcoe
filterMask = protocol
tp_pmrx = 10, 512
tp_pmrx_pagesize = 64K
# TP number of RX channels (0 = auto)
tp_nrxch = 0
tp_pmtx = 10, 512
tp_pmtx_pagesize = 64K
# TP number of TX channels (0 = auto)
tp_ntxch = 0
# TP OFLD MTUs
tp_mtus = 88, 256, 512, 576, 808, 1024, 1280, 1488, 1500, 2002, 2048, 4096, 4352, 8192, 9000, 9600
# enable TP_OUT_CONFIG.IPIDSPLITMODE and CRXPKTENC
reg[0x7d04] = 0x00010008/0x00010008
# TP_GLOBAL_CONFIG
reg[0x7d08] = 0x00000800/0x00000800 # set IssFromCplEnable
# TP_PC_CONFIG
reg[0x7d48] = 0x00000000/0x00000400 # clear EnableFLMError
# TP_PARA_REG0
reg[0x7d60] = 0x06000000/0x07000000 # set InitCWND to 6
# cluster, lan, or wan.
tp_tcptuning = lan
# LE_DB_CONFIG
reg[0x19c04] = 0x00000000/0x00440000 # LE Server SRAM disabled
# LE IPv4 compression disabled
# LE_DB_HASH_CONFIG
reg[0x19c28] = 0x00800000/0x01f00000 # LE Hash bucket size 8,
# ULP_TX_CONFIG
reg[0x8dc0] = 0x00000104/0x00000104 # Enable ITT on PI err
# Enable more error msg for ...
# TPT error.
# ULP_RX_MISC_FEATURE_ENABLE
#reg[0x1925c] = 0x01003400/0x01003400 # iscsi tag pi bit
# Enable offset decrement after ...
# PI extraction and before DDP
# ulp insert pi source info in DIF
# iscsi_eff_offset_en
#Enable iscsi completion moderation feature
reg[0x1925c] = 0x000041c0/0x000031c0 # Enable offset decrement after
# PI extraction and before DDP.
# ulp insert pi source info in
# DIF.
# Enable iscsi hdr cmd mode.
# iscsi force cmd mode.
# Enable iscsi cmp mode.
# MC configuration
#mc_mode_brc[0] = 1 # mc0 - 1: enable BRC, 0: enable RBC
# PFs 0-3. These get 8 MSI/8 MSI-X vectors each. VFs are supported by
# these 4 PFs only.
[function "0"]
wx_caps = all
r_caps = all
nvi = 1
rssnvi = 0
niqflint = 2
nethctrl = 2
neq = 4
nexactf = 2
cmask = all
pmask = 0x1
[function "1"]
wx_caps = all
r_caps = all
nvi = 1
rssnvi = 0
niqflint = 2
nethctrl = 2
neq = 4
nexactf = 2
cmask = all
pmask = 0x2
[function "2"]
wx_caps = all
r_caps = all
nvi = 1
rssnvi = 0
niqflint = 2
nethctrl = 2
neq = 4
nexactf = 2
cmask = all
pmask = 0x4
[function "3"]
wx_caps = all
r_caps = all
nvi = 1
rssnvi = 0
niqflint = 2
nethctrl = 2
neq = 4
nexactf = 2
cmask = all
pmask = 0x8
# PF4 is the resource-rich PF that the bus/nexus driver attaches to.
# It gets 32 MSI/128 MSI-X vectors.
[function "4"]
wx_caps = all
r_caps = all
nvi = 32
rssnvi = 32
niqflint = 512
nethctrl = 1024
neq = 2048
nqpcq = 8192
nexactf = 456
cmask = all
pmask = all
ncrypto_lookaside = 16
nclip = 320
nethofld = 8192
# TCAM has 6K cells; each region must start at a multiple of 128 cell.
# Each entry in these categories takes 2 cells each. nhash will use the
# TCAM iff there is room left (that is, the rest don't add up to 3072).
nfilter = 48
nserver = 64
nhpfilter = 0
nhash = 524288
protocol = ofld, tlskeys, crypto_lookaside
tp_l2t = 4096
tp_ddp = 2
tp_ddp_iscsi = 2
tp_tls_key = 3
tp_tls_mxrxsize = 17408 # 16384 + 1024, governs max rx data, pm max xfer len, rx coalesce sizes
tp_stag = 2
tp_pbl = 5
tp_rq = 7
tp_srq = 128
# PF5 is the SCSI Controller PF. It gets 32 MSI/40 MSI-X vectors.
# Not used right now.
[function "5"]
nvi = 1
rssnvi = 0
# PF6 is the FCoE Controller PF. It gets 32 MSI/40 MSI-X vectors.
# Not used right now.
[function "6"]
nvi = 1
rssnvi = 0
# The following function, 1023, is not an actual PCIE function but is used to
# configure and reserve firmware internal resources that come from the global
# resource pool.
#
[function "1023"]
wx_caps = all
r_caps = all
nvi = 4
rssnvi = 0
cmask = all
pmask = all
nexactf = 8
nfilter = 16
# For Virtual functions, we only allow NIC functionality and we only allow
# access to one port (1 << PF). Note that because of limitations in the
# Scatter Gather Engine (SGE) hardware which checks writes to VF KDOORBELL
# and GTS registers, the number of Ingress and Egress Queues must be a power
# of 2.
#
[function "0/*"]
wx_caps = 0x82
r_caps = 0x86
nvi = 1
rssnvi = 0
niqflint = 2
nethctrl = 2
neq = 4
nexactf = 2
cmask = all
pmask = 0x1
[function "1/*"]
wx_caps = 0x82
r_caps = 0x86
nvi = 1
rssnvi = 0
niqflint = 2
nethctrl = 2
neq = 4
nexactf = 2
cmask = all
pmask = 0x2
[function "2/*"]
wx_caps = 0x82
r_caps = 0x86
nvi = 1
rssnvi = 0
niqflint = 2
nethctrl = 2
neq = 4
nexactf = 2
cmask = all
pmask = 0x1
[function "3/*"]
wx_caps = 0x82
r_caps = 0x86
nvi = 1
rssnvi = 0
niqflint = 2
nethctrl = 2
neq = 4
nexactf = 2
cmask = all
pmask = 0x2
# MPS has 192K buffer space for ingress packets from the wire as well as
# loopback path of the L2 switch.
[port "0"]
dcb = none
#bg_mem = 25
#lpbk_mem = 25
hwm = 60
lwm = 15
dwm = 30
[port "1"]
dcb = none
#bg_mem = 25
#lpbk_mem = 25
hwm = 60
lwm = 15
dwm = 30
[fini]
version = 0x1
checksum = 0xa737b06f
#
# $FreeBSD$
#


@ -243,10 +243,17 @@ struct tom_tunables {
int cop_managed_offloading;
int autorcvbuf_inc;
};
/* iWARP driver tunables */
struct iw_tunables {
int wc_en;
};
struct tls_tunables {
int inline_keys;
int combo_wrs;
};
#ifdef TCP_OFFLOAD
int t4_register_uld(struct uld_info *);
int t4_unregister_uld(struct uld_info *);


@ -145,6 +145,23 @@ find_or_alloc_l2e(struct l2t_data *d, uint16_t vlan, uint8_t port, uint8_t *dmac
return (e);
}
static void
mk_write_l2e(struct adapter *sc, struct l2t_entry *e, int sync, int reply,
void *dst)
{
struct cpl_l2t_write_req *req;
int idx;
req = dst;
idx = e->idx + sc->vres.l2t.start;
INIT_TP_WR(req, 0);
OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_L2T_WRITE_REQ, idx |
V_SYNC_WR(sync) | V_TID_QID(e->iqid)));
req->params = htons(V_L2T_W_PORT(e->lport) | V_L2T_W_NOREPLY(!reply));
req->l2t_idx = htons(idx);
req->vlan = htons(e->vlan);
memcpy(req->dst_mac, e->dmac, sizeof(req->dst_mac));
}
/*
* Write an L2T entry. Must be called with the entry locked.
@ -157,7 +174,6 @@ t4_write_l2e(struct l2t_entry *e, int sync)
struct adapter *sc;
struct wrq_cookie cookie;
struct cpl_l2t_write_req *req;
int idx;
mtx_assert(&e->lock, MA_OWNED);
MPASS(e->wrq != NULL);
@ -169,14 +185,7 @@ t4_write_l2e(struct l2t_entry *e, int sync)
if (req == NULL)
return (ENOMEM);
idx = e->idx + sc->vres.l2t.start;
INIT_TP_WR(req, 0);
OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_L2T_WRITE_REQ, idx |
V_SYNC_WR(sync) | V_TID_QID(e->iqid)));
req->params = htons(V_L2T_W_PORT(e->lport) | V_L2T_W_NOREPLY(!sync));
req->l2t_idx = htons(idx);
req->vlan = htons(e->vlan);
memcpy(req->dst_mac, e->dmac, sizeof(req->dst_mac));
mk_write_l2e(sc, e, sync, sync, req);
commit_wrq_wr(wrq, req, &cookie);
@ -186,6 +195,90 @@ t4_write_l2e(struct l2t_entry *e, int sync)
return (0);
}
/*
* Allocate an L2T entry for use by a TLS connection. These entries are
* associated with a specific VLAN and destination MAC that never changes.
* However, multiple TLS connections might share a single entry.
*
* If a new L2T entry is allocated, a work request to initialize it is
* written to 'txq' and 'ndesc' will be set to 1. Otherwise, 'ndesc'
* will be set to 0.
*
* To avoid races, separate L2T entries are reserved for individual
* queues since the L2T entry update is written to a txq just prior to
* TLS work requests that will depend on it being written.
*/
struct l2t_entry *
t4_l2t_alloc_tls(struct adapter *sc, struct sge_txq *txq, void *dst,
int *ndesc, uint16_t vlan, uint8_t port, uint8_t *eth_addr)
{
struct l2t_data *d;
struct l2t_entry *e;
int i;
TXQ_LOCK_ASSERT_OWNED(txq);
d = sc->l2t;
*ndesc = 0;
rw_rlock(&d->lock);
/* First, try to find an existing entry. */
for (i = 0; i < d->l2t_size; i++) {
e = &d->l2tab[i];
if (e->state != L2T_STATE_TLS)
continue;
if (e->vlan == vlan && e->lport == port &&
e->wrq == (struct sge_wrq *)txq &&
memcmp(e->dmac, eth_addr, ETHER_ADDR_LEN) == 0) {
if (atomic_fetchadd_int(&e->refcnt, 1) == 0) {
/*
* This entry wasn't held but is still
* valid, so decrement nfree.
*/
atomic_subtract_int(&d->nfree, 1);
}
KASSERT(e->refcnt > 0,
("%s: refcount overflow", __func__));
rw_runlock(&d->lock);
return (e);
}
}
/*
* Don't bother rechecking if the upgrade fails since the txq is
* already locked.
*/
if (!rw_try_upgrade(&d->lock)) {
rw_runlock(&d->lock);
rw_wlock(&d->lock);
}
/* Match not found, allocate a new entry. */
e = t4_alloc_l2e(d);
if (e == NULL) {
rw_wunlock(&d->lock);
return (e);
}
/* Initialize the entry. */
e->state = L2T_STATE_TLS;
e->vlan = vlan;
e->lport = port;
e->iqid = sc->sge.fwq.abs_id;
e->wrq = (struct sge_wrq *)txq;
memcpy(e->dmac, eth_addr, ETHER_ADDR_LEN);
atomic_store_rel_int(&e->refcnt, 1);
rw_wunlock(&d->lock);
/* Write out the work request. */
*ndesc = howmany(sizeof(struct cpl_l2t_write_req), EQ_ESIZE);
MPASS(*ndesc == 1);
mk_write_l2e(sc, e, 1, 0, dst);
return (e);
}
/*
* Allocate an L2T entry for use by a switching rule. Such need to be
* explicitly freed and while busy they are not on any hash chain, so normal
@ -307,6 +400,7 @@ l2e_state(const struct l2t_entry *e)
case L2T_STATE_SYNC_WRITE: return 'W';
case L2T_STATE_RESOLVING: return STAILQ_EMPTY(&e->wr_list) ? 'R' : 'A';
case L2T_STATE_SWITCHING: return 'X';
case L2T_STATE_TLS: return 'T';
default: return 'U';
}
}
@ -343,7 +437,7 @@ sysctl_l2t(SYSCTL_HANDLER_ARGS)
"Ethernet address VLAN/P LP State Users Port");
header = 1;
}
if (e->state == L2T_STATE_SWITCHING)
if (e->state >= L2T_STATE_SWITCHING)
ip[0] = 0;
else {
inet_ntop(e->ipv6 ? AF_INET6 : AF_INET, &e->addr[0],


@ -48,6 +48,7 @@ enum {
/* when state is one of the below the entry is not hashed */
L2T_STATE_SWITCHING, /* entry is being used by a switching filter */
L2T_STATE_TLS, /* entry is being used by TLS sessions */
L2T_STATE_UNUSED /* entry not in use */
};
@ -93,6 +94,8 @@ int t4_free_l2t(struct l2t_data *);
struct l2t_entry *t4_alloc_l2e(struct l2t_data *);
struct l2t_entry *t4_l2t_alloc_switching(struct adapter *, uint16_t, uint8_t,
uint8_t *);
struct l2t_entry *t4_l2t_alloc_tls(struct adapter *, struct sge_txq *,
void *, int *, uint16_t, uint8_t, uint8_t *);
int t4_l2t_set_switching(struct adapter *, struct l2t_entry *, uint16_t,
uint8_t, uint8_t *);
int t4_write_l2e(struct l2t_entry *, int);


@ -33,6 +33,7 @@ __FBSDID("$FreeBSD$");
#include "opt_ddb.h"
#include "opt_inet.h"
#include "opt_inet6.h"
#include "opt_kern_tls.h"
#include "opt_ratelimit.h"
#include "opt_rss.h"
@ -65,6 +66,9 @@ __FBSDID("$FreeBSD$");
#endif
#include <netinet/in.h>
#include <netinet/ip.h>
#ifdef KERN_TLS
#include <netinet/tcp_seq.h>
#endif
#if defined(__i386__) || defined(__amd64__)
#include <machine/md_var.h>
#include <machine/cputypes.h>
@ -229,7 +233,7 @@ static void cxgbe_init(void *);
static int cxgbe_ioctl(struct ifnet *, unsigned long, caddr_t);
static int cxgbe_transmit(struct ifnet *, struct mbuf *);
static void cxgbe_qflush(struct ifnet *);
#ifdef RATELIMIT
#if defined(KERN_TLS) || defined(RATELIMIT)
static int cxgbe_snd_tag_alloc(struct ifnet *, union if_snd_tag_alloc_params *,
struct m_snd_tag **);
static int cxgbe_snd_tag_modify(struct m_snd_tag *,
@ -576,6 +580,28 @@ SYSCTL_INT(_hw_cxgbe, OID_AUTO, cop_managed_offloading, CTLFLAG_RDTUN,
"COP (Connection Offload Policy) controls all TOE offload");
#endif
#ifdef KERN_TLS
/*
* This enables KERN_TLS for all adapters if set.
*/
static int t4_kern_tls = 0;
SYSCTL_INT(_hw_cxgbe, OID_AUTO, kern_tls, CTLFLAG_RDTUN, &t4_kern_tls, 0,
"Enable KERN_TLS mode for all supported adapters");
SYSCTL_NODE(_hw_cxgbe, OID_AUTO, tls, CTLFLAG_RD, 0,
"cxgbe(4) KERN_TLS parameters");
static int t4_tls_inline_keys = 0;
SYSCTL_INT(_hw_cxgbe_tls, OID_AUTO, inline_keys, CTLFLAG_RDTUN,
&t4_tls_inline_keys, 0,
"Always pass TLS keys in work requests (1) or attempt to store TLS keys "
"in card memory.");
static int t4_tls_combo_wrs = 0;
SYSCTL_INT(_hw_cxgbe_tls, OID_AUTO, combo_wrs, CTLFLAG_RDTUN, &t4_tls_combo_wrs,
0, "Attempt to combine TCB field updates with TLS record work requests.");
#endif
/* Functions used by VIs to obtain unique MAC addresses for each VI. */
static int vi_mac_funcs[] = {
FW_VI_FUNC_ETH,
@ -1011,6 +1037,8 @@ t4_attach(device_t dev)
sc->policy = NULL;
rw_init(&sc->policy_lock, "connection offload policy");
callout_init(&sc->ktls_tick, 1);
rc = t4_map_bars_0_and_4(sc);
if (rc != 0)
goto done; /* error message displayed already */
@ -1585,6 +1613,7 @@ t4_detach_common(device_t dev)
free(sc->tt.tls_rx_ports, M_CXGBE);
t4_destroy_dma_tag(sc);
callout_drain(&sc->ktls_tick);
callout_drain(&sc->sfl_callout);
if (mtx_initialized(&sc->tids.ftid_lock)) {
mtx_destroy(&sc->tids.ftid_lock);
@ -1663,18 +1692,20 @@ cxgbe_vi_attach(device_t dev, struct vi_info *vi)
ifp->if_transmit = cxgbe_transmit;
ifp->if_qflush = cxgbe_qflush;
ifp->if_get_counter = cxgbe_get_counter;
#ifdef RATELIMIT
#if defined(KERN_TLS) || defined(RATELIMIT)
ifp->if_snd_tag_alloc = cxgbe_snd_tag_alloc;
ifp->if_snd_tag_modify = cxgbe_snd_tag_modify;
ifp->if_snd_tag_query = cxgbe_snd_tag_query;
ifp->if_snd_tag_free = cxgbe_snd_tag_free;
#endif
#ifdef RATELIMIT
ifp->if_ratelimit_query = cxgbe_ratelimit_query;
#endif
ifp->if_capabilities = T4_CAP;
ifp->if_capenable = T4_CAP_ENABLE;
#ifdef TCP_OFFLOAD
if (vi->nofldrxq != 0)
if (vi->nofldrxq != 0 && (vi->pi->adapter->flags & KERN_TLS_OK) == 0)
ifp->if_capabilities |= IFCAP_TOE;
#endif
#ifdef RATELIMIT
@ -1693,6 +1724,12 @@ cxgbe_vi_attach(device_t dev, struct vi_info *vi)
ifp->if_hw_tsomaxsegcount = TX_SGL_SEGS_EO_TSO;
#endif
ifp->if_hw_tsomaxsegsize = 65536;
#ifdef KERN_TLS
if (vi->pi->adapter->flags & KERN_TLS_OK) {
ifp->if_capabilities |= IFCAP_TXTLS;
ifp->if_capenable |= IFCAP_TXTLS;
}
#endif
ether_ifattach(ifp, vi->hw_addr);
#ifdef DEV_NETMAP
@ -2001,6 +2038,11 @@ cxgbe_ioctl(struct ifnet *ifp, unsigned long cmd, caddr_t data)
if (mask & IFCAP_NOMAP)
ifp->if_capenable ^= IFCAP_NOMAP;
#ifdef KERN_TLS
if (mask & IFCAP_TXTLS)
ifp->if_capenable ^= (mask & IFCAP_TXTLS);
#endif
#ifdef VLAN_CAPABILITIES
VLAN_CAPABILITIES(ifp);
#endif
@ -2061,7 +2103,7 @@ cxgbe_transmit(struct ifnet *ifp, struct mbuf *m)
M_ASSERTPKTHDR(m);
MPASS(m->m_nextpkt == NULL); /* not quite ready for this yet */
#ifdef RATELIMIT
#if defined(KERN_TLS) || defined(RATELIMIT)
if (m->m_pkthdr.csum_flags & CSUM_SND_TAG)
MPASS(m->m_pkthdr.snd_tag->ifp == ifp);
#endif
@ -2239,7 +2281,7 @@ cxgbe_get_counter(struct ifnet *ifp, ift_counter c)
}
}
#ifdef RATELIMIT
#if defined(KERN_TLS) || defined(RATELIMIT)
void
cxgbe_snd_tag_init(struct cxgbe_snd_tag *cst, struct ifnet *ifp, int type)
{
@ -2259,6 +2301,11 @@ cxgbe_snd_tag_alloc(struct ifnet *ifp, union if_snd_tag_alloc_params *params,
case IF_SND_TAG_TYPE_RATE_LIMIT:
error = cxgbe_rate_tag_alloc(ifp, params, pt);
break;
#endif
#ifdef KERN_TLS
case IF_SND_TAG_TYPE_TLS:
error = cxgbe_tls_tag_alloc(ifp, params, pt);
break;
#endif
default:
error = EOPNOTSUPP;
@ -2313,6 +2360,11 @@ cxgbe_snd_tag_free(struct m_snd_tag *mst)
case IF_SND_TAG_TYPE_RATE_LIMIT:
cxgbe_rate_tag_free(mst);
return;
#endif
#ifdef KERN_TLS
case IF_SND_TAG_TYPE_TLS:
cxgbe_tls_tag_free(mst);
return;
#endif
default:
panic("shouldn't get here");
@ -4523,6 +4575,58 @@ get_params__post_init(struct adapter *sc)
return (rc);
}
#ifdef KERN_TLS
static void
ktls_tick(void *arg)
{
struct adapter *sc;
uint32_t tstamp;
sc = arg;
tstamp = tcp_ts_getticks();
t4_write_reg(sc, A_TP_SYNC_TIME_HI, tstamp >> 1);
t4_write_reg(sc, A_TP_SYNC_TIME_LO, tstamp << 31);
callout_schedule_sbt(&sc->ktls_tick, SBT_1MS, 0, C_HARDCLOCK);
}
static void
t4_enable_kern_tls(struct adapter *sc)
{
uint32_t m, v;
m = F_ENABLECBYP;
v = F_ENABLECBYP;
t4_set_reg_field(sc, A_TP_PARA_REG6, m, v);
m = F_CPL_FLAGS_UPDATE_EN | F_SEQ_UPDATE_EN;
v = F_CPL_FLAGS_UPDATE_EN | F_SEQ_UPDATE_EN;
t4_set_reg_field(sc, A_ULP_TX_CONFIG, m, v);
m = F_NICMODE;
v = F_NICMODE;
t4_set_reg_field(sc, A_TP_IN_CONFIG, m, v);
m = F_LOOKUPEVERYPKT;
v = 0;
t4_set_reg_field(sc, A_TP_INGRESS_CONFIG, m, v);
m = F_TXDEFERENABLE | F_DISABLEWINDOWPSH | F_DISABLESEPPSHFLAG;
v = F_DISABLEWINDOWPSH;
t4_set_reg_field(sc, A_TP_PC_CONFIG, m, v);
m = V_TIMESTAMPRESOLUTION(M_TIMESTAMPRESOLUTION);
v = V_TIMESTAMPRESOLUTION(0x1f);
t4_set_reg_field(sc, A_TP_TIMER_RESOLUTION, m, v);
sc->flags |= KERN_TLS_OK;
sc->tlst.inline_keys = t4_tls_inline_keys;
sc->tlst.combo_wrs = t4_tls_combo_wrs;
}
#endif
static int
set_params__post_init(struct adapter *sc)
{
@ -4602,6 +4706,12 @@ set_params__post_init(struct adapter *sc)
}
}
#endif
#ifdef KERN_TLS
if (t4_kern_tls != 0 && sc->cryptocaps & FW_CAPS_CONFIG_TLSKEYS &&
sc->toecaps & FW_CAPS_CONFIG_TOE)
t4_enable_kern_tls(sc);
#endif
return (0);
}
@ -5480,6 +5590,11 @@ adapter_full_init(struct adapter *sc)
if (!(sc->flags & IS_VF))
t4_intr_enable(sc);
#ifdef KERN_TLS
if (sc->flags & KERN_TLS_OK)
callout_reset_sbt(&sc->ktls_tick, SBT_1MS, 0, ktls_tick, sc,
C_HARDCLOCK);
#endif
sc->flags |= FULL_INIT_DONE;
done:
if (rc != 0)
@ -6347,6 +6462,25 @@ t4_sysctls(struct adapter *sc)
sysctl_wcwr_stats, "A", "write combined work requests");
}
#ifdef KERN_TLS
if (sc->flags & KERN_TLS_OK) {
/*
* dev.t4nex.0.tls.
*/
oid = SYSCTL_ADD_NODE(ctx, c0, OID_AUTO, "tls", CTLFLAG_RD,
NULL, "KERN_TLS parameters");
children = SYSCTL_CHILDREN(oid);
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "inline_keys",
CTLFLAG_RW, &sc->tlst.inline_keys, 0, "Always pass TLS "
"keys in work requests (1) or attempt to store TLS keys "
"in card memory.");
SYSCTL_ADD_INT(ctx, children, OID_AUTO, "combo_wrs",
CTLFLAG_RW, &sc->tlst.combo_wrs, 0, "Attempt to combine "
"TCB field updates with TLS record work requests.");
}
#endif
#ifdef TCP_OFFLOAD
if (is_offload(sc)) {
int i;
@ -6817,16 +6951,16 @@ cxgbe_sysctls(struct port_info *pi)
SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, "tx_tls_records",
CTLFLAG_RD, &pi->tx_tls_records,
"# of TLS records transmitted");
"# of TOE TLS records transmitted");
SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, "tx_tls_octets",
CTLFLAG_RD, &pi->tx_tls_octets,
"# of payload octets in transmitted TLS records");
"# of payload octets in transmitted TOE TLS records");
SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, "rx_tls_records",
CTLFLAG_RD, &pi->rx_tls_records,
"# of TLS records received");
"# of TOE TLS records received");
SYSCTL_ADD_ULONG(ctx, children, OID_AUTO, "rx_tls_octets",
CTLFLAG_RD, &pi->rx_tls_octets,
"# of payload octets in received TLS records");
"# of payload octets in received TOE TLS records");
}
static int
@ -10076,6 +10210,19 @@ clear_stats(struct adapter *sc, u_int port_id)
txq->txpkts0_pkts = 0;
txq->txpkts1_pkts = 0;
txq->raw_wrs = 0;
txq->tls_wrs = 0;
txq->kern_tls_records = 0;
txq->kern_tls_short = 0;
txq->kern_tls_partial = 0;
txq->kern_tls_full = 0;
txq->kern_tls_octets = 0;
txq->kern_tls_waste = 0;
txq->kern_tls_options = 0;
txq->kern_tls_header = 0;
txq->kern_tls_fin = 0;
txq->kern_tls_fin_short = 0;
txq->kern_tls_cbc = 0;
txq->kern_tls_gcm = 0;
mp_ring_reset_stats(txq->r);
}
@ -10601,10 +10748,17 @@ tweak_tunables(void)
#ifdef TCP_OFFLOAD
calculate_nqueues(&t4_nofldrxq, nc, NOFLDRXQ);
calculate_nqueues(&t4_nofldrxq_vi, nc, NOFLDRXQ_VI);
#endif
#if defined(TCP_OFFLOAD) || defined(KERN_TLS)
if (t4_toecaps_allowed == -1)
t4_toecaps_allowed = FW_CAPS_CONFIG_TOE;
#else
if (t4_toecaps_allowed == -1)
t4_toecaps_allowed = 0;
#endif
#ifdef TCP_OFFLOAD
if (t4_rdmacaps_allowed == -1) {
t4_rdmacaps_allowed = FW_CAPS_CONFIG_RDMA_RDDP |
FW_CAPS_CONFIG_RDMA_RDMAC;
@ -10622,9 +10776,6 @@ tweak_tunables(void)
if (t4_pktc_idx_ofld < -1 || t4_pktc_idx_ofld >= SGE_NCOUNTERS)
t4_pktc_idx_ofld = PKTC_IDX_OFLD;
#else
if (t4_toecaps_allowed == -1)
t4_toecaps_allowed = 0;
if (t4_rdmacaps_allowed == -1)
t4_rdmacaps_allowed = 0;
@ -10888,6 +11039,9 @@ mod_event(module_t mod, int cmd, void *arg)
#endif
#ifdef INET6
t4_clip_modload();
#endif
#ifdef KERN_TLS
t6_ktls_modload();
#endif
t4_tracer_modload();
tweak_tunables();
@ -10928,6 +11082,9 @@ mod_event(module_t mod, int cmd, void *arg)
if (t4_sge_extfree_refs() == 0) {
t4_tracer_modunload();
#ifdef KERN_TLS
t6_ktls_modunload();
#endif
#ifdef INET6
t4_clip_modunload();
#endif


@ -32,6 +32,7 @@ __FBSDID("$FreeBSD$");
#include "opt_inet.h"
#include "opt_inet6.h"
#include "opt_kern_tls.h"
#include "opt_ratelimit.h"
#include <sys/types.h>
@ -39,6 +40,7 @@ __FBSDID("$FreeBSD$");
#include <sys/mbuf.h>
#include <sys/socket.h>
#include <sys/kernel.h>
#include <sys/ktls.h>
#include <sys/malloc.h>
#include <sys/queue.h>
#include <sys/sbuf.h>
@ -47,6 +49,7 @@ __FBSDID("$FreeBSD$");
#include <sys/sglist.h>
#include <sys/sysctl.h>
#include <sys/smp.h>
#include <sys/socketvar.h>
#include <sys/counter.h>
#include <net/bpf.h>
#include <net/ethernet.h>
@ -85,6 +88,7 @@ __FBSDID("$FreeBSD$");
/* Internal mbuf flags stored in PH_loc.eight[1]. */
#define MC_NOMAP 0x01
#define MC_RAW_WR 0x02
#define MC_TLS 0x04
/*
* Ethernet frames are DMA'd at this byte offset into the freelist buffer.
@ -2240,7 +2244,8 @@ mbuf_len16(struct mbuf *m)
M_ASSERTPKTHDR(m);
n = m->m_pkthdr.PH_loc.eight[0];
MPASS(n > 0 && n <= SGE_MAX_WR_LEN / 16);
if (!(mbuf_cflags(m) & MC_TLS))
MPASS(n > 0 && n <= SGE_MAX_WR_LEN / 16);
return (n);
}
@ -2542,7 +2547,7 @@ parse_pkt(struct adapter *sc, struct mbuf **mp)
#if defined(INET) || defined(INET6)
struct tcphdr *tcp;
#endif
#ifdef RATELIMIT
#if defined(KERN_TLS) || defined(RATELIMIT)
struct cxgbe_snd_tag *cst;
#endif
uint16_t eh_type;
@ -2565,11 +2570,25 @@ parse_pkt(struct adapter *sc, struct mbuf **mp)
M_ASSERTPKTHDR(m0);
MPASS(m0->m_pkthdr.len > 0);
nsegs = count_mbuf_nsegs(m0, 0, &cflags);
#ifdef RATELIMIT
#if defined(KERN_TLS) || defined(RATELIMIT)
if (m0->m_pkthdr.csum_flags & CSUM_SND_TAG)
cst = mst_to_cst(m0->m_pkthdr.snd_tag);
else
cst = NULL;
#endif
#ifdef KERN_TLS
if (cst != NULL && cst->type == IF_SND_TAG_TYPE_TLS) {
int len16;
cflags |= MC_TLS;
set_mbuf_cflags(m0, cflags);
rc = t6_ktls_parse_pkt(m0, &nsegs, &len16);
if (rc != 0)
goto fail;
set_mbuf_nsegs(m0, nsegs);
set_mbuf_len16(m0, len16);
return (0);
}
#endif
if (nsegs > (needs_tso(m0) ? TX_SGL_SEGS_TSO : TX_SGL_SEGS)) {
if (defragged++ > 0 || (m = m_defrag(m0, M_NOWAIT)) == NULL) {
@ -2841,7 +2860,7 @@ cannot_use_txpkts(struct mbuf *m)
{
/* maybe put a GL limit too, to avoid silliness? */
return (needs_tso(m) || (mbuf_cflags(m) & MC_RAW_WR) != 0);
return (needs_tso(m) || (mbuf_cflags(m) & (MC_RAW_WR | MC_TLS)) != 0);
}
static inline int
@@ -2917,7 +2936,8 @@ eth_tx(struct mp_ring *r, u_int cidx, u_int pidx)
 		M_ASSERTPKTHDR(m0);
 		MPASS(m0->m_nextpkt == NULL);
 
-		if (available < SGE_MAX_WR_NDESC) {
+		if (available < howmany(mbuf_len16(m0), EQ_ESIZE / 16)) {
+			MPASS(howmany(mbuf_len16(m0), EQ_ESIZE / 16) <= 64);
 			available += reclaim_tx_descs(txq, 64);
 			if (available < howmany(mbuf_len16(m0), EQ_ESIZE / 16))
 				break;	/* out of descriptors */
@@ -2928,7 +2948,19 @@ eth_tx(struct mp_ring *r, u_int cidx, u_int pidx)
 			next_cidx = 0;
 
 		wr = (void *)&eq->desc[eq->pidx];
-		if (sc->flags & IS_VF) {
+		if (mbuf_cflags(m0) & MC_RAW_WR) {
+			total++;
+			remaining--;
+			n = write_raw_wr(txq, (void *)wr, m0, available);
+#ifdef KERN_TLS
+		} else if (mbuf_cflags(m0) & MC_TLS) {
+			total++;
+			remaining--;
+			ETHER_BPF_MTAP(ifp, m0);
+			n = t6_ktls_write_wr(txq, (void *)wr, m0,
+			    mbuf_nsegs(m0), available);
+#endif
+		} else if (sc->flags & IS_VF) {
 			total++;
 			remaining--;
 			ETHER_BPF_MTAP(ifp, m0);
@@ -2962,17 +2994,15 @@ eth_tx(struct mp_ring *r, u_int cidx, u_int pidx)
 			n = write_txpkts_wr(txq, wr, m0, &txp, available);
 			total += txp.npkt;
 			remaining -= txp.npkt;
-		} else if (mbuf_cflags(m0) & MC_RAW_WR) {
-			total++;
-			remaining--;
-			n = write_raw_wr(txq, (void *)wr, m0, available);
 		} else {
 			total++;
 			remaining--;
 			ETHER_BPF_MTAP(ifp, m0);
 			n = write_txpkt_wr(txq, (void *)wr, m0, available);
 		}
-		MPASS(n >= 1 && n <= available && n <= SGE_MAX_WR_NDESC);
+		MPASS(n >= 1 && n <= available);
+		if (!(mbuf_cflags(m0) & MC_TLS))
+			MPASS(n <= SGE_MAX_WR_NDESC);
 
 		available -= n;
 		dbdiff += n;
@@ -4188,6 +4218,49 @@ alloc_txq(struct vi_info *vi, struct sge_txq *txq, int idx,
 	    "# of frames tx'd using type1 txpkts work requests");
 	SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO, "raw_wrs", CTLFLAG_RD,
 	    &txq->raw_wrs, "# of raw work requests (non-packets)");
+	SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO, "tls_wrs", CTLFLAG_RD,
+	    &txq->tls_wrs, "# of TLS work requests (TLS records)");
+
+#ifdef KERN_TLS
+	if (sc->flags & KERN_TLS_OK) {
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_records", CTLFLAG_RD, &txq->kern_tls_records,
+		    "# of NIC TLS records transmitted");
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_short", CTLFLAG_RD, &txq->kern_tls_short,
+		    "# of short NIC TLS records transmitted");
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_partial", CTLFLAG_RD, &txq->kern_tls_partial,
+		    "# of partial NIC TLS records transmitted");
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_full", CTLFLAG_RD, &txq->kern_tls_full,
+		    "# of full NIC TLS records transmitted");
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_octets", CTLFLAG_RD, &txq->kern_tls_octets,
+		    "# of payload octets in transmitted NIC TLS records");
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_waste", CTLFLAG_RD, &txq->kern_tls_waste,
+		    "# of octets DMAd but not transmitted in NIC TLS records");
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_options", CTLFLAG_RD, &txq->kern_tls_options,
+		    "# of NIC TLS options-only packets transmitted");
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_header", CTLFLAG_RD, &txq->kern_tls_header,
+		    "# of NIC TLS header-only packets transmitted");
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_fin", CTLFLAG_RD, &txq->kern_tls_fin,
+		    "# of NIC TLS FIN-only packets transmitted");
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_fin_short", CTLFLAG_RD, &txq->kern_tls_fin_short,
+		    "# of NIC TLS padded FIN packets on short TLS records");
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_cbc", CTLFLAG_RD, &txq->kern_tls_cbc,
+		    "# of NIC TLS sessions using AES-CBC");
+		SYSCTL_ADD_UQUAD(&vi->ctx, children, OID_AUTO,
+		    "kern_tls_gcm", CTLFLAG_RD, &txq->kern_tls_gcm,
+		    "# of NIC TLS sessions using AES-GCM");
+	}
+#endif
 
 	SYSCTL_ADD_COUNTER_U64(&vi->ctx, children, OID_AUTO, "r_enqueues",
 	    CTLFLAG_RD, &txq->r->enqueues,

--- sys/dev/cxgbe/tom/t4_connect.c
+++ sys/dev/cxgbe/tom/t4_connect.c

@@ -255,6 +255,8 @@ t4_connect(struct toedev *tod, struct socket *so, struct rtentry *rt,
 		DONT_OFFLOAD_ACTIVE_OPEN(ENOSYS);	/* XXX: implement lagg+TOE */
 	else
 		DONT_OFFLOAD_ACTIVE_OPEN(ENOTSUP);
+	if (sc->flags & KERN_TLS_OK)
+		DONT_OFFLOAD_ACTIVE_OPEN(ENOTSUP);
 
 	rw_rlock(&sc->policy_lock);
 	settings = *lookup_offload_policy(sc, OPEN_TYPE_ACTIVE, NULL,

--- sys/dev/cxgbe/tom/t4_listen.c
+++ sys/dev/cxgbe/tom/t4_listen.c

@@ -524,6 +524,8 @@ t4_listen_start(struct toedev *tod, struct tcpcb *tp)
 	if (!(inp->inp_vflag & INP_IPV6) &&
 	    IN_LOOPBACK(ntohl(inp->inp_laddr.s_addr)))
 		return (0);
+	if (sc->flags & KERN_TLS_OK)
+		return (0);
 #if 0
 	ADAPTER_LOCK(sc);
 	if (IS_BUSY(sc)) {

--- sys/modules/cxgbe/if_cxgbe/Makefile
+++ sys/modules/cxgbe/if_cxgbe/Makefile

@@ -2,6 +2,8 @@
 # $FreeBSD$
 #
 
+.include <kmod.opts.mk>
+
 CXGBE=	${SRCTOP}/sys/dev/cxgbe
 .PATH:	${CXGBE} ${CXGBE}/common ${CXGBE}/crypto ${CXGBE}/cudbg
@@ -11,6 +13,7 @@ SRCS+=	device_if.h
 SRCS+=	opt_ddb.h
 SRCS+=	opt_inet.h
 SRCS+=	opt_inet6.h
+SRCS+=	opt_kern_tls.h
 SRCS+=	opt_ofed.h
 SRCS+=	opt_ratelimit.h
 SRCS+=	opt_rss.h
@@ -20,6 +23,9 @@ SRCS+=	t4_filter.c
 SRCS+=	t4_hw.c
 SRCS+=	t4_if.c t4_if.h
 SRCS+=	t4_iov.c
+.if ${KERN_OPTS:MKERN_TLS} != ""
+SRCS+=	t4_kern_tls.c
+.endif
 SRCS+=	t4_keyctx.c
 SRCS+=	t4_l2t.c
 SRCS+=	t4_main.c
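The KERN_OPTS check above means t4_kern_tls.c is only compiled when the kernel is built with in-kernel TLS. A sketch of the configuration this implies, assuming a standard kernel config and the timestamp workaround described in the commit message:

```
# Kernel configuration file: enable in-kernel TLS, which defines
# KERN_TLS in opt_kern_tls.h and pulls t4_kern_tls.c into the build.
options 	KERN_TLS

# /etc/sysctl.conf: disable TCP timestamps so connections qualify for
# NIC KTLS (see the timestamp-offset limitation in the description).
net.inet.tcp.rfc1323=0
```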