freebsd-skq/sys/kern/uipc_ktls.c
John Baldwin b2e60773c6 Add kernel-side support for in-kernel TLS.
KTLS adds support for in-kernel framing and encryption of Transport
Layer Security (1.0-1.2) data on TCP sockets.  KTLS only supports
offload of TLS for transmitted data.  Key negotiation must still be
performed in userland.  Once completed, transmit session keys for a
connection are provided to the kernel via a new TCP_TXTLS_ENABLE
socket option.  All subsequent data transmitted on the socket is
placed into TLS frames and encrypted using the supplied keys.
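
For example, once the handshake completes, a userland TLS library
could pass the transmit keys for a TLS 1.2 AES-GCM session to the
kernel along these lines (a minimal, hypothetical sketch rather than
part of this commit; the struct tls_enable layout and TLS_* constants
follow sys/ktls.h, the header paths are assumptions, and error
handling is omitted):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/ktls.h>
    #include <crypto/cryptodev.h>   /* CRYPTO_AES_NIST_GCM_16 */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <string.h>

    static int
    enable_tx_tls(int s, const uint8_t *key, uint8_t keylen,
        const uint8_t *iv, uint8_t ivlen)
    {
            struct tls_enable en;

            memset(&en, 0, sizeof(en));
            en.cipher_algorithm = CRYPTO_AES_NIST_GCM_16;
            en.cipher_key = key;        /* transmit session key */
            en.cipher_key_len = keylen;
            en.iv = iv;                 /* implicit part of the nonce */
            en.iv_len = ivlen;          /* TLS_AEAD_GCM_LEN for GCM */
            en.tls_vmajor = TLS_MAJOR_VER_ONE;
            en.tls_vminor = TLS_MINOR_VER_TWO;
            return (setsockopt(s, IPPROTO_TCP, TCP_TXTLS_ENABLE,
                &en, sizeof(en)));
    }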

Any data written to a KTLS-enabled socket via write(2), aio_write(2),
or sendfile(2) is assumed to be application data and is encoded in TLS
frames with an application data type.  Individual records can be sent
with a custom type (e.g. handshake messages) via sendmsg(2) with a new
control message (TLS_SET_RECORD_TYPE) specifying the record type.
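
For instance, a handshake record could be sent on a KTLS socket with
a sketch like the following (hypothetical userland code reusing the
headers from the sketch above; passing the record type in an
IPPROTO_TCP-level control message is an assumption of this sketch):

    static ssize_t
    send_tls_record(int s, void *buf, size_t len, uint8_t rec_type)
    {
            char cbuf[CMSG_SPACE(sizeof(rec_type))];
            struct cmsghdr *cmsg;
            struct msghdr msg;
            struct iovec iov;

            memset(&msg, 0, sizeof(msg));
            iov.iov_base = buf;
            iov.iov_len = len;
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;
            msg.msg_control = cbuf;
            msg.msg_controllen = sizeof(cbuf);
            cmsg = CMSG_FIRSTHDR(&msg);
            cmsg->cmsg_level = IPPROTO_TCP;
            cmsg->cmsg_type = TLS_SET_RECORD_TYPE;
            cmsg->cmsg_len = CMSG_LEN(sizeof(rec_type));
            /* e.g. 22 for a TLS handshake record */
            *(uint8_t *)CMSG_DATA(cmsg) = rec_type;
            return (sendmsg(s, &msg, 0));
    }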

At present, rekeying is not supported, though the in-kernel framework
should be able to support it.

KTLS makes use of the recently added unmapped mbufs to store TLS
frames in the socket buffer.  Each TLS frame is described by a single
ext_pgs mbuf.  The ext_pgs structure contains the header of the TLS
record (and trailer for encrypted records) as well as references to
the associated TLS session.

KTLS supports two primary methods of encrypting TLS frames: software
TLS and ifnet TLS.

Software TLS marks mbufs holding socket data as not ready via
M_NOTREADY similar to sendfile(2) when TLS framing information is
added to an unmapped mbuf in ktls_frame().  ktls_enqueue() is then
called to schedule TLS frames for encryption.  In the case of
sendfile, sendfile_iodone() calls ktls_enqueue() instead of
pru_ready(), leaving the mbufs marked M_NOTREADY until encryption is
completed.  For other writes (vn_sendfile when pages are available,
write(2), etc.), the PRUS_NOTREADY flag is set when invoking
pru_send() along with invoking ktls_enqueue().

A pool of worker threads (the "KTLS" kernel process) encrypts TLS
frames queued via ktls_enqueue().  Each TLS frame is temporarily
mapped using the direct map and passed to a software encryption
backend to perform the actual encryption.

(Note: The use of PHYS_TO_DMAP could be replaced with sf_bufs if
someone wished to make this work on architectures without a direct
map.)

KTLS supports pluggable software encryption backends.  Internally,
Netflix uses proprietary pure-software backends.  This commit includes
a simple backend in a new ktls_ocf.ko module that uses the kernel's
OpenCrypto framework to provide AES-GCM encryption of TLS frames.  As
a result, software TLS is now a bit of a misnomer as it can make use
of hardware crypto accelerators.

Once software encryption has finished, the TLS frame mbufs are marked
ready via pru_ready().  At this point, the encrypted data appears as
regular payload to the TCP stack stored in unmapped mbufs.

ifnet TLS permits a NIC to offload the TLS encryption and TCP
segmentation.  In this mode, a new send tag type (IF_SND_TAG_TYPE_TLS)
is allocated on the interface a socket is routed over and associated
with a TLS session.  TLS records for a TLS session using ifnet TLS are
not marked M_NOTREADY but are passed down the stack unencrypted.  The
ip_output_send() and ip6_output_send() helper functions that apply
send tags to outbound IP packets verify that the send tag of the TLS
record matches the outbound interface.  If so, the packet is tagged
with the TLS send tag and sent to the interface.  The NIC device
driver must recognize packets with the TLS send tag and schedule them
for TLS encryption and TCP segmentation.  If the outbound
interface does not match the interface in the TLS send tag, the packet
is dropped.  In addition, a task is scheduled to refresh the TLS send
tag for the TLS session.  If a new TLS send tag cannot be allocated,
the connection is dropped.  If a new TLS send tag is allocated,
however, subsequent packets will be tagged with the correct TLS send
tag.  (This latter case has been tested by configuring both ports of a
Chelsio T6 in a lagg and failing over from one port to another.  As
the connections migrated to the new port, new TLS send tags were
allocated for the new port and connections resumed without being
dropped.)

ifnet TLS can be enabled and disabled on supported network interfaces
via new '[-]txtls[46]' options to ifconfig(8).  ifnet TLS is supported
across both vlan devices and lagg interfaces using failover, lacp
with flowid enabled, or lacp with flowid disabled.
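
For example, enabling IPv4 TLS offload on a hypothetical Chelsio T6
port might look like:

    ifconfig cc0 txtls4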

Applications may request the current KTLS mode of a connection via a
new TCP_TXTLS_MODE socket option.  They can also use this socket
option to toggle between software and ifnet TLS modes.
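
A hypothetical sketch of querying the mode and forcing a connection
from ifnet TLS back to software TLS (again reusing the headers from
the earlier sketches; the TCP_TLS_MODE_* values are the constants
referenced by ktls_get_tx_mode() and ktls_set_tx_mode() below):

    static int
    switch_to_sw_tls(int s)
    {
            int mode;
            socklen_t optlen = sizeof(mode);

            if (getsockopt(s, IPPROTO_TCP, TCP_TXTLS_MODE, &mode,
                &optlen) == -1)
                    return (-1);
            if (mode == TCP_TLS_MODE_IFNET) {
                    mode = TCP_TLS_MODE_SW;
                    return (setsockopt(s, IPPROTO_TCP, TCP_TXTLS_MODE,
                        &mode, sizeof(mode)));
            }
            return (0);
    }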

In addition, a testing tool is available in tools/tools/switch_tls.
This is modeled on tcpdrop and uses similar syntax.  However, instead
of dropping connections, -s is used to force KTLS connections to
switch to software TLS and -i is used to switch to ifnet TLS.
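
For example (hypothetical addresses, assuming tcpdrop(8)-style
argument order):

    switch_tls -s 192.0.2.1 443 198.51.100.2 54321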

Various sysctls and counters are available under the kern.ipc.tls
sysctl node.  The kern.ipc.tls.enable node must be set to true to
enable KTLS (it is off by default).  Unmapped mbufs must also be
enabled via kern.ipc.mb_use_ext_pgs for KTLS to function.
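
For example, to turn KTLS on at runtime:

    sysctl kern.ipc.mb_use_ext_pgs=1
    sysctl kern.ipc.tls.enable=1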

KTLS is enabled via the KERN_TLS kernel option.

This patch is the culmination of years of work by several folks
including Scott Long and Randall Stewart for the original design and
implementation; Drew Gallatin for several optimizations including the
use of ext_pgs mbufs, the M_NOTREADY mechanism for TLS records
awaiting software encryption, and pluggable software crypto backends;
and John Baldwin for modifications to support hardware TLS offload.

Reviewed by:	gallatin, hselasky, rrs
Obtained from:	Netflix
Sponsored by:	Netflix, Chelsio Communications
Differential Revision:	https://reviews.freebsd.org/D21277
2019-08-27 00:01:56 +00:00


/*-
* SPDX-License-Identifier: BSD-2-Clause
*
* Copyright (c) 2014-2019 Netflix Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_inet.h"
#include "opt_inet6.h"
#include "opt_rss.h"
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/ktls.h>
#include <sys/lock.h>
#include <sys/mbuf.h>
#include <sys/mutex.h>
#include <sys/rmlock.h>
#include <sys/proc.h>
#include <sys/protosw.h>
#include <sys/refcount.h>
#include <sys/smp.h>
#include <sys/socket.h>
#include <sys/socketvar.h>
#include <sys/sysctl.h>
#include <sys/taskqueue.h>
#include <sys/kthread.h>
#include <sys/uio.h>
#include <sys/vmmeter.h>
#if defined(__aarch64__) || defined(__amd64__) || defined(__i386__)
#include <machine/pcb.h>
#endif
#include <machine/vmparam.h>
#ifdef RSS
#include <net/netisr.h>
#include <net/rss_config.h>
#endif
#if defined(INET) || defined(INET6)
#include <netinet/in.h>
#include <netinet/in_pcb.h>
#endif
#include <netinet/tcp_var.h>
#include <opencrypto/xform.h>
#include <vm/uma_dbg.h>
#include <vm/vm.h>
#include <vm/vm_pageout.h>
#include <vm/vm_page.h>
struct ktls_wq {
struct mtx mtx;
STAILQ_HEAD(, mbuf_ext_pgs) head;
bool running;
} __aligned(CACHE_LINE_SIZE);
static struct ktls_wq *ktls_wq;
static struct proc *ktls_proc;
LIST_HEAD(, ktls_crypto_backend) ktls_backends;
static struct rmlock ktls_backends_lock;
static uma_zone_t ktls_session_zone;
static uint16_t ktls_cpuid_lookup[MAXCPU];
SYSCTL_NODE(_kern_ipc, OID_AUTO, tls, CTLFLAG_RW, 0,
"Kernel TLS offload");
SYSCTL_NODE(_kern_ipc_tls, OID_AUTO, stats, CTLFLAG_RW, 0,
"Kernel TLS offload stats");
static int ktls_allow_unload;
SYSCTL_INT(_kern_ipc_tls, OID_AUTO, allow_unload, CTLFLAG_RDTUN,
&ktls_allow_unload, 0, "Allow software crypto modules to unload");
#ifdef RSS
static int ktls_bind_threads = 1;
#else
static int ktls_bind_threads;
#endif
SYSCTL_INT(_kern_ipc_tls, OID_AUTO, bind_threads, CTLFLAG_RDTUN,
&ktls_bind_threads, 0,
"Bind crypto threads to cores or domains at boot");
static u_int ktls_maxlen = 16384;
SYSCTL_UINT(_kern_ipc_tls, OID_AUTO, maxlen, CTLFLAG_RWTUN,
&ktls_maxlen, 0, "Maximum TLS record size");
static int ktls_number_threads;
SYSCTL_INT(_kern_ipc_tls_stats, OID_AUTO, threads, CTLFLAG_RD,
&ktls_number_threads, 0,
"Number of TLS threads in thread-pool");
static bool ktls_offload_enable;
SYSCTL_BOOL(_kern_ipc_tls, OID_AUTO, enable, CTLFLAG_RW,
&ktls_offload_enable, 0,
"Enable support for kernel TLS offload");
static bool ktls_cbc_enable = true;
SYSCTL_BOOL(_kern_ipc_tls, OID_AUTO, cbc_enable, CTLFLAG_RW,
&ktls_cbc_enable, 1,
"Enable Support of AES-CBC crypto for kernel TLS");
static counter_u64_t ktls_tasks_active;
SYSCTL_COUNTER_U64(_kern_ipc_tls, OID_AUTO, tasks_active, CTLFLAG_RD,
&ktls_tasks_active, "Number of active tasks");
static counter_u64_t ktls_cnt_on;
SYSCTL_COUNTER_U64(_kern_ipc_tls_stats, OID_AUTO, so_inqueue, CTLFLAG_RD,
&ktls_cnt_on, "Number of TLS records in queue to tasks for SW crypto");
static counter_u64_t ktls_offload_total;
SYSCTL_COUNTER_U64(_kern_ipc_tls_stats, OID_AUTO, offload_total,
CTLFLAG_RD, &ktls_offload_total,
"Total successful TLS setups (parameters set)");
static counter_u64_t ktls_offload_enable_calls;
SYSCTL_COUNTER_U64(_kern_ipc_tls_stats, OID_AUTO, enable_calls,
CTLFLAG_RD, &ktls_offload_enable_calls,
"Total number of TLS enable calls made");
static counter_u64_t ktls_offload_active;
SYSCTL_COUNTER_U64(_kern_ipc_tls_stats, OID_AUTO, active, CTLFLAG_RD,
&ktls_offload_active, "Total Active TLS sessions");
static counter_u64_t ktls_offload_failed_crypto;
SYSCTL_COUNTER_U64(_kern_ipc_tls_stats, OID_AUTO, failed_crypto, CTLFLAG_RD,
&ktls_offload_failed_crypto, "Total TLS crypto failures");
static counter_u64_t ktls_switch_to_ifnet;
SYSCTL_COUNTER_U64(_kern_ipc_tls_stats, OID_AUTO, switch_to_ifnet, CTLFLAG_RD,
&ktls_switch_to_ifnet, "TLS sessions switched from SW to ifnet");
static counter_u64_t ktls_switch_to_sw;
SYSCTL_COUNTER_U64(_kern_ipc_tls_stats, OID_AUTO, switch_to_sw, CTLFLAG_RD,
&ktls_switch_to_sw, "TLS sessions switched from ifnet to SW");
static counter_u64_t ktls_switch_failed;
SYSCTL_COUNTER_U64(_kern_ipc_tls_stats, OID_AUTO, switch_failed, CTLFLAG_RD,
&ktls_switch_failed, "TLS sessions unable to switch between SW and ifnet");
SYSCTL_NODE(_kern_ipc_tls, OID_AUTO, sw, CTLFLAG_RD, 0,
"Software TLS session stats");
SYSCTL_NODE(_kern_ipc_tls, OID_AUTO, ifnet, CTLFLAG_RD, 0,
"Hardware (ifnet) TLS session stats");
static counter_u64_t ktls_sw_cbc;
SYSCTL_COUNTER_U64(_kern_ipc_tls_sw, OID_AUTO, cbc, CTLFLAG_RD, &ktls_sw_cbc,
"Active number of software TLS sessions using AES-CBC");
static counter_u64_t ktls_sw_gcm;
SYSCTL_COUNTER_U64(_kern_ipc_tls_sw, OID_AUTO, gcm, CTLFLAG_RD, &ktls_sw_gcm,
"Active number of software TLS sessions using AES-GCM");
static counter_u64_t ktls_ifnet_cbc;
SYSCTL_COUNTER_U64(_kern_ipc_tls_ifnet, OID_AUTO, cbc, CTLFLAG_RD,
&ktls_ifnet_cbc,
"Active number of ifnet TLS sessions using AES-CBC");
static counter_u64_t ktls_ifnet_gcm;
SYSCTL_COUNTER_U64(_kern_ipc_tls_ifnet, OID_AUTO, gcm, CTLFLAG_RD,
&ktls_ifnet_gcm,
"Active number of ifnet TLS sessions using AES-GCM");
static counter_u64_t ktls_ifnet_reset;
SYSCTL_COUNTER_U64(_kern_ipc_tls_ifnet, OID_AUTO, reset, CTLFLAG_RD,
&ktls_ifnet_reset, "TLS sessions updated to a new ifnet send tag");
static counter_u64_t ktls_ifnet_reset_dropped;
SYSCTL_COUNTER_U64(_kern_ipc_tls_ifnet, OID_AUTO, reset_dropped, CTLFLAG_RD,
&ktls_ifnet_reset_dropped,
"TLS sessions dropped after failing to update ifnet send tag");
static counter_u64_t ktls_ifnet_reset_failed;
SYSCTL_COUNTER_U64(_kern_ipc_tls_ifnet, OID_AUTO, reset_failed, CTLFLAG_RD,
&ktls_ifnet_reset_failed,
"TLS sessions that failed to allocate a new ifnet send tag");
static int ktls_ifnet_permitted;
SYSCTL_UINT(_kern_ipc_tls_ifnet, OID_AUTO, permitted, CTLFLAG_RWTUN,
&ktls_ifnet_permitted, 1,
"Whether to permit hardware (ifnet) TLS sessions");
static MALLOC_DEFINE(M_KTLS, "ktls", "Kernel TLS");
static void ktls_cleanup(struct ktls_session *tls);
#if defined(INET) || defined(INET6)
static void ktls_reset_send_tag(void *context, int pending);
#endif
static void ktls_work_thread(void *ctx);
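/*
 * Register a software crypto backend.  The list of backends is kept
 * sorted by descending priority so that ktls_try_sw() can stop at the
 * first backend that accepts a session.
 */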
int
ktls_crypto_backend_register(struct ktls_crypto_backend *be)
{
struct ktls_crypto_backend *curr_be, *tmp;
if (be->api_version != KTLS_API_VERSION) {
printf("KTLS: API version mismatch (%d vs %d) for %s\n",
be->api_version, KTLS_API_VERSION,
be->name);
return (EINVAL);
}
rm_wlock(&ktls_backends_lock);
printf("KTLS: Registering crypto method %s with prio %d\n",
be->name, be->prio);
if (LIST_EMPTY(&ktls_backends)) {
LIST_INSERT_HEAD(&ktls_backends, be, next);
} else {
LIST_FOREACH_SAFE(curr_be, &ktls_backends, next, tmp) {
if (curr_be->prio < be->prio) {
LIST_INSERT_BEFORE(curr_be, be, next);
break;
}
if (LIST_NEXT(curr_be, next) == NULL) {
LIST_INSERT_AFTER(curr_be, be, next);
break;
}
}
}
rm_wunlock(&ktls_backends_lock);
return (0);
}
int
ktls_crypto_backend_deregister(struct ktls_crypto_backend *be)
{
struct ktls_crypto_backend *tmp;
/*
* Don't error if the backend isn't registered. This permits
* MOD_UNLOAD handlers to use this function unconditionally.
*/
rm_wlock(&ktls_backends_lock);
LIST_FOREACH(tmp, &ktls_backends, next) {
if (tmp == be)
break;
}
if (tmp == NULL) {
rm_wunlock(&ktls_backends_lock);
return (0);
}
if (!ktls_allow_unload) {
rm_wunlock(&ktls_backends_lock);
printf(
"KTLS: Deregistering crypto method %s is not supported\n",
be->name);
return (EBUSY);
}
if (be->use_count) {
rm_wunlock(&ktls_backends_lock);
return (EBUSY);
}
LIST_REMOVE(be, next);
rm_wunlock(&ktls_backends_lock);
return (0);
}
#if defined(INET) || defined(INET6)
static uint16_t
ktls_get_cpu(struct socket *so)
{
struct inpcb *inp;
uint16_t cpuid;
inp = sotoinpcb(so);
#ifdef RSS
cpuid = rss_hash2cpuid(inp->inp_flowid, inp->inp_flowtype);
if (cpuid != NETISR_CPUID_NONE)
return (cpuid);
#endif
/*
* Just use the flowid to shard connections in a repeatable
* fashion. Note that some crypto backends rely on the
* serialization provided by having the same connection use
* the same queue.
*/
cpuid = ktls_cpuid_lookup[inp->inp_flowid % ktls_number_threads];
return (cpuid);
}
#endif
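/*
 * Allocate the statistics counters, backend list lock, session UMA
 * zone, and per-CPU work queues, and start one worker thread per CPU
 * at boot.
 */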
static void
ktls_init(void *dummy __unused)
{
struct thread *td;
struct pcpu *pc;
cpuset_t mask;
int error, i;
ktls_tasks_active = counter_u64_alloc(M_WAITOK);
ktls_cnt_on = counter_u64_alloc(M_WAITOK);
ktls_offload_total = counter_u64_alloc(M_WAITOK);
ktls_offload_enable_calls = counter_u64_alloc(M_WAITOK);
ktls_offload_active = counter_u64_alloc(M_WAITOK);
ktls_offload_failed_crypto = counter_u64_alloc(M_WAITOK);
ktls_switch_to_ifnet = counter_u64_alloc(M_WAITOK);
ktls_switch_to_sw = counter_u64_alloc(M_WAITOK);
ktls_switch_failed = counter_u64_alloc(M_WAITOK);
ktls_sw_cbc = counter_u64_alloc(M_WAITOK);
ktls_sw_gcm = counter_u64_alloc(M_WAITOK);
ktls_ifnet_cbc = counter_u64_alloc(M_WAITOK);
ktls_ifnet_gcm = counter_u64_alloc(M_WAITOK);
ktls_ifnet_reset = counter_u64_alloc(M_WAITOK);
ktls_ifnet_reset_dropped = counter_u64_alloc(M_WAITOK);
ktls_ifnet_reset_failed = counter_u64_alloc(M_WAITOK);
rm_init(&ktls_backends_lock, "ktls backends");
LIST_INIT(&ktls_backends);
ktls_wq = malloc(sizeof(*ktls_wq) * (mp_maxid + 1), M_KTLS,
M_WAITOK | M_ZERO);
ktls_session_zone = uma_zcreate("ktls_session",
sizeof(struct ktls_session),
#ifdef INVARIANTS
trash_ctor, trash_dtor, trash_init, trash_fini,
#else
NULL, NULL, NULL, NULL,
#endif
UMA_ALIGN_CACHE, 0);
/*
* Initialize the workqueues to run the TLS work. We create a
* work queue for each CPU.
*/
CPU_FOREACH(i) {
STAILQ_INIT(&ktls_wq[i].head);
mtx_init(&ktls_wq[i].mtx, "ktls work queue", NULL, MTX_DEF);
error = kproc_kthread_add(ktls_work_thread, &ktls_wq[i],
&ktls_proc, &td, 0, 0, "KTLS", "ktls_thr_%d", i);
if (error)
panic("Can't add KTLS thread %d error %d", i, error);
/*
* Bind threads to cores. If ktls_bind_threads is >
* 1, then we bind to the NUMA domain.
*/
if (ktls_bind_threads) {
if (ktls_bind_threads > 1) {
pc = pcpu_find(i);
CPU_COPY(&cpuset_domain[pc->pc_domain], &mask);
} else {
CPU_SETOF(i, &mask);
}
error = cpuset_setthread(td->td_tid, &mask);
if (error)
panic(
"Unable to bind KTLS thread for CPU %d error %d",
i, error);
}
ktls_cpuid_lookup[ktls_number_threads] = i;
ktls_number_threads++;
}
printf("KTLS: Initialized %d threads\n", ktls_number_threads);
}
SYSINIT(ktls, SI_SUB_SMP + 1, SI_ORDER_ANY, ktls_init, NULL);
#if defined(INET) || defined(INET6)
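/*
 * Validate the parameters in 'en' and, if they describe a supported
 * cipher suite, allocate a new session and copy in the keys and
 * implicit IV from userland.
 */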
static int
ktls_create_session(struct socket *so, struct tls_enable *en,
struct ktls_session **tlsp)
{
struct ktls_session *tls;
int error;
/* Only TLS 1.0 - 1.2 are supported. */
if (en->tls_vmajor != TLS_MAJOR_VER_ONE)
return (EINVAL);
if (en->tls_vminor < TLS_MINOR_VER_ZERO ||
en->tls_vminor > TLS_MINOR_VER_TWO)
return (EINVAL);
if (en->auth_key_len < 0 || en->auth_key_len > TLS_MAX_PARAM_SIZE)
return (EINVAL);
if (en->cipher_key_len < 0 || en->cipher_key_len > TLS_MAX_PARAM_SIZE)
return (EINVAL);
if (en->iv_len < 0 || en->iv_len > TLS_MAX_PARAM_SIZE)
return (EINVAL);
/* All supported algorithms require a cipher key. */
if (en->cipher_key_len == 0)
return (EINVAL);
/* No flags are currently supported. */
if (en->flags != 0)
return (EINVAL);
/* Common checks for supported algorithms. */
switch (en->cipher_algorithm) {
case CRYPTO_AES_NIST_GCM_16:
/*
* auth_algorithm isn't used, but permit GMAC values
* for compatibility.
*/
switch (en->auth_algorithm) {
case 0:
case CRYPTO_AES_128_NIST_GMAC:
case CRYPTO_AES_192_NIST_GMAC:
case CRYPTO_AES_256_NIST_GMAC:
break;
default:
return (EINVAL);
}
if (en->auth_key_len != 0)
return (EINVAL);
if (en->iv_len != TLS_AEAD_GCM_LEN)
return (EINVAL);
break;
case CRYPTO_AES_CBC:
switch (en->auth_algorithm) {
case CRYPTO_SHA1_HMAC:
/*
* TLS 1.0 requires an implicit IV. TLS 1.1+
* all use explicit IVs.
*/
if (en->tls_vminor == TLS_MINOR_VER_ZERO) {
if (en->iv_len != TLS_CBC_IMPLICIT_IV_LEN)
return (EINVAL);
break;
}
/* FALLTHROUGH */
case CRYPTO_SHA2_256_HMAC:
case CRYPTO_SHA2_384_HMAC:
/* Ignore any supplied IV. */
en->iv_len = 0;
break;
default:
return (EINVAL);
}
if (en->auth_key_len == 0)
return (EINVAL);
break;
default:
return (EINVAL);
}
tls = uma_zalloc(ktls_session_zone, M_WAITOK | M_ZERO);
counter_u64_add(ktls_offload_active, 1);
refcount_init(&tls->refcount, 1);
TASK_INIT(&tls->reset_tag_task, 0, ktls_reset_send_tag, tls);
tls->wq_index = ktls_get_cpu(so);
tls->params.cipher_algorithm = en->cipher_algorithm;
tls->params.auth_algorithm = en->auth_algorithm;
tls->params.tls_vmajor = en->tls_vmajor;
tls->params.tls_vminor = en->tls_vminor;
tls->params.flags = en->flags;
tls->params.max_frame_len = min(TLS_MAX_MSG_SIZE_V10_2, ktls_maxlen);
/* Set the header and trailer lengths. */
tls->params.tls_hlen = sizeof(struct tls_record_layer);
switch (en->cipher_algorithm) {
case CRYPTO_AES_NIST_GCM_16:
tls->params.tls_hlen += 8;
tls->params.tls_tlen = AES_GMAC_HASH_LEN;
tls->params.tls_bs = 1;
break;
case CRYPTO_AES_CBC:
switch (en->auth_algorithm) {
case CRYPTO_SHA1_HMAC:
if (en->tls_vminor == TLS_MINOR_VER_ZERO) {
/* Implicit IV, no nonce. */
} else {
tls->params.tls_hlen += AES_BLOCK_LEN;
}
tls->params.tls_tlen = AES_BLOCK_LEN +
SHA1_HASH_LEN;
break;
case CRYPTO_SHA2_256_HMAC:
tls->params.tls_hlen += AES_BLOCK_LEN;
tls->params.tls_tlen = AES_BLOCK_LEN +
SHA2_256_HASH_LEN;
break;
case CRYPTO_SHA2_384_HMAC:
tls->params.tls_hlen += AES_BLOCK_LEN;
tls->params.tls_tlen = AES_BLOCK_LEN +
SHA2_384_HASH_LEN;
break;
default:
panic("invalid hmac");
}
tls->params.tls_bs = AES_BLOCK_LEN;
break;
default:
panic("invalid cipher");
}
KASSERT(tls->params.tls_hlen <= MBUF_PEXT_HDR_LEN,
("TLS header length too long: %d", tls->params.tls_hlen));
KASSERT(tls->params.tls_tlen <= MBUF_PEXT_TRAIL_LEN,
("TLS trailer length too long: %d", tls->params.tls_tlen));
if (en->auth_key_len != 0) {
tls->params.auth_key_len = en->auth_key_len;
tls->params.auth_key = malloc(en->auth_key_len, M_KTLS,
M_WAITOK);
error = copyin(en->auth_key, tls->params.auth_key,
en->auth_key_len);
if (error)
goto out;
}
tls->params.cipher_key_len = en->cipher_key_len;
tls->params.cipher_key = malloc(en->cipher_key_len, M_KTLS, M_WAITOK);
error = copyin(en->cipher_key, tls->params.cipher_key,
en->cipher_key_len);
if (error)
goto out;
/*
* This holds the implicit portion of the nonce for GCM and
* the initial implicit IV for TLS 1.0. The explicit portions
* of the IV are generated in ktls_frame() and ktls_seq().
*/
if (en->iv_len != 0) {
MPASS(en->iv_len <= sizeof(tls->params.iv));
tls->params.iv_len = en->iv_len;
error = copyin(en->iv, tls->params.iv, en->iv_len);
if (error)
goto out;
}
*tlsp = tls;
return (0);
out:
ktls_cleanup(tls);
return (error);
}
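/*
 * Duplicate an existing session, deep-copying its keys, so that a
 * connection can switch between software and ifnet TLS modes.
 */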
static struct ktls_session *
ktls_clone_session(struct ktls_session *tls)
{
struct ktls_session *tls_new;
tls_new = uma_zalloc(ktls_session_zone, M_WAITOK | M_ZERO);
counter_u64_add(ktls_offload_active, 1);
refcount_init(&tls_new->refcount, 1);
/* Copy fields from existing session. */
tls_new->params = tls->params;
tls_new->wq_index = tls->wq_index;
/* Deep copy keys. */
if (tls_new->params.auth_key != NULL) {
tls_new->params.auth_key = malloc(tls->params.auth_key_len,
M_KTLS, M_WAITOK);
memcpy(tls_new->params.auth_key, tls->params.auth_key,
tls->params.auth_key_len);
}
tls_new->params.cipher_key = malloc(tls->params.cipher_key_len, M_KTLS,
M_WAITOK);
memcpy(tls_new->params.cipher_key, tls->params.cipher_key,
tls->params.cipher_key_len);
return (tls_new);
}
#endif
static void
ktls_cleanup(struct ktls_session *tls)
{
counter_u64_add(ktls_offload_active, -1);
if (tls->free != NULL) {
MPASS(tls->be != NULL);
switch (tls->params.cipher_algorithm) {
case CRYPTO_AES_CBC:
counter_u64_add(ktls_sw_cbc, -1);
break;
case CRYPTO_AES_NIST_GCM_16:
counter_u64_add(ktls_sw_gcm, -1);
break;
}
tls->free(tls);
} else if (tls->snd_tag != NULL) {
switch (tls->params.cipher_algorithm) {
case CRYPTO_AES_CBC:
counter_u64_add(ktls_ifnet_cbc, -1);
break;
case CRYPTO_AES_NIST_GCM_16:
counter_u64_add(ktls_ifnet_gcm, -1);
break;
}
m_snd_tag_rele(tls->snd_tag);
}
if (tls->params.auth_key != NULL) {
explicit_bzero(tls->params.auth_key, tls->params.auth_key_len);
free(tls->params.auth_key, M_KTLS);
tls->params.auth_key = NULL;
tls->params.auth_key_len = 0;
}
if (tls->params.cipher_key != NULL) {
explicit_bzero(tls->params.cipher_key,
tls->params.cipher_key_len);
free(tls->params.cipher_key, M_KTLS);
tls->params.cipher_key = NULL;
tls->params.cipher_key_len = 0;
}
explicit_bzero(tls->params.iv, sizeof(tls->params.iv));
}
#if defined(INET) || defined(INET6)
/*
* Common code used when first enabling ifnet TLS on a connection or
* when allocating a new ifnet TLS session due to a routing change.
* This function allocates a new TLS send tag on whatever interface
* the connection is currently routed over.
*/
static int
ktls_alloc_snd_tag(struct inpcb *inp, struct ktls_session *tls, bool force,
struct m_snd_tag **mstp)
{
union if_snd_tag_alloc_params params;
struct ifnet *ifp;
struct rtentry *rt;
struct tcpcb *tp;
int error;
INP_RLOCK(inp);
if (inp->inp_flags2 & INP_FREED) {
INP_RUNLOCK(inp);
return (ECONNRESET);
}
if (inp->inp_flags & (INP_TIMEWAIT | INP_DROPPED)) {
INP_RUNLOCK(inp);
return (ECONNRESET);
}
if (inp->inp_socket == NULL) {
INP_RUNLOCK(inp);
return (ECONNRESET);
}
tp = intotcpcb(inp);
/*
* Check administrative controls on ifnet TLS to determine if
* ifnet TLS should be denied.
*
* - Always permit 'force' requests.
* - ktls_ifnet_permitted == 0: always deny.
*/
if (!force && ktls_ifnet_permitted == 0) {
INP_RUNLOCK(inp);
return (ENXIO);
}
/*
* XXX: Use the cached route in the inpcb to find the
* interface. This should perhaps instead use
* rtalloc1_fib(dst, 0, 0, fibnum). Since KTLS is only
* enabled after a connection has completed key negotiation in
* userland, the cached route will be present in practice.
*/
rt = inp->inp_route.ro_rt;
if (rt == NULL || rt->rt_ifp == NULL) {
INP_RUNLOCK(inp);
return (ENXIO);
}
ifp = rt->rt_ifp;
if_ref(ifp);
params.hdr.type = IF_SND_TAG_TYPE_TLS;
params.hdr.flowid = inp->inp_flowid;
params.hdr.flowtype = inp->inp_flowtype;
params.tls.inp = inp;
params.tls.tls = tls;
INP_RUNLOCK(inp);
if (ifp->if_snd_tag_alloc == NULL) {
error = EOPNOTSUPP;
goto out;
}
if ((ifp->if_capenable & IFCAP_NOMAP) == 0) {
error = EOPNOTSUPP;
goto out;
}
if (inp->inp_vflag & INP_IPV6) {
if ((ifp->if_capenable & IFCAP_TXTLS6) == 0) {
error = EOPNOTSUPP;
goto out;
}
} else {
if ((ifp->if_capenable & IFCAP_TXTLS4) == 0) {
error = EOPNOTSUPP;
goto out;
}
}
error = ifp->if_snd_tag_alloc(ifp, &params, mstp);
out:
if_rele(ifp);
return (error);
}
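/*
 * Attempt to enable ifnet TLS for a session by allocating a send tag
 * on the interface the connection is currently routed over.
 */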
static int
ktls_try_ifnet(struct socket *so, struct ktls_session *tls, bool force)
{
struct m_snd_tag *mst;
int error;
error = ktls_alloc_snd_tag(so->so_pcb, tls, force, &mst);
if (error == 0) {
tls->snd_tag = mst;
switch (tls->params.cipher_algorithm) {
case CRYPTO_AES_CBC:
counter_u64_add(ktls_ifnet_cbc, 1);
break;
case CRYPTO_AES_NIST_GCM_16:
counter_u64_add(ktls_ifnet_gcm, 1);
break;
}
}
return (error);
}
static int
ktls_try_sw(struct socket *so, struct ktls_session *tls)
{
struct rm_priotracker prio;
struct ktls_crypto_backend *be;
/*
* Choose the best software crypto backend. Backends are
* stored in sorted priority order (largest value == most
* important at the head of the list), so this just stops on
* the first backend that claims the session by returning
* success.
*/
if (ktls_allow_unload)
rm_rlock(&ktls_backends_lock, &prio);
LIST_FOREACH(be, &ktls_backends, next) {
if (be->try(so, tls) == 0)
break;
KASSERT(tls->cipher == NULL,
("ktls backend leaked a cipher pointer"));
}
if (be != NULL) {
if (ktls_allow_unload)
be->use_count++;
tls->be = be;
}
if (ktls_allow_unload)
rm_runlock(&ktls_backends_lock, &prio);
if (be == NULL)
return (EOPNOTSUPP);
switch (tls->params.cipher_algorithm) {
case CRYPTO_AES_CBC:
counter_u64_add(ktls_sw_cbc, 1);
break;
case CRYPTO_AES_NIST_GCM_16:
counter_u64_add(ktls_sw_gcm, 1);
break;
}
return (0);
}
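/*
 * Enable transmit TLS offload on a socket in response to the
 * TCP_TXTLS_ENABLE socket option, preferring ifnet TLS over software
 * TLS.
 */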
int
ktls_enable_tx(struct socket *so, struct tls_enable *en)
{
struct ktls_session *tls;
int error;
if (!ktls_offload_enable)
return (ENOTSUP);
counter_u64_add(ktls_offload_enable_calls, 1);
/*
* This should always be true since only the TCP socket option
* invokes this function.
*/
if (so->so_proto->pr_protocol != IPPROTO_TCP)
return (EINVAL);
/*
* XXX: Don't overwrite existing sessions. We should permit
* this to support rekeying in the future.
*/
if (so->so_snd.sb_tls_info != NULL)
return (EALREADY);
if (en->cipher_algorithm == CRYPTO_AES_CBC && !ktls_cbc_enable)
return (ENOTSUP);
/* TLS requires ext pgs */
if (mb_use_ext_pgs == 0)
return (ENXIO);
error = ktls_create_session(so, en, &tls);
if (error)
return (error);
/* Prefer ifnet TLS over software TLS. */
error = ktls_try_ifnet(so, tls, false);
if (error)
error = ktls_try_sw(so, tls);
if (error) {
ktls_cleanup(tls);
return (error);
}
error = sblock(&so->so_snd, SBL_WAIT);
if (error) {
ktls_cleanup(tls);
return (error);
}
SOCKBUF_LOCK(&so->so_snd);
so->so_snd.sb_tls_info = tls;
if (tls->sw_encrypt == NULL)
so->so_snd.sb_flags |= SB_TLS_IFNET;
SOCKBUF_UNLOCK(&so->so_snd);
sbunlock(&so->so_snd);
counter_u64_add(ktls_offload_total, 1);
return (0);
}
int
ktls_get_tx_mode(struct socket *so)
{
struct ktls_session *tls;
struct inpcb *inp;
int mode;
inp = so->so_pcb;
INP_WLOCK_ASSERT(inp);
SOCKBUF_LOCK(&so->so_snd);
tls = so->so_snd.sb_tls_info;
if (tls == NULL)
mode = TCP_TLS_MODE_NONE;
else if (tls->sw_encrypt != NULL)
mode = TCP_TLS_MODE_SW;
else
mode = TCP_TLS_MODE_IFNET;
SOCKBUF_UNLOCK(&so->so_snd);
return (mode);
}
/*
* Switch between SW and ifnet TLS sessions as requested.
*/
int
ktls_set_tx_mode(struct socket *so, int mode)
{
struct ktls_session *tls, *tls_new;
struct inpcb *inp;
int error;
MPASS(mode == TCP_TLS_MODE_SW || mode == TCP_TLS_MODE_IFNET);
inp = so->so_pcb;
INP_WLOCK_ASSERT(inp);
SOCKBUF_LOCK(&so->so_snd);
tls = so->so_snd.sb_tls_info;
if (tls == NULL) {
SOCKBUF_UNLOCK(&so->so_snd);
return (0);
}
if ((tls->sw_encrypt != NULL && mode == TCP_TLS_MODE_SW) ||
(tls->sw_encrypt == NULL && mode == TCP_TLS_MODE_IFNET)) {
SOCKBUF_UNLOCK(&so->so_snd);
return (0);
}
tls = ktls_hold(tls);
SOCKBUF_UNLOCK(&so->so_snd);
INP_WUNLOCK(inp);
tls_new = ktls_clone_session(tls);
if (mode == TCP_TLS_MODE_IFNET)
error = ktls_try_ifnet(so, tls_new, true);
else
error = ktls_try_sw(so, tls_new);
if (error) {
counter_u64_add(ktls_switch_failed, 1);
ktls_free(tls_new);
ktls_free(tls);
INP_WLOCK(inp);
return (error);
}
error = sblock(&so->so_snd, SBL_WAIT);
if (error) {
counter_u64_add(ktls_switch_failed, 1);
ktls_free(tls_new);
ktls_free(tls);
INP_WLOCK(inp);
return (error);
}
/*
* If we raced with another session change, keep the existing
* session.
*/
if (tls != so->so_snd.sb_tls_info) {
counter_u64_add(ktls_switch_failed, 1);
sbunlock(&so->so_snd);
ktls_free(tls_new);
ktls_free(tls);
INP_WLOCK(inp);
return (EBUSY);
}
SOCKBUF_LOCK(&so->so_snd);
so->so_snd.sb_tls_info = tls_new;
if (tls_new->sw_encrypt == NULL)
so->so_snd.sb_flags |= SB_TLS_IFNET;
SOCKBUF_UNLOCK(&so->so_snd);
sbunlock(&so->so_snd);
/*
* Drop two references on 'tls'. The first is for the
* ktls_hold() above. The second drops the reference from the
* socket buffer.
*/
KASSERT(tls->refcount >= 2, ("too few references on old session"));
ktls_free(tls);
ktls_free(tls);
if (mode == TCP_TLS_MODE_IFNET)
counter_u64_add(ktls_switch_to_ifnet, 1);
else
counter_u64_add(ktls_switch_to_sw, 1);
INP_WLOCK(inp);
return (0);
}
/*
* Try to allocate a new TLS send tag. This task is scheduled when
* ip_output detects a route change while trying to transmit a packet
* holding a TLS record. If a new tag is allocated, replace the tag
* in the TLS session. Subsequent packets on the connection will use
* the new tag. If a new tag cannot be allocated, drop the
* connection.
*/
static void
ktls_reset_send_tag(void *context, int pending)
{
struct epoch_tracker et;
struct ktls_session *tls;
struct m_snd_tag *old, *new;
struct inpcb *inp;
struct tcpcb *tp;
int error;
MPASS(pending == 1);
tls = context;
inp = tls->inp;
/*
* Free the old tag first before allocating a new one.
* ip[6]_output_send() will treat a NULL send tag the same as
* an ifp mismatch and drop packets until a new tag is
* allocated.
*
* Write-lock the INP when changing tls->snd_tag since
* ip[6]_output_send() holds a read-lock when reading the
* pointer.
*/
INP_WLOCK(inp);
old = tls->snd_tag;
tls->snd_tag = NULL;
INP_WUNLOCK(inp);
if (old != NULL)
m_snd_tag_rele(old);
error = ktls_alloc_snd_tag(inp, tls, true, &new);
if (error == 0) {
INP_WLOCK(inp);
tls->snd_tag = new;
mtx_pool_lock(mtxpool_sleep, tls);
tls->reset_pending = false;
mtx_pool_unlock(mtxpool_sleep, tls);
if (!in_pcbrele_wlocked(inp))
INP_WUNLOCK(inp);
counter_u64_add(ktls_ifnet_reset, 1);
/*
* XXX: Should we kick tcp_output explicitly now that
* the send tag is fixed or just rely on timers?
*/
} else {
INP_INFO_RLOCK_ET(&V_tcbinfo, et);
INP_WLOCK(inp);
if (!in_pcbrele_wlocked(inp)) {
if (!(inp->inp_flags & INP_TIMEWAIT) &&
!(inp->inp_flags & INP_DROPPED)) {
tp = intotcpcb(inp);
tp = tcp_drop(tp, ECONNABORTED);
if (tp != NULL)
INP_WUNLOCK(inp);
counter_u64_add(ktls_ifnet_reset_dropped, 1);
} else
INP_WUNLOCK(inp);
}
INP_INFO_RUNLOCK_ET(&V_tcbinfo, et);
counter_u64_add(ktls_ifnet_reset_failed, 1);
/*
* Leave reset_pending true to avoid future tasks while
* the socket goes away.
*/
}
ktls_free(tls);
}
int
ktls_output_eagain(struct inpcb *inp, struct ktls_session *tls)
{
if (inp == NULL)
return (ENOBUFS);
INP_LOCK_ASSERT(inp);
/*
* See if we should schedule a task to update the send tag for
* this session.
*/
mtx_pool_lock(mtxpool_sleep, tls);
if (!tls->reset_pending) {
(void) ktls_hold(tls);
in_pcbref(inp);
tls->inp = inp;
tls->reset_pending = true;
taskqueue_enqueue(taskqueue_thread, &tls->reset_tag_task);
}
mtx_pool_unlock(mtxpool_sleep, tls);
return (ENOBUFS);
}
#endif
void
ktls_destroy(struct ktls_session *tls)
{
struct rm_priotracker prio;
ktls_cleanup(tls);
if (tls->be != NULL && ktls_allow_unload) {
rm_rlock(&ktls_backends_lock, &prio);
tls->be->use_count--;
rm_runlock(&ktls_backends_lock, &prio);
}
uma_zfree(ktls_session_zone, tls);
}
void
ktls_seq(struct sockbuf *sb, struct mbuf *m)
{
struct mbuf_ext_pgs *pgs;
struct tls_record_layer *tlshdr;
uint64_t seqno;
for (; m != NULL; m = m->m_next) {
KASSERT((m->m_flags & M_NOMAP) != 0,
("ktls_seq: mapped mbuf %p", m));
pgs = m->m_ext.ext_pgs;
pgs->seqno = sb->sb_tls_seqno;
/*
* Store the sequence number in the TLS header as the
* explicit part of the IV for GCM.
*/
if (pgs->tls->params.cipher_algorithm ==
CRYPTO_AES_NIST_GCM_16) {
tlshdr = (void *)pgs->hdr;
seqno = htobe64(pgs->seqno);
memcpy(tlshdr + 1, &seqno, sizeof(seqno));
}
sb->sb_tls_seqno++;
}
}
/*
* Add TLS framing (headers and trailers) to a chain of mbufs. Each
* mbuf in the chain must be an unmapped mbuf. The payload of the
* mbuf must be populated with the payload of each TLS record.
*
* The record_type argument specifies the TLS record type used when
* populating the TLS header.
*
* The enq_count argument on return is set to the number of pages of
* payload data for this entire chain that need to be encrypted via SW
* encryption. The returned value should be passed to ktls_enqueue
* when scheduling encryption of this chain of mbufs.
*/
int
ktls_frame(struct mbuf *top, struct ktls_session *tls, int *enq_cnt,
uint8_t record_type)
{
struct tls_record_layer *tlshdr;
struct mbuf *m;
struct mbuf_ext_pgs *pgs;
uint16_t tls_len;
int maxlen;
maxlen = tls->params.max_frame_len;
*enq_cnt = 0;
for (m = top; m != NULL; m = m->m_next) {
/*
* All mbufs in the chain should be non-empty TLS
* records whose payload does not exceed the maximum
* frame length.
*/
if (m->m_len > maxlen || m->m_len == 0)
return (EINVAL);
tls_len = m->m_len;
/*
* TLS frames require unmapped mbufs to store session
* info.
*/
KASSERT((m->m_flags & M_NOMAP) != 0,
("ktls_frame: mapped mbuf %p (top = %p)\n", m, top));
pgs = m->m_ext.ext_pgs;
/* Save a reference to the session. */
pgs->tls = ktls_hold(tls);
pgs->hdr_len = tls->params.tls_hlen;
pgs->trail_len = tls->params.tls_tlen;
if (tls->params.cipher_algorithm == CRYPTO_AES_CBC) {
int bs, delta;
/*
* AES-CBC pads messages to a multiple of the
* block size. Note that the padding is
* applied after the digest and the encryption
* is done on the "plaintext || mac || padding".
* At least one byte of padding is always
* present.
*
* Compute the final trailer length assuming
* at most one block of padding.
* tls->params.tls_tlen is the maximum
* possible trailer length (padding + digest).
* delta holds the number of excess padding
* bytes if the maximum were used. Those
* extra bytes are removed.
*/
bs = tls->params.tls_bs;
delta = (tls_len + tls->params.tls_tlen) & (bs - 1);
pgs->trail_len -= delta;
}
m->m_len += pgs->hdr_len + pgs->trail_len;
/* Populate the TLS header. */
tlshdr = (void *)pgs->hdr;
tlshdr->tls_vmajor = tls->params.tls_vmajor;
tlshdr->tls_vminor = tls->params.tls_vminor;
tlshdr->tls_type = record_type;
tlshdr->tls_length = htons(m->m_len - sizeof(*tlshdr));
/*
* For GCM, the sequence number is stored in the
* header by ktls_seq(). For CBC, a random nonce is
* inserted for TLS 1.1+.
*/
if (tls->params.cipher_algorithm == CRYPTO_AES_CBC &&
tls->params.tls_vminor >= TLS_MINOR_VER_ONE)
arc4rand(tlshdr + 1, AES_BLOCK_LEN, 0);
/*
* When using SW encryption, mark the mbuf not ready.
* It will be marked ready via sbready() after the
* record has been encrypted.
*
* When using ifnet TLS, unencrypted TLS records are
* sent down the stack to the NIC.
*/
if (tls->sw_encrypt != NULL) {
m->m_flags |= M_NOTREADY;
pgs->nrdy = pgs->npgs;
*enq_cnt += pgs->npgs;
}
}
return (0);
}
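/*
 * Hand an ext_pgs buffer to a worker thread to release its TLS
 * session reference and free it.  A NULL pgs->mbuf marks the buffer
 * for freeing rather than encryption.
 */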
void
ktls_enqueue_to_free(struct mbuf_ext_pgs *pgs)
{
struct ktls_wq *wq;
bool running;
/* Mark it for freeing. */
pgs->mbuf = NULL;
wq = &ktls_wq[pgs->tls->wq_index];
mtx_lock(&wq->mtx);
STAILQ_INSERT_TAIL(&wq->head, pgs, stailq);
running = wq->running;
mtx_unlock(&wq->mtx);
if (!running)
wakeup(wq);
}
void
ktls_enqueue(struct mbuf *m, struct socket *so, int page_count)
{
struct mbuf_ext_pgs *pgs;
struct ktls_wq *wq;
bool running;
KASSERT(((m->m_flags & (M_NOMAP | M_NOTREADY)) ==
(M_NOMAP | M_NOTREADY)),
("ktls_enqueue: %p not unready & nomap mbuf\n", m));
KASSERT(page_count != 0, ("enqueueing TLS mbuf with zero page count"));
pgs = m->m_ext.ext_pgs;
KASSERT(pgs->tls->sw_encrypt != NULL, ("ifnet TLS mbuf"));
pgs->enc_cnt = page_count;
pgs->mbuf = m;
/*
* Save a pointer to the socket. The caller is responsible
* for taking an additional reference via soref().
*/
pgs->so = so;
wq = &ktls_wq[pgs->tls->wq_index];
mtx_lock(&wq->mtx);
STAILQ_INSERT_TAIL(&wq->head, pgs, stailq);
running = wq->running;
mtx_unlock(&wq->mtx);
if (!running)
wakeup(wq);
counter_u64_add(ktls_cnt_on, 1);
}
static __noinline void
ktls_encrypt(struct mbuf_ext_pgs *pgs)
{
struct ktls_session *tls;
struct socket *so;
struct mbuf *m, *top;
vm_paddr_t parray[1 + btoc(TLS_MAX_MSG_SIZE_V10_2)];
struct iovec src_iov[1 + btoc(TLS_MAX_MSG_SIZE_V10_2)];
struct iovec dst_iov[1 + btoc(TLS_MAX_MSG_SIZE_V10_2)];
vm_page_t pg;
int error, i, len, npages, off, total_pages;
bool is_anon;
so = pgs->so;
tls = pgs->tls;
top = pgs->mbuf;
KASSERT(tls != NULL, ("tls = NULL, top = %p, pgs = %p\n", top, pgs));
KASSERT(so != NULL, ("so = NULL, top = %p, pgs = %p\n", top, pgs));
#ifdef INVARIANTS
pgs->so = NULL;
pgs->mbuf = NULL;
#endif
total_pages = pgs->enc_cnt;
npages = 0;
/*
* Encrypt the TLS records in the chain of mbufs starting with
* 'top'. 'total_pages' gives us a total count of pages and is
* used to know when we have finished encrypting the TLS
* records originally queued with 'top'.
*
* NB: These mbufs are queued in the socket buffer and
* 'm_next' is traversing the mbufs in the socket buffer. The
* socket buffer lock is not held while traversing this chain.
* Since the mbufs are all marked M_NOTREADY their 'm_next'
* pointers should be stable. However, the 'm_next' of the
* last mbuf encrypted is not necessarily NULL. It can point
* to other mbufs appended while 'top' was on the TLS work
* queue.
*
* Each mbuf holds an entire TLS record.
*/
error = 0;
for (m = top; npages != total_pages; m = m->m_next) {
pgs = m->m_ext.ext_pgs;
KASSERT(pgs->tls == tls,
("different TLS sessions in a single mbuf chain: %p vs %p",
tls, pgs->tls));
KASSERT((m->m_flags & (M_NOMAP | M_NOTREADY)) ==
(M_NOMAP | M_NOTREADY),
("%p not unready & nomap mbuf (top = %p)\n", m, top));
KASSERT(npages + pgs->npgs <= total_pages,
("page count mismatch: top %p, total_pages %d, m %p", top,
total_pages, m));
/*
* Generate source and destination iovecs to pass to
* the SW encryption backend. For writable mbufs, the
* destination iovec is a copy of the source and
* encryption is done in place. For file-backed mbufs
* (from sendfile), anonymous wired pages are
* allocated and assigned to the destination iovec.
*/
is_anon = M_WRITABLE(m);
off = pgs->first_pg_off;
for (i = 0; i < pgs->npgs; i++, off = 0) {
len = mbuf_ext_pg_len(pgs, i, off);
src_iov[i].iov_len = len;
src_iov[i].iov_base =
(char *)(void *)PHYS_TO_DMAP(pgs->pa[i]) + off;
if (is_anon) {
dst_iov[i].iov_base = src_iov[i].iov_base;
dst_iov[i].iov_len = src_iov[i].iov_len;
continue;
}
retry_page:
pg = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL |
VM_ALLOC_NOOBJ | VM_ALLOC_NODUMP | VM_ALLOC_WIRED);
if (pg == NULL) {
vm_wait(NULL);
goto retry_page;
}
parray[i] = VM_PAGE_TO_PHYS(pg);
dst_iov[i].iov_base =
(char *)(void *)PHYS_TO_DMAP(parray[i]) + off;
dst_iov[i].iov_len = len;
}
npages += i;
error = (*tls->sw_encrypt)(tls,
(const struct tls_record_layer *)pgs->hdr,
pgs->trail, src_iov, dst_iov, i, pgs->seqno);
if (error) {
counter_u64_add(ktls_offload_failed_crypto, 1);
break;
}
/*
* For file-backed mbufs, release the file-backed
* pages and replace them in the ext_pgs array with
* the anonymous wired pages allocated above.
*/
if (!is_anon) {
/* Free the old pages. */
m->m_ext.ext_free(m);
/* Replace them with the new pages. */
for (i = 0; i < pgs->npgs; i++)
pgs->pa[i] = parray[i];
/* Use the basic free routine. */
m->m_ext.ext_free = mb_free_mext_pgs;
}
/*
* Drop a reference to the session now that it is no
* longer needed. Existing code depends on encrypted
* records having no associated session vs
* yet-to-be-encrypted records having an associated
* session.
*/
pgs->tls = NULL;
ktls_free(tls);
}
CURVNET_SET(so->so_vnet);
if (error == 0) {
(void)(*so->so_proto->pr_usrreqs->pru_ready)(so, top, npages);
} else {
so->so_proto->pr_usrreqs->pru_abort(so);
so->so_error = EIO;
mb_free_notready(top, total_pages);
}
SOCK_LOCK(so);
sorele(so);
CURVNET_RESTORE();
}
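/*
 * Per-CPU worker thread loop: sleep until work is queued, then
 * encrypt queued records or free queued buffers.
 */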
static void
ktls_work_thread(void *ctx)
{
struct ktls_wq *wq = ctx;
struct mbuf_ext_pgs *p, *n;
struct ktls_session *tls;
STAILQ_HEAD(, mbuf_ext_pgs) local_head;
#if defined(__aarch64__) || defined(__amd64__) || defined(__i386__)
fpu_kern_thread(0);
#endif
for (;;) {
mtx_lock(&wq->mtx);
while (STAILQ_EMPTY(&wq->head)) {
wq->running = false;
mtx_sleep(wq, &wq->mtx, 0, "-", 0);
wq->running = true;
}
STAILQ_INIT(&local_head);
STAILQ_CONCAT(&local_head, &wq->head);
mtx_unlock(&wq->mtx);
STAILQ_FOREACH_SAFE(p, &local_head, stailq, n) {
if (p->mbuf != NULL) {
ktls_encrypt(p);
counter_u64_add(ktls_cnt_on, -1);
} else {
tls = p->tls;
ktls_free(tls);
uma_zfree(zone_extpgs, p);
}
}
}
}