freebsd-skq/sys/netipsec/xform_esp.c
jhb ddcef18974 Refactor driver and consumer interfaces for OCF (in-kernel crypto).
- The linked list of cryptoini structures used in session
  initialization is replaced with a new flat structure: struct
  crypto_session_params.  This structure includes a new mode field that
  defines how the other fields should be interpreted.  Available modes
  include:

  - COMPRESS (for compression/decompression)
  - CIPHER (for simple encryption/decryption)
  - DIGEST (computing and verifying digests)
  - AEAD (combined auth and encryption such as AES-GCM and AES-CCM)
  - ETA (combined auth and encryption using encrypt-then-authenticate)

  Additional modes could be added in the future (e.g. if we wanted to
  support TLS MtE for AES-CBC in the kernel we could add a new mode
  for that.  TLS modes might also affect how AAD is interpreted, etc.)

  The flat structure also includes the key lengths and algorithms as
  before.  However, code doesn't have to walk the linked list and
  switch on the algorithm to determine which key is the auth key vs
  encryption key.  The 'csp_auth_*' fields are always used for auth
  keys and settings and 'csp_cipher_*' for cipher.  (Compression
  algorithms are stored in csp_cipher_alg.)
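
  As a rough sketch (not lifted from any in-tree consumer; the key
  buffers and lengths are placeholders), an ETA session using these
  fields might be set up like this:

      struct crypto_session_params csp;
      crypto_session_t sid;
      int error;

      memset(&csp, 0, sizeof(csp));
      csp.csp_mode = CSP_MODE_ETA;
      csp.csp_cipher_alg = CRYPTO_AES_CBC;
      csp.csp_cipher_key = enc_key;     /* must stay valid for the session */
      csp.csp_cipher_klen = 32;         /* AES-256 */
      csp.csp_ivlen = 16;
      csp.csp_auth_alg = CRYPTO_SHA2_256_HMAC;
      csp.csp_auth_key = auth_key;
      csp.csp_auth_klen = 32;

      error = crypto_newsession(&sid, &csp,
          CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SOFTWARE);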

- Drivers no longer register a list of supported algorithms.  This
  doesn't quite work when you factor in modes (e.g. a driver might
  support both AES-CBC and SHA2-256-HMAC separately but not combined
  for ETA).  Instead, a new 'crypto_probesession' method has been
  added to the kobj interface for symmetric crypto drivers.  This
  method returns a negative value on success (similar to how
  device_probe works) and the crypto framework uses this value to pick
  the "best" driver.  There are three constants for hardware
  (e.g. ccr), accelerated software (e.g. aesni), and plain software
  (cryptosoft) that give preference in that order.  One effect of this
  is that if you request only hardware when creating a new session,
  you will no longer get a session using accelerated software.
  Another effect is that the default setting to disallow software
  crypto via /dev/crypto now disables accelerated software.

  Once a driver is chosen, 'crypto_newsession' is invoked as before.
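
  As an illustration of the probe method (the algorithm checks are made
  up for the example; the return value is the hardware preference
  constant mentioned above), a hardware driver supporting AES-CBC alone
  or paired with SHA2-256-HMAC might do roughly:

      static int
      example_probesession(device_t dev,
          const struct crypto_session_params *csp)
      {

          switch (csp->csp_mode) {
          case CSP_MODE_CIPHER:
              if (csp->csp_cipher_alg != CRYPTO_AES_CBC)
                  return (EINVAL);
              break;
          case CSP_MODE_ETA:
              if (csp->csp_cipher_alg != CRYPTO_AES_CBC ||
                  csp->csp_auth_alg != CRYPTO_SHA2_256_HMAC)
                  return (EINVAL);
              break;
          default:
              return (EINVAL);
          }
          /* Negative values are successful probes, as with device_probe. */
          return (CRYPTODEV_PROBE_HARDWARE);
      }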

- Crypto operations are now solely described by the flat 'cryptop'
  structure.  The linked list of descriptors has been removed.

  A separate enum has been added to describe the type of data buffer
  in use instead of using CRYPTO_F_* flags to make it easier to add
  more types in the future if needed (e.g. wired userspace buffers for
  zero-copy).  It will also make it easier to re-introduce separate
  input and output buffers (in-kernel TLS would benefit from this).

  Try to make the flags related to IV handling less insane:

  - CRYPTO_F_IV_SEPARATE means that the IV is stored in the 'crp_iv'
    member of the operation structure.  If this flag is not set, the
    IV is stored in the data buffer at the 'crp_iv_start' offset.

  - CRYPTO_F_IV_GENERATE means that a random IV should be generated
    and stored into the data buffer.  This cannot be used with
    CRYPTO_F_IV_SEPARATE.

  If a consumer wants to deal with explicit vs implicit IVs, etc. it
  can always generate the IV however it needs and store partial IVs in
  the buffer and the full IV/nonce in crp_iv and set
  CRYPTO_F_IV_SEPARATE.

  The layout of the buffer is now described via fields in cryptop.
  crp_aad_start and crp_aad_length define the boundaries of any AAD.
  Previously with GCM and CCM you defined an auth crd with this range,
  but for ETA your auth crd had to span both the AAD and plaintext
  (and they had to be adjacent).

  crp_payload_start and crp_payload_length define the boundaries of
  the plaintext/ciphertext.  Modes that only do a single operation
  (COMPRESS, CIPHER, DIGEST) should only use this region and leave the
  AAD region empty.

  If a digest is present (or should be generated), its starting
  location is marked by crp_digest_start.

  Instead of using the CRD_F_ENCRYPT flag to determine the direction
  of the operation, cryptop now includes an 'op' field defining the
  operation to perform.  For digests I've added a new VERIFY digest
  mode which assumes a digest is present in the input and fails the
  request with EBADMSG if it doesn't match the internally-computed
  digest.  GCM and CCM already assumed this, and the new AEAD mode
  requires this for decryption.  The new ETA mode now also requires
  this for decryption, so IPsec and GELI no longer do their own
  authentication verification.  Simple DIGEST operations can also do
  this, though there are no in-tree consumers.

  To eventually support some refcounting to close races, the session
  cookie is now passed to crypto_getreq() and clients should no longer
  set crp_session directly.
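
  Putting that together, a consumer request under the new interface
  looks roughly like the following sketch (modeled on esp_input() in
  this file; 'sid', the callback, and the offsets are placeholders for
  an ETA decrypt-and-verify over an mbuf):

      struct cryptop *crp;

      crp = crypto_getreq(sid, M_NOWAIT);
      if (crp == NULL)
          return (ENOBUFS);

      crp->crp_op = CRYPTO_OP_DECRYPT | CRYPTO_OP_VERIFY_DIGEST;
      crp->crp_ilen = m->m_pkthdr.len;
      crp->crp_mbuf = m;
      crp->crp_buf_type = CRYPTO_BUF_MBUF;

      /* AAD first, then the IV in the buffer, payload, digest at the end. */
      crp->crp_aad_start = 0;
      crp->crp_aad_length = aad_len;
      crp->crp_iv_start = aad_len;
      crp->crp_payload_start = aad_len + iv_len;
      crp->crp_payload_length = m->m_pkthdr.len -
          (aad_len + iv_len + digest_len);
      crp->crp_digest_start = m->m_pkthdr.len - digest_len;

      crp->crp_callback = example_callback;
      crp->crp_opaque = example_arg;
      error = crypto_dispatch(crp);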

- Asymmetric crypto operation structures should be allocated via
  crypto_getkreq() and freed via crypto_freekreq().  This permits the
  crypto layer to track open asym requests and close races with a
  driver trying to unregister while asym requests are in flight.

- crypto_copyback, crypto_copydata, crypto_apply, and
  crypto_contiguous_subsegment now accept the 'crp' object as the
  first parameter instead of individual members.  This makes it easier
  to deal with different buffer types in the future as well as
  separate input and output buffers.  It's also simpler for driver
  writers to use.
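
  For example, writing a computed digest back into the request's buffer
  (or pulling the payload out) is now just the following, where
  'digest' and 'buf' are driver-local placeholders:

      crypto_copyback(crp, crp->crp_digest_start, sizeof(digest), digest);
      crypto_copydata(crp, crp->crp_payload_start,
          crp->crp_payload_length, buf);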

- bus_dmamap_load_crp() loads a DMA mapping for a crypto buffer.
  This understands the various types of buffers so that drivers that
  use DMA do not have to be aware of different buffer types.
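
  A rough sketch of the intended usage, assuming the same argument
  order as the other bus_dmamap_load_*() variants ('sc', its tag/map,
  and the callback are hypothetical):

      error = bus_dmamap_load_crp(sc->sc_dmat, sc->sc_dmamap, crp,
          example_dma_callback, sc, BUS_DMA_NOWAIT);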

- Helper routines now exist to build an auth context for HMAC IPAD
  and OPAD.  This reduces some duplicated work among drivers.
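
  A sketch of what a driver might do at session setup, assuming the
  helpers are named hmac_init_ipad()/hmac_init_opad() and that
  'ictx'/'octx' are driver-local auth contexts for the session's auth
  algorithm (SHA2-256-HMAC here):

      const struct auth_hash *axf = &auth_hash_hmac_sha2_256;

      hmac_init_ipad(axf, csp->csp_auth_key, csp->csp_auth_klen, ictx);
      hmac_init_opad(axf, csp->csp_auth_key, csp->csp_auth_klen, octx);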

- Key buffers are now treated as const throughout the framework and in
  device drivers.  However, session key buffers provided when a session
  is created are expected to remain alive for the duration of the
  session.

- GCM and CCM sessions now only specify a cipher algorithm and a cipher
  key.  The redundant auth information is not needed or used.
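
  So an AES-GCM session now reduces to a sketch like this (the key and
  lengths are placeholders; as in esp_init() below, the key length
  excludes the 4-byte salt, and the 96-bit nonce length is an
  assumption):

      memset(&csp, 0, sizeof(csp));
      csp.csp_mode = CSP_MODE_AEAD;
      csp.csp_cipher_alg = CRYPTO_AES_NIST_GCM_16;
      csp.csp_cipher_key = key;
      csp.csp_cipher_klen = 32;         /* AES-256 */
      csp.csp_ivlen = 12;               /* 96-bit nonce */
      error = crypto_newsession(&sid, &csp,
          CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SOFTWARE);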

- For cryptosoft, the code is split up a bit so that the 'process'
  callback now invokes a function pointer stored in the session.  This
  function pointer is chosen based on the session mode, which
  simplifies a few edge cases that would otherwise clutter the switch
  in 'process'.

  It does split up GCM vs CCM which I think is more readable even if there
  is some duplication.

- I changed /dev/crypto to support GMAC requests using CRYPTO_AES_NIST_GMAC
  as an auth algorithm and updated cryptocheck to work with it.

- Combined cipher and auth sessions via /dev/crypto now always use ETA
  mode.  The COP_F_CIPHER_FIRST flag is now a no-op that is ignored.
  This was actually documented as being true in crypto(4) before, but
  the code had not implemented this before I added the CIPHER_FIRST
  flag.

- I have not yet updated /dev/crypto to be aware of explicit modes for
  sessions.  I will probably do that at some point in the future as well
  as teach it about IV/nonce and tag lengths for AEAD so we can support
  all of the NIST KAT tests for GCM and CCM.

- I've split up the existing crypto.9 manpage into several pages,
  of which many are written from scratch.

- I have converted all drivers and consumers in the tree and verified
  that they compile, but I have not tested all of them.  I have tested
  the following drivers:

  - cryptosoft
  - aesni (AES only)
  - blake2
  - ccr

  and the following consumers:

  - cryptodev
  - IPsec
  - ktls_ocf
  - GELI (lightly)

  I have not tested the following:

  - ccp
  - aesni with sha
  - hifn
  - kgssapi_krb5
  - ubsec
  - padlock
  - safe
  - armv8_crypto (aarch64)
  - glxsb (i386)
  - sec (ppc)
  - cesa (armv7)
  - cryptocteon (mips64)
  - nlmsec (mips64)

Discussed with:	cem
Relnotes:	yes
Sponsored by:	Chelsio Communications
Differential Revision:	https://reviews.freebsd.org/D23677
2020-03-27 18:25:23 +00:00

/* $FreeBSD$ */
/* $OpenBSD: ip_esp.c,v 1.69 2001/06/26 06:18:59 angelos Exp $ */
/*-
* The authors of this code are John Ioannidis (ji@tla.org),
* Angelos D. Keromytis (kermit@csd.uch.gr) and
* Niels Provos (provos@physnet.uni-hamburg.de).
*
* The original version of this code was written by John Ioannidis
* for BSD/OS in Athens, Greece, in November 1995.
*
* Ported to OpenBSD and NetBSD, with additional transforms, in December 1996,
* by Angelos D. Keromytis.
*
* Additional transforms and features in 1997 and 1998 by Angelos D. Keromytis
* and Niels Provos.
*
* Additional features in 1999 by Angelos D. Keromytis.
*
* Copyright (C) 1995, 1996, 1997, 1998, 1999 by John Ioannidis,
* Angelos D. Keromytis and Niels Provos.
* Copyright (c) 2001 Angelos D. Keromytis.
*
* Permission to use, copy, and modify this software with or without fee
* is hereby granted, provided that this entire notice is included in
* all copies of any software which is or includes a copy or
* modification of this software.
* You may use this code under the GNU public license if you so wish. Please
* contribute changes back to the authors under this freer than GPL license
* so that we may further the use of strong encryption without limitations to
* all.
*
* THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR
* IMPLIED WARRANTY. IN PARTICULAR, NONE OF THE AUTHORS MAKES ANY
* REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE
* MERCHANTABILITY OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR
* PURPOSE.
*/
#include "opt_inet.h"
#include "opt_inet6.h"
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>
#include <sys/socket.h>
#include <sys/syslog.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/random.h>
#include <sys/mutex.h>
#include <sys/sysctl.h>
#include <sys/mutex.h>
#include <machine/atomic.h>
#include <net/if.h>
#include <net/vnet.h>
#include <netinet/in.h>
#include <netinet/in_systm.h>
#include <netinet/ip.h>
#include <netinet/ip_ecn.h>
#include <netinet/ip6.h>
#include <netipsec/ipsec.h>
#include <netipsec/ah.h>
#include <netipsec/ah_var.h>
#include <netipsec/esp.h>
#include <netipsec/esp_var.h>
#include <netipsec/xform.h>
#ifdef INET6
#include <netinet6/ip6_var.h>
#include <netipsec/ipsec6.h>
#include <netinet6/ip6_ecn.h>
#endif
#include <netipsec/key.h>
#include <netipsec/key_debug.h>
#include <opencrypto/cryptodev.h>
#include <opencrypto/xform.h>
VNET_DEFINE(int, esp_enable) = 1;
VNET_PCPUSTAT_DEFINE(struct espstat, espstat);
VNET_PCPUSTAT_SYSINIT(espstat);
#ifdef VIMAGE
VNET_PCPUSTAT_SYSUNINIT(espstat);
#endif /* VIMAGE */
SYSCTL_DECL(_net_inet_esp);
SYSCTL_INT(_net_inet_esp, OID_AUTO, esp_enable,
CTLFLAG_VNET | CTLFLAG_RW, &VNET_NAME(esp_enable), 0, "");
SYSCTL_VNET_PCPUSTAT(_net_inet_esp, IPSECCTL_STATS, stats,
struct espstat, espstat,
"ESP statistics (struct espstat, netipsec/esp_var.h");
static struct timeval deswarn, blfwarn, castwarn, camelliawarn;
static int esp_input_cb(struct cryptop *op);
static int esp_output_cb(struct cryptop *crp);
size_t
esp_hdrsiz(struct secasvar *sav)
{
size_t size;
if (sav != NULL) {
/*XXX not right for null algorithm--does it matter??*/
IPSEC_ASSERT(sav->tdb_encalgxform != NULL,
("SA with null xform"));
if (sav->flags & SADB_X_EXT_OLD)
size = sizeof (struct esp);
else
size = sizeof (struct newesp);
size += sav->tdb_encalgxform->blocksize + 9;
/*XXX need alg check???*/
if (sav->tdb_authalgxform != NULL && sav->replay)
size += ah_hdrsiz(sav);
} else {
/*
* base header size
* + max iv length for CBC mode
* + max pad length
* + sizeof (pad length field)
* + sizeof (next header field)
* + max icv supported.
*/
size = sizeof (struct newesp) + EALG_MAX_BLOCK_LEN + 9 + 16;
}
return size;
}
/*
* esp_init() is called when an SPI is being set up.
*/
static int
esp_init(struct secasvar *sav, struct xformsw *xsp)
{
const struct enc_xform *txform;
struct crypto_session_params csp;
int keylen;
int error;
txform = enc_algorithm_lookup(sav->alg_enc);
if (txform == NULL) {
DPRINTF(("%s: unsupported encryption algorithm %d\n",
__func__, sav->alg_enc));
return EINVAL;
}
if (sav->key_enc == NULL) {
DPRINTF(("%s: no encoding key for %s algorithm\n",
__func__, txform->name));
return EINVAL;
}
if ((sav->flags & (SADB_X_EXT_OLD | SADB_X_EXT_IV4B)) ==
SADB_X_EXT_IV4B) {
DPRINTF(("%s: 4-byte IV not supported with protocol\n",
__func__));
return EINVAL;
}
switch (sav->alg_enc) {
case SADB_EALG_DESCBC:
if (ratecheck(&deswarn, &ipsec_warn_interval))
gone_in(13, "DES cipher for IPsec");
break;
case SADB_X_EALG_BLOWFISHCBC:
if (ratecheck(&blfwarn, &ipsec_warn_interval))
gone_in(13, "Blowfish cipher for IPsec");
break;
case SADB_X_EALG_CAST128CBC:
if (ratecheck(&castwarn, &ipsec_warn_interval))
gone_in(13, "CAST cipher for IPsec");
break;
case SADB_X_EALG_CAMELLIACBC:
if (ratecheck(&camelliawarn, &ipsec_warn_interval))
gone_in(13, "Camellia cipher for IPsec");
break;
}
/* subtract off the salt, RFC4106, 8.1 and RFC3686, 5.1 */
keylen = _KEYLEN(sav->key_enc) - SAV_ISCTRORGCM(sav) * 4;
if (txform->minkey > keylen || keylen > txform->maxkey) {
DPRINTF(("%s: invalid key length %u, must be in the range "
"[%u..%u] for algorithm %s\n", __func__,
keylen, txform->minkey, txform->maxkey,
txform->name));
return EINVAL;
}
if (SAV_ISCTRORGCM(sav))
sav->ivlen = 8; /* RFC4106 3.1 and RFC3686 3.1 */
else
sav->ivlen = txform->ivsize;
memset(&csp, 0, sizeof(csp));
/*
* Setup AH-related state.
*/
if (sav->alg_auth != 0) {
error = ah_init0(sav, xsp, &csp);
if (error)
return error;
}
/* NB: override anything set in ah_init0 */
sav->tdb_xform = xsp;
sav->tdb_encalgxform = txform;
/*
* Whenever AES-GCM is used for encryption, one
* of the AES authentication algorithms is chosen
* as well, based on the key size.
*/
if (sav->alg_enc == SADB_X_EALG_AESGCM16) {
switch (keylen) {
case AES_128_GMAC_KEY_LEN:
sav->alg_auth = SADB_X_AALG_AES128GMAC;
sav->tdb_authalgxform = &auth_hash_nist_gmac_aes_128;
break;
case AES_192_GMAC_KEY_LEN:
sav->alg_auth = SADB_X_AALG_AES192GMAC;
sav->tdb_authalgxform = &auth_hash_nist_gmac_aes_192;
break;
case AES_256_GMAC_KEY_LEN:
sav->alg_auth = SADB_X_AALG_AES256GMAC;
sav->tdb_authalgxform = &auth_hash_nist_gmac_aes_256;
break;
default:
DPRINTF(("%s: invalid key length %u"
"for algorithm %s\n", __func__,
keylen, txform->name));
return EINVAL;
}
csp.csp_mode = CSP_MODE_AEAD;
} else if (sav->alg_auth != 0)
csp.csp_mode = CSP_MODE_ETA;
else
csp.csp_mode = CSP_MODE_CIPHER;
/* Initialize crypto session. */
csp.csp_cipher_alg = sav->tdb_encalgxform->type;
csp.csp_cipher_key = sav->key_enc->key_data;
csp.csp_cipher_klen = _KEYBITS(sav->key_enc) / 8 -
SAV_ISCTRORGCM(sav) * 4;
csp.csp_ivlen = txform->ivsize;
error = crypto_newsession(&sav->tdb_cryptoid, &csp, V_crypto_support);
return error;
}
/*
* Paranoia.
*/
static int
esp_zeroize(struct secasvar *sav)
{
/* NB: ah_zeroize() frees the crypto session state */
int error = ah_zeroize(sav);
if (sav->key_enc)
bzero(sav->key_enc->key_data, _KEYLEN(sav->key_enc));
sav->tdb_encalgxform = NULL;
sav->tdb_xform = NULL;
return error;
}
/*
* ESP input processing, called (eventually) through the protocol switch.
*/
static int
esp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
{
IPSEC_DEBUG_DECLARE(char buf[128]);
const struct auth_hash *esph;
const struct enc_xform *espx;
struct xform_data *xd;
struct cryptop *crp;
struct newesp *esp;
uint8_t *ivp;
crypto_session_t cryptoid;
int alen, error, hlen, plen;
IPSEC_ASSERT(sav != NULL, ("null SA"));
IPSEC_ASSERT(sav->tdb_encalgxform != NULL, ("null encoding xform"));
error = EINVAL;
/* Valid IP Packet length ? */
if ( (skip&3) || (m->m_pkthdr.len&3) ){
DPRINTF(("%s: misaligned packet, skip %u pkt len %u",
__func__, skip, m->m_pkthdr.len));
ESPSTAT_INC(esps_badilen);
goto bad;
}
if (m->m_len < skip + sizeof(*esp)) {
m = m_pullup(m, skip + sizeof(*esp));
if (m == NULL) {
DPRINTF(("%s: cannot pullup header\n", __func__));
ESPSTAT_INC(esps_hdrops); /*XXX*/
error = ENOBUFS;
goto bad;
}
}
esp = (struct newesp *)(mtod(m, caddr_t) + skip);
esph = sav->tdb_authalgxform;
espx = sav->tdb_encalgxform;
/* Determine the ESP header and auth length */
if (sav->flags & SADB_X_EXT_OLD)
hlen = sizeof (struct esp) + sav->ivlen;
else
hlen = sizeof (struct newesp) + sav->ivlen;
alen = xform_ah_authsize(esph);
/*
* Verify payload length is multiple of encryption algorithm
* block size.
*
* NB: This works for the null algorithm because the blocksize
* is 4 and all packets must be 4-byte aligned regardless
* of the algorithm.
*/
plen = m->m_pkthdr.len - (skip + hlen + alen);
if ((plen & (espx->blocksize - 1)) || (plen <= 0)) {
DPRINTF(("%s: payload of %d octets not a multiple of %d octets,"
" SA %s/%08lx\n", __func__, plen, espx->blocksize,
ipsec_address(&sav->sah->saidx.dst, buf, sizeof(buf)),
(u_long)ntohl(sav->spi)));
ESPSTAT_INC(esps_badilen);
goto bad;
}
/*
* Check sequence number.
*/
SECASVAR_LOCK(sav);
if (esph != NULL && sav->replay != NULL && sav->replay->wsize != 0) {
if (ipsec_chkreplay(ntohl(esp->esp_seq), sav) == 0) {
SECASVAR_UNLOCK(sav);
DPRINTF(("%s: packet replay check for %s\n", __func__,
ipsec_sa2str(sav, buf, sizeof(buf))));
ESPSTAT_INC(esps_replay);
error = EACCES;
goto bad;
}
}
cryptoid = sav->tdb_cryptoid;
SECASVAR_UNLOCK(sav);
/* Update the counters */
ESPSTAT_ADD(esps_ibytes, m->m_pkthdr.len - (skip + hlen + alen));
/* Get crypto descriptors */
crp = crypto_getreq(cryptoid, M_NOWAIT);
if (crp == NULL) {
DPRINTF(("%s: failed to acquire crypto descriptors\n",
__func__));
ESPSTAT_INC(esps_crypto);
error = ENOBUFS;
goto bad;
}
/* Get IPsec-specific opaque pointer */
xd = malloc(sizeof(*xd), M_XDATA, M_NOWAIT | M_ZERO);
if (xd == NULL) {
DPRINTF(("%s: failed to allocate xform_data\n", __func__));
ESPSTAT_INC(esps_crypto);
crypto_freereq(crp);
error = ENOBUFS;
goto bad;
}
if (esph != NULL) {
crp->crp_op = CRYPTO_OP_VERIFY_DIGEST;
crp->crp_aad_start = skip;
if (SAV_ISGCM(sav))
crp->crp_aad_length = 8; /* RFC4106 5, SPI + SN */
else
crp->crp_aad_length = hlen;
crp->crp_digest_start = m->m_pkthdr.len - alen;
}
/* Crypto operation descriptor */
crp->crp_ilen = m->m_pkthdr.len; /* Total input length */
crp->crp_flags = CRYPTO_F_CBIFSYNC;
if (V_async_crypto)
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
crp->crp_mbuf = m;
crp->crp_buf_type = CRYPTO_BUF_MBUF;
crp->crp_callback = esp_input_cb;
crp->crp_opaque = xd;
/* These are passed as-is to the callback */
xd->sav = sav;
xd->protoff = protoff;
xd->skip = skip;
xd->cryptoid = cryptoid;
xd->vnet = curvnet;
/* Decryption descriptor */
crp->crp_op |= CRYPTO_OP_DECRYPT;
crp->crp_payload_start = skip + hlen;
crp->crp_payload_length = m->m_pkthdr.len - (skip + hlen + alen);
if (SAV_ISCTRORGCM(sav)) {
ivp = &crp->crp_iv[0];
/* GCM IV Format: RFC4106 4 */
/* CTR IV Format: RFC3686 4 */
/* Salt is last four bytes of key, RFC4106 8.1 */
/* Nonce is last four bytes of key, RFC3686 5.1 */
memcpy(ivp, sav->key_enc->key_data +
_KEYLEN(sav->key_enc) - 4, 4);
if (SAV_ISCTR(sav)) {
/* Initial block counter is 1, RFC3686 4 */
be32enc(&ivp[sav->ivlen + 4], 1);
}
m_copydata(m, skip + hlen - sav->ivlen, sav->ivlen, &ivp[4]);
crp->crp_flags |= CRYPTO_F_IV_SEPARATE;
} else if (sav->ivlen != 0)
crp->crp_iv_start = skip + hlen - sav->ivlen;
return (crypto_dispatch(crp));
bad:
m_freem(m);
key_freesav(&sav);
return (error);
}
/*
* ESP input callback from the crypto driver.
*/
static int
esp_input_cb(struct cryptop *crp)
{
IPSEC_DEBUG_DECLARE(char buf[128]);
uint8_t lastthree[3];
const struct auth_hash *esph;
struct mbuf *m;
struct xform_data *xd;
struct secasvar *sav;
struct secasindex *saidx;
crypto_session_t cryptoid;
int hlen, skip, protoff, error, alen;
m = crp->crp_mbuf;
xd = crp->crp_opaque;
CURVNET_SET(xd->vnet);
sav = xd->sav;
skip = xd->skip;
protoff = xd->protoff;
cryptoid = xd->cryptoid;
saidx = &sav->sah->saidx;
esph = sav->tdb_authalgxform;
/* Check for crypto errors */
if (crp->crp_etype) {
if (crp->crp_etype == EAGAIN) {
/* Reset the session ID */
if (ipsec_updateid(sav, &crp->crp_session, &cryptoid) != 0)
crypto_freesession(cryptoid);
xd->cryptoid = crp->crp_session;
CURVNET_RESTORE();
return (crypto_dispatch(crp));
}
/* EBADMSG indicates authentication failure. */
if (!(crp->crp_etype == EBADMSG && esph != NULL)) {
ESPSTAT_INC(esps_noxform);
DPRINTF(("%s: crypto error %d\n", __func__,
crp->crp_etype));
error = crp->crp_etype;
goto bad;
}
}
/* Shouldn't happen... */
if (m == NULL) {
ESPSTAT_INC(esps_crypto);
DPRINTF(("%s: bogus returned buffer from crypto\n", __func__));
error = EINVAL;
goto bad;
}
ESPSTAT_INC(esps_hist[sav->alg_enc]);
/* If authentication was performed, check now. */
if (esph != NULL) {
alen = xform_ah_authsize(esph);
AHSTAT_INC(ahs_hist[sav->alg_auth]);
if (crp->crp_etype == EBADMSG) {
DPRINTF(("%s: authentication hash mismatch for "
"packet in SA %s/%08lx\n", __func__,
ipsec_address(&saidx->dst, buf, sizeof(buf)),
(u_long) ntohl(sav->spi)));
ESPSTAT_INC(esps_badauth);
error = EACCES;
goto bad;
}
m->m_flags |= M_AUTHIPDGM;
/* Remove trailing authenticator */
m_adj(m, -alen);
}
/* Release the crypto descriptors */
free(xd, M_XDATA), xd = NULL;
crypto_freereq(crp), crp = NULL;
/*
* Packet is now decrypted.
*/
m->m_flags |= M_DECRYPTED;
/*
* Update replay sequence number, if appropriate.
*/
if (sav->replay) {
u_int32_t seq;
m_copydata(m, skip + offsetof(struct newesp, esp_seq),
sizeof (seq), (caddr_t) &seq);
SECASVAR_LOCK(sav);
if (ipsec_updatereplay(ntohl(seq), sav)) {
SECASVAR_UNLOCK(sav);
DPRINTF(("%s: packet replay check for %s\n", __func__,
ipsec_sa2str(sav, buf, sizeof(buf))));
ESPSTAT_INC(esps_replay);
error = EACCES;
goto bad;
}
SECASVAR_UNLOCK(sav);
}
/* Determine the ESP header length */
if (sav->flags & SADB_X_EXT_OLD)
hlen = sizeof (struct esp) + sav->ivlen;
else
hlen = sizeof (struct newesp) + sav->ivlen;
/* Remove the ESP header and IV from the mbuf. */
error = m_striphdr(m, skip, hlen);
if (error) {
ESPSTAT_INC(esps_hdrops);
DPRINTF(("%s: bad mbuf chain, SA %s/%08lx\n", __func__,
ipsec_address(&sav->sah->saidx.dst, buf, sizeof(buf)),
(u_long) ntohl(sav->spi)));
goto bad;
}
/* Save the last three bytes of decrypted data */
m_copydata(m, m->m_pkthdr.len - 3, 3, lastthree);
/* Verify pad length */
if (lastthree[1] + 2 > m->m_pkthdr.len - skip) {
ESPSTAT_INC(esps_badilen);
DPRINTF(("%s: invalid padding length %d for %u byte packet "
"in SA %s/%08lx\n", __func__, lastthree[1],
m->m_pkthdr.len - skip,
ipsec_address(&sav->sah->saidx.dst, buf, sizeof(buf)),
(u_long) ntohl(sav->spi)));
error = EINVAL;
goto bad;
}
/* Verify correct decryption by checking the last padding bytes */
if ((sav->flags & SADB_X_EXT_PMASK) != SADB_X_EXT_PRAND) {
if (lastthree[1] != lastthree[0] && lastthree[1] != 0) {
ESPSTAT_INC(esps_badenc);
DPRINTF(("%s: decryption failed for packet in "
"SA %s/%08lx\n", __func__, ipsec_address(
&sav->sah->saidx.dst, buf, sizeof(buf)),
(u_long) ntohl(sav->spi)));
error = EINVAL;
goto bad;
}
}
/*
* RFC4303 2.6:
* Silently drop packet if next header field is IPPROTO_NONE.
*/
if (lastthree[2] == IPPROTO_NONE)
goto bad;
/* Trim the mbuf chain to remove trailing authenticator and padding */
m_adj(m, -(lastthree[1] + 2));
/* Restore the Next Protocol field */
m_copyback(m, protoff, sizeof (u_int8_t), lastthree + 2);
switch (saidx->dst.sa.sa_family) {
#ifdef INET6
case AF_INET6:
error = ipsec6_common_input_cb(m, sav, skip, protoff);
break;
#endif
#ifdef INET
case AF_INET:
error = ipsec4_common_input_cb(m, sav, skip, protoff);
break;
#endif
default:
panic("%s: Unexpected address family: %d saidx=%p", __func__,
saidx->dst.sa.sa_family, saidx);
}
CURVNET_RESTORE();
return error;
bad:
CURVNET_RESTORE();
if (sav != NULL)
key_freesav(&sav);
if (m != NULL)
m_freem(m);
if (xd != NULL)
free(xd, M_XDATA);
if (crp != NULL)
crypto_freereq(crp);
return error;
}
/*
* ESP output routine, called by ipsec[46]_perform_request().
*/
static int
esp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
u_int idx, int skip, int protoff)
{
IPSEC_DEBUG_DECLARE(char buf[IPSEC_ADDRSTRLEN]);
struct cryptop *crp;
const struct auth_hash *esph;
const struct enc_xform *espx;
struct mbuf *mo = NULL;
struct xform_data *xd;
struct secasindex *saidx;
unsigned char *pad;
uint8_t *ivp;
uint64_t cntr;
crypto_session_t cryptoid;
int hlen, rlen, padding, blks, alen, i, roff;
int error, maxpacketsize;
uint8_t prot;
IPSEC_ASSERT(sav != NULL, ("null SA"));
esph = sav->tdb_authalgxform;
espx = sav->tdb_encalgxform;
IPSEC_ASSERT(espx != NULL, ("null encoding xform"));
if (sav->flags & SADB_X_EXT_OLD)
hlen = sizeof (struct esp) + sav->ivlen;
else
hlen = sizeof (struct newesp) + sav->ivlen;
rlen = m->m_pkthdr.len - skip; /* Raw payload length. */
/*
* RFC4303 2.4 Requires 4 byte alignment.
*/
blks = MAX(4, espx->blocksize); /* Cipher blocksize */
/* XXX clamp padding length a la KAME??? */
padding = ((blks - ((rlen + 2) % blks)) % blks) + 2;
alen = xform_ah_authsize(esph);
ESPSTAT_INC(esps_output);
saidx = &sav->sah->saidx;
/* Check for maximum packet size violations. */
switch (saidx->dst.sa.sa_family) {
#ifdef INET
case AF_INET:
maxpacketsize = IP_MAXPACKET;
break;
#endif /* INET */
#ifdef INET6
case AF_INET6:
maxpacketsize = IPV6_MAXPACKET;
break;
#endif /* INET6 */
default:
DPRINTF(("%s: unknown/unsupported protocol "
"family %d, SA %s/%08lx\n", __func__,
saidx->dst.sa.sa_family, ipsec_address(&saidx->dst,
buf, sizeof(buf)), (u_long) ntohl(sav->spi)));
ESPSTAT_INC(esps_nopf);
error = EPFNOSUPPORT;
goto bad;
}
/*
DPRINTF(("%s: skip %d hlen %d rlen %d padding %d alen %d blksd %d\n",
__func__, skip, hlen, rlen, padding, alen, blks)); */
if (skip + hlen + rlen + padding + alen > maxpacketsize) {
DPRINTF(("%s: packet in SA %s/%08lx got too big "
"(len %u, max len %u)\n", __func__,
ipsec_address(&saidx->dst, buf, sizeof(buf)),
(u_long) ntohl(sav->spi),
skip + hlen + rlen + padding + alen, maxpacketsize));
ESPSTAT_INC(esps_toobig);
error = EMSGSIZE;
goto bad;
}
/* Update the counters. */
ESPSTAT_ADD(esps_obytes, m->m_pkthdr.len - skip);
m = m_unshare(m, M_NOWAIT);
if (m == NULL) {
DPRINTF(("%s: cannot clone mbuf chain, SA %s/%08lx\n", __func__,
ipsec_address(&saidx->dst, buf, sizeof(buf)),
(u_long) ntohl(sav->spi)));
ESPSTAT_INC(esps_hdrops);
error = ENOBUFS;
goto bad;
}
/* Inject ESP header. */
mo = m_makespace(m, skip, hlen, &roff);
if (mo == NULL) {
DPRINTF(("%s: %u byte ESP hdr inject failed for SA %s/%08lx\n",
__func__, hlen, ipsec_address(&saidx->dst, buf,
sizeof(buf)), (u_long) ntohl(sav->spi)));
ESPSTAT_INC(esps_hdrops); /* XXX diffs from openbsd */
error = ENOBUFS;
goto bad;
}
/* Initialize ESP header. */
bcopy((caddr_t) &sav->spi, mtod(mo, caddr_t) + roff,
sizeof(uint32_t));
SECASVAR_LOCK(sav);
if (sav->replay) {
uint32_t replay;
#ifdef REGRESSION
/* Emulate replay attack when ipsec_replay is TRUE. */
if (!V_ipsec_replay)
#endif
sav->replay->count++;
replay = htonl(sav->replay->count);
bcopy((caddr_t) &replay, mtod(mo, caddr_t) + roff +
sizeof(uint32_t), sizeof(uint32_t));
}
cryptoid = sav->tdb_cryptoid;
if (SAV_ISCTRORGCM(sav))
cntr = sav->cntr++;
SECASVAR_UNLOCK(sav);
/*
* Add padding -- better to do it ourselves than use the crypto engine,
* although if/when we support compression, we'd have to do that.
*/
pad = (u_char *) m_pad(m, padding + alen);
if (pad == NULL) {
DPRINTF(("%s: m_pad failed for SA %s/%08lx\n", __func__,
ipsec_address(&saidx->dst, buf, sizeof(buf)),
(u_long) ntohl(sav->spi)));
m = NULL; /* NB: free'd by m_pad */
error = ENOBUFS;
goto bad;
}
/*
* Add padding: random, zero, or self-describing.
* XXX catch unexpected setting
*/
switch (sav->flags & SADB_X_EXT_PMASK) {
case SADB_X_EXT_PRAND:
arc4random_buf(pad, padding - 2);
break;
case SADB_X_EXT_PZERO:
bzero(pad, padding - 2);
break;
case SADB_X_EXT_PSEQ:
for (i = 0; i < padding - 2; i++)
pad[i] = i+1;
break;
}
/* Fix padding length and Next Protocol in padding itself. */
pad[padding - 2] = padding - 2;
m_copydata(m, protoff, sizeof(u_int8_t), pad + padding - 1);
/* Fix Next Protocol in IPv4/IPv6 header. */
prot = IPPROTO_ESP;
m_copyback(m, protoff, sizeof(u_int8_t), (u_char *) &prot);
/* Get crypto descriptor. */
crp = crypto_getreq(cryptoid, M_NOWAIT);
if (crp == NULL) {
DPRINTF(("%s: failed to acquire crypto descriptor\n",
__func__));
ESPSTAT_INC(esps_crypto);
error = ENOBUFS;
goto bad;
}
/* IPsec-specific opaque crypto info. */
xd = malloc(sizeof(struct xform_data), M_XDATA, M_NOWAIT | M_ZERO);
if (xd == NULL) {
crypto_freereq(crp);
DPRINTF(("%s: failed to allocate xform_data\n", __func__));
ESPSTAT_INC(esps_crypto);
error = ENOBUFS;
goto bad;
}
/* Encryption descriptor. */
crp->crp_payload_start = skip + hlen;
crp->crp_payload_length = m->m_pkthdr.len - (skip + hlen + alen);
crp->crp_op = CRYPTO_OP_ENCRYPT;
/* Encryption operation. */
if (SAV_ISCTRORGCM(sav)) {
ivp = &crp->crp_iv[0];
/* GCM IV Format: RFC4106 4 */
/* CTR IV Format: RFC3686 4 */
/* Salt is last four bytes of key, RFC4106 8.1 */
/* Nonce is last four bytes of key, RFC3686 5.1 */
memcpy(ivp, sav->key_enc->key_data +
_KEYLEN(sav->key_enc) - 4, 4);
be64enc(&ivp[4], cntr);
if (SAV_ISCTR(sav)) {
/* Initial block counter is 1, RFC3686 4 */
/* XXXAE: should we use this only for first packet? */
be32enc(&ivp[sav->ivlen + 4], 1);
}
m_copyback(m, skip + hlen - sav->ivlen, sav->ivlen, &ivp[4]);
crp->crp_flags |= CRYPTO_F_IV_SEPARATE;
} else if (sav->ivlen != 0) {
crp->crp_iv_start = skip + hlen - sav->ivlen;
crp->crp_flags |= CRYPTO_F_IV_GENERATE;
}
/* Callback parameters */
xd->sp = sp;
xd->sav = sav;
xd->idx = idx;
xd->cryptoid = cryptoid;
xd->vnet = curvnet;
/* Crypto operation descriptor. */
crp->crp_ilen = m->m_pkthdr.len; /* Total input length. */
crp->crp_flags |= CRYPTO_F_CBIFSYNC;
if (V_async_crypto)
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
crp->crp_mbuf = m;
crp->crp_buf_type = CRYPTO_BUF_MBUF;
crp->crp_callback = esp_output_cb;
crp->crp_opaque = xd;
if (esph) {
/* Authentication descriptor. */
crp->crp_op |= CRYPTO_OP_COMPUTE_DIGEST;
crp->crp_aad_start = skip;
if (SAV_ISGCM(sav))
crp->crp_aad_length = 8; /* RFC4106 5, SPI + SN */
else
crp->crp_aad_length = hlen;
crp->crp_digest_start = m->m_pkthdr.len - alen;
}
return crypto_dispatch(crp);
bad:
if (m)
m_freem(m);
key_freesav(&sav);
key_freesp(&sp);
return (error);
}
/*
* ESP output callback from the crypto driver.
*/
static int
esp_output_cb(struct cryptop *crp)
{
struct xform_data *xd;
struct secpolicy *sp;
struct secasvar *sav;
struct mbuf *m;
crypto_session_t cryptoid;
u_int idx;
int error;
xd = (struct xform_data *) crp->crp_opaque;
CURVNET_SET(xd->vnet);
m = (struct mbuf *) crp->crp_buf;
sp = xd->sp;
sav = xd->sav;
idx = xd->idx;
cryptoid = xd->cryptoid;
/* Check for crypto errors. */
if (crp->crp_etype) {
if (crp->crp_etype == EAGAIN) {
/* Reset the session ID */
if (ipsec_updateid(sav, &crp->crp_session, &cryptoid) != 0)
crypto_freesession(cryptoid);
xd->cryptoid = crp->crp_session;
CURVNET_RESTORE();
return (crypto_dispatch(crp));
}
ESPSTAT_INC(esps_noxform);
DPRINTF(("%s: crypto error %d\n", __func__, crp->crp_etype));
error = crp->crp_etype;
m_freem(m);
goto bad;
}
/* Shouldn't happen... */
if (m == NULL) {
ESPSTAT_INC(esps_crypto);
DPRINTF(("%s: bogus returned buffer from crypto\n", __func__));
error = EINVAL;
goto bad;
}
free(xd, M_XDATA);
crypto_freereq(crp);
ESPSTAT_INC(esps_hist[sav->alg_enc]);
if (sav->tdb_authalgxform != NULL)
AHSTAT_INC(ahs_hist[sav->alg_auth]);
#ifdef REGRESSION
/* Emulate man-in-the-middle attack when ipsec_integrity is TRUE. */
if (V_ipsec_integrity) {
static unsigned char ipseczeroes[AH_HMAC_MAXHASHLEN];
const struct auth_hash *esph;
/*
* Corrupt HMAC if we want to test integrity verification of
* the other side.
*/
esph = sav->tdb_authalgxform;
if (esph != NULL) {
int alen;
alen = xform_ah_authsize(esph);
m_copyback(m, m->m_pkthdr.len - alen,
alen, ipseczeroes);
}
}
#endif
/* NB: m is reclaimed by ipsec_process_done. */
error = ipsec_process_done(m, sp, sav, idx);
CURVNET_RESTORE();
return (error);
bad:
CURVNET_RESTORE();
free(xd, M_XDATA);
crypto_freereq(crp);
key_freesav(&sav);
key_freesp(&sp);
return (error);
}
static struct xformsw esp_xformsw = {
.xf_type = XF_ESP,
.xf_name = "IPsec ESP",
.xf_init = esp_init,
.xf_zeroize = esp_zeroize,
.xf_input = esp_input,
.xf_output = esp_output,
};
SYSINIT(esp_xform_init, SI_SUB_PROTO_DOMAIN, SI_ORDER_MIDDLE,
xform_attach, &esp_xformsw);
SYSUNINIT(esp_xform_uninit, SI_SUB_PROTO_DOMAIN, SI_ORDER_MIDDLE,
xform_detach, &esp_xformsw);