c034143269
- The linked list of cryptoini structures used in session
  initialization is replaced with a new flat structure: struct
  crypto_session_params.  This structure includes a new mode field
  (csp_mode) that defines how the other fields should be interpreted.
  Available modes include:

  - COMPRESS (for compression/decompression)
  - CIPHER (for simple encryption/decryption)
  - DIGEST (computing and verifying digests)
  - AEAD (combined auth and encryption such as AES-GCM and AES-CCM)
  - ETA (combined auth and encryption using encrypt-then-authenticate)

  Additional modes could be added in the future (e.g. if we wanted to
  support TLS MtE for AES-CBC in the kernel we could add a new mode
  for that; TLS modes might also affect how AAD is interpreted, etc.).

  The flat structure also includes the key lengths and algorithms as
  before.  However, code doesn't have to walk the linked list and
  switch on the algorithm to determine which key is the auth key vs
  the encryption key: the 'csp_auth_*' fields are always used for
  auth keys and settings and 'csp_cipher_*' for cipher.  (Compression
  algorithms are stored in csp_cipher_alg.)

- Drivers no longer register a list of supported algorithms.  This
  doesn't quite work when you factor in modes (e.g. a driver might
  support both AES-CBC and SHA2-256-HMAC separately but not combined
  for ETA).  Instead, a new 'crypto_probesession' method has been
  added to the kobj interface for symmetric crypto drivers.  This
  method returns a negative value on success (similar to how
  device_probe works) and the crypto framework uses this value to
  pick the "best" driver.  There are three constants for hardware
  (e.g. ccr), accelerated software (e.g. aesni), and plain software
  (cryptosoft) that give preference in that order.  One effect of
  this is that if you request only hardware when creating a new
  session, you will no longer get a session using accelerated
  software.  Another effect is that the default setting to disallow
  software crypto via /dev/crypto now disables accelerated software.

  Once a driver is chosen, 'crypto_newsession' is invoked as before.
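  For illustration, session setup under the new interface looks
  roughly like the following sketch (hypothetical consumer code, not
  part of this commit; the struct fields and constants are from the
  new API, error handling elided):

	static int
	example_newsession(crypto_session_t *csesp, const void *ckey,
	    int cklen, const void *akey, int aklen)
	{
		struct crypto_session_params csp;

		memset(&csp, 0, sizeof(csp));
		csp.csp_mode = CSP_MODE_ETA;	/* AES-CBC + HMAC-SHA2-256 */
		csp.csp_cipher_alg = CRYPTO_AES_CBC;
		csp.csp_cipher_key = ckey;	/* must outlive the session */
		csp.csp_cipher_klen = cklen;
		csp.csp_ivlen = AES_BLOCK_LEN;
		csp.csp_auth_alg = CRYPTO_SHA2_256_HMAC;
		csp.csp_auth_key = akey;
		csp.csp_auth_klen = aklen;

		/*
		 * Each driver bids via its crypto_probesession method;
		 * the framework prefers hardware, then accelerated
		 * software, then plain software.
		 */
		return (crypto_newsession(csesp, &csp,
		    CRYPTOCAP_F_HARDWARE | CRYPTOCAP_F_SOFTWARE));
	}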
- Crypto operations are now solely described by the flat 'cryptop'
  structure.  The linked list of descriptors has been removed.

  A separate enum has been added to describe the type of data buffer
  in use instead of using CRYPTO_F_* flags to make it easier to add
  more types in the future if needed (e.g. wired userspace buffers
  for zero-copy).  It will also make it easier to re-introduce
  separate input and output buffers (in-kernel TLS would benefit from
  this).

  Try to make the flags related to IV handling less insane:

  - CRYPTO_F_IV_SEPARATE means that the IV is stored in the 'crp_iv'
    member of the operation structure.  If this flag is not set, the
    IV is stored in the data buffer at the 'crp_iv_start' offset.

  - CRYPTO_F_IV_GENERATE means that a random IV should be generated
    and stored into the data buffer.  This cannot be used with
    CRYPTO_F_IV_SEPARATE.

  If a consumer wants to deal with explicit vs implicit IVs, etc. it
  can always generate the IV however it needs and store partial IVs
  in the buffer and the full IV/nonce in crp_iv and set
  CRYPTO_F_IV_SEPARATE.

  The layout of the buffer is now described via fields in cryptop.
  crp_aad_start and crp_aad_length define the boundaries of any AAD.
  Previously with GCM and CCM you defined an auth crd with this
  range, but for ETA your auth crd had to span both the AAD and
  plaintext (and they had to be adjacent).

  crp_payload_start and crp_payload_length define the boundaries of
  the plaintext/ciphertext.  Modes that only do a single operation
  (COMPRESS, CIPHER, DIGEST) should only use this region and leave
  the AAD region empty.

  If a digest is present (or should be generated), its starting
  location is marked by crp_digest_start.

  Instead of using the CRD_F_ENCRYPT flag to determine the direction
  of the operation, cryptop now includes an 'op' field defining the
  operation to perform.  For digests I've added a new VERIFY digest
  mode which assumes a digest is present in the input and fails the
  request with EBADMSG if it doesn't match the internally-computed
  digest.  GCM and CCM already assumed this, and the new AEAD mode
  requires this for decryption.  The new ETA mode now also requires
  this for decryption, so IPsec and GELI no longer do their own
  authentication verification.  Simple DIGEST operations can also do
  this, though there are no in-tree consumers.

  To eventually support some refcounting to close races, the session
  cookie is now passed to crypto_getreq() and clients should no
  longer set crp_session directly.
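  To make the request layout concrete, here is a matching sketch of
  an ETA encrypt request over a contiguous buffer laid out as
  [AAD][payload][digest] (again hypothetical consumer code;
  'example_done' is a placeholder completion callback):

	static int example_done(struct cryptop *crp);

	static int
	example_encrypt(crypto_session_t cses, char *buf, int aadlen,
	    int paylen, int taglen, const char *iv, int ivlen)
	{
		struct cryptop *crp;

		/* Allocate against the session cookie. */
		crp = crypto_getreq(cses, M_WAITOK);
		crp->crp_op = CRYPTO_OP_ENCRYPT | CRYPTO_OP_COMPUTE_DIGEST;
		crp->crp_flags = CRYPTO_F_IV_SEPARATE;	/* IV lives in crp_iv */
		memcpy(crp->crp_iv, iv, ivlen);

		crp->crp_buf_type = CRYPTO_BUF_CONTIG;
		crp->crp_buf = buf;
		crp->crp_ilen = aadlen + paylen + taglen;
		crp->crp_aad_start = 0;
		crp->crp_aad_length = aadlen;
		crp->crp_payload_start = aadlen;
		crp->crp_payload_length = paylen;
		crp->crp_digest_start = aadlen + paylen;

		crp->crp_callback = example_done;
		return (crypto_dispatch(crp));
	}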
- Asymmetric crypto operation structures should be allocated via
  crypto_getkreq() and freed via crypto_freekreq().  This permits the
  crypto layer to track open asym requests and close races with a
  driver trying to unregister while asym requests are in flight.

- crypto_copyback, crypto_copydata, crypto_apply, and
  crypto_contiguous_subsegment now accept the 'crp' object as the
  first parameter instead of individual members.  This makes it
  easier to deal with different buffer types in the future as well as
  separate input and output buffers.  It's also simpler for driver
  writers to use.

- bus_dmamap_load_crp() loads a DMA mapping for a crypto buffer.
  This understands the various types of buffers so that drivers that
  use DMA do not have to be aware of different buffer types.

- Helper routines now exist to build an auth context for HMAC IPAD
  and OPAD.  This reduces some duplicated work among drivers.

- Key buffers are now treated as const throughout the framework and
  in device drivers.  However, session key buffers provided when a
  session is created are expected to remain alive for the duration of
  the session.

- GCM and CCM sessions now only specify a cipher algorithm and a
  cipher key.  The redundant auth information is not needed or used.

- For cryptosoft, split up the code a bit such that the 'process'
  callback now invokes a function pointer in the session.  This
  function pointer is set based on the mode (in effect) though it
  simplifies a few edge cases that would otherwise be in the switch
  in 'process'.  It does split up GCM vs CCM which I think is more
  readable even if there is some duplication.

- I changed /dev/crypto to support GMAC requests using
  CRYPTO_AES_NIST_GMAC as an auth algorithm and updated cryptocheck
  to work with it.

- Combined cipher and auth sessions via /dev/crypto now always use
  ETA mode.  The COP_F_CIPHER_FIRST flag is now a no-op that is
  ignored.  This was actually documented as being true in crypto(4)
  before, but the code had not implemented this before I added the
  CIPHER_FIRST flag.

- I have not yet updated /dev/crypto to be aware of explicit modes
  for sessions.  I will probably do that at some point in the future
  as well as teach it about IV/nonce and tag lengths for AEAD so we
  can support all of the NIST KAT tests for GCM and CCM.

- I've split up the existing crypto.9 manpage into several pages of
  which many are written from scratch.

- I have converted all drivers and consumers in the tree and verified
  that they compile, but I have not tested all of them.

I have tested the following drivers:

- cryptosoft
- aesni (AES only)
- blake2
- ccr

and the following consumers:

- cryptodev
- IPsec
- ktls_ocf
- GELI (lightly)

I have not tested the following:

- ccp
- aesni with sha
- hifn
- kgssapi_krb5
- ubsec
- padlock
- safe
- armv8_crypto (aarch64)
- glxsb (i386)
- sec (ppc)
- cesa (armv7)
- cryptocteon (mips64)
- nlmsec (mips64)

Discussed with:	cem
Relnotes:	yes
Sponsored by:	Chelsio Communications
Differential Revision:	https://reviews.freebsd.org/D23677
ccp.c (856 lines, 19 KiB, C):
/*-
 * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
 *
 * Copyright (c) 2017 Chelsio Communications, Inc.
 * Copyright (c) 2017 Conrad Meyer <cem@FreeBSD.org>
 * All rights reserved.
 * Largely borrowed from ccr(4), Written by: John Baldwin <jhb@FreeBSD.org>
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

#include "opt_ddb.h"

#include <sys/param.h>
#include <sys/bus.h>
#include <sys/lock.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/module.h>
#include <sys/random.h>
#include <sys/sglist.h>
#include <sys/sysctl.h>

#ifdef DDB
#include <ddb/ddb.h>
#endif

#include <dev/pci/pcivar.h>

#include <dev/random/randomdev.h>

#include <opencrypto/cryptodev.h>
#include <opencrypto/xform.h>

#include "cryptodev_if.h"

#include "ccp.h"
#include "ccp_hardware.h"

MALLOC_DEFINE(M_CCP, "ccp", "AMD CCP crypto");

/*
 * Need a global softc available for garbage random_source API, which lacks any
 * context pointer.  It's also handy for debugging.
 */
struct ccp_softc *g_ccp_softc;

bool g_debug_print = false;
SYSCTL_BOOL(_hw_ccp, OID_AUTO, debug, CTLFLAG_RWTUN, &g_debug_print, 0,
    "Set to enable debugging log messages");

static struct pciid {
	uint32_t devid;
	const char *desc;
} ccp_ids[] = {
	{ 0x14561022, "AMD CCP-5a" },
	{ 0x14681022, "AMD CCP-5b" },
};

static struct random_source random_ccp = {
	.rs_ident = "AMD CCP TRNG",
	.rs_source = RANDOM_PURE_CCP,
	.rs_read = random_ccp_read,
};

/*
 * ccp_populate_sglist() generates a scatter/gather list that covers the entire
 * crypto operation buffer.
 */
static int
ccp_populate_sglist(struct sglist *sg, struct cryptop *crp)
{
	int error;

	sglist_reset(sg);
	switch (crp->crp_buf_type) {
	case CRYPTO_BUF_MBUF:
		error = sglist_append_mbuf(sg, crp->crp_mbuf);
		break;
	case CRYPTO_BUF_UIO:
		error = sglist_append_uio(sg, crp->crp_uio);
		break;
	case CRYPTO_BUF_CONTIG:
		error = sglist_append(sg, crp->crp_buf, crp->crp_ilen);
		break;
	default:
		error = EINVAL;
	}
	return (error);
}

/*
 * Handle a GCM request with an empty payload by performing the
 * operation in software.
 */
static void
ccp_gcm_soft(struct ccp_session *s, struct cryptop *crp)
{
	struct aes_gmac_ctx gmac_ctx;
	char block[GMAC_BLOCK_LEN];
	char digest[GMAC_DIGEST_LEN];
	char iv[AES_BLOCK_LEN];
	int i, len;

	/*
	 * This assumes a 12-byte IV from the crp.  See longer comment
	 * above in ccp_gcm() for more details.
	 */
	if ((crp->crp_flags & CRYPTO_F_IV_SEPARATE) == 0) {
		crp->crp_etype = EINVAL;
		goto out;
	}
	memcpy(iv, crp->crp_iv, 12);
	*(uint32_t *)&iv[12] = htobe32(1);

	/* Initialize the MAC. */
	AES_GMAC_Init(&gmac_ctx);
	AES_GMAC_Setkey(&gmac_ctx, s->blkcipher.enckey, s->blkcipher.key_len);
	AES_GMAC_Reinit(&gmac_ctx, iv, sizeof(iv));

	/* MAC the AAD. */
	for (i = 0; i < crp->crp_aad_length; i += sizeof(block)) {
		len = imin(crp->crp_aad_length - i, sizeof(block));
		crypto_copydata(crp, crp->crp_aad_start + i, len, block);
		bzero(block + len, sizeof(block) - len);
		AES_GMAC_Update(&gmac_ctx, block, sizeof(block));
	}

	/* Length block. */
	bzero(block, sizeof(block));
	((uint32_t *)block)[1] = htobe32(crp->crp_aad_length * 8);
	AES_GMAC_Update(&gmac_ctx, block, sizeof(block));
	AES_GMAC_Final(digest, &gmac_ctx);

	if (CRYPTO_OP_IS_ENCRYPT(crp->crp_op)) {
		crypto_copyback(crp, crp->crp_digest_start, sizeof(digest),
		    digest);
		crp->crp_etype = 0;
	} else {
		char digest2[GMAC_DIGEST_LEN];

		crypto_copydata(crp, crp->crp_digest_start, sizeof(digest2),
		    digest2);
		if (timingsafe_bcmp(digest, digest2, sizeof(digest)) == 0)
			crp->crp_etype = 0;
		else
			crp->crp_etype = EBADMSG;
	}
out:
	crypto_done(crp);
}

static int
ccp_probe(device_t dev)
{
	struct pciid *ip;
	uint32_t id;

	id = pci_get_devid(dev);
	for (ip = ccp_ids; ip < &ccp_ids[nitems(ccp_ids)]; ip++) {
		if (id == ip->devid) {
			device_set_desc(dev, ip->desc);
			return (0);
		}
	}
	return (ENXIO);
}

static void
ccp_initialize_queues(struct ccp_softc *sc)
{
	struct ccp_queue *qp;
	size_t i;

	for (i = 0; i < nitems(sc->queues); i++) {
		qp = &sc->queues[i];

		qp->cq_softc = sc;
		qp->cq_qindex = i;
		mtx_init(&qp->cq_lock, "ccp queue", NULL, MTX_DEF);
		/* XXX - arbitrarily chosen sizes */
		qp->cq_sg_crp = sglist_alloc(32, M_WAITOK);
		/* Two more SGEs than sg_crp to accommodate ipad. */
		qp->cq_sg_ulptx = sglist_alloc(34, M_WAITOK);
		qp->cq_sg_dst = sglist_alloc(2, M_WAITOK);
	}
}

static void
ccp_free_queues(struct ccp_softc *sc)
{
	struct ccp_queue *qp;
	size_t i;

	for (i = 0; i < nitems(sc->queues); i++) {
		qp = &sc->queues[i];

		mtx_destroy(&qp->cq_lock);
		sglist_free(qp->cq_sg_crp);
		sglist_free(qp->cq_sg_ulptx);
		sglist_free(qp->cq_sg_dst);
	}
}

static int
ccp_attach(device_t dev)
{
	struct ccp_softc *sc;
	int error;

	sc = device_get_softc(dev);
	sc->dev = dev;

	sc->cid = crypto_get_driverid(dev, sizeof(struct ccp_session),
	    CRYPTOCAP_F_HARDWARE);
	if (sc->cid < 0) {
		device_printf(dev, "could not get crypto driver id\n");
		return (ENXIO);
	}

	error = ccp_hw_attach(dev);
	if (error != 0)
		return (error);

	mtx_init(&sc->lock, "ccp", NULL, MTX_DEF);

	ccp_initialize_queues(sc);

	if (g_ccp_softc == NULL) {
		g_ccp_softc = sc;
		if ((sc->hw_features & VERSION_CAP_TRNG) != 0)
			random_source_register(&random_ccp);
	}

	return (0);
}

static int
ccp_detach(device_t dev)
{
	struct ccp_softc *sc;

	sc = device_get_softc(dev);

	mtx_lock(&sc->lock);
	sc->detaching = true;
	mtx_unlock(&sc->lock);

	crypto_unregister_all(sc->cid);
	if (g_ccp_softc == sc && (sc->hw_features & VERSION_CAP_TRNG) != 0)
		random_source_deregister(&random_ccp);

	ccp_hw_detach(dev);
	ccp_free_queues(sc);

	if (g_ccp_softc == sc)
		g_ccp_softc = NULL;

	mtx_destroy(&sc->lock);
	return (0);
}

static void
ccp_init_hmac_digest(struct ccp_session *s, const char *key, int klen)
{
	union authctx auth_ctx;
	struct auth_hash *axf;
	u_int i;

	/*
	 * If the key is larger than the block size, use the digest of
	 * the key as the key instead.
	 */
	axf = s->hmac.auth_hash;
	if (klen > axf->blocksize) {
		axf->Init(&auth_ctx);
		axf->Update(&auth_ctx, key, klen);
		axf->Final(s->hmac.ipad, &auth_ctx);
		explicit_bzero(&auth_ctx, sizeof(auth_ctx));
		klen = axf->hashsize;
	} else
		memcpy(s->hmac.ipad, key, klen);

	memset(s->hmac.ipad + klen, 0, axf->blocksize - klen);
	memcpy(s->hmac.opad, s->hmac.ipad, axf->blocksize);

	for (i = 0; i < axf->blocksize; i++) {
		s->hmac.ipad[i] ^= HMAC_IPAD_VAL;
		s->hmac.opad[i] ^= HMAC_OPAD_VAL;
	}
}

static bool
ccp_aes_check_keylen(int alg, int klen)
{

	switch (klen * 8) {
	case 128:
	case 192:
		if (alg == CRYPTO_AES_XTS)
			return (false);
		break;
	case 256:
		break;
	case 512:
		if (alg != CRYPTO_AES_XTS)
			return (false);
		break;
	default:
		return (false);
	}
	return (true);
}

static void
ccp_aes_setkey(struct ccp_session *s, int alg, const void *key, int klen)
{
	unsigned kbits;

	if (alg == CRYPTO_AES_XTS)
		kbits = (klen / 2) * 8;
	else
		kbits = klen * 8;

	switch (kbits) {
	case 128:
		s->blkcipher.cipher_type = CCP_AES_TYPE_128;
		break;
	case 192:
		s->blkcipher.cipher_type = CCP_AES_TYPE_192;
		break;
	case 256:
		s->blkcipher.cipher_type = CCP_AES_TYPE_256;
		break;
	default:
		panic("should not get here");
	}

	s->blkcipher.key_len = klen;
	memcpy(s->blkcipher.enckey, key, s->blkcipher.key_len);
}

static bool
ccp_auth_supported(struct ccp_softc *sc,
    const struct crypto_session_params *csp)
{

	if ((sc->hw_features & VERSION_CAP_SHA) == 0)
		return (false);
	switch (csp->csp_auth_alg) {
	case CRYPTO_SHA1_HMAC:
	case CRYPTO_SHA2_256_HMAC:
	case CRYPTO_SHA2_384_HMAC:
	case CRYPTO_SHA2_512_HMAC:
		if (csp->csp_auth_key == NULL)
			return (false);
		break;
	default:
		return (false);
	}
	return (true);
}

static bool
ccp_cipher_supported(struct ccp_softc *sc,
    const struct crypto_session_params *csp)
{

	if ((sc->hw_features & VERSION_CAP_AES) == 0)
		return (false);
	switch (csp->csp_cipher_alg) {
	case CRYPTO_AES_CBC:
		if (csp->csp_ivlen != AES_BLOCK_LEN)
			return (false);
		break;
	case CRYPTO_AES_ICM:
		if (csp->csp_ivlen != AES_BLOCK_LEN)
			return (false);
		break;
	case CRYPTO_AES_XTS:
		if (csp->csp_ivlen != AES_XTS_IV_LEN)
			return (false);
		break;
	default:
		return (false);
	}
	return (ccp_aes_check_keylen(csp->csp_cipher_alg,
	    csp->csp_cipher_klen));
}

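/*
 * Session creation begins with a probe: the framework invokes this method
 * on each candidate driver and uses the returned value (negative on
 * success, as with device_probe) to pick the best driver.
 */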
static int
ccp_probesession(device_t dev, const struct crypto_session_params *csp)
{
	struct ccp_softc *sc;

	if (csp->csp_flags != 0)
		return (EINVAL);
	sc = device_get_softc(dev);
	switch (csp->csp_mode) {
	case CSP_MODE_DIGEST:
		if (!ccp_auth_supported(sc, csp))
			return (EINVAL);
		break;
	case CSP_MODE_CIPHER:
		if (!ccp_cipher_supported(sc, csp))
			return (EINVAL);
		break;
	case CSP_MODE_AEAD:
		switch (csp->csp_cipher_alg) {
		case CRYPTO_AES_NIST_GCM_16:
			if (csp->csp_ivlen != AES_GCM_IV_LEN)
				return (EINVAL);
			if (csp->csp_auth_mlen < 0 ||
			    csp->csp_auth_mlen > AES_GMAC_HASH_LEN)
				return (EINVAL);
			if ((sc->hw_features & VERSION_CAP_AES) == 0)
				return (EINVAL);
			break;
		default:
			return (EINVAL);
		}
		break;
	case CSP_MODE_ETA:
		if (!ccp_auth_supported(sc, csp) ||
		    !ccp_cipher_supported(sc, csp))
			return (EINVAL);
		break;
	default:
		return (EINVAL);
	}

	return (CRYPTODEV_PROBE_HARDWARE);
}

static int
ccp_newsession(device_t dev, crypto_session_t cses,
    const struct crypto_session_params *csp)
{
	struct ccp_softc *sc;
	struct ccp_session *s;
	struct auth_hash *auth_hash;
	enum ccp_aes_mode cipher_mode;
	unsigned auth_mode;
	unsigned q;

	/* XXX reconcile auth_mode with use by ccp_sha */
	switch (csp->csp_auth_alg) {
	case CRYPTO_SHA1_HMAC:
		auth_hash = &auth_hash_hmac_sha1;
		auth_mode = SHA1;
		break;
	case CRYPTO_SHA2_256_HMAC:
		auth_hash = &auth_hash_hmac_sha2_256;
		auth_mode = SHA2_256;
		break;
	case CRYPTO_SHA2_384_HMAC:
		auth_hash = &auth_hash_hmac_sha2_384;
		auth_mode = SHA2_384;
		break;
	case CRYPTO_SHA2_512_HMAC:
		auth_hash = &auth_hash_hmac_sha2_512;
		auth_mode = SHA2_512;
		break;
	default:
		auth_hash = NULL;
		auth_mode = 0;
		break;
	}

	switch (csp->csp_cipher_alg) {
	case CRYPTO_AES_CBC:
		cipher_mode = CCP_AES_MODE_CBC;
		break;
	case CRYPTO_AES_ICM:
		cipher_mode = CCP_AES_MODE_CTR;
		break;
	case CRYPTO_AES_NIST_GCM_16:
		cipher_mode = CCP_AES_MODE_GCTR;
		break;
	case CRYPTO_AES_XTS:
		cipher_mode = CCP_AES_MODE_XTS;
		break;
	default:
		cipher_mode = CCP_AES_MODE_ECB;
		break;
	}

	sc = device_get_softc(dev);
	mtx_lock(&sc->lock);
	if (sc->detaching) {
		mtx_unlock(&sc->lock);
		return (ENXIO);
	}

	s = crypto_get_driver_session(cses);

	/* Just grab the first usable queue for now. */
	for (q = 0; q < nitems(sc->queues); q++)
		if ((sc->valid_queues & (1 << q)) != 0)
			break;
	if (q == nitems(sc->queues)) {
		mtx_unlock(&sc->lock);
		return (ENXIO);
	}
	s->queue = q;

	switch (csp->csp_mode) {
	case CSP_MODE_AEAD:
		s->mode = GCM;
		break;
	case CSP_MODE_ETA:
		s->mode = AUTHENC;
		break;
	case CSP_MODE_DIGEST:
		s->mode = HMAC;
		break;
	case CSP_MODE_CIPHER:
		s->mode = BLKCIPHER;
		break;
	}

	if (s->mode == GCM) {
		if (csp->csp_auth_mlen == 0)
			s->gmac.hash_len = AES_GMAC_HASH_LEN;
		else
			s->gmac.hash_len = csp->csp_auth_mlen;
	} else if (auth_hash != NULL) {
		s->hmac.auth_hash = auth_hash;
		s->hmac.auth_mode = auth_mode;
		if (csp->csp_auth_mlen == 0)
			s->hmac.hash_len = auth_hash->hashsize;
		else
			s->hmac.hash_len = csp->csp_auth_mlen;
		ccp_init_hmac_digest(s, csp->csp_auth_key, csp->csp_auth_klen);
	}
	if (cipher_mode != CCP_AES_MODE_ECB) {
		s->blkcipher.cipher_mode = cipher_mode;
		if (csp->csp_cipher_key != NULL)
			ccp_aes_setkey(s, csp->csp_cipher_alg,
			    csp->csp_cipher_key, csp->csp_cipher_klen);
	}

	s->active = true;
	mtx_unlock(&sc->lock);

	return (0);
}

static void
ccp_freesession(device_t dev, crypto_session_t cses)
{
	struct ccp_session *s;

	s = crypto_get_driver_session(cses);

	if (s->pending != 0)
		device_printf(dev,
		    "session %p freed with %d pending requests\n", s,
		    s->pending);
	s->active = false;
}

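/*
 * Process one symmetric crypto request.  Failures detected here are
 * reported by completing the request with crp_etype set rather than by
 * returning an error to the framework.
 */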
static int
ccp_process(device_t dev, struct cryptop *crp, int hint)
{
	const struct crypto_session_params *csp;
	struct ccp_softc *sc;
	struct ccp_queue *qp;
	struct ccp_session *s;
	int error;
	bool qpheld;

	qpheld = false;
	qp = NULL;

	csp = crypto_get_params(crp->crp_session);
	s = crypto_get_driver_session(crp->crp_session);
	sc = device_get_softc(dev);
	mtx_lock(&sc->lock);
	qp = &sc->queues[s->queue];
	mtx_unlock(&sc->lock);
	error = ccp_queue_acquire_reserve(qp, 1 /* placeholder */, M_NOWAIT);
	if (error != 0)
		goto out;
	qpheld = true;

	error = ccp_populate_sglist(qp->cq_sg_crp, crp);
	if (error != 0)
		goto out;

	if (crp->crp_auth_key != NULL) {
		KASSERT(s->hmac.auth_hash != NULL, ("auth key without HMAC"));
		ccp_init_hmac_digest(s, crp->crp_auth_key, csp->csp_auth_klen);
	}
	if (crp->crp_cipher_key != NULL)
		ccp_aes_setkey(s, csp->csp_cipher_alg, crp->crp_cipher_key,
		    csp->csp_cipher_klen);

	switch (s->mode) {
	case HMAC:
		if (s->pending != 0) {
			error = EAGAIN;
			break;
		}
		error = ccp_hmac(qp, s, crp);
		break;
	case BLKCIPHER:
		if (s->pending != 0) {
			error = EAGAIN;
			break;
		}
		error = ccp_blkcipher(qp, s, crp);
		break;
	case AUTHENC:
		if (s->pending != 0) {
			error = EAGAIN;
			break;
		}
		error = ccp_authenc(qp, s, crp);
		break;
	case GCM:
		if (crp->crp_payload_length == 0) {
			mtx_unlock(&qp->cq_lock);
			ccp_gcm_soft(s, crp);
			return (0);
		}
		if (s->pending != 0) {
			error = EAGAIN;
			break;
		}
		error = ccp_gcm(qp, s, crp);
		break;
	}

	if (error == 0)
		s->pending++;

out:
	if (qpheld) {
		if (error != 0) {
			/*
			 * Squash EAGAIN so callers don't uselessly and
			 * expensively retry if the ring was full.
			 */
			if (error == EAGAIN)
				error = ENOMEM;
			ccp_queue_abort(qp);
		} else
			ccp_queue_release(qp);
	}

	if (error != 0) {
		DPRINTF(dev, "%s: early error:%d\n", __func__, error);
		crp->crp_etype = error;
		crypto_done(crp);
	}
	return (0);
}

static device_method_t ccp_methods[] = {
	DEVMETHOD(device_probe, ccp_probe),
	DEVMETHOD(device_attach, ccp_attach),
	DEVMETHOD(device_detach, ccp_detach),

	DEVMETHOD(cryptodev_probesession, ccp_probesession),
	DEVMETHOD(cryptodev_newsession, ccp_newsession),
	DEVMETHOD(cryptodev_freesession, ccp_freesession),
	DEVMETHOD(cryptodev_process, ccp_process),

	DEVMETHOD_END
};

static driver_t ccp_driver = {
	"ccp",
	ccp_methods,
	sizeof(struct ccp_softc)
};

static devclass_t ccp_devclass;
DRIVER_MODULE(ccp, pci, ccp_driver, ccp_devclass, NULL, NULL);
MODULE_VERSION(ccp, 1);
MODULE_DEPEND(ccp, crypto, 1, 1, 1);
MODULE_DEPEND(ccp, random_device, 1, 1, 1);
#if 0	/* There are enough known issues that we shouldn't load automatically */
MODULE_PNP_INFO("W32:vendor/device", pci, ccp, ccp_ids,
    nitems(ccp_ids));
#endif

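/*
 * Reserve ring space for 'n' descriptors, sleeping for space if M_WAITOK
 * is set; otherwise fail with EAGAIN when the ring is full.
 */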
static int
ccp_queue_reserve_space(struct ccp_queue *qp, unsigned n, int mflags)
{
	struct ccp_softc *sc;

	mtx_assert(&qp->cq_lock, MA_OWNED);
	sc = qp->cq_softc;

	if (n < 1 || n >= (1 << sc->ring_size_order))
		return (EINVAL);

	while (true) {
		if (ccp_queue_get_ring_space(qp) >= n)
			return (0);
		if ((mflags & M_WAITOK) == 0)
			return (EAGAIN);
		qp->cq_waiting = true;
		msleep(&qp->cq_tail, &qp->cq_lock, 0, "ccpqfull", 0);
	}
}

int
ccp_queue_acquire_reserve(struct ccp_queue *qp, unsigned n, int mflags)
{
	int error;

	mtx_lock(&qp->cq_lock);
	qp->cq_acq_tail = qp->cq_tail;
	error = ccp_queue_reserve_space(qp, n, mflags);
	if (error != 0)
		mtx_unlock(&qp->cq_lock);
	return (error);
}

void
ccp_queue_release(struct ccp_queue *qp)
{

	mtx_assert(&qp->cq_lock, MA_OWNED);
	if (qp->cq_tail != qp->cq_acq_tail) {
		wmb();
		ccp_queue_write_tail(qp);
	}
	mtx_unlock(&qp->cq_lock);
}

void
ccp_queue_abort(struct ccp_queue *qp)
{
	unsigned i;

	mtx_assert(&qp->cq_lock, MA_OWNED);

	/* Wipe out any descriptors associated with this aborted txn. */
	for (i = qp->cq_acq_tail; i != qp->cq_tail;
	    i = (i + 1) % (1 << qp->cq_softc->ring_size_order)) {
		memset(&qp->desc_ring[i], 0, sizeof(qp->desc_ring[i]));
	}
	qp->cq_tail = qp->cq_acq_tail;

	mtx_unlock(&qp->cq_lock);
}

#ifdef DDB
#define	_db_show_lock(lo)	LOCK_CLASS(lo)->lc_ddb_show(lo)
#define	db_show_lock(lk)	_db_show_lock(&(lk)->lock_object)
static void
db_show_ccp_sc(struct ccp_softc *sc)
{

	db_printf("ccp softc at %p\n", sc);
	db_printf(" cid: %d\n", (int)sc->cid);

	db_printf(" lock: ");
	db_show_lock(&sc->lock);

	db_printf(" detaching: %d\n", (int)sc->detaching);
	db_printf(" ring_size_order: %u\n", sc->ring_size_order);

	db_printf(" hw_version: %d\n", (int)sc->hw_version);
	db_printf(" hw_features: %b\n", (int)sc->hw_features,
	    "\20\24ELFC\23TRNG\22Zip_Compress\16Zip_Decompress\13ECC\12RSA"
	    "\11SHA\0103DES\07AES");

	db_printf(" hw status:\n");
	db_ccp_show_hw(sc);
}

static void
db_show_ccp_qp(struct ccp_queue *qp)
{

	db_printf(" lock: ");
	db_show_lock(&qp->cq_lock);

	db_printf(" cq_qindex: %u\n", qp->cq_qindex);
	db_printf(" cq_softc: %p\n", qp->cq_softc);

	db_printf(" head: %u\n", qp->cq_head);
	db_printf(" tail: %u\n", qp->cq_tail);
	db_printf(" acq_tail: %u\n", qp->cq_acq_tail);
	db_printf(" desc_ring: %p\n", qp->desc_ring);
	db_printf(" completions_ring: %p\n", qp->completions_ring);
	db_printf(" descriptors (phys): 0x%jx\n",
	    (uintmax_t)qp->desc_ring_bus_addr);

	db_printf(" hw status:\n");
	db_ccp_show_queue_hw(qp);
}

DB_SHOW_COMMAND(ccp, db_show_ccp)
{
	struct ccp_softc *sc;
	unsigned unit, qindex;

	if (!have_addr)
		goto usage;

	unit = (unsigned)addr;

	sc = devclass_get_softc(ccp_devclass, unit);
	if (sc == NULL) {
		db_printf("No such device ccp%u\n", unit);
		goto usage;
	}

	if (count == -1) {
		db_show_ccp_sc(sc);
		return;
	}

	qindex = (unsigned)count;
	if (qindex >= nitems(sc->queues)) {
		db_printf("No such queue %u\n", qindex);
		goto usage;
	}
	db_show_ccp_qp(&sc->queues[qindex]);
	return;

usage:
	db_printf("usage: show ccp <unit>[,<qindex>]\n");
	return;
}
#endif /* DDB */