opencrypto: Introduce crypto_dispatch_async()
Currently, OpenCrypto consumers can request asynchronous dispatch by
setting a flag in the cryptop. (Currently only IPsec may do this.) I
think this is a bit confusing: we (conditionally) set cryptop flags to
request async dispatch, and then crypto_dispatch() immediately examines
those flags to see if the consumer wants async dispatch. The flag names
are also confusing since they don't specify what "async" applies to:
dispatch or completion.

Add a new KPI, crypto_dispatch_async(), rather than encoding the
requested dispatch type in each cryptop. crypto_dispatch_async() falls
back to crypto_dispatch() if the session's driver provides asynchronous
dispatch. Get rid of CRYPTOP_ASYNC() and CRYPTOP_ASYNC_KEEPORDER().

Similarly, add crypto_dispatch_batch() to request processing of a tailq
of cryptops, rather than encoding the scheduling policy using cryptop
flags. Convert GELI, the only user of this interface (disabled by
default), to use the new interface.

Add CRYPTO_SESS_SYNC(), which can be used by consumers to determine
whether crypto requests will be dispatched synchronously. This is just
a helper macro. Use it instead of looking at cap flags directly.

Fix style in crypto_done(). Also get rid of CRYPTO_RETW_EMPTY() and
just check the relevant queues directly. This could result in some
unnecessary wakeups, but I think it's very uncommon to be using more
than one queue per worker in a given workload, so checking all three
queues is a waste of cycles.

Reviewed by:	jhb
Sponsored by:	Ampere Computing
Submitted by:	Klara, Inc.
MFC after:	2 weeks
Differential Revision:	https://reviews.freebsd.org/D28194
commit 68f6800ce0
parent 7509b677b4
@@ -30,7 +30,7 @@
 .\"
 .\" $FreeBSD$
 .\"
-.Dd August 12, 2020
+.Dd February 8, 2021
 .Dt CRYPTO_REQUEST 9
 .Os
 .Sh NAME
@@ -40,6 +40,10 @@
 .In opencrypto/cryptodev.h
 .Ft int
 .Fn crypto_dispatch "struct cryptop *crp"
+.Ft int
+.Fn crypto_dispatch_async "struct cryptop *crp" "int flags"
+.Ft void
+.Fn crypto_dispatch_batch "struct cryptopq *crpq" "int flags"
 .Ft void
 .Fn crypto_destroyreq "struct cryptop *crp"
 .Ft void
@@ -104,10 +108,15 @@ the caller should set fields in the structure to describe
 request-specific parameters.
 Unused fields should be left as-is.
 .Pp
-.Fn crypto_dispatch
-passes a crypto request to the driver attached to the request's session.
-If there are errors in the request's fields, this function may return
-an error to the caller.
+The
+.Fn crypto_dispatch ,
+.Fn crypto_dispatch_async ,
+and
+.Fn crypto_dispatch_batch
+functions pass one or more crypto requests to the driver attached to the
+request's session.
+If there are errors in the request's fields, these functions may return an
+error to the caller.
 If errors are encountered while servicing the request, they will instead
 be reported to the request's callback function
 .Pq Fa crp_callback
@@ -341,64 +350,53 @@ store the partial IV in the data buffer and pass the full IV separately in
 .Ss Request and Callback Scheduling
 The crypto framework provides multiple methods of scheduling the dispatch
 of requests to drivers along with the processing of driver callbacks.
-Requests use flags in
-.Fa crp_flags
-to select the desired scheduling methods.
-.Pp
-.Fn crypto_dispatch
-can pass the request to the session's driver via three different methods:
-.Bl -enum
-.It
-The request is queued to a taskqueue backed by a pool of worker threads.
-By default the pool is sized to provide one thread for each CPU.
-Worker threads dequeue requests and pass them to the driver
-asynchronously.
-.It
-The request is passed to the driver synchronously in the context of the
-thread invoking
-.Fn crypto_dispatch .
-.It
-The request is queued to a queue of pending requests.
-A single worker thread dequeues requests and passes them to the driver
-asynchronously.
-.El
-.Pp
-To select the first method (taskqueue backed by multiple threads),
-requests should set
-.Dv CRYPTO_F_ASYNC .
-To always use the third method (queue to single worker thread),
-requests should set
-.Dv CRYPTO_F_BATCH .
-If both flags are set,
-.Dv CRYPTO_F_ASYNC
-takes precedence.
-If neither flag is set,
-.Fn crypto_dispatch
-will first attempt the second method (invoke driver synchronously).
-If the driver is blocked,
-the request will be queued using the third method.
-One caveat is that the first method is only used for requests using software
-drivers which use host CPUs to process requests.
-Requests whose session is associated with a hardware driver will ignore
-.Dv CRYPTO_F_ASYNC
-and only use
-.Dv CRYPTO_F_BATCH
-to determine how requests should be scheduled.
-.Pp
-In addition to bypassing synchronous dispatch in
-.Fn crypto_dispatch ,
-.Dv CRYPTO_F_BATCH
-requests additional changes aimed at optimizing batches of requests to
-the same driver.
-When the worker thread processes a request with
-.Dv CRYPTO_F_BATCH ,
-it will search the pending request queue for any other requests for the same
-driver,
-including requests from different sessions.
-If any other requests are present,
+The
+.Fn crypto_dispatch ,
+.Fn crypto_dispatch_async ,
+and
+.Fn crypto_dispatch_batch
+functions can be used to request different dispatch scheduling policies.
+.Pp
+.Fn crypto_dispatch
+synchronously passes the request to the driver.
+The driver itself may process the request synchronously or asynchronously
+depending on whether the driver is implemented by software or hardware.
+.Pp
+.Fn crypto_dispatch_async
+dispatches the request asynchronously.
+If the driver is inherently synchronous, the request is queued to a taskqueue
+backed by a pool of worker threads.
+This can increase throughput by allowing requests from a single producer to be
+processed in parallel.
+By default the pool is sized to provide one thread for each CPU.
+Worker threads dequeue requests and pass them to the driver asynchronously.
+.Fn crypto_dispatch_async
+additionally takes a
+.Fa flags
+parameter.
+The
+.Dv CRYPTO_ASYNC_ORDERED
+flag indicates that completion callbacks for requests must be called in the
+same order as requests were dispatched.
+If the driver is asynchronous, the behavior of
+.Fn crypto_dispatch_async
+is identical to that of
+.Fn crypto_dispatch .
+.Pp
+.Fn crypto_dispatch_batch
+allows the caller to collect a batch of requests and submit them to the driver
+at the same time.
+This allows hardware drivers to optimize the scheduling of request processing
+and batch completion interrupts.
+A batch is submitted to the driver by invoking the driver's process method on
+each request, specifying
 .Dv CRYPTO_HINT_MORE
-is passed to the driver's process method.
-Drivers may use this to batch completion interrupts.
+with each request except for the last.
+The
+.Fa flags
+parameter to
+.Fn crypto_dispatch_batch
+is currently ignored.
 .Pp
 Callback function scheduling is simpler than request scheduling.
 Callbacks can either be invoked synchronously from
@@ -449,11 +449,13 @@ void
 g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
 {
     struct g_eli_softc *sc;
+    struct cryptopq crpq;
     struct cryptop *crp;
     u_int i, lsec, nsec, data_secsize, decr_secsize, encr_secsize;
     off_t dstoff;
     u_char *p, *data, *authkey, *plaindata;
     int error;
+    bool batch;
 
     G_ELI_LOGREQ(3, bp, "%s", __func__);
 
@@ -496,6 +498,9 @@ g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
     p = (char *)roundup((uintptr_t)p, sizeof(uintptr_t));
 #endif
 
+    TAILQ_INIT(&crpq);
+    batch = atomic_load_int(&g_eli_batch) != 0;
+
     for (i = 1; i <= nsec; i++, dstoff += encr_secsize) {
         crp = crypto_getreq(wr->w_sid, M_WAITOK);
         authkey = (u_char *)p; p += G_ELI_AUTH_SECKEYLEN;
@@ -521,8 +526,6 @@ g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
         crp->crp_opaque = (void *)bp;
         data += encr_secsize;
         crp->crp_flags = CRYPTO_F_CBIFSYNC;
-        if (g_eli_batch)
-            crp->crp_flags |= CRYPTO_F_BATCH;
         if (bp->bio_cmd == BIO_WRITE) {
             crp->crp_callback = g_eli_auth_write_done;
             crp->crp_op = CRYPTO_OP_ENCRYPT |
@@ -549,8 +552,15 @@ g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
         g_eli_auth_keygen(sc, dstoff, authkey);
         crp->crp_auth_key = authkey;
 
-        error = crypto_dispatch(crp);
-        KASSERT(error == 0, ("crypto_dispatch() failed (error=%d)",
-            error));
+        if (batch) {
+            TAILQ_INSERT_TAIL(&crpq, crp, crp_next);
+        } else {
+            error = crypto_dispatch(crp);
+            KASSERT(error == 0,
+                ("crypto_dispatch() failed (error=%d)", error));
+        }
     }
+
+    if (batch)
+        crypto_dispatch_batch(&crpq, 0);
 }
@@ -261,13 +261,14 @@ void
 g_eli_crypto_run(struct g_eli_worker *wr, struct bio *bp)
 {
     struct g_eli_softc *sc;
+    struct cryptopq crpq;
     struct cryptop *crp;
     vm_page_t *pages;
     u_int i, nsec, secsize;
     off_t dstoff;
     u_char *data = NULL;
-    int error;
-    int pages_offset;
+    int error, pages_offset;
+    bool batch;
 
     G_ELI_LOGREQ(3, bp, "%s", __func__);
 
@@ -303,6 +304,9 @@ g_eli_crypto_run(struct g_eli_worker *wr, struct bio *bp)
         }
     }
 
+    TAILQ_INIT(&crpq);
+    batch = atomic_load_int(&g_eli_batch) != 0;
+
     for (i = 0, dstoff = bp->bio_offset; i < nsec; i++, dstoff += secsize) {
         crp = crypto_getreq(wr->w_sid, M_WAITOK);
 
@@ -325,9 +329,6 @@ g_eli_crypto_run(struct g_eli_worker *wr, struct bio *bp)
             crp->crp_callback = g_eli_crypto_read_done;
         }
         crp->crp_flags = CRYPTO_F_CBIFSYNC;
-        if (g_eli_batch)
-            crp->crp_flags |= CRYPTO_F_BATCH;
-
         crp->crp_payload_start = 0;
         crp->crp_payload_length = secsize;
         if ((sc->sc_flags & G_ELI_FLAG_SINGLE_KEY) == 0) {
@@ -340,8 +341,15 @@ g_eli_crypto_run(struct g_eli_worker *wr, struct bio *bp)
                 sizeof(crp->crp_iv));
         }
 
-        error = crypto_dispatch(crp);
-        KASSERT(error == 0, ("crypto_dispatch() failed (error=%d)",
-            error));
+        if (batch) {
+            TAILQ_INSERT_TAIL(&crpq, crp, crp_next);
+        } else {
+            error = crypto_dispatch(crp);
+            KASSERT(error == 0,
+                ("crypto_dispatch() failed (error=%d)", error));
+        }
     }
+
+    if (batch)
+        crypto_dispatch_batch(&crpq, 0);
 }
@@ -122,7 +122,7 @@ aes_crypto_cb(struct cryptop *crp)
     int error;
     struct aes_state *as = (struct aes_state *) crp->crp_opaque;
 
-    if (crypto_ses2caps(crp->crp_session) & CRYPTOCAP_F_SYNC)
+    if (CRYPTO_SESS_SYNC(crp->crp_session))
         return (0);
 
     error = crp->crp_etype;
@@ -165,7 +165,7 @@ aes_encrypt_1(const struct krb5_key_state *ks, int buftype, void *buf,
 
     error = crypto_dispatch(crp);
 
-    if ((crypto_ses2caps(as->as_session_aes) & CRYPTOCAP_F_SYNC) == 0) {
+    if (!CRYPTO_SESS_SYNC(as->as_session_aes)) {
         mtx_lock(&as->as_lock);
         if (!error && !(crp->crp_flags & CRYPTO_F_DONE))
             error = msleep(crp, &as->as_lock, 0, "gssaes", 0);
@@ -335,7 +335,7 @@ aes_checksum(const struct krb5_key_state *ks, int usage,
 
     error = crypto_dispatch(crp);
 
-    if ((crypto_ses2caps(as->as_session_sha1) & CRYPTOCAP_F_SYNC) == 0) {
+    if (!CRYPTO_SESS_SYNC(as->as_session_sha1)) {
         mtx_lock(&as->as_lock);
         if (!error && !(crp->crp_flags & CRYPTO_F_DONE))
             error = msleep(crp, &as->as_lock, 0, "gssaes", 0);
@ -652,8 +652,6 @@ ah_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
|
|||||||
/* Crypto operation descriptor. */
|
/* Crypto operation descriptor. */
|
||||||
crp->crp_op = CRYPTO_OP_COMPUTE_DIGEST;
|
crp->crp_op = CRYPTO_OP_COMPUTE_DIGEST;
|
||||||
crp->crp_flags = CRYPTO_F_CBIFSYNC;
|
crp->crp_flags = CRYPTO_F_CBIFSYNC;
|
||||||
if (V_async_crypto)
|
|
||||||
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
|
|
||||||
crypto_use_mbuf(crp, m);
|
crypto_use_mbuf(crp, m);
|
||||||
crp->crp_callback = ah_input_cb;
|
crp->crp_callback = ah_input_cb;
|
||||||
crp->crp_opaque = xd;
|
crp->crp_opaque = xd;
|
||||||
@ -671,6 +669,9 @@ ah_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
|
|||||||
xd->skip = skip;
|
xd->skip = skip;
|
||||||
xd->cryptoid = cryptoid;
|
xd->cryptoid = cryptoid;
|
||||||
xd->vnet = curvnet;
|
xd->vnet = curvnet;
|
||||||
|
if (V_async_crypto)
|
||||||
|
return (crypto_dispatch_async(crp, CRYPTO_ASYNC_ORDERED));
|
||||||
|
else
|
||||||
return (crypto_dispatch(crp));
|
return (crypto_dispatch(crp));
|
||||||
bad:
|
bad:
|
||||||
m_freem(m);
|
m_freem(m);
|
||||||
@ -1036,8 +1037,6 @@ ah_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
|
|||||||
/* Crypto operation descriptor. */
|
/* Crypto operation descriptor. */
|
||||||
crp->crp_op = CRYPTO_OP_COMPUTE_DIGEST;
|
crp->crp_op = CRYPTO_OP_COMPUTE_DIGEST;
|
||||||
crp->crp_flags = CRYPTO_F_CBIFSYNC;
|
crp->crp_flags = CRYPTO_F_CBIFSYNC;
|
||||||
if (V_async_crypto)
|
|
||||||
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
|
|
||||||
crypto_use_mbuf(crp, m);
|
crypto_use_mbuf(crp, m);
|
||||||
crp->crp_callback = ah_output_cb;
|
crp->crp_callback = ah_output_cb;
|
||||||
crp->crp_opaque = xd;
|
crp->crp_opaque = xd;
|
||||||
@ -1055,7 +1054,10 @@ ah_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
|
|||||||
xd->cryptoid = cryptoid;
|
xd->cryptoid = cryptoid;
|
||||||
xd->vnet = curvnet;
|
xd->vnet = curvnet;
|
||||||
|
|
||||||
return crypto_dispatch(crp);
|
if (V_async_crypto)
|
||||||
|
return (crypto_dispatch_async(crp, CRYPTO_ASYNC_ORDERED));
|
||||||
|
else
|
||||||
|
return (crypto_dispatch(crp));
|
||||||
bad:
|
bad:
|
||||||
if (m)
|
if (m)
|
||||||
m_freem(m);
|
m_freem(m);
|
||||||
|
@@ -406,8 +406,6 @@ esp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
 
     /* Crypto operation descriptor */
     crp->crp_flags = CRYPTO_F_CBIFSYNC;
-    if (V_async_crypto)
-        crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
     crypto_use_mbuf(crp, m);
     crp->crp_callback = esp_input_cb;
     crp->crp_opaque = xd;
@@ -460,6 +458,9 @@ esp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
     } else if (sav->ivlen != 0)
         crp->crp_iv_start = skip + hlen - sav->ivlen;
 
+    if (V_async_crypto)
+        return (crypto_dispatch_async(crp, CRYPTO_ASYNC_ORDERED));
+    else
         return (crypto_dispatch(crp));
 
 crp_aad_fail:
@@ -895,8 +896,6 @@ esp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
 
     /* Crypto operation descriptor. */
     crp->crp_flags |= CRYPTO_F_CBIFSYNC;
-    if (V_async_crypto)
-        crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
     crypto_use_mbuf(crp, m);
     crp->crp_callback = esp_output_cb;
     crp->crp_opaque = xd;
@@ -944,7 +943,10 @@ esp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
         crp->crp_digest_start = m->m_pkthdr.len - alen;
     }
 
-    return crypto_dispatch(crp);
+    if (V_async_crypto)
+        return (crypto_dispatch_async(crp, CRYPTO_ASYNC_ORDERED));
+    else
+        return (crypto_dispatch(crp));
 
 crp_aad_fail:
     free(xd, M_XDATA);
@ -188,8 +188,6 @@ static struct crypto_ret_worker *crypto_ret_workers = NULL;
|
|||||||
|
|
||||||
#define CRYPTO_RETW_LOCK(w) mtx_lock(&w->crypto_ret_mtx)
|
#define CRYPTO_RETW_LOCK(w) mtx_lock(&w->crypto_ret_mtx)
|
||||||
#define CRYPTO_RETW_UNLOCK(w) mtx_unlock(&w->crypto_ret_mtx)
|
#define CRYPTO_RETW_UNLOCK(w) mtx_unlock(&w->crypto_ret_mtx)
|
||||||
#define CRYPTO_RETW_EMPTY(w) \
|
|
||||||
(TAILQ_EMPTY(&w->crp_ret_q) && TAILQ_EMPTY(&w->crp_ret_kq) && TAILQ_EMPTY(&w->crp_ordered_ret_q))
|
|
||||||
|
|
||||||
static int crypto_workers_num = 0;
|
static int crypto_workers_num = 0;
|
||||||
SYSCTL_INT(_kern_crypto, OID_AUTO, num_workers, CTLFLAG_RDTUN,
|
SYSCTL_INT(_kern_crypto, OID_AUTO, num_workers, CTLFLAG_RDTUN,
|
||||||
@ -1406,11 +1404,8 @@ crp_sanity(struct cryptop *crp)
|
|||||||
}
|
}
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
/*
|
static int
|
||||||
* Add a crypto request to a queue, to be processed by the kernel thread.
|
crypto_dispatch_one(struct cryptop *crp, int hint)
|
||||||
*/
|
|
||||||
int
|
|
||||||
crypto_dispatch(struct cryptop *crp)
|
|
||||||
{
|
{
|
||||||
struct cryptocap *cap;
|
struct cryptocap *cap;
|
||||||
int result;
|
int result;
|
||||||
@ -1418,49 +1413,82 @@ crypto_dispatch(struct cryptop *crp)
|
|||||||
#ifdef INVARIANTS
|
#ifdef INVARIANTS
|
||||||
crp_sanity(crp);
|
crp_sanity(crp);
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
CRYPTOSTAT_INC(cs_ops);
|
CRYPTOSTAT_INC(cs_ops);
|
||||||
|
|
||||||
crp->crp_retw_id = crp->crp_session->id % crypto_workers_num;
|
crp->crp_retw_id = crp->crp_session->id % crypto_workers_num;
|
||||||
|
|
||||||
if (CRYPTOP_ASYNC(crp)) {
|
/*
|
||||||
if (crp->crp_flags & CRYPTO_F_ASYNC_KEEPORDER) {
|
* Caller marked the request to be processed immediately; dispatch it
|
||||||
|
* directly to the driver unless the driver is currently blocked, in
|
||||||
|
* which case it is queued for deferred dispatch.
|
||||||
|
*/
|
||||||
|
cap = crp->crp_session->cap;
|
||||||
|
if (!atomic_load_int(&cap->cc_qblocked)) {
|
||||||
|
result = crypto_invoke(cap, crp, hint);
|
||||||
|
if (result != ERESTART)
|
||||||
|
return (result);
|
||||||
|
|
||||||
|
/*
|
||||||
|
* The driver ran out of resources, put the request on the
|
||||||
|
* queue.
|
||||||
|
*/
|
||||||
|
}
|
||||||
|
crypto_batch_enqueue(crp);
|
||||||
|
return (0);
|
||||||
|
}
|
||||||
|
|
||||||
|
int
|
||||||
|
crypto_dispatch(struct cryptop *crp)
|
||||||
|
{
|
||||||
|
return (crypto_dispatch_one(crp, 0));
|
||||||
|
}
|
||||||
|
|
||||||
|
int
|
||||||
|
crypto_dispatch_async(struct cryptop *crp, int flags)
|
||||||
|
{
|
||||||
struct crypto_ret_worker *ret_worker;
|
struct crypto_ret_worker *ret_worker;
|
||||||
|
|
||||||
ret_worker = CRYPTO_RETW(crp->crp_retw_id);
|
if (!CRYPTO_SESS_SYNC(crp->crp_session)) {
|
||||||
|
/*
|
||||||
|
* The driver issues completions asynchonously, don't bother
|
||||||
|
* deferring dispatch to a worker thread.
|
||||||
|
*/
|
||||||
|
return (crypto_dispatch(crp));
|
||||||
|
}
|
||||||
|
|
||||||
|
#ifdef INVARIANTS
|
||||||
|
crp_sanity(crp);
|
||||||
|
#endif
|
||||||
|
CRYPTOSTAT_INC(cs_ops);
|
||||||
|
|
||||||
|
crp->crp_retw_id = crp->crp_session->id % crypto_workers_num;
|
||||||
|
if ((flags & CRYPTO_ASYNC_ORDERED) != 0) {
|
||||||
|
crp->crp_flags |= CRYPTO_F_ASYNC_ORDERED;
|
||||||
|
ret_worker = CRYPTO_RETW(crp->crp_retw_id);
|
||||||
CRYPTO_RETW_LOCK(ret_worker);
|
CRYPTO_RETW_LOCK(ret_worker);
|
||||||
crp->crp_seq = ret_worker->reorder_ops++;
|
crp->crp_seq = ret_worker->reorder_ops++;
|
||||||
CRYPTO_RETW_UNLOCK(ret_worker);
|
CRYPTO_RETW_UNLOCK(ret_worker);
|
||||||
}
|
}
|
||||||
|
|
||||||
TASK_INIT(&crp->crp_task, 0, crypto_task_invoke, crp);
|
TASK_INIT(&crp->crp_task, 0, crypto_task_invoke, crp);
|
||||||
taskqueue_enqueue(crypto_tq, &crp->crp_task);
|
taskqueue_enqueue(crypto_tq, &crp->crp_task);
|
||||||
return (0);
|
return (0);
|
||||||
}
|
}
|
||||||
|
|
||||||
if ((crp->crp_flags & CRYPTO_F_BATCH) == 0) {
|
void
|
||||||
/*
|
crypto_dispatch_batch(struct cryptopq *crpq, int flags)
|
||||||
* Caller marked the request to be processed
|
{
|
||||||
* immediately; dispatch it directly to the
|
struct cryptop *crp;
|
||||||
* driver unless the driver is currently blocked.
|
int hint;
|
||||||
*/
|
|
||||||
cap = crp->crp_session->cap;
|
while ((crp = TAILQ_FIRST(crpq)) != NULL) {
|
||||||
if (!cap->cc_qblocked) {
|
hint = TAILQ_NEXT(crp, crp_next) != NULL ? CRYPTO_HINT_MORE : 0;
|
||||||
result = crypto_invoke(cap, crp, 0);
|
TAILQ_REMOVE(crpq, crp, crp_next);
|
||||||
if (result != ERESTART)
|
if (crypto_dispatch_one(crp, hint) != 0)
|
||||||
return (result);
|
|
||||||
/*
|
|
||||||
* The driver ran out of resources, put the request on
|
|
||||||
* the queue.
|
|
||||||
*/
|
|
||||||
}
|
|
||||||
}
|
|
||||||
crypto_batch_enqueue(crp);
|
crypto_batch_enqueue(crp);
|
||||||
return 0;
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
void
|
static void
|
||||||
crypto_batch_enqueue(struct cryptop *crp)
|
crypto_batch_enqueue(struct cryptop *crp)
|
||||||
{
|
{
|
||||||
|
|
||||||
@ -1814,10 +1842,10 @@ crypto_done(struct cryptop *crp)
|
|||||||
* doing extraneous context switches; the latter is mostly
|
* doing extraneous context switches; the latter is mostly
|
||||||
* used with the software crypto driver.
|
* used with the software crypto driver.
|
||||||
*/
|
*/
|
||||||
if (!CRYPTOP_ASYNC_KEEPORDER(crp) &&
|
if ((crp->crp_flags & CRYPTO_F_ASYNC_ORDERED) == 0 &&
|
||||||
((crp->crp_flags & CRYPTO_F_CBIMM) ||
|
((crp->crp_flags & CRYPTO_F_CBIMM) != 0 ||
|
||||||
((crp->crp_flags & CRYPTO_F_CBIFSYNC) &&
|
((crp->crp_flags & CRYPTO_F_CBIFSYNC) != 0 &&
|
||||||
(crypto_ses2caps(crp->crp_session) & CRYPTOCAP_F_SYNC)))) {
|
CRYPTO_SESS_SYNC(crp->crp_session)))) {
|
||||||
/*
|
/*
|
||||||
* Do the callback directly. This is ok when the
|
* Do the callback directly. This is ok when the
|
||||||
* callback routine does very little (e.g. the
|
* callback routine does very little (e.g. the
|
||||||
@ -1829,36 +1857,35 @@ crypto_done(struct cryptop *crp)
|
|||||||
bool wake;
|
bool wake;
|
||||||
|
|
||||||
ret_worker = CRYPTO_RETW(crp->crp_retw_id);
|
ret_worker = CRYPTO_RETW(crp->crp_retw_id);
|
||||||
wake = false;
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Normal case; queue the callback for the thread.
|
* Normal case; queue the callback for the thread.
|
||||||
*/
|
*/
|
||||||
CRYPTO_RETW_LOCK(ret_worker);
|
CRYPTO_RETW_LOCK(ret_worker);
|
||||||
if (CRYPTOP_ASYNC_KEEPORDER(crp)) {
|
if ((crp->crp_flags & CRYPTO_F_ASYNC_ORDERED) != 0) {
|
||||||
struct cryptop *tmp;
|
struct cryptop *tmp;
|
||||||
|
|
||||||
TAILQ_FOREACH_REVERSE(tmp, &ret_worker->crp_ordered_ret_q,
|
TAILQ_FOREACH_REVERSE(tmp,
|
||||||
cryptop_q, crp_next) {
|
&ret_worker->crp_ordered_ret_q, cryptop_q,
|
||||||
|
crp_next) {
|
||||||
if (CRYPTO_SEQ_GT(crp->crp_seq, tmp->crp_seq)) {
|
if (CRYPTO_SEQ_GT(crp->crp_seq, tmp->crp_seq)) {
|
||||||
TAILQ_INSERT_AFTER(&ret_worker->crp_ordered_ret_q,
|
TAILQ_INSERT_AFTER(
|
||||||
tmp, crp, crp_next);
|
&ret_worker->crp_ordered_ret_q, tmp,
|
||||||
|
crp, crp_next);
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
if (tmp == NULL) {
|
if (tmp == NULL) {
|
||||||
TAILQ_INSERT_HEAD(&ret_worker->crp_ordered_ret_q,
|
TAILQ_INSERT_HEAD(
|
||||||
crp, crp_next);
|
&ret_worker->crp_ordered_ret_q, crp,
|
||||||
|
crp_next);
|
||||||
}
|
}
|
||||||
|
|
||||||
if (crp->crp_seq == ret_worker->reorder_cur_seq)
|
wake = crp->crp_seq == ret_worker->reorder_cur_seq;
|
||||||
wake = true;
|
} else {
|
||||||
}
|
wake = TAILQ_EMPTY(&ret_worker->crp_ret_q);
|
||||||
else {
|
TAILQ_INSERT_TAIL(&ret_worker->crp_ret_q, crp,
|
||||||
if (CRYPTO_RETW_EMPTY(ret_worker))
|
crp_next);
|
||||||
wake = true;
|
|
||||||
|
|
||||||
TAILQ_INSERT_TAIL(&ret_worker->crp_ret_q, crp, crp_next);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
if (wake)
|
if (wake)
|
||||||
@ -1894,7 +1921,7 @@ crypto_kdone(struct cryptkop *krp)
|
|||||||
ret_worker = CRYPTO_RETW(0);
|
ret_worker = CRYPTO_RETW(0);
|
||||||
|
|
||||||
CRYPTO_RETW_LOCK(ret_worker);
|
CRYPTO_RETW_LOCK(ret_worker);
|
||||||
if (CRYPTO_RETW_EMPTY(ret_worker))
|
if (TAILQ_EMPTY(&ret_worker->crp_ret_kq))
|
||||||
wakeup_one(&ret_worker->crp_ret_q); /* shared wait channel */
|
wakeup_one(&ret_worker->crp_ret_q); /* shared wait channel */
|
||||||
TAILQ_INSERT_TAIL(&ret_worker->crp_ret_kq, krp, krp_next);
|
TAILQ_INSERT_TAIL(&ret_worker->crp_ret_kq, krp, krp_next);
|
||||||
CRYPTO_RETW_UNLOCK(ret_worker);
|
CRYPTO_RETW_UNLOCK(ret_worker);
|
||||||
@ -1991,13 +2018,10 @@ crypto_proc(void)
|
|||||||
*/
|
*/
|
||||||
if (submit->crp_session->cap == cap)
|
if (submit->crp_session->cap == cap)
|
||||||
hint = CRYPTO_HINT_MORE;
|
hint = CRYPTO_HINT_MORE;
|
||||||
break;
|
|
||||||
} else {
|
} else {
|
||||||
submit = crp;
|
submit = crp;
|
||||||
if ((submit->crp_flags & CRYPTO_F_BATCH) == 0)
|
|
||||||
break;
|
|
||||||
/* keep scanning for more are q'd */
|
|
||||||
}
|
}
|
||||||
|
break;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
if (submit != NULL) {
|
if (submit != NULL) {
|
||||||
|
@@ -455,18 +455,10 @@ struct cryptop {
      */
     int		crp_flags;
 
-#define	CRYPTO_F_BATCH		0x0008	/* Batch op if possible */
 #define	CRYPTO_F_CBIMM		0x0010	/* Do callback immediately */
 #define	CRYPTO_F_DONE		0x0020	/* Operation completed */
 #define	CRYPTO_F_CBIFSYNC	0x0040	/* Do CBIMM if op is synchronous */
-#define	CRYPTO_F_ASYNC		0x0080	/* Dispatch crypto jobs on several threads
-					 * if op is synchronous
-					 */
-#define	CRYPTO_F_ASYNC_KEEPORDER 0x0100	/*
-					 * Dispatch the crypto jobs in the same
-					 * order there are submitted. Applied only
-					 * if CRYPTO_F_ASYNC flags is set
-					 */
+#define	CRYPTO_F_ASYNC_ORDERED	0x0100	/* Completions must happen in order */
 #define	CRYPTO_F_IV_SEPARATE	0x0200	/* Use crp_iv[] as IV. */
 
     int		crp_op;
@@ -506,6 +498,8 @@ struct cryptop {
      */
 };
 
+TAILQ_HEAD(cryptopq, cryptop);
+
 static __inline void
 _crypto_use_buf(struct crypto_buffer *cb, void *buf, int len)
 {
@@ -587,12 +581,6 @@ crypto_use_output_uio(struct cryptop *crp, struct uio *uio)
     _crypto_use_uio(&crp->crp_obuf, uio);
 }
 
-#define	CRYPTOP_ASYNC(crp) \
-	(((crp)->crp_flags & CRYPTO_F_ASYNC) && \
-	crypto_ses2caps((crp)->crp_session) & CRYPTOCAP_F_SYNC)
-#define	CRYPTOP_ASYNC_KEEPORDER(crp) \
-	(CRYPTOP_ASYNC(crp) && \
-	(crp)->crp_flags & CRYPTO_F_ASYNC_KEEPORDER)
 #define	CRYPTO_HAS_OUTPUT_BUFFER(crp) \
 	((crp)->crp_obuf.cb_type != CRYPTO_BUF_NONE)
 
@@ -642,6 +630,8 @@ extern void crypto_freesession(crypto_session_t cses);
 #define	CRYPTOCAP_F_SOFTWARE	CRYPTO_FLAG_SOFTWARE
 #define	CRYPTOCAP_F_SYNC	0x04000000	/* operates synchronously */
 #define	CRYPTOCAP_F_ACCEL_SOFTWARE 0x08000000
+#define	CRYPTO_SESS_SYNC(sess) \
+	((crypto_ses2caps(sess) & CRYPTOCAP_F_SYNC) != 0)
 
 extern	int32_t crypto_get_driverid(device_t dev, size_t session_size,
 	    int flags);
 extern	int crypto_find_driver(const char *);
@@ -650,6 +640,9 @@ extern int crypto_getcaps(int hid);
 extern	int crypto_kregister(uint32_t, int, uint32_t);
 extern	int crypto_unregister_all(uint32_t driverid);
 extern	int crypto_dispatch(struct cryptop *crp);
+#define	CRYPTO_ASYNC_ORDERED	0x1	/* complete in order dispatched */
+extern	int crypto_dispatch_async(struct cryptop *crp, int flags);
+extern	void crypto_dispatch_batch(struct cryptopq *crpq, int flags);
 extern	int crypto_kdispatch(struct cryptkop *);
 #define	CRYPTO_SYMQ	0x1
 #define	CRYPTO_ASYMQ	0x2
@@ -60,7 +60,7 @@
  * in the range 5 to 9.
  */
 #undef __FreeBSD_version
-#define __FreeBSD_version 1400003	/* Master, propagated to newvers */
+#define __FreeBSD_version 1400004	/* Master, propagated to newvers */
 
 /*
  * __FreeBSD_kernel__ indicates that this system uses the kernel of FreeBSD,