opencrypto: Introduce crypto_dispatch_async()

OpenCrypto consumers can currently request asynchronous dispatch by
setting a flag in the cryptop.  (Today only IPsec does this.)  I
think this is a bit confusing: we (conditionally) set cryptop flags to
request async dispatch, and then crypto_dispatch() immediately examines
those flags to see if the consumer wants async dispatch.  The flag names
are also confusing since they don't specify what "async" applies to:
dispatch or completion.

Add a new KPI, crypto_dispatch_async(), rather than encoding the
requested dispatch type in each cryptop. crypto_dispatch_async() falls
back to crypto_dispatch() if the session's driver provides asynchronous
dispatch. Get rid of CRYPTOP_ASYNC() and CRYPTOP_ASYNC_KEEPORDER().

Similarly, add crypto_dispatch_batch() to request processing of a tailq
of cryptops, rather than encoding the scheduling policy using cryptop
flags.  Convert GELI, the only user of this interface (disabled by
default), to use the new interface.

Add CRYPTO_SESS_SYNC(), which can be used by consumers to determine
whether crypto requests will be dispatched synchronously. This is just
a helper macro. Use it instead of looking at cap flags directly.

Fix style in crypto_done().  Also get rid of CRYPTO_RETW_EMPTY() and
just check the relevant queue directly.  This could result in some
unnecessary wakeups, but I think it's very uncommon to be using more
than one queue per worker in a given workload, so checking all three
queues is a waste of cycles.

Reviewed by:	jhb
Sponsored by:	Ampere Computing
Submitted by:	Klara, Inc.
MFC after:	2 weeks
Differential Revision:	https://reviews.freebsd.org/D28194
This commit is contained in:
Mark Johnston 2021-02-08 09:19:19 -05:00
parent 7509b677b4
commit 68f6800ce0
9 changed files with 205 additions and 168 deletions

View File

@@ -30,7 +30,7 @@
.\"
.\" $FreeBSD$
.\"
.Dd August 12, 2020
.Dd February 8, 2021
.Dt CRYPTO_REQUEST 9
.Os
.Sh NAME
@@ -40,6 +40,10 @@
.In opencrypto/cryptodev.h
.Ft int
.Fn crypto_dispatch "struct cryptop *crp"
.Ft int
.Fn crypto_dispatch_async "struct cryptop *crp" "int flags"
.Ft void
.Fn crypto_dispatch_batch "struct cryptopq *crpq" "int flags"
.Ft void
.Fn crypto_destroyreq "struct cryptop *crp"
.Ft void
@@ -104,10 +108,15 @@ the caller should set fields in the structure to describe
request-specific parameters.
Unused fields should be left as-is.
.Pp
.Fn crypto_dispatch
passes a crypto request to the driver attached to the request's session.
If there are errors in the request's fields, this function may return
an error to the caller.
The
.Fn crypto_dispatch ,
.Fn crypto_dispatch_async ,
and
.Fn crypto_dispatch_batch
functions pass one or more crypto requests to the driver attached to the
request's session.
If there are errors in the request's fields, these functions may return an
error to the caller.
If errors are encountered while servicing the request, they will instead
be reported to the request's callback function
.Pq Fa crp_callback
@@ -341,64 +350,53 @@ store the partial IV in the data buffer and pass the full IV separately in
.Ss Request and Callback Scheduling
The crypto framework provides multiple methods of scheduling the dispatch
of requests to drivers along with the processing of driver callbacks.
Requests use flags in
.Fa crp_flags
to select the desired scheduling methods.
.Pp
.Fn crypto_dispatch
can pass the request to the session's driver via three different methods:
.Bl -enum
.It
The request is queued to a taskqueue backed by a pool of worker threads.
By default the pool is sized to provide one thread for each CPU.
Worker threads dequeue requests and pass them to the driver
asynchronously.
.It
The request is passed to the driver synchronously in the context of the
thread invoking
.Fn crypto_dispatch .
.It
The request is queued to a queue of pending requests.
A single worker thread dequeues requests and passes them to the driver
asynchronously.
.El
.Pp
To select the first method (taskqueue backed by multiple threads),
requests should set
.Dv CRYPTO_F_ASYNC .
To always use the third method (queue to single worker thread),
requests should set
.Dv CRYPTO_F_BATCH .
If both flags are set,
.Dv CRYPTO_F_ASYNC
takes precedence.
If neither flag is set,
.Fn crypto_dispatch
will first attempt the second method (invoke driver synchronously).
If the driver is blocked,
the request will be queued using the third method.
One caveat is that the first method is only used for requests using software
drivers which use host CPUs to process requests.
Requests whose session is associated with a hardware driver will ignore
.Dv CRYPTO_F_ASYNC
and only use
.Dv CRYPTO_F_BATCH
to determine how requests should be scheduled.
.Pp
The
.Fn crypto_dispatch ,
.Fn crypto_dispatch_async ,
and
.Fn crypto_dispatch_batch
functions can be used to request different dispatch scheduling policies.
.Pp
.Fn crypto_dispatch
synchronously passes the request to the driver.
The driver itself may process the request synchronously or asynchronously
depending on whether the driver is implemented by software or hardware.
.Pp
.Fn crypto_dispatch_async
dispatches the request asynchronously.
If the driver is inherently synchronous, the request is queued to a taskqueue
backed by a pool of worker threads.
This can increase throughput by allowing requests from a single producer to be
processed in parallel.
By default the pool is sized to provide one thread for each CPU.
Worker threads dequeue requests and pass them to the driver asynchronously.
.Fn crypto_dispatch_async
additionally takes a
.Fa flags
parameter.
The
.Dv CRYPTO_ASYNC_ORDERED
flag indicates that completion callbacks for requests must be called in the
same order as requests were dispatched.
If the driver is asynchronous, the behavior of
.Fn crypto_dispatch_async
is identical to that of
.Fn crypto_dispatch .
.Pp
.Fn crypto_dispatch_batch
allows the caller to collect a batch of requests and submit them to the driver
at the same time.
This allows hardware drivers to optimize the scheduling of request processing
and batch completion interrupts.
A batch is submitted to the driver by invoking the driver's process method on
each request, specifying
.Dv CRYPTO_HINT_MORE
with each request except for the last.
Drivers may use this to batch completion interrupts.
The
.Fa flags
parameter to
.Fn crypto_dispatch_batch
is currently ignored.
.Pp
Callback function scheduling is simpler than request scheduling.
Callbacks can either be invoked synchronously from

View File

@@ -449,11 +449,13 @@ void
g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
{
struct g_eli_softc *sc;
struct cryptopq crpq;
struct cryptop *crp;
u_int i, lsec, nsec, data_secsize, decr_secsize, encr_secsize;
off_t dstoff;
u_char *p, *data, *authkey, *plaindata;
int error;
bool batch;
G_ELI_LOGREQ(3, bp, "%s", __func__);
@@ -496,6 +498,9 @@ g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
p = (char *)roundup((uintptr_t)p, sizeof(uintptr_t));
#endif
TAILQ_INIT(&crpq);
batch = atomic_load_int(&g_eli_batch) != 0;
for (i = 1; i <= nsec; i++, dstoff += encr_secsize) {
crp = crypto_getreq(wr->w_sid, M_WAITOK);
authkey = (u_char *)p; p += G_ELI_AUTH_SECKEYLEN;
@@ -521,8 +526,6 @@ g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
crp->crp_opaque = (void *)bp;
data += encr_secsize;
crp->crp_flags = CRYPTO_F_CBIFSYNC;
if (g_eli_batch)
crp->crp_flags |= CRYPTO_F_BATCH;
if (bp->bio_cmd == BIO_WRITE) {
crp->crp_callback = g_eli_auth_write_done;
crp->crp_op = CRYPTO_OP_ENCRYPT |
@@ -549,8 +552,15 @@ g_eli_auth_run(struct g_eli_worker *wr, struct bio *bp)
g_eli_auth_keygen(sc, dstoff, authkey);
crp->crp_auth_key = authkey;
error = crypto_dispatch(crp);
KASSERT(error == 0, ("crypto_dispatch() failed (error=%d)",
error));
if (batch) {
TAILQ_INSERT_TAIL(&crpq, crp, crp_next);
} else {
error = crypto_dispatch(crp);
KASSERT(error == 0,
("crypto_dispatch() failed (error=%d)", error));
}
}
if (batch)
crypto_dispatch_batch(&crpq, 0);
}

View File

@@ -261,13 +261,14 @@ void
g_eli_crypto_run(struct g_eli_worker *wr, struct bio *bp)
{
struct g_eli_softc *sc;
struct cryptopq crpq;
struct cryptop *crp;
vm_page_t *pages;
u_int i, nsec, secsize;
off_t dstoff;
u_char *data = NULL;
int error;
int pages_offset;
int error, pages_offset;
bool batch;
G_ELI_LOGREQ(3, bp, "%s", __func__);
@@ -303,6 +304,9 @@ g_eli_crypto_run(struct g_eli_worker *wr, struct bio *bp)
}
}
TAILQ_INIT(&crpq);
batch = atomic_load_int(&g_eli_batch) != 0;
for (i = 0, dstoff = bp->bio_offset; i < nsec; i++, dstoff += secsize) {
crp = crypto_getreq(wr->w_sid, M_WAITOK);
@@ -325,9 +329,6 @@ g_eli_crypto_run(struct g_eli_worker *wr, struct bio *bp)
crp->crp_callback = g_eli_crypto_read_done;
}
crp->crp_flags = CRYPTO_F_CBIFSYNC;
if (g_eli_batch)
crp->crp_flags |= CRYPTO_F_BATCH;
crp->crp_payload_start = 0;
crp->crp_payload_length = secsize;
if ((sc->sc_flags & G_ELI_FLAG_SINGLE_KEY) == 0) {
@@ -340,8 +341,15 @@ g_eli_crypto_run(struct g_eli_worker *wr, struct bio *bp)
sizeof(crp->crp_iv));
}
error = crypto_dispatch(crp);
KASSERT(error == 0, ("crypto_dispatch() failed (error=%d)",
error));
if (batch) {
TAILQ_INSERT_TAIL(&crpq, crp, crp_next);
} else {
error = crypto_dispatch(crp);
KASSERT(error == 0,
("crypto_dispatch() failed (error=%d)", error));
}
}
if (batch)
crypto_dispatch_batch(&crpq, 0);
}

View File

@@ -122,7 +122,7 @@ aes_crypto_cb(struct cryptop *crp)
int error;
struct aes_state *as = (struct aes_state *) crp->crp_opaque;
if (crypto_ses2caps(crp->crp_session) & CRYPTOCAP_F_SYNC)
if (CRYPTO_SESS_SYNC(crp->crp_session))
return (0);
error = crp->crp_etype;
@@ -165,7 +165,7 @@ aes_encrypt_1(const struct krb5_key_state *ks, int buftype, void *buf,
error = crypto_dispatch(crp);
if ((crypto_ses2caps(as->as_session_aes) & CRYPTOCAP_F_SYNC) == 0) {
if (!CRYPTO_SESS_SYNC(as->as_session_aes)) {
mtx_lock(&as->as_lock);
if (!error && !(crp->crp_flags & CRYPTO_F_DONE))
error = msleep(crp, &as->as_lock, 0, "gssaes", 0);
@@ -335,7 +335,7 @@ aes_checksum(const struct krb5_key_state *ks, int usage,
error = crypto_dispatch(crp);
if ((crypto_ses2caps(as->as_session_sha1) & CRYPTOCAP_F_SYNC) == 0) {
if (!CRYPTO_SESS_SYNC(as->as_session_sha1)) {
mtx_lock(&as->as_lock);
if (!error && !(crp->crp_flags & CRYPTO_F_DONE))
error = msleep(crp, &as->as_lock, 0, "gssaes", 0);

View File

@@ -652,8 +652,6 @@ ah_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
/* Crypto operation descriptor. */
crp->crp_op = CRYPTO_OP_COMPUTE_DIGEST;
crp->crp_flags = CRYPTO_F_CBIFSYNC;
if (V_async_crypto)
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
crypto_use_mbuf(crp, m);
crp->crp_callback = ah_input_cb;
crp->crp_opaque = xd;
@@ -671,7 +669,10 @@ ah_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
xd->skip = skip;
xd->cryptoid = cryptoid;
xd->vnet = curvnet;
return (crypto_dispatch(crp));
if (V_async_crypto)
return (crypto_dispatch_async(crp, CRYPTO_ASYNC_ORDERED));
else
return (crypto_dispatch(crp));
bad:
m_freem(m);
key_freesav(&sav);
@@ -1036,8 +1037,6 @@ ah_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
/* Crypto operation descriptor. */
crp->crp_op = CRYPTO_OP_COMPUTE_DIGEST;
crp->crp_flags = CRYPTO_F_CBIFSYNC;
if (V_async_crypto)
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
crypto_use_mbuf(crp, m);
crp->crp_callback = ah_output_cb;
crp->crp_opaque = xd;
@@ -1055,7 +1054,10 @@ ah_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
xd->cryptoid = cryptoid;
xd->vnet = curvnet;
return crypto_dispatch(crp);
if (V_async_crypto)
return (crypto_dispatch_async(crp, CRYPTO_ASYNC_ORDERED));
else
return (crypto_dispatch(crp));
bad:
if (m)
m_freem(m);

View File

@@ -406,8 +406,6 @@ esp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
/* Crypto operation descriptor */
crp->crp_flags = CRYPTO_F_CBIFSYNC;
if (V_async_crypto)
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
crypto_use_mbuf(crp, m);
crp->crp_callback = esp_input_cb;
crp->crp_opaque = xd;
@@ -460,7 +458,10 @@ esp_input(struct mbuf *m, struct secasvar *sav, int skip, int protoff)
} else if (sav->ivlen != 0)
crp->crp_iv_start = skip + hlen - sav->ivlen;
return (crypto_dispatch(crp));
if (V_async_crypto)
return (crypto_dispatch_async(crp, CRYPTO_ASYNC_ORDERED));
else
return (crypto_dispatch(crp));
crp_aad_fail:
free(xd, M_XDATA);
@@ -895,8 +896,6 @@ esp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
/* Crypto operation descriptor. */
crp->crp_flags |= CRYPTO_F_CBIFSYNC;
if (V_async_crypto)
crp->crp_flags |= CRYPTO_F_ASYNC | CRYPTO_F_ASYNC_KEEPORDER;
crypto_use_mbuf(crp, m);
crp->crp_callback = esp_output_cb;
crp->crp_opaque = xd;
@@ -944,7 +943,10 @@ esp_output(struct mbuf *m, struct secpolicy *sp, struct secasvar *sav,
crp->crp_digest_start = m->m_pkthdr.len - alen;
}
return crypto_dispatch(crp);
if (V_async_crypto)
return (crypto_dispatch_async(crp, CRYPTO_ASYNC_ORDERED));
else
return (crypto_dispatch(crp));
crp_aad_fail:
free(xd, M_XDATA);

View File

@@ -188,8 +188,6 @@ static struct crypto_ret_worker *crypto_ret_workers = NULL;
#define CRYPTO_RETW_LOCK(w) mtx_lock(&w->crypto_ret_mtx)
#define CRYPTO_RETW_UNLOCK(w) mtx_unlock(&w->crypto_ret_mtx)
#define CRYPTO_RETW_EMPTY(w) \
(TAILQ_EMPTY(&w->crp_ret_q) && TAILQ_EMPTY(&w->crp_ret_kq) && TAILQ_EMPTY(&w->crp_ordered_ret_q))
static int crypto_workers_num = 0;
SYSCTL_INT(_kern_crypto, OID_AUTO, num_workers, CTLFLAG_RDTUN,
@@ -1406,11 +1404,8 @@ crp_sanity(struct cryptop *crp)
}
#endif
/*
* Add a crypto request to a queue, to be processed by the kernel thread.
*/
int
crypto_dispatch(struct cryptop *crp)
static int
crypto_dispatch_one(struct cryptop *crp, int hint)
{
struct cryptocap *cap;
int result;
@@ -1418,49 +1413,82 @@ crypto_dispatch(struct cryptop *crp)
#ifdef INVARIANTS
crp_sanity(crp);
#endif
	CRYPTOSTAT_INC(cs_ops);

	/*
	 * Caller marked the request to be processed immediately; dispatch it
	 * directly to the driver unless the driver is currently blocked, in
	 * which case it is queued for deferred dispatch.
	 */
	cap = crp->crp_session->cap;
	if (!atomic_load_int(&cap->cc_qblocked)) {
		result = crypto_invoke(cap, crp, hint);
		if (result != ERESTART)
			return (result);

		/*
		 * The driver ran out of resources, put the request on the
		 * queue.
		 */
	}
	crypto_batch_enqueue(crp);
	return (0);
}
int
crypto_dispatch(struct cryptop *crp)
{
return (crypto_dispatch_one(crp, 0));
}
int
crypto_dispatch_async(struct cryptop *crp, int flags)
{
struct crypto_ret_worker *ret_worker;
if (!CRYPTO_SESS_SYNC(crp->crp_session)) {
/*
* The driver issues completions asynchronously, don't bother
* deferring dispatch to a worker thread.
*/
return (crypto_dispatch(crp));
}
#ifdef INVARIANTS
crp_sanity(crp);
#endif
CRYPTOSTAT_INC(cs_ops);
crp->crp_retw_id = crp->crp_session->id % crypto_workers_num;
if ((flags & CRYPTO_ASYNC_ORDERED) != 0) {
crp->crp_flags |= CRYPTO_F_ASYNC_ORDERED;
ret_worker = CRYPTO_RETW(crp->crp_retw_id);
CRYPTO_RETW_LOCK(ret_worker);
crp->crp_seq = ret_worker->reorder_ops++;
CRYPTO_RETW_UNLOCK(ret_worker);
}
TASK_INIT(&crp->crp_task, 0, crypto_task_invoke, crp);
taskqueue_enqueue(crypto_tq, &crp->crp_task);
return (0);
}
void
crypto_dispatch_batch(struct cryptopq *crpq, int flags)
{
struct cryptop *crp;
int hint;
while ((crp = TAILQ_FIRST(crpq)) != NULL) {
hint = TAILQ_NEXT(crp, crp_next) != NULL ? CRYPTO_HINT_MORE : 0;
TAILQ_REMOVE(crpq, crp, crp_next);
if (crypto_dispatch_one(crp, hint) != 0)
crypto_batch_enqueue(crp);
}
}
static void
crypto_batch_enqueue(struct cryptop *crp)
{
@@ -1814,10 +1842,10 @@ crypto_done(struct cryptop *crp)
* doing extraneous context switches; the latter is mostly
* used with the software crypto driver.
*/
if (!CRYPTOP_ASYNC_KEEPORDER(crp) &&
((crp->crp_flags & CRYPTO_F_CBIMM) ||
((crp->crp_flags & CRYPTO_F_CBIFSYNC) &&
(crypto_ses2caps(crp->crp_session) & CRYPTOCAP_F_SYNC)))) {
if ((crp->crp_flags & CRYPTO_F_ASYNC_ORDERED) == 0 &&
((crp->crp_flags & CRYPTO_F_CBIMM) != 0 ||
((crp->crp_flags & CRYPTO_F_CBIFSYNC) != 0 &&
CRYPTO_SESS_SYNC(crp->crp_session)))) {
/*
* Do the callback directly. This is ok when the
* callback routine does very little (e.g. the
@@ -1829,36 +1857,35 @@ crypto_done(struct cryptop *crp)
bool wake;
ret_worker = CRYPTO_RETW(crp->crp_retw_id);
wake = false;
/*
* Normal case; queue the callback for the thread.
*/
CRYPTO_RETW_LOCK(ret_worker);
		if ((crp->crp_flags & CRYPTO_F_ASYNC_ORDERED) != 0) {
			struct cryptop *tmp;

			TAILQ_FOREACH_REVERSE(tmp,
			    &ret_worker->crp_ordered_ret_q, cryptop_q,
			    crp_next) {
				if (CRYPTO_SEQ_GT(crp->crp_seq, tmp->crp_seq)) {
					TAILQ_INSERT_AFTER(
					    &ret_worker->crp_ordered_ret_q, tmp,
					    crp, crp_next);
					break;
				}
			}
			if (tmp == NULL) {
				TAILQ_INSERT_HEAD(
				    &ret_worker->crp_ordered_ret_q, crp,
				    crp_next);
			}
			wake = crp->crp_seq == ret_worker->reorder_cur_seq;
		} else {
			wake = TAILQ_EMPTY(&ret_worker->crp_ret_q);
			TAILQ_INSERT_TAIL(&ret_worker->crp_ret_q, crp,
			    crp_next);
		}
if (wake)
@@ -1894,7 +1921,7 @@ crypto_kdone(struct cryptkop *krp)
ret_worker = CRYPTO_RETW(0);
CRYPTO_RETW_LOCK(ret_worker);
if (CRYPTO_RETW_EMPTY(ret_worker))
if (TAILQ_EMPTY(&ret_worker->crp_ret_kq))
wakeup_one(&ret_worker->crp_ret_q); /* shared wait channel */
TAILQ_INSERT_TAIL(&ret_worker->crp_ret_kq, krp, krp_next);
CRYPTO_RETW_UNLOCK(ret_worker);
@@ -1991,13 +2018,10 @@ crypto_proc(void)
*/
				if (submit->crp_session->cap == cap)
					hint = CRYPTO_HINT_MORE;
				break;
			} else {
				submit = crp;
				break;
			}
		}
	}
if (submit != NULL) {

View File

@@ -455,18 +455,10 @@ struct cryptop {
*/
int crp_flags;
#define CRYPTO_F_BATCH 0x0008 /* Batch op if possible */
#define CRYPTO_F_CBIMM 0x0010 /* Do callback immediately */
#define CRYPTO_F_DONE 0x0020 /* Operation completed */
#define CRYPTO_F_CBIFSYNC 0x0040 /* Do CBIMM if op is synchronous */
#define CRYPTO_F_ASYNC 0x0080 /* Dispatch crypto jobs on several threads
* if op is synchronous
*/
#define CRYPTO_F_ASYNC_KEEPORDER 0x0100 /*
* Dispatch the crypto jobs in the same
* order there are submitted. Applied only
* if CRYPTO_F_ASYNC flags is set
*/
#define CRYPTO_F_ASYNC_ORDERED 0x0100 /* Completions must happen in order */
#define CRYPTO_F_IV_SEPARATE 0x0200 /* Use crp_iv[] as IV. */
int crp_op;
@@ -506,6 +498,8 @@ struct cryptop {
*/
};
TAILQ_HEAD(cryptopq, cryptop);
static __inline void
_crypto_use_buf(struct crypto_buffer *cb, void *buf, int len)
{
@@ -587,12 +581,6 @@ crypto_use_output_uio(struct cryptop *crp, struct uio *uio)
_crypto_use_uio(&crp->crp_obuf, uio);
}
#define CRYPTOP_ASYNC(crp) \
(((crp)->crp_flags & CRYPTO_F_ASYNC) && \
crypto_ses2caps((crp)->crp_session) & CRYPTOCAP_F_SYNC)
#define CRYPTOP_ASYNC_KEEPORDER(crp) \
(CRYPTOP_ASYNC(crp) && \
(crp)->crp_flags & CRYPTO_F_ASYNC_KEEPORDER)
#define CRYPTO_HAS_OUTPUT_BUFFER(crp) \
((crp)->crp_obuf.cb_type != CRYPTO_BUF_NONE)
@@ -642,6 +630,8 @@ extern void crypto_freesession(crypto_session_t cses);
#define CRYPTOCAP_F_SOFTWARE CRYPTO_FLAG_SOFTWARE
#define CRYPTOCAP_F_SYNC 0x04000000 /* operates synchronously */
#define CRYPTOCAP_F_ACCEL_SOFTWARE 0x08000000
#define CRYPTO_SESS_SYNC(sess) \
((crypto_ses2caps(sess) & CRYPTOCAP_F_SYNC) != 0)
extern int32_t crypto_get_driverid(device_t dev, size_t session_size,
int flags);
extern int crypto_find_driver(const char *);
@@ -650,6 +640,9 @@ extern int crypto_getcaps(int hid);
extern int crypto_kregister(uint32_t, int, uint32_t);
extern int crypto_unregister_all(uint32_t driverid);
extern int crypto_dispatch(struct cryptop *crp);
#define CRYPTO_ASYNC_ORDERED 0x1 /* complete in order dispatched */
extern int crypto_dispatch_async(struct cryptop *crp, int flags);
extern void crypto_dispatch_batch(struct cryptopq *crpq, int flags);
extern int crypto_kdispatch(struct cryptkop *);
#define CRYPTO_SYMQ 0x1
#define CRYPTO_ASYMQ 0x2

View File

@@ -60,7 +60,7 @@
* in the range 5 to 9.
*/
#undef __FreeBSD_version
#define __FreeBSD_version 1400003 /* Master, propagated to newvers */
#define __FreeBSD_version 1400004 /* Master, propagated to newvers */
/*
* __FreeBSD_kernel__ indicates that this system uses the kernel of FreeBSD,