AES-GCM support is added using the AEAD type of crypto
operation. Support for AES-CTR is also added.
test/crypto and the documentation are also updated to list
the algorithms supported by dpaa2_sec.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
rte_malloc uses a common memory area for all cores.
The rte_malloc calls are now replaced by a per-device mempool
to allocate space for the FLEs. This removes contention and
improves performance.
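A minimal sketch of the per-device pool idea (pool name, element
count, element size and cache depth below are illustrative
assumptions, not the values used by dpaa2_sec):

    #include <stdio.h>
    #include <rte_mempool.h>
    #include <rte_lcore.h>

    static struct rte_mempool *fle_pool; /* one pool per device */

    static int
    create_fle_pool(int dev_id)
    {
        char name[RTE_MEMPOOL_NAMESIZE];

        snprintf(name, sizeof(name), "fle_pool_%d", dev_id);
        /* Per-lcore cache (4th argument) removes the cross-core
         * contention a shared rte_malloc() heap suffers from. */
        fle_pool = rte_mempool_create(name, 2048, 256, 128, 0,
                                      NULL, NULL, NULL, NULL,
                                      rte_socket_id(), 0);
        return fle_pool == NULL ? -1 : 0;
    }

    /* Datapath: rte_mempool_get()/rte_mempool_put() replace
     * rte_malloc()/rte_free() for each FLE. */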
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Now that AAD is only used in AEAD algorithms,
there is no need to keep AAD in the authentication
structure.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Now that all the structures/functions for AEAD algorithms
are in place, migrate the two supported algorithms
AES-GCM and AES-CCM to these, instead of using
cipher and authentication parameters.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Since there is a new operation type (AEAD), add parameters
for this in the application.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Since there is a new operation type (AEAD), add parameters
for this in the application.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Since there is a new operation type (AEAD), add parameters
for this in the application.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Some extra functions have been created to avoid
too many nested conditionals.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
AEAD operation parameters can be set in the new
aead structure, in the crypto operation.
This structure is within a union with the cipher
and authentication parameters, since operations can be:
- AEAD: using the aead structure
- Cipher only: using only the cipher structure
- Auth only: using only the authentication structure
- Cipher-then-auth/Auth-then-cipher: using both cipher
and authentication structures
Therefore, the aead structure cannot be used at the same time
as the cipher and authentication structures.
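As an illustration, an application might fill the new aead branch
of the union roughly as below (the flat mbuf layout, digest
placement and AAD buffer are assumptions for the sketch, not
requirements of the API):

    #include <rte_crypto.h>
    #include <rte_mbuf.h>
    #include <rte_memory.h>

    static void
    fill_aead_op(struct rte_crypto_op *op, struct rte_mbuf *m,
                 uint32_t plaintext_len, uint8_t *aad, phys_addr_t aad_pa)
    {
        op->sym->m_src = m;
        /* Only the aead member of the union is used for AEAD ops. */
        op->sym->aead.data.offset = 0;
        op->sym->aead.data.length = plaintext_len;
        /* Digest placed right after the plaintext in this example. */
        op->sym->aead.digest.data =
            rte_pktmbuf_mtod_offset(m, uint8_t *, plaintext_len);
        op->sym->aead.digest.phys_addr =
            rte_pktmbuf_mtophys_offset(m, plaintext_len);
        op->sym->aead.aad.data = aad;
        op->sym->aead.aad.phys_addr = aad_pa;
    }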
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
AEAD algorithms such as AES-GCM previously had to be
expressed as a concatenation of a cipher transform and
an authentication transform.
Instead, a new transform and functions to handle it
are created to support this kind of algorithm,
making their use easier.
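For example, an AES-GCM session can now be described with a single
transform, roughly as below (key material, sizes and the IV offset
are illustrative):

    #include <rte_crypto.h>

    /* IV expected in the private area right after the crypto operation
     * (the layout used by the test applications, shown as an example). */
    #define IV_OFFSET (sizeof(struct rte_crypto_op) + \
                       sizeof(struct rte_crypto_sym_op))

    static uint8_t aes_key[16]; /* placeholder key material */

    static struct rte_crypto_sym_xform aead_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_AEAD,
        .aead = {
            .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
            .algo = RTE_CRYPTO_AEAD_AES_GCM,
            .key = { .data = aes_key, .length = sizeof(aes_key) },
            .iv = { .offset = IV_OFFSET, .length = 12 },
            .digest_length = 16,
            .aad_length = 16,
        },
    };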
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
AES-GMAC is an authentication algorithm, based on AES-GCM
without encryption. To simplify its usage, it can now be used
by setting only the authentication parameters, without requiring
a concatenated cipher transform.
Therefore, it is no longer required to set AAD, but only the
authentication data length and offset, giving the user the option
to have a scatter-gather list in the input buffer,
as long as the driver supports it.
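A rough sketch of an AES-GMAC transform under the new scheme (key
material, sizes and the IV offset are illustrative):

    #include <rte_crypto.h>

    #define IV_OFFSET (sizeof(struct rte_crypto_op) + \
                       sizeof(struct rte_crypto_sym_op))

    static uint8_t gmac_key[16]; /* placeholder key material */

    static struct rte_crypto_sym_xform gmac_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_AUTH,
        .auth = {
            .op = RTE_CRYPTO_AUTH_OP_GENERATE,
            .algo = RTE_CRYPTO_AUTH_AES_GMAC,
            .key = { .data = gmac_key, .length = sizeof(gmac_key) },
            .iv = { .offset = IV_OFFSET, .length = 12 },
            .digest_length = 16,
        },
    };

    /* Per operation, only the data offset/length are set (no AAD), e.g.:
     *   op->sym->auth.data.offset = 0;
     *   op->sym->auth.data.length = data_len;
     * so the input may be a chained (scatter-gather) mbuf if the PMD
     * supports it. */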
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Digest length was duplicated in the authentication transform
and the crypto operation structures.
Since the digest length is not expected to change within a
session, it is removed from the crypto operation.
Also, the length has been shrunk to 16 bits,
which should be sufficient for any digest.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Additional authenticated data (AAD) information was duplicated
in the authentication transform and in the crypto
operation structures.
Since the AAD length is not meant to change within a session,
it is removed from the crypto operation structure.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
For wireless algorithms (SNOW3G, KASUMI, ZUC),
the IV for the authentication algorithms (F9, UIA2 and EIA3)
was taken from the AAD parameter, as there was no IV parameter
in the authentication structure.
Now that an IV is available for all algorithms, there is no need
to keep doing this, so AAD is no longer used for these
algorithms.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Authentication algorithms, such as AES-GMAC or the wireless
algorithms (like SNOW3G), use an IV, just like cipher algorithms.
So far, AES-GMAC has used the IV from the cipher structure,
and the wireless algorithms have used the AAD field,
which is not technically correct.
Therefore, authentication IV parameters have been added,
so the API is more correct. Like the cipher IV, the auth IV is
expected to be copied after the crypto operation.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Since IV parameters (offset and length) should not
change for operations in the same session, these parameters
are moved to the crypto transform structure, so they will
be stored in the sessions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Since the IV is now copied after the crypto operation, into
its private area, the IV can be passed with only an offset
and length.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Usually, the IV will change for each crypto operation.
Therefore, instead of pointing at the same location,
the IV is copied after each crypto operation.
This allows the IV to be passed as an offset from
the beginning of the crypto operation, instead of
as a pointer.
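A small sketch of the new pattern (the private-area offset below is
the layout the test applications use, shown only as an example):

    #include <rte_crypto.h>
    #include <rte_memcpy.h>

    #define IV_OFFSET (sizeof(struct rte_crypto_op) + \
                       sizeof(struct rte_crypto_sym_op))

    static void
    set_op_iv(struct rte_crypto_op *op, const uint8_t *iv, uint16_t iv_len)
    {
        /* Copy a fresh IV into the private area that follows the op;
         * the session only records the offset and length. */
        uint8_t *iv_ptr = (uint8_t *)op + IV_OFFSET;

        rte_memcpy(iv_ptr, iv, iv_len);
    }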
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Usually, the IV will change for each crypto operation.
Therefore, instead of pointing at the same location,
the IV is copied after each crypto operation.
This allows the IV to be passed as an offset from
the beginning of the crypto operation, instead of
as a pointer.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Usually, the IV will change for each crypto operation.
Therefore, instead of pointing at the same location,
the IV is copied after each crypto operation.
This allows the IV to be passed as an offset from
the beginning of the crypto operation, instead of
as a pointer.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Usually, the IV will change for each crypto operation.
Therefore, instead of pointing at the same location,
the IV is copied after each crypto operation.
This allows the IV to be passed as an offset from
the beginning of the crypto operation, instead of
as a pointer.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Usually, the IV will change for each crypto operation.
Therefore, instead of pointing at the same location,
the IV is copied after each crypto operation.
This allows the IV to be passed as an offset from
the beginning of the crypto operation, instead of
as a pointer.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
In order to facilitate access to the private data placed
after the crypto operation, two new macros have been
implemented:
- rte_crypto_op_ctod_offset(c,t,o), which returns a pointer
to "o" bytes after the start of the crypto operation
(rte_crypto_op)
- rte_crypto_op_ctophys_offset(c, o), which returns
the physical address of the data "o" bytes after the
start of the crypto operation (rte_crypto_op)
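For instance, a per-operation IV placed after the op can be reached
like this (the offset value is an example, not mandated by the API):

    #include <rte_crypto.h>
    #include <rte_memory.h>

    #define IV_OFFSET (sizeof(struct rte_crypto_op) + \
                       sizeof(struct rte_crypto_sym_op))

    static void
    get_iv_addrs(struct rte_crypto_op *op, uint8_t **va, phys_addr_t *pa)
    {
        /* Virtual and physical addresses of the data located IV_OFFSET
         * bytes after the start of the crypto operation. */
        *va = rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET);
        *pa = rte_crypto_op_ctophys_offset(op, IV_OFFSET);
    }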
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
rte_crypto_op and rte_crypto_sym_op structures
were marked as cache aligned.
However, since these structures are always initialized
in a mempool, this alignment is unnecessary, as the mempool
already enforces the alignment of its objects.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Instead of storing a pointer to operation-specific parameters,
such as the symmetric crypto parameters, use a zero-length array
to mark that these parameters are stored right after the
generic crypto operation structure, which was already assumed
in the code. This reduces the memory footprint of the crypto operation.
Besides, rte_crypto_op and rte_crypto_sym_op (the only
operation-specific parameter structure right now) are always
expected to be together, as they are initialized as a single
object in the crypto operation pool.
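The idiom, in simplified form (not the exact rte_crypto_op
definition), looks like this:

    #include <stdint.h>

    struct sym_params {
        uint32_t data_offset;
        uint32_t data_length;
    };

    struct generic_op {
        uint8_t type;
        uint8_t status;
        /* ... other generic fields ... */
        __extension__ union {
            /* Type-specific parameters start here, in the same mempool
             * object: no pointer stored and no extra space reserved. */
            struct sym_params sym[0];
        };
    };

    /* The specific parameters are reached as &op->sym[0], directly
     * after the generic fields. */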
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Storing a pointer to the user data is unnecessary,
since the user can store additional data after the crypto operation.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Instead of storing some crypto operation flags,
such as operation status, as enumerations,
store them as uint8_t, for memory efficiency.
Also, reserve 5 extra bytes in the crypto operation
for future additions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Session type (operation with or without session) is not
something specific to symmetric operations.
Therefore, the variable is moved to the generic crypto operation
structure.
Since this is an ABI change, the cryptodev library version
gets bumped.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Use rte_mempool_put_bulk for both latency and throughput tests instead
of rte_crypto_op_free to improve application performance.
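In essence (assuming, as the tests do, that all operations in a
burst come from the same pool):

    #include <rte_crypto.h>
    #include <rte_mempool.h>

    static void
    free_ops_bulk(struct rte_crypto_op **ops, uint16_t nb_ops)
    {
        /* One bulk put instead of nb_ops calls to rte_crypto_op_free(). */
        if (nb_ops > 0)
            rte_mempool_put_bulk(ops[0]->mempool,
                                 (void * const *)ops, nb_ops);
    }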
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Fix a comment in QAT referring to the IV size
for AES-GCM, which should be in bytes, not bits.
Fixes: 53d8971cbe81 ("qat: fix AES-GCM decryption")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
syscall always returns '-1' on failure and there is no point
in printing that value. 'errno' is much more informative.
Fixes: 586e39001317 ("vhost: export numa node")
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Since "rte_eal_dev_init()" has been removed, the comment referring
to it should be updated accordingly.
Fixes: 9721b4d543a3 ("eal: remove unused device init function")
Signed-off-by: Yong Wang <wang.yong19@zte.com.cn>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The rte_eth_dev.data pointer is set to a reference to a static table.
Attempting to rte_free() it leads to a panic. For example, the
following commands result in a panic if run in testpmd:
testpmd> port attach virtio_user0,path=/dev/vhost-net,iface=test0
testpmd> port stop 2
testpmd> port close 2
testpmd> port detach 2
Fixes: ce2eabdd43ec ("net/virtio-user: add virtual device")
Cc: stable@dpdk.org
Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Check the return value of pthread_mutex_init(). Also destroy
the mutex in case of other errors before returning.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Make sure we catch and log failed calls to pthread
functions.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add a check for strdup() return value and fail gracefully if we
get a bad return code.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The MTU feature support check has to be done against the MTU
feature bit mask, and not the bit position.
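To illustrate the bug class (the feature bit value is the standard
VIRTIO_NET_F_MTU position):

    #include <stdbool.h>
    #include <stdint.h>

    #define VIRTIO_NET_F_MTU 3  /* a bit position, not a mask */

    static bool
    mtu_feature_enabled(uint64_t features)
    {
        /* Wrong: (features & VIRTIO_NET_F_MTU) tests bits 0 and 1.
         * Right: build the mask from the bit position first. */
        return (features & (1ULL << VIRTIO_NET_F_MTU)) != 0;
    }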
Fixes: 72e8543093df ("vhost: add API to get MTU value")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
To compare the enabled features of the current device, we must
use a bit mask instead of a bit position.
Fixes: c843af3aa13e ("vhost: access header only if offloading is supported")
Cc: stable@dpdk.org
Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add more explanations about vring size changes and different
virtio_header size.
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
There is no way to bypass IP checksum verification in the Linux
kernel, no matter whether skb->ip_summed is assigned
CHECKSUM_UNNECESSARY or CHECKSUM_PARTIAL.
So any packet with a bad IP checksum will be dropped at the VM's
IP layer.
To correct this, we check the PKT_TX_IP_CKSUM flag and calculate
the IP checksum when it is set.
Fixes: 859b480d5afd ("vhost: add guest offload setting")
Cc: stable@dpdk.org
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
As the PKT_TX_TCP_SEG flag in mbuf->ol_flags implies PKT_TX_TCP_CKSUM,
applications, e.g. testpmd, don't set PKT_TX_TCP_CKSUM when TSO
is enabled.
This leads to packets being dropped in the VM's TCP stack because
of a bad TCP checksum.
To fix this, we make sure the TCP NEEDS_CSUM information is set in
the virtio net header when PKT_TX_TCP_SEG is set, so that the VM's
TCP stack will not check the TCP checksum.
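Roughly, the idea is as follows (a sketch of the approach, not the
exact patch):

    #include <stddef.h>
    #include <linux/virtio_net.h>
    #include <rte_mbuf.h>
    #include <rte_tcp.h>

    static void
    set_tso_csum(struct rte_mbuf *m, struct virtio_net_hdr *net_hdr)
    {
        if (m->ol_flags & PKT_TX_TCP_SEG) {
            /* Tell the guest the TCP checksum still needs completing,
             * so its stack does not verify (and reject) it. */
            net_hdr->flags |= VIRTIO_NET_HDR_F_NEEDS_CSUM;
            net_hdr->csum_start = m->l2_len + m->l3_len;
            net_hdr->csum_offset = offsetof(struct tcp_hdr, cksum);
        }
    }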
Fixes: 859b480d5afd ("vhost: add guest offload setting")
Cc: stable@dpdk.org
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
vsocket->conn_mutex was allocated with pthread_mutex_init() but never
freed with pthread_mutex_destroy(). This is a potential memory leak,
depending on how pthread_mutex_t is implemented.
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The vectorized burst routines require the x86_64 architecture.
If all the conditions are met, the vectorized burst functions
are enabled automatically. The decision is made individually for
RX and TX. There is no PMD option to make a selection.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
The callbacks are global to a device, but the selection is made at
every queue configuration, which is redundant.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
When searching for an LKEY, if the search key is a mempool pointer,
the second cache line has to be accessed, and it even requires
checking whether a buffer is indirect on every search. Using the
buffer address as the search key instead reduces the cycles taken.
Caching the last hit entry is beneficial as well.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>