When the any-layout feature is used, the header is stored in the headroom
of the mbuf. The mbuf is allocated and filled by the user, which means
there is no guarantee the header is all zero for the non-TSO case.
Therefore, we have to do the reset ourselves:
	memset(hdr, 0, head_size);
The memset has two impacts on performance:
- memset cannot be inlined, which is a bit costly.
- more importantly, it touches the mbuf, which could introduce severe
  cache issues as described in the former patch.
Similarly, we could apply the same trick: reset a field only when
necessary, i.e. when it is not already 0, which is likely the case for a
simple L2 forward scenario. It could boost performance by up to 20+% in
micro benchmarking; see the sketch below.
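A sketch of the conditional reset (the field names follow struct
virtio_net_hdr; the exact form in the patch may differ):
	if (hdr->csum_start != 0)
		hdr->csum_start = 0;
	if (hdr->csum_offset != 0)
		hdr->csum_offset = 0;
	if (hdr->flags != 0)
		hdr->flags = 0;
	/* ... and likewise for gso_type, gso_size and hdr_len */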
Cc: stable@dpdk.org
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
TSO is now enabled, but it is not actually used by default in a simple
L2 forward mode. In that case, we have to zero the virtio net headers,
to inform the vhost backend that no offload is being used:
	hdr->csum_start = 0;
	hdr->csum_offset = 0;
	hdr->flags = 0;
	hdr->gso_type = 0;
	hdr->gso_size = 0;
	hdr->hdr_len = 0;
Such writes can be very costly; they introduce severe cache issues:
the above operations perform a cache write for each packet, which
stalls the read operation from the vhost backend.
The fact that the virtio net header is initialized to zero at PMD driver
init stage means that these costly writes are unnecessary and can be
avoided:
	if (hdr->csum_start != 0)
		hdr->csum_start = 0;
And that is what the macro ASSIGN_UNLESS_EQUAL does (sketched below).
With this, the performance drop introduced by enabling TSO is recovered:
it can be up to 20% in micro benchmarking.
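A sketch of the macro and its use, following directly from the
description above (the exact definition in the driver may differ):
	#define ASSIGN_UNLESS_EQUAL(var, val) do {	\
		if ((var) != (val))			\
			(var) = (val);			\
	} while (0)

	ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
	ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
	ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
	ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
	ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
	ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);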
Fixes: 58169a9c81 ("net/virtio: support Tx checksum offload")
Fixes: 696573046e ("net/virtio: support TSO")
Cc: stable@dpdk.org
Cc: Olivier Matz <olivier.matz@6wind.com>
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Commit aed0b12930 ("net/vhost: fix socket file deleted on stop")
moved rte_vhost_driver_register and rte_vhost_driver_unregister from
dev_start() and dev_stop() into the driver's probe() and remove().
Apps like testpmd that use the vhost PMD in server mode usually call
dev_stop() and dev_close() when quitting, instead of the driver-specific
remove(). As a result, the unix socket files never get removed.
Semantically, device-specific things should be put into device-specific
APIs. Fix this issue by moving rte_vhost_driver_unregister, plus the
freeing of other structures, into dev_close(), as sketched below.
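A minimal sketch of the move, assuming the vhost PMD's internal
pmd_internal structure with an iface_name field, and an existing
eth_dev_stop() helper (names are illustrative):
	static void
	eth_dev_close(struct rte_eth_dev *dev)
	{
		struct pmd_internal *internal = dev->data->dev_private;

		eth_dev_stop(dev);	/* stop first, as before */
		rte_vhost_driver_unregister(internal->iface_name);
		/* free the remaining per-device structures here as well */
	}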
Fixes: aed0b12930 ("net/vhost: fix socket file deleted on stop")
Cc: stable@dpdk.org
Reported-by: Lei Yao <lei.a.yao@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The value returned from malloc is not checked for errors before being
used. This patch fixes the following Coverity issue:
static struct vhost_memory_kernel *
prepare_vhost_memory_kernel(void)
{
	...
	vm = malloc(sizeof(struct vhost_memory_kernel) +
		    max_regions *
		    sizeof(struct vhost_memory_region));
	...
>>> CID 140744: (NULL_RETURNS)
>>> Dereferencing a null pointer "vm".
	mr = &vm->regions[k++];
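The straightforward fix is to check the allocation before it is used,
as sketched here:
	if (vm == NULL)
		return NULL;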
Coverity issue: 140744
Fixes: e3b434818b ("net/virtio-user: support kernel vhost")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The vtpci_ops assignment needs 'hw->port_id' as an input parameter. That
means we should set 'hw->port_id' first and only then do the vtpci_ops
assignment, but the code does the reverse. That results in a crash when
more than one virtio device is used, because we keep assigning the
proper vtpci_ops to virtio_hw_internal[0]->vtpci_ops, leaving the
pointer for the other ports NULL.
Reversing the order fixes this issue; a sketch of the corrected ordering follows.
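(The helper and variable names below, such as VTPCI_OPS() and
modern_ops, are assumptions for illustration; the point is that the
per-port ops table is indexed by hw->port_id.)
	hw->port_id = eth_dev->data->port_id;	/* set the index first ... */
	VTPCI_OPS(hw) = &modern_ops;		/* ... then assign the ops */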
Fixes: 9470427c88 ("net/virtio: do not store PCI device pointer at shared memory")
Cc: stable@dpdk.org
Reported-by: Lei Yao <lei.a.yao@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Adds a Makefile for the scheduler cryptodev PMD and updates existing
Makefiles. Unlike other cryptodev PMDs, the scheduler PMD is required
to be built as a shared library.
Adds scheduler PMD enable and debug flags to config/common_base.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Implements all standard operations required for a cryptodev,
and registers them in the cryptodev operation function pointer table.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds the crypto scheduler PMD's probe and remove functions and the
device's enqueue and dequeue burst functions. The cryptodev scheduler
PMD is then registered.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Implements the round-robin scheduling mode and registers it in the
cryptodev scheduler ops structure. This mode enqueues a burst of
operations to one of its slaves and sends the next burst to the next
slave; the same procedure is applied when dequeuing operations. A
minimal sketch follows.
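In this sketch, the internal types and fields (struct scheduler_slave,
struct rr_queue_pair, last_enq_slave_idx) are assumptions for
illustration; rte_cryptodev_enqueue_burst() is the standard cryptodev API:
	#include <rte_cryptodev.h>

	struct scheduler_slave {
		uint8_t dev_id;
		uint16_t qp_id;
	};

	struct rr_queue_pair {
		struct scheduler_slave slaves[8];	/* attached slaves */
		unsigned int nb_slaves;
		unsigned int last_enq_slave_idx;
	};

	static uint16_t
	rr_enqueue_burst(struct rr_queue_pair *qp, struct rte_crypto_op **ops,
			 uint16_t nb_ops)
	{
		struct scheduler_slave *slave =
			&qp->slaves[qp->last_enq_slave_idx];

		/* the whole burst is enqueued to the current slave ... */
		uint16_t processed = rte_cryptodev_enqueue_burst(slave->dev_id,
				slave->qp_id, ops, nb_ops);

		/* ... and the next burst will go to the next slave */
		qp->last_enq_slave_idx =
			(qp->last_enq_slave_idx + 1) % qp->nb_slaves;

		return processed;
	}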
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds the implementations of the APIs for the scheduler cryptodev PMD.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds a number of internal structures for the cryptodev scheduler PMD. The
structures include the scheduler context, slave, queue pair context,
and session.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds APIs and function prototypes for the scheduler PMD to perform extra
operations beyond the standard cryptodev APIs.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This makes struct rte_cryptodev independent of struct rte_pci_device by
replacing it with a pointer to the generic struct rte_device.
This is in line with the recent changes in ethdev.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: John Griffin <john.griffin@intel.com>
Reviewed-by: Shreyansh Jain <shreyansh.jain@nxp.com>
IFF_MULTI_QUEUE does not exist in older kernels:
drivers/net/tap/rte_eth_tap.c:143:19: error: ‘IFF_MULTI_QUEUE’ undeclared
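A common way to cope with this is a compile-time guard; the macro name
and queue counts below are illustrative assumptions, not necessarily the
exact fix that was applied:
	#include <linux/if_tun.h>

	#ifdef IFF_MULTI_QUEUE
	#define RTE_PMD_TAP_MAX_QUEUES	16	/* kernel supports multi-queue tap */
	#else
	#define RTE_PMD_TAP_MAX_QUEUES	1	/* fall back to a single queue */
	#endif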
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add two new feature flags:
* RTE_CRYPTODEV_FF_CPU_NEON
represents ARM NEON (TM) instructions
* RTE_CRYPTODEV_FF_CPU_ARM_CE
represents ARM crypto extensions
Add them to the cryptodev library, the documentation and the relevant
ARMv8 PMD driver; a sketch of the definitions follows.
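How the new flags might be declared in rte_cryptodev.h (the bit
positions shown here are illustrative, not authoritative):
	#define RTE_CRYPTODEV_FF_CPU_NEON	(1ULL << 10)
	/**< Utilises ARM NEON instructions */
	#define RTE_CRYPTODEV_FF_CPU_ARM_CE	(1ULL << 11)
	/**< Utilises ARM CPU cryptographic extensions */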
Signed-off-by: Zbigniew Bodek <zbigniew.bodek@caviumnetworks.com>
This patch introduces a crypto poll mode driver using ARMv8
cryptographic extensions. CPU compatibility with this driver is detected
at run-time, and the virtual crypto device will not be created if the
CPU does not provide AES, SHA1, SHA2 and NEON.
This PMD is optimized to provide a performance boost for chained crypto
operations, such as encryption + HMAC generation or decryption + HMAC
validation; cipher-only and hash-only operations are not provided.
The driver currently supports AES-128-CBC in combination with SHA256
HMAC or SHA1 HMAC (see the illustrative transform sketch below), and
relies on the external armv8_crypto library:
https://github.com/caviumnetworks/armv8_crypto
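An illustrative example of the kind of chained transform this PMD
targets, AES-128-CBC encryption chained with SHA1-HMAC generation; the
key buffers and digest length are example values, not taken from the PMD:
	#include <rte_crypto.h>

	static uint8_t cipher_key[16];	/* AES-128 key (example) */
	static uint8_t auth_key[20];	/* HMAC key (example) */

	static struct rte_crypto_sym_xform auth_xform = {
		.type = RTE_CRYPTO_SYM_XFORM_AUTH,
		.next = NULL,
		.auth = {
			.op = RTE_CRYPTO_AUTH_OP_GENERATE,
			.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
			.key = { .data = auth_key, .length = sizeof(auth_key) },
			.digest_length = 20,
		},
	};

	static struct rte_crypto_sym_xform cipher_xform = {
		.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
		.next = &auth_xform,	/* cipher first, then authentication */
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
			.key = { .data = cipher_key, .length = sizeof(cipher_key) },
		},
	};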
Build the ARMv8 crypto PMD when compiling for ARM64 and the
CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO option is enabled in the
configuration file. The ARMV8_CRYPTO_LIB_PATH environment variable
should point to the appropriate library directory.
Signed-off-by: Zbigniew Bodek <zbigniew.bodek@caviumnetworks.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The current cryptodev AES-NI GCM PMD is implemented using the
Multi-Buffer Crypto library. This patch reimplements the device using
the ISA-L Crypto library: https://github.com/01org/isa-l_crypto.
The migration adds support for the following:
* GMAC algorithm.
* 256-bit cipher key.
* Session-less mode.
* Out-of-place processing.
* Scatter-gather support for chained mbufs (out-of-place only; the
  destination mbuf must be contiguous).
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch adds a user-defined name initialization parameter to the
cryptodev library.
Originally, for a software cryptodev PMD, the vdev name parameter is
treated as the driver identifier, and a unique name is created
automatically for each device, which is not necessarily the same as the
vdev parameter.
This patch allows the user either to create a unique name for his
software cryptodev, or, by default, to let the system create a unique
one. This should help the user manage the created cryptodevs easily.
Examples:
CLI command fragment 1: --vdev "crypto_aesni_gcm_pmd"
The above command results in creating an AESNI-GCM PMD with the name
"crypto_aesni_gcm_X", where the postfix X is a number assigned by the
system, starting from 0. This fragment can be placed in the same CLI
command multiple times, with the postfix incremented by one for each new
device.
CLI command fragment 2: --vdev "crypto_aesni_gcm_pmd,name=gcm1"
The above command results in creating an AESNI-GCM PMD with the name
"gcm1". This fragment can be placed in the same CLI command multiple
times, as long as each one has a unique name value.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch introduces the RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER feature
flag, indicating that the selected crypto device supports segmented
mbufs natively and does not need them to be coalesced before the crypto
operation.
Since using segmented buffers with PMDs that do not support them
natively may have unpredictable results, an additional check is made for
debug builds; a sketch is shown below.
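An illustrative form of such a debug check (the helper name and call
site are assumptions; RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER is the flag
added by this patch):
	#include <errno.h>
	#include <rte_cryptodev.h>

	/* Return 0 if the op's source mbuf is acceptable for a device with
	 * the given feature flags, -ENOTSUP if it is segmented but the
	 * device did not advertise native scatter-gather support. */
	static int
	check_sgl_support(const struct rte_crypto_op *op, uint64_t feature_flags)
	{
		if (op->sym->m_src->nb_segs > 1 &&
		    !(feature_flags & RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER))
			return -ENOTSUP;
		return 0;
	}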
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The EVP_CIPHER_CTX_set_padding() function always returns 1, so the check
is unneeded.
Fixes: d61f70b4c9 ("crypto/libcrypto: add driver for OpenSSL library")
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Tested-by: Zhaoyan Chen <zhaoyan.chen@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch sets the IV size in the QAT PMD to 12 bytes to be conformant
with NIST SP 800-38D.
Fixes: 26c2e4ad5a ("cryptodev: add capabilities discovery")
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
This patch sets the IV size in the AESNI GCM PMD to 12 bytes to be
conformant with NIST SP 800-38D.
Fixes: eec136f3c5 ("aesni_gcm: add driver for AES-GCM crypto operations")
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
This commit fixes the pre-counter block (J0) padding by clearing the
four most significant bytes before setting the initial counter value.
Fixes: b2bb359747 ("crypto/aesni_gcm: move pre-counter block to driver")
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Release v0.44 of the Intel(R) Multi-Buffer Crypto for IPsec library adds
support for AVX512 instructions. This patch enables the new
AVX512-accelerated functions in the aesni_mb_pmd crypto poll mode driver.
This patch set requires that aesni_mb_pmd is linked against version 0.44
or greater of the Multi-Buffer Crypto for IPsec library.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Update the driver to use the new AESNI Multi-Buffer IPsec library
single-operation functionality (cipher-only and authentication-only).
This patch also adds tests for the new feature.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
When using sessionless crypto operations, a crypto session is obtained
from a pool of sessions when processing the operation. Once the
operation is processed, the session should be put back in the pool, but
in the AESNI MB PMD this session was not being saved in the operation
and therefore never returned to the session pool.
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Extra bytes are written at the end of the data when processing standard
OpenSSL cipher encryption; this behaviour is unexpected.
This patch disables the padding feature in the OpenSSL library, which is
causing the problem (see below).
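The gist of the fix, as a sketch, where ctx stands for the session's
already-initialised EVP_CIPHER_CTX (declared in <openssl/evp.h>):
	EVP_CIPHER_CTX_set_padding(ctx, 0);	/* 0 disables implicit padding */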
Fixes: d61f70b4c9 ("crypto/libcrypto: add driver for OpenSSL library")
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
In an out-of-place operation, data is DMAed from the source mbuf to the
destination mbuf. To avoid overwriting header data in the destination
mbuf, only the minimal data set should be DMAed.
Fixes: 39e0bee48e ("crypto/qat: rework request builder for performance")
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
The cryptodev API had specified that if the digest address field was
left empty on an authentication operation, then the PMD would assume
the digest was appended to the source or destination data.
This case was not handled at all by most PMDs and was incorrectly
handled by the QAT PMD.
As no bugs were raised, it is assumed not to be needed, so this patch
removes it rather than adding handling for the case in all PMDs.
The digest can still be appended to the data, but its address must now
be provided in the op.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix portability
issues across different architectures.
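For illustration, the kind of substitution being made, using the EAL
accessors from rte_io.h ('bar' stands for a mapped device BAR; the
wrapper names are assumptions):
	#include <stdint.h>
	#include <rte_io.h>

	static inline void
	dev_reg_write32(volatile void *bar, unsigned int off, uint32_t val)
	{
		/* before: *(volatile uint32_t *)((uintptr_t)bar + off) = val; */
		rte_write32(val, (volatile uint8_t *)bar + off);
	}

	static inline uint32_t
	dev_reg_read32(const volatile void *bar, unsigned int off)
	{
		/* before: return *(const volatile uint32_t *)((uintptr_t)bar + off); */
		return rte_read32((const volatile uint8_t *)bar + off);
	}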
CC: John Griffin <john.griffin@intel.com>
CC: Fiona Trahe <fiona.trahe@intel.com>
CC: Deepak Kumar Jain <deepak.k.jain@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
CC: Yong Wang <yongwang@vmware.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix portability
issues across different architectures.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
Suggested-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Jan Medala <jan@semihalf.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
CC: Harish Patil <harish.patil@cavium.com>
CC: Rasesh Mody <rasesh.mody@cavium.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal abstraction
for I/O device memory read/write access to fix portability issues across
different architectures.
CC: Stephen Hurd <stephen.hurd@broadcom.com>
CC: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal abstraction
for I/O device memory read/write access to fix portability issues across
different architectures.
CC: Harish Patil <harish.patil@cavium.com>
CC: Rasesh Mody <rasesh.mody@cavium.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
CC: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
CC: Alejandro Lucero <alejandro.lucero@netronome.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix portability
issues across different architectures.
CC: John Daley <johndale@cisco.com>
CC: Nelson Escobar <neescoba@cisco.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
CC: Jing Chen <jing.d.chen@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
CC: Helin Zhang <helin.zhang@intel.com>
CC: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
CC: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>