SDAP is a protocol in the LTE stack that sits on top of PDCP and
handles QoS. A particular PDCP session may or may not have
SDAP enabled, but when it is enabled, the SDAP header should be
authenticated but not encrypted if both confidentiality and
integrity are enabled. Hence, the driver should be informed
through the xform so that it skips the SDAP header during encryption.
A new field is added to the PDCP xform to specify whether SDAP is
enabled. The overall size of the xform is unchanged: hfn_ovrd is just
a flag and does not need a uint32_t, so it is converted to uint8_t
and a 16-bit reserved field is added for future use.
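A minimal sketch of the reworked xform tail, assuming the new flag is
named sdap_enabled; only the changed fields are shown, and the layout
is illustrative rather than the exact upstream definition:

    #include <stdint.h>

    /* hfn_ovrd shrinks from uint32_t to uint8_t, the new SDAP flag takes
     * one byte, and a 16-bit reserved field keeps the overall size of
     * the xform unchanged.
     */
    struct pdcp_xform_tail_sketch {
            uint8_t hfn_ovrd;      /* HFN override flag, previously uint32_t */
            uint8_t sdap_enabled;  /* non-zero when an SDAP header is present */
            uint16_t reserved;     /* reserved for future use */
    };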
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
This patch removes enumerators RTE_CRYPTO_CIPHER_LIST_END,
RTE_CRYPTO_AUTH_LIST_END, RTE_CRYPTO_AEAD_LIST_END to prevent
ABI breakage that may arise when adding new crypto algorithms.
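For illustration, a trailing *_LIST_END sentinel is an ABI hazard because
applications may use it to size arrays, and its numeric value shifts
whenever a new algorithm is appended before it (a simplified sketch, not
the real cryptodev enums):

    /* Simplified sketch: the sentinel value changes when entries are added. */
    enum algo {
            ALGO_NULL,
            ALGO_AES_CBC,
            ALGO_LIST_END   /* 2 today; becomes 3 once another algo is added */
    };

    /* An application built against the old header bakes LIST_END == 2 into
     * this array size, which then disagrees with a newer library.
     */
    static int per_algo_stats[ALGO_LIST_END];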
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch updates QAT PMD to add raw data-path API support.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
This patch adds raw data-path APIs for enqueue and dequeue
operations to cryptodev. The APIs support flexible user-defined
enqueue and dequeue behaviors.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch updates ``rte_crypto_sym_vec`` structure to add
support for both cpu_crypto synchronous operation and
asynchronous raw data-path APIs. The patch also includes
AESNI-MB and AESNI-GCM PMD changes, unit test changes and
documentation updates.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Add Scatter-gather list support for AES-GMAC.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Tested-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
This patch updates the fips validation GCM test capabilities:
- In the NIST GCMVS spec, GMAC test vectors are GCM vectors with a
plaintext length of 0 that use the AAD as input data. Originally the
fips_validation tests treated both as GCM test vectors. This patch
introduces automatic test type recognition between the two: when the
plaintext length is 0, the prepare_gmac_xform and prepare_auth_op
functions are called; otherwise, prepare_gcm_xform and prepare_aead_op
are called (see the sketch after this list).
- NIST GCMVS also specifies externally or internally generated IVs.
When the IV is to be generated internally, the IUT shall store the
generated IV in the response file. This patch also adds support
for that.
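A minimal sketch of the dispatch described in the first item above; the
prepare_* names come from this patch, but the wrapper and its argument
are assumptions about how the parsed test vector is represented:

    #include <stdint.h>

    /* Hypothetical prototypes standing in for the fips_validation helpers;
     * the real functions take different argument lists.
     */
    int prepare_gmac_xform(void);
    int prepare_auth_op(void);
    int prepare_gcm_xform(void);
    int prepare_aead_op(void);

    /* Dispatch on plaintext length: a length of 0 means a GMAC vector. */
    static int
    prepare_gcm_or_gmac(uint32_t plaintext_len)
    {
            if (plaintext_len == 0)
                    return prepare_gmac_xform() == 0 ? prepare_auth_op() : -1;
            return prepare_gcm_xform() == 0 ? prepare_aead_op() : -1;
    }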
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Weqaar Janjua <weqaar.a.janjua@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
This patch adds SGL support to the FIPS sample application.
Originally the application allocated a single mbuf with a data room of
64KB - 1 bytes. With this change the user may reduce the mbuf data room
size with the newly added command-line option. If the input test data is
longer than the user-provided data room size, the application
automatically builds chained mbufs for the target cryptodev PMD to test.
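A rough sketch of how input longer than the mbuf data room can be split
into a chained mbuf using standard mbuf helpers (the pool and sizes are
whatever the application configured; error handling is abbreviated):

    #include <string.h>
    #include <rte_mbuf.h>

    /* Build a chained mbuf covering len bytes of data, one segment per
     * pool-sized data room. Returns the head mbuf, or NULL on failure.
     */
    static struct rte_mbuf *
    build_chained_input(struct rte_mempool *pool, const uint8_t *data,
                    uint32_t len)
    {
            struct rte_mbuf *head = NULL;
            uint32_t off = 0;

            while (off < len) {
                    struct rte_mbuf *seg = rte_pktmbuf_alloc(pool);
                    if (seg == NULL)
                            goto fail;

                    uint32_t chunk = RTE_MIN(len - off,
                                    (uint32_t)rte_pktmbuf_tailroom(seg));
                    memcpy(rte_pktmbuf_append(seg, chunk), data + off, chunk);
                    off += chunk;

                    if (head == NULL) {
                            head = seg;
                    } else if (rte_pktmbuf_chain(head, seg) != 0) {
                            rte_pktmbuf_free(seg);
                            goto fail;
                    }
            }
            return head;
    fail:
            rte_pktmbuf_free(head);
            return NULL;
    }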
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
Added support for non-HMAC auth algorithms
(SHA1, SHA2, MD5).
The corresponding capabilities are enabled so that the test
application can enable those test cases.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Add support for KASUMI-F8/F9 algorithms through the intel-ipsec-mb
job API, allowing the mix of these algorithms with others.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Add support for SNOW3G-UEA2/UIA2 algorithms through the intel-ipsec-mb
job API, allowing the mix of these algorithms with others.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Add support for ZUC-EEA3/EIA3 algorithms through the intel-ipsec-mb
job API, allowing the mix of these algorithms with others.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch adds AES-ECB 128, 192 and 256 support to the aesni_mb PMD.
AES-ECB 128, 192 and 256 test vectors added to cryptodev tests.
Signed-off-by: Marcel Cornu <marcel.d.cornu@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch replaces the usage of the word 'slave' with the more
appropriate word 'worker' in the QAT PMD and the Scheduler PMD,
as well as in their docs. The test app was also modified
to use the new wording.
The Scheduler PMD's public API was modified according to the
previous deprecation notice:
- rte_cryptodev_scheduler_slave_attach is now called
rte_cryptodev_scheduler_worker_attach,
- rte_cryptodev_scheduler_slave_detach is now
rte_cryptodev_scheduler_worker_detach,
- rte_cryptodev_scheduler_slaves_get is now
rte_cryptodev_scheduler_workers_get.
Also, the configuration value RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES
was renamed to RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS.
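A hedged migration sketch for applications that attach workers
explicitly; it assumes the attach call keeps its (scheduler_id,
worker_id) argument pair across the rename:

    #include <stdint.h>
    #include <rte_cryptodev_scheduler.h>

    /* Attach a worker crypto device to a scheduler device using the
     * renamed API; returns 0 on success, a negative value on error.
     */
    static int
    attach_worker(uint8_t scheduler_id, uint8_t worker_id)
    {
            /* Pre-20.11 naming (now removed):
             * rte_cryptodev_scheduler_slave_attach(scheduler_id, worker_id);
             */
            return rte_cryptodev_scheduler_worker_attach(scheduler_id,
                            worker_id);
    }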
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
DPDK APIs have to be prefixed with "rte_" in order to avoid
namespace pollution.
Let's fix it while fpga_lte_fec API is still experimental.
Fixes: efd453698c49 ("baseband/fpga_lte_fec: add driver for FEC on FPGA")
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tom Rix <trix@redhat.com>
DPDK APIs have to be prefixed with "rte_" in order to avoid
namespace pollution.
Let's fix it while fpga_5gnr_fec API is still experimental.
Fixes: 2d4306438c92 ("baseband/fpga_5gnr_fec: add configure function")
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tom Rix <trix@redhat.com>
Since librte_ipsec was first introduced in 19.02 and there have been no
changes in its public API since 19.11, it should be considered mature
enough to remove the 'experimental' tag from it.
The RTE_SATP_LOG2_NUM enum is also being dropped from rte_ipsec_sa.h to
avoid possible ABI problems in the future.
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Add a configure function to configure the PF from within
bbdev-test itself, without requiring an external application
to configure the device.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Liu Tianjiao <tianjiao.liu@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The option RTE_EAL_ALWAYS_PANIC_ON_ERROR was off by default,
and not customizable with meson. It is completely removed.
The function rte_dump_registers is a relic of the bare metal support
era and was not supported in userland. It is completely removed.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: David Marchand <david.marchand@redhat.com>
This commit adds new rte_prefetchX_write() variants that hint to the
compiler to use a prefetch instruction with the intention to write. As
these are compiler builtins, the compiler can choose, based on the
compilation target, the best implementation for this instruction.
Three versions are provided, targeting the different levels of cache.
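A small usage sketch using rte_prefetch0_write(), one of the three
variants; the structure and update logic here are purely illustrative:

    #include <stdint.h>
    #include <rte_prefetch.h>

    struct flow_stats {
            uint64_t packets;
            uint64_t bytes;
    };

    /* Prefetch the counters with write intent before updating them, so
     * the cache line can be requested in a writeable state up front.
     */
    static inline void
    update_stats(struct flow_stats *s, uint32_t pkt_len)
    {
            rte_prefetch0_write(s);
            s->packets += 1;
            s->bytes += pkt_len;
    }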
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Add subport profile table to internal port data structure
and update the port config function.
Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
struct cmdline exposes the platform-specific members it contains, most
notably struct termios, which is only available on Unix. While ABI
considerations prevent hiding the definition on already supported
platforms, struct cmdline is considered logically opaque from now on.
Add a deprecation notice targeted at 20.11.
* Remove tests checking struct cmdline content as meaningless.
* Fix missing cmdline_free() in unit test.
* Add cmdline_get_rdline() to access history buffer indirectly.
The new function is currently used only in tests.
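A hedged sketch of indirect access through the new accessor, assuming
it simply returns the embedded struct rdline pointer:

    #include <cmdline.h>
    #include <cmdline_rdline.h>

    /* Reach the readline/history state via the accessor instead of
     * dereferencing struct cmdline members, which are now considered
     * logically opaque.
     */
    static struct rdline *
    history_of(struct cmdline *cl)
    {
            return cmdline_get_rdline(cl);
    }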
Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Introduce a classify implementation that uses AVX512-specific ISA.
rte_acl_classify_avx512x32() is able to process up to 32 flows in parallel.
It uses 512-bit wide registers/instructions and provides higher
performance than rte_acl_classify_avx512x16(), but can cause a
frequency level change.
Note that for now only the 64-bit version is supported.
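A hedged selection sketch; it assumes the new method is exposed through
the usual rte_acl_set_ctx_classify() hook as RTE_ACL_CLASSIFY_AVX512X32:

    #include <rte_acl.h>

    /* Request the AVX512 32-flow classify method for an existing context;
     * fall back to the library default when the CPU or build lacks it.
     */
    static void
    select_avx512_classify(struct rte_acl_ctx *ctx)
    {
            if (rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_AVX512X32) != 0)
                    (void)rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_DEFAULT);
    }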
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Introduce a classify implementation that uses AVX512-specific ISA.
rte_acl_classify_avx512x16() is able to process up to 16 flows in parallel.
It uses 256-bit wide registers/instructions only
(to avoid a frequency level change).
Note that for now only the 64-bit version is supported.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Remove the unused enum value RTE_ACL_CLASSIFY_NUM.
This enum value is not used inside DPDK, yet it prevents
adding new classify algorithms without causing an ABI breakage.
Note that this change introduces a formal ABI incompatibility
with previous versions of the ACL library.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Add meson-based build infrastructure along with the
OTX2 regexdev (REE) device functions.
Add Marvell OCTEON TX2 regex guide.
Signed-off-by: Guy Kaneti <guyk@marvell.com>
This patch enables the optimized calculation of CRC32-Ethernet and
CRC16-CCITT using the AVX512 and VPCLMULQDQ instruction sets. This CRC
implementation is built if the compiler supports the required instruction
sets. It is selected at run-time if the host CPU also supports the
required instruction sets.
Signed-off-by: Mairtin o Loingsigh <mairtin.oloingsigh@intel.com>
Signed-off-by: David Coyle <david.coyle@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Jasvinder Singh <jasvinder.singh@intel.com>
Reviewed-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch adds support for run-time selection of the optimal
architecture-specific CRC path, based on the supported instruction set(s)
of the CPU.
The compiler option checks have been moved from the C files to the meson
script. The rte_cpu_get_flag_enabled function is called automatically by
the library at process initialization time to determine which
instructions the CPU supports, with the most optimal supported CRC path
ultimately selected.
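A hedged sketch of the run-time check pattern described above; the
library already performs this selection automatically at initialization,
the RTE_NET_CRC_AVX512 name is an assumption, and checking only AVX512F
is a simplification of the full flag set the new path requires:

    #include <rte_cpuflags.h>
    #include <rte_net_crc.h>

    /* Prefer the AVX512/VPCLMULQDQ CRC path only when the host CPU
     * reports the required instruction set at run time.
     */
    static void
    pick_crc_path(void)
    {
            if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F))
                    rte_net_crc_set_alg(RTE_NET_CRC_AVX512);
            else
                    rte_net_crc_set_alg(RTE_NET_CRC_SCALAR);
    }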
Signed-off-by: Mairtin o Loingsigh <mairtin.oloingsigh@intel.com>
Signed-off-by: David Coyle <david.coyle@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Jasvinder Singh <jasvinder.singh@intel.com>
Reviewed-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Performance improvement: use a write combining store
instead of a regular mmio write to update queue tail
registers.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Reviewed-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Performance improvement: use a write combining store
instead of a regular mmio write to update queue tail
registers.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Performance improvement: use a write combining store
instead of a regular mmio write to update queue tail
registers.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Performance improvement: use a write combining store
instead of a regular mmio write to update queue tail
registers.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Add rte_write32_wc and rte_write32_wc_relaxed functions
that implement 32-bit stores using the write combining memory protocol.
Generic stubs and an x86 implementation are provided.
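A small usage sketch of the new helper; the tail-register pointer is
illustrative, and the argument order is assumed to follow rte_write32():

    #include <stdint.h>
    #include <rte_io.h>

    /* Update a queue tail register with a write-combining store instead
     * of a regular MMIO write; architectures without a WC-specific
     * implementation fall back to the generic store.
     */
    static inline void
    bump_tail(volatile void *tail_reg, uint32_t tail)
    {
            rte_write32_wc(tail, tail_reg);
    }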
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Forward error correction (FEC) is a bit error correction mode.
It adds error correction information to data packets at the
transmit end and uses that information at the receive end to correct
the bit errors introduced during transmission. This improves signal
quality but also adds delay. The feature can be enabled or disabled
as required.
This patch adds FEC support to ethdev by introducing ethdev
operations to query and configure FEC information in
hardware.
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Chengchang Tang <tangchengchang@huawei.com>
This patch adds Forward Error Correction (FEC) support to ethdev.
It introduces APIs to query and configure FEC information in
hardware.
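A hedged sketch of how an application might use the new API; the
capability macro and mode names follow the usual rte_eth_fec_* naming
and are assumptions rather than quotes from the header:

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Query the current FEC capability/mode and request RS FEC;
     * returns 0 on success or a negative errno-style value.
     */
    static int
    force_rs_fec(uint16_t port_id)
    {
            uint32_t fec_capa;
            int ret = rte_eth_fec_get(port_id, &fec_capa);

            if (ret < 0)
                    return ret;
            return rte_eth_fec_set(port_id,
                            RTE_ETH_FEC_MODE_TO_CAPA(RTE_ETH_FEC_RS));
    }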
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Chengchang Tang <tangchengchang@huawei.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Currently, base and nb_queue in the tc_rxq and tc_txq information
of the queue-to-TC mapping on both the TX and RX paths are uint8_t.
However, these values will be truncated when the number of queues
under a TC is greater than 256, so it is necessary to change base
and nb_queue from uint8_t to uint16_t.
Fixes: 89d6728c7837 ("ethdev: get DCB information")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Dongdong Liu <liudongdong3@huawei.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Update the HWRM API to the newer 1.10.1.70 version.
A few fields have been renamed because of this:
rx_err_pkt -> rx_discard_pkts
rx_drop_pkts -> rx_error_pkts
tx_err_pkts -> tx_discard_pkts
tx_drop_pkts -> tx_error_pkts
link_signal_mode -> active_fec_signal_mode
tx_bd_long_hi.mss -> tx_bd_long_hi.kid_or_ts_high_mss
tx_bd_long_hi.hdr_size -> tx_bd_long_hi.kid_or_ts_low_hdr_size
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add support to select the RSS hash based on innermost or outermost
headers. If an application is started without any specific settings,
the default mode configured by the FW or HW is used.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add fill operation enqueue support for IOAT and IDXD. The fill enqueue is
similar to the copy enqueue, but takes a 'pattern' rather than a source
address to transfer to the destination address. This patch also includes an
additional test case for the new operation type.
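A hedged sketch of a fill enqueue on the IOAT rawdev; the device id,
destination address, and handle are placeholders, and the argument order
is assumed to mirror the existing copy enqueue:

    #include <stdint.h>
    #include <rte_ioat_rawdev.h>

    /* Enqueue a fill of len bytes of pattern to dst, then kick the
     * device so the batch starts processing.
     */
    static int
    fill_region(int dev_id, uint64_t pattern, rte_iova_t dst, unsigned int len)
    {
            /* Assumed to return 1 when enqueued, 0 when the ring is full. */
            if (rte_ioat_enqueue_fill(dev_id, pattern, dst, len, 0) != 1)
                    return -1;

            rte_ioat_perform_ops(dev_id);
            return 0;
    }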
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
Intel Data Streaming Accelerator (Intel DSA) is a high-performance data
copy and transformation accelerator which will be integrated in future
Intel processors [1].
Add DSA device support to dpdk-devbind.py script.
[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
Rather than having the fence signalled via a flag on a descriptor - which
requires reading the docs to find out whether the flag needs to go on the
last descriptor before, or the first descriptor after the fence - we can
instead add a separate fence API call. This becomes unambiguous to use,
since the fence call explicitly comes between two other enqueue calls. It
also allows more freedom of implementation in the driver code.
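A hedged sketch of the fence placed explicitly between two enqueues;
rte_ioat_fence() is assumed to take only the device id, the copy handles
are left as zero placeholders, and return values are ignored for brevity:

    #include <rte_ioat_rawdev.h>

    /* Copy A, fence, then copy B: the fence orders B after A without a
     * flag on either descriptor.
     */
    static void
    ordered_copies(int dev_id, rte_iova_t src_a, rte_iova_t dst_a,
                    rte_iova_t src_b, rte_iova_t dst_b, unsigned int len)
    {
            rte_ioat_enqueue_copy(dev_id, src_a, dst_a, len, 0, 0);
            rte_ioat_fence(dev_id);
            rte_ioat_enqueue_copy(dev_id, src_b, dst_b, len, 0, 0);
            rte_ioat_perform_ops(dev_id);
    }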
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>