The structure rte_security_session is not directly used
by the application. The application just needs an opaque
pointer to attach to the mbuf or rte_crypto_op at enqueue
time. Hence, the structure can be hidden inside the library,
which avoids an unnecessary indirection to the private
session data in the fast path.
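A minimal sketch of the opaque-handle pattern described above, using
hypothetical names rather than the actual rte_security API:
    #include <stdint.h>
    /* Illustration only: the session layout stays private to the library. */
    struct lib_session {                /* defined in a library .c file */
        uint64_t hw_session_id;
        void *priv_data;
    };
    /* The public header exposes only an opaque handle. */
    typedef void *lib_session_handle_t;
    /* The application just attaches the handle to per-packet metadata at
     * enqueue time and never dereferences it. */
    static inline void
    attach_session(void **pkt_metadata_slot, lib_session_handle_t sess)
    {
        *pkt_metadata_slot = sess;
    }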
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The current crypto raw data vectors need to be extended to support
out-of-place processing. It is proposed to add an additional dest_sgl
to provide details of the destination SGL.
The vectors are also extended to support rte_security use cases, where
the total data length is needed, in addition to the data length, to know
how much memory space is available in the buffer so that the driver/HW
can write expanded-size data after encryption.
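A simplified sketch of the proposed extension, assuming the field names
dest_sgl and tot_len from this announcement (the final structures live in
rte_crypto_sym.h and may differ):
    #include <stdint.h>
    /* Sketch only: a segment carries both the length to process and the
     * total buffer length available for expanded output. */
    struct crypto_vec_sketch {
        void *base;       /* start of data */
        uint32_t len;     /* length of data to process */
        uint32_t tot_len; /* total buffer length, >= len */
    };
    struct crypto_sgl_sketch {
        struct crypto_vec_sketch *vec;
        uint32_t num;
    };
    /* Raw data vector with an optional destination SGL for out-of-place
     * processing; a NULL dest_sgl keeps the in-place behaviour. */
    struct crypto_sym_vec_sketch {
        uint32_t num;
        struct crypto_sgl_sketch *src_sgl;
        struct crypto_sgl_sketch *dest_sgl;
    };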
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
In the crypto adapter metadata, the first 8 bytes of the request info are
a placeholder for the response info. For better clarity, the reserved
field should be removed from the request info. New space for the response
info can be made by changing the type of the event crypto metadata from a
union to a structure.
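A trimmed-down before/after sketch of the announced change (illustrative
only; the real definitions are in rte_event_crypto_adapter.h):
    #include <stdint.h>
    /* Before: request and response info overlap in a union, so the request
     * struct must reserve its first 8 bytes for the response. */
    union event_crypto_metadata_before {
        struct {
            uint8_t resv[8];      /* overlaps with response_info */
            uint8_t request_data; /* ... remaining request fields ... */
        } request_info;
        struct {
            uint64_t response_data;
        } response_info;
    };
    /* After: a structure gives the response info its own space, so the
     * reserved field in the request info can be dropped. */
    struct event_crypto_metadata_after {
        struct {
            uint8_t request_data; /* ... request fields, no reserved bytes ... */
        } request_info;
        struct {
            uint64_t response_data;
        } response_info;
    };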
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Lcore state FINISHED is used by the worker thread to indicate that
it has completed the assigned task. The state is changed to
WAIT by another thread after it observes the updated state. This
additional step is redundant. After this deprecation, the worker
thread will update the state to WAIT.
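For context, a minimal sketch of the launch/wait pattern that remains valid
after the change; the only difference is that the worker's state goes
straight back to WAIT once its function returns:
    #include <rte_launch.h>
    static int
    worker(void *arg)
    {
        (void)arg;
        /* do the assigned task; on return, the lcore state moves back to
         * WAIT directly instead of passing through FINISHED */
        return 0;
    }
    static void
    run_on_worker(unsigned int worker_lcore_id)
    {
        rte_eal_remote_launch(worker, NULL, worker_lcore_id);
        /* blocks until the worker is done and returns its exit code; no
         * explicit FINISHED -> WAIT transition is needed anymore */
        rte_eal_wait_lcore(worker_lcore_id);
    }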
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Feifei Wang <feifei.wang2@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
A bug with segmented packets has been discovered, but the agreement
on applying the fix was not reached in time for the DPDK 21.08 release.
This bug seems to have been in DPDK for many years and should be fixed in 21.11.
Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Make the driver layer internal and remove the unnecessary rte_ prefix from
structures and functions that are not part of the public API.
Promote experimental trace and vector APIs to stable.
Add reserved field to `rte_event_timer` structure.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
One reserved byte in the rte_crypto_op struct would be used to indicate
warnings and other information from the crypto/security operation. This
field will be used to communicate events such as soft expiry with IPsec
in lookaside mode.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The APIs which are internal to the PMDs and the cryptodev library
can be marked as internal so that ABI checking does not
complain about changes in interfaces that are internal to DPDK.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Announce changes to add two unions.
The first union will provide integral and bit-field access to the version
and IHL fields.
The second union will provide integral and bit-field access to the
fragment flags and offset fields.
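A little-endian-only sketch of what the first union could look like (an
illustration of the announced direction, not the final rte_ipv4_hdr
definition):
    #include <stdint.h>
    /* Sketch: the version/IHL byte accessible both as one integer and as
     * bit fields; the real header also handles big-endian bit order and
     * keeps the remaining IPv4 fields. */
    struct ipv4_hdr_sketch {
        union {
            uint8_t version_ihl;   /* integral access */
            struct {
                uint8_t ihl:4;     /* header length, in 32-bit words */
                uint8_t version:4; /* IP version */
            };
        };
        uint8_t type_of_service;
        /* ... rest of the IPv4 header ... */
    };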
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Move struct rte_intr_handle to be an internal structure to
avoid any ABI breakage in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
For example:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows a maximum of 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, either the
RTE_MAX_RXTX_INTR_VEC_ID limit has to be changed or the array has to be
allocated dynamically based on the PCI device MSI-X size at probe time.
Either way, it is an ABI breakage.
Discussion thread:
https://mails.dpdk.org/archives/dev/2021-March/202959.html
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The mbuf offload flags do not match the DPDK namespace (they are
not prefixed by RTE_). Announce their rename in 21.11, and the
removal of the old names in 22.11.
A draft coccinelle script is provided to anticipate what the
renaming will be.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Clarifying the ABI policy on the promotion of experimental APIs to stable.
We have a fair number of APIs that have been experimental for more than
2 years. This policy amendment indicates that these APIs should be
promoted or removed, or should at least prompt a conversation between the
maintainer and the original contributor.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Update the minimal SW and HW version information for offload support
of ASO metering and metering hierarchy.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Asaf Penso <asafp@nvidia.com>
PCI, vmbus, and auxiliary drivers printed a warning
when the NUMA node had been reported as (-1) or not reported by the OS:
EAL: Invalid NUMA socket, default to 0
This message and its level might confuse users because the configuration
is valid and nothing happens that requires attention or intervention.
It was also printed without the device identification and with an indent
(PCI only), which is confusing unless DEBUG logging is on to print
the header message with the device name.
Reduce level to INFO, reword the message, and suppress it when there is
only one NUMA node because NUMA awareness does not matter in this case.
Also, remove the indent for PCI.
Fixes: f0e0e86aa3 ("pci: move NUMA node check from scan to probe")
Fixes: 831dba47bd ("bus/vmbus: add Hyper-V virtual bus support")
Fixes: 1afce3086c ("bus/auxiliary: introduce auxiliary bus")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
In a recent update, the misc5 matcher was introduced to
match VXLAN header extra fields. However, ConnectX-5
doesn't support misc5 for UDP ports other than
VXLAN's standard one (4789).
We need to fall back to the previous approach and use the legacy
misc matcher if a non-standard UDP port is recognized
in the VXLAN flow.
Fixes: 630a587bfb ("net/mlx5: support matching on VXLAN reserved field")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Windows headers define `s_addr`, `min`, and `max` as macros.
If DPDK headers are included after Windows ones, DPDK structure
definitions containing fields with these names get broken (example 1),
as well as any usage of such fields (example 2). If DPDK headers
undefined these macros, it could break consumer code (example 3).
It is proposed to rename the structure fields in DPDK, because Win32 headers
are used more widely than DPDK, as a general-purpose platform compared
to a domain-specific kit, and are therefore harder to fix.
Exact new names are left for further discussion.
Example 1:
    /* in DPDK public header included after windows.h */
    struct rte_type {
        int min; /* ERROR: `min` is a macro */
    };
Example 2:
    #include <rte_ether.h>
    #include <winsock2.h>
    struct rte_ether_hdr eh;
    eh.s_addr.addr_bytes[0] = 0; /* ERROR: `s_addr` is a macro */
Example 3:
    #include <winsock2.h>
    #include <rte_ether.h>
    struct in_addr addr;
    addr.s_addr = 0; /* ERROR: there is no `s_addr` field,
                        and `s_addr` macro is undefined by DPDK. */
Commit 6c068dbd9f ("net: work around s_addr macro on Windows")
modified the definition of `struct rte_ether_hdr` to avoid the issue.
However, the workaround assumes `#define s_addr S_addr.S_un`
in Windows headers, which is not part of the official API.
It also complicates the definition of `struct rte_ether_hdr`.
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Khoa To <khot@microsoft.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The struct member dataunit_len was introduced in DPDK 21.05.
It is limited to 16 bits to fit into a padding hole in the 32-bit build.
This means the maximum data-unit length is 64 KB.
Some use cases may benefit from a bigger size, such as the proposed 32 bits.
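A minimal sketch of the proposed widening (only the relevant member is
shown; the real field lives in struct rte_crypto_cipher_xform):
    #include <stdint.h>
    /* Sketch only: widening dataunit_len from 16 to 32 bits raises the
     * maximum data-unit length from 64 KB to 4 GB. */
    struct cipher_xform_sketch {
        uint32_t dataunit_len; /* was uint16_t */
    };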
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Announce the addition of the 'RTE_ETH_' prefix to all public ethdev
macros/enums in v21.11.
Backward compatibility macros will be added in v21.11 and they will be
removed in v22.11.
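An illustrative sketch of the expected pattern, using a single macro as an
example (the full list and exact values are those of rte_ethdev.h):
    /* New RTE_-prefixed name added in v21.11. */
    #define RTE_ETH_LINK_FULL_DUPLEX 1
    /* Backward-compatibility alias, kept until its removal in v22.11. */
    #define ETH_LINK_FULL_DUPLEX RTE_ETH_LINK_FULL_DUPLEX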
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
All ABIs in the PCI bus driver, which are defined in rte_bus_pci.h,
will be removed and the header will be made internal.
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Update the incorrect description of atomic operations
with the provided wrappers in the deprecation doc [1].
[1]https://mails.dpdk.org/archives/dev/2021-July/213333.html
Fixes: 7518c5c4ae ("doc: announce adoption of C11 atomic operations semantics")
Cc: stable@dpdk.org
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
APIs and data structures have been modified as per the deprecation
note, so remove the deprecation notice from the notes.
Fixes: 85f52aa422 ("sched: add pipe config params to subport struct")
Cc: stable@dpdk.org
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Documented the role of RTE_ARM_EAL_RDTSC_USE_PMU to enable
PMU based rte_rdtsc().
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Spell-checked and corrected the documentation.
If there are any errors, or I have changed something that wasn't an error,
please reach out to me so I can update the dictionary.
Cc: stable@dpdk.org
Signed-off-by: Henry Nadeau <hnadeau@iol.unh.edu>
Currently the sample app user guides use hard-coded code snippets;
this patch changes these to use literalinclude, which will dynamically
update the snippets as changes are made to the code.
This was introduced in commit 413c75c33c ("doc: show how to include
code in guides"). Comments within the sample apps were updated to
accommodate this as part of this patch. This will help to ensure that
the code within the sample app user guides is up to date and not out
of sync with the actual code.
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Announce changes to make rte_security_set_pkt_metadata() and
rte_security_get_userdata() inline functions instead of regular C
functions, and also the addition of another field in the
rte_security_ctx structure for holding flags.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The support for multiple data-units includes the following:
- Add a new command-line argument to provide the data-unit length.
- Set the length in the cipher xform.
- Validate device capabilities for this feature.
- Pad the AES-XTS operation length to be aligned to the defined data-unit.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Running with stdout suppressed or redirected for further processing
is very confusing in the case of errors. Fix it by logging errors and
warnings to stderr.
Since lines with log messages are touched anyway, concatenate split
format strings to make them easier to search for using grep.
Fix the indentation of format string arguments.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
The first argument to rte_bsf32_safe was incorrectly declared as
a 64-bit value. The code only works on 32-bit values, and the underlying
function rte_bsf32 only accepts 32-bit values. This was a mistake
introduced when the safe version was added, probably caused
by copy/paste from the 64-bit version.
The bug passed silently under the radar until some other code was
built with -Wall and -Wextra in C++, and C++ complained about the
missing cast.
Yes, this is an API signature change, but the original code was wrong.
It is an inline function, so this is not an ABI change.
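A simplified sketch of the corrected inline, assuming the semantics
described above (the real rte_bsf32_safe lives in rte_common.h and calls
rte_bsf32):
    #include <stdint.h>
    /* Sketch only: the first parameter becomes uint32_t (was uint64_t).
     * Returns 1 and stores the index of the least significant set bit,
     * or returns 0 when v == 0. */
    static inline int
    bsf32_safe_sketch(uint32_t v, uint32_t *pos)
    {
        if (v == 0)
            return 0;
        *pos = (uint32_t)__builtin_ctz(v); /* stand-in for rte_bsf32(v) */
        return 1;
    }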
Fixes: 4e261f5519 ("eal: add 64-bit bsf and 32-bit safe bsf functions")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Applications need to stop DMA transfers and finish all the in-flight
packets in the VM memory hot-plug case when async vhost is used. This
patch provides an unsafe API to clear in-flight packets which
are submitted to the DMA engine in the vhost async data path. Update the
programmer's guide and release notes for the virtqueue in-flight packet
clear API in the vhost library.
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The prerequisite info is already present in the platform guide.
No need to repeat it in individual dev guides.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Allow the user to specify their own hash key and hash ctrl if the
device supports that. The HW interprets the key in reverse byte order,
so the PMD reorders the key before passing it to the ena_com layer.
The default key is set in a random manner each time the device is
initialized.
Moreover, make minor adjustments to the RETA size setting in terms
of returned error values.
The RSS code was moved to the ena_rss.c file to improve readability.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
In order to support asynchronous Rx in the applications, the driver has
to configure the event file descriptors and configure the HW.
This patch configures appropriate data structures for the rte_ethdev
layer, adds .rx_queue_intr_enable and .rx_queue_intr_disable API
handlers, and configures the IO queues to work in interrupt mode if
requested by the application.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Artur Rojek <ar@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
Support for RFC 2698 and RFC 4115 is added to the mlx5 PMD. Only
ASO metering supports these two profiles.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In the previous implementation, the policy for the yellow color was not
supported, and the action validation for yellow was skipped.
Since the yellow color policy needs to be supported, the validation
should also be done for the yellow color. Meanwhile, because the
color policies of one meter should be used for the same flow(s),
the domains supported by both colors should be the same.
If both colors have RSS as the termination action, all RSS parameters
except the queues should be the same.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The RoCE disabling requirement is based on the PCI address.
In order to support Sub-Function (SF), a conversion is needed
in the case of an auxiliary device.
An SF device can be probed with a devargs string such as:
auxiliary:mlx5_core.sf.<id>,class=vdpa
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Introduce SF support.
Similar to a VF, an SF on the auxiliary bus is a portion of the hardware
PF; there are no representor or bonding parameters for an SF.
Devargs to support SF:
-a auxiliary:mlx5_core.sf.8,dv_flow_en=1
New global syntax to support SF:
-a bus=auxiliary,name=mlx5_core.sf.8/class=eth/driver=mlx5,dv_flow_en=1
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch adds thread-unsafe versions of the async register and
unregister functions.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch reworks the async configuration structure to improve code
readability. In addition, reserved padding fields are added to the
structure for future use.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch allows checking the number of in-flight packets
for a vhost queue that uses async acceleration.
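A minimal usage sketch, assuming the rte_vhost_async_get_inflight() API
introduced for this purpose (error handling trimmed down):
    #include <stdint.h>
    #include <rte_vhost_async.h>
    /* Sketch: poll how many packets are still in flight on a vhost queue
     * using async acceleration, e.g. before unregistering the channel. */
    static int
    wait_for_inflight_done(int vid, uint16_t queue_id)
    {
        int n;
        do {
            n = rte_vhost_async_get_inflight(vid, queue_id);
        } while (n > 0);
        return n; /* 0 when drained, negative on error */
    }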
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch provides support for IPsec protocol
offload to the hardware.
The following security operations are added:
- session_create
- session_destroy
- capabilities_get
Signed-off-by: Michael Shamis <michaelsh@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Tested-by: Liron Himi <lironh@marvell.com>
In order to test the new mlx5 crypto PMD, the driver is added to the
crypto test application.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The crypto operations are done with a WQE set which contains
one UMR WQE and one RDMA write WQE. Most segments of the WQE
set are initialized properly during queue setup; only a limited
number of segments are initialized in the datapath, according to the
crypto details.
This commit adds the enqueue and dequeue operations and updates
the WQE set segments accordingly.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The mlx5 HW crypto operations are done by attaching crypto property
to a memory region. Once done, every access to the memory via the
crypto-enabled memory region will result in in-line encryption or
decryption of the data.
As a result, the design choice is to provide two types of WQEs. One
is a UMR WQE, which sets the crypto property, and the other is an RDMA
write WQE, which sends a DMA command to copy data from the local MR to
the remote MR.
The size of the WQEs will be defined by a new devarg called
max_segs_num.
This devarg also defines the maximum number of segments in an mbuf chain
that will be supported for crypto operations.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
A keytag is a piece of data encrypted together with a DEK.
When a DEK is referenced by an MKEY.bsf through its index, the keytag is
also supplied in the BSF as plaintext. The HW will decrypt the DEK (and
the attached keytag) and will fail the operation if the keytags don't
match.
This commit adds the configuration of the keytag with devargs.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
To work with crypto engines that are marked with wrapped_import_method,
a login session is required.
A crypto login object needs to be created using DevX.
The crypto login object contains:
- The credential pointer.
- The import_KEK pointer to be used for all secured information
communicated in crypto commands (key fields), including the
provided credential in this command.
- The credential secret, wrapped by the import_KEK indicated in
this command. Size includes 8 bytes IV for wrapping.
Added devargs for the required login values:
- wcs_file - path to the file containing the credential.
- import_kek_id - the import KEK pointer.
- credential_id - the credential pointer.
Create the login DevX object in pci_probe function and destroy it in
pci_remove.
Destroying the crypto login object means logout.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Mellanox user space drivers don't deal with physical addresses as part
of a memory protection mechanism.
The device translates the given virtual address to a physical address
using the given memory key as an address space identifier.
That's why any mbuf virtual address is moved directly to the HW
descriptor (WQE).
The mapping between the virtual address and the physical address is
saved in an MR configured by the kernel for the HW.
Each MR has a key that should also be moved to the WQE by the SW.
When the SW sees an unmapped address, it extends the address range and
creates an MR using a system call.
Add memory region cache management:
- 2 level cache per queue-pair - no locks.
- 1 shared cache between all the queues using a lock.
This way, the MR key search per data-path address is optimized.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Sessions are used in symmetric transformations in order to prepare
objects and data for the packet processing stage.
An mlx5 session includes iv_offset, a pointer to the mlx5_crypto_dek
struct, bsf_size, bsf_p_type, block size index, encryption_order and
encryption standard.
Implement the following session operations:
mlx5_crypto_sym_session_get_size - returns the size of the mlx5
session struct.
mlx5_crypto_sym_session_configure - prepares the DEK hash-list
and saves all the session data.
mlx5_crypto_sym_session_clear - destroys the DEK hash-list.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add a new PMD for Mellanox devices - the crypto PMD.
The crypto PMD will be supported starting from NVIDIA ConnectX-6 and
BlueField-2.
The crypto PMD adds support for encryption and decryption using
the AES-XTS symmetric algorithm.
The crypto PMD requires rdma-core and uses mlx5 DevX.
This patch adds the PCI probing, basic functions, build files and
log utility.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The ISA-L compress PMD has build failures on the Arm platform.
As the dependent library ISA-L is supported on the Arm platform,
support for the PMD is expanded to the Arm architecture.
Fixed the build failure caused by architecture-specific code,
and made the PMD multi-architecture compatible.
Bugzilla ID: 755
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
For now, a rule may have only one dedicated counter; shared counters
are not supported.
HW delivers (or "streams") counter readings using special packets.
The driver creates a dedicated Rx queue to receive such packets
and requests that HW start "streaming" the readings to it.
The counter queue is polled periodically, and the first available
service core is used for that. Hence, the user has to specify at least
one service core for counters to work. Such a core is shared by all
MAE-capable devices managed by the sfc driver.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Add event vector support for cnxk event Rx adapter, add control path
APIs to get vector limits and ability to configure event vectorization
on a given Rx queue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for event eth Rx adapter.
Resize cn10k workslot fastpath structure to fit in 64B cacheline size.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Support AVF RSS for the innermost header of GTPoGRE packets. It supports
RSS based on the innermost IP src + dst address and TCP/UDP src + dst
port.
Signed-off-by: Lingyu Liu <lingyu.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add support for rte_flow_item_raw to parse custom L2 and L3
protocols.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Query the MLX5 port hardware to check whether it is capable of
offloading the IPv4 IHL field.
Provide flow rules the capability to match on the IPv4 IHL field.
The minimal HCA firmware version required to offload IPv4 IHL is
xx_30_2000.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add a new testpmd pattern field 'last_rsvd' that supports
matching the last 8 bits of the VXLAN header.
The examples for the "last_rsvd" pattern field are as below:
1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
This flow will exactly match the last 8-bits to be 0x80.
2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...
This flow will only match the MSB of the last 8-bits to be 1.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
This adds matching on the reserved field of the VXLAN
header (the last 8 bits). The capability from rdma-core
is detected by creating a dummy matcher using misc5
when the device is probed.
For non-zero groups and the FDB domain, the capability is
detected from rdma-core, while for NIC domain group
zero it relies on the HCA_CAP from FW.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
The new flow item allows the PMD to offload the IPv4 IHL field for
matching, if the hardware supports that operation.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add the bare minimum PMD library and doc build infrastructure,
and claim maintainership of the ngbe PMD.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Use the new multi-monitor intrinsic to allow monitoring multiple ethdev
Rx queues while entering the energy efficient power state. The multi
version will be used unconditionally if supported, and the UMWAIT one
will only be used when multi-monitor is not supported by the hardware.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Currently, there is a hard limitation on the PMD power management
support that only allows it to support a single queue per lcore. This is
not ideal as most DPDK use cases will poll multiple queues per core.
The PMD power management mechanism relies on ethdev Rx callbacks, so it
is very difficult to implement such support because callbacks are
effectively stateless and have no visibility into what the other ethdev
devices are doing. This places limitations on what we can do within the
framework of Rx callbacks, but the basics of this implementation are as
follows:
- Replace per-queue structures with per-lcore ones, so that any device
polled from the same lcore can share data
- Any queue that is going to be polled from a specific lcore has to be
added to the list of queues to poll, so that the callback is aware of
other queues being polled by the same lcore
- Both the empty poll counter and the actual power saving mechanism are
shared between all queues polled on a particular lcore, and are only
activated when all queues in the list were polled and were determined
to have no traffic.
- The limitation on UMWAIT-based polling is not removed because UMWAIT
is incapable of monitoring more than one address.
Also, while we're at it, update and improve the docs.
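A minimal sketch of how an application could enable power management on
several queues polled by one lcore, assuming the existing
rte_power_ethdev_pmgmt_queue_enable() API (error handling omitted):
    #include <stdint.h>
    #include <rte_power_pmd_mgmt.h>
    /* Sketch: register every queue polled by this lcore so the shared
     * per-lcore state can track empty polls across all of them. */
    static void
    enable_pmgmt_for_lcore(unsigned int lcore_id,
            const uint16_t *port_ids, const uint16_t *queue_ids,
            unsigned int nb_queues)
    {
        unsigned int i;
        for (i = 0; i < nb_queues; i++)
            rte_power_ethdev_pmgmt_queue_enable(lcore_id,
                    port_ids[i], queue_ids[i],
                    RTE_POWER_MGMT_TYPE_MONITOR);
    }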
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Currently, we expect that only one callback can be active at any given
moment, for a particular queue configuration, which is relatively easy
to implement in a thread-safe way. However, we're about to add support
for multiple queues per lcore, which will greatly increase the
possibility of various race conditions.
We could have used something like RCU for this use case, but absent
a pressing need for thread safety we'll go the easy way and just
mandate that the APIs are to be called when all affected ports are
stopped, and document this limitation. This greatly simplifies the
`rte_power_monitor`-related code.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Previously, the semantics of power monitor were such that we were
checking the current value against the expected value, and if they matched,
then the sleep was aborted. This is somewhat inflexible, because it only
allowed us to check for a specific value in a specific way.
This commit replaces the comparison with a user callback mechanism, so
that any PMD (or other code) using `rte_power_monitor()` can define
their own comparison semantics and decision making on how to detect the
need to abort the entering of power optimized state.
Existing implementations are adjusted to follow the new semantics.
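A hypothetical sketch of the callback-based condition (all names here are
invented for illustration; the real types are in rte_power_intrinsics.h):
    #include <stdint.h>
    /* Illustration only: the driver supplies the monitored address and a
     * comparison callback; generic code calls the callback to decide
     * whether to abort the sleep instead of doing a fixed comparison. */
    typedef int (*monitor_check_t)(const uint64_t value, const uint64_t opaque);
    struct monitor_cond_sketch {
        volatile void *addr; /* address to monitor */
        uint8_t size;        /* size of the read, in bytes */
        monitor_check_t fn;  /* returns non-zero to abort sleeping */
        uint64_t opaque;     /* callback-specific data, e.g. a ring head */
    };
    /* Example PMD-defined callback: wake up once the descriptor status
     * differs from the value captured at setup time. */
    static int
    descriptor_changed(const uint64_t value, const uint64_t opaque)
    {
        return value != opaque;
    }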
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Acked-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
At this point, multiple different Ethernet drivers from multiple vendors
will support the PMD power management scheme. It would be useful to add
it to the NIC feature table to indicate support for it.
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Add cross-compiling guidance for 32-bit aarch32 DPDK on aarch64 host.
Signed-off-by: Phil Yang <phil.yang@arm.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Aaron Conole <aconole@redhat.com>
Currently in DPDK only the acpi_cpufreq and pstate_cpufreq drivers are
supported, and both are unavailable on arm64 platforms. Add
support for the cppc_cpufreq driver, which works on most arm64 platforms.
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Acked-by: David Hunt <david.hunt@intel.com>
The current meson option 'machine' should only specify the ISA, which is
not sufficient for Arm, where setting ISA implies other settings as well
(and is used in Arm configuration as such).
Use the existing 'platform' meson option to differentiate the type of
the build (native/generic) and set ISA accordingly, unless the user
chooses to override it with a new option, 'cpu_instruction_set'.
The 'machine' option used to set the ISA in x86 builds and the
native/default 'build type' in aarch64 builds. These two variables,
'platform' and
'cpu_instruction_set', now properly set both ISA and build type for all
architectures in a uniform manner.
The 'machine' option also doesn't describe very well what it sets. The
new option, 'cpu_instruction_set', is much more descriptive. Keep
'machine' for backwards compatibility.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
In order to allow/disallow configuring rules with identical
patterns, the new device argument 'allow_duplicate_pattern'
is introduced.
If allowed, such rules will be inserted successfully and only the
first rule will take effect.
If disallowed, the first rule will be inserted and the other rules
will be rejected.
The default is to allow.
Set it to 0 to disallow, for example:
-a <PCI_BDF>,allow_duplicate_pattern=0
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This adds validation when creating a policy with a meter action.
Currently, the meter action is only allowed for the green color in a
policy, and a maximum of 8 meters is supported in one meter hierarchy.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, when creating a meter policy, a src port_id match item is
always added in the switch domain. So if one meter is used by another
port, it will not work correctly.
This issue is solved as follows:
1. If the policy fate action is port_id, add the src port_id match item;
the meter cannot be shared by another port.
2. If the policy fate action isn't port_id, don't add the src port_id
match; the meter can be shared by another port.
This fix enables one meter to be shared by different ports. A user can
create a meter flow using a port_id match item to make the meter
shared by another port.
Fixes: afb4aa4f12 ("net/mlx5: support meter policy operations")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
ConnectX-4 and ConnectX-4 Lx NICs require all L2 headers of transmitted
packets to be inlined. By default only the first 18 bytes are inlined,
which is insufficient if additional encapsulation is used, like Q-in-Q.
Thus, the default settings caused such traffic to be dropped on Tx.
Document a recommendation to increase the inlined data size in such cases.
Fixes: 505f1fe426 ("net/mlx5: add Tx devargs")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The WINOF2 2.70 Windows kernel driver allows DevX rule creation
of the TCP and IPv6 types.
Added these types to the supported items in mlx5_flow_os_item_supported
to allow them to be created in the PMD.
Added a description of the new rules support in the Windows kernel driver
WINOF2 2.70 to the mlx5 driver guide.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add support to fetch the SFP EEPROM settings from the firmware.
For SFP+ modules we will display the 0xA0 page for status and the 0xA2
page for other information. For QSFP modules we will show the 0xA0 page.
Also identify the module types for QSFP28, QSFP and QSFP+ apart
from the SFP modules, and return an error for a 10GBase-T PHY.
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
1. Add templates to support Thor platform.
2. Flow counter manager is not enabled if no flow counters are
configured.
3. Mark database is not enabled if mark action is not supported.
4. Removed application to port default flow.
5. Add allocate and write for the global registry file.
6. Multiple default flow templates are combined into one.
7. Remove the default loopback action record; this is required in order
to support multiple platforms.
8. Enable port table support in the generic table.
9. Remove the global template table in order to support multiple
platforms.
10. Add support to get parent VNIC from port table database.
11. VF representor action mark is made optional since not all
configurations need representor support.
12. Add layer 4 ports to computational fields.
13. Update templates to support the above changes.
14. Add support for wildcard.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The conflict resolution feature allows rejection of flows based on
previously added flows that conflict with them. For instance, if a
five-tuple flow is added and then a new flow with only a four-tuple
match but the same layer 2 details is added, the new flow will be
rejected.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Currently, a devarg (host-based-truflow) is passed while launching
the app to enable the TRUFLOW feature. However, this mechanism adds
an extra step to enabling TRUFLOW. It doesn't give a seamless
experience when flow offloads have to work with FW that does or does
not support the TRUFLOW feature. Also, it's likely that customers may
not want to use a devarg to enable flow offloads.
This patch fixes it by checking for TRUFLOW feature support in the
device's capabilities and configurations field of hwrm_ver_get.
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Added support for crypto adapter OP_FORWARD mode.
As OcteonTx CPT crypto completions could be out of order, each crypto op
is enqueued to CPT, dequeued from CPT and enqueued to SSO one-by-one.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch updates the dependency requirement information
for the aesni-gcm, aesni-mb, snow3g, zuc, and kasumi PMDs. Previously,
building these PMDs with Make would fail when the system has
intel-ipsec-mb library version 1.0 or newer installed.
Since the Make build system is already deprecated, instead of fixing
the issue the documentation is updated to state it.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Registered cn9k and cn10k for asymmetric crypto
autotest. Documentation and release notes are also
updated.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add asymmetric crypto capabilities supported
by cn9k and cn10k PMDs. Documentation is also
updated for the same.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add asymmetric crypto session ops for both cn9k
and cn10k PMD.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>