When parsing devargs, try to parse using the global device syntax
first. Fall back to the legacy syntax on error.
Example of new global device syntax:
-a bus=pci,addr=82:00.0/class=eth/driver=mlx5,dv_flow_en=1
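For comparison, a roughly equivalent legacy-syntax devargs (illustrative only,
assuming the same mlx5 device and driver option) would be:
-a 82:00.0,dv_flow_en=1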
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Gaetan Rivet <grive@u256.net>
Start a new release cycle with empty release notes.
The ABI version becomes 22.0.
The map files are updated to the new ABI major number (22).
The ABI exceptions are dropped and CI ABI checks are disabled because
compatibility is not preserved.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
The structures rte_cryptodev_sym_session and
rte_cryptodev_asym_session are not used by the
application directly. The application just needs
an opaque pointer which it can attach to the rte_crypto_op
at enqueue time.
Hence, these structures can be internal to the library,
hidden from the user.
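For illustration, a minimal sketch of the resulting usage model (the helpers
below are the existing cryptodev API; treating the session purely as an
opaque pointer is the intent of this change):
#include <rte_cryptodev.h>

static void
enqueue_with_opaque_session(uint8_t dev_id, uint16_t qp_id,
                            struct rte_crypto_op *op, void *sess)
{
        /* the application never dereferences the session contents */
        rte_crypto_op_attach_sym_session(op, sess);
        rte_cryptodev_enqueue_burst(dev_id, qp_id, &op, 1);
}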
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Transfer flow rules may be applied to traffic entering the switch from
many sources. There are flow API pattern items which allow specifying
ingress port match criteria explicitly, but it is not documented
whether the ethdev port used to create the flow rule adds any implicit
match criteria and how it coexists with explicit ones.
These aspects should be documented, and drivers and applications
which rely on a different behaviour must be fixed.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
By its very name, action PORT_ID means that packets hit an ethdev with the
given DPDK port ID. At least the current comments do not state the opposite.
However, some drivers implement it in a different way and direct traffic to
the opposite end of the "wire" plugged into the given ethdev. For example, in
the case of a VF representor, traffic is redirected to the corresponding VF
itself rather than to the representor ethdev, and OvS uses the PORT_ID action
this way.
The documentation must be clarified and, likely, the rte_flow_action_port_id
structure should be extended to support both meanings.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Currently there is a dedicated modify action for each
packet field that the application wants to change.
For example:
RTE_FLOW_ACTION_TYPE_SET_IPV4_DST to modify the destination of IPv4.
The new action RTE_FLOW_ACTION_TYPE_MODIFY_FIELD adds the ability
to modify any field with a single action and, in addition, allows
setting the value from another field and not just from an immediate value.
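For illustration, a hedged sketch of copying the IPv4 source address into the
IPv4 destination address with the new action (identifiers as defined in
rte_flow.h; exact structure layout may differ):
struct rte_flow_action_modify_field conf = {
        .operation = RTE_FLOW_MODIFY_SET,
        .dst = { .field = RTE_FLOW_FIELD_IPV4_DST },
        .src = { .field = RTE_FLOW_FIELD_IPV4_SRC },
        .width = 32, /* copy all 32 bits of the address */
};
struct rte_flow_action action = {
        .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
        .conf = &conf,
};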
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
In the current implementation,
the action rte_flow_action_modify_field is not well defined
for fields larger than 64 bits (for example the IPv6 source address).
In addition, the byte order is also not well defined.
Both of those issues should be fixed.
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Jerin Jacob <jerinj@marvell.com>
To support shared Rx queues, this patch announces the new offload flag
RTE_ETH_RX_OFFLOAD_SHARED_RXQ and a new shared_group field in struct
rte_eth_rxconf for DPDK v21.11.
[1] Mailing list discussion:
https://mails.dpdk.org/archives/dev/2021-July/215575.html
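A hedged sketch of how an application might use the proposed additions
(names as announced, final form subject to review):
static int
setup_shared_rxq(uint16_t port_id, uint16_t nb_desc, unsigned int socket,
                 struct rte_eth_rxconf rxconf, struct rte_mempool *mp)
{
        rxconf.offloads |= RTE_ETH_RX_OFFLOAD_SHARED_RXQ; /* proposed flag */
        rxconf.shared_group = 1;                          /* proposed field */
        return rte_eth_rx_queue_setup(port_id, 0, nb_desc, socket,
                                      &rxconf, mp);
}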
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch announces the renaming of struct
vhost_device_ops to rte_vhost_device_ops in DPDK v21.11.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Adrian Moreno <amorenoz@redhat.com>
Acked-by: Marvin Liu <yong.liu@intel.com>
This patch announces the marking of all the vDPA driver APIs
as internal.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Marvin Liu <yong.liu@intel.com>
This patch announces the experimental tag removal of 10 vhost APIs,
which have been experimental for more than 2 years.
All APIs could be made stable in DPDK 21.11.
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Marvin Liu <yong.liu@intel.com>
The IPsec xform struct would be updated to include IPsec SA lifetime
configuration. The existing member 'esn_soft_limit' only tracks
ESN, and as sequence number control is getting introduced,
'esn_soft_limit' may not indicate the number of packets processed.
Replace it with a new structure to cover all lifetime cases with
support for specifying both soft and hard lifetimes.
ESN control is introduced by https://patches.dpdk.org/patch/95808/
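One possible shape of the replacement structure (a hedged sketch; field names
are illustrative and subject to review):
struct rte_security_ipsec_lifetime {
        uint64_t packets_soft_limit; /* soft expiry in packets */
        uint64_t bytes_soft_limit;   /* soft expiry in bytes */
        uint64_t packets_hard_limit; /* hard expiry in packets */
        uint64_t bytes_hard_limit;   /* hard expiry in bytes */
};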
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The structure rte_security_session is not directly used
by the application. The application just needs an opaque
pointer to attach to the mbuf or rte_crypto_op at
enqueue time. Hence, it can be hidden inside the library,
which also removes an unnecessary indirection to the private
session data in the fast path.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The current crypto raw data vectors need to be extended to support
out-of-place processing. It is proposed to add an additional dest_sgl
field to provide details for the destination SGL.
The same is also extended to support rte_security use cases, where
the total buffer length is needed, in addition to the data length, so
that the driver/HW knows how much extra space is available in the
buffer to write the expanded data after encryption.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
In the crypto adapter metadata, the first 8 bytes of the request info are a
placeholder for the response info. For better clarity, the reserved field
should be removed from the request info. New space for the response info can
be made by changing the type of the event crypto metadata from a union to a
structure.
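For illustration, a hedged sketch of the announced change (member names follow
the existing rte_event_crypto_adapter.h definitions):
/* today: request and response info share the same space */
union rte_event_crypto_metadata {
        struct rte_event_crypto_request request_info;
        struct rte_event response_info;
};
/* announced: each gets dedicated space */
struct rte_event_crypto_metadata {
        struct rte_event_crypto_request request_info;
        struct rte_event response_info;
};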
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Lcore state FINISHED is used by the worker thread to indicate that
it has completed the assigned task. The state is changed to
WAIT by another thread after it observes the updated state. This
additional step is redundant. After this deprecation, the worker
thread will update the state to WAIT.
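A minimal sketch of the launch/wait flow this affects (existing EAL APIs; the
only change is which thread performs the FINISHED-to-WAIT transition):
#include <rte_launch.h>

static int
worker(void *arg)
{
        (void)arg;
        /* ... assigned work ... */
        return 0; /* the worker reports completion here */
}

static void
launch_and_collect(unsigned int worker_lcore)
{
        rte_eal_remote_launch(worker, NULL, worker_lcore);
        /* returns once the worker is done; the lcore ends up in WAIT */
        rte_eal_wait_lcore(worker_lcore);
}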
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Feifei Wang <feifei.wang2@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
A bug with segmented packets has been discovered, but the agreement
to apply the fix was not reached at the time of the DPDK 21.08 release.
This bug seems to have been in DPDK for many years and should be fixed in 21.11.
Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Make the driver layer internal and remove the unnecessary rte_ prefix from
structures and functions that are not part of the public API.
Promote experimental trace and vector APIs to stable.
Add reserved field to `rte_event_timer` structure.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
One reserved byte in the rte_crypto_op struct would be used to indicate
warnings and other information from the crypto/security operation. This
field will be used to communicate events such as soft expiry with IPsec
in lookaside mode.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The APIs which are internal to the PMDs and the cryptodev library
can be marked as internal so that ABI checking does not
complain about changes in interfaces which are internal to DPDK.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Announce changes to add two unions to the IPv4 header.
The first union will provide integral and bit-field access to the version
and IHL fields.
The second union will provide integral and bit-field access to the fragment
flags and offset fields.
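A hedged sketch of one possible shape for the first union (byte-order handling
omitted; the second union would follow the same pattern):
union {
        uint8_t version_ihl;       /* integral access, as today */
        struct {
                uint8_t ihl:4;     /* bit-field access */
                uint8_t version:4;
        };
};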
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Move struct rte_intr_handle to be an internal structure to
avoid any ABI breakage in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
E.g.:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows a maximum of 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, either the
RTE_MAX_RXTX_INTR_VEC_ID limit has to be changed, or the vectors have to
be allocated dynamically based on the PCI device MSI-X size at probe
time. Either way it is an ABI breakage.
Discussion thread:
https://mails.dpdk.org/archives/dev/2021-March/202959.html
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The mbuf offload flags do not match the DPDK namespace (they are
not prefixed by RTE_). Announce their rename in 21.11, and the
removal of the old names in 22.11.
A draft coccinelle script is provided to anticipate what the
renaming will be.
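For illustration, the renaming is expected to look like the following (hedged
examples; the full list comes with the 21.11 patches):
PKT_RX_VLAN       ->  RTE_MBUF_F_RX_VLAN
PKT_RX_RSS_HASH   ->  RTE_MBUF_F_RX_RSS_HASH
PKT_TX_IP_CKSUM   ->  RTE_MBUF_F_TX_IP_CKSUM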
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Clarify the ABI policy on the promotion of experimental APIs to stable.
We have a fair number of APIs that have been experimental for more than
2 years. This policy amendment indicates that these APIs should be
promoted or removed, or should at least prompt a conversation between the
maintainer and the original contributor.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Update the minimal SW and HW version information for ASO metering
and metering hierarchy offload support.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Asaf Penso <asafp@nvidia.com>
PCI, vmbus, and auxiliary drivers printed a warning
when NUMA node had been reported as (-1) or not reported by OS:
EAL: Invalid NUMA socket, default to 0
This message and its level might confuse users because the configuration
is valid and nothing happens that requires attention or intervention.
It was also printed without the device identification and with an indent
(PCI only), which is confusing unless DEBUG logging is on to print
the header message with the device name.
Reduce level to INFO, reword the message, and suppress it when there is
only one NUMA node because NUMA awareness does not matter in this case.
Also, remove the indent for PCI.
Fixes: f0e0e86aa3 ("pci: move NUMA node check from scan to probe")
Fixes: 831dba47bd ("bus/vmbus: add Hyper-V virtual bus support")
Fixes: 1afce3086c ("bus/auxiliary: introduce auxiliary bus")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
In a recent update, the misc5 matcher was introduced to
match VXLAN header extra fields. However, ConnectX-5
doesn't support misc5 for UDP ports different from
VXLAN's standard one (4789).
Fall back to the previous approach and use the legacy
misc matcher if a non-standard UDP port is recognized
in the VXLAN flow.
Fixes: 630a587bfb ("net/mlx5: support matching on VXLAN reserved field")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Windows headers define `s_addr`, `min`, and `max` as macros.
If DPDK headers are included after Windows ones, DPDK structure
definitions containing fields with these names get broken (example 1),
as well as any usage of such fields (example 2). If DPDK headers
undefine these macros, it could break consumer code (example 3).
It is proposed to rename the structure fields in DPDK, because Win32 headers
are used more widely than DPDK (a general-purpose platform compared
to a domain-specific kit) and are therefore harder to fix.
Exact new names are left for further discussion.
Example 1:
/* in DPDK public header included after windows.h */
struct rte_type {
int min; /* ERROR: `min` is a macro */
};
Example 2:
#include <rte_ether.h>
#include <winsock2.h>
struct rte_ether_hdr eh;
eh.s_addr.addr_bytes[0] = 0; /* ERROR: `s_addr` is a macro */
Example 3:
#include <winsock2.h>
#include <rte_ether.h>
struct in_addr addr;
addr.s_addr = 0; /* ERROR: there is no `s_addr` field,
and `s_addr` macro is undefined by DPDK. */
Commit 6c068dbd9f ("net: work around s_addr macro on Windows")
modified definition of `struct rte_ether_hdr` to avoid the issue.
However, the workaround assumes `#define s_addr S_addr.S_un`
in Windows headers, which is not a part of official API.
It also complicates the definition of `struct rte_ether_hdr`.
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Khoa To <khot@microsoft.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The struct member dataunit_len was introduced in DPDK 21.05.
It is limited to 16 bits to fit a padding hole in the 32-bit build.
This means the maximum data-unit length is 64 KB.
Some use cases may benefit from a bigger size, such as the proposed 32 bits.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Announce adding the 'RTE_ETH_' prefix to all public ethdev macros/enums in
v21.11.
Backward compatibility macros will be added in v21.11 and will be
removed in v22.11.
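For illustration, hedged examples of the expected renaming:
DEV_RX_OFFLOAD_VLAN_STRIP  ->  RTE_ETH_RX_OFFLOAD_VLAN_STRIP
ETH_RSS_IP                 ->  RTE_ETH_RSS_IP
ETH_LINK_FULL_DUPLEX       ->  RTE_ETH_LINK_FULL_DUPLEX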
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
All ABIs in the PCI bus driver, which are defined in rte_bus_pci.h,
will be removed and the header will be made internal.
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Update the incorrect description about atomic operations
with the provided wrappers in the deprecation doc [1].
[1]https://mails.dpdk.org/archives/dev/2021-July/213333.html
Fixes: 7518c5c4ae ("doc: announce adoption of C11 atomic operations semantics")
Cc: stable@dpdk.org
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
APIs and data structures have been modified as per the deprecation
note, so remove the deprecation notice from the notes.
Fixes: 85f52aa422 ("sched: add pipe config params to subport struct")
Cc: stable@dpdk.org
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Document the role of RTE_ARM_EAL_RDTSC_USE_PMU in enabling
PMU-based rte_rdtsc().
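For example, one way to enable it at build time (a hedged sketch, assuming the
macro is consumed as a compile-time definition):
meson setup build -Dc_args='-DRTE_ARM_EAL_RDTSC_USE_PMU'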
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Spell-checked and corrected the documentation.
If there are any errors, or I have changed something that wasn't an error,
please reach out to me so I can update the dictionary.
Cc: stable@dpdk.org
Signed-off-by: Henry Nadeau <hnadeau@iol.unh.edu>
Currently the sample app user guides use hard-coded code snippets;
this patch changes these to use literalinclude, which will dynamically
update the snippets as changes are made to the code.
This was introduced in commit 413c75c33c ("doc: show how to include
code in guides"). Comments within the sample apps were updated to
accommodate this as part of this patch. This will help to ensure that
the code within the sample app user guides is up to date and not out
of sync with the actual code.
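For illustration, a hedged example of the directive form used (the file path
and snippet markers below are illustrative only):
.. literalinclude:: ../../../examples/skeleton/basicfwd.c
   :language: c
   :start-after: Basic forwarding application lcore. 8<
   :end-before: >8 End Basic forwarding application lcore.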
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Announce changes to make rte_security_set_pkt_metadata() and
rte_security_get_userdata() inline functions instead of regular C
functions, and also the addition of another field in struct
rte_security_ctx for holding flags.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The support for multiple data-units includes the following:
- Add a new command-line argument to provide the data-unit length.
- Set the length in the cipher xform.
- Validate device capabilities for this feature.
- Pad the AES-XTS operation length to be aligned to the defined data-unit.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Running with stdout suppressed or redirected for further processing
is very confusing in the case of errors. Fix it by logging errors and
warnings to stderr.
Since lines with log messages are touched anyway, concatenate split
format strings to make them easier to search for using grep.
Fix the indent of format string arguments.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
The first argument to rte_bsf32_safe was incorrectly declared as
a 64-bit value. The code only works on 32-bit values, and the underlying
function rte_bsf32 only accepts 32-bit values. This was a mistake
introduced when the safe version was added, probably caused
by copy/paste from the 64-bit version.
The bug passed silently under the radar until some other code was
built with -Wall and -Wextra in C++, and C++ complains about the
missing cast.
Yes, this is an API signature change, but the original code was wrong.
It is an inline function, so this is not an ABI change.
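A hedged usage sketch with the corrected 32-bit signature:
#include <stdio.h>
#include <stdint.h>
#include <rte_common.h>

static void
print_lowest_set_bit(uint32_t v)
{
        uint32_t pos;

        if (rte_bsf32_safe(v, &pos)) /* returns 0 only when v == 0 */
                printf("lowest set bit of 0x%x is %u\n", v, pos);
}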
Fixes: 4e261f5519 ("eal: add 64-bit bsf and 32-bit safe bsf functions")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Applications need to stop DMA transfers and finish all the in-flight
packets in the VM memory hot-plug case when async vhost is used. This
patch provides an unsafe API to clear in-flight packets which
have been submitted to the DMA engine in the vhost async data path, and
updates the programmer's guide and release notes for the virtqueue
in-flight packet clear API in the vhost library.
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The prerequisite info is already present in the platform guide.
No need to repeat it in individual dev guides.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Allow the user to specify their own hash key and hash ctrl if the
device supports that. The HW interprets the key in reverse byte order,
so the PMD reorders the key before passing it to the ena_com layer.
The default key is set in a random manner each time the device is
initialized.
Moreover, make minor adjustments to the reta size setting in terms
of the returned error values.
The RSS code was moved to the ena_rss.c file to improve readability.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
In order to support asynchronous Rx in applications, the driver has
to configure the event file descriptors and the HW.
This patch configures appropriate data structures for the rte_ethdev
layer, adds .rx_queue_intr_enable and .rx_queue_intr_disable API
handlers, and configures IO queues to work in the interrupt mode, if it
was requested by the application.
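For context, a hedged sketch of the application side that exercises these
handlers (generic ethdev calls; port and queue numbers are illustrative):
#include <rte_ethdev.h>

static void
use_rx_interrupts(uint16_t port_id)
{
        struct rte_eth_conf conf = { .intr_conf = { .rxq = 1 } };

        rte_eth_dev_configure(port_id, 1, 1, &conf);
        /* ... queue setup and rte_eth_dev_start() ... */
        rte_eth_dev_rx_intr_enable(port_id, 0);  /* arm before waiting */
        /* ... wait on the queue event file descriptor ... */
        rte_eth_dev_rx_intr_disable(port_id, 0); /* disarm, then poll */
}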
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Artur Rojek <ar@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
The support of RFC 2698 and RFC 4115 is added in the mlx5 PMD. Only the
ASO metering supports these two profiles.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In the previous implementation, the policy for the yellow color was not
supported and the action validation for yellow was skipped.
Since the yellow color policy needs to be supported, the validation
should also be done for the yellow color. In the meanwhile, since
the color policies of one meter should be used for the
same flow(s), the supported domains of both colors should be the
same. If both colors have RSS as the termination action, all RSS
parameters except the queues should be the same.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
RoCE disabling requirement is based on PCI address.
In order to support Sub-Function, a conversion is needed
in the case of an auxiliary device.
SF device can be probed with such devargs string:
auxiliary:mlx5_core.sf.<id>,class=vdpa
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Introduce SF support.
Similar to a VF, an SF on the auxiliary bus is a portion of the hardware PF;
there are no representor or bonding parameters for an SF.
Devargs to support SF:
-a auxiliary:mlx5_core.sf.8,dv_flow_en=1
New global syntax to support SF:
-a bus=auxiliary,name=mlx5_core.sf.8/class=eth/driver=mlx5,dv_flow_en=1
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch adds thread-unsafe versions of the async register and
unregister functions.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch reworks the async configuration structure to improve code
readability. In addition, add reserved padding fields to the structure
for future usage.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch allows checking the number of in-flight packets
for a vhost queue using async acceleration.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch provides support for IPsec protocol
offload to the hardware.
The following security operations are added:
- session_create
- session_destroy
- capabilities_get
Signed-off-by: Michael Shamis <michaelsh@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Tested-by: Liron Himi <lironh@marvell.com>
In order to test the new mlx5 crypto PMD, the driver is added to the
crypto test application.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The crypto operations are done with a WQE set which contains
one UMR WQE and one RDMA write WQE. Most segments of the WQE
set are initialized properly during queue setup; only a limited set of
segments is initialized according to the crypto details in the
datapath process.
This commit adds the enqueue and dequeue operations and updates
the WQE set segments accordingly.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The mlx5 HW crypto operations are done by attaching a crypto property
to a memory region. Once done, every access to the memory via the
crypto-enabled memory region will result in in-line encryption or
decryption of the data.
As a result, the design choice is to provide two types of WQEs. One
is a UMR WQE which sets the crypto property, and the other is an RDMA
write WQE which sends a DMA command to copy data from the local MR to
the remote MR.
The size of the WQEs will be defined by a new devarg called
max_segs_num.
This devarg also defines the maximum number of segments in an mbuf chain
that will be supported for crypto operations.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
A keytag is a piece of data encrypted together with a DEK.
When a DEK is referenced by an MKEY.bsf through its index, the keytag is
also supplied in the BSF as plaintext. The HW will decrypt the DEK (and
the attached keytag) and will fail the operation if the keytags don't
match.
This commit adds the configuration of the keytag with devargs.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
To work with crypto engines that are marked with wrapped_import_method,
a login session is required.
A crypto login object needs to be created using DevX.
The crypto login object contains:
- The credential pointer.
- The import_KEK pointer to be used for all secured information
communicated in crypto commands (key fields), including the
provided credential in this command.
- The credential secret, wrapped by the import_KEK indicated in
this command. Its size includes 8 bytes of IV for wrapping.
Added devargs for the required login values:
- wcs_file - path to the file containing the credential.
- import_kek_id - the import KEK pointer.
- credential_id - the credential pointer.
Create the login DevX object in the pci_probe function and destroy it
in pci_remove.
Destroying the crypto login object means logging out.
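For example, a hedged devargs sketch combining these values (the class
argument and all values shown are illustrative):
-a <PCI_BDF>,class=crypto,wcs_file=/path/to/credential,import_kek_id=0x1,credential_id=0x2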
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Mellanox user space drivers don't deal with physical addresses as part
of a memory protection mechanism.
The device translates the given virtual address to a physical address
using the given memory key as an address space identifier.
That's why any mbuf virtual address is moved directly to the HW
descriptor (WQE).
The mapping between the virtual address and the physical address is saved
in an MR configured by the kernel to the HW.
Each MR has a key that should also be moved to the WQE by the SW.
When the SW sees an unmapped address, it extends the address range and
creates an MR using a system call.
Add memory region cache management:
- a 2-level cache per queue pair - no locks.
- 1 shared cache between all the queues, using a lock.
This way, the MR key search per data-path address is optimized.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Sessions are used in symmetric transformations in order to prepare
objects and data for packet processing stage.
A mlx5 session includes iv_offset, pointer to mlx5_crypto_dek struct,
bsf_size, bsf_p_type, block size index, encryption_order and encryption
standard.
Implement the following session operations:
mlx5_crypto_sym_session_get_size - returns the size of the mlx5
session struct.
mlx5_crypto_sym_session_configure - prepares the DEK hash-list
and saves all the session data.
mlx5_crypto_sym_session_clear - destroys the DEK hash-list.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add a new crypto PMD for Mellanox devices.
The crypto PMD will be supported starting from NVIDIA ConnectX-6 and
BlueField-2.
The crypto PMD adds support for encryption and decryption using
the AES-XTS symmetric algorithm.
The crypto PMD requires rdma-core and uses mlx5 DevX.
This patch adds the PCI probing, basic functions, build files and
log utility.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The isal compress PMD has build failures on the Arm platform.
As the dependent library ISA-L is supported on the Arm platform,
support for the PMD is extended to the Arm architecture.
Fix the build failure caused by architecture-specific code,
and make the PMD multi-architecture compatible.
Bugzilla ID: 755
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
For now, a rule may have only one dedicated counter; shared counters
are not supported.
HW delivers (or "streams") counter readings using special packets.
The driver creates a dedicated Rx queue to receive such packets
and requests that HW start "streaming" the readings to it.
The counter queue is polled periodically, and the first available
service core is used for that. Hence, the user has to specify at least
one service core for counters to work. Such a core is shared by all
MAE-capable devices managed by sfc driver.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Add event vector support for cnxk event Rx adapter, add control path
APIs to get vector limits and ability to configure event vectorization
on a given Rx queue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for event eth Rx adapter.
Resize cn10k workslot fastpath structure to fit in 64B cacheline size.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Support AVF RSS for the innermost header of GTPoGRE packets. It supports
RSS based on the innermost IP src + dst address and TCP/UDP src + dst
port.
Signed-off-by: Lingyu Liu <lingyu.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add support for rte_flow_item_raw to parse custom L2 and L3
protocols.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Query MLX5 port hardware for whether it is capable of offloading the
IPv4 IHL field.
Provide flow rules with the capability to match on the IPv4 IHL field.
The minimal HCA firmware version required to offload IPv4 IHL is
xx_30_2000.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add a new testpmd pattern field 'last_rsvd' that supports
matching the last 8 bits of the VXLAN header.
The examples for the "last_rsvd" pattern field are as below:
1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
This flow will match exactly when the last 8 bits equal 0x80.
2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...
This flow will only match when the MSB of the last 8 bits is 1.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
This adds matching on the reserved field of the VXLAN
header (the last 8 bits). The capability from rdma-core
is detected by creating a dummy matcher using misc5
when the device is probed.
For non-zero groups and the FDB domain, the capability is
detected from rdma-core, while for NIC domain group
zero it relies on the HCA_CAP from FW.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
The new flow item allows PMDs to offload the IPv4 IHL field for matching,
if hardware supports that operation.
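For illustration, a hedged sketch of matching IPv4 packets whose IHL is 5
(no options), using the existing item and header definitions:
struct rte_flow_item_ipv4 spec = { .hdr.version_ihl = 0x45 };
struct rte_flow_item_ipv4 mask = { .hdr.version_ihl = 0x0f };
struct rte_flow_item item = {
        .type = RTE_FLOW_ITEM_TYPE_IPV4,
        .spec = &spec,
        .mask = &mask,
};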
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add the bare minimum PMD library and doc build infrastructure
and claim maintainership of the ngbe PMD.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Use the new multi-monitor intrinsic to allow monitoring multiple ethdev
Rx queues while entering the energy efficient power state. The multi
version will be used unconditionally if supported, and the UMWAIT one
will only be used when multi-monitor is not supported by the hardware.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Currently, there is a hard limitation on the PMD power management
support that only allows it to support a single queue per lcore. This is
not ideal as most DPDK use cases will poll multiple queues per core.
The PMD power management mechanism relies on ethdev Rx callbacks, so it
is very difficult to implement such support because callbacks are
effectively stateless and have no visibility into what the other ethdev
devices are doing. This places limitations on what we can do within the
framework of Rx callbacks, but the basics of this implementation are as
follows:
- Replace per-queue structures with per-lcore ones, so that any device
polled from the same lcore can share data
- Any queue that is going to be polled from a specific lcore has to be
added to the list of queues to poll, so that the callback is aware of
other queues being polled by the same lcore
- Both the empty poll counter and the actual power saving mechanism are
shared between all queues polled on a particular lcore, and are only
activated when all queues in the list were polled and were determined
to have no traffic.
- The limitation on UMWAIT-based polling is not removed because UMWAIT
is incapable of monitoring more than one address.
Also, while we're at it, update and improve the docs.
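For illustration, a hedged usage sketch after this change (the enable API
keeps its existing name; port and queue numbers are illustrative):
#include <rte_power_pmd_mgmt.h>

static void
enable_pmgmt_on_two_queues(unsigned int lcore_id)
{
        /* the same lcore can now manage multiple port/queue pairs */
        rte_power_ethdev_pmgmt_queue_enable(lcore_id, 0, 0,
                        RTE_POWER_MGMT_TYPE_MONITOR);
        rte_power_ethdev_pmgmt_queue_enable(lcore_id, 1, 0,
                        RTE_POWER_MGMT_TYPE_MONITOR);
}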
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Currently, we expect that only one callback can be active at any given
moment, for a particular queue configuration, which is relatively easy
to implement in a thread-safe way. However, we're about to add support
for multiple queues per lcore, which will greatly increase the
possibility of various race conditions.
We could have used something like an RCU for this use case, but absent
of a pressing need for thread safety we'll go the easy way and just
mandate that the APIs are to be called when all affected ports are
stopped, and document this limitation. This greatly simplifies the
`rte_power_monitor`-related code.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Previously, the semantics of power monitor were such that we were
checking current value against the expected value, and if they matched,
then the sleep was aborted. This is somewhat inflexible, because it only
allowed us to check for a specific value in a specific way.
This commit replaces the comparison with a user callback mechanism, so
that any PMD (or other code) using `rte_power_monitor()` can define
their own comparison semantics and decision making on how to detect the
need to abort the entering of power optimized state.
Existing implementations are adjusted to follow the new semantics.
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Acked-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
At this point, multiple different Ethernet drivers from multiple vendors
will support the PMD power management scheme. It would be useful to add
it to the NIC feature table to indicate support for it.
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Add cross-compiling guidance for 32-bit aarch32 DPDK on aarch64 host.
Signed-off-by: Phil Yang <phil.yang@arm.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Aaron Conole <aconole@redhat.com>
Currently in DPDK only the acpi_cpufreq and pstate_cpufreq drivers are
supported, and neither is available on arm64 platforms. Add
support for the cppc_cpufreq driver, which works on most arm64 platforms.
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Acked-by: David Hunt <david.hunt@intel.com>
The `doc` target used `echo` as its command.
On Windows, `echo` is always a shell built-in, there is no binary.
Starting from meson 0.58, `run_target()` always searches for command
executable and no longer accepts `echo` as such on Windows.
Replace plain `echo` with a Python one-liner.
Fixes: d02a2dab2d ("doc: support building HTML guides with meson")
Cc: stable@dpdk.org
Reported-by: Rob Scheepens <rob.scheepens@nutanix.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The current meson option 'machine' should only specify the ISA, which is
not sufficient for Arm, where setting ISA implies other settings as well
(and is used in Arm configuration as such).
Use the existing 'platform' meson option to differentiate the type of
the build (native/generic) and set ISA accordingly, unless the user
chooses to override it with a new option, 'cpu_instruction_set'.
The 'machine' option used to set the ISA in x86 builds and the
native/default 'build type' in aarch64 builds. The two variables,
'platform' and 'cpu_instruction_set', now properly set both the ISA and
the build type for all architectures in a uniform manner.
The 'machine' option also doesn't describe very well what it sets. The
new option, 'cpu_instruction_set', is much more descriptive. Keep
'machine' for backwards compatibility.
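For example, a hedged invocation using the new option (the value shown is
illustrative):
meson setup build -Dcpu_instruction_set=generic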
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
In order to allow/disallow configuring rules with identical
patterns, the new device argument 'allow_duplicate_pattern'
is introduced.
If allowed, such rules are inserted successfully and only the
first rule takes effect.
If disallowed, the first rule is inserted and the other rules
are rejected.
The default is to allow.
Set it to 0 to disallow, for example:
-a <PCI_BDF>,allow_duplicate_pattern=0
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>