5388 Commits

Ali Alnubani
00373909c8 doc: fix typos and punctuation in flow API guide
This fixes typos and punctuation in the rte_flow API guide.

Fixes: 2f82d143fb31 ("ethdev: add group jump action")
Fixes: 4d73b6fb9907 ("doc: add generic flow API guide")
Cc: stable@dpdk.org

Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2022-03-03 14:10:46 +01:00
Stephen Hemminger
7cc8ef9cf4 add missing newline at EOF
The text files did not end with a newline.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2022-02-27 21:28:58 +01:00
Elena Agostini
24c77594e0 gpu/cuda: map GPU memory with GDRCopy
To enable the gpudev rte_gpu_mem_cpu_map feature to expose
GPU memory to the CPU, the GPU CUDA driver library needs
the GDRCopy library and driver.

If DPDK is built without GDRCopy, the GPU CUDA driver returns an
error if rte_gpu_mem_cpu_map is invoked.

All other GPU CUDA driver functionalities are unaffected by the
absence of GDRCopy, so this is an optional functionality
that can be enabled in the GPU CUDA driver.

CUDA driver documentation has been updated accordingly.
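
As an illustrative sketch only: exposing GPU memory to the CPU via gpudev,
assuming rte_gpu_mem_cpu_map(dev_id, size, ptr) returns NULL and sets
rte_errno on failure (the helper name is a placeholder):

  #include <stdio.h>

  #include <rte_errno.h>
  #include <rte_gpudev.h>

  /* Hypothetical helper: map 'size' bytes of GPU memory at 'gpu_ptr' so the
   * CPU can access them. Without GDRCopy built in, the CUDA driver is
   * expected to report a failure here. */
  static void *
  map_gpu_mem_for_cpu(int16_t dev_id, void *gpu_ptr, size_t size)
  {
      void *cpu_ptr = rte_gpu_mem_cpu_map(dev_id, size, gpu_ptr);

      if (cpu_ptr == NULL)
          printf("cpu map failed: %s\n", rte_strerror(rte_errno));
      return cpu_ptr;
  }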

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
2022-02-27 18:16:20 +01:00
Elena Agostini
8b6502f286 doc: add CUDA driver features
The feature list was missing when the driver was introduced.

Fixes: 1306a73b1958 ("gpu/cuda: introduce CUDA driver")
Cc: stable@dpdk.org

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
2022-02-27 17:45:41 +01:00
Michael Baum
311b17e669 net/mlx5: support queue/RSS actions for external Rx queue
Add support for queue/RSS actions for external Rx queues.
When creating the indirection table, the queue index is taken from the
mapping array.

This feature supports neither LRO nor Hairpin.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-02-25 17:33:31 +01:00
Michael Baum
9d936f4f1a common/mlx5: support remote PD and CTX
Add an option to probe the common device using the import CTX/PD
functions instead of the create functions.
This option requires the context FD and the PD handle to be passed as
devargs.

This sharing can be useful for applications that use the PMD for only
some operations, for example an application that creates queues itself
and uses the PMD just to configure flow rules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-02-25 17:33:31 +01:00
Sean Zhang
5c4d491791 net/mlx5: support matching GRE optional fields
This patch adds matching on the optional fields (checksum/key/sequence)
of the GRE header. Matching on the checksum and sequence fields requires
support from rdma-core with the misc5 and tunnel_header 0-3 capability.

For patterns without checksum or sequence specified, misc keeps being
used for matching as before; for patterns with checksum or sequence, the
capability is validated first and then misc5 is used for the matching.

Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-25 16:34:08 +01:00
Ferruh Yigit
23f7ec1d9b doc: add GRE option flow item to feature list
The 'gre_option' flow item was missing from the feature list; add it.

Fixes: f61490bdf218 ("ethdev: support GRE optional fields")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2022-02-25 19:51:20 +01:00
Tomasz Duszynski
64e63c1929 common/cnxk: support CNF95xx B0 variant
Add CNF95xx B0 variant to the list of supported models.

Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
2022-02-25 11:24:55 +01:00
Vamsi Attunuru
9d127f4404 net/cnxk: make inline inbound device usage as default
Currently, inline inbound device usage is not the default for eventdev.
This patch renames the force_inl_dev devarg to no_inl_dev and enables
the inline inbound device by default.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2022-02-25 09:20:35 +01:00
Satha Rao
50e2c7fdc1 net/cnxk: enable packet marking callbacks
The cnxk platform supports red/yellow packet marking based on the TM
configuration. This patch sets hooks to enable/disable packet
marking for VLAN DEI, IP DSCP and IP ECN. Marking is enabled only
in scalar mode.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2022-02-25 08:47:53 +01:00
Vidya Sagar Velumuri
c062f5726f net/cnxk: support IP reassembly
Added capability and support for inline inbound IP reassembly
in the cnxk driver. The IP reassembly offload is supported only
when the inline IPsec security offload is enabled.

If IP reassembly is incomplete, the mbufs are attached via the
mbuf dynamic field and a dynamic flag is set accordingly.

Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2022-02-24 21:33:33 +01:00
Suanming Mou
c40c061a02 net/mlx5: add basic flow queue operation
The HW steering engine uses an async queue-based flow rules management
mechanism. The matcher and part of the actions are prepared during
flow table creation; some remaining actions are constructed during
flow creation if needed.

A flow postpone attribute bit describes whether flow management
should be applied to the HW directly. An extra push function
is provided to force-push all the cached flows to the HW.

Once the flow has been applied to the HW, the pull function
will be called to get the queued creation/destruction flows.

The DR rule flow memory is managed in the PMD layer instead of being
allocated from the HW steering layer. When destroying a flow,
the flow rule memory can only be freed after the CQE is
received.

A HW queue job descriptor is introduced to convey the flow
information and operation type between the flow insertion/destruction
and the pull function.

This commit adds the basic flow queue operation for:
rte_flow_async_create();
rte_flow_async_destroy();
rte_flow_push();
rte_flow_pull();

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:20 +01:00
Suanming Mou
d84c3cf766 net/mlx5: introduce hardware steering enable routine
The new hardware steering engine relies on using dedicated steering WQEs
instead of writing to the low-level steering table entries directly.
In the first implementation, the hardware steering engine supports only
the new queue-based Flow API; the existing synchronous, non-queue-based
Flow API is not supported.

A new dv_flow_en value 2 is added to manage mlx5 PMD steering engine:

dv_flow_en	rte_flow API	rte_flow_async API
------------------------------------------------
 0		support		not support
 1		support		not support
 2		not support	support

This commit introduces the extra dv_flow_en = 2 value to select the new
flow initialization and management routines.
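
For illustration only, a hedged sketch of selecting the new engine through
the devargs at probe time (the PCI address and helper name are placeholders):

  #include <rte_dev.h>

  /* Hypothetical sketch: hot-plug probe of an mlx5 port requesting the new
   * HW steering engine via dv_flow_en=2; the PCI address is a placeholder. */
  static int
  probe_with_hw_steering(void)
  {
      return rte_dev_probe("0000:03:00.0,dv_flow_en=2");
  }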

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 22:10:17 +01:00
Viacheslav Ovsiienko
49e8797619 net/mlx5: support wait on time in Tx
Hardware starting from ConnectX-7 supports waiting until a
specified moment of time with the newly introduced wait
descriptor. A timestamp can be placed directly
into the descriptor and pushed to the send queue.
Once the hardware encounters the wait descriptor, the
queue operation is suspended until the specified moment
of time. This patch updates the Tx datapath to handle
this new hardware wait capability.

The PMD documentation and release notes are updated accordingly.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-24 13:46:57 +01:00
Alexander Kozyrev
d906fff518 app/testpmd: add async indirect actions operations
Add testpmd support for the rte_flow_async_action_handle API.
Provide the command line interface for enqueueing indirect action operations.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
f9bf7dff5d app/testpmd: add flow queue pull operation
Add testpmd support for the rte_flow_pull API.
Provide the command line interface for pulling operation results.
Usage example: flow pull 0 queue 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
9cbbee1451 app/testpmd: add flow queue push operation
Add testpmd support for the rte_flow_push API.
Provide the command line interface for pushing operations.
Usage example: flow queue 0 push 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
ecdc927b99 app/testpmd: add async flow create/destroy operations
Add testpmd support for the rte_flow_async_create/rte_flow_async_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
  testpmd> flow queue 0 create 0 postpone no
           template_table 6 pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
c4b3887334 app/testpmd: add flow table management
Add testpmd support for the rte_flow_template_table API.
Provide the command line interface for the flow
table creation/destruction. Usage example:
  testpmd> flow template_table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 pattern_template 2 actions_template 4
  testpmd> flow template_table 0 destroy table 6

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
04cc665fab app/testpmd: add flow template management
Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for the template creation/destruction. Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
9ad3a41ab2 app/testpmd: add flow engine configuration
Add testpmd support for the rte_flow_configure API.
Provide the command line interface for the flow engine configuration.
Usage example: flow configure 0 queues_number 8 queues_size 256

Implement rte_flow_info_get API to get available resources:
Usage example: flow info 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
13cd6d5cc7 ethdev: bring in async indirect actions operations
The queue-based flow rules management mechanism is suitable
not only for flow rule creation/destruction, but also
for speeding up other types of Flow API management.
Indirect action object operations may be executed
asynchronously as well. Provide async versions for all
indirect action operations, namely:
rte_flow_async_action_handle_create,
rte_flow_async_action_handle_destroy and
rte_flow_async_action_handle_update.
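
A minimal sketch of enqueueing the creation of an indirect COUNT action on
flow queue 0, assuming the signatures introduced here (the helper name is a
placeholder):

  #include <rte_flow.h>

  /* Hypothetical sketch: enqueue creation of an indirect COUNT action on
   * flow queue 0 and let rte_flow_pull() report its completion later. */
  static struct rte_flow_action_handle *
  enqueue_indirect_count(uint16_t port_id, struct rte_flow_error *err)
  {
      const struct rte_flow_op_attr op_attr = { .postpone = 0 };
      const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
      const struct rte_flow_action_count count = { 0 };
      const struct rte_flow_action action = {
          .type = RTE_FLOW_ACTION_TYPE_COUNT,
          .conf = &count,
      };

      return rte_flow_async_action_handle_create(port_id, 0, &op_attr,
                                                 &conf, &action,
                                                 NULL /* user_data */, err);
  }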

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-02-24 14:04:48 +01:00
Alexander Kozyrev
197e820c66 ethdev: bring in async queue-based flow rules operations
A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe and the queue
should be accessed from the same thread for all queue operations.
It is the responsibility of the app to sync the queue functions in case
of multi-threaded access to the same queue.

The rte_flow_async_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_async_destroy() function
enqueues a flow destruction to the requested queue.
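
For illustration only, a hedged sketch of the enqueue/push/pull sequence,
assuming the signatures introduced here (the table, pattern and actions are
assumed to have been prepared beforehand; names are placeholders):

  #include <rte_flow.h>

  /* Hypothetical sketch: enqueue one rule creation on flow queue 0, push
   * it to the HW and poll until its completion status is pulled back. */
  static int
  enqueue_one_rule(uint16_t port, struct rte_flow_template_table *table,
                   const struct rte_flow_item pattern[],
                   const struct rte_flow_action actions[])
  {
      const struct rte_flow_op_attr op_attr = { .postpone = 0 };
      struct rte_flow_op_result res[1];
      struct rte_flow_error err;
      struct rte_flow *flow;
      int n;

      flow = rte_flow_async_create(port, 0, &op_attr, table,
                                   pattern, 0, actions, 0, NULL, &err);
      if (flow == NULL)
          return -1;
      rte_flow_push(port, 0, &err);      /* flush any postponed operations */
      do
          n = rte_flow_pull(port, 0, res, 1, &err);
      while (n == 0);                    /* poll until a result is available */
      return (n == 1 && res[0].status == RTE_FLOW_OP_SUCCESS) ? 0 : -1;
  }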

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-02-24 14:04:47 +01:00
Alexander Kozyrev
f076bcfbcf ethdev: add flow item/action templates
Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.
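
As an illustrative sketch of the creation sequence, assuming the template API
introduced here (the attribute values and the DROP/ETH choice are only
examples):

  #include <rte_flow.h>

  /* Hypothetical sketch: one pattern template (ETH dst mask), one actions
   * template (DROP) and a table able to hold up to 64 rules in group 9. */
  static struct rte_flow_template_table *
  create_drop_table(uint16_t port, struct rte_flow_error *err)
  {
      static const struct rte_flow_item_eth eth_mask = {
          .hdr.dst_addr.addr_bytes =
              { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
      };
      const struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH, .mask = &eth_mask },
          { .type = RTE_FLOW_ITEM_TYPE_END },
      };
      const struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_DROP },
          { .type = RTE_FLOW_ACTION_TYPE_END },
      };
      const struct rte_flow_pattern_template_attr pt_attr = { .ingress = 1 };
      const struct rte_flow_actions_template_attr at_attr = { .ingress = 1 };
      const struct rte_flow_template_table_attr tbl_attr = {
          .flow_attr = { .group = 9, .priority = 4, .ingress = 1 },
          .nb_flows = 64,
      };
      struct rte_flow_pattern_template *pt;
      struct rte_flow_actions_template *at;

      pt = rte_flow_pattern_template_create(port, &pt_attr, pattern, err);
      at = rte_flow_actions_template_create(port, &at_attr, actions,
                                            actions /* mask */, err);
      if (pt == NULL || at == NULL)
          return NULL;
      return rte_flow_template_table_create(port, &tbl_attr,
                                            &pt, 1, &at, 1, err);
  }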

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-02-24 14:04:47 +01:00
Alexander Kozyrev
4ff58b734b ethdev: introduce flow engine configuration
The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, the PMD may use some hints
provided by the application at the initialization phase. The
rte_flow_configure() function allows pre-allocating all the needed
resources beforehand. These resources can be used at a later stage
without costly allocations. Every PMD may use only a subset of the
hints and ignore unused ones, or fail if the requested configuration
is not supported.

The rte_flow_info_get() function is available to retrieve information
about the supported pre-configurable resources. Both these functions
must be called before any other usage of the flow API engine.
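
A minimal sketch of the intended call order, assuming the signatures
introduced here (the queue count and size are illustrative, the helper name
is a placeholder):

  #include <rte_flow.h>

  /* Hypothetical sketch: query the pre-configurable resources, then set up
   * 8 flow queues of 256 entries each before any other Flow API call. */
  static int
  configure_flow_engine(uint16_t port, struct rte_flow_error *err)
  {
      struct rte_flow_port_info port_info;
      struct rte_flow_queue_info queue_info;
      const struct rte_flow_port_attr port_attr = { 0 };
      const struct rte_flow_queue_attr queue_attr = { .size = 256 };
      const struct rte_flow_queue_attr *queue_attrs[8];
      int i;

      if (rte_flow_info_get(port, &port_info, &queue_info, err) != 0)
          return -1;
      for (i = 0; i < 8; i++)
          queue_attrs[i] = &queue_attr;
      return rte_flow_configure(port, &port_attr, 8, queue_attrs, err);
  }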

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-02-24 14:04:47 +01:00
Nithin Dabilpuram
fe5846bcc0 net/cnxk: add devargs for min-max SPI
Add support for specifying the inline inbound SPI range via devargs,
instead of just the maximum SPI value with an implicit range of 0..max.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2022-02-23 17:38:20 +01:00
Sunil Kumar Kori
9544713564 net/cnxk: support priority flow control
Add priority flow control support for CNXK platforms.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2022-02-23 17:34:08 +01:00
Michael Baum
a6b9d5a538 common/mlx5: update doorbell mapping parameter name
The "tx_db_nc" devarg forces doorbell register mapping to non-cached
region eliminating the extra write memory barrier. This argument was
used in creating the UAR for Tx and thus affected its performance.

Recently [1] its use has been extended to all UAR creation in all mlx5
drivers, and now its name is no longer so accurate.

This patch changes its name to "sq_db_nc" to suit any send queue that
uses it. The old name will still work for backward compatibility.

[1] commit 5dfa003db53f ("common/mlx5: fix post doorbell barrier")

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-23 15:57:43 +01:00
Michael Baum
a3ade5e34d doc: add shared guide for mlx5 drivers
Add new documentation for the MLX5 common driver that contains:
 - Its features list (doesn't exist for now).
 - Its devargs description.
 - Device configuration information and tutorial.
 - Quick Start Guide for Mellanox OFED/EN.

Move all shared information from the other MLX5 PMD docs into this doc
and add a reference to the new common doc in them.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-23 15:57:42 +01:00
Michael Baum
67e1bb42b9 doc: correct name of BlueField-2 in mlx5 guide
Update "BlueField 2" -> "BlueField-2" in mlx5 docs.

Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-23 15:57:41 +01:00
Michael Baum
ec49089884 doc: replace broken links in mlx guides
Update links in both the mlx4 and mlx5 docs.

Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-23 15:57:41 +01:00
Michael Baum
9c7dc70265 doc: remove obsolete vector Tx explanations from mlx5 guide
The vectorized routines were removed as a result of the Tx datapath
refactoring, and the devarg keys documentation was updated.

However, more updating should have been done: the environment variables
doc still contained an explanation related to vectorized Tx which is no
longer relevant.

This patch removes this irrelevant explanation.

Fixes: a6bd4911ad93 ("net/mlx5: remove Tx implementation")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-02-23 15:57:40 +01:00
Michal Krawczyk
cc0c5d2519 net/ena: make Tx completion timeout configurable
The default missing Tx completion timeout was set to 5 seconds.
In order to provide users with an interface to control this timeout and
align it with the application's watchdog, a device argument for
controlling this value was added.

The parameter is called 'miss_txc_to' and can be modified using the
devargs interface:

  ./app -a <bdf>,miss_txc_to=UINT_NUMBER

This parameter accepts values from 0 to 60 and indicates the number of
seconds after which a Tx packet will be considered as missing.

HW hints for the Tx completion timeout were removed so that they do not
overwrite the parameter provided by the user. Also, setting the default
Tx completion timeout value was moved from the configuration phase to
the init phase in order to simplify the default value assignment.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
2022-02-23 19:01:03 +01:00
Michal Krawczyk
3cec73fabb net/ena: support xstat names by ID
ENA only supported retrieval of all the xstats names and was not
implementing the eth_xstats_get_names_by_id API.

This API may be more efficient than retrieving all the names, as it
tries to avoid excessive string copying.
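
For illustration only, a hedged sketch of retrieving a couple of xstats
names by ID through the generic ethdev API (the IDs and helper name are
placeholders):

  #include <stdio.h>

  #include <rte_ethdev.h>

  /* Hypothetical sketch: fetch the names of two specific xstats by ID. */
  static int
  print_two_xstat_names(uint16_t port_id)
  {
      uint64_t ids[2] = { 0, 1 };
      struct rte_eth_xstat_name names[2];
      int ret = rte_eth_xstats_get_names_by_id(port_id, names, 2, ids);

      if (ret < 0)
          return ret;
      printf("%s, %s\n", names[0].name, names[1].name);
      return 0;
  }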

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
2022-02-23 19:01:03 +01:00
Dawid Gorecki
a52b317e7d net/ena: support Tx mbuf free on demand
The ENA driver did not allow applications to call tx_cleanup. Freeing
Tx mbufs was always done by the driver, and it was not possible to
manually request the driver to free mbufs.

Modify the ena_tx_cleanup function to accept the maximum number of
packets to free and to return the number of packets that were freed.
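
As a hedged illustration, an application can now request Tx mbuf cleanup
through the generic ethdev API (queue and budget values are placeholders):

  #include <rte_ethdev.h>

  /* Hypothetical sketch: ask the driver to free up to 32 Tx mbufs on
   * queue 0; the return value is the number of freed packets (or < 0). */
  static int
  reclaim_tx_mbufs(uint16_t port_id)
  {
      return rte_eth_tx_done_cleanup(port_id, 0, 32);
  }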

Signed-off-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
2022-02-23 19:01:03 +01:00
Michal Krawczyk
850e1bb1c7 net/ena/base: make IO memzone unique per port
Originally, the ena_com memzone counter was shared by ports, which made
the memzones harder to identify and could potentially lead to races;
because of that, the counter had to be atomic.

This atomic counter was a global variable and could not work in the
multi-process implementation.

The memzone is now identified by the memzone counter local to the port
and the port ID - both of these can be found in the shared data, so it
can be probed easily.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
2022-02-23 19:01:03 +01:00
Stanislaw Kardach
e3595539e0 net/ena: proxy AQ calls to primary process
Due to how the ena_com compatibility layer is written, all functions
that trigger AQ commands use the stack to save the AQ results and then
copy them back to the caller.
Therefore, to keep the compatibility layer common, introduce the
ENA_PROXY macro. It either calls the wrapped function directly (in the
primary process) or proxies it to the primary process via the DPDK IPC
mechanism. Since all proxied calls are taken under a lock, the result
data is shared through shared memory (in struct ena_adapter) to work
around the 256B IPC parameter size limit.

New proxy calls can be added by
1. Adding a new message type at the end of enum ena_mp_req
2. Adding new message arguments to the struct ena_mp_body if needed
3. Defining proxy request descriptor with ENA_PROXY_DESC. Its arguments
   include handlers for request preparation and response processing.
   Any of those may be empty (aside from marking arguments as used).
4. Adding request handling logic to ena_mp_primary_handle()
5. Replacing proxied function calls with ENA_PROXY(adapter, <func>, ...)
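
For illustration only, a generic, hedged sketch of the underlying DPDK IPC
request path (not the actual ENA code; the message name and payload are
hypothetical):

  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  #include <rte_eal.h>
  #include <rte_string_fns.h>

  /* Hypothetical sketch: a secondary process sends a small request to the
   * primary and waits for the reply. The 256B param limit is the reason
   * larger results go through shared memory instead. */
  static int
  proxy_request(const void *args, size_t len)
  {
      struct rte_mp_msg req;
      struct rte_mp_reply reply;
      struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
      int ret;

      if (len > sizeof(req.param))       /* RTE_MP_MAX_PARAM_LEN is 256B */
          return -1;
      memset(&req, 0, sizeof(req));
      rte_strlcpy(req.name, "mp_ena_example", sizeof(req.name));
      memcpy(req.param, args, len);
      req.len_param = (int)len;

      ret = rte_mp_request_sync(&req, &reply, &ts);
      if (ret == 0)
          free(reply.msgs);              /* reply array is allocated by EAL */
      return ret;
  }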

Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
2022-02-23 19:01:03 +01:00
Michal Krawczyk
b2c1fe38ad net/ena/base: use optimized memcpy version also on Arm
As the default behavior for arm64 is to alias rte_memcpy as memcpy, ENA
cannot redefine memcpy as rte_memcpy, as that would cause a nested
declaration.

To make it possible to use optimized memcpy in the ena_com layer on Arm,
the driver now redefines memcpy when it is beneficial:
  * For arm64 only when the flag RTE_ARCH_ARM64_MEMCPY was defined
  * For arm only when the flag RTE_ARCH_ARM_NEON_MEMCPY was defined
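
A hedged sketch of this kind of conditional redefinition (the ena_memcpy
wrapper name is hypothetical, not the driver's actual macro):

  #include <string.h>

  #include <rte_memcpy.h>

  /* Hypothetical sketch: use the optimized copy only when the dedicated
   * Arm memcpy implementations are enabled, so the arm64 default alias
   * (rte_memcpy == memcpy) cannot cause a nested declaration. */
  #if (defined(RTE_ARCH_ARM64) && defined(RTE_ARCH_ARM64_MEMCPY)) || \
      (defined(RTE_ARCH_ARM) && defined(RTE_ARCH_ARM_NEON_MEMCPY))
  #define ena_memcpy(dst, src, len) rte_memcpy((dst), (src), (len))
  #else
  #define ena_memcpy(dst, src, len) memcpy((dst), (src), (len))
  #endif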

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
2022-02-23 19:01:03 +01:00
Michal Krawczyk
b9b05d6f86 net/ena: make link status change interrupt configurable
ENA uses AENQ for notifications about various events, like LSC, keep
alive, etc. By default, it enabled all AENQ groups that were supported
by both the driver and the device. As a result, LSC was always processed
even if the application turned it off explicitly.

As DPDK provides applications with the possibility to configure LSC,
ENA should respect that. AENQ groups are now updated upon the configure
step, thus LSC can be activated or disabled between ENA PMD
reconfigurations. Moreover, the LSC capability of the device is
determined dynamically.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
2022-02-23 19:01:02 +01:00
Michal Krawczyk
84daba9962 net/ena: add extra Rx checksum related xstats
* Split 'bad_csum' Rx statistic into 'l3_csum_bad' and 'l4_csum_bad' to
  be able to check which checksum was not calculated properly.
* Add l4_csum_good statistic, which shows how many times L4 Rx checksum
  was properly offloaded.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
2022-02-23 19:01:02 +01:00
Ciara Loftus
dffc3e9be3 doc: add AF_XDP queue setup information
When an AF_XDP PMD is created without specifying the 'start_queue', the
default Rx queue associated with the socket will be Rx queue 0. A common
scenario encountered by users new to AF_XDP is that they create the
socket on queue 0; however, their interface is configured with many more
queues. In this case, traffic might land on, for example, queue 18, which
means it will never reach the socket.

This commit updates the AF_XDP documentation with instructions on how to
configure the interface to ensure the traffic will land on queue 0 and
thus reach the socket successfully.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2022-02-23 13:37:58 +01:00
John Miller
51ec6c74e8 net/ark: support new devices
Add two new supported device IDs.
Add documentation for new devices.

Signed-off-by: John Miller <john.miller@atomicrules.com>
2022-02-16 00:48:06 +01:00
Martin Spinler
1b4081870e net/nfb: use timestamp offload flag
Rewrite the Rx timestamp setup code to use the standard offload flag.

Signed-off-by: Martin Spinler <spinler@cesnet.cz>
2022-02-15 14:53:41 +01:00
Jie Wang
3f3ae64f14 net/iavf: support L2TPv2 for flow director
Add support for L2TPv2 (including PPP over L2TPv2) protocols in FDIR,
based on the outer MAC src/dst address and the L2TPv2 session ID.

Add support for PPPoL2TPv2oUDP protocols in FDIR, based on the inner IP
src/dst address and the UDP/TCP src/dst port.

Patterns are listed below:
eth/ipv4(6)/udp/l2tpv2
eth/ipv4(6)/udp/l2tpv2/ppp

eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/udp
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/tcp

Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2022-02-15 17:32:33 +01:00
Jie Wang
01d9025629 net/iavf: support L2TPv2 for RSS
Add support for L2TPv2 (including PPP over L2TPv2) protocols in RSS,
based on the outer MAC src/dst address and the L2TPv2 session ID.

Patterns are listed below:
eth/ipv4/udp/l2tpv2
eth/ipv4/udp/l2tpv2/ppp
eth/ipv6/udp/l2tpv2
eth/ipv6/udp/l2tpv2/ppp

Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2022-02-15 17:32:10 +01:00
Pablo de Lara
35cb5bd236 doc: support IPsec Multi-buffer lib v1.2
Updated the AESNI MB, AESNI GCM, KASUMI, ZUC and SNOW3G PMD
documentation guides with information about the latest supported
Intel IPsec Multi-buffer library.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2022-02-24 11:28:29 +01:00
Jakub Poczatek
1998071cb6 doc: fix FIPS guide
Added the missing step for converting SHA request files to the correct
format. Replaced AES_GCM with GCM to follow the correct
naming format.

Fixes: 3d0fad56b74 ("examples/fips_validation: add crypto FIPS application")
Cc: stable@dpdk.org

Signed-off-by: Jakub Poczatek <jakub.poczatek@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
2022-02-23 11:43:14 +01:00
Nithin Dabilpuram
48a398718d examples/ipsec-secgw: add pool size parameters
Add support to enable a per-port packet pool and also to override the
vector pool size from command line args. This is useful
on some HW to tune performance based on the use case.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2022-02-23 11:43:14 +01:00
Arek Kusztal
6c25a68adc crypto/qat: add ECPM algorithm
This patch adds the Elliptic Curve Point Multiplication (ECPM)
algorithm to the Intel QuickAssist Technology PMD.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
2022-02-23 10:17:06 +01:00