The DPAA DMA driver is an implementation of the dmadev APIs that provides a means
to initiate a DMA transaction from the CPU. Once initiated, the DMA transfer is
performed without the CPU being involved in the actual data movement. This is
achieved by using the QDMA controller of the DPAA SoC.
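As a minimal sketch (assuming a dmadev device 0 already configured and started
with one vchan, and IOVA addresses src_iova/dst_iova and a length len already
obtained), a copy through the generic dmadev API could look like:
    #include <rte_dmadev.h>
    #include <rte_debug.h>

    /* Enqueue a copy on device 0, vchan 0, and submit it to hardware. */
    int idx = rte_dma_copy(0, 0, src_iova, dst_iova, len, RTE_DMA_OP_FLAG_SUBMIT);
    if (idx < 0)
        rte_panic("failed to enqueue DMA copy\n");

    /* Poll until the copy completes; the QDMA engine moves the data
     * without further CPU involvement. */
    uint16_t last_idx;
    bool error;
    while (rte_dma_completed(0, 0, 1, &last_idx, &error) == 0)
        ;
The device setup calls (rte_dma_configure, rte_dma_vchan_setup, rte_dma_start)
are omitted for brevity.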
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Increase the default value of the config parameter RTE_LIBRTE_IP_FRAG_MAX_FRAG
from 4 to 8. This parameter controls the maximum number of fragments per packet
in the IP reassembly table. Increasing it from 4 to 8 covers the common case of
a 9KB jumbo packet fragmented at the default frame size (1500B): with roughly
1480B of payload per fragment, a 9KB packet splits into about 7 fragments,
which exceeds the old limit of 4 but fits within 8.
As RTE_LIBRTE_IP_FRAG_MAX_FRAG is used in the definition of a public
structure (struct rte_ip_frag_death_row), this is an ABI change.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
In a heterogeneous computing system, processing is not done only by the CPU:
some tasks can be delegated to devices working in parallel.
When mixing network activity with task processing, the CPU and the device may
need to communicate in order to synchronize their operations.
An example could be a receive-and-process application
where the CPU is responsible for receiving packets into multiple mbufs
and the GPU is responsible for processing the content of those packets.
The purpose of this list is to provide a buffer in CPU memory, visible
from the GPU, that can be treated as a circular buffer
to let the CPU provide essential info about received packets to the GPU.
A possible use-case is described below.
CPU:
- Trigger some task on the GPU
- In a loop:
  - receive a number of packets
  - provide packets info to the GPU
GPU:
- Do some pre-processing
- Wait to receive a new set of packets to be processed
The layout of the communication list would be as follows (a CPU-side usage
sketch follows the diagram):
 --------
|   0    | => pkt_list
| status |
| #pkts  |
 --------
|   1    | => pkt_list
| status |
| #pkts  |
 --------
|   2    | => pkt_list
| status |
| #pkts  |
 --------
|  ....  | => pkt_list
 --------
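A minimal CPU-side sketch of the receive loop, assuming gpudev device 0, an
already configured Ethernet Rx queue 0 on port 0, and the gpudev communication
list helpers (error handling omitted):
    #include <rte_ethdev.h>
    #include <rte_gpudev.h>

    #define NB_ITEMS 1024

    struct rte_mbuf *mbufs[32];
    struct rte_gpu_comm_list *comm_list = rte_gpu_comm_create_list(0, NB_ITEMS);
    uint32_t idx = 0;

    for (;;) {
        /* Receive a burst of packets... */
        uint16_t nb = rte_eth_rx_burst(0, 0, mbufs, 32);
        if (nb == 0)
            continue;
        /* ...and publish their addresses and sizes in the next
         * communication list item, where the GPU polls for work. */
        rte_gpu_comm_populate_list_pkts(&comm_list[idx], mbufs, nb);
        idx = (idx + 1) % NB_ITEMS;
    }
Once the GPU is done with an item, the CPU would call rte_gpu_comm_cleanup_list()
on that item before reusing the slot.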
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
In a heterogeneous computing system, processing is not done only by the CPU:
some tasks can be delegated to devices working in parallel.
When mixing network activity with task processing, the CPU and the device may
need to communicate in order to synchronize their operations.
The purpose of this flag is to allow the CPU and the GPU to
exchange ACKs. A possible use-case is described below, followed by a
CPU-side sketch.
CPU:
- Trigger some task on the GPU
- Prepare some data
- Signal to the GPU that the data is ready by updating the communication flag
GPU:
- Do some pre-processing
- Wait for more data from the CPU by polling the communication flag
- Consume the data prepared by the CPU
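A minimal CPU-side sketch, assuming gpudev device 0 (the GPU-side polling
would run in a device kernel launched separately):
    #include <rte_gpudev.h>

    struct rte_gpu_comm_flag flag;

    /* Allocate a flag in CPU memory that the GPU can poll. */
    rte_gpu_comm_create_flag(0, &flag, RTE_GPU_COMM_FLAG_CPU);

    /* ...trigger the GPU task and prepare the data... */

    /* Signal to the GPU that the data is ready. */
    rte_gpu_comm_set_flag(&flag, 1);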
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
In a heterogeneous computing system, processing is not done only by the CPU:
some tasks can be delegated to devices working in parallel.
Such workload distribution can be achieved by sharing some memory.
As a first step, the features are focused on memory management.
A function allows allocating memory inside the device,
or in the main (CPU) memory while making it visible to the device,
as sketched below. This memory may be used to store packets or
synchronization data.
The next step should focus on GPU processing task control.
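A minimal sketch of both allocation styles, assuming gpudev device 0 (the
exact signatures may differ slightly; this follows the gpudev API as
introduced in this release):
    #include <rte_gpudev.h>
    #include <rte_malloc.h>

    /* Allocate 1 MB inside the device memory. */
    void *gpu_buf = rte_gpu_mem_alloc(0, 1 << 20);

    /* Allocate CPU memory and register it so the device can access it. */
    void *cpu_buf = rte_malloc(NULL, 1 << 20, 0);
    rte_gpu_mem_register(0, 1 << 20, cpu_buf);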
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
In a heterogeneous computing system, processing is not done only by the CPU:
some tasks can be delegated to devices working in parallel.
The new library gpudev is for dealing with GPGPU computing devices
from a DPDK application running on the CPU.
The infrastructure is prepared to welcome drivers in drivers/gpu/.
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Add the basic device probe and remove functions and initial
documentation for the new HiSilicon DMA drivers.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
For the Ethernet RQs, if all receive descriptors are exhausted, the
packets being received will be dropped. This behavior prevents slow or
malicious software entities at the host from affecting the network. For
hairpin cases, however, even if no software is involved in forwarding
packets from the Rx to the Tx side, a hiccup in the hardware or back
pressure from the Tx side may still cause the descriptors to be exhausted.
In certain scenarios it may be preferable to configure the device to avoid
such packet drops, assuming the posting of descriptors will resume shortly.
To support this, a new devarg "delay_drop" is introduced. By default,
the delay drop is enabled for hairpin Rx queues and disabled for
standard Rx queues. This value is used as a bit mask:
- bit 0: enablement of standard Rx queue
- bit 1: enablement of hairpin Rx queue
This attribute is applied to all Rx queues of a device; an example invocation
is shown below.
The "rq_delay_drop" capability in the HCA_CAP is checked before
creating any queue. If the hardware capabilities do not support
this delay drop, all the Rx queues will still be created without
this attribute, and the devarg setting will be ignored even if it
is specified explicitly. A warning log is used to notify the
application when this occurs.
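For example (with a hypothetical PCI address), delay drop can be enabled for
both standard and hairpin Rx queues by setting both bits of the mask:
    dpdk-testpmd -a 0000:03:00.0,delay_drop=3 -- -i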
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Implement set/get_sram_policy, which supports selecting the specific SRAM
bank for each TruFlow type in both the Rx and Tx directions.
Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The accumulation of flow counters is no longer determined by the application's
device arguments. Instead, the platform capabilities now dictate whether
software-based accumulation is performed.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This change allows adding IP header matches for GRE flows that
do not specify an outer IP header in the flow match pattern.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Enabled wildcard match support for IPv4 ingress flows.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Added support for socket redirect feature capability so applications
can enable or disable this feature. This patch contains the template
changes.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Update the driver to read the multi-root capability and skip the PCI
address check when creating a ULP session if the multi-root capability
is enabled in the hardware. The DPDK HSI version is updated
from 1.10.2.44 to 1.10.2.68.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
* Added support for NAT action for destination IP and port
combination for Thor devices.
* Consolidated the encapsulation and NAT entries for scaling flows
with NAT actions.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add new vDPA PMD to support vDPA operations of Xilinx devices.
This patch implements probe and remove functions.
Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add event vector support to the inline protocol offload mode.
Vector support is disabled by default; it can be enabled with the
--event-vector option.
Additional options to configure the vector size and vector timeout are
also implemented and can be set with --vector-size and --vector-tmo,
as shown in the example below.
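For example (option values are illustrative; the remaining EAL and
application arguments are elided):
    dpdk-ipsec-secgw <EAL options> -- <app options> \
        --event-vector --vector-size 32 --vector-tmo 100000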
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add telemetry support to the IPsec GW sample app and add
support for per-SA telemetry when using the IPsec library.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add null auth support with lookaside IPsec on cn10k crypto PMDs.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add support to allow the user to specify the MSS for TCP TSO offload on a
per-SA basis. In the context of IPsec, MSS configuration is only supported
for outbound SAs with inline IPsec crypto offload.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add support for transmit segmentation offload to inline crypto processing
mode. This offload is not supported by other offload modes, as at a
minimum it requires inline crypto for IPsec to be supported on the
network interface.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Currently, the log of the compression block size is 15 by default, which
is the maximum.
Add a "log-block-size" devarg to select the compression block size manually.
The value provided should be between 4 and 15; any out-of-range value
defaults to 15.
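For example (hypothetical PCI address; class=compress selects the compress
driver on mlx5 devices, and the application-specific arguments are elided):
    -a 0000:03:00.0,class=compress,log-block-size=10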
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add support for mlx5 crypto pmd on Windows OS.
Add changes to release note and PMD guide.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The cryptodev library now registers commands with telemetry, and
implements the corresponding callback functions. These commands
allow a list of cryptodevs to be queried, as well as info and stats
for the corresponding cryptodev.
An example usage can be seen below:
Connecting to /var/run/dpdk/rte/dpdk_telemetry.v2
{"version": "DPDK 21.11.0-rc0", "pid": 1135019, "max_output_len": 16384}
--> /
{"/": ["/", "/cryptodev/info", "/cryptodev/list", "/cryptodev/stats", ...]}
--> /cryptodev/list
{"/cryptodev/list": [0,1,2,3]}
--> /cryptodev/info,0
{"/cryptodev/info": {"device_name": "0000:1c:01.0_qat_sym", \
"max_nb_queue_pairs": 2}}
--> /cryptodev/stats,0
{"/cryptodev/stats": {"enqueued_count": 0, "dequeued_count": 0, \
"enqueue_err_count": 0, "dequeue_err_count": 0}}
Signed-off-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The change announced in the deprecation notice targeted for 21.11 has been
committed, with the following as the first commit of the series.
Fixes: b7c984291611 ("interrupts: add allocator and accessors")
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: David Marchand <david.marchand@redhat.com>
MAE counters can be polled from a control thread if no service core is
allocated for this purpose.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Add the device and vendor numbers to the PCI ID map so
that a VF can be probed.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The GTP, GTP-U and GTP-C header fields can be matched; however, the NIC does
not support GTP tunneling, so no items after the GTP header can be specified.
If a GTP-U or GTP-C item is specified without a preceding UDP item, the
UDP destination port is implicitly matched. For GTP, the UDP destination
port must be specified, but its value is not enforced.
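An illustrative testpmd rule matching GTP-U traffic without an explicit UDP
item (the UDP destination port is matched implicitly) could look like:
    flow create 0 ingress pattern eth / ipv4 / gtpu / end actions queue index 1 / end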
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Enable protocol-agnostic flow offloading to support raw pattern input
for RSS hash flow rule creation. It is based on the Parser Library feature
and uses the existing rte_flow raw API.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
As previously announced, this patch renames struct
vhost_device_ops to struct rte_vhost_device_ops.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Ten vhost APIs were promoted to stable in the commit below,
so remove the related deprecation notice.
Fixes: 945ef8a04098 ("vhost: promote some APIs to stable")
Reported-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch marks the vDPA driver APIs as internal and
renames the corresponding header file to vdpa_driver.h.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Protocol-agnostic flow offloading in Flow Director is enabled by this
patch, based on the Parser Library and using the existing rte_flow raw API.
Note that a raw flow requires:
1. a byte string of the raw target packet bits;
2. a byte string of the target packet mask.
Here is an example:
FDIR matching IPv4 destination address 1.2.3.4 and redirecting to queue 3:
flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end
Note that the mask of some key bits (e.g., 0x0800 to indicate the IPv4
protocol) is optional in our cases. To avoid redundancy, the mask
of 0x0800 (with 0xFFFF) is omitted in the mask byte string example. The
'0x' prefix for the spec and mask byte (hex) strings is also omitted here.
Also update the ice feature list with rte_flow item raw.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add MAC addresses to filter incoming packets, support setting
multicast addresses to filter, and support setting the unicast table array.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>