Commit Graph

4371 Commits

Author SHA1 Message Date
Andrew Rybchenko
9f3b3a96de ethdev: remove legacy MACVLAN filter type support
Instead of the MACVLAN filter, the rte_flow API should be used.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:35:05 +01:00
Xueming Li
0474419bae vdpa/mlx5: handle hardware error
When a hardware error happens, vdpa did not get this information and
left the driver silent: in a working state but with no response.

This patch subscribes to the firmware virtq error event and tries to
recover up to 3 times in 3 seconds, stopping the virtq if the maximum
retry number is reached.

When an error happens, the PMD logs at warning level. If recovery
fails, it outputs an error log. Query virtq statistics to get the
error counter report.

Acked-by: Matan Azrad <matan@nvidia.com>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-11-03 23:35:05 +01:00
Xueming Li
9fbe97f0ce net/mlx5: remove shared context lock
To support multi-thread flow insertion, this patch removes the shared
data lock, since all resources should support concurrent protection.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:35:05 +01:00
Matan Azrad
86b59a1af6 net/mlx5: support VLAN matching fields
The fields ``has_vlan`` and ``has_more_vlan`` were added to rte_flow by
patch [1].

Using these fields, the application can match all the VLAN options with
a single flow: any, VLAN only and non-VLAN only.

Add support for these fields.
Also add support for matching QinQ packets.

VLAN/QinQ limitations are listed in the driver documentation.

[1] https://patches.dpdk.org/patch/80965/

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-11-03 23:35:04 +01:00
Bing Zhao
fea928802d doc: update hairpin support in mlx5 guide
Hairpin between two ports will be supported by the mlx5 PMD.

The supported scenarios and limitations are listed in "mlx5.rst".

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 23:35:04 +01:00
Ajit Khaparde
e24a5d3f58 net/bnxt: set thread safe flow ops flag
The PMD supports thread-safe flow operations. Set the
RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE dev_flag to indicate this
to the application, so that it can avoid using its own mutex
around rte_flow API calls for safe multi-thread flow handling.
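
A rough illustration (not part of this patch) of how an application
might check the flag; the access path through rte_eth_dev_info_get()
and its dev_flags pointer is my assumption of the ethdev API:

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* True if the PMD serializes flow operations itself, so the
     * application can skip its own rte_flow mutex for this port. */
    static bool
    flow_ops_thread_safe(uint16_t port_id)
    {
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return false;
        return (*info.dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE) != 0;
    }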

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-11-03 23:35:03 +01:00
Venkat Duvvuru
675e31d877 net/bnxt: support VXLAN decap offload
VXLAN decap offload can happen in stages. The offload request may
not come as a single flow request but rather as two flow offload
requests, F1 & F2. This patch adds support for this two-stage
offload design. The match criteria for F1 are O_DMAC, O_SMAC,
O_DST_IP, O_UDP_DPORT and the actions are COUNT, MARK, JUMP. The
match criteria for F2 are O_SRC_IP, O_DST_IP, VNI and inner header
fields.
F1 and F2 flow offload requests can come in any order. If the F2 flow
offload request comes first, F2 can't be offloaded as there is no
O_DMAC information in F2. In this case, F2 will be deferred until the
F1 flow offload request arrives. When the F1 flow offload request is
received it will have the O_DMAC information. Using F1's O_DMAC, the
driver creates an L2 context entry in the hardware as part of
offloading F1. F2 will then use F1's O_DMAC to get the L2 context id
associated with this O_DMAC and the other flow fields that were cached
at the time F2 was deferred. F2 requests that arrive after F1 is
offloaded are programmed directly and not cached.
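
The ordering logic can be pictured with a small standalone toy model
(plain C, not the driver code; the names and the single-entry cache
are illustrative only):

    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>

    /* Toy model: F2 requests arriving before F1 are cached until F1
     * installs the L2 context for the O_DMAC. */
    struct f2_req { unsigned char o_dmac[6]; bool pending; };

    static struct f2_req deferred;      /* single-entry cache, for brevity */
    static bool l2_ctx_present;         /* set once F1 programs the O_DMAC */
    static unsigned char l2_ctx_dmac[6];

    static void handle_f1(const unsigned char dmac[6])
    {
        memcpy(l2_ctx_dmac, dmac, 6);
        l2_ctx_present = true;          /* create the L2 context entry */
        printf("F1 offloaded, L2 context created\n");
        if (deferred.pending && !memcmp(deferred.o_dmac, dmac, 6)) {
            deferred.pending = false;   /* replay the cached F2 */
            printf("deferred F2 offloaded using F1's L2 context\n");
        }
    }

    static void handle_f2(const unsigned char dmac[6])
    {
        if (l2_ctx_present && !memcmp(l2_ctx_dmac, dmac, 6)) {
            printf("F2 offloaded directly\n");
        } else {
            memcpy(deferred.o_dmac, dmac, 6);
            deferred.pending = true;    /* no O_DMAC yet: defer F2 */
            printf("F2 deferred until F1 arrives\n");
        }
    }

    int main(void)
    {
        const unsigned char dmac[6] = {0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff};
        handle_f2(dmac);                /* arrives first: deferred */
        handle_f1(dmac);                /* installs context, replays F2 */
        return 0;
    }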

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-11-03 23:35:03 +01:00
Somnath Kotur
e9a705c3a4 net/bnxt: modify HWRM command to create reps
Use cfa pair alloc for configuring reps.
Instead of cfa_vfr_alloc for Wh+ and cfa_pair_alloc for Stingray,
converge to cfa_pair_alloc/free for both devices. Set the command
request structure bits accordingly.
As part of this, remove the old cfa_vfr_alloc cmd definitions as FW
has deprecated support for those commands.

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-11-03 23:35:03 +01:00
Wenzhuo Lu
9ab9514c15 net/iavf: enable AVX512 for Tx
To enhance the per-core performance, this patch adds some AVX512
instructions to the data path to handle the Tx descriptors.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-11-03 23:35:02 +01:00
Wenzhuo Lu
31737f2b66 net/iavf: enable AVX512 for legacy Rx
To enhance the per-core performance, this patch adds some AVX512
instructions to the data path to handle the legacy Rx descriptors.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-11-03 23:35:02 +01:00
Ajit Khaparde
6abd886826 doc: fix a typo in flow API guide
flow_type_rss_offloads was misspelt as flow_tpe_rss_offloads

Fixes: 6abee736ab ("doc: update RSS flow action with best effort")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2020-11-03 23:35:02 +01:00
Jiawen Wu
c22e6c7ae1 net/txgbe: add Rx and Tx descriptor status
Support checking the status of Rx and Tx descriptors.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:35:02 +01:00
Viacheslav Ovsiienko
eaf691f950 doc: add Rx buffer split limitation to mlx5 guide
The buffer split feature is mentioned in the mlx5 PMD
documentation; the limitation description is added as well.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:35:02 +01:00
Gregory Etelson
4ec6360de3 net/mlx5: implement tunnel offload
The tunnel offload API provides a hardware-independent, unified model
to offload tunneled traffic. Key model elements are:
 - apply matches to both outer and inner packet headers
   during the entire offload procedure;
 - restore the outer header of a partially offloaded packet;
 - the model is implemented as a set of helper functions.

Implementation details (see the devargs example below):
* the tunnel_offload PMD parameter must be set to 1 to enable the feature.
* the application cannot use MARK and META flow actions with tunnels.
* the offload JUMP action is restricted to steering the tunnel rule only.
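
For example, with an assumed PCI address, the feature could be enabled
through the device arguments when launching testpmd:

    dpdk-testpmd -a 0000:03:00.0,tunnel_offload=1 -- -i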

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 23:35:02 +01:00
Lance Richardson
3a23111261 net/bnxt: update PMD supported features
Mark "BSD nic_uio", "Usage doc", and "Perf doc" as supported
for the bnxt PMD.

Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-11-03 23:35:02 +01:00
Jiawen Wu
bd8e3adc11 net/txgbe: support PTP
Add PTP support.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
ab7a653001 net/txgbe: support register dump
Add register dump support.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
3cc8b50d69 net/txgbe: support EEPROM info get
Add EEPROM information get related operations.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
bc84ac0fad net/txgbe: support getting FW version
Add firmware version get operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
3926214fd8 net/txgbe: support MTU set
Add MTU set operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
d06e6723ff net/txgbe: add device promiscuous and allmulticast mode
Add device promiscuous and allmulticast mode.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
69ce8c8a4c net/txgbe: support flow control
Add flow control support.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
8bdc7882f3 net/txgbe: support DCB
Add DCB transmit and receive mode configurations,
and allocate DCB packet buffer.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
9e487a37c0 net/txgbe: support RSS
Add RSS configuration, and support RSS hash and RETA operations for the PF.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
c35b73a1e7 net/txgbe: add VMDq configure
Add multiple queue setting with VMDq.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
770a352363 net/txgbe: add PF module configure for SRIOV
Add PF module configuration for SRIOV.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
a6712cd029 net/txgbe: add PF module init and uninit for SRIOV
Add PF module init and uninit operations with mailbox.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
220b0e49bc net/txgbe: support VLAN
Add VLAN filter, tpid, offload and strip set support.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
c1d4e9d37a net/txgbe: add queue stats mapping
Add queue stats mapping set, and clear hardware counters.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
91fe49c87d net/txgbe: support device xstats
Add device extended statistics read from hardware registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
c9bb590d42 net/txgbe: support device statistics
Add device statistics read from hardware registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
a5682d28f1 net/txgbe: support Rx interrupt
Support Rx queue interrupt.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
0e484278c8 net/txgbe: support Rx
Fill receive functions and define receive descriptor.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
ca46fcd753 net/txgbe: support Tx with hardware offload
Fill transmit function with hardware offload.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
9e30b88f60 net/txgbe: support packet type
Add packet type macro definition and convert ptype to ptid.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
b4cfffaa85 net/txgbe: add Rx and Tx start and stop
Add receive and transmit units start and stop for specified queue.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
be797cbf45 net/txgbe: add Rx and Tx init
Add receive and transmit unit initialization.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:27 +01:00
Jiawen Wu
a331fe3b69 net/txgbe: add MAC address operations
Add MAC address related operations.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:26 +01:00
Jiawen Wu
75cbb1f0e8 net/txgbe: add device configuration
Add device configure operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:26 +01:00
Jiawen Wu
2fc745e6b6 net/txgbe: add interrupt operation
Add device interrupt handler and set up MSI-X interrupt.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:26 +01:00
Jiawen Wu
86d8adc770 net/txgbe: support getting device info
Add device information get operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:26 +01:00
Jiawen Wu
7dc117068a net/txgbe: support probe and remove
Add basic PCIe ethdev probe and remove.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:26 +01:00
Jiawen Wu
a3babbdd0f net/txgbe: add build and doc infrastructure
Add bare-minimum PMD library and doc build infrastructure,
and claim maintainership for the txgbe PMD.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:26 +01:00
Cheng Jiang
abec60e711 examples/vhost: support vhost async data path
This patch implements vhost DMA operation callbacks for the CBDMA
PMD and adds the vhost async data path to the vhost sample. By
providing a callback implementation for CBDMA, the vswitch can
leverage IOAT to accelerate the vhost async data path.

Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-11-03 23:24:26 +01:00
Cheng Jiang
3a04ecb214 examples/vhost: add async vhost args parsing
This patch adds an argument parsing function for the async vhost
driver's CBDMA channel, a DMA initiation function and the argument
descriptions. The meson build file is changed to fix a dependency
problem. With these arguments a vhost device can be set to use CBDMA
or the CPU for enqueue operations, and bound to a specific CBDMA
channel to accelerate data copy.

Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-11-03 23:24:26 +01:00
Andrew Rybchenko
bfea115b9f doc: advertise flow API transfer rules support in sfc
Transfer rules support matching on various inner and outer packet
headers and on traffic source items like PORT_ID, PHY_PORT, PF and VF,
as well as actions to route traffic to a destination (PORT_ID,
PHY_PORT, PF, VF or DROP), MARK, FLAG and VLAN push/pop
transformations.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
dadff13793 net/sfc: support encap flow items in transfer rules
Add support for flow items VXLAN, Geneve and NVGRE to
MAE-specific RTE flow implementation.

Having support for these items implies the ability to insert
so-called outer MAE rules and refer to them in MAE action rules.
The patch takes care of all necessary facilities to do that.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
c50442b0a7 net/sfc: support flow item UDP in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
e0eb90cbf8 net/sfc: support flow item TCP in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
e191e0d51c net/sfc: support flow item IPv6 in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
bcf8dda8ca net/sfc: support flow item IPv4 in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
900d0c1b71 net/sfc: support flow item VLAN in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

In a pattern, an L2 item preceding a VLAN item must have the
correct "type" ("inner_type") set depending on the total
number of VLAN tags (double-tagging is supported):

"pattern eth type is X / vlan / end",
X = 0x8100, or 0x88a8, or 0x9100, or 0x9200, or 0x9300

"pattern eth type is X / vlan inner_type is 0x8100 / vlan / end"
X = 0x88a8, or 0x9100, or 0x9200, or 0x9300
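
An illustrative complete testpmd rule following the second pattern
above (the port and the port_id action are arbitrary examples, not
taken from this patch):

    flow create 0 transfer pattern eth type is 0x88a8 / vlan inner_type is 0x8100 / vlan / end actions port_id id 1 / end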

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
33d96ca564 net/sfc: support flow item port ID in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

The DPDK port must not relate to a different physical device.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
1fb65e4dae net/sfc: support flow action port ID in transfer rules
The action handler will use MAE action DELIVER with MPORT
of the PCIe function associated with a given DPDK port ID.
The DPDK port must not relate to a different physical device.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
0839236d03 net/sfc: support flow action drop in transfer rules
Effectively, the resulting action will be of type DELIVER, and
destination MPORT will be a properly constructed NULL value.
This will achieve the requested behaviour (no delivery).

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
70f3cadce6 net/sfc: support flow actions PF and VF in transfer rules
The action handler will use MAE action DELIVER with MPORT of the PF/VF.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
2ada1448d8 net/sfc: support flow items PF and VF in transfer rules
Add support for these flow items to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
e7de2bcbe8 net/sfc: support flow action mark in MAE backend
The action handler will use MAE action MARK.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
8a771e04db net/sfc: support flow action flag in MAE backend
The action handler will use MAE action FLAG.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
62c3a8b7e0 net/sfc: support VLAN push actions in MAE backend
A group of actions (OF_PUSH_VLAN, OF_VLAN_SET_VID and
OF_VLAN_SET_PCP) maps to MAE action VLAN_PUSH.

This action group is supported only for rules which have transfer
attribute, and can be requested once or twice per rule.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
9346bfa18b net/sfc: support flow action OF pop VLAN in MAE backend
This action is supported only for rules which have transfer attribute,
and can be requested once or twice per rule.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
d487651090 net/sfc: support flow action PHY port in MAE backend
The action handler will use MAE action DELIVER with
MPORT of a given physical port.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
0b90a1377d net/sfc: support flow item eth in MAE backend
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
9ca45fd145 net/sfc: support flow item PHY PORT in MAE backend
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
01628fc5f6 net/sfc: add concept of MAE (transfer) rules
Define the corresponding specification structure and
make the code identify MAE rules by testing transfer
attribute presence. Also, add a priority level check.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Min Hu (Connor)
393e557ea8 app/testpmd: fix bonding xmit balance policy command
Currently there is an inconsistency in the name of the transmission
policy command for a Link Bonding device: "xmit_balance_policy" is
not correct and should be "balance_xmit_policy".
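
With the fix, the testpmd command reads, for example (port 0 and the
l34 policy chosen arbitrarily):

    testpmd> set bonding balance_xmit_policy 0 l34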

Fixes: 2950a76931 ("bond: testpmd support")
Fixes: ac718398f4 ("doc: testpmd application user guide")
Cc: stable@dpdk.org

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-11-03 23:24:24 +01:00
Simei Su
40d466fa9f net/ice: support ACL filter in DCF
Add ice_acl_create_filter to create a rule and ice_acl_destroy_filter
to destroy a rule. If a flow is matched by the ACL filter, the filter
rule will be set to hardware. Currently the IPV4/IPV4_UDP/IPV4_TCP/IPV4_SCTP
patterns and the drop action are supported.
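
An illustrative rule of the supported kind, expressed in testpmd flow
syntax (addresses and port are arbitrary examples):

    testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / udp / end actions drop / end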

Signed-off-by: Simei Su <simei.su@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-11-03 23:24:24 +01:00
Dekel Peled
d5a7d04c79 net/mlx5: support query of age action
A recent patch [1] adds to ethdev the API for query of the age action.
This patch implements the age action query in the MLX5 PMD using
this API.

[1] https://mails.dpdk.org/archives/dev/2020-October/184864.html
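
A minimal sketch of querying an AGE action through the generic
rte_flow query call; the struct rte_flow_query_age layout is my
reading of the new ethdev API and the port/flow handles are
placeholders:

    #include <rte_flow.h>

    /* Returns 1 if the flow aged out, 0 if not, -1 on query error. */
    static int
    flow_has_aged(uint16_t port_id, struct rte_flow *flow)
    {
        struct rte_flow_query_age age = { 0 };
        const struct rte_flow_action action = {
            .type = RTE_FLOW_ACTION_TYPE_AGE,
        };
        struct rte_flow_error error;

        if (rte_flow_query(port_id, flow, &action, &age, &error) != 0)
            return -1;
        return age.aged;
    }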

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 22:29:25 +01:00
Dekel Peled
613d64e412 net/mlx5: log LRO minimal size
Add a debug printout showing the HCA capability lro_min_mss_size - the
minimal size of a TCP segment required for coalescing.
The MLX5 PMD documentation is updated to note this condition.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 22:29:25 +01:00
Dekel Peled
491757372f net/mlx5: enforce limitation on IPv6 next protocol
Due to a PRM requirement, the IPv6 header item 'proto' field, indicating
the next header protocol, must not be set to an extension header.
This patch adds the relevant validation and documents the limitation.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-11-03 22:29:25 +01:00
Dekel Peled
ad3d227ead net/mlx5: support match on IPv6 fragment packets
This patch adds to MLX5 PMD the support of matching on IPv6
fragmented and non-fragmented packets, using the new field
has_frag_ext, added to rte_flow following RFC [1].

[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-11-03 22:29:25 +01:00
Dekel Peled
6859e67ef6 net/mlx5: support match on IPv4 fragment packets
This patch adds to MLX5 PMD the support of matching on IPv4
fragmented and non-fragmented packets, using the IPv4 header
fragment_offset field.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-11-03 22:29:25 +01:00
Yi Yang
c0d002aed9 gso: fix mbuf freeing responsibility
rte_gso_segment decreased the refcnt of pkt by one, but this is
wrong if pkt is an external mbuf: pkt won't be freed because of
the incorrect refcnt, and as a result the application can't
allocate mbufs from the mempool because the mbufs in the mempool
run out.

The correct way is for the application to call rte_pktmbuf_free
after calling rte_gso_segment to free pkt explicitly.
rte_gso_segment must not handle it; this is the responsibility
of the application.

This commit changes rte_gso_segment in functional behavior
and return value, so the application must take the appropriate
action according to the return value: "ret < 0" means it
should free and drop 'pkt'; "ret == 0" means 'pkt' isn't
GSOed but can be transmitted as a normal packet; "ret > 0"
means 'pkt' has been GSOed into two or more segments and
"pkts_out" should be used to transmit these segments.
The application must free 'pkt' after calling rte_gso_segment
whenever the return value isn't 0.
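
A minimal sketch of the new calling convention (gso_ctx, pkt, port_id
and the queue are assumed to be set up elsewhere; unsent-mbuf handling
is omitted):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_gso.h>

    #define GSO_SEGS_MAX 64                      /* illustrative bound */

    struct rte_mbuf *segs[GSO_SEGS_MAX];
    int ret = rte_gso_segment(pkt, &gso_ctx, segs, GSO_SEGS_MAX);

    if (ret < 0) {
        rte_pktmbuf_free(pkt);                   /* error: drop the packet */
    } else if (ret == 0) {
        rte_eth_tx_burst(port_id, 0, &pkt, 1);   /* not GSOed: send as-is */
    } else {
        rte_eth_tx_burst(port_id, 0, segs, ret); /* send the GSO segments */
        rte_pktmbuf_free(pkt);                   /* caller now frees 'pkt' */
    }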

Fixes: 119583797b ("gso: support TCP/IPv4 GSO")
Cc: stable@dpdk.org

Signed-off-by: Yi Yang <yangyi01@inspur.com>
Acked-by: Jiayu Hu <jiayu.hu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-11-03 22:45:02 +01:00
Didier Pallard
8bd1040a70 crypto/octeontx2: fix out-of-place support
Out-of-place with linear buffers is supported by octeontx2
but was not advertised.

Fixes: 6aa9ceaddf ("crypto/octeontx2: add symmetric capabilities")
Cc: stable@dpdk.org

Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Ankur Dwivedi <adwivedi@marvell.com>
2020-11-02 09:24:41 +01:00
Didier Pallard
16c011472d crypto/octeontx: fix out-of-place support
Out-of-place with linear buffers is supported by octeontx
but was not advertised.

Fixes: 0dc1cffa4d ("crypto/octeontx: add hardware init routine")
Cc: stable@dpdk.org

Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Ankur Dwivedi <adwivedi@marvell.com>
2020-11-02 09:24:41 +01:00
Vikas Gupta
fbca4639a6 doc: update bcmfs guide
Update bcmfs.rst file with supported features and devices.

Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: JP Lee <jongpil.lee@broadcom.com>
2020-11-02 09:24:41 +01:00
John McNamara
41545d91a4 doc: recommend latest OpenSSL version
Add recommendation to update to latest OpenSSL version when
using the OpenSSL PMD and to at least version 1.1.1g to avoid
known CVEs.

Signed-off-by: John McNamara <john.mcnamara@intel.com>
2020-11-02 09:24:41 +01:00
Arek Kusztal
57e03d2a62 doc: remove notice about AES-GCM IV and J0
This patch removes information about deprecation of AES-GCM/GMAC
API for IV without J0.

Fixes: fac52fb26a ("cryptodev: add option to support both IV and J0 for GCM")
Cc: stable@dpdk.org

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
2020-11-02 09:24:40 +01:00
Adam Dybkowski
3cc4d996fa doc: update VFIO usage in qat crypto guide
This patch marks the old igb-uio driver as insecure when used
with the QAT PMD and updates all examples to recommend using
VFIO-PCI instead.
It also mentions security issues with the QAT CPM and provides
information about the new VFIO-PCI parameter 'disable_denylist'
available in Linux kernels 5.9 and later.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
2020-11-02 09:24:40 +01:00
Bruce Richardson
8809f78c7d doc: fix driver names
Since the built driver filenames have changed in DPDK 20.11, we need to
update the driver doc to match.

Most drivers start their section with the driver filename highlighted in
bold, while a number were missing the highlight. When updating the names,
add the markers for bold text to any missing it, so as to have things more
consistent.

Fixes: a20b2c01a7 ("build: standardize component names and defines")

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-11-03 16:23:03 +01:00
Thomas Monjalon
af270529ad ethdev: include mbuf registration in Tx timestamp API
Previously, the Tx timestamp field and flag were registered in testpmd,
as described in the mlx5 guide.
For consistency between Rx and Tx timestamps,
managing mbuf registrations inside the driver, as properly documented,
is a simpler expectation.

The only driver to support this feature (mlx5) is updated
as well as the testpmd application.
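
A rough sketch of how an application obtains the driver-registered
field and flag after this change; the dynamic field/flag names are my
assumption of the standard mbuf ones, and 'mbuf'/'ts' are placeholders:

    #include <rte_mbuf_dyn.h>

    int ts_off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME,
                                          NULL);
    int ts_flag = rte_mbuf_dynflag_lookup(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME,
                                          NULL);
    if (ts_off >= 0 && ts_flag >= 0) {
        *RTE_MBUF_DYNFIELD(mbuf, ts_off, rte_mbuf_timestamp_t *) = ts;
        mbuf->ol_flags |= (1ULL << ts_flag);  /* request Tx timestamping */
    }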

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-11-03 16:21:15 +01:00
Thomas Monjalon
c7e857e16f mbuf: remove deprecated timestamp field
As announced in the deprecation note, the field timestamp
is removed to give more space to the dynamic fields.
The related offload flag PKT_RX_TIMESTAMP is also removed.

This is how the mbuf layout looks like (pahole-style):

word  type                              name                byte  size
 0    void *                            buf_addr;         /*   0 +  8 */
 1    rte_iova_t                        buf_iova          /*   8 +  8 */
      /* --- RTE_MARKER64               rearm_data;                   */
 2    uint16_t                          data_off;         /*  16 +  2 */
      uint16_t                          refcnt;           /*  18 +  2 */
      uint16_t                          nb_segs;          /*  20 +  2 */
      uint16_t                          port;             /*  22 +  2 */
 3    uint64_t                          ol_flags;         /*  24 +  8 */
      /* --- RTE_MARKER                 rx_descriptor_fields1;        */
 4    uint32_t             union        packet_type;      /*  32 +  4 */
      uint32_t                          pkt_len;          /*  36 +  4 */
 5    uint16_t                          data_len;         /*  40 +  2 */
      uint16_t                          vlan_tci;         /*  42 +  2 */
 5.5  uint64_t             union        hash;             /*  44 +  8 */
 6.5  uint16_t                          vlan_tci_outer;   /*  52 +  2 */
      uint16_t                          buf_len;          /*  54 +  2 */
 7    uint64_t                          dynfield0[1];     /*  56 +  8 */
      /* --- RTE_MARKER                 cacheline1;                   */
 8    struct rte_mempool *              pool;             /*  64 +  8 */
 9    struct rte_mbuf *                 next;             /*  72 +  8 */
10    uint64_t             union        tx_offload;       /*  80 +  8 */
11    struct rte_mbuf_ext_shared_info * shinfo;           /*  88 +  8 */
12    uint16_t                          priv_size;        /*  96 +  2 */
      uint16_t                          timesync;         /*  98 +  2 */
12.5  uint32_t                          dynfield1[7];     /* 100 + 28 */
16    /* --- END                                             128      */

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-11-03 16:21:15 +01:00
Thomas Monjalon
52bf2010c9 eventdev: remove software Rx timestamp
This is a revert of commit 569758758d ("eventdev: add Rx timestamp").
If the Rx timestamp is not configured on the ethdev port,
there is no reason to set one.
Also the accuracy of the timestamp was bad because it was set at a
late stage. Anyway, there is no trace of any usage of this timestamp.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-11-03 15:28:26 +01:00
Timothy McDaniel
26aeabe079 event/dlb: add dequeue and its burst variants
Add support for dequeue, dequeue_burst, ...

DLB does not currently support interrupts, but instead uses
umonitor/umwait if supported by the processor. This allows
the software to monitor and wait on writes to a cache-line.

DLB supports normal and sparse cq mode. In normal mode the
hardware will pack 4 QEs into each cache line. In sparse cq
mode, the hardware will only populate one QE per cache line.
Software must be aware of the cq mode, and take the appropriate
actions, based on the mode.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
4784f1eaa3 event/dlb: add enqueue and its burst variants
Add support for enqueue and its variants.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
ee57517013 event/dlb: add port setup
Configure the load balanced (ldb) or directed (dir) port.
The consumer queue (CQ) and producer port (PP) are also
set up here.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
f7a9172f36 event/dlb: add queue setup
Load balanced (ldb) queues are setup here.
Directed queues are not set up until link time, at which
point we know the directed port ID. Directed queue setup
will only fail if this queue is already setup or there are
no directed queues left to configure.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
b94c709dec event/dlb: add infos get and configure
Add support for configuring the DLB hardware.
In particular, this patch configures the DLB
hardware's scheduling domain, such that it is provisioned with
the requested number of ports and queues, provided sufficient
resources are available. Individual queues and ports are
configured later in port setup and eventdev start.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
218be03459 event/dlb: add documentation and build infrastructure
Note that config/rte_config.h contains several configuration
switches, providing for fine control of the PMD's
runtime behaviour.

The meson infrastructure is expanded as additional files are
added to this patchset.

Adds announcement of availability of the new driver
for Intel Dynamic Load Balancer 1.0 hardware.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
a2e4f1f5e7 event/dlb2: add dequeue and its burst variants
Add support for dequeue, dequeue_burst, ...

DLB2 does not currently support interrupts, but instead uses
umonitor/umwait if supported by the processor. This allows
the software to monitor and wait on writes to a cache-line.

DLB2 supports normal and sparse cq mode. In normal mode the
hardware will pack 4 QEs into each cache line. In sparse cq
mode, the hardware will only populate one QE per cache line.
Software must be aware of the cq mode, and take the appropriate
actions, based on the mode.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
f7cc194b0f event/dlb2: add enqueue and its burst variants
Add support for enqueue and its variants.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
3a6d0c04e7 event/dlb2: add port setup
Configure the load balanced (ldb) or directed (dir) port.
The consumer queue (CQ) and producer port (PP) are also
set up here.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
7e668e575b event/dlb2: add queue setup
Load balanced (ldb) queues are setup here.
Directed queues are not set up until link time, at which
point we know the directed port ID. Directed queue setup
will only fail if this queue is already setup or there are
no directed queues left to configure.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
f3cad285bb event/dlb2: add infos get and configure
Add support for configuring the DLB2 hardware.
In particular, this patch configures the DLB2
hardware's scheduling domain, such that it is provisioned with
the requested number of ports and queues, provided sufficient
resources are available. Individual queues and ports are
configured later in port setup and eventdev start.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
166378a794 event/dlb2: add documentation and build infrastructure
Adds the meson build infrastructure, which includes
compile-time constants in rte_config.h. DLB2 is
only supported on Linux 64-bit x86 platforms at this time.

Adds announcement of availability for the new driver
for Intel Dynamic Load Balancer 2.0 hardware.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 06:46:12 +01:00
David Marchand
79d69c6dcf mbuf: remove seqn field
As announced in the deprecation note, the field seqn is removed to give
more space to the dynamic fields.

This is how the mbuf layout looks like (pahole-style):

word  type                              name                byte  size
 0    void *                            buf_addr;         /*   0 +  8 */
 1    rte_iova_t                        buf_iova          /*   8 +  8 */
      /* --- RTE_MARKER64               rearm_data;                   */
 2    uint16_t                          data_off;         /*  16 +  2 */
      uint16_t                          refcnt;           /*  18 +  2 */
      uint16_t                          nb_segs;          /*  20 +  2 */
      uint16_t                          port;             /*  22 +  2 */
 3    uint64_t                          ol_flags;         /*  24 +  8 */
      /* --- RTE_MARKER                 rx_descriptor_fields1;        */
 4    uint32_t             union        packet_type;      /*  32 +  4 */
      uint32_t                          pkt_len;          /*  36 +  4 */
 5    uint16_t                          data_len;         /*  40 +  2 */
      uint16_t                          vlan_tci;         /*  42 +  2 */
 5.5  uint64_t             union        hash;             /*  44 +  8 */
 6.5  uint16_t                          vlan_tci_outer;   /*  52 +  2 */
      uint16_t                          buf_len;          /*  54 +  2 */
 7    uint64_t                          timestamp;        /*  56 +  8 */
      /* --- RTE_MARKER                 cacheline1;                   */
 8    struct rte_mempool *              pool;             /*  64 +  8 */
 9    struct rte_mbuf *                 next;             /*  72 +  8 */
10    uint64_t             union        tx_offload;       /*  80 +  8 */
11    struct rte_mbuf_ext_shared_info * shinfo;           /*  88 +  8 */
12    uint16_t                          priv_size;        /*  96 +  2 */
      uint16_t                          timesync;         /*  98 +  2 */
12.5  uint32_t                          dynfield1[7];     /* 100 + 28 */
16    /* --- END                                             128      */

Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-31 22:14:44 +01:00
Thomas Monjalon
5284adad3e mbuf: remove userdata field
As announced in the deprecation note, the field userdata / udata64
is removed to give more space to the dynamic fields.

This is how the mbuf layout looks like (pahole-style):

word  type                              name                byte  size
 0    void *                            buf_addr;         /*   0 +  8 */
 1    rte_iova_t                        buf_iova          /*   8 +  8 */
      /* --- RTE_MARKER64               rearm_data;                   */
 2    uint16_t                          data_off;         /*  16 +  2 */
      uint16_t                          refcnt;           /*  18 +  2 */
      uint16_t                          nb_segs;          /*  20 +  2 */
      uint16_t                          port;             /*  22 +  2 */
 3    uint64_t                          ol_flags;         /*  24 +  8 */
      /* --- RTE_MARKER                 rx_descriptor_fields1;        */
 4    uint32_t             union        packet_type;      /*  32 +  4 */
      uint32_t                          pkt_len;          /*  36 +  4 */
 5    uint16_t                          data_len;         /*  40 +  2 */
      uint16_t                          vlan_tci;         /*  42 +  2 */
 5.5  uint64_t             union        hash;             /*  44 +  8 */
 6.5  uint16_t                          vlan_tci_outer;   /*  52 +  2 */
      uint16_t                          buf_len;          /*  54 +  2 */
 7    uint64_t                          timestamp;        /*  56 +  8 */
      /* --- RTE_MARKER                 cacheline1;                   */
 8    struct rte_mempool *              pool;             /*  64 +  8 */
 9    struct rte_mbuf *                 next;             /*  72 +  8 */
10    uint64_t             union        tx_offload;       /*  80 +  8 */
11    uint16_t                          priv_size;        /*  88 +  2 */
      uint16_t                          timesync;         /*  90 +  2 */
      uint32_t                          seqn;             /*  92 +  4 */
12    struct rte_mbuf_ext_shared_info * shinfo;           /*  96 +  8 */
13    uint64_t                          dynfield1[3];     /* 104 + 24 */
16    /* --- END                                             128      */

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-10-31 16:13:11 +01:00
Thomas Monjalon
eb8258402b examples/rxtx_callbacks: switch TSC to dynamic field
The example used the deprecated mbuf field udata64.
It is moved to a dynamic field in order to allow removal of udata64.
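
A minimal sketch of the dynamic-field pattern used for such a move
(the field name and the 'm' mbuf pointer are made up for illustration):

    #include <rte_mbuf_dyn.h>
    #include <rte_cycles.h>

    static const struct rte_mbuf_dynfield tsc_dynfield_desc = {
        .name = "example_tsc_dynfield",     /* illustrative name */
        .size = sizeof(uint64_t),
        .align = __alignof__(uint64_t),
    };

    int tsc_dynfield_offset = rte_mbuf_dynfield_register(&tsc_dynfield_desc);

    /* Later, per mbuf, instead of m->udata64: */
    *RTE_MBUF_DYNFIELD(m, tsc_dynfield_offset, uint64_t *) = rte_rdtsc();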

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-31 16:13:11 +01:00
Thomas Monjalon
614af75489 security: switch metadata to dynamic mbuf field
The device-specific metadata was stored in the deprecated field udata64.
It is moved to a dynamic mbuf field in order to allow removal of udata64.

The name rte_security_dynfield is not very descriptive
but it should be replaced later by separate fields for each type of data
that drivers pass to the upper layer.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2020-10-31 16:13:11 +01:00
Honnappa Nagarahalli
47bec9a5ca ring: add zero copy API
Add zero-copy APIs. These APIs provide the capability to
copy data to/from the ring memory directly, without
a temporary copy (for example, an array of mbufs on
the stack). Use cases that involve copying large amounts
of data to/from the ring can benefit from these APIs.
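
A rough sketch of how such an API is used to receive packets straight
into the ring memory (the function and struct names reflect my reading
of the new headers and may not match exactly; 'r' and 'port_id' are
placeholders, the burst size of 32 is arbitrary):

    #include <rte_ethdev.h>
    #include <rte_ring.h>
    #include <rte_ring_elem.h>

    /* Reserve space, fill it directly, then commit what was written. */
    struct rte_ring_zc_data zcd;
    unsigned int n, rx = 0;

    n = rte_ring_enqueue_zc_burst_elem_start(r, sizeof(void *), 32,
                                             &zcd, NULL);
    if (n != 0) {
        rx = rte_eth_rx_burst(port_id, 0, (struct rte_mbuf **)zcd.ptr1,
                              zcd.n1);
        if (rx == zcd.n1 && n != zcd.n1)    /* reservation wrapped around */
            rx += rte_eth_rx_burst(port_id, 0,
                                   (struct rte_mbuf **)zcd.ptr2, n - zcd.n1);
        rte_ring_enqueue_zc_elem_finish(r, rx);  /* commit filled entries */
    }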

Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-10-29 14:13:31 +01:00
Xiaoyun Li
079981e980 examples/tep_term: remove this application
This example sets up a scenario in which VXLAN packets can be received
by different PF queues based on the VNID, and each queue is bound to a
VM with a VNID so that the VM can receive its inner packets.

Usually, OVS is used to do the software encap/decap for VXLAN packets.

The VXLAN packet offloading can be replaced with flow rules in
testpmd, as in the chapter "Sample VXLAN flow rules" of the Testpmd
Application User Guide (see the illustration below).

This example also hasn't been used for a long time.

So remove this example.
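
An illustrative testpmd rule of that kind (VNI and queue values are
arbitrary):

    testpmd> flow create 0 ingress pattern eth / ipv4 / udp / vxlan vni is 42 / end actions queue index 2 / end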

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-29 12:37:51 +01:00
Vladimir Medvedkin
1e5630e40d fib6: add AVX512 lookup
Add a new lookup implementation for the FIB6 trie algorithm using
the AVX512 instruction set.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-10-28 21:29:13 +01:00
Vladimir Medvedkin
b3509fa365 fib: add AVX512 lookup
Add a new lookup implementation for the DIR24_8 algorithm using
the AVX512 instruction set.

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-10-28 21:29:11 +01:00
Ray Kinsella
23b4fd825f doc: update ABI version references
Updated references to ABI versions in the contributors guide.
Fixed an inaccurate reference to a symbol in the policy.

Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Reviewed-by: David Marchand <david.marchand@redhat.com>
2020-10-27 08:53:53 +01:00
Ruifeng Wang
ced5a6ce24 lpm: hide internal data
Fields other than tbl24 and tbl8 in the rte_lpm structure do not
need to be exposed to the user.
Hide the unneeded structure fields for better
ABI maintainability.

Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
2020-10-24 19:08:06 +02:00
Dharmik Thakkar
769b2de7fb hash: implement RCU resources reclamation
Currently, users have to use external RCU mechanisms to free resources
when using the lock-free hash algorithm.

Integrate the RCU QSBR process to make it easier for applications to use
the lock-free algorithm.
Refer to RCU documentation to understand various aspects of
integrating RCU library into other libraries.

Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
2020-10-24 09:25:13 +02:00
Thomas Monjalon
513ee0ab74 doc: remove references to make from known issues
The config options CONFIG_RTE_* are simple RTE_* defines with meson.
Now that make support is dropped, update the HPET config reference.

The comment about the AVX512 config option is not relevant anymore.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-10-23 19:25:21 +02:00
Ruifeng Wang
50ee0c2d0a doc: update guide for armv8 crypto
Added a guide about building using meson.

Also added the command to create the virtual device.

Suggested-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
2020-10-22 23:11:58 +02:00
Kevin Laatz
559fe5f7b6 doc: update patch cheatsheet to use meson
With 'make' being removed, the patch cheatsheet needs to be updated to
remove any references to 'make'. These references have been replaced with
meson alternatives in this patch.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-22 22:54:05 +02:00
Ciara Power
194124fb67 doc: add to release notes to reflect removal of make
Added an entry to describe the removal of the Make build system to the
release notes for 20.11.

Signed-off-by: Ciara Power <ciara.power@intel.com>
2020-10-22 22:54:05 +02:00
Ciara Power
532e4e48ca doc: remove references to make from contributing guide
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Louise Kilheeney <louise.kilheeney@intel.com>
2020-10-22 22:54:05 +02:00
Ciara Power
95fcf7bff4 doc: remove reference to make from tools guide
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
2020-10-22 22:54:05 +02:00
Ciara Power
e2a94f9ad3 doc: remove references to make from apps guide
While make has been deprecated for DPDK, it's still applicable for
some example apps to be built standalone; this patch adjusts the
guides to take that into consideration.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-22 22:54:05 +02:00
Ciara Power
6250e968ac doc: remove references to make from rawdevs guide
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
2020-10-22 22:54:05 +02:00
Ciara Power
08b1d50543 doc: remove references to make from eventdevs guide
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-22 22:54:05 +02:00
Ciara Power
89515c0348 doc: remove references to make from compressdevs guide
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Lee Daly <lee.daly@intel.com>
2020-10-22 22:54:05 +02:00
Ciara Power
fd5f9fb95f doc: remove references to make from cryptodevs guide
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-22 22:54:05 +02:00
Ciara Power
07a2a57261 doc: remove references to make from bbdevs guide
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-22 22:54:05 +02:00
Ciara Power
dde524d3ff doc: remove references to make from vdpadevs guide
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-10-22 22:54:05 +02:00
Ciara Power
68d99d00ae doc: remove references to make from NICs guide
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
2020-10-22 22:54:05 +02:00
Ciara Power
a3b34b1df8 doc: remove references to make from mempool guide
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-22 22:54:05 +02:00
Ciara Power
d2e65d43fe doc: remove references to make from platforms guide
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-22 22:54:05 +02:00
David Marchand
30105f664f drivers: add headers install helper
A lot of drivers export headers; reproduce the same facility as for
libraries.

Note: this change fixes an issue with the crypto scheduler headers which
were not installed properly. A separate backport will be sent to stable
branches.

Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2020-10-22 14:16:22 +02:00
David Marchand
6b3848e211 build: fix version map file references in documentation
Fixes: 63b3907833 ("build: remove library name from version map file name")

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2020-10-22 14:11:49 +02:00
Stephen Hemminger
d250589d57 net/memif: replace master/slave arguments
Replace master/slave terms in this driver.

The memory interface driver uses a client/server architecture,
so change the variable names and device arguments accordingly.

The previous devargs are maintained for compatibility, but if
used they cause a notice in the log.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-10-20 13:17:08 +02:00
Stephen Hemminger
cb056611a8 eal: rename lcore master and slave
Replace master lcore with main lcore and
replace slave lcore with worker lcore.

Keep the old functions and macros but mark them as deprecated
for this release.

The "--master-lcore" command line option is also deprecated
and any usage will print a warning and use "--main-lcore"
as replacement.
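
For example, existing loops over worker lcores change roughly as
follows (new names only; the old macro keeps working with a
deprecation warning this release, and 'lcore_main' is a placeholder
launch function):

    #include <rte_launch.h>
    #include <rte_lcore.h>

    unsigned int lcore_id;

    /* previously: RTE_LCORE_FOREACH_SLAVE(lcore_id) */
    RTE_LCORE_FOREACH_WORKER(lcore_id)
        rte_eal_remote_launch(lcore_main, NULL, lcore_id);

    /* previously: rte_get_master_lcore() */
    unsigned int main_lcore = rte_get_main_lcore();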

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
2020-10-20 13:17:08 +02:00
Stephen Hemminger
57c789fd94 doc: add policy about master/slave words
Update the coding style document to include a policy against
introducing new master/slave usage. This is taken from the similar
place in the Linux kernel coding style.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
2020-10-20 11:41:41 +02:00
Bruce Richardson
a20b2c01a7 build: standardize component names and defines
As discussed on the dpdk-dev mailing list[1], we can make some easy
improvements in standardizing the naming of the various components in DPDK,
and their associated feature-enabled macros.

Following this patch, each library will have the name in format,
'librte_<name>.so', and the macro indicating that library is enabled in the
build will have the form 'RTE_LIB_<NAME>'.

Similarly, for drivers, the equivalent name formats and macros are:
'librte_<class>_<name>.so' and 'RTE_<CLASS>_<NAME>', where class is the
device type taken from the relevant driver subdirectory name, i.e. 'net',
'crypto' etc.

To avoid too many changes at once for end applications, the old macro names
will still be provided in the build in this release, but will be removed
subsequently.

[1] http://inbox.dpdk.org/dev/ef7c1a87-79ab-e405-4202-39b7ad6b0c71@solarflare.com/t/#u

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Rosen Xu <rosen.xu@intel.com>
2020-10-19 22:15:34 +02:00
Bruce Richardson
c0a775a141 doc: add SPDX license tag header to meson guide
The build-sdk-meson.rst file originates from the short plain-text meson
instructions added in 2018. Add SPDX tag and copyright notice based on the
original commit.

Fixes: 9c3adc289c ("doc: add instructions on build using meson")
Cc: stable@dpdk.org

Reported-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-19 18:21:44 +02:00
Ciara Power
1e6a661302 acl: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path. These checks are added in the check alg helper functions.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-10-19 16:45:02 +02:00
Ciara Power
4635c840ce doc: describe how to enable AVX512
This patch adds documentation on the usage of the max SIMD bitwidth EAL
setting to enable AVX-512 at runtime.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-19 16:45:02 +02:00
Ciara Power
580af30dd6 eal: control max SIMD bitwidth
This patch adds a max SIMD bitwidth EAL configuration. The API allows
an app to set this value. It can also be set using the EAL argument
--force-max-simd-bitwidth, which will lock the value and override any
modifications made by the app.

Each arch has a define for the default SIMD bitwidth value; this is used
on EAL init to set the config max SIMD bitwidth.
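
A small sketch of how a library or application might honour this setting;
the getter name and the RTE_VECT_SIMD_256 constant are assumed to be the
ones added by this series, and use_avx2_path/use_scalar_path are
placeholders:

    #include <rte_vect.h>

    /* take a vector path only if the configured max SIMD bitwidth allows it */
    if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)
        use_avx2_path();
    else
        use_scalar_path();

The same limit can be pinned from the command line, e.g.
--force-max-simd-bitwidth=512, which then overrides application changes.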

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
2020-10-19 16:45:02 +02:00
Akhil Goyal
e30b2833c4 security: update session create API
The API ``rte_security_session_create`` took only a single
mempool for the session and the session private data. So the
application needed to create a mempool for twice the number of
sessions needed, which also wasted memory since the session
private data needs more memory than the session itself.
Hence the API is modified to take two mempool pointers
- one for the session and one for the private data.
This is very similar to the crypto session create APIs.
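
A minimal sketch of the updated call with the two mempools (sec_ctx,
sess_conf, sess_mp and priv_mp are illustrative variables):

    /* sess_mp holds session objects, priv_mp holds the driver-private data */
    struct rte_security_session *sess =
        rte_security_session_create(sec_ctx, &sess_conf, sess_mp, priv_mp);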

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Reviewed-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Tested-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
2020-10-19 09:54:54 +02:00
Konstantin Ananyev
5636d60347 examples/l3fwd-acl: select ACL classify method
Replace the '--scalar' command-line option with a new one: --alg=<algname>
to allow the user to explicitly select the desired classify method.
This parameter is optional; if not specified, the default classify
algorithm will be used.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-10-19 09:20:25 +02:00
Konstantin Ananyev
20f76bb666 examples/l3fwd-acl: update MAC addresses
Introduces two changes into l3fwd-acl behaviour to make
it behave in the same way as l3fwd:
- Add a command-line parameter to allow the user to specify the
  destination mac address for each ethernet port used.
- While forwarding the packet update source and destination mac
  addresses.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-10-19 09:20:25 +02:00
Ferruh Yigit
a72cb3e765 doc: announce queue stats moving to xstats
Queue stats will be moved from basic stats to xstats.

It will be the PMDs' responsibility to fill queue stats based on the
number of queues they have.

Until all PMDs implement the xstats, a temporary
'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' device flag is created. PMDs switched
to the xstats should clear this flag to bypass the ethdev-layer autofill
of queue stats.
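
A hedged sketch of the PMD side, assuming the flag lives in dev_flags like
the other RTE_ETH_DEV_* flags:

    /* a PMD that already reports per-queue xstats opts out of the ethdev autofill */
    eth_dev->data->dev_flags &= ~RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;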

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-16 23:27:15 +02:00
Ivan Ilchenko
58af59172b ethdev: allow stop function to return an error
Change rte_eth_dev_stop() return value from void to int
and return negative errno values in case of error conditions.
Also update the usage of the function in ethdev according to
the new return type.
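
A short sketch of checking the new return value in an application:

    int ret = rte_eth_dev_stop(port_id);
    if (ret != 0)
        printf("Failed to stop port %u: %s\n", port_id, rte_strerror(-ret));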

Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-16 22:26:41 +02:00
Thomas Monjalon
8a5a0aad5d ethdev: allow close function to return an error
The API function rte_eth_dev_close() was returning void.
The return type is changed to int for notifying of errors.

If an error happens during a close operation,
the status of the port is undefined,
with as many resources as possible having been freed.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Liron Himi <lironh@marvell.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-16 22:26:41 +02:00
Viacheslav Ovsiienko
91c78e090e app/testpmd: add rxoffs commands and parameters
Add command line parameter:

--rxoffs=X[,Y]

Sets the offsets of packet segments from the beginning of the
receiving buffer if the split feature is engaged. Affects only the
queues configured with split offloads (currently only BUFFER_SPLIT
is supported).

Add interactive mode command, providing the same:

testpmd> set rxoffs (x[,y]*)

Where x[,y]* represents a CSV list of values, without white space.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 22:26:40 +02:00
Viacheslav Ovsiienko
0f2096d7ab app/testpmd: add rxpkts commands and parameters
Add command line parameter:

--rxpkts=X[,Y]

Sets the length of segments to scatter packets on receiving if the split
feature is engaged. Affects only the queues configured with split
offloads (currently only BUFFER_SPLIT is supported).

Add interactive mode command:

testpmd> set rxpkts (x[,y]*)

Where x[,y]* represents a CSV list of values, without white space.

Optionally, multiple memory pools can be specified with the --mbuf-size
command-line parameter and the mbufs to receive will be allocated
sequentially from these extra memory pools.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 22:26:40 +02:00
Viacheslav Ovsiienko
26cbb4191e app/testpmd: add multiple pools per core creation
The command-line parameter --mbuf-size is updated; it can handle
multiple values like the following:

--mbuf-size=2176,512,768,4096

specifying the creation of extra memory pools with the requested
mbuf data buffer sizes. If the buffer split feature is engaged,
the extra memory pools can be used to configure the Rx queues
with rte_the_dev_rx_queue_setup_ex().

The extra pools are created with requested sizes, and pool names
are assigned with appended index: mbuf_pool_socket_%socket_%index.
Index zero is used to specify the first mandatory pool to maintain
compatibility with existing code.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 22:26:40 +02:00
Viacheslav Ovsiienko
4ff702b5df ethdev: introduce Rx buffer split
The DPDK datapath in the transmit direction is very flexible.
An application can build a multi-segment packet and manage
almost all data aspects - the memory pools where segments
are allocated from, the segment lengths, the memory attributes
like external buffers, registered for DMA, etc.

In the receiving direction, the datapath is much less flexible:
an application can only specify the memory pool to configure the
receiving queue and nothing more. In order to extend the receiving
datapath capabilities, it is proposed to add a way to provide
extended information on how to split the packets being received.

The new offload flag RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT in the device
capabilities is introduced to provide a way for the PMD to report to
the application that it supports splitting received packets into
configurable segments. Prior to invoking the rte_eth_rx_queue_setup()
routine, the application should check the RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT
flag.

The following structure is introduced to specify the Rx packet
segment for RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT offload:

struct rte_eth_rxseg_split {
    struct rte_mempool *mp; /* memory pool to allocate segment from */
    uint16_t length;        /* segment maximal data length, configures the "split point" */
    uint16_t offset;        /* data offset from the beginning of the mbuf data buffer */
    uint32_t reserved;      /* reserved field */
};

The segment descriptions are added to the rte_eth_rxconf structure:
   rx_seg - pointer to the array of segment descriptions, each element
            describes the memory pool, maximal data length, and initial
            data offset from the beginning of the data buffer in the mbuf.
            This array allows specifying different settings for each
            segment individually.
   rx_nseg - number of elements in the array

If the extended segment descriptions are provided with these new
fields, the mp parameter of rte_eth_rx_queue_setup() must be
specified as NULL to avoid ambiguity.

There are two options to specify Rx buffer configuration:
- mp is not NULL, rx_conf.rx_nseg is zero: this is the compatible
  configuration, follows the existing implementation, and provides
  a single pool with no description of segment sizes and offsets.
- mp is NULL, rx_conf.rx_seg is not NULL, rx_conf.rx_nseg is not
  zero: this provides the extended configuration, individually for
  each segment.

If the Rx queue is configured with the new settings, the packets being
received will be split into multiple segments pushed to the mbufs
with the specified attributes. The PMD will split the received packets
into multiple segments according to the specification in the
description array.

For example, let's suppose we configured the Rx queue with the
following segments:
    seg0 - pool0, len0=14B, off0=2
    seg1 - pool1, len1=20B, off1=128B
    seg2 - pool2, len2=20B, off2=0B
    seg3 - pool3, len3=512B, off3=0B

A packet 46 bytes long will look like the following:
    seg0 - 14B long @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
    seg1 - 20B long @ 128 in mbuf from pool1
    seg2 - 12B long @ 0 in mbuf from pool2

A packet 1500 bytes long will look like the following:
    seg0 - 14B @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
    seg1 - 20B @ 128 in mbuf from pool1
    seg2 - 20B @ 0 in mbuf from pool2
    seg3 - 512B @ 0 in mbuf from pool3
    seg4 - 512B @ 0 in mbuf from pool3
    seg5 - 422B @ 0 in mbuf from pool3

The offload RTE_ETH_RX_OFFLOAD_SCATTER must be present and
configured to support the new buffer split feature (if rx_nseg
is greater than one).

The split limitations imposed by the underlying PMD are reported
in the newly introduced rte_eth_dev_info->rx_seg_capa field.

The new approach allows splitting the ingress packets into
multiple parts pushed to memory with different attributes.
For example, the packet headers can be pushed to the embedded
data buffers within mbufs and the application data into
external buffers attached to mbufs allocated from different
memory pools. The memory attributes for the split parts may
differ as well - for example, the application data may be pushed
into external memory located on a dedicated physical device,
say a GPU or NVMe. This would improve the DPDK receiving datapath
flexibility while preserving compatibility with the existing API.
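
To make the description concrete, a hedged configuration sketch follows;
pool_hdr, pool_data, dev_info and socket_id are assumed to exist, and the
cast to the rxconf segment union is based on the fields described above
rather than code from this patch:

    struct rte_eth_rxseg_split segs[] = {
        { .mp = pool_hdr,  .length = 64,   .offset = 0 },  /* packet headers */
        { .mp = pool_data, .length = 2048, .offset = 0 },  /* remaining data */
    };
    struct rte_eth_rxconf rxconf = dev_info.default_rxconf;

    rxconf.offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | RTE_ETH_RX_OFFLOAD_SCATTER;
    rxconf.rx_seg = (union rte_eth_rxseg *)segs;   /* extended segment descriptions */
    rxconf.rx_nseg = RTE_DIM(segs);

    /* mp must be NULL when the extended descriptions are provided */
    ret = rte_eth_rx_queue_setup(port_id, 0, 512, socket_id, &rxconf, NULL);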

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-16 22:26:40 +02:00
Gregory Etelson
1b9f274623 app/testpmd: add commands for tunnel offload
The Tunnel Offload API provides a hardware-independent, unified model
to offload tunneled traffic. Key model elements are:
 - apply matches to both outer and inner packet headers
   during entire offload procedure;
 - restore outer header of partially offloaded packet;
 - model is implemented as a set of helper functions.

Implementation details:

* Create application tunnel:
flow tunnel create <port> type <tunnel type>
On success, the command creates application tunnel object and returns
the tunnel descriptor. Tunnel descriptor is used in subsequent flow
creation commands to reference the tunnel.

* Create tunnel steering flow rule:
tunnel_set <tunnel descriptor> parameter used with steering rule
template.

* Create tunnel matching flow rule:
tunnel_match <tunnel descriptor> used with matching rule template.

* If tunnel steering rule was offloaded, outer header of a partially
offloaded packet is restored after miss.

Example:
test packet=
<Ether  dst=24:8a:07:8d:ae:d6 src=50:6b:4b:cc:fc:e2 type=IPv4 |
<IP  version=4 ihl=5 proto=udp src=1.1.1.1 dst=1.1.1.10 |
<UDP  sport=4789 dport=4789 len=58 chksum=0x7f7b |
<VXLAN  NextProtocol=Ethernet vni=0x0 |
<Ether  dst=24:aa:aa:aa:aa:d6 src=50:bb:bb:bb:bb:e2 type=IPv4 |
<IP  version=4 ihl=5 proto=icmp src=2.2.2.2 dst=2.2.2.200 |
<ICMP  type=echo-request code=0 chksum=0xf7ff id=0x0 seq=0x0 |>>>>>>>
>>> len(packet)
92

testpmd> flow flush 0
testpmd> port 0/queue 0: received 1 packets
src=50:6B:4B:CC:FC:E2 - dst=24:8A:07:8D:AE:D6 - type=0x0800 -
length=92

testpmd> flow tunnel 0 type vxlan
port 0: flow tunnel #1 type vxlan
testpmd> flow create 0 ingress group 0 tunnel_set 1
         pattern eth /ipv4 / udp dst is 4789 / vxlan / end
         actions  jump group 0 / end
Flow rule #0 created
testpmd> port 0/queue 0: received 1 packets
tunnel restore info: - vxlan tunnel - outer header present # <--
  src=50:6B:4B:CC:FC:E2 - dst=24:8A:07:8D:AE:D6 - type=0x0800 -
length=92

testpmd> flow create 0 ingress group 0 tunnel_match 1
         pattern eth / ipv4 / udp dst is 4789 / vxlan / eth / ipv4 /
         end
         actions set_mac_dst mac_addr 02:CA:FE:CA:FA:80 /
         queue index 0 / end
Flow rule #1 created
testpmd> port 0/queue 0: received 1 packets
  src=50:BB:BB:BB:BB:E2 - dst=02:CA:FE:CA:FA:80 - type=0x0800 -
length=42

* Destroy flow tunnel
flow tunnel destroy <port> id <tunnel id>

* Show existing flow tunnels
flow tunnel list <port>

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
2020-10-16 19:48:19 +02:00
Eli Britstein
9ec0f97e02 ethdev: add tunnel offload model
rte_flow API provides the building blocks for vendor-agnostic flow
classification offloads. The rte_flow "patterns" and "actions"
primitives are fine-grained, thus enabling DPDK applications the
flexibility to offload network stacks and complex pipelines.
Applications wishing to offload tunneled traffic are required to use
the rte_flow primitives, such as group, meta, mark, tag, and others to
model their high-level objects.  The hardware model design for
high-level software objects is not trivial.  Furthermore, an optimal
design is often vendor-specific.

When hardware offloads tunneled traffic in multi-group logic,
partially offloaded packets may arrive to the application after they
were modified in hardware. In this case, the application may need to
restore the original packet headers. Consider the following sequence:
The application decaps a packet in one group and jumps to a second
group where it tries to match on a 5-tuple, that will miss and send
the packet to the application. In this case, the application does not
receive the original packet but a modified one. Also, in this case,
the application cannot match on the outer header fields, such as VXLAN
vni and 5-tuple.

There are several possible ways to use rte_flow "patterns" and
"actions" to resolve the issues above. For example:
1 Mapping headers to a hardware registers using the
rte_flow_action_mark/rte_flow_action_tag/rte_flow_set_meta objects.
2 Apply the decap only at the last offload stage after all the
"patterns" were matched and the packet will be fully offloaded.
Every approach has its pros and cons and is highly dependent on the
hardware vendor.  For example, some hardware may have a limited number
of registers while other hardware could not support inner actions and
must decap before accessing inner headers.

The tunnel offload model resolves these issues. The model goals are:
1 Provide a unified application API to offload tunneled traffic that
is capable to match on outer headers after decap.
2 Allow the application to restore the outer header of partially
offloaded packets.

The tunnel offload model does not introduce new elements to the
existing RTE flow model and is implemented as a set of helper
functions.

For the application to work with the tunnel offload API it
has to adjust flow rules in multi-table tunnel offload in the
following way:
1 Remove explicit call to decap action and replace it with PMD actions
obtained from rte_flow_tunnel_decap_and_set() helper.
2 Add PMD items obtained from rte_flow_tunnel_match() helper to all
other rules in the tunnel offload sequence.

VXLAN Code example:

Assume application needs to do inner NAT on the VXLAN packet.
The first  rule in group 0:

flow create <port id> ingress group 0
  pattern eth / ipv4 / udp dst is 4789 / vxlan / end
  actions {pmd actions} / jump group 3 / end

The first VXLAN packet that arrives matches the rule in group 0 and
jumps to group 3.  In group 3 the packet will miss since there is no
flow to match and will be sent to the application.  Application  will
call rte_flow_get_restore_info() to get the packet outer header.

Application will insert a new rule in group 3 to match outer and inner
headers:

flow create <port id> ingress group 3
  pattern {pmd items} / eth / ipv4 dst is 172.10.10.1 /
          udp dst 4789 / vxlan vni is 10 /
          ipv4 dst is 184.1.2.3 / end
  actions  set_ipv4_dst  186.1.1.1 / queue index 3 / end

The result of these rules will be that a VXLAN packet with vni=10, outer
IPv4 dst=172.10.10.1 and inner IPv4 dst=184.1.2.3 will be received
decapped on queue 3 with IPv4 dst=186.1.1.1.

Note: The packet in group 3 is considered decapped. All actions in
that group will be done on the header that was inner before decap. The
application may specify an outer header to be matched on. It is the PMD's
responsibility to translate these items to outer metadata.

API usage:

/**
 * 1. Initiate RTE flow tunnel object
 */
const struct rte_flow_tunnel tunnel = {
  .type = RTE_FLOW_ITEM_TYPE_VXLAN,
  .tun_id = 10,
}

/**
 * 2. Obtain PMD tunnel actions
 *
 * pmd_actions is an intermediate variable application uses to
 * compile actions array
 */
struct rte_flow_action **pmd_actions;
rte_flow_tunnel_decap_and_set(&tunnel, &pmd_actions,
                              &num_pmd_actions, &error);
/**
 * 3. offload the first  rule
 * matching on VXLAN traffic and jumps to group 3
 * (implicitly decaps packet)
 */
app_actions  =   jump group 3
rule_items = app_items;  /** eth / ipv4 / udp / vxlan  */
rule_actions = { pmd_actions, app_actions };
attr.group = 0;
flow_1 = rte_flow_create(port_id, &attr,
                         rule_items, rule_actions, &error);

/**
  * 4. after flow creation application does not need to keep the
  * tunnel action resources.
  */
rte_flow_tunnel_action_release(port_id, pmd_actions,
                               num_pmd_actions);
/**
  * 5. After partially offloaded packet miss because there was no
  * matching rule handle miss on group 3
  */
struct rte_flow_restore_info info;
rte_flow_get_restore_info(port_id, mbuf, &info, &error);

/**
 * 6. Offload NAT rule:
 */
app_items = { eth / ipv4 dst is 172.10.10.1 / udp dst 4789 /
            vxlan vni is 10 / ipv4 dst is 184.1.2.3 }
app_actions = { set_ipv4_dst 186.1.1.1 / queue index 3 }

rte_flow_tunnel_match(&info.tunnel, &pmd_items,
                      &num_pmd_items,  &error);
rule_items = {pmd_items, app_items};
rule_actions = app_actions;
attr.group = info.group_id;
flow_2 = rte_flow_create(port_id, &attr,
                         rule_items, rule_actions, &error);

/**
 * 7. Release PMD items after rule creation
 */
rte_flow_tunnel_item_release(port_id,
                             pmd_items, num_pmd_items);

References
1. https://mails.dpdk.org/archives/dev/2020-June/index.html

Signed-off-by: Eli Britstein <elibr@mellanox.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:19 +02:00
Gregory Etelson
5d1bff8fe2 ethdev: allow negative values in flow rule types
RTE flow items & actions use positive values in item & action type.
Negative values are reserved for PMD private types. PMD
items & actions usually are not exposed to application and are not
used to create RTE flows.

The patch gives applications with access to PMD flow
items & actions the ability to integrate RTE and PMD items & actions
and use them to create flow rules.

RTE flow item or action conversion library accepts positive known
element types with predefined sizes only. Private PMD items and
actions do not fit into this scheme because PMD type values are
negative, each PMD has its own type numeration, and element types and
their sizes are not visible at RTE level.  To resolve these
limitations the patch proposes this solution:
1. PMD can expose elements of pointer size only.  RTE flow
   conversion functions will use pointer size for each configuration
   object in private PMD element it processes;
2. RTE flow verification will not reject elements with negative type.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:19 +02:00
Dekel Peled
09315fc838 ethdev: add VLAN attributes to ethernet and VLAN items
This patch implements the change proposed in RFC [1], adding dedicated
fields to ETH and VLAN items structs, to clearly define the required
characteristic of a packet, and enable precise match criteria.
Documentation is updated accordingly.

[1] https://mails.dpdk.org/archives/dev/2020-August/177536.html
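
An illustrative pattern sketch, assuming the new ETH item attribute is
named has_vlan as in this series:

    /* match only packets that carry at least one VLAN tag */
    struct rte_flow_item_eth eth_spec = { .has_vlan = 1 };
    struct rte_flow_item_eth eth_mask = { .has_vlan = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };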

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-10-16 19:48:19 +02:00
Bing Zhao
01817b10d2 app/testpmd: change hairpin queues setup
A new parameter `hairpin-mode` is introduced to the testpmd command
line. Bitmask value is used to provide a more flexible configuration.
This parameter should be used when `hairpinq` is specified in the
command line.

Bit 0 in the LSB indicates the hairpin will use the loop mode. The
previous port Rx queue will be connected to the current port Tx
queue.
Bit 1 in the LSB indicates the hairpin will use pair port mode. The
even index port will be paired with the next odd index port. If the
total number of the probed ports is odd, then the last one will be
paired to itself.
If this byte is zero, then each port will be paired to itself.
Bit 0 takes a higher priority in the checking.

Bit 4 in the second byte indicates whether the hairpin will use explicit
Tx flow mode.

e.g. in the command line, "--hairpinq=2 --hairpin-mode=0x11"

If not set, default value zero will be used and the behavior will
try to get aligned with the previous single port mode. If the ports
belong to different vendors' NICs, it is suggested to use the `self`
hairpin mode only.

Since hairpin configures the hardware resources, the port mask of
packets forwarding engine will not be used here.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-10-16 19:48:19 +02:00
Bing Zhao
9a9ba10ada ethdev: add function to get hairpin peer ports list
After hairpin queues are configured, in general, the application will
maintain the ports topology and even the queues configuration for
the hairpin. But sometimes it will not.

If there is no hot-plug, it is easy to bind and unbind hairpin among
all the ports. The application can just connect or disconnect the
hairpin egress ports to/from all the probed ingress ports. Then all
the connections could be handled properly.

But with hot-plug / hot-unplug, one port could be probed and removed
dynamically. With two ports hairpin, all the connections from and to
this port should be handled after start(bind) or before stop(unbind).
It is necessary to know the hairpin topology with this port.

This function will return the list of peer ports with the actual number
of peer ports after configuration. Either the peer Rx or Tx ports can be
obtained with this function call.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-10-16 19:48:19 +02:00
Bing Zhao
5d9f23fb8f ethdev: add new attributes to hairpin config
To support two ports hairpin mode and keep the backward compatibility
for the application, two new attribute members of the hairpin queue
configuration structure will be added.

`tx_explicit` indicates whether the application itself will insert the Tx
part flow rules. If not set, the PMD will insert the rules implicitly.
`manual_bind` determines whether the hairpin Tx queue and peer Rx queue
will be bound automatically during the device start stage.

Different Tx and Rx queue pairs could have different values, but it
is highly recommended that all paired queues between one egress and
its peer ingress ports have the same values, in order not to bring
any chaos to the system. The actual support of these attribute
parameters will be checked and decided by the PMD drivers.

In the single port hairpin, if both are zero without any setting, the
behavior will remain the same as before. It means that no bind API
needs to be called and no Tx flow rules need to be inserted manually
by the application.
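
A hedged sketch of setting the two new attributes in the hairpin queue
configuration (field placement assumed from the description above):

    struct rte_eth_hairpin_conf hairpin_conf = { .peer_count = 1 };

    hairpin_conf.manual_bind = 1;  /* bind later via the explicit bind API */
    hairpin_conf.tx_explicit = 1;  /* the application inserts the Tx flow rules itself */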

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-16 19:48:19 +02:00
Bing Zhao
a9916fdfb8 ethdev: add hairpin bind and unbind API
In single port hairpin mode, all the hairpin Tx and Rx queues belong
to the same device. After the queues are set up properly, there is
no other dependency between the Tx queue and its Rx peer queue. The
binding process that connected the Tx and Rx queues together from
hardware level will be done automatically during the device start
procedure. Everything required is configured and initialized already
for the binding process.

But in two ports hairpin mode, there will be some cross-dependences
between two different ports. Usually, the ports will be initialized
serially by the main thread but not in parallel. The earlier port
will not be able to enable the bind if the following peer port is
not yet configured with HW resources. What's more, if one port is
detached / attached dynamically, it would introduce more trouble
for the hairpin binding.

To overcome these, new APIs for binding and unbinding are added.
During startup, only the hairpin Tx and Rx peer queues will be set
up. Nothing will be done when starting the device if the queues are
without auto-bind attribute. Only after the required ports pair
started, the `rte_eth_hairpin_bind()` API can be called to bind the
all Tx queues of the egress port to the Rx queues of the peer port.
Then the connection between the egress and ingress ports pair will
be established.

The `rte_eth_hairpin_unbind()` API could be used to disconnect the
egress and the peer ingress ports. This should only be called before
the device is closed if needed. When doing the cleanup, all the
egress and ingress pairs related to a single port should be taken
into consideration, especially in the hot unplug case.
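
A short sketch of the intended call sequence (port numbers are
illustrative):

    /* after both ports are started: connect Tx queues of port 0 to Rx queues of port 1 */
    ret = rte_eth_hairpin_bind(0, 1);

    /* before stopping or closing the ports: tear the connection down again */
    ret = rte_eth_hairpin_unbind(0, 1);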

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-16 19:48:19 +02:00
Dekel Peled
69cb50d9be ethdev: add IPv6 fragment extension header item
Applications handling fragmented IPv6 packets need to match on IPv6
fragment extension header, in order to identify the fragments order
and location in the packet.
This patch introduces the IPv6 fragment extension header item,
proposed in [1].

Relevant definitions are moved from lib/librte_ip_frag/rte_ip_frag.h
to lib/librte_net/rte_ip.h, as they are needed for IPv6 header handling.
struct ipv6_extension_fragment renamed to rte_ipv6_fragment_ext to
adapt it to the common naming convention.

Default mask is not defined, since all fields are optional.

[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html
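
A hedged pattern sketch using the new item; spec and mask are left unset
to match any fragment extension header:

    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };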

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-16 19:48:18 +02:00
Dekel Peled
ad976bd40d ethdev: add extensions attributes to IPv6 item
Using the current implementation of DPDK, an application cannot match on
IPv6 packets, based on the existing extension headers, in a simple way.

The 'Next Header' field in the IPv6 header indicates only the type of the
first extension header. Following extension headers can't be identified
by inspecting the IPv6 header.
As a result, the existence or absence of specific extension headers
can't be used for packet matching.

For example, fragmented IPv6 packets contain a dedicated extension header
(which is implemented in a later patch of this series).
Non-fragmented packets don't contain the fragment extension header.
For an application to match on non-fragmented IPv6 packets, the current
implementation doesn't provide a suitable solution.
Matching on the Next Header field is not sufficient, since additional
extension headers might be present in the same packet.
To match on fragmented IPv6 packets, the same difficulty exists.

This patch implements the update as detailed in RFC [1].
A set of additional values will be added to IPv6 header struct.
These values will indicate the existence of every defined extension
header type, providing simple means for identification of existing
extensions in the packet header.
Continuing the above example, fragmented packets can be identified using
the specific value indicating existence of fragment extension header.
To match on non-fragmented IPv6 packets, the application needs to use
has_frag_ext 0.
To match on fragmented IPv6 packets, it needs to use has_frag_ext 1.
To match on any IPv6 packets, the has_frag_ext field should
not be specified for match.

[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html
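
For example, a hedged spec/mask sketch to match only non-fragmented IPv6
packets:

    struct rte_flow_item_ipv6 ipv6_spec = { .has_frag_ext = 0 };
    struct rte_flow_item_ipv6 ipv6_mask = { .has_frag_ext = 1 };
    struct rte_flow_item item = {
        .type = RTE_FLOW_ITEM_TYPE_IPV6,
        .spec = &ipv6_spec,
        .mask = &ipv6_mask,
    };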

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-16 19:48:18 +02:00
Andrey Vesnovaty
55509e3a49 app/testpmd: support shared flow action
This patch adds shared action support to testpmd CLI.

All shared actions created via the testpmd CLI are assigned an ID for
further reference in other CLI commands. A shared action ID supplied as a
CLI argument or assigned by testpmd is similar to a flow ID and limited
to the scope of the testpmd CLI.

Create shared action syntax:
flow shared_action {port_id} create [action_id {shared_action_id}]
	[ingress] [egress] action {action} / end

Create shared action examples:
	flow shared_action 0 create action_id 100 \
		ingress action rss queues 1 2 end / end
	This creates shared rss action with id 100 on port 0.

	flow shared_action 0 create action_id \
		ingress action rss queues 0 1 end / end
	This creates shared rss action with id assigned by testpmd
	on port 0.

Update shared action syntax:
flow shared_action {port_id} update {shared_action_id}
	action {action} / end

Update shared action example:
	flow shared_action 0 update 100 \
		action rss queues 0 3 end / end
	This updates shared rss action having id 100 on port 0
	with rss to queues 0 3 (in create example rss queues were
	1 & 2).

Destroy shared action syntax:
flow shared_action {port_id} destroy action_id {shared_action_id} [...]

Destroy shared action example:
	flow shared_action 0 destroy action_id 100 action_id 101
	This destroys shared actions having id 100 & 101

Query shared action syntax:
flow shared_action {port} query {shared_action_id}

Query shared action example:
	flow shared_action 0 query 100
	This queries shared actions having id 100

Use shared action as flow action syntax:
flow create {port_id} ... / end actions [action / [...]]
	shared {action_id} / [action / [...]] end

Use shared action as flow action example:
	flow create 0 ingress pattern ... / end \
		actions shared 100 / end
	This creates flow rule where rss action is shared rss action
	having id 100.

All shared action CLIs report status of the command.
Shared action query CLI output depends on action type.

Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-16 19:48:18 +02:00
Andrey Vesnovaty
4d9fd85fb5 ethdev: add shared actions to flow API
Introduce an extension of the flow action API enabling sharing of a single
rte_flow_action in multiple flows. The API is intended for PMDs where
multiple HW-offloaded flows can reuse the same HW essence/object
representing a flow action, and modification of such an essence/object
affects all the rules using it.

Motivation and example
===
Adding or removing one or more queues to RSS used by multiple flow rules
imposes per rule toll for current DPDK flow API; the scenario requires
for each flow sharing cloned RSS action:
- call `rte_flow_destroy()`
- call `rte_flow_create()` with modified RSS action

API for sharing action and its in-place update benefits:
- reduce the overhead of multiple RSS flow rules reconfiguration
- optimize resource utilization by sharing action across multiple
  flows

Change description
===

Shared action
===
In order to represent flow action shared by multiple flows new action
type RTE_FLOW_ACTION_TYPE_SHARED is introduced (see `enum
rte_flow_action_type`).
Actually the introduced API decouples action from any specific flow and
enables sharing of single action by its handle across multiple flows.

Shared action create/use/destroy
===
A shared action may be reused by some or no flow rules at any given
moment, i.e. a shared action resides outside of the context of any flow.
Shared actions represent HW resources/objects used for the action
offloading implementation.
API for shared action create (see `rte_flow_shared_action_create()`):
- should allocate HW resources and make related initializations required
  for shared action implementation.
- make necessary preparations to maintain shared access to
  the action resources, configuration and state.
API for shared action destroy (see `rte_flow_shared_action_destroy()`)
should release HW resources and make related cleanups required for shared
action implementation.

In order to share some flow action reuse the handle of type
`struct rte_flow_shared_action` returned by
rte_flow_shared_action_create() as a `conf` field of
`struct rte_flow_action` (see "example" section).

If a shared action is not used by any flow rule, all resources allocated
for the shared action can be released by rte_flow_shared_action_destroy()
(see "example" section). The shared action handle passed as an argument to
the destroy API should not be used any further, i.e. the result of its
usage is undefined.

Shared action re-configuration
===
Shared action behavior defined by its configuration can be updated via
rte_flow_shared_action_update() (see "example" section). The shared
action update operation modifies HW related resources/objects allocated
on the action creation. The number of operations performed by the update
operation should not depend on the number of flows sharing the related
action. On return of shared action update API action behavior should be
according to updated configuration for all flows sharing the action.

Shared action query
===
Provide a separate API to query the shared action state (see
rte_flow_shared_action_query()). Taking a counter as an example: the query
returns a value aggregating all counter increments across all flow rules
sharing the counter. This API doesn't query the shared action
configuration since it is controlled by the rte_flow_shared_action_create()
and rte_flow_shared_action_update() APIs and is not supposed to change by
other means.

example
===

struct rte_flow_action actions[2];
struct rte_flow_shared_action_conf conf;
struct rte_flow_action action;
/* skipped: initialize conf and action */
struct rte_flow_shared_action *handle =
	rte_flow_shared_action_create(port_id, &conf, &action, &error);
actions[0].type = RTE_FLOW_ACTION_TYPE_SHARED;
actions[0].conf = handle;
actions[1].type = RTE_FLOW_ACTION_TYPE_END;
/* skipped: init attr0 & pattern0 args */
struct rte_flow *flow0 = rte_flow_create(port_id, &attr0, pattern0,
					actions, error);
/* create more rules reusing shared action */
struct rte_flow *flow1 = rte_flow_create(port_id, &attr1, pattern1,
					actions, error);
/* skipped: for flows 2 till N */
struct rte_flow *flowN = rte_flow_create(port_id, &attrN, patternN,
					actions, error);
/* update shared action */
struct rte_flow_action updated_action;
/*
 * skipped: initialize updated_action according to desired action
 * configuration change
 */
rte_flow_shared_action_update(port_id, handle, &updated_action, error);
/*
 * from now on all flows 1 till N will act according to configuration of
 * updated_action
 */
/* skipped: destroy all flows 1 till N */
rte_flow_shared_action_destroy(port_id, handle, error);

Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2020-10-16 19:48:18 +02:00
Jiawei Wang
f78b86c323 doc: add sample flow limitation in mlx5 guide
Add a description of the sample flow limitations.
Sample flow is supported in the NIC-Rx and E-Switch domains.
Since metadata register c0 is deleted while doing the loopback,
only forwarding the sampled packet into the E-Switch manager port
is supported; no additional action is supported in the sample flow.

Add the offloads minimum versions for new sampling feature.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Sarosh Arif
a09272436e doc: fix typo in pcap guide
Changed "net_pcap1;" to "net_pcap1," in order to make the command
correct.

Fixes: 53bf484034 ("net/pcap: capture only ingress packets from Rx iface")
Cc: stable@dpdk.org

Signed-off-by: Sarosh Arif <sarosh.arif@emumba.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
c31689038d doc: advertise Alveo SN1000 SmartNICs family support
Alveo SN1000 family is SmartNICs based on EF100 architecture.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
942a636499 net/sfc: support Tx VLAN insertion offload for EF100
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Ivan Malov
77cb007124 net/sfc: support tunnel TSO for EF100 native Tx
Handle VXLAN and Geneve TSO on EF100 native Tx datapath.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Ivan Malov
4f936666d7 net/sfc: support TSO for EF100 native datapath
Riverhead boards support TSO version 3.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
f71965f9df net/sfc: support tunnels for EF100 native Tx
Add support for outer IPv4/UDP and inner IPv4/UDP/TCP checksum offloads.
Use partial checksum offload for inner TCP/UDP offload.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
e30f1081c2 net/sfc: support IPv4 header checksum offload for EF100 Tx
Use outer layer 3 full checksum offload which does not require any
assistance from driver.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
a8e0c002f2 net/sfc: support TCP and UDP checksum offloads for EF100
Use outer layer 4 full checksum offload which does not require any
assistance from driver.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
94d31cd115 net/sfc: support multi-segment Tx for EF100
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
0cb551b690 net/sfc: implement EF100 native Tx
No offloads support yet including multi-segment (Tx gather).

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
554644e364 net/sfc: implement EF100 native Rx
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
d9bc45879c doc: avoid references to removed config in sfc guide
CONFIG_* variables were used by make-based build system which is
removed.

Fixes: 3cc6ecfdfe ("build: remove makefiles")

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:17 +02:00
Andrew Rybchenko
e1ecb6c775 doc: fix EF10 Rx mode name in sfc guide
Fixes: 390f9b8d82 ("net/sfc: support equal stride super-buffer Rx mode")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:17 +02:00
Ciara Loftus
53a73b7b9d net/af_xdp: forbid umem sharing for xsks with same context
AF_XDP PMDs that wish to share a UMEM must each have a unique context
(ctx), i.e. netdev,qid tuple. For instance, the following will not
work since both PMDs' contexts are identical.

  --vdev net_af_xdp0,iface=ens786f1,start_queue=0,shared_umem=1
  --vdev net_af_xdp1,iface=ens786f1,start_queue=0,shared_umem=1

Supporting this scenario would require locks, which would impact
the performance of the more typical cases - xsks with different
netdev,qid tuples.

Fixes: 74b46340e2 ("net/af_xdp: support shared UMEM")

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
2020-10-16 19:48:17 +02:00
Dekel Peled
f6859b5136 ethdev: support query of age action
The existing API supports the AGE action to monitor the aging of a flow.
This patch implements RFC [1], introducing the response format for the
query of an AGE action.
The application will be able to query the AGE action state.
The response will be returned in the format implemented here.

[1] https://mails.dpdk.org/archives/dev/2020-September/180061.html
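
A hedged query sketch, assuming the response struct is named
rte_flow_query_age with an 'aged' bit as in this series; handle_aged_flow()
is a placeholder:

    struct rte_flow_query_age resp = { 0 };
    struct rte_flow_action age = { .type = RTE_FLOW_ACTION_TYPE_AGE };

    if (rte_flow_query(port_id, flow, &age, &resp, &error) == 0 && resp.aged)
        handle_aged_flow(flow);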

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2020-10-16 19:48:16 +02:00
Jiawei Wang
805b8faa6b ethdev: introduce flow sample action
When using full offload, all traffic will be handled by the HW and
forwarded to the requested VF or wire, and the control application does
not see this traffic anymore. So there is a need for an action that
gives the control application some visibility of the forwarded traffic.

The solution introduces a new action that will sample the incoming
traffic and send duplicated traffic with the specified ratio to the
application, while the original packet will continue to the target
destination.

The fraction of packets sampled equals '1/ratio'; a ratio value of 1
means that the packets will be completely mirrored. The sampled packet
can be assigned a different set of actions from the original packet.

In order to support the sample packet in rte_flow, new rte_flow action
definition RTE_FLOW_ACTION_TYPE_SAMPLE and structure rte_flow_action_sample
will be introduced.
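
A hedged usage sketch, assuming rte_flow_action_sample carries the 'ratio'
and a sub-action list as described above:

    /* duplicate roughly 1 of every 1000 packets to queue 0 for inspection */
    struct rte_flow_action_queue q = { .index = 0 };
    struct rte_flow_action sample_actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_action_sample sample_conf = {
        .ratio = 1000,
        .actions = sample_actions,
    };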

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-16 19:47:58 +02:00
Jakub Grajciar
2f865ed07b net/memif: use abstract socket address
The abstract socket address has no connection with
filesystem pathnames and the socket disappears
once all open references are closed.

The memif PMD will use the abstract socket address by default.
For backwards compatibility use the new argument
'socket-abstract=no'.

Signed-off-by: Jakub Grajciar <jgrajcia@cisco.com>
2020-10-16 19:47:58 +02:00
Mike Baucom
191f19cef8 net/bnxt: support runtime EM selection
This patch adds support to select internal Exact Match vs
External Exact Match support while loading the PMD.
- Added new mem type conditional opcode for internal/external
- Adapted the flowdb resource counts based on selected mode
- Template changes to use the new opcode
- The decision for internal/external EM support is based on the
  devargs parameter max_num_kflows.  If this is set, external EM
  is used.

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-16 19:47:58 +02:00
Mike Baucom
b7d773d4ba net/bnxt: add Stingray device support to ULP
- Add new template files for Stingray
- Add new TRUFLOW resources for Stingray

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-16 19:47:58 +02:00
Li Zhang
b1088fccb5 net/mlx5: support ICMP identifier matching
The PRM exposes the "Icmp_header_data" field in IPv4 ICMP.
Update the ICMP mask parameter with the ICMP identifier and sequence
number fields.
For an ICMP sequence number spec with mask, the Icmp_header_data low
16 bits are set.
For an ICMP identifier spec with mask, the Icmp_header_data high 16 bits
are set.
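
A hedged rte_flow item sketch matching a specific ICMP identifier (field
names follow the generic rte_icmp_hdr; the value is illustrative):

    struct rte_flow_item_icmp icmp_spec = { .hdr = { .icmp_ident = RTE_BE16(0x1234) } };
    struct rte_flow_item_icmp icmp_mask = { .hdr = { .icmp_ident = RTE_BE16(0xffff) } };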

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-10-16 19:47:58 +02:00
Thomas Monjalon
1372d0cb2a ethdev: fix xstat name of basic stats per queue
As described in doc/guides/prog_guide/poll_mode_drv.rst,
the naming scheme for the xstats is parts separated with underscore:
	* direction
	* detail 1
	* detail 2
	* detail n
	* unit
where detail 1 can be "q" followed with a queue number.
It means the name of the stats per queue should be rx_qN_* or tx_qN_*.

The second underscore was missing so far.
Fixing the basic xstat names may be considered an API change,
that's why it should not be backported.

While fixing this mistake, some examples of the naming scheme
are given as part of the API documentation of rte_eth_xstat_name.
More proposals about standardizing statistics:
	http://fast.dpdk.org/events/slides/DPDK-2019-09-Ethernet_Statistics.pdf

Fixes: bd6aa172cf ("ethdev: fetch extended statistics with integer ids")

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
2020-10-16 19:47:55 +02:00
Ophir Munk
df655504e3 app/testpmd: cleanup tunnel protocols parsing
This is a cleanup commit.
It assembles all tunnel outer updates into one function call to avoid
code duplications.
It defines RTE_VXLAN_GPE_DEFAULT_PORT (4790) in accordance with all
other tunnel protocol definitions.
It replaces all numeric values 4789 in their corresponding definition
RTE_VXLAN_GPE_DEFAULT_PORT.
It updates the 'csum parse-tunnel' documentation.

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-16 19:18:47 +02:00
Ophir Munk
2f60c649b1 app/testpmd: enable configuring GENEVE port
IANA has assigned port 6081 as the fixed well-known destination port for
GENEVE. Nevertheless draft-ietf-nvo3-geneve-09 recommends that
implementations make this configurable.  This commit enables specifying
any positive UDP destination port number for GENEVE protocol parsing.

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-16 19:18:47 +02:00
Ophir Munk
ea0e711b8a app/testpmd: add GENEVE parsing
GENEVE is a widely used tunneling protocol in modern Virtualized
Networks. testpmd already supports parsing of several tunneling
protocols including VXLAN, VXLAN-GPE, GRE. This commit adds GENEVE
parsing of inner protocols (IPv4-0x0800, IPv6-0x86dd, Ethernet-0x6558)
based on IETF draft-ietf-nvo3-geneve-09. GENEVE is considered more
flexible than the other protocols.  In terms of protocol format GENEVE
header has a variable length options as opposed to other tunneling
protocols which have a fixed header size.

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-16 19:18:47 +02:00
Fan Zhang
ea1b835a0e vhost/crypto: fix feature negotiation
This patch fixes the feature negotiation for vhost crypto during
initialization. The patch uses the newly created driver start
function to inform the driver type with the fixed vhost features.
In addition the patch provides a new API specifically used by
the application to start a vhost-crypto driver.

Fixes: 939066d965 ("vhost/crypto: add public function implementation")
Cc: stable@dpdk.org

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-10-16 19:18:47 +02:00
Thomas Monjalon
028c57407d doc: make sphinx errors more visible
When running Sphinx through ninja, the wrapper configured in meson
redirects stdout to a log file.
This makes it more important to print issues on stderr.

Some warnings generated by the conf.py were hidden because
printed on stdout. The first improvement is to print them on stderr.

The second measure is to stop processing if meson was configured
with --werror.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2020-10-16 15:01:54 +02:00
Thomas Monjalon
7e23a23a82 doc: fix project version in guides
The DPDK version should appear in the top left corner of the HTML guides.
When dropping make, the version variable was removed,
so Sphinx stopped integrating the version number.

Fixes: a4362f1502 ("doc: build without using make")

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-16 14:41:32 +02:00
Vikas Gupta
26015a9b00 crypto/bcmfs: fix features documentation
Fix documentation error in bcmfs.ini.
Add a section for asymmetric algorithms.

Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-16 14:41:32 +02:00
Omkar Maslekar
4ffc2276e2 eal: add cache line demotion API
rte_cldemote is similar to a prefetch hint - in reverse.
On x86, cldemote(addr) enables software to hint to hardware that a line is
likely to be shared. This is quite useful in core-to-core communications
where a cache line is likely to be shared.
The ARM and PPC implementations are provided as NOPs; real implementations
can be added if any equivalent instructions can be used on those
architectures.
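
A minimal usage sketch, assuming the helper is exposed alongside the other
prefetch hints; obj is a placeholder for a producer/consumer shared object:

    #include <rte_prefetch.h>

    obj->ready = 1;     /* producer finishes writing the shared cache line */
    rte_cldemote(obj);  /* hint: this line will soon be read by another core */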

Signed-off-by: Omkar Maslekar <omkar.maslekar@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
2020-10-16 14:11:45 +02:00
Timothy McDaniel
75d113136f eventdev: express DLB/DLB2 PMD constraints
This commit implements the eventdev ABI changes required by
the DLB/DLB2 PMDs.  Several data structures and constants are modified
or added in this patch, thereby requiring modifications to the
dependent apps and examples.

The DLB/DLB2 hardware does not conform exactly to the eventdev interface.
1) It has a limit on the number of queues that may be linked to a port.
2) Some ports are further restricted to a maximum of 1 linked queue.
3) DLB does not have the ability to carry the flow_id as part
   of the event (QE) payload. Note that the DLB2 hardware is capable of
   carrying the flow_id.

Following is a detailed description of the changes that have been made.

1) Add new fields to the rte_event_dev_info struct. These fields allow
the device to advertise its capabilities so that applications can take
the appropriate actions based on those capabilities.

    struct rte_event_dev_info {
	uint32_t max_event_port_links;
	/**< Maximum number of queues that can be linked to a single event
	 * port by this device.
	 */

	uint8_t max_single_link_event_port_queue_pairs;
	/**< Maximum number of event ports and queues that are optimized for
	 * (and only capable of) single-link configurations supported by this
	 * device. These ports and queues are not accounted for in
	 * max_event_ports or max_event_queues.
	 */
    }

2) Add a new field to the rte_event_dev_config struct. This field allows
the application to specify how many of its ports are limited to a single
link, or will be used in single link mode.

    /** Event device configuration structure */
    struct rte_event_dev_config {
	uint8_t nb_single_link_event_port_queues;
	/**< Number of event ports and queues that will be singly-linked to
	 * each other. These are a subset of the overall event ports and
	 * queues; this value cannot exceed *nb_event_ports* or
	 * *nb_event_queues*. If the device has ports and queues that are
	 * optimized for single-link usage, this field is a hint for how many
	 * to allocate; otherwise, regular event ports and queues can be used.
	 */
    }

3) Replace the dedicated implicit_release_disabled field with a bit field
of explicit port capabilities. The implicit_release_disable functionality
is assigned to one bit, and a port-is-single-link-only  attribute is
assigned to other, with the remaining bits available for future assignment.

	* Event port configuration bitmap flags */
	#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
	/**< Configure the port not to release outstanding events in
	 * rte_event_dev_dequeue_burst(). If set, all events received through
	 * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
	 * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
	 * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
	 */
	#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)

	/**< This event port links only to a single event queue.
	 *
	 *  @see rte_event_port_setup(), rte_event_port_link()
	 */

	#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
	/**
	 * The implicit release disable attribute of the port
	 */

	struct rte_event_port_conf {
		uint32_t event_port_cfg;
		/**< Port cfg flags(EVENT_PORT_CFG_) */
	}

This patch also removes the deprecation notice and announces
the new eventdev ABI changes in the release notes.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-10-15 23:16:07 +02:00
Radu Nicolau
70207f35e2 event/sw: improve performance
Add minimum burst throughout the scheduler pipeline and a flush counter.
Use a single threaded ring implementation for the reorder buffer free list.

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
2020-10-15 23:09:58 +02:00
Pavan Nikhilesh
227f283599 event/octeontx: validate events requested against available
Validate events configured in ssopf against the total number of
events configured across all the RX/TIM event adapters.

Events available to ssopf can be reconfigured by passing the required
amount to kernel bootargs and are only limited by DRAM size.
Example:
	ssopf.max_events= 2097152

Cc: stable@dpdk.org

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
2020-10-15 21:26:19 +02:00
Suanming Mou
80d1a9aff7 ethdev: make flow API thread safe
Currently, the rte_flow functions are not defined as thread safe.
DPDK applications either call the functions in single thread or
protect any concurrent calling for the rte_flow operations using
a lock.

For PMDs support the flow operations thread safe natively, the
redundant protection in application hurts the performance of the
rte_flow operation functions.

And the fact that thread safety is not guaranteed for the
rte_flow functions also limits the applications' expectations.

This feature is going to change the rte_flow functions to be thread
safe. As different PMDs have different flow operations, some may
support thread safety already and others may not. For PMDs that don't
support thread-safe flow operations, a new lock is defined in ethdev
in order to protect thread-unsafe PMDs at the rte_flow level.

A new RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE device flag is added to
determine whether the PMD supports thread safe flow operation or not.
For PMDs support thread safe flow operations, set the
RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE flag, rte_flow level functions will
skip the thread safe helper lock for these PMDs. Again the rte_flow
level thread safe lock only works when PMD operation functions are
not thread safe.

For the PMDs which don't want the default mutex lock, just set the
flag in the PMD, and add the prefer type of lock in the PMD. Then
the default mutex lock is easily replaced by the PMD level lock.

The change has no effect on the current DPDK applications. No change
is required for the current DPDK applications. For the standard posix
pthread_mutex, if no lock contention with the added rte_flow level
mutex, the mutex only does the atomic increasing in
pthread_mutex_lock() and decreasing in
pthread_mutex_unlock(). No futex() syscall will be involved.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2020-10-16 00:44:58 +02:00
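
As a hedged sketch (not part of the commit), a PMD that already serializes its
own flow operations could advertise this during probe roughly as follows; the
helper function is hypothetical and only the flag name is taken from the commit:

    #include <rte_ethdev_driver.h>   /* PMD-internal ethdev definitions */

    static void
    example_pmd_advertise_flow_thread_safety(struct rte_eth_dev *eth_dev)
    {
            /* Ask the rte_flow layer to skip its fallback mutex for this port. */
            eth_dev->data->dev_flags |= RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE;
    }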
Akhil Goyal
a11aeb0936 doc: remove unnecessary API code from security guide
Various xform structures were duplicated in the
rte_security guide even though they can be referred to in the
API documentation generated by Doxygen. The security guide
does not discuss the specific details of these xforms,
so they are removed from the guide.

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 22:24:41 +02:00
Akhil Goyal
486f067a41 security: modify PDCP xform to support SDAP
SDAP is a protocol in the LTE stack on top of PDCP, used for
QoS. A particular PDCP session may or may not have
SDAP enabled, but if it is enabled, the SDAP header should be
authenticated but not encrypted when both confidentiality and
integrity are enabled. Hence, the driver must be informed
via the xform so that it skips the SDAP header during encryption.

A new field is added in the PDCP xform to specify whether SDAP is enabled.
The overall size of the xform is not changed, as hfn_ovrd is just
a flag and does not need uint32_t. Hence, it is converted to uint8_t
and a 16-bit reserved field is added for future use.

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-14 22:24:41 +02:00
Arek Kusztal
3a6f835b33 cryptodev: remove algo lists end
This patch removes enumerators RTE_CRYPTO_CIPHER_LIST_END,
RTE_CRYPTO_AUTH_LIST_END, RTE_CRYPTO_AEAD_LIST_END to prevent
ABI breakage that may arise when adding new crypto algorithms.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 22:22:06 +02:00
Fan Zhang
728c76b0e5 crypto/qat: support raw datapath API
This patch updates QAT PMD to add raw data-path API support.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
2020-10-14 22:22:06 +02:00
Fan Zhang
eb7eed345c cryptodev: add raw crypto datapath API
This patch adds raw data-path APIs for enqueue and dequeue
operations to cryptodev. The APIs support flexible user-defined
enqueue and dequeue behaviors.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Piotr Bronowski <piotrx.bronowski@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 22:22:06 +02:00
Fan Zhang
8d928d47a2 cryptodev: change crypto symmetric vector structure
This patch updates ``rte_crypto_sym_vec`` structure to add
support for both cpu_crypto synchronous operation and
asynchronous raw data-path APIs. The patch also includes
AESNI-MB and AESNI-GCM PMD changes, unit test changes and
documentation updates.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 22:22:06 +02:00
Pablo de Lara
a141f0c7e7 crypto/aesni_mb: support AES-CCM-256
This patch adds support for AES-CCM-256 when using AESNI-MB

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2020-10-14 22:22:06 +02:00
Pablo de Lara
010230a154 crypto/aesni_mb: support Chacha20-Poly1305
Add support for Chacha20-Poly1305 AEAD algorithm.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2020-10-14 22:22:06 +02:00
Pablo de Lara
515a6cc299 crypto/aesni_gcm: support SGL on AES-GMAC
Add Scatter-gather list support for AES-GMAC.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Tested-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
2020-10-14 22:22:06 +02:00
Fan Zhang
d09abf2d10 examples/fips_validation: update GCM test
This patch updates fips validation GCM test capabilities:

- In the NIST GCMVS spec, GMAC test vectors are the GCM ones with a
plaintext length of 0 that use AAD as input data. Originally the
fips_validation tests treated both as GCM test vectors.
This patch introduces automatic test type recognition between
the two: when the plaintext length is 0, the prepare_gmac_xform
and prepare_auth_op functions are called; otherwise, the
prepare_gcm_xform and prepare_aead_op functions are called.

- NIST GCMVS also specifies external or internal IV
generation. When the IV is to be generated internally, the IUT
shall store the generated IV in the response file. This patch
also adds support for that.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Weqaar Janjua <weqaar.a.janjua@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
2020-10-14 22:22:06 +02:00
Fan Zhang
952e10cdad examples/fips_validation: support scatter gather list
This patch adds SGL support to the FIPS sample application.
Originally the application allocated a single mbuf with a data
room of 64KB - 1 bytes. With this change the user may reduce the
mbuf data room size using the added command-line option. If the
input test data is longer than the user-provided data room
size, the application will automatically build chained mbufs
for the target cryptodev PMD to test.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
2020-10-14 22:22:06 +02:00
Akhil Goyal
a054c627a1 crypto/dpaa2_sec: support non-HMAC auth algo versions
Added support for non-HMAC versions of the auth algorithms
(SHA1, SHA2, MD5).
Corresponding capabilities are enabled so that the test
application can enable those test cases.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 22:22:06 +02:00
Akhil Goyal
a6e892f427 crypto/dpaa2_sec: support DES-CBC
Add DES-CBC support for cipher_only, chain and IPsec protocol operations.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 22:22:06 +02:00
Vikas Gupta
cee518e317 test/crypto: add bcmfs
Add global test suite for bcmfs crypto pmd

Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 22:22:06 +02:00
Vikas Gupta
4ed19f0db5 crypto/bcmfs: add session handling and capabilities
Add session handling and capabilities supported by crypto HW
accelerator

Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 22:22:06 +02:00
Vikas Gupta
c8e79da7c6 crypto/bcmfs: introduce BCMFS driver
Add the Broadcom FlexSparc (FS) device creation driver, which registers as a
vdev and creates a device. Add APIs for logs, supporting documentation and a
maintainers file entry.

Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 21:42:57 +02:00
Pablo de Lara
ae8e085c60 crypto/aesni_mb: support KASUMI F8/F9
Add support for KASUMI-F8/F9 algorithms through the intel-ipsec-mb
job API, allowing the mix of these algorithms with others.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 21:40:32 +02:00
Pablo de Lara
6c42e0cf4d crypto/aesni_mb: support SNOW3G-UEA2/UIA2
Add support for SNOW3G-UEA2/UIA2 algorithms through the intel-ipsec-mb
job API, allowing the mix of these algorithms with others.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 21:39:43 +02:00
Pablo de Lara
fd8df85487 crypto/aesni_mb: support ZUC-EEA3/EIA3
Add support for ZUC-EEA3/EIA3 algorithms through the intel-ipsec-mb
job API, allowing the mix of these algorithms with others.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 21:38:49 +02:00
Nagadheeraj Rottela
678f3eca1d crypto/nitrox: support cipher-only operations
This patch adds cipher only crypto operation support.

Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
2020-10-14 21:37:24 +02:00
Nagadheeraj Rottela
93ba4a6e17 crypto/nitrox: support AES-GCM
This patch adds AES-GCM AEAD algorithm.

Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
2020-10-14 21:36:27 +02:00
Tejasree Kondoj
4edede7bc6 crypto/octeontx2: support lookaside IPsec IPv6
Adding IPv6 tunnel mode support in lookaside IPsec PMD.

Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
2020-10-14 21:35:22 +02:00
Marcel Cornu
c94c520b41 crypto/aesni_mb: support AES-ECB
This patch adds AES-ECB 128, 192 and 256 support to the aesni_mb PMD.
AES-ECB 128, 192 and 256 test vectors added to cryptodev tests.

Signed-off-by: Marcel Cornu <marcel.d.cornu@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2020-10-14 21:34:02 +02:00
Nicolas Chautru
cbcda56cce doc: update bbdev guide
Clarify the capability assumptions for LLR and HARQ
compression format.
Correct one historical typo.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Aidan Goddard <aidan.goddard@accelercomm.com>
2020-10-14 21:32:11 +02:00
Adam Dybkowski
85b00824ae crypto/scheduler: rename slave to worker
This patch replaces the usage of the word 'slave' with the more
appropriate word 'worker' in the QAT PMD and the Scheduler PMD,
as well as in their docs. The test app was also modified
to use the new wording.

The Scheduler PMD's public API was modified according to the
previous deprecation notice:
rte_cryptodev_scheduler_slave_attach is now called
rte_cryptodev_scheduler_worker_attach,
rte_cryptodev_scheduler_slave_detach is
rte_cryptodev_scheduler_worker_detach,
rte_cryptodev_scheduler_slaves_get is
rte_cryptodev_scheduler_workers_get.

Also, the configuration value RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES
was renamed to RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 21:31:46 +02:00
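
A small usage sketch with the renamed Scheduler PMD API (the device IDs are
placeholders; the signatures mirror the old slave-based calls):

    #include <rte_cryptodev_scheduler.h>

    static int
    swap_worker(uint8_t scheduler_id, uint8_t old_worker, uint8_t new_worker)
    {
            /* Detach the old worker and attach the new one with the renamed API. */
            if (rte_cryptodev_scheduler_worker_detach(scheduler_id, old_worker) < 0)
                    return -1;
            return rte_cryptodev_scheduler_worker_attach(scheduler_id, new_worker);
    }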
Nicolas Chautru
a5bdedf999 doc: remove orphan bbdev PMD feature table
Remove a feature table that erroneously refers
to a PMD not present in DPDK.

Fixes: 65f1eec ("doc: add feature matrix table for bbdev")
Cc: stable@dpdk.org

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 21:30:10 +02:00
Maxime Coquelin
e6925585d9 baseband/fpga_lte_fec: fix API naming
DPDK APIs have to be prefixed with "rte_" in order to avoid
namespace pollution.

Let's fix it while fpga_lte_fec API is still experimental.
Fixes: efd453698c ("baseband/fpga_lte_fec: add driver for FEC on FPGA")

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tom Rix <trix@redhat.com>
2020-10-14 21:29:59 +02:00
Maxime Coquelin
7adbb468fb baseband/fpga_5gnr_fec: fix API naming
DPDK APIs have to be prefixed with "rte_" in order to avoid
namespace pollution.

Let's fix it while fpga_5gnr_fec API is still experimental.

Fixes: 2d4306438c ("baseband/fpga_5gnr_fec: add configure function")

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tom Rix <trix@redhat.com>
2020-10-14 21:28:31 +02:00
Conor Walsh
a748d24d79 ipsec: promote library as stable
Since librte_ipsec was first introduced in 19.02 and there have been no changes
in its public API since 19.11, it should be considered mature enough to
remove the 'experimental' tag from it.
The RTE_SATP_LOG2_NUM enum is also being dropped from rte_ipsec_sa.h to
avoid possible ABI problems in the future.

Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-14 21:26:36 +02:00
Nicolas Chautru
b17d70922d baseband/acc100: add configure function
Add a configure function to configure the PF from within
bbdev-test itself, without an external application
configuring the device.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Liu Tianjiao <tianjiao.liu@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-10-14 21:26:25 +02:00
Nicolas Chautru
f404dfe35c baseband/acc100: support 4G processing
Adding capability for 4G encode and decode processing

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Liu Tianjiao <tianjiao.liu@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-10-14 21:06:56 +02:00
Nicolas Chautru
5ad5060f8f baseband/acc100: add LDPC processing functions
Adding LDPC decode and encode processing operations

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Liu Tianjiao <tianjiao.liu@intel.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-10-14 21:06:56 +02:00
Nicolas Chautru
db7949bde4 baseband/acc100: introduce PMD for ACC100
Add stubs for the ACC100 PMD

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
Acked-by: Liu Tianjiao <tianjiao.liu@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-10-14 21:06:56 +02:00
Thomas Monjalon
3cd73a1a1c eal: simplify exit functions
The option RTE_EAL_ALWAYS_PANIC_ON_ERROR was off by default,
and not customizable with meson. It is completely removed.

The function rte_dump_registers is a trace of the bare metal support
era, and was not supported in userland. It is completely removed.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-10-15 22:33:47 +02:00
Harry van Haaren
31f83163cf eal: add new prefetch write variants
This commit adds new rte_prefetchX_write() variants that suggest to the
compiler to use a prefetch instruction with the intention to write. As a
compiler builtin, the compiler can choose, based on the compilation target,
the best implementation for this instruction.

Three versions are provided, targeting the different levels of cache.

Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
2020-10-15 21:49:59 +02:00
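
A minimal sketch of the level-0 variant in a driver-style refill path (the
surrounding function is hypothetical; only the rte_prefetch0_write() name
follows the naming described above):

    #include <rte_mbuf.h>
    #include <rte_prefetch.h>

    static inline void
    prepare_mbuf_for_rearm(struct rte_mbuf *m)
    {
            /* Hint that the first mbuf cache line is about to be written. */
            rte_prefetch0_write(m);
    }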
Savinay Dharmappa
bf32a357e2 sched: remove redundant subport parameters
Remove redundant data structure fields.

Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2020-10-15 02:14:28 +02:00
Savinay Dharmappa
ac6fcb841b sched: update subport rate dynamically
Add support to update subport rate dynamically.

Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2020-10-15 02:13:08 +02:00
Savinay Dharmappa
0ea4c6afca sched: add subport profile table
Add subport profile table to internal port data structure
and update the port config function.

Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2020-10-15 02:11:50 +02:00
Dmitry Kozlyuk
51fcb6a1fe cmdline: make implementation logically opaque
struct cmdline exposes platform-specific members it contains, most
notably struct termios that is only available on Unix. While ABI
considerations prevent hiding the definition on already supported
platforms, struct cmdline is considered logically opaque from now on.
Add a deprecation notice targeted at 20.11.

* Remove tests checking struct cmdline content as meaningless.

* Fix missing cmdline_free() in unit test.

* Add cmdline_get_rdline() to access history buffer indirectly.
  The new function is currently used only in tests.

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-10-15 00:39:10 +02:00
Konstantin Ananyev
45da22e42e acl: add 512-bit AVX512 classify method
Introduce classify implementation that uses AVX512 specific ISA.
rte_acl_classify_avx512x32() is able to process up to 32 flows in parallel.
It uses 512-bit width registers/instructions and provides higher
performance than rte_acl_classify_avx512x16(), but can cause
a frequency level change.
Note that for now only the 64-bit version is supported.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-10-14 14:23:01 +02:00
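
A short sketch of opting in to the new method on an existing ACL context
(assuming the enum value follows the function name, e.g.
RTE_ACL_CLASSIFY_AVX512X32, and is selected through rte_acl_set_ctx_classify()
like the other algorithms):

    #include <rte_acl.h>

    static int
    enable_avx512_classify(struct rte_acl_ctx *ctx)
    {
            /* Request the 512-bit wide classify path; returns an error if unsupported. */
            return rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_AVX512X32);
    }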
Konstantin Ananyev
b64c2295f7 acl: add 256-bit AVX512 classify method
Introduce classify implementation that uses AVX512 specific ISA.
rte_acl_classify_avx512x16() is able to process up to 16 flows in parallel.
It uses 256-bit width registers/instructions only
(to avoid a frequency level change).
Note that for now only the 64-bit version is supported.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-10-14 14:23:00 +02:00
Konstantin Ananyev
ad20877a30 acl: remove classify methods count enum
Removal of unused enum value (RTE_ACL_CLASSIFY_NUM).
This enum value is not used inside DPDK, while it prevents
adding new classify algorithms without causing an ABI breakage.

Note that this change introduces a formal ABI incompatibility
with previous versions of the ACL library.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
2020-10-14 14:23:00 +02:00
Konstantin Ananyev
28377e37ec doc: fix missing classify methods in ACL guide
Add brief description for missing ACL classify algorithms:
RTE_ACL_CLASSIFY_NEON and RTE_ACL_CLASSIFY_ALTIVEC.

Fixes: 34fa6c27c1 ("acl: add NEON optimization for ARMv8")
Fixes: 1d73135f9f ("acl: add AltiVec for ppc64")
Cc: stable@dpdk.org

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
2020-10-14 14:23:00 +02:00
Guy Kaneti
09f84c9a6a usertools: add OCTEON TX2 REE device binding
Update the devbind script with a new section for regex devices, and
add the OCTEON TX2 REE device ID to the regex device list.

Signed-off-by: Guy Kaneti <guyk@marvell.com>
2020-10-14 10:41:26 +02:00
Guy Kaneti
4cd1c5fd9e regex/octeontx2: introduce REE driver
Add meson based build infrastructure along with the
OTX2 regexdev (REE) device functions.
Add Marvell OCTEON TX2 regex guide.

Signed-off-by: Guy Kaneti <guyk@marvell.com>
2020-10-14 10:41:21 +02:00
Mairtin o Loingsigh
17a937baed net: add CRC AVX512 implementation
This patch enables the optimized calculation of CRC32-Ethernet and
CRC16-CCITT using the AVX512 and VPCLMULQDQ instruction sets. This CRC
implementation is built if the compiler supports the required instruction
sets. It is selected at run-time if the host CPU, again, supports the
required instruction sets.

Signed-off-by: Mairtin o Loingsigh <mairtin.oloingsigh@intel.com>
Signed-off-by: David Coyle <david.coyle@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Jasvinder Singh <jasvinder.singh@intel.com>
Reviewed-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2020-10-13 19:26:15 +02:00
Mairtin o Loingsigh
ef94569cf9 net: add CRC implementation runtime selection
This patch adds support for run-time selection of the optimal
architecture-specific CRC path, based on the supported instruction set(s)
of the CPU.

The compiler option checks have been moved from the C files to the meson
script. The rte_cpu_get_flag_enabled function is called automatically by
the library at process initialization time to determine which
instructions the CPU supports, with the most optimal supported CRC path
ultimately selected.

Signed-off-by: Mairtin o Loingsigh <mairtin.oloingsigh@intel.com>
Signed-off-by: David Coyle <david.coyle@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Jasvinder Singh <jasvinder.singh@intel.com>
Reviewed-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2020-10-13 19:26:03 +02:00
Radu Nicolau
ad6f7399d2 net/ice: use write combining store for tail updates
Performance improvement: use a write combining store
instead of a regular mmio write to update queue tail
registers.

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Reviewed-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2020-10-13 14:42:02 +02:00
Radu Nicolau
bc4c8309b7 net/ixgbe: use write combining store for tail updates
Performance improvement: use a write combining store
instead of a regular mmio write to update queue tail
registers.

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2020-10-13 14:41:59 +02:00
Radu Nicolau
0767e9eba1 common/qat: use write combining store for tail updates
Performance improvement: use a write combining store
instead of a regular mmio write to update queue tail
registers.

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
2020-10-13 14:41:42 +02:00
Radu Nicolau
0a65bf8d41 net/i40e: use write combining store for tail updates
Performance improvement: use a write combining store
instead of a regular mmio write to update queue tail
registers.

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2020-10-13 14:37:15 +02:00
Radu Nicolau
8a00dfc738 eal: add write combining store
Add rte_write32_wc and rte_write32_wc_relaxed functions
that implement 32bit stores using write combining memory protocol.
Provided generic stubs and x86 implementation.

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2020-10-13 14:11:16 +02:00
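
A hedged sketch of a doorbell update with the new helper (the queue structure
is hypothetical; rte_write32_wc() takes the value first and the MMIO address
second, like rte_write32()):

    #include <stdint.h>
    #include <rte_io.h>

    struct example_queue {
            volatile void *tail_reg;   /* mapped doorbell register */
            uint32_t tail;             /* next descriptor index */
    };

    static inline void
    example_ring_doorbell(struct example_queue *q)
    {
            /* Write-combining store instead of a regular MMIO write. */
            rte_write32_wc(q->tail, q->tail_reg);
    }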
Sachin Saxena
1736219b3c doc: fix diagram in dpaa2 guide
The diagram giving a brief overview of the dpaa2 drivers
was missing in the generated dpaa2.html file.

Fix a typo in the encoding of a literal block
to make it visible in the generated doc file.

Fixes: 846a8305f2 ("doc: add DPAA2 NIC details")
Cc: stable@dpdk.org

Signed-off-by: Sachin Saxena <sachin.saxena@oss.nxp.com>
2020-10-12 22:52:48 +02:00
Min Hu (Connor)
b19da32e31 app/testpmd: add FEC command
This commit adds the testpmd capability to query and configure the FEC
function of a device. This includes:
- show FEC capabilities, example:
	testpmd> show port 0 fec capabilities
- show FEC mode, example:
	testpmd> show port 0 fec_mode
- config FEC mode, example:
	testpmd> set port <port_id> fec_mode auto|off|rs|baser

	where:

	auto|off|rs|baser are the four kinds of FEC mode which the device
	supports according to the MAC link speed.

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Chengchang Tang <tangchengchang@huawei.com>
2020-10-09 13:17:43 +02:00
Min Hu (Connor)
9bf2ea8dbc net/hns3: support FEC
Forward error correction (FEC) is a bit error correction mode.
It adds error correction information to data packets at the
transmit end, and uses the error correction information to correct
the bit errors generated during data packet transmission at the
receive end. This improves signal quality but also brings a delay
to signals. This function can be enabled or disabled as required.

This patch adds FEC support for ethdev. Introduce ethdev
operations which support querying and configuring FEC information in
hardware.

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Chengchang Tang <tangchengchang@huawei.com>
2020-10-09 13:17:43 +02:00
Min Hu (Connor)
b7ccfb09da ethdev: introduce FEC API
This patch adds Forward Error Correction (FEC) support for ethdev.
Introduce APIs which support querying and configuring FEC information in
hardware.

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Chengchang Tang <tangchengchang@huawei.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-09 13:17:43 +02:00
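
A rough application-side sketch of the new API (assuming rte_eth_fec_get() and
rte_eth_fec_set() operate on a capability bitmask and that a single mode is
expressed with RTE_ETH_FEC_MODE_TO_CAPA(); error handling is trimmed):

    #include <rte_ethdev.h>

    static int
    force_rs_fec(uint16_t port_id)
    {
            uint32_t fec_capa;

            /* Read the currently configured FEC mode(s) as a capability bitmask. */
            if (rte_eth_fec_get(port_id, &fec_capa) < 0)
                    return -1;

            /* Request RS FEC only. */
            return rte_eth_fec_set(port_id,
                            RTE_ETH_FEC_MODE_TO_CAPA(RTE_ETH_FEC_RS));
    }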
Lance Richardson
369f6077c5 net/bnxt: support fast mbuf free
Add support for DEV_TX_OFFLOAD_MBUF_FAST_FREE to bnxt
vector mode transmit. This offload may be enabled
only when multi-segment transmit is not needed, all
transmitted mbufs for a given queue will be allocated
from the same pool, and all transmitted mbufs will
have a reference count of 1.

Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-09 13:17:42 +02:00
Huisong Li
9f6dc8592d ethdev: fix data type in TC queues
Currently, base and nb_queue in the tc_rxq and tc_txq information
of queue and TC mapping on both TX and RX paths are uint8_t.
However, this data will be truncated when the queue number under a TC
is greater than 256. So it is necessary to change base and nb_queue
from uint8_t to uint16_t.

Fixes: 89d6728c78 ("ethdev: get DCB information")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Reviewed-by: Dongdong Liu <liudongdong3@huawei.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-08 19:58:11 +02:00
Ajit Khaparde
8b96a65ce5 net/bnxt: update HWRM structures
Update the HWRM API to the newer 1.10.1.70 version.

A few fields have been renamed because of this.
rx_err_pkt -> rx_discard_pkts
rx_drop_pkts -> rx_error_pkts

tx_err_pkts -> tx_discard_pkts
tx_drop_pkts -> tx_error_pkts

link_signal_mode -> active_fec_signal_mode

tx_bd_long_hi.mss -> tx_bd_long_hi.kid_or_ts_high_mss
tx_bd_long_hi.hdr_size -> tx_bd_long_hi.kid_or_ts_low_hdr_size

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-08 19:58:11 +02:00
Ajit Khaparde
7ed45b1a7c net/bnxt: support RSS hash selection
Add support to select RSS hash based on innermost or outermost
headers. If an application is started without any specific settings
the default mode configured by FW or HW shall be used.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-08 19:58:11 +02:00
Chengchang Tang
fa29fe45a7 net/hns3: support queue start and stop
The new generation hns3 network engine supports independent enabling and
disabling of a single Tx/Rx queue. So, it can support the queue start
and stop feature. In addition, when different numbers of Tx and Rx
queues need to be enabled in some applications, hns3 pmd does not need
to create fake queues to enable these scenarios.

This patch adds the queue start and stop feature for the new generation hns3
networking engine and cancels the creation of fake queues on it. The
previously improperly named queue-related functions were also renamed to
improve readability.

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
2020-10-08 19:58:10 +02:00
Kevin Laatz
2ae23f5647 raw/ioat: add fill operation
Add fill operation enqueue support for IOAT and IDXD. The fill enqueue is
similar to the copy enqueue, but takes a 'pattern' rather than a source
address to transfer to the destination address. This patch also includes an
additional test case for the new operation type.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Bruce Richardson
01863b9d23 raw/ioat: include example configuration script
Devices managed by the idxd kernel driver must be configured for DPDK use
before they can be used by the ioat driver. This example script serves both
as a quick way to get the driver set up with a simple configuration, and as
the basis for users to modify it and create their own configuration
scripts.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Kevin Laatz
777edf43ae raw/ioat: introduce vdev probe for DSA/idxd device
The Intel DSA devices can be exposed to userspace via kernel driver, so can
be used without having to bind them to vfio/uio. Therefore we add support
for using those kernel-configured devices as vdevs, taking as parameter the
individual HW work queue to be used by the vdev.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Bruce Richardson
d09d396fad raw/ioat: add skeleton for VFIO/UIO based DSA device
Add in the basic probe/remove skeleton code for DSA devices which are bound
directly to vfio or uio driver. The kernel module for supporting these uses
the "idxd" name, so that name is used as function and file prefix to avoid
conflict with existing "ioat" prefixed functions.

Since we are adding new files to the driver and there will be common
definitions shared between the various files, we create a new internal
header file ioat_private.h to hold common macros and function prototypes.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Kevin Laatz
43f9b521a7 usertools: support binding Intel DSA device
Intel Data Streaming Accelerator (Intel DSA) is a high-performance data
copy and transformation accelerator which will be integrated in future
Intel processors [1].

Add DSA device support to dpdk-devbind.py script.

[1] https://01.org/blogs/2019/introducing-intel-data-streaming-accelerator

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Bruce Richardson
f55d185540 raw/ioat: add separate API for fence call
Rather than having the fence signalled via a flag on a descriptor - which
requires reading the docs to find out whether the flag needs to go on the
last descriptor before, or the first descriptor after the fence - we can
instead add a separate fence API call. This becomes unambiguous to use,
since the fence call explicitly comes between two other enqueue calls. It
also allows more freedom of implementation in the driver code.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
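
A small sketch of how the fence slots between two enqueue batches (the enqueue
calls themselves are elided in comments; only rte_ioat_fence() is shown,
taking the rawdev ID):

    #include <rte_ioat_rawdev.h>

    static inline int
    enqueue_ordered_batches(int dev_id)
    {
            /* ... first batch enqueued with rte_ioat_enqueue_copy() ... */

            /* All later operations wait for the earlier ones to complete. */
            if (rte_ioat_fence(dev_id) != 0)
                    return -1;

            /* ... dependent batch enqueued here, then the doorbell is rung ... */
            return 0;
    }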
Bruce Richardson
979e29ddbb raw/ioat: rename functions to be operation-agnostic
Since the hardware supported by the ioat driver is capable of operations
other than just copies, we can rename the doorbell and completion-return
functions to not have "copies" in their names. These functions are not
copy-specific, and so would apply for other operations which may be added
later to the driver.

Also add a suitable warning using the deprecation attribute for any code using
the old function names.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Cheng Jiang
95b686a665 raw/ioat: add flag to control copying handle parameters
Add a flag which controls whether rte_ioat_enqueue_copy and
rte_ioat_completed_copies functions should process handle parameters. Not
doing so can improve the performance when handle parameters are not
necessary.

Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
2020-10-08 14:33:20 +02:00
Gage Eads
1fb6301ccb doc: add stack mempool guide
This guide describes the two stack modes, their tradeoffs, and (via a
reference to the mempool guide) how to enable them.

Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
2020-10-08 09:34:58 +02:00
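
A brief sketch of selecting the stack handler for an mbuf pool (pool sizing
values are placeholders; "stack" is the lock-based mode the guide describes,
with "lf_stack" as the lock-free alternative):

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    static struct rte_mempool *
    create_stack_backed_pool(void)
    {
            /* Same as rte_pktmbuf_pool_create() but with explicit ops selection. */
            return rte_pktmbuf_pool_create_by_ops("stack_pool", 8192, 256, 0,
                            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), "stack");
    }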
Yi Yang
e2d8110636 gro: support VXLAN UDP/IPv4
VXLAN UDP/IPv4 GRO can help improve VM-to-VM UDP
performance when UFO or GSO is enabled in the VM; GRO
must be supported if UFO or GSO is enabled,
otherwise performance cannot improve much
when only GSO is in use.

With this enabled in DPDK, OVS DPDK can leverage it
to improve VM-to-VM UDP performance; it will reassemble
VXLAN UDP/IPv4 fragments immediately after they are
received from a physical NIC. It is very helpful in
the OVS DPDK VXLAN use case.

Signed-off-by: Yi Yang <yangyi01@inspur.com>
Acked-by: Jiayu Hu <jiayu.hu@intel.com>
2020-10-06 21:51:03 +02:00
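
A hedged sketch of enabling the new GRO types on a received burst with the
lightweight API (flow sizing values are placeholders, and the RTE_GRO_UDP_IPV4
and RTE_GRO_IPV4_VXLAN_UDP_IPV4 type names should be checked against the
release headers):

    #include <rte_gro.h>

    static uint16_t
    gro_udp_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
            struct rte_gro_param param = {
                    .gro_types = RTE_GRO_UDP_IPV4 | RTE_GRO_IPV4_VXLAN_UDP_IPV4,
                    .max_flow_num = 64,
                    .max_item_per_flow = 32,
            };

            /* Merge UDP fragments in place; returns the number of packets left. */
            return rte_gro_reassemble_burst(pkts, nb_pkts, &param);
    }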
Yi Yang
1ca5e67408 gro: support UDP/IPv4
UDP/IPv4 GRO can help improve VM-to-VM UDP performance
when UFO or GSO is enabled in the VM; GRO must be supported
if UFO or GSO is enabled, otherwise performance cannot
improve much when only GSO is in use.

With this enabled in DPDK, OVS DPDK can leverage it
to improve VM-to-VM UDP performance; it will reassemble
UDP fragments immediately after they are received from
a physical NIC. It is very helpful in the OVS DPDK VLAN use
case.

Signed-off-by: Yi Yang <yangyi01@inspur.com>
Acked-by: Jiayu Hu <jiayu.hu@intel.com>
2020-10-06 21:51:03 +02:00
Thomas Monjalon
56bb5841fd kernel/linux: remove igb_uio
As decided in the Technical Board in November 2019,
the kernel module igb_uio is moved to the dpdk-kmods repository
in the /linux/igb_uio/ directory.

Minutes of Technical Board meeting:
https://mails.dpdk.org/archives/dev/2019-November/151763.html

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-06 14:50:13 +02:00
Rohit Raj
7807a4fd38 bus/fslmc: run secondary debug app without restriction
The dpaa2 hardware imposes limits on some HW access devices like DPMCP
(Management Control Port) and DPIO (HW portal). This causes issues with
their shared usage in multi-process applications. It can be overcome by
using whitelist/blacklist in the primary and secondary applications.
However, this imposes restrictions on standard debugging apps like
dpdk-procinfo, which can be used to debug any existing application.

This patch introduces reserving an extra DPMCP and DPIO to be used by a
secondary process if the devices were not blocked previously in the
primary application.
This leaves the last DPMCP and DPIO for secondary process usage.

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Signed-off-by: Sachin Saxena <sachin.saxena@oss.nxp.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2020-10-06 14:43:40 +02:00
Bruce Richardson
4b3f8119c7 test/raw: remove ioat-specific autotest
Since the rawdev autotest can now be used to test all rawdevs on the
system, there is no need for a dedicated ioat autotest command.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-06 09:26:28 +02:00
Bruce Richardson
39f7b298fe test/raw: run selftest on all devices
Rather than having each rawdev provide its own autotest command, we can
instead just use the generic rawdev_autotest to test any and all available
rawdevs.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-06 09:26:20 +02:00
Xiaoyun Li
f5057be340 raw/ntb: support Intel Ice Lake
Add NTB device support (4th generation) for Intel Ice Lake platform.

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
2020-10-06 01:24:33 +02:00
Stephen Hemminger
483a914d24 doc: remove trailing white space
Run a simple script to remove trailing white space and blank
lines at end of file across all documents.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-10-06 00:42:21 +02:00
Bruce Richardson
1509ef9350 doc: fix formatting of notes in meson guide
The "note" callouts in the chapter describing the meson build were
incorrectly formatted, so adjust to use the correct markdown syntax.

Fixes: 9c3adc289c ("doc: add instructions on build using meson")
Cc: stable@dpdk.org

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
2020-10-05 23:56:37 +02:00
Bruce Richardson
6572fc9260 doc: make sphinx comply with meson werror option
When the --werror meson build option is set, we can pass the "-W",
warning-as-errors, flag to sphinx to get the same behaviour for doc
building as for building the rest of DPDK. This can help catch
documentation errors sooner in the development process.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2020-10-05 23:52:02 +02:00
Gage Eads
100d9b8066 stack: promote library as stable
The stack library was first released in 19.05, and its interfaces have been
stable since their initial introduction. This commit promotes the full
interface to stable, starting with the 20.11 major version.

Signed-off-by: Gage Eads <gage.eads@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-10-05 11:56:17 +02:00
Robin Jarry
b1df4163a8 doc: remove references to python 2
Python 2 support has now been dropped. Remove references to it in the
documentation.

Since all python scripts now have a proper shebang that calls python3,
execute the scripts directly without specifying the interpreter.

The Sphinx version from most Linux distros is OK in 2020, so do not
encourage people to break their system by installing with pip. Use the
distros' official packages.

Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-05 10:24:12 +02:00
Louise Kilheeney
3f6f83626c support python 3 only
Changed scripts to explicitly use Python 3 only, to avoid
maintaining Python 2.
Removed deprecation notices.

Signed-off-by: Louise Kilheeney <louise.kilheeney@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Robin Jarry <robin.jarry@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
2020-10-02 13:51:00 +02:00
Maxime Coquelin
6b90143705 net/virtio: introduce vhost-vDPA backend
vhost-vDPA is a new virtio backend type introduced by the vDPA kernel
framework, which provides abstraction for vDPA devices and
exposes a unified control interface through a char dev.

This patch adds support to the vhost-vDPA backend. As similar to
the existing vhost kernel backend, a set of virtio_user ops were
introduced for vhost-vDPA backend to handle device specific operations
such as:
 - device setup
 - ioctl message handling
 - queue pair enabling
 - dma map/unmap
vDPA relevant ioctl codes and data structures are also defined in
this patch.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
2020-09-30 23:16:56 +02:00
Maxime Coquelin
cacf8267cc vhost: remove dequeue zero-copy support
Dequeue zero-copy removal was announced in DPDK v20.08.
This feature brings constraints which make the maintenance
of the Vhost library difficult. Its limitations also make it
difficult for applications to use (Tx vring starvation).

Removing it makes it easier to add new features, and also remove
some code in the hot path, which should bring a performance
improvement for the standard path.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
2020-09-30 23:16:56 +02:00
Ivan Dyukov
e2572e43f1 net/virtio: sync speed capability with ethdev
The ethdev library was updated with the new 200G speed.

Add the 200G speed capability to the virtio device.

Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-09-30 23:16:55 +02:00
Ivan Dyukov
b415b7a068 net/virtio: set default speed unknown
rte_ethdev states a new rule for NICs: they should return UNKNOWN
speed if the speed is unknown and the interface is up; in case of a down
interface, NONE speed should be returned.

Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-09-30 23:16:55 +02:00
Maxime Coquelin
eca9a0d6c8 vhost: promote vDPA API as stable
As announced in v20.08, this patch makes the vDPA
and related Vhost API stable.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
2020-09-30 23:16:55 +02:00
Chenbo Xia
ea87c337e5 doc: fix ethdev port id size
The ethdev port id should be 16 bits now. This patch changes the
variable size of port id in docs from 8 bits to 16 bits.

Fixes: fdec9301f5 ("doc: add flow classify guides")
Fixes: 4a3ef59a10 ("examples/flow_filtering: add simple demo of flow API")
Cc: stable@dpdk.org

Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-09-30 19:19:15 +02:00
Andrew Rybchenko
dd45b8805b net/sfc: create virtual switch to enable VFs
PF driver is responsible for vSwitch creation and vPorts allocation
for VFs.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-09-30 19:19:15 +02:00
Ciara Loftus
288a85aef1 net/af_xdp: enable custom XDP program loading
The new 'xdp_prog=<string>' vdev arg allows the user to specify the path to
a custom XDP program to be set on the device, instead of the default libbpf
one. The program must have an XSK_MAP of name 'xsks_map' which will allow
for the redirection of some packets to userspace and thus the PMD, using
some criteria defined in the program. This can be useful for filtering
purposes, for example if we only want a subset of packets to reach
userspace or to drop or process a subset of packets in the kernel.

Note: a netdev may only load one program.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Tested-by: Xuekun Hu <xuekun.hu@intel.com>
2020-09-30 19:19:15 +02:00
Thomas Monjalon
fbd1913561 ethdev: remove old close behaviour
The temporary flag RTE_ETH_DEV_CLOSE_REMOVE is removed.
It was introduced in DPDK 18.11 in order to give time for PMDs to migrate.

The old behaviour was to free only queues when closing a port.
The new behaviour is calling rte_eth_dev_release_port() which does
three more tasks:
	- trigger event callback
	- reset state and few pointers
	- free all generic port resources

The private port resources must be released in the .dev_close callback.

The .remove callback should:
	- call .dev_close callback
	- call rte_eth_dev_release_port()
	- free multi-port device shared resources

Despite waiting two years, some drivers have not migrated,
so they may hit issues with the incompatible new behaviour.
After sending emails, adding logs, and announcing the deprecation,
the only last solution is to declare these drivers as unmaintained:
	ionic, liquidio, nfp
Below is a summary of what to implement in those drivers.

* The freeing of private port resources must be moved
from the ".remove(device)" function to the ".dev_close(port)" function.

* If a generic resource (.mac_addrs or .hash_mac_addrs) cannot be freed,
it must be set to NULL in ".dev_close" function to protect from
subsequent rte_eth_dev_release_port() freeing.

* Note 1:
The generic resources are freed in rte_eth_dev_release_port(),
after ".dev_close" is called in rte_eth_dev_close(), but not when
calling ".dev_close" directly from the ".remove" PMD function.
That's why rte_eth_dev_release_port() must still be called explicitly
from ".remove(device)" after calling the ".dev_close" PMD function.

* Note 2:
If a device can have multiple ports, the common resources must be freed
only in the ".remove(device)" function.

* Note 3:
The port is supposed to be in a stopped state when it is closed.
If that is not the case, it is up to the PMD implementation
how to react when trying to close a non-stopped port:
either try to stop it automatically or just return an error.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Liron Himi <lironh@marvell.com>
Reviewed-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
2020-09-30 19:19:14 +02:00
Kiran Kumar K
2ea8e2919b net/octeontx2: support VLAN insert and strip actions
Adding support for RTE Flow VLAN insert and strip actions.

Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-09-30 19:19:11 +02:00
Manish Chopra
5700d0f0b4 net/qede: support VF FLR
This patch adds the required bits to handle VF FLR
indication from the Management FW (MFW) of the device.

With that, VFs were able to load in a VM (VF attached as PCI
passthrough to the guest VM) followed by FLR successfully.

Updated the docs/guides with the feature support.

Signed-off-by: Manish Chopra <manishc@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
2020-09-30 19:19:11 +02:00
Ajit Khaparde
c23f9ded03 net/bnxt: support 200G PAM4 link
Thor based NICs can support PAM4 as well as NRZ link negotiation.
With this patch we are adding support for 200G link speeds based on
PAM4 signaling. While PAM4 can negotiate speeds for 50G and 100G as
well, the PMD will use NRZ signalling for these speeds.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
2020-09-30 19:19:10 +02:00
David Liu
6b67721dee app/testpmd: add EEPROM command
Add module EEPROM/EEPROM dump command
   "show port <port_id> (module_eeprom|eeprom)"
Commands will dump the content of the EEPROM/module
EEPROM for the selected port.

Signed-off-by: David Liu <dliu@iol.unh.edu>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-09-30 19:19:10 +02:00
Ciara Loftus
74b46340e2 net/af_xdp: support shared UMEM
Kernel v5.10 will introduce the ability to efficiently share a UMEM
between AF_XDP sockets bound to different queue ids on the same or
different devices. This patch integrates that functionality into the AF_XDP
PMD.

A PMD will attempt to share a UMEM with others if the shared_umem=1 vdev
arg is set. UMEMs can only be shared across PMDs with the same mempool, up
to a limited number of PMDs governed by the size of the given mempool.
Sharing UMEMs is not supported for non-zero-copy (aligned) mode.

The benefit of sharing UMEM across PMDs is a saving in memory due to not
having to register the UMEM multiple times. Throughput was measured to
remain within 2% of the default mode (not sharing UMEM).

A version of libbpf >= v0.2.0 is required and the appropriate pkg-config
file for libbpf must be installed such that meson can determine the
version.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
2020-09-30 19:19:09 +02:00
Junyu Jiang
b966461872 net/i40e: fix byte counters
This patch fixes the issue that Rx/Tx byte statistics counters
overflowed due to the 48-bit limitation, by enlarging the limit.

Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-09-30 19:19:09 +02:00
Cristian Dumitrescu
c2b603bdf4 doc: add new SWX pipeline type to release notes
Add the new SWX pipeline type to the release notes.

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2020-10-01 19:04:42 +02:00
Ciara Power
89c67ae2cb doc: remove references to make from prog guide
Make is no longer supported for compiling DPDK; references to it are now
removed from the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-01 16:51:24 +02:00
Ciara Power
79238624c2 doc: remove references to make from howto guides
Make is no longer supported for compiling DPDK; references to it are now
removed from the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-01 16:51:24 +02:00
Ciara Power
6a74b08a6b doc: remove references to make from FreeBSD guide
Make is no longer supported for compiling DPDK; references to it are now
removed from the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2020-10-01 16:51:24 +02:00
Ciara Power
5c7cb08888 doc: remove references to make from Linux guide
Make is no longer supported for compiling DPDK; references to it are now
removed from the documentation.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
2020-10-01 16:50:52 +02:00
Thomas Monjalon
2e978b2627 doc: fix references to removed guide
The page "Development Kit Build System" was about make,
so it has been removed. A better help is in the Linux guide
(note: mlx4/mlx5 are supported on Linux only for now).

Fixes: 3cc6ecfdfe ("build: remove makefiles")

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ciara Power <ciara.power@intel.com>
2020-10-01 16:41:15 +02:00
Radu Nicolau
84fb33fec1 build: remove deprecated cpuflag macros
Replace use of RTE_MACHINE_CPUFLAG macros with regular compiler
macros, which are more complete than those provided by DPDK, and as such
it allows new instruction sets to be leveraged without having to do
extra work to set them up in DPDK.

Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-09-25 11:13:57 +02:00
Phil Yang
f0f5d844d1 eal: remove deprecated coherent IO memory barriers
The 20.08 release deprecated the rte_cio_*mb APIs because these APIs
provide the same functionality as the rte_io_*mb APIs on all platforms, so
remove them and use rte_io_*mb instead.

Signed-off-by: Phil Yang <phil.yang@arm.com>
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-09-23 13:40:26 +02:00
Hyong Youb Kim
a4ab862e99 net/enic: support VXLAN decap action combined with VLAN pop
Flow Manager (flowman) provides DECAP_STRIP operation which
decapsulates VXLAN header and then removes VLAN header from the inner
packet. Use this operation to support vxlan_decap followed by
of_pop_vlan.

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
2020-09-21 18:10:38 +02:00
Hyong Youb Kim
f985387e44 net/enic: support priorities for TCAM flows
Group 0 corresponds to TCAM which supports priorities. Accept non-zero
priorities for group 0 flows.

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
2020-09-21 18:10:38 +02:00
Hyong Youb Kim
8ca08b7026 net/enic: support egress port id action
Use Flow Manager (flowman) to support egress PORT_ID action. It can
steer egress packets from PFs and VFs to any uplink port as long as
they are all on the same VIC adapter. It can also steer packets
between ports on the same VIC adapter (i.e. loopback).

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
2020-09-21 18:10:38 +02:00
Hyong Youb Kim
a93cf16926 net/enic: enable flow API for VF representor
Use Flow Manager (flowman) to support flow API for
representors. Representor's flow handlers simply invoke PF handlers
and pass the representor's flowman structure. The PF flowman handlers
are aware of representors and perform appropriate devcmds to create
flows on the NIC.

Also use flowman to create internal flows for implicit VF-representor
path. With that, representor Tx/Rx is now functional.

Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
2020-09-21 18:05:38 +02:00
Chengchang Tang
61efaf5b62 ethdev: support getting Rx buffer size in Rx queue info
Add a field named rx_buf_size in rte_eth_rxq_info to indicate the buffer
size used by the HW when receiving packets.

In this way, upper-layer users can get this information by calling
rte_eth_rx_queue_info_get.

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-09-21 18:05:38 +02:00
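
A short sketch of reading the new field from the application side (assuming
rx_buf_size fits in the uint16_t returned here; error handling is minimal):

    #include <rte_ethdev.h>

    static uint16_t
    get_rx_buf_size(uint16_t port_id, uint16_t queue_id)
    {
            struct rte_eth_rxq_info qinfo;

            if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) != 0)
                    return 0;

            /* Buffer size the HW uses when receiving packets on this queue. */
            return qinfo.rx_buf_size;
    }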
Ivan Dyukov
ba5509a6a8 app: use new link status print format
Add usage of rte_eth_link_to_str function to applications and docs.

Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-09-21 18:05:37 +02:00
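
A minimal usage sketch of the helper referenced above (assuming the string
buffer is sized with RTE_ETH_LINK_MAX_STR_LEN, as the ethdev header provides):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_port_link(uint16_t port_id)
    {
            struct rte_eth_link link;
            char link_str[RTE_ETH_LINK_MAX_STR_LEN];

            rte_eth_link_get_nowait(port_id, &link);
            /* Format the link status/speed/duplex as human-readable text. */
            rte_eth_link_to_str(link_str, sizeof(link_str), &link);
            printf("Port %u: %s\n", port_id, link_str);
    }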