This patch removes the *dual* keyword from dual license definitions
to avoid confusion, as the word *dual* is not required in the SPDX
license tag.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The current DevX implementation of the relaxed ordering feature
enables relaxed ordering only if both relaxed ordering read AND write
are supported; in that case both relaxed ordering read and write are
activated.
This commit optimizes the usage of relaxed ordering by enabling it
when either the read OR the write feature is supported. Each relaxed
ordering type is activated according to its own capability bit.
This aligns the DevX flow with the verbs implementation of ibv_reg_mr
when the IBV_ACCESS_RELAXED_ORDERING flag is used.
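As a minimal sketch of the new behavior (the structures and field
names below are simplified placeholders, not the exact mlx5 DevX
definitions), each attribute now simply mirrors its own capability
bit:

    /* Simplified sketch: each relaxed ordering type follows its own
     * capability bit instead of requiring both read AND write support.
     * Names are illustrative only. */
    struct ro_caps {
            unsigned int relaxed_ordering_read:1;
            unsigned int relaxed_ordering_write:1;
    };

    struct ro_mkey_attr {
            unsigned int relaxed_ordering_read:1;
            unsigned int relaxed_ordering_write:1;
    };

    static void
    set_relaxed_ordering(struct ro_mkey_attr *attr,
                         const struct ro_caps *caps)
    {
            attr->relaxed_ordering_read = caps->relaxed_ordering_read;
            attr->relaxed_ordering_write = caps->relaxed_ordering_write;
    }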
Fixes: 53ac93f71a ("net/mlx5: create relaxed ordering memory regions")
Cc: stable@dpdk.org
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When the PF command channel is in an error state, the variables used
in the log message have already been cleared before they are printed.
Fixes: 214164a6bf ("net/hinic/base: remove unused function parameters")
Cc: stable@dpdk.org
Signed-off-by: Guoyang Zhou <zhouguoyang@huawei.com>
During device initialization, the driver previously supported only
four AEQs; now the driver can support two or more AEQs as specified
by the chip configuration file.
Fixes: 611faa5f46 ("fix various typos found by Lintian")
Cc: stable@dpdk.org
Signed-off-by: Guoyang Zhou <zhouguoyang@huawei.com>
The ethdev port id is now 16 bits wide. This patch fixes the data
type of the 'pid' variable, changing it from uint32_t to uint16_t.
RTE_MAX_ETHPORTS is the maximum number of ports and is customized by
the user. To avoid 16-bit unsigned integer overflow, the valid value
of RTE_MAX_ETHPORTS should be in the range 0 to UINT16_MAX, and it is
safer to cut one more port from that space.
So RTE_BUILD_BUG_ON() is used to ensure that RTE_MAX_ETHPORTS is less
than UINT16_MAX.
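A minimal sketch of the compile-time guard (placed in a helper
function here for illustration; the actual location in ethdev may
differ):

    #include <stdint.h>
    #include <rte_common.h>

    static void
    check_max_ethports(void)
    {
            /* Port ids are uint16_t and one value is cut from the space,
             * so RTE_MAX_ETHPORTS must be strictly less than UINT16_MAX. */
            RTE_BUILD_BUG_ON(RTE_MAX_ETHPORTS >= UINT16_MAX);
    }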
Fixes: 5b7ba31148 ("ethdev: add port ownership")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Update the release notes with the outer IP hash feature for GTPC and GTPU.
Fixes: 6cd2d6adc7 ("net/iavf: support outer IP hash for GTPC")
Fixes: 262100a34a ("net/iavf: support outer IP hash for no inner GTPU")
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add the PROT field of the IPv4 and IPv6 protocol headers to the RSS hash.
Fixes: 91f27b2e39 ("net/iavf: refactor RSS")
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Coverity flags that the 'rx_conf' variable is used before it is
checked for NULL. This patch fixes the issue.
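A minimal sketch of the corrected pattern (names are illustrative;
the actual ethdev code differs): dereference 'rx_conf' only after the
NULL check.

    #include <stdint.h>
    #include <rte_ethdev.h>

    static uint16_t
    rx_split_nseg(const struct rte_eth_rxconf *rx_conf)
    {
            /* Check for NULL first, then read the buffer split fields. */
            if (rx_conf == NULL)
                    return 0;
            return rx_conf->rx_nseg;
    }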
Coverity issue: 363570
Fixes: 4ff702b5df ("ethdev: introduce Rx buffer split")
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The rte_flow_item_eth and rte_flow_item_vlan items were refined.
The structs no longer exactly represent the packet bits captured on
the wire, so the set raw_encap/decap commands should only copy the
real header instead of the whole struct.
Replace the rte_flow_item_* structs with the existing corresponding
rte_*_hdr structures, as shown in the sketch below.
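A simplified sketch of the corrected copy (the helper name is
hypothetical): only the on-wire rte_ether_hdr and rte_vlan_hdr bytes
are appended to the raw encapsulation buffer, not the whole
rte_flow_item_* structs.

    #include <stdint.h>
    #include <string.h>
    #include <rte_ether.h>

    static size_t
    append_eth_vlan(uint8_t *buf, const struct rte_ether_hdr *eth,
                    const struct rte_vlan_hdr *vlan)
    {
            size_t off = 0;

            memcpy(buf + off, eth, sizeof(*eth));   /* 14-byte Ethernet header */
            off += sizeof(*eth);
            memcpy(buf + off, vlan, sizeof(*vlan)); /* 4-byte VLAN tag */
            off += sizeof(*vlan);
            return off;
    }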
Fixes: 09315fc838 ("ethdev: add VLAN attributes to ethernet and VLAN items")
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The flow destructor tried to access flow-related resources after the
flow object memory was already released, crashing the DPDK process.
The patch moves the flow memory release to the end of the destructor.
Fixes: 4ec6360de3 ("net/mlx5: implement tunnel offload")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The recent Rx code refactoring moved the incrementing of the CQ
completion index out of the rxq_cq_decompress_v() function into the
rxq_burst_v() function.
However, the advancing of the CQ completion index was removed from
the SSE version only, causing the Neon and Altivec Rx bursts to stall.
Remove the incrementing of the CQ completion index for all
architectures in order to fix the stall.
Fixes: 1ded26239a ("net/mlx5: refactor vectorized Rx")
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch fixes a few scenarios that were overlooked by the patch
which added VXLAN rte_flow offload support.
1. When an application queries flow counters, it can ask the PMD to
   reset the counters because the application is doing the counter
   accumulation itself. In this case, the PMD should not accumulate
   but reset the counter.
2. Some applications may set the protocol field in the IPv4 spec but
   not set the mask. So, consider the mask in the proto value
   calculation.
3. The cached tunnel inner flow was not getting installed in the
   context of tunnel outer flow creation because of a wrong error
   code check when the tunnel outer flow is installed in the
   hardware.
4. When a DPDK application offloads the same tunnel inner flow on all
   the uplink ports, the driver rejects all but the first one.
   However, the first tunnel inner flow request might not be on the
   correct physical port. This is fixed by caching the tunnel inner
   flow entry for all the ports on which the flow offload request
   arrived. The tunnel inner flows cached on the irrelevant ports
   will eventually age out as there will not be any traffic on those
   ports.
Fixes: 675e31d877 ("net/bnxt: support VXLAN decap offload")
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
In some instances the link was not coming up if PAM4 signaling is
enabled.
Add a check to disable autoneg if the FW indicates that the auto
speeds are zero. Use default auto speeds if the PAM4 auto speeds are
not set.
Also fix forced link setting.
Fixes: c23f9ded03 ("net/bnxt: support 200G PAM4 link")
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This attribute helps PMDs distinguish actions meant to work at the
so-called hardware e-switch level from regular ones.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
In a flow rule, the attribute "transfer" indicates the operation
level at which both traffic is matched and actions are conducted.
Add the very same attribute to shared action configuration.
If a driver needs to prepare HW resources in two different
ways, depending on the operation level, in order to set up
an action, then this new attribute will indicate the level.
Also, when handling a flow rule insertion, the driver will
be able to turn down a shared action if its level is unfit.
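A hedged usage sketch, assuming the shared action API as defined in
rte_flow.h for this series (the helper name is hypothetical):

    #include <rte_flow.h>

    static struct rte_flow_shared_action *
    create_transfer_action(uint16_t port_id,
                           const struct rte_flow_action *action,
                           struct rte_flow_error *error)
    {
            /* Request the e-switch (transfer) level so the PMD can
             * prepare the matching HW resources for this action. */
            struct rte_flow_shared_action_conf conf = {
                    .transfer = 1,
            };

            return rte_flow_shared_action_create(port_id, &conf,
                                                 action, error);
    }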
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
When the max Rx packet length is smaller than the sum of the MTU size
and the Ethernet overhead size, it should be enlarged, otherwise VLAN
packets will be dropped.
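A minimal sketch of the adjustment (the exact overhead used by each
driver may differ; the constants are from rte_ether.h):

    #include <stdint.h>
    #include <rte_ether.h>

    static uint32_t
    adjust_max_rx_pkt_len(uint32_t max_rx_pkt_len, uint16_t mtu)
    {
            /* Ethernet header + CRC + two VLAN tags as the overhead. */
            uint32_t overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
                                2 * sizeof(struct rte_vlan_hdr);

            if (max_rx_pkt_len < (uint32_t)mtu + overhead)
                    max_rx_pkt_len = (uint32_t)mtu + overhead;
            return max_rx_pkt_len;
    }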
Fixes: 35b2d13fd6 ("net: add rte prefix to ether defines")
Cc: stable@dpdk.org
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Some header files are already included by others, so including them
again is unnecessary. Also, some header files are not self-contained,
which triggers build warnings. As a result, remove the unnecessary
includes and move the remaining ones to the correct location.
Besides, remove some unused lines.
Signed-off-by: Lijun Ou <oulijun@huawei.com>
If the hardware does not support QL (quantity limiter), int_ql_max is
0, and software should confirm that ql_value is less than int_ql_max
before writing the QL register. This patch adds a check of the
int_ql_max value obtained from firmware and deletes the unused
variable coalesce_mode.
Fixes: 27911a6e62 ("net/hns3: add Rx interrupts compatibility")
Cc: stable@dpdk.org
Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
The port-level scheduling rate that the hns3 PF driver configures to
hardware is obtained from firmware and determines the bandwidth
capability of the port. For a network engine supporting multiple
rates, such as 10G and 25G, the rate in firmware is generally
configured with the maximum value. This may cause the following
issues:
1) When a 10G optical module is used on the network engine, the
   scheduling rate of this port is still configured to hardware as
   25G. However, the MAC rate of this port is 10G. In this case, it
   is unreasonable that the port scheduling rate differs from the MAC
   rate.
2) If the default speed in firmware is not the maximum value, the 25G
   port may not reach the capability of the port.
Therefore, fix the configuration of the port-level scheduling rate to
follow updates of the MAC link speed.
Fixes: 59fad0f321 ("net/hns3: support link update operation")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Kunpeng920 supports TSO and checksum offload for VXLAN_GPE with next
protocol id 3 (i.e., Ethernet).
Kunpeng930 supports TSO and checksum offload for VXLAN_GPE with next
protocol ids 1, 2 and 3 (i.e., IPv4, IPv6 and Ethernet).
This patch adds support for this tunnel type.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Currently, the header lengths of all the layers are fixed, which
leads to a checksum error when the header length changes.
This patch fixes the problem by using the header lengths in the mbuf
instead of fixed header lengths to perform the Tx checksum offload.
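A hypothetical helper (not the actual hns3 code) showing the idea:
the L4 payload offset is derived from the per-packet header lengths
carried in the mbuf rather than from fixed constants.

    #include <stdint.h>
    #include <rte_mbuf.h>

    static uint16_t
    tx_l4_offset(const struct rte_mbuf *m)
    {
            uint16_t off = m->l2_len + m->l3_len;

            /* For tunnel packets, the outer headers precede the inner ones. */
            if (m->ol_flags & PKT_TX_TUNNEL_MASK)
                    off += m->outer_l2_len + m->outer_l3_len;
            return off;
    }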
Fixes: bba6366983 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Currently, there are two mistakes in the Tx checksum outer header
preparation.
1) Whether the packet outer header is IPv4 is checked based on
   PKT_TX_IPV4, which is incorrect.
2) For HIP08, the outer UDP checksum cannot be offloaded, so the
   driver should ensure that the outer UDP checksum field is set to 0.
   In the current code, PKT_TX_UDP_CKSUM is used to determine whether
   the outer layer of the packet is a UDP header; for tunnel TSO,
   however, that flag will never be set.
The first mistake is fixed by replacing PKT_TX_IPV4 with
PKT_TX_OUTER_IPV4. And the protocol number in the L3 header is used
to check whether the outer L4 header is UDP.
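An illustrative sketch of the corrected checks (not the actual hns3
code): the outer L3 type comes from PKT_TX_OUTER_IPV4, and the outer
UDP header is detected from the protocol field of the outer IPv4
header.

    #include <netinet/in.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>

    static int
    outer_udp_present(struct rte_mbuf *m)
    {
            const struct rte_ipv4_hdr *ipv4;

            if (!(m->ol_flags & PKT_TX_OUTER_IPV4))
                    return 0;
            ipv4 = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
                                           m->outer_l2_len);
            return ipv4->next_proto_id == IPPROTO_UDP;
    }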
Fixes: bba6366983 ("net/hns3: support Rx/Tx and related operations")
Fixes: 6dca716c9e ("net/hns3: support TSO")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
For Kunpeng920, both Tx and Rx promiscuous modes are set when
promiscuous mode is enabled. In other words, all the ingress packets
and the packets sent from the PF and other VFs on the same physical
port are copied to the function that enabled promiscuous mode.
Kunpeng930 supports turning off the Tx unicast promiscuous mode. A
limited promiscuous mode is introduced, which turns off the Tx
unicast promiscuous mode when promiscuous mode is set.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
For SCTP checksum offload, the PMD does not parse the payload offset
info, which may cause the hardware to fail to calculate the SCTP
checksum.
Fixes: 8c8b61234f ("net/hinic: refactor checksum functions")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
This patch fixes an outer_l3_len parse error when
PKT_TX_OUTER_IP_CKSUM is not set. It does not affect the checksum
function; it just makes the code consistent with the mbuf meta
information description.
The outer_l3_len is calculated wrongly because 'vlan_hdr' is
calculated wrongly; 'vlan_hdr' is fixed and the code refactored.
Fixes: 8c8b61234f ("net/hinic: refactor checksum functions")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
The hyperlink in the IGC documentation showed the whole link in italics
and was not clickable. This is now fixed to have a clickable label.
Fixes: 66fde1b943 ("net/igc: add skeleton")
Cc: stable@dpdk.org
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
When the number of forwarding cores is changed at runtime, the
following issue may be encountered:
If nbcore is set smaller than the current nbcore, the forwarding
threads will still be running on the extra cores. Therefore, trying
to stop forwarding will hang testpmd, since it will wait for the
extra cores to stop.
So do not allow the nbcore number to be changed while forwarding is
running.
Fixes: 0c0db76f42 ("app/testpmd: separate forward config setup from display")
Cc: stable@dpdk.org
Signed-off-by: Zhenghua Zhou <zhenghuax.zhou@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
In rte_fpga_do_pr(), calling read() may taint the argument buffer,
which then becomes an untrusted value passed as an argument to
rte_free().
Coverity issue: 279449
Fixes: ef1e8ede3d ("raw/ifpga: add Intel FPGA bus rawdev driver")
Cc: stable@dpdk.org
Signed-off-by: Wei Huang <wei.huang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
readlink() does not terminate the string; add a null character at the
end of the string if readlink() succeeds.
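The usual pattern looks like this (a generic sketch, not the exact
ifpga code): reserve one byte for the terminator and add it only when
readlink() succeeds.

    #include <unistd.h>

    static int
    read_link_path(const char *link, char *buf, size_t buf_len)
    {
            ssize_t n = readlink(link, buf, buf_len - 1);

            if (n < 0)
                    return -1;
            buf[n] = '\0';  /* readlink() does not NUL-terminate */
            return 0;
    }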
Coverity issue: 362820
Fixes: 9c006c45d0 ("raw/ifpga: scan PCIe BDF device tree")
Cc: stable@dpdk.org
Signed-off-by: Wei Huang <wei.huang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When an RSS rule uses a symmetric hash function, the RSS type
shouldn't carry the L3/L4 SRC_ONLY/DST_ONLY bits. This patch adds an
invalid RSS type check for this case.
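A simplified sketch of such a check (the ETH_RSS_* macros are from
rte_ethdev.h; the helper itself is hypothetical):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_ethdev.h>

    static int
    validate_symmetric_rss(uint64_t rss_types, bool symmetric)
    {
            const uint64_t src_dst_only =
                    ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
                    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY;

            /* SRC/DST_ONLY selectors make no sense for a symmetric hash. */
            if (symmetric && (rss_types & src_dst_only))
                    return -EINVAL;
            return 0;
    }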
Fixes: 91f27b2e39 ("net/iavf: refactor RSS")
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Ptypes for GTPU with inner SCTP are not supported in the current DDP
package. Thus, delete them from the default hash set config function.
Also clean up the RSS VSI when calling the hash set config function.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The ASO age action mode is not supported in group 0, while the
counter-based age action mode supports group 0.
Allow using the two age action modes in parallel, so group 0 flows
will use counter-based age actions and group > 0 flows will use ASO
age actions.
Currently, the counter-based age action doesn't support the shared
action API, so group 0 flows cannot share age actions.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
Add support for the rte_flow shared action API for the ASO age
action.
As a first step, support validate, create, query and destroy.
The support is only for the ASO age mode.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
The RSS shared action was saved in the flow memory as a pointer,
which means every flow handle includes 8B just for the optional
shared RSS case.
Move the RSS objects to an indexed pool, which reduces the flow
handle memory to 4B.
So, now, the shared action handle is also just a 4B index.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
A new ASO (Advanced Steering Operation) feature was added in the
latest mlx5 adapters to support flow hit detection.
Using this new steering action, the driver can detect a flow traffic
hit and reset this indication at any time.
The ASO age action cannot support flows in table 0.
Add support for the flow aging action in rte_flow using this new
feature.
The counter aging mode is used only when the ASO feature is not
supported for the user flow groups.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
This patch adds different PRM definitions, related to ASO flow hit
feature, in MLX5 PMD code.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add glue function to create the flow hit action using DV API,
if rdma-core support exists.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Read and store the device capability of FLOW_HIT_ASO general object,
using the DevX API.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
PRM defines the general object types using positive numbers.
The same values are used as index for the relevant bit in HCA
capabilities general_obj_types bit mask.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
CQE compression allows us to save PCI bandwidth and improve
performance by compressing several CQEs together into a miniCQE.
But the miniCQE size is only 8 bytes and this limits the ability
to successfully keep the compression session in case of various
traffic patterns.
The current miniCQE format only keeps the compression session alive
in case of uniform traffic with the Hash RSS as the only difference.
There are requests to keep the compression session in case of tagged
traffic by RTE Flow Mark Id and mixed UDP/TCP and IPv4/IPv6 traffic.
Add 2 new miniCQE formats in order to achieve the best performance
for these traffic patterns: Flow Tag and Packet Header miniCQEs.
The existing rxq_cqe_comp_en devarg is modified to specify the
desired miniCQE format. Specifying 2 selects Flow Tag format
for better compression rate in case of RTE Flow Mark traffic.
Specifying 3 selects Checksum format (existing format for MPRQ).
Specifying 4 selects L3/L4 Header format for better compression
rate in case of mixed TCP/UDP and IPv4/IPv6 traffic.
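For example, the Flow Tag miniCQE format could be selected at probe
time via the devarg (the PCI address here is just a placeholder):

    dpdk-testpmd -a 0000:03:00.0,rxq_cqe_comp_en=2 -- -i

Specifying 3 or 4 in the same way selects the Checksum or L3/L4
Header format, respectively.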
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The glue libraries are tightly bound to the mlx drivers of a dpdk
version and are packaged with them.
Keeping a separate ABI version prevents us from installing two versions
of dpdk.
Maintaining this separate version just adds confusion.
Align the glue library ABI version to the global ABI version.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This commit replaces mlx5_malloc and mlx5_free calls with Linux calls
malloc and free in file mlx5_glue.c.
The current mlx5_malloc calls have no flags, alignment or socket
selection, so they are equivalent to calling malloc. Rdma-core itself
is using malloc. When using mlx5_malloc the glue library is dependent
on common_mlx5 library which must be compiled first. Not doing so and
in case ibverbs_link=dlopen will result in compilation failure:
mlx5_glue.c: undefined reference to `mlx5_malloc'.
To make all of this simpler and remove the common_mlx5 dependency - this
commit does the alloc/free replacements.
Fixes: 66914d19d1 ("common/mlx5: convert control path memory to unified malloc")
Cc: stable@dpdk.org
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Fix the wrong assignment of allow_multi_pkt_send_wqe in
mlx5_devx_cmd_create_sq.
The incorrect assignment was introduced in the initial
mlx5_devx_cmd_create_sq implementation: sq_attr->flush_in_error_en
was mistakenly assigned to both allow_multi_pkt_send_wqe and
flush_in_error_en. This was detected during Windows PMD development.
The fix simply assigns the right value in mlx5_devx_cmd_create_sq to
sq_attr->allow_multi_pkt_send_wqe.
Fixes: ae18a1ae96 ("net/mlx5: support Tx hairpin queues")
Cc: stable@dpdk.org
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The MLX5 glue library wasn't following the standard
'librte_<class>_<name>.so' naming.
Fixes: a20b2c01a7 ("build: standardize component names and defines")
Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The MLX4 library wasn't being successfully initialized with
-Dibverbs_link=dlopen because it expected a shared object file
with a different name.
Fixes: a20b2c01a7 ("build: standardize component names and defines")
Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Pass the 'eth_da' pointer instead of passing it by value to bnxt_rep_port_probe().
Coverity issue: 360841
Fixes: 322bd6e702 ("net/bnxt: add port representor infrastructure")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
When receiving packets, netvsc puts data in a buffer mapped through
UIO. Depending on the packet size, netvsc may attach the buffer as an
external mbuf. This is not a problem if this mbuf is consumed in the
application and the application can correctly read data out of an
external mbuf.
However, there are two problems with data in an external mbuf.
1. Due to a limitation of the kernel UIO implementation, the physical
   address of this external buffer is not exposed to user mode. If
   this mbuf is passed to another driver, that driver is unable to
   map this buffer to an IOVA.
2. Some DPDK applications are not aware of external mbufs and may
   misbehave when they receive an mbuf with an external buffer
   attached.
Introduce a driver parameter "rx_extmbuf_enable" to control whether
netvsc should use external mbufs for receiving packets. The default
value is 0 (netvsc doesn't use external mbufs; it always allocates an
mbuf and copies data into it). A non-zero value tells netvsc to
attach external buffers to mbufs on receiving packets, thus avoiding
memory copies.
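As a usage sketch (the device identifier is a placeholder; the actual
VMBus device address depends on the system), external mbufs could be
enabled by appending the devarg to the device arguments:

    -a <vmbus device id>,rx_extmbuf_enable=1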
Signed-off-by: Long Li <longli@microsoft.com>