Add support for the rte_flow shared action API for the ASO age action.
The first step here is to support validate, create, query and destroy.
The support is only for the ASO age mode.
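A minimal usage sketch of this path, assuming a valid port_id and with
error handling trimmed:

    struct rte_flow_error err;
    struct rte_flow_shared_action_conf conf = { .ingress = 1 };
    struct rte_flow_action_age age = { .timeout = 10 /* seconds */ };
    struct rte_flow_action action = {
        .type = RTE_FLOW_ACTION_TYPE_AGE,
        .conf = &age,
    };
    struct rte_flow_shared_action *handle;
    struct rte_flow_query_age resp;

    handle = rte_flow_shared_action_create(port_id, &conf, &action, &err);
    if (handle != NULL) {
        rte_flow_shared_action_query(port_id, handle, &resp, &err);
        rte_flow_shared_action_destroy(port_id, handle, &err);
    }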
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
The RSS shared action was saved in the flow memory as a pointer.
It means that every flow's memory includes 8B just for the optional
shared RSS case.
Move the RSS objects to be managed by an indexed pool, which reduces the
flow handle memory to 4B.
So, now, the shared action handle is also just a 4B index.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
A new ASO (Advanced Steering Operation) feature was added in the latest
mlx5 adapters to support flow hit detection.
Using this new steering action, the driver can detect a traffic hit on a
flow and reset this indication at any time.
The ASO age action cannot support flows in table 0.
Add support for flow aging action in rte_flow using this new feature.
The counter aging mode is used only when the ASO feature is not
supported for the user flow groups.
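As an illustration (a generic sketch, not code from this patch), an
application attaches the age action to a flow and later polls for the
aged-out contexts, assuming port_id, a flow pattern and a zeroed
struct rte_flow_error err:

    void *flow_ctx = NULL; /* application cookie reported back when aged */
    struct rte_flow_action_age age = { .timeout = 30, .context = flow_ctx };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_AGE, .conf = &age },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    /* ... rte_flow_create() with these actions, then periodically: */
    void *aged[64];
    int n = rte_flow_get_aged_flows(port_id, aged, 64, &err);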
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
This patch adds various PRM definitions, related to the ASO flow hit
feature, to the MLX5 PMD code.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add glue function to create the flow hit action using DV API,
if rdma-core support exists.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Read and store the device capability of FLOW_HIT_ASO general object,
using the DevX API.
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
PRM defines the general object types using positive numbers.
The same values are used as the index of the relevant bit in the HCA
capabilities general_obj_types bit mask.
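For illustration (the object type value below is a placeholder, not the
real PRM number; hca_attr stands for the parsed HCA capabilities):

    /* The PRM object type value doubles as the bit index in the mask. */
    #define OBJ_TYPE_EXAMPLE      0x25          /* placeholder value */
    #define OBJ_TYPE_EXAMPLE_CAP  (1ULL << OBJ_TYPE_EXAMPLE)

    if (hca_attr.general_obj_types & OBJ_TYPE_EXAMPLE_CAP)
        /* the general object type is supported */;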
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
CQE compression saves PCI bandwidth and improves performance by
compressing several CQEs together into miniCQEs.
But the miniCQE size is only 8 bytes, and this limits the ability
to successfully keep the compression session alive in case of various
traffic patterns.
The current miniCQE format only keeps the compression session alive
in case of uniform traffic with the RSS hash as the only difference.
There are requests to keep the compression session in case of tagged
traffic by RTE Flow Mark Id and mixed UDP/TCP and IPv4/IPv6 traffic.
Add 2 new miniCQE formats in order to achieve the best performance
for these traffic patterns: Flow Tag and Packet Header miniCQEs.
The existing rxq_cqe_comp_en devarg is modified to specify the
desired miniCQE format. Specifying 2 selects Flow Tag format
for better compression rate in case of RTE Flow Mark traffic.
Specifying 3 selects Checksum format (existing format for MPRQ).
Specifying 4 selects L3/L4 Header format for better compression
rate in case of mixed TCP/UDP and IPv4/IPv6 traffic.
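For example, to request the Flow Tag miniCQE format (the PCI address is
only an illustration):

    dpdk-testpmd -a 0000:03:00.0,rxq_cqe_comp_en=2 -- -i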
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The glue libraries are tightly bound to the mlx drivers of a dpdk
version and are packaged with them.
Keeping a separate ABI version prevents us from installing two versions
of dpdk.
Maintaining this separate version just adds confusion.
Align the glue library ABI version to the global ABI version.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This commit replaces the mlx5_malloc and mlx5_free calls with the
standard malloc and free calls in file mlx5_glue.c.
The current mlx5_malloc calls use no flags, alignment or socket
selection, so they are equivalent to calling malloc. rdma-core itself
uses malloc. When using mlx5_malloc, the glue library depends on the
common_mlx5 library, which must be compiled first. Not doing so, in
case ibverbs_link=dlopen, results in a compilation failure:
mlx5_glue.c: undefined reference to `mlx5_malloc'.
To make all of this simpler and remove the common_mlx5 dependency, this
commit does the alloc/free replacements.
Fixes: 66914d19d1 ("common/mlx5: convert control path memory to unified malloc")
Cc: stable@dpdk.org
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Fix the wrong assignment of allow_multi_pkt_send_wqe
in mlx5_devx_cmd_create_sq.
The incorrect assignment was introduced in the initial
mlx5_devx_cmd_create_sq implementation.
sq_attr->flush_in_error_en was mistakenly assigned to both
allow_multi_pkt_send_wqe and flush_in_error_en; this was detected
during Windows PMD development.
The fix is simply assigning sq_attr->allow_multi_pkt_send_wqe to the
right field in mlx5_devx_cmd_create_sq.
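Roughly, the change in mlx5_devx_cmd_create_sq is (simplified sketch):

    /* before (wrong): the flush_in_error_en value filled both fields */
    MLX5_SET(sqc, sq_ctx, allow_multi_pkt_send_wqe,
             sq_attr->flush_in_error_en);
    /* after (fixed): use the dedicated attribute */
    MLX5_SET(sqc, sq_ctx, allow_multi_pkt_send_wqe,
             sq_attr->allow_multi_pkt_send_wqe);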
Fixes: ae18a1ae96 ("net/mlx5: support Tx hairpin queues")
Cc: stable@dpdk.org
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The MLX5 glue library wasn't following the standard
'librte_<class>_<name>.so' naming.
Fixes: a20b2c01a7 ("build: standardize component names and defines")
Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The MLX4 library wasn't being successfully initialized with
-Dibverbs_link=dlopen because it expected a shared object file
with a different name.
Fixes: a20b2c01a7 ("build: standardize component names and defines")
Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Pass the 'eth_da' pointer instead of passing by value to bnxt_rep_port_probe().
Coverity issue: 360841
Fixes: 322bd6e702 ("net/bnxt: add port representor infrastructure")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
When receiving packets, netvsc puts data in a buffer mapped through UIO.
Depending on packet size, netvsc may attach the buffer as an external
mbuf. This is not a problem if this mbuf is consumed in the application,
and the application can correctly read data out of an external mbuf.
However, there are two problems with data in an external mbuf.
1. Due to the limitation of the kernel UIO implementation, physical
address of this external buffer is not exposed to the user-mode. If
this mbuf is passed to another driver, the other driver is unable to
map this buffer to iova.
2. Some DPDK applications are not aware of external mbufs, and may
misbehave when they receive an mbuf with an external buffer attached.
Introduce a driver parameter "rx_extmbuf_enable" to control whether
netvsc should use external mbufs for receiving packets. The default
value is 0 (netvsc doesn't use external mbufs; it always allocates an
mbuf and copies data into it). A non-zero value tells netvsc to attach
external buffers to mbufs on receiving packets, thus avoiding the
memory copy.
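Example usage (the VMBus device UUID is a placeholder):

    dpdk-testpmd -a <vmbus device UUID>,rx_extmbuf_enable=1 -- -i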
Signed-off-by: Long Li <longli@microsoft.com>
The values for Rx and Tx copy break should be tunable rather
than hard coded constants.
The rx_copybreak value sets the threshold where the driver uses an
external mbuf to avoid having to copy data. Setting copybreak to 0
causes the driver to always create an external mbuf. Setting
a value greater than the MTU prevents it from ever making
an external mbuf, so data is always copied. The default value is 256 (bytes).
Likewise the tx_copybreak sets the threshold where the driver
aggregates multiple small packets into one request. If tx_copybreak
is 0 then each packet goes as a VMBus request (no copying).
If tx_copybreak is set larger than the MTU, then all packets smaller
than the chunk size of the VMBus send buffer will be copied; larger
packets always have to go as a single direct request. The default
value is 512 (bytes).
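Example usage setting both thresholds explicitly (the VMBus device UUID
is a placeholder):

    dpdk-testpmd -a <vmbus device UUID>,rx_copybreak=256,tx_copybreak=512 -- -i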
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Long Li <longli@microsoft.com>
Add support for merging a base steering rule with
all flow rules created on a VF.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
In nicvf_qset_rbdr_alloc(), memory is allocated for the 'rbdr' structure
but is not released when the allocation of the 'rbdr desc ring' fails.
Fixes: 7413feee66 ("net/thunderx: add device start/stop and close")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Version 2.2.0 adds support for network interface metrics, includes some
bug fixes, and updates the HAL to the latest version.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
The ARMv8 platform support was tested and works fine with the ENA PMD.
It can be used on the AWS a1.* and m6g.* instances.
The ARMv8 support in ENA has been available since at least v19.11, where
the VFIO DPDK driver was fixed to work with 32-bit applications compiled
for arm.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
The latest generation HW requires IO completion queue descriptors to be
aligned to 4K in order to achieve the best performance.
Because of that, new allocation macros were added, which allow the
driver to allocate memory with a specified alignment.
The previous allocation macros are now wrappers around the macros
doing the alignment, with the alignment value equal to the cache line size.
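A minimal sketch of the idea (not the exact ENA HAL macros): reserve the
IO CQ descriptor area with an explicit 4K alignment instead of relying on
the cache line size.

    #include <rte_memzone.h>

    size_t size = 16 * 1024; /* illustrative descriptor area size */
    const struct rte_memzone *mz;

    mz = rte_memzone_reserve_aligned("ena_io_cq_sketch", size,
                                     SOCKET_ID_ANY,
                                     RTE_MEMZONE_IOVA_CONTIG, 4096);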
Fixes: b68309be44 ("net/ena/base: update communication layer for the ENAv2")
Cc: stable@dpdk.org
Signed-off-by: Ido Segev <idose@amazon.com>
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
The ID 0xEC21 is not associated with the LLQ feature of the device, so
the old name would be misleading for the user. Because of that, the
current identifier is more precise.
Together with the code update, the documentation was changed to reflect
these changes.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
The driver was never setting the PKT_RX_*_CKSUM_GOOD flags, so the only
way of checking whether the checksum was verified was by testing for
PKT_RX_*_CKSUM_BAD. In that situation, the application couldn't detect
if the checksum was valid or unknown, as the unknown flag is equal to 0.
Moreover, the l3_csum_err value is only valid if the l3_proto is
indicating IPv4, so it shouldn't be checked for other protocols.
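A simplified sketch of the intended Rx flag logic (not the exact driver
code; the lowercase variables stand for the Rx descriptor fields):

    if (l3_proto_is_ipv4)
        ol_flags |= l3_csum_err ? PKT_RX_IP_CKSUM_BAD :
                                  PKT_RX_IP_CKSUM_GOOD;
    if (l4_csum_checked)
        ol_flags |= l4_csum_err ? PKT_RX_L4_CKSUM_BAD :
                                  PKT_RX_L4_CKSUM_GOOD;
    else
        ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN; /* equal to 0 */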
Fixes: 1173fca25a ("ena: add polling-mode driver")
Cc: stable@dpdk.org
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
There was a bug in the code, which was reading the stat_offset value from
the ena_stats_rx_strings array instead of ena_stats_global_strings.
It wasn't causing real problems just because ena_stats_rx_strings was
not smaller than ena_stats_global_strings and both arrays hold the same
offsets.
Fixes: 7830e905b7 ("net/ena: expose extended stats")
Cc: stable@dpdk.org
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
Several functions use sizeof(struct rte_flow_item_eth) and
sizeof(struct rte_flow_item_ipv6) when copying headers. These sizes
used to coincide with the sizes of rte_ether_hdr and
rte_ipv6_hdr. But, with recently added fields, rte_flow_item_eth and
rte_flow_item_ipv6 have grown in size. Use sizeof(rte_ether_hdr) and
sizeof(rte_ipv6_hdr) instead.
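Illustrative form of the change (simplified; variable names are
assumptions):

    /* copy only the network header, not the grown flow item struct */
    memcpy(eth_mask, item->mask, sizeof(struct rte_ether_hdr));
    /* previously: sizeof(struct rte_flow_item_eth) */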
Coverity issue: 363572, 363573
Fixes: ea7768b5bb ("net/enic: add flow implementation based on Flow Manager API")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The macro defined for the ARM SVE vector implementation is
__ARM_FEATURE_SVE, and the RTE_MACHINE_CPUFLAG macros have been
replaced by regular compiler macros.
Besides, remove the unused macro RTE_LIBRTE_HNS3_INC_VECTOR_SVE.
Fixes: 952ebacce4 ("net/hns3: support SVE Rx")
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Currently, hns3vf_reinit_dev only judges whether the return value of the
function that sets the PCI bus is not 0, while that function returns a
negative value when it fails.
Fixes: 243651cb6c ("net/hns3: check PCI config space reads")
Cc: stable@dpdk.org
Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Currently, the Rx HW ring is not cleared after queue stop.
When there are packets remaining in the HW rings and the
queues have been stopped, if an upper layer user calls the
rx_burst function at this time, an illegal memory access
will occur because the SW rings have been released.
This patch fixes this by resetting the SW ring after the
queue is disabled.
Fixes: fa29fe45a7 ("net/hns3: support queue start and stop")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Currently, a u8 type variable is used to control releasing the fake queues
in the hns3_fake_rx/tx_queue_config functions. Although there is no case in
which more than 256 fake queues are created in the hns3 network engine, it
is unreasonable to compare a u8 variable with a u16 variable.
Fixes: a951c1ed3a ("net/hns3: support different numbers of Rx and Tx queues")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
There are Coverity defects related to "calling
hns3_reset_all_tqps without checking return value
in hns3_do_start".
This patch fixes the warning by adding a "(void)" cast,
because this is exception handling: hns3_reset_all_tqps
prints the corresponding error message if it fails,
so it is not necessary to check the hns3_reset_all_tqps
return value here; keep 'ret' as the error code that
caused the exception.
Coverity issue: 363048
Fixes: fa29fe45a7 ("net/hns3: support queue start and stop")
Cc: stable@dpdk.org
Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Currently, hns3 supports recognizing a lot of ptypes, but most
tunnel packet types are not reported to the
rte_eth_dev_get_supported_ptypes API.
There are also some errors in L2 and L3 packet recognition. ARP and
LLDP packets are classified to the L3 field in the Rx descriptor, so
the ptype of LLDP and ARP packets will be set twice. And since ptypes
are assigned by bitwise OR, this will eventually cause the ptype
result to be incorrect.
Besides, when a packet has only an L2 header, its ptype is not
reported by the hns3 PMD. This is because the L2/L3 ptype table is not
initialized properly. In this case, the table query result is 0
by default.
This patch fixes the missing supported ptypes, the mistakes in
L2/L3 packet recognition, and the unreported L2 packet ptype by
reporting the L2 type when the L3 type is unrecognized.
Fixes: bba6366983 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Currently, the driver uses the maximum number of queues configured by the
user as the maximum queue id that can be specified by an RSS rule or the
reta_update API. It is unreasonable and may trigger incorrect
behavior in the multi-TC scenario. The driver must ensure that the queue
id configured in the redirection table is within the range of the
number of queues allocated to a TC.
Fixes: c37ca66f2b ("net/hns3: support RSS")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Implement the function counting the available and used Rx descriptor numbers.
In the Kunpeng series, the NIC hardware supports reading the number of BDs
waiting to be processed from the hardware FBD (Full Buffer Descriptor)
register, and the driver maintains the number of BDs to be written back to
hardware. The used count combines the number of FBDs with the number of BDs
to be written back to the hardware.
The number of used descriptors of an Rx queue is computed as follows:
the FBD number read from the FBD register plus the BD number, maintained by
the driver, that is still to be written back to hardware.
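A minimal sketch of the computation (struct and field names are
assumptions, not the exact hns3 code):

    struct rxq_sketch {
        uint32_t bd_pending; /* BDs processed, not yet written back */
    };

    static uint32_t
    rxq_used_desc_count(const struct rxq_sketch *rxq, uint32_t fbd_from_reg)
    {
        /* FBDs reported by hardware plus BDs held by the driver */
        return fbd_from_reg + rxq->bd_pending;
    }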
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Enable metadata extraction for flexible descriptors in AVF, which allows
a network function to get metadata directly without additional parsing,
reducing the CPU cost for VFs. The enabled metadata
extraction covers the metadata of the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS
flexible descriptors, and the VF can negotiate the capability of
the flexible descriptor with the PF and correspondingly configure the
specific offload at receive queues.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Rename the dynamic mbuf field to the 'intel_pmd_xxx' format, so that the
Intel PMDs which have the protocol extraction feature will share the
same dynamic field/flag space in the mbuf.
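Applications locate such a dynamic field at runtime by its registered
name; a sketch (the field name and the mbuf 'm' are illustrative):

    #include <rte_mbuf_dyn.h>

    int offset = rte_mbuf_dynfield_lookup(
            "intel_pmd_dynfield_proto_xtr_metadata", NULL);
    if (offset >= 0) {
        /* read the extracted metadata from a received mbuf */
        uint32_t meta = *RTE_MBUF_DYNFIELD(m, offset, uint32_t *);
    }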
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The file "rte_pmd_mlx5.h" is used to provide mlx5 PMD specific APIs
and it needs to be included in the document generation.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Reviewed-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ori Kam <orika@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The multiplication of two u32 integers may cause an overflow with large
mempool sizes.
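Illustrative fix pattern (names are assumptions, not the exact af_xdp
code): widen one operand so the product is computed in 64 bits.

    uint64_t umem_len = (uint64_t)nb_bufs * frame_size; /* was u32 * u32 */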
Fixes: 74b46340e2 ("net/af_xdp: support shared UMEM")
Signed-off-by: Martin Weiser <martin.weiser@allegro-packets.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
In the current implementation of eCPRI flow item parsing in the CLI,
the token items in the list are not connected properly.
A command containing "rtc_ctrl rtc_id spec 14857 rtc_id mask 0xff00"
will be considered invalid. In order to support spec with mask, the
common entry needs to be typed twice and the whole command becomes
too long.
By changing the token lists, spec with mask can be supported without
going back to the entry of the item.
Fixes: 17d103cc93 ("app/testpmd: add eCPRI in flow creation patterns")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Updated description of rte_eth_rx_burst() to reflect what drivers,
when using vector instructions, expect from nb_pkts.
Also discussed on the mailing list here:
http://inbox.dpdk.org/dev/98CBD80474FA8B44BF855DF32C47DC35C61257@smartserver.smartshare.dk/
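A generic usage sketch (port_id and queue_id assumed valid): requesting a
full burst comfortably satisfies the vectorized Rx paths.

    #define MAX_PKT_BURST 32
    struct rte_mbuf *pkts[MAX_PKT_BURST];
    uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts,
                                      MAX_PKT_BURST);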
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Intel SSE has __m128i, but ARMv8 has __uint128_t. So, add a compat
efsys_uint128_t type to be used in the driver source and have either
__m128i or __uint128_t behind it.
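A sketch of the compat type (the guard condition is illustrative):

    #if defined(__x86_64__) || defined(__i386__)
    #include <emmintrin.h>
    typedef __m128i efsys_uint128_t;
    #else
    typedef __uint128_t efsys_uint128_t;
    #endif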
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
The Kunpeng930 NIC hardware supports using the dst/src port in the
RSS hash for the ipv6-sctp packet type. However, the Kunpeng920 NIC
hardware is different: it only supports using the
dst/src IP in the RSS hash for the ipv6-sctp packet type.
Signed-off-by: Lijun Ou <oulijun@huawei.com>
I am a new hns3 PMD developer and reviewer for upstreaming the hns3
PMD driver, so I want to help out here as well.
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Acked-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
The net/ixgbe driver is the only user of the struct rte_eth_l2_tunnel_conf.
Move it to the driver and use the ixgbe_ prefix instead of rte_eth_.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>