In accordance with patches [1-4], the MAE admin ethdev represents a
network port and not the PF it sits on. Rework how the "ethdev" and
"entity" m-ports are assigned in the SW switch port entries of
independent ethdevs. Explain this in comments.
[1] commit 081e42dab1 ("ethdev: add port representor item to flow API")
[2] commit 49863ae2bf ("ethdev: add represented port item to flow API")
[3] commit 8edb6bc026 ("ethdev: add port representor action to flow API")
[4] commit 88caad251c ("ethdev: add represented port action to flow API")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The function in question has an unfortunate name that reads as if it
finds a SW switch port entry. In fact, it retrieves just one of the
two m-ports from that entry.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
VF representors do not own dedicated m-ports and thus cannot
be referred to as traffic endpoints in flow items or actions.
Fixes: a62ec90522 ("net/sfc: add port representors infrastructure")
Fixes: f55b61cec9 ("net/sfc: support port representor flow item")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The MLX5 PMD uses reference counting to manage RX queue resources.
After port stop, shared RSS actions kept references to RX queues,
preventing resource release. As a result, the internal PMD mempool
for such queues was exhausted after a number of port restarts.
Diagnostic message from rte_eth_dev_start():
Rx queue allocation failed: Cannot allocate memory
Release the references to RX queues used by indirect actions on port
stop (detach) and restore them on port start (attach) in order to
allow RX queue resource release, while keeping indirect RSS across
the port restart. Replace the queue IDs in HW with the drop queue ID
on detach and restore the actual queue IDs on attach.
While the port is stopped, create new indirect RSS directly in the
detached state.
As a result, the MLX5 PMD is able to keep all its indirect actions
across port restart. Advertise this capability.
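From the application's point of view, the resulting behavior can be
sketched as follows (a hedged example; port_id, the queues and the RSS
configuration are placeholders):

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static void
    indirect_rss_across_restart(uint16_t port_id)
    {
        uint16_t queues[2] = { 0, 1 };
        struct rte_flow_action_rss rss_conf = {
            .types = RTE_ETH_RSS_IP,
            .queue_num = 2,
            .queue = queues,
        };
        struct rte_flow_action action = {
            .type = RTE_FLOW_ACTION_TYPE_RSS,
            .conf = &rss_conf,
        };
        struct rte_flow_indir_action_conf indir_conf = { .ingress = 1 };
        struct rte_flow_error error;

        /* Create an indirect (shared) RSS action while the port runs. */
        struct rte_flow_action_handle *handle =
            rte_flow_action_handle_create(port_id, &indir_conf,
                                          &action, &error);

        /* Restart the port: the handle stays valid; RX queue references
         * are dropped on stop and re-attached on start. */
        rte_eth_dev_stop(port_id);
        rte_eth_dev_start(port_id);
        (void)handle;
    }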
Fixes: 4b61b8774b ("ethdev: introduce indirect flow action")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Drop queue creation and destruction were not implemented for the DevX
flow engine; the Verbs engine methods were used as a workaround.
Implement these methods for DevX so that there is a valid queue ID
that can be used regardless of the queue configuration via the API.
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The maximum available flow priority was discovered using the Verbs API
regardless of the selected flow engine. This required some Verbs
objects to be initialized in order to use the DevX engine. Make
priority discovery an engine method and implement it for DevX using
its API.
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When the RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP capability bit is zero,
the specified behavior is the same as it was before this bit was
introduced. Explicitly reset it in all PMDs supporting the rte_flow
API in order to attract the attention of maintainers, who should
eventually choose whether to advertise the new capability. It is
already known that mlx4 and mlx5 will not support it.
For RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP no similar action is
performed, because no PMD except mlx5 supports indirect actions.
Any PMD that starts doing so will have to consider all relevant API
anyway, including this capability.
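In a PMD, the reset typically boils down to one line in the
dev_infos_get callback, along these lines (a sketch, not taken from
any particular driver):

    #include <ethdev_driver.h>

    static int
    xyz_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
                      struct rte_eth_dev_info *dev_info)
    {
        /* Behavior across restart is unspecified:
         * do not advertise keeping flow rules. */
        dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
        return 0;
    }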
Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
rte_flow_action_handle_create() did not mention what happens
with an indirect action when a device is stopped and started again.
It is natural for some indirect actions, like counter, to be persistent.
Keeping others at least saves application time and complexity.
However, not all PMDs can support it, or the support may be limited
by particular action kinds, that is, combinations of action type
and the value of the transfer bit in its configuration.
Add a device capability to indicate if at least some indirect actions
are kept across the above sequence. Without this capability the
behavior is still unspecified, and the application is required to
destroy the indirect actions before stopping the device.
In the future, indirect actions may not be the only type of objects
shared between flow rules. The capability bit intends to cover all
possible types of such objects, hence its name.
Declare that the application can test for the persistence
of a particular indirect action kind by attempting to create
an indirect action of that kind when the device is stopped
and checking for the specific error type.
This is logical because if the PMD can create an indirect action
when the device is not started and use it after the start happens,
it is natural that it can move its internal flow shared object
to the same state when the device is stopped and restore the state
when the device is started.
Indirect action persistence across reconfigurations is not required.
In case a PMD cannot keep the indirect actions across reconfiguration,
it is allowed to simply report an error. The application must then
flush the indirect actions before attempting the reconfiguration.
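A hedged sketch of the probing approach described above (the COUNT
action is only an example of an indirect action kind; which error
indicates lack of persistence is defined by the API documentation):

    #include <rte_flow.h>

    /* Probe, on a stopped port, whether indirect actions of a given
     * kind (here: COUNT, ingress) are kept across device restart. */
    static int
    probe_indirect_action_kept(uint16_t port_id)
    {
        struct rte_flow_indir_action_conf conf = { .ingress = 1 };
        struct rte_flow_action action = {
            .type = RTE_FLOW_ACTION_TYPE_COUNT,
        };
        struct rte_flow_error error;
        struct rte_flow_action_handle *handle;

        handle = rte_flow_action_handle_create(port_id, &conf,
                                               &action, &error);
        if (handle == NULL)
            return 0; /* not kept: destroy such actions before stop */
        /* Kept across restart; drop the probe handle. */
        rte_flow_action_handle_destroy(port_id, handle, &error);
        return 1;
    }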
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Previously, it was not specified what happens to the flow rules
when the device is stopped, possibly reconfigured, then started.
If flow rules were kept, it could be convenient for application
developers, because they wouldn't need to save and restore them.
However, due to the number of flows and the possible creation rate,
it is impractical to save all flow rules in the DPDK layer. This means
that flow rule persistence really depends on whether the PMD and HW
can implement it efficiently. It can also be limited by the rule item
and action types, and by the transfer bit of the rule attributes
(a combination of an item/action type and a transfer bit value is
called a rule feature).
Add a device capability bit for PMDs that can keep at least some
of the flow rules across restart. Without this capability, the
behavior is still unspecified, and it is declared that the application
must flush the rules before stopping the device.
Allow the application to test for persistence of rules using
a particular feature by attempting to create a flow rule
using that feature when the device is stopped
and checking for the specific error.
This is logical because if the PMD can create the flow rule
when the device is not started and use it after the start happens,
it is natural that it can move its internal flow rule object
to the same state when the device is stopped and restore the state
when the device is started.
Rule persistence across reconfigurations is not required, because
tracking all the rules and the configuration-dependent resources they
use may be infeasible. In case a PMD cannot keep the rules across
reconfiguration, it is allowed to simply report an error. The
application must then flush the rules before attempting the
reconfiguration.
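A similar hedged sketch for probing rule persistence (the ETH/DROP
rule is only an example of a rule feature to test):

    #include <rte_flow.h>

    /* Probe, on a stopped port, whether rules using a given feature
     * (here: ETH item + DROP action, ingress) are kept across restart. */
    static int
    probe_flow_rule_kept(uint16_t port_id)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_DROP },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;
        struct rte_flow *flow;

        flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
        if (flow == NULL)
            return 0; /* not kept: flush rules before stopping */
        rte_flow_destroy(port_id, flow, &error);
        return 1;
    }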
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Since v0.4.0, if the underlying kernel supports it, libbpf uses 'bpf
link' to manage the programs on the interfaces of the xsks. This has two
repercussions for the PMD.
1. In the case where the PMD asks libbpf to load the default XDP
program, the PMD no longer needs to remove it on teardown. This is
because bpf link handles the unloading under the hood.
2. In the case where the PMD loads a custom program, libbpf expects this
program to be linked via bpf link prior to creating the socket.
This patch introduces probes for the libbpf version and kernel support
for bpf link and orchestrates the loading and unloading of
programs according to the capabilities of the kernel and libbpf. The
libbpf version is checked with meson and pkg-config. The probe for
kernel support mirrors how it is implemented in libbpf: a bpf_link is
created and looked up on the loopback device. If successful, bpf_link
will be used for the AF_XDP netdev.
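The kernel probe can be pictured roughly as follows (a hedged sketch
based on the description above; the function name and error handling
are illustrative, and the actual PMD/libbpf code differs in detail):

    #include <bpf/bpf.h>
    #include <net/if.h>
    #include <unistd.h>

    /* Return 1 if the kernel can attach an XDP program via a BPF link. */
    static int
    probe_bpf_link_supported(int prog_fd)
    {
        int link_fd;

        /* Try to create a BPF link for the program on loopback. */
        link_fd = bpf_link_create(prog_fd, if_nametoindex("lo"),
                                  BPF_XDP, NULL);
        if (link_fd < 0)
            return 0;
        close(link_fd);
        return 1;
    }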
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
The RSS expansion algorithm uses a graph to find the possible
expansion paths. A graph node with the 'explicit' flag is skipped
if it is not found in the flow pattern.
The current implementation misses a check for the explicit flag when
expanding the pattern according to an ETH item with EtherType.
For example:
testpmd> flow create 0 ingress pattern eth / ipv6 / udp / vxlan / eth
type is 2048 / end actions rss level 2 types udp end / end
The "eth type is 2048" item in the pattern may be expanded to "ETH IPv4".
The ETH node in the expansion graph is followed by VLAN node marked as
explicit. The fix is to skip the VLAN node and continue the expansion
with its next nodes, IPv4 and IPv6.
The expansion paths for the above example will be:
ETH IPV6 UDP VXLAN ETH END
ETH IPV6 UDP VXLAN ETH IPV4 UDP END
Fixes: 69d268b4ff ("net/mlx5: fix RSS expansion for explicit graph node")
Cc: stable@dpdk.org
Signed-off-by: Lior Margalit <lmargalit@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Flows with the ASO meter action can be created from multiple threads.
Meter pools are created to manage the meter object resources; if there
is no room in the current meter pool, it is resized to a larger size
and the old pool is freed.
There is a race condition when one thread resizes the meter pool and
frees the old pool while another thread queries a meter object by
index in the old pool; the returned value is invalid.
This patch adds a read-write lock to protect the pool resource during
resize and query.
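The locking pattern is roughly the following (a sketch; the lock and
helper names are illustrative, not the PMD's actual identifiers):

    #include <rte_rwlock.h>

    static rte_rwlock_t pool_rwlock; /* set up with rte_rwlock_init() */

    static void
    pool_resize(void)
    {
        /* Exclusive access while the old pool is replaced and freed. */
        rte_rwlock_write_lock(&pool_rwlock);
        /* ... allocate a larger pool, copy entries, free the old one ... */
        rte_rwlock_write_unlock(&pool_rwlock);
    }

    static void
    pool_query(void)
    {
        /* Shared access so a lookup never touches a freed pool. */
        rte_rwlock_read_lock(&pool_rwlock);
        /* ... look up the meter object by index ... */
        rte_rwlock_read_unlock(&pool_rwlock);
    }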
Fixes: a5835d530f ("net/mlx5: optimize Rx queue match")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Flows with the age action can be created from multiple threads. Age
pools are created to manage the age resources; if there is no room in
the current pool, it is resized to a larger size and the old pool is
freed.
There is a race condition when one thread resizes the age pool and
frees the old pool while another thread queries the age action value
in the old pool, so the queried value is invalid.
This patch uses a read-write lock to protect the pool resource during
resize and query.
Fixes: a5835d530f ("net/mlx5: optimize Rx queue match")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch removes hns3vf_set_mc_mac_addr_list() and uses
hns3_set_mc_mac_addr_list() instead.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, when configuring a group of multicast MAC addresses, the PF
driver reorders the mc_addr array in the hw struct to remove the
multicast MAC addresses that are not in the user-supplied mc_addr_set
array and then adds the new multicast MAC addresses. Actually, this
can be simplified by removing all previous MAC addresses and then
adding the new ones.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch extracts a common function to check multicast address
validity for PF and VF.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The code logic for adding and removing MAC addresses in PF and VF is
the same. This patch extracts two common interfaces to add and remove
them separately.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, the interface logic for adding and deleting all MAC
addresses and multicast addresses in the PF and VF drivers is the
same. This patch extracts two common interfaces to configure them
separately.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch uses APIs in hns3_hw_ops to configure MAC related features.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch adds hns3_hw_ops structure to operate hardware in PF and VF
driver.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch removes redundant hns3_remove_mc_addr_common(), which can be
replaced by hns3_remove_mc_mac_addr().
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch renames hns3_remove_uc_addr_common() to
hns3_remove_uc_mac_addr() in PF.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch removes hns3_add_mc_addr_common() in PF and
hns3vf_add_mc_addr_common() in VF.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Extract a common interface for PF and VF to check whether the configured
multicast MAC address from rte_eth_dev_mac_addr_add() is the same as the
multicast MAC address from rte_eth_dev_set_mc_addr_list().
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch renames hns3_remove_mc_addr() to hns3_remove_mc_mac_addr().
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch renames hns3_add_uc_addr() to hns3_add_uc_mac_addr().
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
stat_ctx_alloc is called within the context of each Rx/Tx ring, i.e.,
from bnxt_alloc_hwrm_rx_ring() and bnxt_alloc_hwrm_tx_ring(). So there
is no need to invoke bnxt_alloc_all_hwrm_stat_ctxs() from
bnxt_start_nic().
Fixes: 657c2a7f1d ("net/bnxt: create aggregation rings when needed")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
During port stop, we clear "eth_dev->data->scattered_rx" at the
beginning. As a result, in bnxt_free_hwrm_rx_ring() the
bnxt_need_agg_ring() check returns false and we end up not freeing
the Rx aggregation rings, which results in a resource leak in the FW.
Fixes: 657c2a7f1d ("net/bnxt: create aggregation rings when needed")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
In the Rx data path, hardware registers are read per packet, resulting
in a big performance drop. This patch improves performance in two
ways:
(1) replace the per-packet hardware register read with a per-burst
read;
(2) reduce the number of hardware register reads from 3 to 2 when the
low 32 bits of the time value are not close to overflow (see the
sketch below).
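The register-read reduction can be illustrated as follows (a hedged
sketch of the general technique, not the driver's code; read_lo() and
read_hi() stand in for the hardware register accessors and the guard
band is arbitrary):

    #include <stdint.h>

    static uint64_t
    read_hw_time(uint32_t (*read_lo)(void), uint32_t (*read_hi)(void))
    {
        const uint32_t guard = UINT32_MAX / 16; /* "close to overflow" */
        uint32_t lo = read_lo();
        uint32_t hi = read_hi();

        if (lo > UINT32_MAX - guard) {
            /* The low word may have wrapped between the two reads:
             * re-read it and, if it wrapped, refresh the high word. */
            uint32_t lo2 = read_lo();
            if (lo2 < lo) {
                hi = read_hi();
                lo = lo2;
            }
        }
        return ((uint64_t)hi << 32) | lo;
    }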
Meanwhile, this patch refines the "ice_timesync_read_rx_timestamp" and
"ice_timesync_read_tx_timestamp" APIs, in which
"ice_tstamp_convert_32b_64b" is also used.
Fixes: 953e74e6b7 ("net/ice: enable Rx timestamp on flex descriptor")
Fixes: 646dcbe6c7 ("net/ice: support IEEE 1588 PTP")
Suggested-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Build error with gcc 11.2.1 "cc (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)":
In function ‘i40e_flow_parse_fdir_pattern’,
inlined from ‘i40e_flow_parse_fdir_filter’
at ../drivers/net/i40e/i40e_flow.c:3274:8:
../drivers/net/i40e/i40e_flow.c:3052:69:
error: writing 1 byte into a region of size 0
[-Werror=stringop-overflow=]
3052 | filter->input.flow_ext.flexbytes[j] =
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
3053 | raw_spec->pattern[i];
| ~~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/net/i40e/i40e_flow.c:25:
../drivers/net/i40e/i40e_flow.c:
In function ‘i40e_flow_parse_fdir_filter’:
../drivers/net/i40e/i40e_ethdev.h:638:17:
note: at offset 16 into destination object ‘flexbytes’ of size 16
638 | uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
| ^~~~~~~~~
Fix by adding range checks.
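The added checks are of roughly this form (an illustrative fragment;
the exact placement inside i40e_flow_parse_fdir_pattern() may differ):

    /* Stop copying once the flexbytes destination array is full. */
    if (j >= RTE_ETH_FDIR_MAX_FLEXLEN)
        break;
    filter->input.flow_ext.flexbytes[j] = raw_spec->pattern[i];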
Fixes: 6ced3dd72f ("net/i40e: support flexible payload parsing for FDIR")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
If, for any reason, a socket could not be opened, mlx5_pmd_socket_init()
could close fd 0 (which is a valid descriptor and has a fair chance of
being stdin), since server_socket == 0 because the variable resides in
.bss.
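A minimal sketch of the defensive pattern (not the exact PMD change):
treat fd 0 as valid and use -1 as the "no socket" sentinel.

    #include <unistd.h>

    static int server_socket = -1; /* 0 is a valid fd; -1 means not open */

    static void
    server_socket_close(void)
    {
        if (server_socket >= 0) {
            close(server_socket);
            server_socket = -1;
        }
    }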
Fixes: e6cdc54cc0 ("net/mlx5: add socket server for external tools")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
A meter policy with an RSS/queue action is not supported when
dv_xmeta_en is enabled.
With dv_xmeta_en enabled, legacy flow creation splits such a flow into
two flows (one set_tag-plus-jump flow and one RSS/queue action flow).
A meter policy used as a termination table cannot split the flow, so
it cannot be supported when dv_xmeta_en is enabled.
Fixes: 51ec04dc7b ("net/mlx5: connect meter policy to created flows")
Cc: stable@dpdk.org
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The MODIFY_FIELD RTE action rejects copying to/from metadata in the
legacy mode of extensive flow metadata support. This is not consistent
with the SET_META action, which has no such restriction. Register A or
B is used for META in legacy mode, so allow metadata modifications in
legacy mode as well.
On the other hand, SET_META rejects the action if register C is not
available, even though register C is not needed in legacy mode. Skip
this check in legacy mode and allow setting META.
Fixes: edf325d421 ("net/mlx5: check extended metadata for meta modification")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Register C is used for metadata within the NIC Rx domain, and its
width can vary from 0 to 32 bits depending on kernel usage. This is
not the case within the NIC Tx domain, where register A is always
32 bits wide. Fix metadata width detection for the modify_field flow
API within the NIC Tx domain.
Fixes: 6d5735c1cb ("net/mlx5: fix meta register conversion for extensive mode")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add check operations for VF function-level reset, mailbox messages
and ACK from the VF, and wait to process the messages.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>