Commit Graph

506 Commits

Author SHA1 Message Date
Haifei Luo
5db9318f76 net/mlx5: add more details to flow dump
Currently the flow dump provides little information about actions,
just the pointers. Add implementations to display details for
counter, modify_hdr and encap_decap actions.

For counter, the regular flow operation query is engaged and
the counter content information is provided, including the hits
and bytes values. For modify_hdr and encap/decap actions,
the information stored in the ipool objects is dumped.

These are the formats of the information presented in the dump:
	Counter: rec_type,id,hits,bytes
	Modify_hdr: rec_type,id,actions_number,actions
	Encap_decap: rec_type,id,buf
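
For illustration, a counter record could be emitted as below (a
hedged sketch; only the "rec_type,id,hits,bytes" layout comes from
this patch):

#include <stdio.h>
#include <inttypes.h>

static void
dump_counter_rec(FILE *f, uint32_t id, uint64_t hits, uint64_t bytes)
{
	/* Values are assumed to come from the regular flow counter
	 * query operation mentioned above. */
	fprintf(f, "Counter,0x%" PRIx32 ",%" PRIu64 ",%" PRIu64 "\n",
		id, hits, bytes);
}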

Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-07-08 22:09:27 +02:00
Shun Hao
48fbc1be82 net/mlx5: fix meter policy flow match item
Currently, when creating a meter policy, a src port_id match item is
always added in the switch domain, so if one meter is used by another
port, it will not work correctly.

This issue is solved:
1. If the policy fate action is port_id, add the src port_id match
item; the meter cannot be shared by another port.
2. If the policy fate action isn't port_id, don't add the src port_id
match, so the meter can be shared by another port.

This fix enables one meter to be shared by different ports. The user
can create a meter flow using a port_id match item to make this meter
shared by other ports.
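
A minimal sketch of the decision (helper names are illustrative):

if (policy_fate_is_port_id(policy)) {
	/* Fate is port_id: match on the src port; the meter stays
	 * dedicated to this port. */
	add_src_port_id_match(matcher, src_port);
}
/* Otherwise no src port_id match is added and the meter can be
 * shared by another port. */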

Fixes: afb4aa4f12 ("net/mlx5: support meter policy operations")
Cc: stable@dpdk.org

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-08 22:09:26 +02:00
Shun Hao
efcce4dcdc net/mlx5: fix meter policy ID table container
The meter policy handlers are managed by user IDs, and the driver uses
an l3 table in order to map a user ID to the internal driver handler
of the policy.

The l3 table was wrongly saved in the shared device structure, which
manages all the switch domain ports, making the user IDs shared
between different ethdev ports.

Move the policy l3 table to be per port by saving it in the port private
structure.
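
A hedged sketch of the structural move (struct and field names follow
the commit wording, not necessarily the exact driver code):

struct mlx5_dev_ctx_shared {
	/* ... */
	/* policy l3 table removed: user IDs must not be shared
	 * between the switch domain ports. */
};

struct mlx5_priv {	/* per-port private structure */
	/* ... */
	struct mlx5_l3t_tbl *policy_idx_tbl;	/* user ID -> policy
						 * handler, per port */
};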

Fixes: afb4aa4f12 ("net/mlx5: support meter policy operations")
Cc: stable@dpdk.org

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-08 22:09:25 +02:00
Shun Hao
a295c69a8b net/mlx5: optimize meter profile lookup
Currently a list is used to save all meter profile IDs, which is
not efficient when looking up a profile among a huge number of
profiles.

Use an l3 table instead to save the meter profile IDs, so as to
improve the lookup performance.
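
A minimal sketch of an l3 (three-level) index table lookup; the bit
split is illustrative:

/* The 32-bit profile ID indexes three table levels in turn,
 * giving O(1) lookup instead of an O(n) list scan. */
static void *
l3t_lookup(void **lvl0, uint32_t id)
{
	void **lvl1 = lvl0[id >> 20];		/* top 12 bits */

	if (lvl1 == NULL)
		return NULL;
	void **lvl2 = lvl1[(id >> 10) & 0x3ff];	/* middle 10 bits */
	if (lvl2 == NULL)
		return NULL;
	return lvl2[id & 0x3ff];		/* low 10 bits */
}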

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-07-08 22:09:24 +02:00
Jiawei Wang
b3880af2ce net/mlx5: fix representor ID check for sampling
The representor definition was introduced in the latest code.
For a non-representor port, like a PF port, 0xffff is used instead
of -1.

This patch updates the representor ID checking during the splitting
of the sample flow.

Fixes: cb95feefdd ("net/mlx5: support sub-function representor")
Cc: stable@dpdk.org

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Xueming Li <xuemingl@nvidia.com>
2021-07-08 22:09:22 +02:00
Bing Zhao
23233fd63a net/mlx5: fix loopback for Direct Verbs queue
In the past, all the queues and other hardware objects were created
through the Verbs interface. Currently, most of the object creations
are migrated to the DevX interface by default, including queues. Only
when DV is disabled by a device argument, or E-Switch is enabled, are
all or some of the objects created through the Verbs interface.

When using the DevX interface to create queues, the kernel driver
behavior is different from the case of using Verbs. The Tx loopback
cannot work properly even if the Tx and Rx queues are configured
with the loopback attribute. To fix the Tx self-loopback support, a
dummy Verbs queue pair needs to be created to trigger the kernel to
enable the global loopback capability.

This is only required when a TIR is created for Rx and loopback is
needed. Only a CQ and a QP are needed for this case; no WQ (RQ)
needs to be created.
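
A hedged sketch of such a dummy queue pair (attributes are
illustrative and error handling is omitted):

#include <infiniband/verbs.h>

static struct ibv_qp *
create_dummy_loopback_qp(struct ibv_context *ctx, struct ibv_pd *pd)
{
	struct ibv_cq *cq = ibv_create_cq(ctx, 1, NULL, NULL, 0);
	struct ibv_qp_init_attr attr = {
		.send_cq = cq,
		.recv_cq = cq,
		.cap = { .max_send_wr = 1, .max_send_sge = 1 },
		.qp_type = IBV_QPT_RAW_PACKET,
	};

	/* Never polled or used for traffic; its creation alone
	 * triggers the kernel to enable global loopback. */
	return ibv_create_qp(pd, &attr);
}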

Bugzilla ID: 645
Fixes: 6deb19e1b2 ("net/mlx5: separate Rx queue object creations")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-18 10:30:45 +02:00
Li Zhang
ec962bad14 net/mlx5: fix metering cleanup on stop
A meter may hold Rx queue references in its sub-policies.
In the stop operation, all the Rx queues are released.

Wrongly, the meter references were not released before
destroying the Rx queues, which caused an error in stop.

Release the Rx queues' meter references in the stop operation.

Fixes: fc6ce56bba ("net/mlx5: prepare sub-policy for flow with meter")

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-05-16 14:58:23 +02:00
Bing Zhao
0a42911739 net/mlx5: validate connection tracking action
The validation of a CT action contains two parts. The first is the
CT action configuration parameters: when creating a CT action
context, some members need to be verified.

The second is that, when creating a flow, the DR action of CT should
be validated against the other actions and items as well. Currently,
only the TCP protocol supports connection tracking.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-05 14:30:17 +02:00
Bing Zhao
2d084f69aa net/mlx5: add translation of connection tracking action
When creating a flow with this action context for CT, it needs to be
translated in 2 levels.

First, retrieve the rte_flow action from the action context.
Second, translate it to the corresponding DR action with the traffic
direction that was specified when creating or updating via the
rte_flow_action_handle* API.

Before using the DR action in a flow, the CT context should be
available to use in the hardware. A synchronization is done before
inserting the flow rule with CT action to check the HW availability
of this CT context.

In order to release the DR actions and reuse the context of a CT,
the reference count should also be handled when destroying the
flow rule.

The CT index will be recorded in the rte_flow by reusing the ASO age
index to save memory, since only one ASO action is supported in one
flow rule currently. The action context type should also be saved
for CT. When destroying a flow rule, if the context type is CT and
the index is valid (non-zero), the release process should be
handled. By default, the handling falls back to trying to release
the ASO age, if any.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-05 14:30:15 +02:00
Bing Zhao
cf75655636 net/mlx5: add ASO connection tracking query
After the connection tracking context is created and used by
the flows, the context will be updated by the HW automatically after
a packet passes the CT validation. E.g., the ACK, SEQ, window and
state of the CT can be updated by traffic in both directions.

In order to query the updated contents of this context, a WQE should
be posted to the SQ with a return buffer. The data will be filled
into the buffer, and the profile will be filled with a specific value.

During the execution of the query command, the context may be
updated, so the result of the query command may not be the latest one.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-05 14:30:14 +02:00
Bing Zhao
2db75e8b1d net/mlx5: add actions for connection tracking creation
Allocate a CT from the management pools and create the DR actions
for both directions by default.

If there is no available connection tracking action, a new pool will
be created with a fixed-size bulk allocation. Right now, all the
resources are controlled by a linked list.

The ASO connection tracking contexts associated with these actions
need to be updated via WQE before being used for steering.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-05 14:30:13 +02:00
Bing Zhao
ebaf1b318c net/mlx5: support connection tracking modify
After the connection tracking object bulk is allocated, all the
objects' contents are filled with zeros by default. Every
newly allocated object must be modified via a WQE operation before
it is used.

In order to reduce the latency for the flow creation, an asynchronous
way is used instead of busy waiting for the CQE to be generated.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-05 14:30:12 +02:00
Bing Zhao
ee9e5fad03 net/mlx5: initialize connection tracking management
The definitions of the ASO connection tracking object management
structures are added.

Considering performance, the bulk allocation of ASO CT objects
should be used. The maximal value per bulk and the granularity can
be fetched from HCA capabilities 2. Right now, a fixed number of 64
is used for each bulk for better management.

The ASO QP for CT is initialized; the SQ will be used for both the
modify and query commands.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-05-05 14:30:11 +02:00
Michael Baum
cd414f81d1 net/mlx5: workaround ASO memory region creation
Due to a kernel issue in direct MKEY creation using the DevX API for
physical memory, this patch replaces the ASO MR creation with the
Verbs API.
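
A hedged sketch of the Verbs-based registration (error handling is
omitted; mr->lkey then replaces the DevX MKEY index in the ASO WQEs):

#include <infiniband/verbs.h>

static struct ibv_mr *
aso_reg_mr(struct ibv_pd *pd, void *buf, size_t len)
{
	/* Register the ASO buffer through Verbs instead of creating
	 * an MKEY directly via DevX. */
	return ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
}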

Fixes: f935ed4b64 ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-05-03 09:09:50 +02:00
Michael Baum
447d4d797d net/mlx5: fix flow age event triggering
A FLOW_AGE event should be invoked when a new aged-out flow is
detected by the PMD after the last user get-aged query call.
The PMD manages 2 flags for this information and checks them in order
to decide whether an event should be invoked:
MLX5_AGE_EVENT_NEW - a new aged-out flow was detected after the last
check.
MLX5_AGE_TRIGGER - a get-aged query was called after the last
aged-out flow.
The 2 flags were unset after the event invocation.

When the user calls the get-aged query from the event callback, the
TRIGGER flag was set inside the user callback and unset directly
after the callback, which may stop the event invocation forever.

Unset the TRIGGER flag before the event invocation in order to allow
it to be set by the user callback.
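
A hedged sketch of the corrected ordering (the flag storage and
struct layout are assumptions, simplified to plain bit operations):

if ((age_info->flags & MLX5_AGE_EVENT_NEW) &&
    (age_info->flags & MLX5_AGE_TRIGGER)) {
	/* Unset TRIGGER before invoking the event so that a get-aged
	 * query issued inside the user callback can set it again. */
	age_info->flags &= ~(MLX5_AGE_EVENT_NEW | MLX5_AGE_TRIGGER);
	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_FLOW_AGED,
				     NULL);
}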

Fixes: f935ed4b64 ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org

Reported-by: David Bouyeure <david.bouyeure@fraudbuster.mobi>
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-30 12:41:07 +02:00
Michael Baum
f3191849f2 net/mlx5: support flow count action handle
The existing API supports a counter action to count the traffic of a
single flow. The user can share the count action among different
flows using the shared flag and the same counter ID in the count
action configuration.

A recent patch [1] introduced the indirect action API.
Using this API, an action can be created as indirect, unattached to
any flow rule. Multiple flows can then be created using the same
indirect action. The new API also supports the query operation on an
indirect action.

The new API is more efficient because the driver gets its own handle
for the count action instead of managing a mapping between the user
ID and the driver handle.

Support the create, query and destroy indirect action operations for
the flow count action.

The application will use the indirect action query operation to
query this count action.

In the meantime, the old sharing mechanism (with the sharing flag)
continues to be supported, and the user can choose which way to
share the counter.
The new indirect action API is only supported in DevX, so sharing a
counter action in Verbs can only be done through the old mechanism.

[1] https://mails.dpdk.org/archives/dev/2020-July/174110.html
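
A hedged usage sketch of the indirect count action with the
21.05-era API names (error handling trimmed):

#include <rte_flow.h>

static int
indirect_count_example(uint16_t port_id)
{
	struct rte_flow_indir_action_conf conf = { .ingress = 1 };
	struct rte_flow_action count = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
	};
	struct rte_flow_query_count qc = { .reset = 0 };
	struct rte_flow_error err;
	struct rte_flow_action_handle *handle;

	handle = rte_flow_action_handle_create(port_id, &conf,
					       &count, &err);
	if (handle == NULL)
		return -1;
	/* ... attach 'handle' to any number of flow rules ... */
	/* qc.hits / qc.bytes will hold the aggregated values. */
	return rte_flow_action_handle_query(port_id, handle, &qc, &err);
}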

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-30 12:41:07 +02:00
Viacheslav Ovsiienko
a4af5eed40 net/mlx5: remove drop queue function prototypes
Remove some leftovers of previously removed code - there are
no drop queue handling routines anymore.

Fixes: 78be885295 ("net/mlx5: handle drop queues as regular queues")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-04-28 08:43:18 +02:00
Li Zhang
4443201863 net/mlx5: support meter creation with policy
Create a meter with the new pre-defined policy.

The following cases to be considered:
1. Add an entry match with meter_id in the global drop table.
2. For the non-termination policy (policy id 0),
   add a jump rule to the suffix table for green and a
   jump rule to the drop table for red.
3. Allocate a counter per meter in the drop table.
4. Allocate meter resources per domain per color.
5. It can work with both ASO and legacy meter HW objects.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-27 13:20:35 +02:00
Li Zhang
afb4aa4f12 net/mlx5: support meter policy operations
The MLX5 PMD validates the actions in the policy while adding a new
meter policy; if the validation passes, it allocates the new policy
object from the meter policy indexed memory pool.

It is common to use the same policy for multiple meters.
MLX5 PMD supports two types of policy: termination policy and
no-termination policy.

Implement the following policy operations:
validate:
The driver doesn't support configuring actions in the flow after the
meter action, except for one case: when the meter policy is
configured to do nothing in GREEN/YELLOW and only a DROP action in
RED. This special policy is called the non-terminated policy and is
handled as a singleton object internally.

For all the terminated policies, the following actions are supported:
GREEN - QUEUE, RSS, PORT_ID, JUMP, DROP, MARK and SET_TAG.
YELLOW - not supported at all -> must be empty.
RED - must include a DROP action.

Hence, in the ingress case, for example,
QUEUE/RSS/JUMP must be configured as the last action for the GREEN
color.

All the above limitations will be validated.

create:
Validate the policy configuration.
Prepare the related tables and actions.

destroy:
Release the created policy resources.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-27 13:20:28 +02:00
Li Zhang
5f0d54f372 ethdev: add pre-defined meter policy API
Currently, the flow meter policy does not support multiple actions
per color; also the allowed action types per color are very limited.
In addition, the policy cannot be pre-defined.

Due to the growth in flow action offload abilities, there is a
potential for the user to use a variety of actions per color
differently. This new meter policy API enables this potential in the
most common ethdev way, using the rte_flow action definition.
A list of rte_flow actions will be provided by the user per color
in order to create a meter policy.
In addition, the API forces pre-defining the policy before the
meter creation in order to allow efficient sharing of a single
policy by multiple meters.

meter_policy_id is added into struct rte_mtr_params,
so that the policy can be retrieved during meter creation.

Allow coloring the packet using a new rte_flow_action_color
as could be done by the old policy API.

Add two common policy templates as macros in the header file.

The following API functions were added:
- rte_mtr_meter_policy_add
- rte_mtr_meter_policy_delete
- rte_mtr_meter_policy_update
- rte_mtr_meter_policy_validate
The following structs were changed:
- rte_mtr_params
- rte_mtr_capabilities
The following API was deleted:
- rte_mtr_policer_actions_update
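
For illustration, a drop-on-red policy could be defined with the new
API as below (a hedged sketch; only the function and struct names
listed above are taken from the patch):

#include <rte_flow.h>
#include <rte_mtr.h>

static int
add_drop_on_red_policy(uint16_t port_id, uint32_t policy_id)
{
	struct rte_flow_action red[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_mtr_meter_policy_params params = {
		.actions = { [RTE_COLOR_RED] = red },
	};
	struct rte_mtr_error error;

	return rte_mtr_meter_policy_add(port_id, policy_id,
					&params, &error);
}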

To support this API, the following apps were changed:
app/test-flow-perf: clean meter policer
app/testpmd: clean meter policer

To support this API, the following drivers were changed:
net/softnic: support meter policy API
1. Cleans meter rte_mtr_policer_action.
2. Supports policy API to get color action as policer action did.
   The color action will be mapped into rte_table_action_policer.

net/mlx5: clean meter creation management
Cleans and breaks part of the current meter management
in order to allow better design with policy API.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-04-21 12:22:17 +02:00
Li Zhang
2d2cef5d4f net/mlx5: allow multiple flow tables on same level
The driver devices support the creation of multiple flow tables.
A jump action can be used in order to move the packet steering
to a different flow table.
Table 0 is always the root table for packet steering.

Jumping between tables may cause endless loops in the steering
mechanism; that's why each table has a level attribute:
the driver sub-system may not allow jumping to a table with an
equal or lower level than the current table.

Currently, in the driver, the table ID and level are always identical.

Allow multiple flow table creation with the same level attribute.

This patch adds the table ID to the flow table data entry. While
allocating a flow table, if the table level is the same but the
table ID is different, the new table will be allocated with a new
table object ID. This supports 4M flow tables on the same level.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-21 08:28:12 +02:00
Li Zhang
cfd2037c14 net/mlx5: make ASO meter queue thread-safe
Synchronize ASO meter queue accesses from
different threads using a spinlock.
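
A minimal sketch of the locking pattern (struct and field names are
assumptions, not the driver's exact code):

#include <rte_spinlock.h>

struct aso_mtr_sq {
	rte_spinlock_t sqsl;	/* serializes access to the shared SQ */
	/* ... WQE ring, doorbell record, etc. ... */
};

static void
aso_mtr_sq_post(struct aso_mtr_sq *sq)
{
	rte_spinlock_lock(&sq->sqsl);
	/* ... build the ASO WQE and ring the doorbell ... */
	rte_spinlock_unlock(&sq->sqsl);
}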

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-21 08:28:10 +02:00
Li Zhang
c99b4f8bc2 net/mlx5: support ASO meter action
When the ASO action is available, use it as the meter action.

Signed-off-by: Shun Hao <shunh@nvidia.com>
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-21 08:28:08 +02:00
Li Zhang
e93c58da4d net/mlx5: add meter ASO queue management
This patch adds the ASO queue management for the flow meter,
including the send WQE and CQE handling functions.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-21 08:28:05 +02:00
Li Zhang
29efa63a7e net/mlx5: initialize flow meter ASO SQ
Initialize the flow meter ASO SQ WQEs with
all the constant data that should not be updated
per enqueue operation.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-21 08:28:04 +02:00
Li Zhang
e6100c7b62 net/mlx5: add flow meter pool to manage meter object
Add an ASO flow meter pool to manage the meter objects.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-21 08:28:02 +02:00
Shun Hao
83306d6c46 net/mlx5: fix meter statistics
Currently, packets after the meter are steered to a global policer
table, which includes green/red color rules for every meter, so as
to have counter statistics for each color in every meter.

There's a bug that all the rules in the global policer table match
only the color criteria, so all packets are counted to one meter
only, and the other meters' statistics are always zero.

This patch does the following:
1. The rules in the policer table match both the meter index and the
color, so a packet after the meter can be counted to the correct
meter counter.
2. The meter index and flow index now share the available register
bits dynamically. The meter index starts from the LSB, and the flow
index starts from the MSB.
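
A minimal sketch of such dynamic bit sharing (bit widths and names
are illustrative, not the driver's exact layout):

/* Pack the meter index from the LSB and the flow index from the
 * MSB of the shared register value. */
static uint32_t
pack_mtr_flow_ids(uint32_t mtr_idx, uint32_t flow_idx,
		  unsigned int mtr_bits)
{
	return (flow_idx << mtr_bits) |
	       (mtr_idx & ((1u << mtr_bits) - 1));
}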

Fixes: 46a5e6bc6a ("net/mlx5: prepare meter flow tables")
Cc: stable@dpdk.org

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-21 08:27:46 +02:00
Haifei Luo
bd0a931543 net/mlx5: support single flow dump
Modify the API mlx5_flow_dev_dump to support the feature.
Modify mlx5_socket since one extra argument, flow_ptr, is added.

The data structure sent to the DPDK application from the utility
triggering the flow dumps should be packed and its endianness must
be specified. The native host endianness can be used, as all
exchange happens within the same host (we use sendmsg aux data and
share the file handle; the remote approach is not applicable, no
inter-host communication happens).

The message structure to dump one/all flow(s):
struct mlx5_flow_dump_req {
	uint32_t port_id;
	uint64_t flow_ptr;
} __rte_packed;

If flow_ptr is 0, all flows for the specified port will be dumped.
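
For illustration, a dump utility could fill the request as below (a
hedged sketch; the transport, sendmsg() with the dump file handle as
ancillary data, is as described above):

struct mlx5_flow_dump_req req = {
	.port_id = 0,	/* target ethdev port */
	.flow_ptr = 0,	/* 0 => dump all flows of the port */
};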

Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-04-19 12:45:05 +02:00
Haifei Luo
50c383793b ethdev: dump single flow rule
Previous implementations support dumping all the flows. Add a new
argument rte_flow to rte_flow_dev_dump to dump one flow.
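
A hedged usage sketch of the extended call (the NULL-flow behavior
matches the description above):

#include <stdio.h>
#include <rte_flow.h>

/* Dump a single rule; passing flow == NULL keeps the previous
 * behavior of dumping all rules of the port. */
static int
dump_one_flow(uint16_t port_id, struct rte_flow *flow)
{
	struct rte_flow_error error;

	return rte_flow_dev_dump(port_id, flow, stdout, &error);
}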

Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-04-14 13:19:55 +02:00
Dmitry Kozlyuk
89813a522e net: provide IP-related API on any OS
Users of <rte_ip.h> relied on it to provide IP-related defines,
like IPPROTO_* constants, but still had to include POSIX headers
for inet_pton() and other standard IP-related facilities.

Extend <rte_ip.h> so that it is a single header to gain access
to IP-related facilities on any OS. Use it to replace POSIX includes
in components enabled on Windows. Move missing constants from Windows
networking shim to OS shim header and include it where needed.

Remove Windows networking shim that is no longer needed.
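
A minimal sketch of code that should now build on any supported OS
with only the extended header (assuming <rte_ip.h> exposes
inet_pton() and the IPPROTO_* constants as this patch describes):

#include <rte_ip.h>

/* Parse a dotted-quad IPv4 address; no direct POSIX includes. */
static int
parse_ipv4(const char *str, struct in_addr *addr)
{
	return inet_pton(AF_INET, str, addr);
}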

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Ranjit Menon <ranjit.menon@intel.com>
2021-04-15 01:56:43 +02:00
Viacheslav Ovsiienko
da845ae9d7 net/mlx5: fix drop action for Direct Rules/Verbs
There are multiple branches in the rdma-core library backing
the rte flows:
  - Verbs
  - Direct Verbs (DV)
  - Direct Rules (DR)

The Verbs API always requires specifying the queue even
if there is a drop action in the flow, though the kernel
optimizes out the actual queue usage for the flows containing
the drop action. The PMD handles a dedicated Rx queue to
provide Verbs API compatibility.

The DV/DR API does not require explicitly specifying the queue
at flow creation, but the PMD still specified the dedicated
drop queue as the action. It performed the packet forwarding to
the dummy queue (that was not polled at all), causing steering
pipeline resource usage and degrading the overall packet
processing rate. For example, with a flow inserted to drop all
the ingress packets, the statistics reported that only 15 Mpps
of 64B packets were received over a 100 Gbps line.

Since the Direct Rule API for E-Switch was introduced, rdma-core
supports the dedicated drop action, which is recognized both for DV
and DR and can be used for the entire device in a unified fashion,
regardless of the steering domain. The similar drop action was
introduced for E-Switch; its usage can be extended to other steering
domains, not only to E-Switch's.

This patch:
  - renames esw_drop_action to dr_drop_action to emphasize
    the global nature of the variable (not only E-Switch domain)
  - specifies this global drop action instead of dedicated
    drop queue for the DR/DV flows

Fixes: 34fa7c0268 ("net/mlx5: add drop action to Direct Verbs E-Switch")
Fixes: 65b3cd0dc3 ("net/mlx5: create global drop action")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-04-07 10:25:32 +02:00
Xueming Li
91766fae2b net/mlx5: probe host PF representor with sub-function
To simplify the BlueField HPF representor (vf[-1]) probe, this patch
allows probing it with the "sf" syntax: "sf[-1]".

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:39 +02:00
Xueming Li
f5f4c48237 net/mlx5: save bonding member ports information
Since the kernel bonding netdev doesn't provide a statistics counter
that reflects all member ports, the PMD has to manually summarize
the counters from each member port.

As a preparation, this patch collects the bonding member port
information and saves it to the shared context data.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:33 +02:00
Xueming Li
f926cce3fa net/mlx5: refactor bonding representor probing
To probe a representor on the 2nd PF of a kernel bonding device, the
PF1 BDF had to be specified in the devargs:
  <PF1_BDF>,representor=0
When closing the bonding device, all representors had to be closed
together, and this implies all representors have to use the primary
PF of the bonding device. So after probing a representor port on the
2nd PF, when locating the newly probed device using the device
argument, the filter used the 2nd PF as the PCI address and failed
to locate the new device.

A conflict happened when using the current representor devargs:
 - Use the PCI BDF to specify the representor owner PF.
 - Use the PCI BDF to locate the probed representor device.
 - The PMD uses the primary PCI BDF as the PCI device.

To resolve such conflicts, a new representor syntax is introduced
here:
  <primary BDF>,representor=pfXvfY
All representors must use the primary PF as the owner PCI device;
the PMD internally locates the owner PCI address by checking the
"pfX" part of the representor. To EAL, all representors are
registered to the primary PCI device; the 2nd PF is hidden from EAL,
thus all searches should be consistent.

Same as for the VF representor, the HPF (host PF on BlueField) uses
the same syntax to probe, for example: representor=pf1vf[0-3,-1]

This patch also adds the PF index into the kernel bonding representor
port name:
	<BDF>_<ib_name>_representor_pf<X>vf<Y>

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:28 +02:00
Xueming Li
9b03958aeb net/mlx5: revert setting bonding representor to first PF
With kernel bonding, representors on the second PF are probed with
the devargs:
	<primary_bdf>,representor=pf1vf<N>
There is no need to save the primary PF port ID and look it up when
probing sibling ports; revert patch [1].

[1]:
commit e6818853c0 ("net/mlx5: set representor to first PF in bonding mode")

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:27 +02:00
Xueming Li
cb95feefdd net/mlx5: support sub-function representor
This patch adds support for the SF representor. Similar to the VF
representor, the switch port name of the SF representor in the
phys_port_name sysfs key is "pf<x>sf<y>".

The device representor argument is "representors=sf[list]"; a list
member can be a mix of instances and ranges. Example:
  representors=sf[0,2,4,8-12,-1]

To probe a VF representor and an SF representor, they need to be
separated into 2 devices:
  -a <BDF>,representor=vf[list] -a <BDF>,representor=sf[list]

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:25 +02:00
Thomas Monjalon
fb7ad441d4 ethdev: replace callback getting filter operations
Since rte_flow is the only API for filtering operations,
the legacy driver interface filter_ctrl was overly complicated
for the simple task of getting the struct rte_flow_ops.

The filter type RTE_ETH_FILTER_GENERIC and
the filter operation RTE_ETH_FILTER_GET are removed.
The new driver callback flow_ops_get replaces filter_ctrl.
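
A hedged sketch of the new driver-side callback (names are
illustrative; mlx5_flow_ops stands for the driver's rte_flow_ops
table):

static int
mlx5_flow_ops_get(struct rte_eth_dev *dev __rte_unused,
		  const struct rte_flow_ops **ops)
{
	*ops = &mlx5_flow_ops;
	return 0;
}

/* Hooked up as .flow_ops_get in the driver's eth_dev_ops. */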

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-03-26 18:37:13 +01:00
Viacheslav Ovsiienko
d61381ad46 net/mlx5: support timestamp format
This patch adds support for the timestamp format settings for
the receive and send queues. If the firmware version x.30.1000
or above is installed and the NIC timestamps are configured
with the real-time format, the default zero values for newly
added fields cause the queue creation to fail.

The patch queries the timestamp formats supported by the hardware
and sets the configuration values in queue context accordingly.

Fixes: 86fc67fc93 ("net/mlx5: create advanced RxQ object via DevX")
Fixes: ae18a1ae96 ("net/mlx5: support Tx hairpin queues")
Fixes: 15c3807e86 ("common/mlx5: support DevX QP operations")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2021-03-16 10:05:34 +01:00
Matan Azrad
e6988afdc7 net/mlx5: fix imissed statistics
The imissed port statistic counts packets that were dropped by the
device Rx queues.

In mlx5, the imissed counter summarizes 2 counters:
	- packets dropped by the SW queue handling counted by SW.
	- packets dropped by the HW queues due to "out of buffer" events
	  detected when no SW buffer is available for the incoming
	  packets.

There is a HW counter object that should be created per device, and
all the Rx queues should be assigned to this counter at
configuration time.

This part was missed when the Rx queues were created by DevX, which
left the "out of buffer" counter clean forever in this case.

Add 2 options to assign the DevX Rx queues to a queue counter:
	- Create a queue counter per device by DevX and assign all
	  the queues to it.
	- Query the kernel counter and assign all the queues to it.

Use the first option by default, and if it fails, fall back to the
second option.
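
A hedged sketch of the fallback order (helper names are
hypothetical):

static int
rxq_assign_queue_counter(struct mlx5_priv *priv)
{
	/* Preferred: create a per-device queue counter via DevX and
	 * assign all Rx queues to it. */
	if (devx_create_and_assign_q_counter(priv) == 0)
		return 0;
	/* Fallback: query the kernel counter and assign the queues. */
	return kernel_query_and_assign_q_counter(priv);
}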

Fixes: e79c9be915 ("net/mlx5: support Rx hairpin queues")
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Cc: stable@dpdk.org

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-03 17:27:21 +01:00
Suanming Mou
2b36c30b8c net/mlx5: fix port attach in secondary process
Currently, the secondary process port UAR register mapping used by
the Tx queues is done during port initialization.

Unluckily, in the port hot-plug case, the secondary process was
requested to initialize the port when the primary process had not
completed the device configuration and the port Tx queue number was
not configured yet. Hence, the secondary process got a zero Tx queue
number during probing, causing the UAR registers to not be mapped in
the correct fashion.

This commit checks the configured number of Tx queues in the
secondary process when the port start is requested. In case a Tx
queue number mismatch is found, the UAR mapping is reinitialized
accordingly.

Fixes: 2aac5b5d11 ("net/mlx5: sync stop/start with secondary process")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-01-29 18:16:08 +01:00
Bruce Richardson
df96fd0d73 ethdev: make driver-only headers private
The rte_ethdev_driver.h, rte_ethdev_vdev.h and rte_ethdev_pci.h files
are for drivers only and should be private to DPDK and not installed.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Steven Webster <steven.webster@windriver.com>
2021-01-29 20:59:09 +01:00
Shiri Kuzin
f15f0c3806 net/mlx5: create GENEVE TLV option management
Currently, the firmware supports only one TLV object per device
to match on the GENEVE header option.

This patch adds the simple TLV object management to the mlx5 PMD.

Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-01-19 03:30:16 +01:00
Michael Baum
6e0a3637d8 net/mlx5: move Rx RQ creation to common
Use the common function for Rx RQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-01-14 10:12:36 +01:00
Michael Baum
389ab7f5fd net/mlx5: move ASO SQ creation to common
Use the common function for ASO SQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-01-14 10:12:36 +01:00
Michael Baum
74e918604a net/mlx5: move Tx SQ creation to common
Use the common function for Tx SQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-01-14 10:12:36 +01:00
Michael Baum
71011bd56b net/mlx5: move rearm and clock queue SQ creation to common
Use the common function for DevX SQ creation for the rearm and clock
queues.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-01-14 10:12:36 +01:00
Michael Baum
5cd33796dd net/mlx5: move Rx CQ creation to common
Use the common function for Rx CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-01-14 10:12:36 +01:00
Michael Baum
5f04f70ccf net/mlx5: move Tx CQ creation to common
Use the common function for Tx CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-01-14 10:12:36 +01:00
Michael Baum
c7d41d98a7 net/mlx5: move ASO CQ creation to common
Use the common function for ASO CQ creation.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-01-14 10:12:36 +01:00
Michael Baum
a7787bb0b7 net/mlx5: move rearm and clock queue CQ creation to common
Use the common function for CQ creation at the rearm and clock
queues.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-01-14 10:12:36 +01:00