The "supported capability" obtained from firmware on copper ports
includes the speed capability, auto-negotiation capability, and flow
control capability. Therefore, this patch changes "supported_capa" to
"supported_speed" and parses the speed capability supported by the
driver from the "supported capability".
Fixes: 2e4859f3b362 ("net/hns3: support PF device with copper PHYs")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
In the previous patch, the driver calculates the packet type while
ignoring VLAN information, because the packet type may be calculated
incorrectly when VLAN and VLAN stripping are both present.
So this patch removes the following ptypes from the supported list:
1) RTE_PTYPE_L2_ETHER_VLAN
2) RTE_PTYPE_L2_ETHER_QINQ
3) RTE_PTYPE_INNER_L2_ETHER_VLAN
4) RTE_PTYPE_INNER_L2_ETHER_QINQ
Fixes: bba636698316 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Kunpeng 930 supports the RXD advanced layout. If this layout is enabled,
the hardware reports the packet type through an 8-bit PTYPE field in the
Rx descriptor, and the supported ptypes differ from the original scheme.
So this patch adds a supported ptype list for the RXD advanced layout.
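For illustration only (the indexes and values below are made up, not the
driver's real table), the 8-bit PTYPE field simply indexes a software
lookup table:
  static const uint32_t adv_layout_ptype_tbl[256] = {
          [1]  = RTE_PTYPE_L2_ETHER,
          [18] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
          /* ... remaining hardware PTYPE values ... */
  };

  uint32_t ptype = adv_layout_ptype_tbl[rxd_ptype & 0xff];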
Fixes: fb5e90694022 ("net/hns3: support Rx descriptor advanced layout")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
PTP depends on a special packet type that is only reported by hardware
with the RXD advanced layout enabled, so if the hardware doesn't support
the RXD advanced layout, the driver should ignore the PTP capability.
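A sketch of the intent (the capability bit names are assumptions, not
the actual hns3 definitions):
  if (!(hw->capability & HNS3_DEV_CAP_RXD_ADV_LAYOUT))
          hw->capability &= ~HNS3_DEV_CAP_PTP;   /* ignore PTP */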
Fixes: 438752358158 ("net/hns3: get device capability from firmware")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch supports reporting the TUNNEL GRE packet type when the RXD
advanced layout is enabled.
Fixes: fb5e90694022 ("net/hns3: support Rx descriptor advanced layout")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
This patch adds the RTE_PTYPE_L4_UDP flag when parsing tunnel VXLAN packets.
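Illustratively, since a VXLAN tunnel is carried over UDP, the parsed
ptype gains the outer L4 UDP flag alongside the tunnel flag (a sketch,
not the exact driver code):
  ptype |= RTE_PTYPE_L4_UDP | RTE_PTYPE_TUNNEL_VXLAN;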
Fixes: bba636698316 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The return type of hns3_cmd_send is int, but some functions declare
the return value as enum hns3_cmd_status.
This patch fixes the incorrect use of enum hns3_cmd_status.
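A sketch of the kind of change involved (the call site is illustrative):
  int ret;                        /* was: enum hns3_cmd_status ret; */

  ret = hns3_cmd_send(hw, &desc, 1);
  if (ret)
          return ret;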
Fixes: 737f30e1c3ab ("net/hns3: support command interface with firmware")
Fixes: 02a7b55657b2 ("net/hns3: support Rx interrupt")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, when processing MBX messages, the system timestamp is obtained
to determine whether a timeout has occurred. However, gettimeofday is not
monotonic, which may lead to an incorrect judgment or difficulty exiting
the loop. In this scenario, obtaining the timestamp is actually
unnecessary.
This patch removes the call to gettimeofday during MBX message
processing.
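A minimal sketch of the timestamp-free wait, assuming a hypothetical
helper mbx_response_received() and made-up retry limits:
  #define MBX_MAX_RETRIES 500
  #define MBX_DELAY_US    1000

  static int
  wait_for_mbx_response(struct hns3_hw *hw)
  {
          int retry = 0;

          while (!mbx_response_received(hw)) {    /* hypothetical helper */
                  if (++retry > MBX_MAX_RETRIES)
                          return -ETIME;          /* bounded by retries, not wall time */
                  rte_delay_us(MBX_DELAY_US);
          }
          return 0;
  }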
Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The VF builds two queues (csq: command send queue; crq: command receive
queue) with the firmware. The crq may contain the following messages:
1) mailbox response messages, which are the acks of mailbox sync requests;
2) PF link status change messages, which may be sent by the PF at any time.
Currently, any thread in the primary and secondary processes may send a
mailbox sync request, so each needs to process the crq messages in its
own thread context.
If the crq holds two messages, a) a PF link status change message and b)
a mailbox response message, when the secondary process deals with the crq
messages it will report the LSC event in the secondary process, because
it uses the policy of processing all pending messages at once.
We use the following scheme to solve it (a sketch follows the list):
1) threads in the secondary process may only process specific messages
(e.g. mailbox response messages) in the crq; once a message is processed,
its opcode is rewritten to zero so the intr thread in the primary
process will not process it again.
2) threads other than the intr thread in the primary process use the same
processing logic as the threads in the secondary process.
3) the intr thread in the primary process may process all messages.
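A sketch of the processing policy (ring layout and helper names are
assumptions, not the driver's API):
  static void
  process_crq(struct mbx_crq *crq, bool is_primary_intr_thread)
  {
          uint32_t i;

          for (i = crq->next_to_use; i != crq->tail; i = (i + 1) % crq->size) {
                  struct mbx_msg *msg = &crq->ring[i];

                  if (msg->opcode == 0)
                          continue;       /* already handled by another thread */
                  if (!is_primary_intr_thread && msg->opcode != MBX_RESPONSE)
                          continue;       /* e.g. LSC: left for the intr thread */
                  handle_mbx_message(msg);
                  msg->opcode = 0;        /* mark consumed */
          }
  }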
Fixes: 76a3836b98c4 ("net/hns3: fix setting default MAC address in bonding of VF")
Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, the mailbox synchronous communication between VF and PF uses
the following fields to maintain communication:
1. Req_msg_data, which is the combination of message code and subcode and
is used to match request and response.
2. Head, which is the number of requests successfully sent by the VF.
3. Tail, which is the number of responses successfully received by the VF.
4. Lost, which is the number of requests that have timed out.
A mismatch is possible in the following situation:
1. The VF sends message A with code=1 subcode=1.
Then head=1, tail=0, lost=0.
2. The PF is blocked for about 500ms while processing message A.
3. The VF detects a timeout for message A because it can't get the
response within 500ms.
Then head=1, tail=0, lost=1.
4. The VF sends message B with code=1 subcode=1, which equals message A.
Then head=2, tail=0, lost=1.
5. The PF processes the first message A and sends the response to the VF.
6. The VF updates the tail field to 1, but the lost field remains
unchanged because the code/subcode equals message B's, so the driver
returns success because now head(2) equals tail(1) plus lost(1).
This leads to a mismatch of request and response.
To fix the above bug, we use the following scheme (sketched below):
1. Each message sent from the VF is labelled with a match_id, which is a
unique 16-bit non-zero value.
2. The response sent from the PF is labelled with the match_id taken from
the request.
3. The VF uses the match_id to match request and response messages.
This scheme depends on the PF driver; if the PF driver doesn't support
it, the VF falls back to the original scheme.
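A minimal sketch of the match_id idea (field and helper names are
illustrative, not the driver's actual code):
  static uint16_t next_match_id = 1;

  req.match_id = next_match_id++;
  if (next_match_id == 0)
          next_match_id = 1;                      /* keep match_id non-zero */
  send_mbx_request(hw, &req);

  /* on receiving a response */
  if (resp.match_id != 0) {
          if (resp.match_id == req.match_id)
                  complete_request(&req, &resp);  /* exact request/response match */
  } else {
          /* old PF driver without match_id support: fall back to the
           * original code/subcode + head/tail/lost scheme */
          match_by_code_subcode(&req, &resp);
  }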
Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, the driver copies the mailbox message body into the arq ring
when processing HNS3_MBX_LINK_STAT_CHANGE messages, and then calls the
hns3_mbx_handler API, which directly processes the pre-copied messages.
In the whole process, the arq ring has no substantial effect.
Note: the arq ring is designed for the kernel environment, which cannot
do much work in interrupt context, but for DPDK it is not required.
Also rename hns3_handle_link_change_event to
hns3pf_handle_link_change_event, adding 'pf' to the name to make it
easier to distinguish.
Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Add a specific path for RX AVX512 (flexible descriptor).
In this path, the HW offload features are supported, such as
checksum, VLAN stripping, and RSS hash.
This path is chosen automatically according to the
configuration.
'inline' is used, so the duplicated code is generated
by the compiler.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add a specific path for RX AVX512 (traditional).
In this path, the HW offload features are supported, such as
checksum, VLAN stripping, and RSS hash.
This path is chosen automatically according to the
configuration.
'inline' is used, so the duplicated code is generated
by the compiler.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add a specific path for TX AVX512.
In this path, the HW offload features are supported, such as
checksum insertion and VLAN insertion.
This path is chosen automatically according to the
configuration.
'inline' is used, so the duplicated code is generated
by the compiler.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add the offload flag for RX queues so that the driver knows which
offload features are set.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Fix a segmentation fault when failing to get memory from the pool.
If there's no memory in the default cache, fall back to the
previous process.
The previous AVX2 rearm function is changed to add some AVX512
instructions and becomes a callee of both the AVX2 and AVX512
rearm functions.
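A sketch of the fallback logic (simplified; the mempool calls follow the
rte_mempool API, but the surrounding rearm code and variable names are
illustrative):
  struct rte_mempool_cache *cache =
          rte_mempool_default_cache(rxq->mp, rte_lcore_id());

  if (cache == NULL || cache->len < n) {
          /* the default cache cannot serve the burst: fall back to the
           * previous (bulk get) rearm process instead of touching mbufs
           * that were never allocated */
          if (rte_mempool_get_bulk(rxq->mp, (void **)rxep, n) < 0)
                  return;
  } else {
          /* fast path: take the mbuf pointers straight from the cache
           * with AVX512 loads/stores */
  }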
Fixes: e6a6a138919f ("net/i40e: add AVX512 vector path")
Cc: stable@dpdk.org
Reported-by: David Coyle <david.coyle@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: David Coyle <david.coyle@intel.com>
Fix a segmentation fault when failing to get memory from the pool.
If there's no memory in the default cache, fall back to the
previous process.
The previous AVX2 rearm function is changed to add some AVX512
instructions and becomes a callee of both the AVX2 and AVX512
rearm functions.
Fixes: 7f85d5ebcfe1 ("net/ice: add AVX512 vector path")
Cc: stable@dpdk.org
Reported-by: David Coyle <david.coyle@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: David Coyle <david.coyle@intel.com>
Fix a segmentation fault when failing to get memory from the pool.
If there's no memory in the default cache, fall back to the
previous process.
The previous AVX2 rearm function is changed to add some AVX512
instructions and becomes a callee of both the AVX2 and AVX512
rearm functions.
Fixes: 31737f2b66fb ("net/iavf: enable AVX512 for legacy Rx")
Cc: stable@dpdk.org
Reported-by: David Coyle <david.coyle@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: David Coyle <david.coyle@intel.com>
Previous implementations only support dumping all flows. Add a new
rte_flow argument to rte_flow_dev_dump to allow dumping a single flow.
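Usage sketch with the extended signature (passing NULL for the flow
keeps the previous behaviour of dumping every flow):
  struct rte_flow_error error;
  int ret;

  ret = rte_flow_dev_dump(port_id, flow, stdout, &error);  /* dump one flow */
  ret = rte_flow_dev_dump(port_id, NULL, stdout, &error);  /* dump all flows */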
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
These headers are used but not included explicitly; include them.
"arpa/inet.h" is included for 'htons' and friends.
"netinet/in.h" is included for 'IPPROTO_IP'.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Rasesh Mody <rmody@marvell.com>
Currently, meter algorithms only support bytes per second (BPS).
Profiles with packet_mode set to TRUE are checked and rejected.
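A sketch of the validation (assuming the rte_mtr meter profile layout
with the new packet_mode flag):
  if (profile->packet_mode)
          return -ENOTSUP;   /* only byte/second (BPS) metering is supported */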
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Liron Himi <lironh@marvell.com>
Currently, meter algorithms only support bytes per second (BPS).
Profiles with packet_mode set to TRUE are checked and rejected.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
The hardware outer/inner VLAN protocol types are now updated to map to
the new interface VLAN protocol types, so update the application to use
the new VLAN protocol types when the rte_flow rule is a QinQ filter.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add support for a switch filter: GTP-U using just inner fields.
If the user doesn't specify the outer protocol and its fields but wants
to add a switch filter for GTP-U using inner protocols and related
fields, such as inner L3 and/or inner L4, this patch enables such
filtering.
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Add some new PTYPE value macros to support PPPoL2TPv2oUDP.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The dummy packet should be QinQ PPPoE IPv6 when the PPP protocol is IPv6.
Fixes: bb3386f348dd ("net/ice: enable QinQ filter for switch")
Cc: stable@dpdk.org
Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The VLAN protocol type 'ICE_VLAN_OFOS' has been changed to map to the
hardware VLAN protocol ID 'ICE_VLAN_OF_HW (16)' when in Double VLAN
mode, and to 'ICE_VLAN_OL_HW (17)' when in Single VLAN mode.
So 'ICE_VLAN_OFOS' can't be used together with 'ICE_VLAN_EX', which is
the outer VLAN hardware protocol ID 'ICE_VLAN_OF_HW (16)', to build the
QinQ VLAN pattern.
Introduce the new inner VLAN protocol type 'ICE_VLAN_IN', which is the
inner VLAN hardware protocol ID 'ICE_VLAN_OL_HW (17)'.
Now, for the QinQ VLAN pattern, the protocols 'ICE_VLAN_EX' and
'ICE_VLAN_IN' should be used to set the related protocol header fields
such as the VLAN ID.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Add helper functions, based on AQ commands, to set the GPIO pin state or
get the value of a GPIO signal that is part of the topology.
This change is needed to set up the GPIO pin state for PTP, SyncE, etc.
Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Enable I2C read/write AQ commands. They are now required for
controlling the external physical connectors via the external I2C
port expander on E810-T adapters.
Signed-off-by: Maciej Machnikowski <maciej.machnikowski@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Check the priority when looking for a recipe that matches our request,
to enable flow priority for the switch filter.
Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The protocol ID for the first VLAN in Double VLAN Mode (DVM) should be
ICE_VLAN_OF_HW = 16, but for Single VLAN Mode (SVM) it should be
ICE_VLAN_OL_HW = 17.
Change the protocol ID for the outer VLAN in the type-to-ID translation
array to 17 when DVM is enabled, which means the driver, package,
and firmware support DVM.
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Add support for PPPoL2TPv2oUDP RSS hash. L2TPv2 and PPP ptypes
and flow headers are added. Protocol id for PPP is added.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Set E823C device's MAC type as generic.
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Remove the unused ptype entry, use the GCC extension for ranged
initializers in arrays on Linux, and explicitly target each table entry
by index when initializing under Linux.
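For illustration, the GCC ranged-initializer extension lets a span of
table entries share one value (indexes and values here are made up):
  static const uint32_t example_ptype_tbl[256] = {
          [0]           = RTE_PTYPE_UNKNOWN,
          [1]           = RTE_PTYPE_L2_ETHER,
          /* ... explicit per-index entries ... */
          [154 ... 255] = RTE_PTYPE_UNKNOWN,   /* one initializer for the rest */
  };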
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Add a function ice_flow_rem_vsi_prof() to remove flow entries associated
with the SW VSI handle. Once complete, clear the VSI index from the flow
profile bitmap. This ensures that a VSI, once removed, can be re-added
and the package block rules will be added again.
Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
1. Fix many function headers that do not match their function names.
2. Remove unnecessary header file includes.
3. Remove unnecessary macros.
4. Remove unnecessary comments.
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Performance improvement: use a write-combining store
instead of a regular MMIO write to update the queue tail
registers.
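In DPDK terms, this replaces the regular MMIO write with its
write-combining variant (a sketch; the register field name is
illustrative):
  /* before: rte_write32(rte_cpu_to_le_32(tail), txq->qtx_tail); */
  rte_write32_wc(rte_cpu_to_le_32(tail), txq->qtx_tail);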
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Restrict the "remove l3_l4_xsum_errors from rx_errors" to 82599 only for
hardware errata.
Fixes: 256ff05a9cae ("ixgbe: fix Rx errors statistics for UDP checksum")
Cc: stable@dpdk.org
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The mlx5 PMD allocated the resources of the sample actions and then
moved them to the destination actions array. The original indices were
not cleared, so the resources were referenced twice in the flow object:
as the fate actions and in the destination actions array.
This caused a failure on flow destroy because the PMD tried to release
the same objects twice.
The patch clears the original indices, adds the missing check for zero,
and eliminates the multiple object release.
Fixes: 00c10c22118a ("net/mlx5: update translate function for mirroring")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Reviewed-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
If the RSS action contains a non-zero hash key length and a NULL key
buffer pointer, the default hash key should be used. This NULL pointer
check was missing in the mlx4 PMD, causing a crash, for example, in
testpmd with the command:
flow validate 0 ingress group 0
pattern eth / ipv4 / end
actions rss queues 0 end key_len 40 / end
Fixes: ac8d22de2394 ("ethdev: flatten RSS configuration in flow API")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
During RSS expansion, if no expansion happened but completion did,
because the user only input the next protocol field instead of the item,
i.e. ether type == 0x8100 instead of a VLAN item, an extra flow is
created with the missing item in order to filter traffic strictly.
However, after [1] and [2] rte_flow_item_eth itself is enough to
filter out VLAN traffic, so the VLAN item is not needed.
[1]: commit 09315fc83861 ("ethdev: add VLAN attributes to ethernet and VLAN items")
[2]: commit 86b59a1af671 ("net/mlx5: support VLAN matching fields")
This redundant flow will cause failure in some scenarios on group 0
because the two flows are the same FTE.
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Extend the range of the immediate value used in the MODIFY_FIELD action
from 32 to 64 bits to conform to the rte_flow_action_modify_data spec.
Apply the appropriate big-endian conversion to the immediate value
according to the destination field bit width.
Fixes: 641dbe4fb053 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Converting modify_field action masks to big-endian format with
rte_cpu_to_be_32 is wrong for small (less than 4 bytes) fields. Use the
BE conversion appropriate for the field size, not rte_cpu_to_be_32 for
everything.
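Illustratively, the conversion should match the field width:
  uint16_t mask16 = rte_cpu_to_be_16(0x0fff);      /* e.g. a 12-bit VLAN ID field */
  uint32_t mask32 = rte_cpu_to_be_32(0xffffffff);  /* a full 32-bit field */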
Fixes: 144127ba5660 ("net/mlx5: adjust modify field action endianness")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Mellanox hardware can only modify a packet field in 32-bit chunks,
which means 4 such chunks are needed to modify an IPv6 address.
The modification order of these chunks starts from the most significant
bits of the IPv6 address. That leads to confusing results when trying
to modify either the source or destination address via the MODIFY_FIELD
action. Fix the order of the 32-bit chunks for IPv6 address modification
by starting from the least significant bits.
Fixes: 641dbe4fb053 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
On Windows, DevX returns the current link speed in bit/s. This rate was
converted to Mibit/s instead of the Mbit/s rate expected by DPDK,
resulting in wrong link speed reporting.
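The fix in numbers: a 25 Gb/s link reported as 25,000,000,000 bit/s
should be divided by 10^6 (giving 25,000 Mbit/s), not by 2^20 (which
gives roughly 23,842, a Mibit/s value).
  link_speed_mbps = bitrate_bps / (1000 * 1000);   /* Mbit/s, not bitrate_bps >> 20 */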
Fixes: 6fbd73709ee4 ("net/mlx5: support link update on Windows")
Cc: stable@dpdk.org
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The MODIFY_FIELD action requires extended metadata support in order to
manipulate the METADATA register as well as the MARK register.
Check whether it is supported and reject the MODIFY_FIELD action if it
is not, just as was done before for the MARK register modifications.
Fixes: 0588d64ffde3 ("net/mlx5: check extended metadata for mark modification")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
New FDIR parsing is added to handle fragmented IPv4/IPv6 packets.
Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>