Arek Kusztal
75fd4bbc94 crypto/qat: support SM3 hash algorithm
Added support for the ShangMi 3 (SM3) hash algorithm.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
2022-10-02 20:33:24 +02:00
Arek Kusztal
92522c84e4 crypto/qat: support SM4 encryption algorithm
Added support for the ShangMi 4 (SM4) encryption algorithm.
Supported modes: ECB, CBC, CTR.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
2022-10-02 20:33:24 +02:00
Srujana Challa
68d25915d2 security: remove user data get API
The API rte_security_get_userdata() was unused by most of the
drivers, and it only retrieved the userdata from an mbuf dynamic
field. Hence, the API was removed and the application can get the
userdata directly from the dynamic field. This helps remove extra
checks in the datapath.
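
For illustration only, a minimal sketch of how an application might read
such per-packet userdata directly from an mbuf dynamic field; the field
name and the uint64_t type below are assumptions, not part of this patch:

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static int userdata_dyn_offset = -1;

static int userdata_dyn_init(void)
{
	/* The field must already be registered, e.g. by the PMD/library. */
	userdata_dyn_offset =
		rte_mbuf_dynfield_lookup("rte_security_dynfield_metadata", NULL);
	return userdata_dyn_offset < 0 ? -1 : 0;
}

static inline uint64_t userdata_from_mbuf(const struct rte_mbuf *m)
{
	/* Direct read replaces the removed rte_security_get_userdata(). */
	return *RTE_MBUF_DYNFIELD(m, userdata_dyn_offset, uint64_t *);
}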

Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2022-10-02 20:33:24 +02:00
Abdullah Sevincer
9c9e72326b event/dlb2: handle enqueuing more than maximum depth
This patch addresses an issue where more than max_enq_depth events
could be enqueued, and the expected number of events (max_cq_depth)
could not be dequeued, in a single call of rte_event_enqueue_burst()
and rte_event_dequeue_burst().

Restrict enqueues so that a single rte_event_enqueue_burst() call
enqueues at most max_enq_depth events.

Also set the per-port and per-domain history list sizes based on
cq_depth. This results in dequeuing the correct number of events,
as set by max_cq_depth.
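
As an illustration (not part of the patch), an application can keep each
enqueue call within the configured depth; max_enq_depth here stands for
the port's configured enqueue_depth:

#include <rte_common.h>
#include <rte_eventdev.h>

static uint16_t
enqueue_capped(uint8_t dev_id, uint8_t port_id, struct rte_event *evs,
	       uint16_t nb_events, uint16_t max_enq_depth)
{
	uint16_t sent = 0;

	while (sent < nb_events) {
		uint16_t burst = RTE_MIN((uint16_t)(nb_events - sent),
					 max_enq_depth);
		uint16_t done = rte_event_enqueue_burst(dev_id, port_id,
							&evs[sent], burst);

		sent += done;
		if (done < burst)
			break;	/* back-pressure: retry or drop per app policy */
	}
	return sent;
}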

Fixes: f3cad285bb88 ("event/dlb2: add infos get and configure")
Cc: stable@dpdk.org

Signed-off-by: Abdullah Sevincer <abdullah.sevincer@intel.com>
2022-09-30 10:57:40 +02:00
Abdullah Sevincer
bdd0b609a1 event/dlb2: optimize credit allocations
This commit implements the changes required to use the suggested
port type hint feature. Each port uses a different credit quantum
based on the port type specified via the port configuration flags.

Each port type has a separate quantum defined in dlb2_priv.h.
Producer and consumer ports need a larger quantum value to reduce the
number of credit calls they make. Workers can use a small quantum as
they mostly work out of locally cached credits and don't
request/return credits often.
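
A minimal sketch of how an application passes the port type hints that
drive the per-type credit quanta (port IDs and the overall setup flow
are illustrative assumptions):

#include <rte_eventdev.h>

static int setup_hinted_ports(uint8_t dev_id)
{
	struct rte_event_port_conf conf;
	int ret;

	ret = rte_event_port_default_conf_get(dev_id, 0, &conf);
	if (ret < 0)
		return ret;

	/* Producer port: larger credit quantum, fewer credit calls. */
	conf.event_port_cfg |= RTE_EVENT_PORT_CFG_HINT_PRODUCER;
	ret = rte_event_port_setup(dev_id, 0, &conf);
	if (ret < 0)
		return ret;

	/* Worker port: small quantum, works from locally cached credits. */
	conf.event_port_cfg = RTE_EVENT_PORT_CFG_HINT_WORKER;
	return rte_event_port_setup(dev_id, 1, &conf);
}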

Signed-off-by: Abdullah Sevincer <abdullah.sevincer@intel.com>
2022-09-30 10:26:42 +02:00
Abdullah Sevincer
d8c16de5df event/dlb2: add fence bypass option for producer ports
If the producer thread is only acting as a bridge between the NIC and
DLB, performance can be greatly improved by bypassing the fence
instruction. The DLB enqueue API calls a memory fence once per enqueue
burst. If the producer thread is just reading from the NIC and sending
to DLB without updating the read buffers or buffer headers, or the
producer is not writing to data structures with dependencies on the
enqueue write order, then fencing can be safely disabled.

Signed-off-by: Abdullah Sevincer <abdullah.sevincer@intel.com>
2022-09-30 10:25:43 +02:00
Abdullah Sevincer
8d1d9070bb event/dlb2: optimize producer port probing
For best performance, applications running on certain cores should use
the DLB device locally available on the same tile along with other
resources. To allocate optimal resources, probing is done for each
producer port (PP) for a given CPU and the best performing ports are
allocated to producers. The CPU used for probing is either the first
core of the producer coremask (if present) or the second core of the
EAL coremask. This will be extended later to probe all CPUs in the
producer coremask or EAL coremask.

The producer coremask can be passed along with the BDF of the DLB device:
"-a xx:y.z,producer_coremask=<core_mask>"

Applications also need to pass RTE_EVENT_PORT_CFG_HINT_PRODUCER during
rte_event_port_setup() for producer ports for optimal port allocation.

For optimal load balancing, ports that map to one or more QIDs in
common should not be in numerical sequence. The port->QID mapping is
application dependent, but the driver interleaves port IDs as much as
possible to reduce the likelihood of sequential ports mapping to the
same QID(s).

Hence, DLB uses an initial allocation of port IDs that maximizes the
average distance between an ID and its immediate neighbors. The
initial port allocation option can be enabled through the devarg
"default_port_allocation=y" (or "Y").

When events are dropped by workers or consumers that use LDB ports,
completions are sent, which are just enqueues and may impact latency.
To address this, probing is done for LDB ports as well, on a per-'cos'
(class of service) basis. When the default cos is used, ports are
allocated from the best ports of the best cos; otherwise, from the
best ports of the specified cos.
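
As an illustration of the devargs mentioned above (the BDF, core masks
and EAL arguments are placeholders, not values from this patch):

#include <rte_common.h>
#include <rte_eal.h>

int main(void)
{
	char *eal_args[] = {
		"app",
		"-l", "0-3",
		"-a", "0000:6b:00.0,producer_coremask=0x2,default_port_allocation=y",
	};

	/* Ports intended as producers must additionally set
	 * RTE_EVENT_PORT_CFG_HINT_PRODUCER when calling rte_event_port_setup(). */
	return rte_eal_init(RTE_DIM(eal_args), eal_args) < 0 ? -1 : 0;
}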

Signed-off-by: Abdullah Sevincer <abdullah.sevincer@intel.com>
2022-09-30 10:24:36 +02:00
Pavan Nikhilesh
edbb4c09c5 event/cnxk: fix missing xstats operations
Fix missing xstats ops registration when initializing event device.

Fixes: b5a52c9d97e2 ("event/cnxk: add event port and queue xstats")
Cc: stable@dpdk.org

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
2022-09-30 09:15:48 +02:00
Ivan Malov
cd8e2d8292 net/sfc: clarify Rx buffer size calculation
The user has the right to supply pools with excessively large
buffers, regardless of the maximum supported Rx packet length
reported by the adapter. However, in this PMD, on EF10 boards,
an Rx descriptor has only 14 bits to specify the buffer length.

To avoid potential problems, take this limit into account when
deriving the Rx buffer size.
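
A sketch of the idea (names are illustrative, not the sfc PMD code):
cap the buffer size the PMD programs to what a 14-bit length field can
express:

#include <stdint.h>
#include <rte_common.h>

#define EF10_RX_DESC_LEN_MAX ((1u << 14) - 1)	/* 16383 bytes */

static inline uint16_t rx_buf_size_cap(uint32_t mbuf_data_room)
{
	/* Never program a length the 14-bit descriptor field cannot hold. */
	return (uint16_t)RTE_MIN(mbuf_data_room, EF10_RX_DESC_LEN_MAX);
}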

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-09-28 11:51:39 +02:00
Ivan Malov
61b3e9e79a common/sfc_efx/base: report maximum Rx data count
Such information is useful to client drivers which deal with
large Rx pool buffers (16-bit wide data count) and thus need
to avoid overflow when setting EF10's 14-bit wide data count.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-09-28 11:51:39 +02:00
Ivan Malov
79b0c19261 common/sfc_efx/base: fix maximum Tx data count
Maximum data count of a Tx descriptor is advertised to users,
however, this value is mistakenly derived from the Rx define.
Use the Tx one instead. The resulting value will be the same.

Fixes: 1e43fe3cb41e ("net/sfc/base: separate limitations on Tx DMA descriptors")
Cc: stable@dpdk.org

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-09-28 11:51:39 +02:00
Bhagyada Modali
91907ec247 net/axgbe: save segment data in scattered Rx
Save the current segments of the packet when the next segment's data
is not ready.

Fixes: 965b3127d425 ("net/axgbe: support scattered Rx")
Cc: stable@dpdk.org

Signed-off-by: Bhagyada Modali <bhagyada.modali@amd.com>
Acked-by: Chandubabu Namburu <chandu@amd.com>
2022-09-21 16:39:08 +02:00
Bhagyada Modali
30ff4d00d9 net/axgbe: clear buffer on scattered Rx chaining failure
Clear the mbuf and first_seg when chaining mbufs fails, and increment
the error count for the same.

Fixes: 965b3127d425 ("net/axgbe: support scattered Rx")
Cc: stable@dpdk.org

Signed-off-by: Bhagyada Modali <bhagyada.modali@amd.com>
Acked-by: Chandubabu Namburu <chandu@amd.com>
2022-09-21 16:39:08 +02:00
Bhagyada Modali
2a761aec22 net/axgbe: reset end of packet in scattered Rx
Reset EOP in the failure scenario and also after the last segment.
Remove the explicit packet length update, as it is done during chaining.

Fixes: 965b3127d425 ("net/axgbe: support scattered Rx")
Cc: stable@dpdk.org

Signed-off-by: Bhagyada Modali <bhagyada.modali@amd.com>
Acked-by: Chandubabu Namburu <chandu@amd.com>
2022-09-21 16:39:08 +02:00
Kiran Kumar K
e5d0e3c759 common/cnxk: update base rule merging mechanism
Update the base rule install mechanism:
if the action type is IPsec and multi-channel is set,
the base rule will not be merged.

Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Reviewed-by: Satheesh Paul <psatheesh@marvell.com>
2022-09-30 09:10:39 +02:00
Hanumanth Pothula
8a96c7f8a5 net/cnxk: fix DF bit in vector mode
In vector mode, the DF bit is not programmed correctly because the
return value of vsetq_lane_u64(), which contains the updated vector,
is ignored. This leads the HW to free mbufs even though the
NIX_TX_OFFLOAD_MBUF_NOFF_F flag is set.

Hence, save the return value of vsetq_lane_u64() appropriately so
that the DF bit is programmed correctly.
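
For illustration (lane index and variable names are assumptions, not
the cnxk code), the buggy and correct uses of vsetq_lane_u64() differ
only in keeping the returned vector:

#include <arm_neon.h>

static inline uint64x2_t set_df_word(uint64x2_t send_hdr, uint64_t word_w_df)
{
	/* Buggy: the result is discarded, send_hdr keeps the stale word. */
	/* vsetq_lane_u64(word_w_df, send_hdr, 1); */

	/* Correct: vsetq_lane_u64() returns a new vector, so store it. */
	send_hdr = vsetq_lane_u64(word_w_df, send_hdr, 1);
	return send_hdr;
}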

Fixes: 862e28128707 ("net/cnxk: add vector Tx for CN9K")
Fixes: f71b7dbbf04b ("net/cnxk: add vector Tx for CN10K")
Cc: stable@dpdk.org

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
2022-09-30 09:09:23 +02:00
Hanumanth Pothula
5a0f64d84b net/cnxk: fix configuring large Rx/Tx queues
While configuring the NIX, the local variables 'nb_rxq' and 'nb_txq'
are declared as 8-bit variables, leading to an integer overflow when
an application requests an Rxq/Txq count greater than 255.

Hence, declare the local variables 'nb_rxq' and 'nb_txq' as 16-bit
variables. Also, during cleanup, make sure the PFC tree is not created.
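
A self-contained illustration of the truncation (variable names mirror
the commit, the values are arbitrary):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t requested = 300;	/* queue count asked for by the app */
	uint8_t nb_rxq_8 = requested;	/* truncates to 300 % 256 = 44 */
	uint16_t nb_rxq_16 = requested;	/* keeps 300 */

	printf("8-bit: %u, 16-bit: %u\n", nb_rxq_8, nb_rxq_16);
	return 0;
}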

Fixes: b75e0aca84b0 ("net/cnxk: add device configuration operation")
Cc: stable@dpdk.org

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
2022-09-30 09:09:18 +02:00
Kevin Liu
ccf33dccf7 net/ice: check illegal packet sizes
If data_len in the mbuf is less than 17 bytes or greater than the
maximum frame size, the packet is illegal.

Such illegal packets lead to a Tx/Rx hang that cannot recover
automatically.

This patch checks for those illegal packets and protects Tx/Rx from
hanging.
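
A sketch of such a guard (the macro name and where it is called from
are assumptions, not the actual ice code):

#include <errno.h>
#include <rte_mbuf.h>

#define TX_MIN_PKT_LEN 17	/* minimum data_len per the commit message */

static inline int tx_pkt_len_check(const struct rte_mbuf *m,
				   uint32_t max_frame_size)
{
	/* Reject the packet instead of letting it hang the Tx/Rx queues. */
	if (m->data_len < TX_MIN_PKT_LEN || m->data_len > max_frame_size)
		return -EINVAL;
	return 0;
}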

Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-09-30 10:28:29 +02:00
Kevin Liu
19ee91c6bd net/iavf: check illegal packet sizes
If data_len in the mbuf is less than 17 bytes or greater than the
maximum frame size, the packet is illegal.

Such illegal packets lead to a Tx/Rx hang that cannot recover
automatically.

This patch checks for those illegal packets and protects Tx/Rx from
hanging.

Fixes: a2b29a7733ef ("net/avf: enable basic Rx Tx")
Cc: stable@dpdk.org

Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-09-30 10:26:32 +02:00
Shun Hao
9b7fcf395c net/mlx5: fix meter profile delete after disable
If a meter's profile is changed after the meter is disabled, deleting
the old profile afterwards fails.

This patch fixes this by correctly decreasing the old profile's
reference count when the profile is changed.

Fixes: 63ffeb2ff2 ("net/mlx5: support meter profile update")
Cc: stable@dpdk.org

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-10-02 09:13:55 +02:00
Shun Hao
530dc58073 net/mlx5: fix meter ID tag for meter hierarchy
Currently, when a flow uses a meter hierarchy, a tag action is always
applied to set the first meter's meter ID, so as to update the first
meter's drop count. The case where the first meter has no drop count
is not considered.

This patch fixes it: in a hierarchy, if the first meter doesn't have a
drop count, there is no need to add the meter ID tag action. There is
no change for non-hierarchy meters.

Fixes: e8146c63 ("net/mlx5: support represented port item in flow rules")
Cc: stable@dpdk.org

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-10-02 09:13:54 +02:00
Shun Hao
ca7e6051e7 net/mlx5: limit meter flow when matching all ports
If there is no parameter in the represented_port item, it is treated
as matching all ports by default. But there is a limitation when using
it with a meter hierarchy.

This patch adds the limitation that, when matching all ports, the
meter hierarchy must not contain any meter that has a drop count.

Fixes: e8146c63 ("net/mlx5: support represented port item in flow rules")
Cc: stable@dpdk.org

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-10-02 09:13:54 +02:00
Shun Hao
33d506b9e5 net/mlx5: fix meter hierarchy with represented port item
There is a new item type, represented_port, and currently a flow using
a represented_port match fails when it also uses a meter hierarchy.

This patch fixes the failure by adding support for the represented_port
item in the meter hierarchy flow split.

Fixes: e8146c63 ("net/mlx5: support represented port item in flow rules")
Cc: stable@dpdk.org

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-10-02 09:13:54 +02:00
Jiawei Wang
9f71a297da net/mlx5: fix modify action with tunnel decapsulation
The driver splits a flow with a sample action into two sub-flows,
a prefix sub-flow and a suffix sub-flow.

In the case of a tunnel flow including a decap action, the driver
should translate the inner as outer for actions coming after the decap
action. In the case of flow splitting, the packet layers, used to
detect the attributes, are inherited from the prefix flow to the
suffix flow, but the driver wrongly did not handle the decap
adjustment and the inner layers were not shifted to the outer.

This patch adjusts the inherited layers in case of decap.

Fixes: 6e77151286b2 ("net/mlx5: fix match information in meter")
Cc: stable@dpdk.org

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-10-02 09:13:53 +02:00
Raja Zidane
130bb7da53 net/mlx5: fix Tx check for hardware descriptor length
If the hardware descriptor (WQE) length exceeds the one the HW can
handle, a Tx queue failure occurs. The PMD does the length check, but
there was a bug: the length limit was expressed in 16B units (WQEBB
segments), while the calculated WQE length and limit were in 64B units
(WQEBBs). Fix the condition to avoid subsequent Tx queue failure.

Fixes: 18a1c20 ("net/mlx5: implement Tx burst template")
Cc: stable@dpdk.org

Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-02 09:13:53 +02:00
Viacheslav Ovsiienko
d15bfd2930 net/mlx5: fix inline length exceeding descriptor limit
The hardware descriptor (WQE) length field is 6 bits wide, so there
is a native limitation on the overall descriptor length. To improve
PCIe bandwidth, packet data can be inlined into the descriptor. If the
PMD was configured to inline a large amount of data, it could happen
that not enough space remained in the descriptor to specify all the
packet data segments, and the PMD rejected the problematic packets.

The patch adjusts the inline data length conservatively, which allows
this error to be avoided.

Fixes: 18a1c20044c0 ("net/mlx5: implement Tx burst template")
Fixes: e2259f93ef45 ("net/mlx5: fix Tx when inlining is impossible")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
2022-10-02 09:13:52 +02:00
Viacheslav Ovsiienko
166f185fef net/mlx5: fix single not inline packet storing
The mlx5 PMD can inline packet data into the transmit descriptor (WQE)
and free the mbuf immediately, as the data is no longer needed. For
non-inline packets, the mbuf pointer should be stored in the elts
array to be freed later on send completion. There was an optimization
for storing pointers in batches, and storing the mbuf for a single
packet was missed if non-inline was explicitly requested by the flag.

Fixes: cacb44a09962 ("net/mlx5: add no-inline Tx flag")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-02 09:13:52 +02:00
Viacheslav Ovsiienko
37d6fc30c1 net/mlx5: fix check for orphan wait descriptor
The mlx5 PMD supports the send scheduling feature, which allows
sending packets at a specified moment of time. To do that, the PMD
pushes a special wait descriptor (WQE) to the hardware queue and then
pushes the descriptor for the packet data as usual. If the queue is
close to full, or there are not enough elts buffers to store the mbufs
being sent, the data descriptors might not be pushed, and the orphan
wait WQE (not followed by the data) might remain in the queue when the
tx_burst routine exits.

To avoid orphan wait WQEs, there was a check for enough free space in
the queue WQE buffer and a sufficient number of free elts in the queue
mbuf storage. This check was incomplete and did not cover all the
cases for Enhanced Multi-Packet Write descriptors.

Fixes: 2f827f5ea6e1 ("net/mlx5: support scheduling on send routine template")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-02 09:13:52 +02:00
Bassam Zaid AlKilani
6fa815722e net/mlx5: fix flow matching priority for ESP item
ESP is one of the IPsec protocols, over both IPv4 and IPv6, and is
considered a tunnel layer that cannot be followed by any other layer.
Taking that into consideration, ESP is treated as a layer 4 item.

Not defining ESP's priority makes it match with the same priority as
its preceding IP layer, which has layer 3 priority. This leads to
issues in matching: the packet matches the first matching rule even if
that rule doesn't have an ESP item in its pattern, disregarding any
following rules that do have an ESP item and are actually a more
accurate match, since they have longer matching criteria.

This is fixed by defining the priority of the ESP item as a layer 4
priority, making the match go to the rule with the more accurate and
longer matching criteria.

Fixes: 18ca4a4ec73a ("net/mlx5: support ESP SPI match and RSS hash")
Cc: stable@dpdk.org

Signed-off-by: Bassam Zaid AlKilani <bzalkilani@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
2022-10-02 09:13:51 +02:00
Michael Baum
593f913a8e net/mlx5: fix LRO requirements check
One of the conditions to allow LRO offload is the DV configuration.

The function incorrectly checks the DV configuration before it is
initialized from the user devargs; hence, LRO cannot be allowed.

This patch moves the check to mlx5_shared_dev_ctx_args_config, where
the DV configuration is initialized.

Fixes: c4b862013598 ("net/mlx5: refactor to detect operation by DevX")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reported-by: Gal Shalom <galshalom@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-10-02 09:13:51 +02:00
Gregory Etelson
2d8dde8d63 common/mlx5: lower some DevX log level
Currently, the PMD logs all DevX errors at the error level.

The DevX interface can fail queue counter allocation on some hardware
types; that is a known issue. The PMD falls back to the Verbs API to
allocate queue counters when it detects the fault, so that DevX
failure should not be logged as a PMD error.

The patch provides DevX with a flexible API that selects the log level.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2022-10-02 09:13:50 +02:00
Long Li
bc5d8fdb70 net/mlx5: fix Verbs FD leak in secondary process
FDs passed from rte_mp_msg are duplicated to the secondary process and
need to be closed.

Fixes: 9a8ab29b84 ("net/mlx5: replace IPC socket with EAL API")
Cc: stable@dpdk.org

Signed-off-by: Long Li <longli@microsoft.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-02 09:13:50 +02:00
Long Li
ff9c3548c0 net/mlx4: fix Verbs FD leak in secondary process
FDs passed from rte_mp_msg are duplicated to the secondary process and
need to be closed.

Fixes: 0203d33a10 ("net/mlx4: support secondary process")
Cc: stable@dpdk.org

Signed-off-by: Long Li <longli@microsoft.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-02 09:13:48 +02:00
Alexander Chernavin
52bd03e969 net/virtio: fix crash when configured twice
When the first attempt to configure a device with Rx interrupts
enabled fails for some reason (e.g. because "Multiple intr vector not
supported"), a second attempt to configure the device with Rx
interrupts disabled and the feature set unchanged will succeed but
will leave the virtio queues unallocated. Accessing the queues will
cause a segfault.

First attempt:
  - virtio_dev_configure()
    - virtio_init_device() is called to reinit the device because
      "dev->data->dev_conf.intr_conf.rxq" is "1"
      - virtio_configure_intr() fails and returns an error
      - virtio_free_queues() frees previously allocated virtio queues
    - virtio_init_device() fails and returns an error
  - virtio_dev_configure() fails and returns an error

Second attempt:
  - virtio_dev_configure()
    - This time virtio_init_device() is not called, virtio queues
      are not allocated

With this fix, the device is reinitialized during configuration if the
virtio queues are not allocated.

Fixes: 2b38151f745a ("net/virtio: fix queue memory leak on error")
Cc: stable@dpdk.org

Signed-off-by: Alexander Chernavin <achernavin@netgate.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
2022-09-29 10:13:22 +02:00
Zhichao Zeng
f7c8c36fde net/iavf: enable inner and outer Tx checksum offload
This patch enables inner and outer Tx checksum offload on the scalar
path for tunnel packets by configuring ol_flags.

Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Tested-by: Ke Xu <ke1.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-09-25 16:07:02 +02:00
Zhichao Zeng
3b8c645afa net/iavf: fix outer checksum flags
When receiving tunneled packets, the testpmd output log always shows
the 'ol_flags' value as 'RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN', while
the expected value is 'RX_OUTER_L4_CKSUM_GOOD' or
'RX_OUTER_L4_CKSUM_BAD'.

Add 'RX_OUTER_L4_CKSUM_GOOD' and 'RX_OUTER_L4_CKSUM_BAD' to 'flags'
for the normal path, to 'l3_l4_flags_shuf' for the AVX2 and AVX512
vector paths, and to 'cksum_flags' for the SSE vector path, so that
'ol_flags' matches the correct flags.

Fixes: b8b4c54ef9b0 ("net/iavf: support flexible Rx descriptor in normal path")
Fixes: 1162f5a0ef31 ("net/iavf: support flexible Rx descriptor in SSE path")
Fixes: 5b6e8859081d ("net/iavf: support flexible Rx descriptor in AVX path")
Fixes: 9c9aa0040344 ("net/iavf: add offload path for Rx AVX512 flex descriptor")
Cc: stable@dpdk.org

Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Tested-by: Ke Xu <ke1.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-09-25 16:00:43 +02:00
Zhichao Zeng
1aaacea174 net/iavf: fix processing VLAN TCI in SSE path
The SSE Rx path does not process the VLAN TCI correctly when it is
stored in L2TAG2, so the VLAN TCI could not be extracted from the
descriptor and thus was not put into the mbuf either.

Add processing for the case when the VLAN TCI is stored in L2TAG2.

Fixes: 1162f5a0ef31 ("net/iavf: support flexible Rx descriptor in SSE path")
Cc: stable@dpdk.org

Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-09-23 10:47:08 +02:00
Simei Su
e35f6b1cd1 net/ice: fix PTP init
Because of a base code update, the "ice_ptp_init_phc" API for
E810/E822 depends on the PHY configuration, not on whether the device
is E810-based. So, before this API is called, assign the specific
value to the PHY configuration.

Fixes: 646dcbe6c701 ("net/ice: support IEEE 1588 PTP")
Cc: stable@dpdk.org

Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-09-20 10:57:35 +02:00
Markus Theil
f082b51958 net/ice: support LED control
Added LED control support.

Signed-off-by: Markus Theil <markus.theil@secunet.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-09-20 10:44:57 +02:00
Qi Zhang
61da836257 net/ice/base: update copyright
Updated copyright to 2022 and updated the base code version.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2022-09-18 16:12:32 +02:00
Qi Zhang
47bf7453a6 net/ice/base: clean up
1. remove unused code
2. reduce variable scope
3. fix comment

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2022-09-18 16:12:32 +02:00
Qi Zhang
d56833508e net/ice/base: expose sched element move function
Exposed ice_aq_move_sched_elems to support moving scheduler elements
via the AQ command.

Signed-off-by: Ben Shelton <benjamin.h.shelton@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2022-09-18 16:12:32 +02:00
Qi Zhang
41d843bd60 net/ice/base: check for PTP HW lock more frequently
The PTP HW semaphore can be held for ~50 ms in the worst case.
SW should wait longer and check more frequently whether the HW lock
is held.

Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2022-09-18 16:12:32 +02:00
Qi Zhang
dbbf83ce0e net/ice/base: add GTP tunnel
Added the GTP tunnel type and also re-ordered the code to align with
the kernel driver.

Signed-off-by: Marcin Szycik <marcin.szycik@intel.com>
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2022-09-18 16:12:32 +02:00
Qi Zhang
a7b867e989 net/ice/base: remove unnecessary fields
Remove unnecessary fields from the data structures for the 1588 and
QoS function capabilities.

Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2022-09-18 16:12:32 +02:00
Qi Zhang
20c549a89d net/ice/base: convert 1588 structs to use bit fields
Use bitfields in 1588 structs so they don't waste too much space.

Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2022-09-18 16:12:32 +02:00
Qi Zhang
7ab3a759d4 net/ice/base: handle default VSI lookup type
ICE_SW_LKUP_DFLT is handled in ice_update_vsi_list_rule and
ice_aq_alloc_free_vsi_list.

Signed-off-by: Lukasz Kupczak <lukasz.kupczak@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2022-09-18 16:12:32 +02:00
Qi Zhang
ed86e63543 net/ice/base: add function to parse DCBX config
The LLDP MIB Change Event (opcode 0x0A01) already contains the MIB
that has been changed. Add the ice_dcb_process_lldp_set_mib_change()
function, which sets the local/remote DCBX config from the LLDP MIB
Change Event's buffer.

This function will be used in a base driver handler for the LLDP MIB
Change Event.

Signed-off-by: Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2022-09-18 16:12:32 +02:00
Qi Zhang
97e32e8d48 net/ice/base: complete pending LLDP MIB
Completed the structure ice_aqc_lldp_get_mib by adding the
'Pending Event Enable' bit.

Signed-off-by: Tsotne Chakhvadze <tsotne.chakhvadze@intel.com>
Signed-off-by: Karen Sornek <karen.sornek@intel.com>
Signed-off-by: Anatolii Gerasymenko <anatolii.gerasymenko@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2022-09-18 16:12:32 +02:00
Qi Zhang
04ab2b2f0b net/ice/base: fix comment of overloaded GCO bit
The bit that is overloaded is bit 11 in the flex descriptor; update
the comment to reflect the right one.

Signed-off-by: Alice Michael <alice.michael@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2022-09-18 16:12:32 +02:00