Commit Graph

34225 Commits

Gowrishankar Muthukrishnan
410d016961 crypto/cnxk: support exponent type private key
This patch adds support for RTE_RSA_KEY_TYPE_EXP in the cnxk crypto
driver.

Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
2022-10-29 13:01:38 +02:00
Gowrishankar Muthukrishnan
8ee030b40d examples/fips_validation: randomize message for conformance test
FIPS conformance tests require the message to be randomized as specified in SP 800-106.

Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
2022-10-29 13:01:38 +02:00
Gowrishankar Muthukrishnan
ae5ae3bf59 examples/fips_validation: encode digest with hash OID
FIPS RSA validation requires the hash digest to be encoded with the
ASN.1 DigestInfo value.

Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
2022-10-29 13:01:38 +02:00
Gowrishankar Muthukrishnan
36128a67c2 examples/fips_validation: add asymmetric validation
Add support for asymmetric crypto validation, starting with RSA.
The OpenSSL library is used only to generate the crypto values that
require multiprecision arithmetic.

Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Acked-by: Brian Dooley <brian.dooley@intel.com>
2022-10-29 13:01:38 +02:00
Ciara Power
63e1fbc343 test/crypto: add remaining blockcipher SGL cases
The current blockcipher test function only supports two SGL test types,
INPLACE and OOP_SGL_IN_LB_OUT. These types are hardcoded into the
function, with the number of segments always set to 3.

To ensure all SGL types are tested, blockcipher test vectors now have
fields to specify the SGL type and the number of segments, as sketched
below. If these fields are missing, the previous defaults are used:
INPLACE or OOP_SGL_IN_LB_OUT, with 3 segments.
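
A minimal sketch of what such per-vector fields could look like; the struct
and field names below are illustrative assumptions, not the exact test-suite
definitions.

#include <stdint.h>

/* Illustrative only: hypothetical per-vector fields for selecting the SGL
 * layout; the real blockcipher test structures may differ. */
enum example_sgl_type {
	EXAMPLE_SGL_INPLACE,
	EXAMPLE_SGL_OOP_SGL_IN_LB_OUT,
	EXAMPLE_SGL_OOP_LB_IN_SGL_OUT,
	EXAMPLE_SGL_OOP_SGL_IN_SGL_OUT,
};

struct example_blockcipher_sgl_conf {
	enum example_sgl_type sgl_type; /* which SGL layout to exercise */
	uint8_t sgl_segs;               /* number of segments; 0 keeps the default of 3 */
};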

Some AES and Hash vectors are modified to use these new fields, and new
AES tests are added to test the SGL types that were not previously
being tested.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2022-10-29 13:01:38 +02:00
Ciara Power
8e71da225e test/crypto: add OOP SNOW3G SGL cases
More tests are added to cover OOP SGL variations for SNOW3G,
including LB_IN_SGL_OUT and SGL_IN_LB_OUT.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2022-10-29 13:01:38 +02:00
Ciara Power
f9dfb59edb crypto/ipsec_mb: support remaining SGL
The intel-ipsec-mb library supports SGL for the GCM and ChaChaPoly
algorithms using the JOB API.
This support was previously added to the AESNI_MB PMD, but the SGL
feature flags could not be set because the other algorithms lacked
SGL support.

This patch adds a workaround SGL approach for the other algorithms
using the JOB API. The segmented input buffers are copied into a
linear buffer, which is passed as a single job to intel-ipsec-mb.
Once the job is processed, the linear buffer is split back into the
original destination segments.
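
A minimal sketch of the linearize-then-scatter idea described above, using
the public mbuf API; the helper names are illustrative and the PMD's
internal implementation may differ.

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_memcpy.h>

/* Gather a segmented mbuf range into one linear buffer (illustrative). */
static inline void
example_linearize(const struct rte_mbuf *m, uint32_t off, uint32_t len,
		  uint8_t *dst)
{
	/* rte_pktmbuf_read() copies into dst only when the range spans
	 * segments; otherwise it returns a pointer into the mbuf data. */
	const void *p = rte_pktmbuf_read(m, off, len, dst);

	if (p != NULL && p != dst)
		rte_memcpy(dst, p, len);
}

/* Scatter the processed linear buffer back into the original segments. */
static inline void
example_scatter(struct rte_mbuf *m, uint32_t off, uint32_t len,
		const uint8_t *src)
{
	while (m != NULL && len != 0) {
		uint32_t seg = rte_pktmbuf_data_len(m);

		if (off >= seg) {
			off -= seg;
		} else {
			uint32_t n = RTE_MIN(seg - off, len);

			rte_memcpy(rte_pktmbuf_mtod_offset(m, uint8_t *, off),
				   src, n);
			src += n;
			len -= n;
			off = 0;
		}
		m = m->next;
	}
}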

Existing AESNI_MB testcases are passing with these feature flags added.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2022-10-29 13:01:38 +02:00
Ciara Power
dc3f6c5347 test/crypto: fix wireless auth digest segment
The segment size for some tests was too small to hold the auth digest.
This caused issues when using op->sym->auth.digest.data for comparisons
in the AESNI_MB PMD after a subsequent patch enables SGL.

For example, if the segment size is 2 and the digest size is 4, then 4
bytes are read from op->sym->auth.digest.data, overflowing into the
memory after the segment rather than using the second segment that
contains the remaining half of the digest.
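
For comparison, a hedged sketch of gathering a digest that may span mbuf
segments through the public API rather than dereferencing the digest pointer
directly; the offset handling is an illustrative assumption.

#include <stdint.h>

#include <rte_crypto.h>
#include <rte_mbuf.h>

/* Illustrative: read digest_len bytes at digest_off from the operation's
 * destination (or source) mbuf, even across a segment boundary. */
static inline const uint8_t *
example_read_digest(const struct rte_crypto_op *op, uint32_t digest_off,
		    uint32_t digest_len, uint8_t *scratch)
{
	struct rte_mbuf *m = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src;

	return rte_pktmbuf_read(m, digest_off, digest_len, scratch);
}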

Fixes: 11c5485bb2 ("test/crypto: add scatter-gather tests for IP and OOP")
Cc: stable@dpdk.org

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2022-10-29 13:01:38 +02:00
Srujana Challa
068d2647da common/cnxk: add CPT LF reset sequence
Adds code to reset the CPT LF as part of cpt_lf_fini().

Signed-off-by: Srujana Challa <schalla@marvell.com>
2022-10-29 13:01:38 +02:00
Brian Dooley
c8956fd284 examples/fips_validation: add parsing for AES-CTR
Added functionality to parse the algorithm for the AES-CTR test.

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Acked-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
2022-10-29 13:01:37 +02:00
Brian Dooley
e27268bd21 examples/fips_validation: add parsing for AES-GMAC
Added functionality to parse the algorithm for the AES-GMAC test.

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Acked-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
2022-10-29 13:01:37 +02:00
Brian Dooley
f7f2e8a9dd crypto/qat: reallocate on OpenSSL version check
This patch moves the OpenSSL version check from
qat_session_configure() to the more appropriate
qat_security_session_create() routine.

Fixes: 3227bc7138 ("crypto/qat: use intel-ipsec-mb for partial hash and AES")
Cc: stable@dpdk.org

Signed-off-by: Brian Dooley <brian.dooley@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
2022-10-29 13:01:37 +02:00
Ali Alnubani
26fbb735e3 examples/l2fwd-crypto: fix typo in error message
Fixes spelling in one of the app's exit messages.

Fixes: 387259bd6c ("examples/l2fwd-crypto: add sample application")
Cc: stable@dpdk.org

Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
2022-10-29 13:01:37 +02:00
Nicolas Chautru
3b5b854b7d baseband/turbo_sw: remove Flexran SDK build option
The dependency required to build the PMD against the SDK libraries is
now enabled through pkg-config.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
2022-10-29 13:01:37 +02:00
Volodymyr Fialko
253265f8fb examples/ipsec-secgw: reduce queues for event lookaside
Limit the number of queue pairs to one for event lookaside mode,
since all cores use the same queue in this mode.

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2022-10-29 13:01:37 +02:00
Volodymyr Fialko
1d5078c6cf examples/ipsec-secgw: support event vector in lookaside mode
Added vector support for the event crypto adapter in lookaside mode.
Once --event-vector is enabled, the event crypto adapter groups
processed crypto operations into an rte_event_vector event of type
RTE_EVENT_TYPE_CRYPTODEV_VECTOR, as sketched below.
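
A minimal sketch of how a worker might consume such a vector event; the
device/port IDs and the free-back-to-mempool step are illustrative
assumptions, not the application's exact code.

#include <rte_crypto.h>
#include <rte_eventdev.h>
#include <rte_mempool.h>

/* Illustrative: drain one vector event of completed crypto operations. */
static void
example_handle_crypto_vector(uint8_t evdev_id, uint8_t port_id)
{
	struct rte_event ev;
	uint16_t i;

	if (rte_event_dequeue_burst(evdev_id, port_id, &ev, 1, 0) == 0)
		return;

	if (ev.event_type != RTE_EVENT_TYPE_CRYPTODEV_VECTOR)
		return;

	for (i = 0; i < ev.vec->nb_elem; i++) {
		struct rte_crypto_op *op = ev.vec->ptrs[i];

		/* post-process the completed crypto operation here */
		(void)op;
	}

	/* Return the vector to its pool (assumed allocation scheme). */
	rte_mempool_put(rte_mempool_from_obj(ev.vec), ev.vec);
}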

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2022-10-29 13:01:37 +02:00
Volodymyr Fialko
f44481ef43 examples/ipsec-secgw: add event mode statistics
Added per-core statistics (Rx/Tx) counters for the event mode workers.

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2022-10-29 13:01:37 +02:00
Volodymyr Fialko
6938fc92c4 examples/ipsec-secgw: add lookaside event mode
Added base support for lookaside event mode.
Events coming from ethdev are enqueued to the event crypto adapter,
processed, and then enqueued back to ethdev for transmission.

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2022-10-29 13:01:37 +02:00
Volodymyr Fialko
c12871e437 examples/ipsec-secgw: add queue for event crypto adapter
Add a separate event queue for event crypto adapter processing,
to avoid queue contention between new and already-processed events.

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2022-10-29 13:01:37 +02:00
Volodymyr Fialko
0dbe550a4a examples/ipsec-secgw: initialize event crypto adapter
Added support to create, configure, and start an event crypto adapter.
This adapter is used in lookaside event mode processing; a setup
sketch follows.
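
A minimal sketch of the create/configure/start sequence, assuming OP_NEW
mode and the event device's default port configuration; the NULL queue-pair
binding argument is an assumption that depends on the adapter capabilities,
so the application's actual setup may differ.

#include <rte_event_crypto_adapter.h>
#include <rte_eventdev.h>

/* Illustrative: bring up one event crypto adapter (IDs are assumptions). */
static int
example_setup_crypto_adapter(uint8_t adapter_id, uint8_t evdev_id,
			     uint8_t cdev_id)
{
	struct rte_event_port_conf port_conf;
	int ret;

	/* Reuse the event device's default port configuration. */
	ret = rte_event_port_default_conf_get(evdev_id, 0, &port_conf);
	if (ret < 0)
		return ret;

	ret = rte_event_crypto_adapter_create(adapter_id, evdev_id, &port_conf,
					      RTE_EVENT_CRYPTO_ADAPTER_OP_NEW);
	if (ret < 0)
		return ret;

	/* Bind crypto queue pair 0; NULL keeps the default event binding. */
	ret = rte_event_crypto_adapter_queue_pair_add(adapter_id, cdev_id, 0,
						      NULL);
	if (ret < 0)
		return ret;

	return rte_event_crypto_adapter_start(adapter_id);
}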

Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
2022-10-29 13:01:37 +02:00
Thomas Monjalon
487599f121 common/mlx5: move build config initialization and check
The variable mlx5_config may be used by other mlx5 drivers
and should always be initialized.
By moving its initialization (together with the configuration file
generation), it is made consistent for Linux and Windows builds.

The check of mlx5_config in net/mlx5 is also moved to the top of
net/mlx5/hws/meson.build so that the HWS requirements are checked in
the right context.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Tested-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Alex Vesker <valex@nvidia.com>
2022-10-30 15:55:46 +01:00
Thomas Monjalon
3df380f617 common/mlx5: fix disabling build
If the dependency common/mlx5 is explicitly disabled,
but net/mlx5 is not explicitly disabled,
Meson will read the full recipe of net/mlx5
and will fail when accessing a variable from common/mlx5:
drivers/net/mlx5/meson.build:76:4: ERROR: Unknown variable "mlx5_config".

The solution is to stop parsing net/mlx5 if common/mlx5 is disabled.
The deps array must be defined before stopping, in order to automatically
disable the build of net/mlx5 and print the reason.

The same protection is applied to the other mlx5 drivers,
so that the variable mlx5_config can be used there in the future.

Fixes: 22681deead ("net/mlx5/hws: enable hardware steering")

Reported-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Tested-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Alex Vesker <valex@nvidia.com>
2022-10-30 15:55:10 +01:00
Tal Shnaiderman
5976328d91 net/mlx5: fix thread termination check on Windows
The mlx5_is_thread_alive function always returns false
(terminated) regardless of the actual thread state.

Fixed to return the correct thread state.

Bugzilla ID: 1089
Fixes: 5d55a494f4 ("net/mlx5: split multi-thread flow handling per OS")
Cc: stable@dpdk.org

Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2022-10-30 08:11:21 +01:00
Morten Brørup
b77f58604a mempool: align cache objects on cache lines
Add __rte_cache_aligned to the objs array.

It makes no difference in the general case, but if get/put operations are
always 32 objects, it will reduce the number of memory (or last level
cache) accesses from five to four 64 B cache lines for every get/put
operation.

For readability reasons, an example using 16 objects follows:

Currently, with 16 objects (128B), we access 3 cache lines:

      ┌────────┐
      │len     │
cache │********│---
line0 │********│ ^
      │********│ |
      ├────────┤ | 16 objects
      │********│ | 128B
cache │********│ |
line1 │********│ |
      │********│ |
      ├────────┤ |
      │********│_v_
cache │        │
line2 │        │
      │        │
      └────────┘

With the alignment, it is also 3 cache lines:

      ┌────────┐
      │len     │
cache │        │
line0 │        │
      │        │
      ├────────┤---
      │********│ ^
cache │********│ |
line1 │********│ |
      │********│ |
      ├────────┤ | 16 objects
      │********│ | 128B
cache │********│ |
line2 │********│ |
      │********│ v
      └────────┘---

However, accessing the objects at the bottom of the mempool cache is a
special case, where cache line0 is also used for objects.

Consider the next burst (and any following bursts):

Current:
      ┌────────┐
      │len     │
cache │        │
line0 │        │
      │        │
      ├────────┤
      │        │
cache │        │
line1 │        │
      │        │
      ├────────┤
      │        │
cache │********│---
line2 │********│ ^
      │********│ |
      ├────────┤ | 16 objects
      │********│ | 128B
cache │********│ |
line3 │********│ |
      │********│ |
      ├────────┤ |
      │********│_v_
cache │        │
line4 │        │
      │        │
      └────────┘
4 cache lines touched, incl. line0 for len.

With the proposed alignment:
      ┌────────┐
      │len     │
cache │        │
line0 │        │
      │        │
      ├────────┤
      │        │
cache │        │
line1 │        │
      │        │
      ├────────┤
      │        │
cache │        │
line2 │        │
      │        │
      ├────────┤
      │********│---
cache │********│ ^
line3 │********│ |
      │********│ | 16 objects
      ├────────┤ | 128B
      │********│ |
cache │********│ |
line4 │********│ |
      │********│_v_
      └────────┘
Only 3 cache lines touched, incl. line0 for len.

Credits go to Olivier Matz for the nice ASCII graphics.
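
A minimal sketch of the alignment change, using a hypothetical struct rather
than the exact upstream rte_mempool_cache definition; the array capacity is
illustrative.

#include <stdint.h>

#include <rte_common.h>

#define EXAMPLE_CACHE_MAX_OBJS 512 /* illustrative capacity, not the upstream constant */

/* Hypothetical mempool-cache layout: aligning objs[] means a burst of cached
 * objects starts on a cache-line boundary instead of sharing line0 with len. */
struct example_mempool_cache {
	uint32_t size;        /* configured cache size */
	uint32_t flushthresh; /* flush threshold */
	uint32_t len;         /* current number of cached objects */
	void *objs[EXAMPLE_CACHE_MAX_OBJS] __rte_cache_aligned;
} __rte_cache_aligned;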

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2022-10-30 10:07:58 +01:00
Stephen Hemminger
7e017238f4 maintainers: update for Microsoft vmbus and netvsc
These are my last couple of days at Microsoft.
Remove the old email from MAINTAINERS.
I will no longer have free access to Azure to work on netvsc.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2022-10-30 09:47:17 +01:00
Michael Baum
86fe1b01fa ethdev: add structure for indirect flow age update
Add a new structure for indirect AGE update.

This new structure enables:
1. Update timeout value.
2. Stop AGE checking.
3. Start AGE checking.
4. restart AGE checking.
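
A hedged usage sketch with rte_flow_action_handle_update(); the update
structure's name and fields (rte_flow_update_age with timeout, timeout_valid
and touch) are assumed from this patch series and may not match the final
API exactly.

#include <rte_flow.h>

/* Illustrative: refresh the timeout of an indirect AGE action and restart
 * its AGE checking. */
static int
example_update_indirect_age(uint16_t port_id,
			    struct rte_flow_action_handle *handle,
			    uint32_t new_timeout)
{
	struct rte_flow_error error;
	struct rte_flow_update_age conf = {
		.timeout = new_timeout,  /* new timeout (units as for the AGE action) */
		.timeout_valid = 1,      /* apply the new timeout */
		.touch = 1,              /* restart AGE checking for this action */
	};

	return rte_flow_action_handle_update(port_id, handle, &conf, &error);
}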

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-10-28 12:41:03 +02:00
Michael Baum
966eb55e9a ethdev: add queue-based API to report aged flow rules
When an application uses queue-based flow rule management and operates
on the same flow rule through the same queue, e.g. create/destroy/query,
the API for querying aged flow rules should also take a queue ID
parameter, just like the other queue-based flow APIs.

This way, the PMD can work in a more optimized manner, since resources
are isolated per queue and need no synchronization.

If the application does use queue-based flow management but configures
the port without RTE_FLOW_PORT_FLAG_STRICT_QUEUE, meaning it may operate
on a given flow rule from different queues, the queue ID parameter will
be ignored.
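
A hedged sketch of polling aged rules per queue; the function name
rte_flow_get_q_aged_flows() and its parameter order are assumed from this
patch series.

#include <rte_flow.h>

/* Illustrative: collect up to cap aged-rule contexts from one flow queue. */
static int
example_poll_aged_rules(uint16_t port_id, uint32_t queue_id,
			void **contexts, uint32_t cap)
{
	struct rte_flow_error error;

	/* Returns the number of contexts written, or a negative value on error. */
	return rte_flow_get_q_aged_flows(port_id, queue_id, contexts, cap,
					 &error);
}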

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-10-28 12:41:03 +02:00
Michael Baum
dcc9a80c20 ethdev: add strict queue to pre-configuration flow hints
The data-path-focused flow rule management can manage flow rules in a
more optimized way than the traditional one by using hints provided by
the application in the initialization phase.

In addition to the current hints in the port attributes, the application
can provide more hints about its behaviour.

One example is how the application handles the same flow rule:
A. Create/destroy the flow on the same queue but query the flow on a
   different queue or in a queue-less way (i.e. counter query).
B. Perform all flow operations on exactly the same queue, which lets the
   PMD be more optimized than in case A because resources can be
   isolated and accessed per queue, without locking, for example.

This patch adds a flag for the situation above and can be extended to
cover more situations; a configuration sketch follows.
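
A minimal sketch of opting in to the strict-queue behaviour at
pre-configuration time; the queue count and depth are illustrative
assumptions.

#include <rte_flow.h>

/* Illustrative: pre-configure flow queues with the strict-queue hint. */
static int
example_configure_flow_queues(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_flow_error error;
	const struct rte_flow_port_attr port_attr = {
		/* all operations on a given rule will use the same queue */
		.flags = RTE_FLOW_PORT_FLAG_STRICT_QUEUE,
	};
	const struct rte_flow_queue_attr queue_attr = {
		.size = 64, /* illustrative queue depth */
	};
	const struct rte_flow_queue_attr *attr_list[nb_queues];
	uint16_t i;

	for (i = 0; i < nb_queues; i++)
		attr_list[i] = &queue_attr;

	return rte_flow_configure(port_id, &port_attr, nb_queues, attr_list,
				  &error);
}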

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2022-10-28 12:41:03 +02:00
Vamsi Attunuru
f496b86506 net/cnxk: handle SA hard expiry events
Based on the hard limits configured in the SA context,
the PMD passes the corresponding event subtype to the application
to notify it of the hard expiry event.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
2022-10-18 12:59:55 +02:00
Nithin Dabilpuram
0ed7107373 net/cnxk: remove duplicate mempool debug checks
Remove duplicate mempool debug checks for mbufs received.

Fixes: 592642c494 ("net/cnxk: align prefetches to CN10K cache model")
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
2022-10-18 12:59:55 +02:00
Nithin Dabilpuram
ea84910903 net/cnxk: remove unnecessary DPTR update
Removed an unnecessary data pointer (DPTR) update and removed the ESN
update from microcode command word 0, based on the latest microcode.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
2022-10-18 12:59:55 +02:00
Sunil Kumar Kori
28b3ca7b9c common/cnxk: fix channel to BPID mapping
As per a recent change in the Linux 5.4.x AF driver, the mailbox is
updated to configure the mapping between channel and BPID.
Due to the mbox mismatch, PFC was broken. This patch syncs the mailbox
definition accordingly and also fixes the PFC configuration issues.

Fixes: ff1400aa9d ("net/cnxk: add receive channel backpressure for SDP")
Cc: stable@dpdk.org

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
2022-10-18 12:59:55 +02:00
Vamsi Attunuru
3d7a584430 net/cnxk: handle SA soft packet and byte expiry events
Handle SA soft packet and byte expiry events for inline outbound SAs.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
2022-10-18 12:59:55 +02:00
Satha Rao
d7bde0454c common/cnxk: set hysteresis bit to one
Set FC_HYST_BITS to a non-zero value to reduce mesh traffic and save
system resources.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
2022-10-18 12:59:55 +02:00
Satha Rao
902a4c02e2 common/cnxk: sync NIX HW info mbox structure with kernel
Sync the nix_hw_info structure with the kernel.

Keep the default RR_QUANTUM for VF TL2 the same as in the kernel to
ensure equal distribution among all VFs.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
2022-10-18 12:59:55 +02:00
Satha Rao
1669a84d57 common/cnxk: fix schedule weight update
Each TX schedule config mailbox supports a maximum of 20 register
updates. This patch sends node weight updates in multiple mailboxes
when TM is created with more than 20 scheduler nodes.

Fixes: 464c9f9193 ("common/cnxk: support NIX TM dynamic update")
Cc: stable@dpdk.org

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
2022-10-18 12:59:24 +02:00
Nithin Dabilpuram
5781638519 common/cnxk: fix RQ mask config for CN10KB chip
The RQ mask config needs to enable SPB_ENA so that a zero value can
later be overridden with the meta AURA.

Also fix the flow control config to catch invalid rxchan config
errors.

Fixes: ddf955d391 ("common/cnxk: support CPT second pass")
Fixes: da57d4589a ("common/cnxk: support NIX flow control")
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
2022-10-18 12:37:35 +02:00
Nithin Dabilpuram
b354dc053a net/cnxk: use NIX Tx offset for CN10KB
In the outbound inline case, use the NIX Tx offset instead of the
NIX Tx address for CN10KB, as per the new instruction format.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
2022-10-18 12:36:54 +02:00
Nithin Dabilpuram
480b03be9d net/cnxk: fix later skip to include mbuf private data
Fix the later skip to include the mbuf private data, as mbuf->buf_addr
is populated based on a calculation that includes the per-mbuf private
area.

Fixes: 706eeae607 ("net/cnxk: add multi-segment Rx for CN10K")
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
2022-10-18 12:36:45 +02:00
Nithin Dabilpuram
aa728ea474 common/cnxk: add soft expiry poll frequency argument
Add support to override the soft expiry poll frequency via devargs.
Also provide a helper API to indicate reassembly support on a chip,
and add documentation for the devargs that are already present.

Fixes: 780b9c8924 ("net/cnxk: support zero AURA for inline meta")

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
2022-10-18 12:36:21 +02:00
Sathesh Edara
e98f583129 common/cnxk: set MTU size on SDP based on SoC type
Set the maximum frame size on the SDP NIX side to 16 KB for the CN93
A0/B0, CNF95N A0 and CNF95O A0 SoC types, and to 64 KB for the rest of
the SoCs.

Signed-off-by: Sathesh Edara <sedara@marvell.com>
2022-10-18 12:35:51 +02:00
Sunil Kumar Kori
b7d3a0fe71 net/cnxk: support congestion management operations
Added support for congestion management.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
2022-10-12 08:41:58 +02:00
Sunil Kumar Kori
5df46167ca common/cnxk: support congestion management ROC API
Add support for congestion management RoC APIs.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
2022-10-12 08:41:48 +02:00
Radu Nicolau
cc9317e2b0 net/iavf: fix handling of IPsec events
Verify that the message length is non-zero and keep processing
virtual channel messages after the event is received.

Fixes: 6bc987ecb8 ("net/iavf: support IPsec inline crypto")
Cc: stable@dpdk.org

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-10-25 17:38:30 +02:00
Mingjin Ye
e91659806a net/ice: support VXLAN-GPE tunnel offload
The PMD Tx path does not support VXLAN-GPE tunnel offload because it
does not process the RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE flag in the mbuf,
so the "L4TUNT" field is not set in the Tx context descriptor.

This patch handles the RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE flag to support
Tx VXLAN-GPE offload in the scenario where both TSO and VXLAN-GPE
tunnelling are required, which avoids Tx queue overflow.
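
A hedged sketch of how an application could mark such a packet for this
offload; the header lengths are illustrative assumptions and the exact
requirements depend on the PMD.

#include <rte_mbuf.h>

/* Illustrative: request TSO on a VXLAN-GPE encapsulated TCP/IPv4 packet. */
static void
example_mark_vxlan_gpe_tso(struct rte_mbuf *m, uint16_t mss)
{
	m->ol_flags |= RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE |
		       RTE_MBUF_F_TX_OUTER_IPV4 |
		       RTE_MBUF_F_TX_IPV4 |
		       RTE_MBUF_F_TX_IP_CKSUM |
		       RTE_MBUF_F_TX_TCP_SEG;

	m->outer_l2_len = 14;   /* outer Ethernet (illustrative) */
	m->outer_l3_len = 20;   /* outer IPv4 */
	m->l2_len = 8 + 8 + 14; /* UDP + VXLAN-GPE + inner Ethernet (illustrative) */
	m->l3_len = 20;         /* inner IPv4 */
	m->l4_len = 20;         /* inner TCP */
	m->tso_segsz = mss;
}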

Fixes: daa02b5cdd ("mbuf: add namespace to offload flags")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
Tested-by: Ke Xu <ke1.xu@intel.com>
2022-10-25 17:33:30 +02:00
Radu Nicolau
96e66d38bc net/iavf: fix queue stop for large VF
Use the large VF queue stop request when large VF is enabled.

Fixes: 9cf9c02bf6 ("net/iavf: add enable/disable queues for large VF")
Cc: stable@dpdk.org

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-10-25 17:33:30 +02:00
Yiding Zhou
cb5c1b91f7 net/iavf: add thread for event callbacks
All callbacks registered for ethdev events are called in the
eal-intr-thread, and some of them execute virtchnl commands.
Because interrupts are disabled in the intr thread, no response
will be received for these commands, so all callbacks should
be called in a new context.

When the device is bonded, the bond PMD registers a callback for
the LSC event that executes virtchnl commands to reinitialize the
device, which triggers the same issue.

This commit adds a new thread to call all event callbacks.

Fixes: 48de41ca11 ("net/avf: enable link status update")
Fixes: 8410842505 ("net/iavf: support asynchronous virtual channel message")
Cc: stable@dpdk.org

Signed-off-by: Yiding Zhou <yidingx.zhou@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-10-25 17:33:30 +02:00
Steve Yang
0d8d7bd720 net/ice: support DDP dump switch rule binary
Dump the ICE DDP runtime switch rule binary via the following testpmd command:
testpmd> ddp dump switch <port_id> <output_file>

Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-10-25 17:33:30 +02:00
Yuan Wang
11a13cb004 net/ice: fix judgment order of buffer split
proto_hdr defines a bit mask of the protocol sequence as RTE_PTYPE_*
values; the last RTE_PTYPE_* in the mask indicates the split position.

To get the split position from proto_hdr, the checks should proceed
from the inner to the outer layer, so for tunnelled packets the tunnel
header must be placed at the end of the checking order, as sketched
below.
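
A minimal sketch of the inner-to-outer ordering using the standard ptype
masks; the set of masks checked here is illustrative rather than the
driver's full list.

#include <stdint.h>

#include <rte_mbuf_ptype.h>

/* Illustrative: return the last (inner-most) protocol present in proto_hdr,
 * checking inner layers before the tunnel and outer layers. */
static uint32_t
example_split_position(uint32_t proto_hdr)
{
	if (proto_hdr & RTE_PTYPE_INNER_L4_MASK)
		return proto_hdr & RTE_PTYPE_INNER_L4_MASK;
	if (proto_hdr & RTE_PTYPE_INNER_L3_MASK)
		return proto_hdr & RTE_PTYPE_INNER_L3_MASK;
	if (proto_hdr & RTE_PTYPE_INNER_L2_MASK)
		return proto_hdr & RTE_PTYPE_INNER_L2_MASK;
	if (proto_hdr & RTE_PTYPE_TUNNEL_MASK) /* tunnel only after inner layers */
		return proto_hdr & RTE_PTYPE_TUNNEL_MASK;
	if (proto_hdr & RTE_PTYPE_L4_MASK)
		return proto_hdr & RTE_PTYPE_L4_MASK;
	if (proto_hdr & RTE_PTYPE_L3_MASK)
		return proto_hdr & RTE_PTYPE_L3_MASK;
	return proto_hdr & RTE_PTYPE_L2_MASK;
}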

Fixes: 629dad3ef3 ("net/ice: support buffer split in scalar Rx")

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2022-10-25 17:33:30 +02:00
Radu Nicolau
055c9fc0d8 net/txgbe: fix security session destroy
Replace mempool_put with memset to 0; the internal session memory
block is no longer allocated from a mempool.

Fixes: 3f3fc3308b ("security: remove private mempool usage")

Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Reviewed-by: Jiawen Wu <jiawenwu@trustnetic.com>
2022-10-25 17:33:30 +02:00