The mlx5_args function reads the devargs, checks whether they are valid
for this driver, and returns an error if they are not.
This was correct behavior as long as all the devargs were destined for
this driver, but since several drivers may now run together, the
function could return an error for another driver's devarg even though
that devarg is perfectly valid.
In addition, the function did not let the user know which devarg was
rejected: it returned an error without printing the unknown devarg.
This patch removes the error return for unknown devargs and instead
prints a warning naming each such devarg.
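A minimal standalone sketch of the new handling (hypothetical key names
and logging, not the actual mlx5_args code): unknown keys only produce
a warning naming the devarg and are skipped instead of failing the
whole parse.

  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>

  /* Hypothetical set of keys this driver understands. */
  static const char *const known_keys[] = { "rx_vec_en", "txq_inline", NULL };

  static bool
  key_is_known(const char *key)
  {
      unsigned int i;

      for (i = 0; known_keys[i] != NULL; i++)
          if (strcmp(key, known_keys[i]) == 0)
              return true;
      return false;
  }

  /* Returns 0 even for unknown keys: they may belong to another driver
   * sharing the same device, so only warn and name the devarg. */
  static int
  handle_devarg(const char *key)
  {
      if (!key_is_known(key)) {
          fprintf(stderr, "warning: unknown devarg \"%s\", skipped\n", key);
          return 0;
      }
      /* ... parse the value of the recognized key here ... */
      return 0;
  }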
Fixes: 7b4f1e6bd3 ("common/mlx5: introduce common library")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Memory region (MR) lookup was done incorrectly
for the data address of an externally-attached mbuf.
A search was attempted in the mempool of the mbuf,
while the mbuf data address does not belong to that mempool
when the mbuf is externally attached.
Only attempt the search (see the sketch below):
1) for mbufs that are not externally attached;
2) for mbufs coming from the MPRQ mempool;
3) for externally-attached mbufs from mempools
with pinned external buffers.
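A condition sketch of the rule above (simplified; "from_mprq_pool" is a
hypothetical stand-in for the driver's MPRQ mempool test):

  #include <stdbool.h>
  #include <rte_mbuf.h>

  /* Return true when the mbuf data address can be looked up through
   * the mbuf's own mempool. */
  static bool
  lookup_mr_via_mempool(const struct rte_mbuf *mb, bool from_mprq_pool)
  {
      /* Not externally attached: data lives inside the mempool element. */
      if (!RTE_MBUF_HAS_EXTBUF(mb))
          return true;
      /* MPRQ mbufs point into memory owned by the MPRQ mempool. */
      if (from_mprq_pool)
          return true;
      /* Pinned external buffers are registered with their mempool. */
      if (RTE_MBUF_HAS_PINNED_EXTBUF(mb))
          return true;
      return false;
  }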
Fixes: 08ac03580e ("common/mlx5: fix mempool registration")
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The device context is opened once in the common probe and closed when
the device is removed.
If the probe of one of the drivers fails, that driver releases its own
resources and then the common code closes the context.
However, the compress probe mistakenly closed the device itself when
the capabilities were insufficient to support compress operations, so
the common probe then closed it a second time.
Remove the redundant close from the compress probe.
Fixes: 2efd265445 ("compress/mlx5: support partial transformation")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Remove the use of "driver" following "PMD" as it is unnecessary.
Cc: stable@dpdk.org
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Remove the use of double "the" as it does not make sense.
Cc: stable@dpdk.org
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
While joining a shared Rx queue to an existing queue group,
the queue configuration is checked to be the same as the one
specified when the first queue of the group was created - all shared
queues must be created with identical configurations.
During Rx queue creation the buffer split segment
configuration can be altered - zero segment sizes are
substituted with the actual ones inherited from the pools,
the number of segments can be extended to cover the maximal
packet length, and so on. This means the actual queue segment
configuration cannot be used directly to match the
configuration provided in the queue setup call.
To resolve the issue, store the original parameters
in the shared queue structure and perform the check against
these stored values.
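A minimal sketch of the approach (hypothetical structure and field
names, not the actual mlx5 layout): keep the parameters as requested by
the application and match a joining queue against those.

  #include <string.h>

  struct seg_conf {
      unsigned int nseg;
      unsigned int sizes[8];
  };

  struct shared_rxq {
      struct seg_conf orig;   /* exactly as passed to the first setup call */
      struct seg_conf actual; /* after driver adjustments (pools, max length) */
  };

  /* Compare against the stored original, not the adjusted configuration. */
  static int
  shared_rxq_match(const struct shared_rxq *srq, const struct seg_conf *req)
  {
      return memcmp(&srq->orig, req, sizeof(*req)) == 0;
  }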
Fixes: 09c2555303 ("net/mlx5: support shared Rx queue")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
GENEVE and VXLAN-GPE item matching is done similarly to GRE matching.
Users can skip the specification of the protocol type and expect it
to be deduced from the inner header type automatically.
But the inner header type may be left unspecified in order to match all
protocol types. In this case, the PMD should not set the protocol type.
Check that the inner header type is present before setting the protocol
type.
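A sketch of the guard (standalone, hypothetical names; the real code
fills matcher fields in the mlx5 flow engine):

  #include <stdbool.h>
  #include <stdint.h>

  /* Set the tunnel protocol type only when an inner header item is
   * present; otherwise leave it unset so all protocol types match. */
  static void
  set_tunnel_proto(uint16_t *proto_match, bool has_inner_item,
                   uint16_t inner_proto)
  {
      if (!has_inner_item)
          return;
      *proto_match = inner_proto;
  }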
Fixes: 690391dd0e ("net/mlx5: fix GENEVE protocol type translation")
Fixes: 861fa3796f ("net/mlx5: fix VXLAN-GPE next protocol translation")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
GRE protocol type is implicitly set in the matching translation in case
an application doesn't specify any type explicitly in a flow rule.
It is extracted from the inner header type, but this type may be absent.
In this case, GRE item matching is broken. Check if we have the inner
header type before setting it to allow matching on all GRE packets.
Fixes: be26e81bfc ("net/mlx5: fix GRE protocol type translation")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
mlx5_ind_table_obj_modify() was not changing the reference counters
of either the new set of RxQs or the old set of RxQs.
On the other hand, creation of the RSS incremented the RxQ refcnt.
If an RxQ was present in both the initial and the modified set,
its reference counter was incremented one extra time
compared to the queues that were only present in the new set.
This prevented releasing that RxQ's resources on port stop:
flow indirect_action 0 create action_id 1 \
action rss queues 0 1 end / end
flow indirect_action 0 update 1 \
action rss queues 2 3 end / end
quit
...
mlx5_net: mlx5.c:1622: mlx5_dev_close():
port 0 some Rx queue objects still remain
mlx5_net: mlx5.c:1626: mlx5_dev_close():
port 0 some Rx queues still remain
Increment reference counters for the new set of RxQs
and decrement them for the old set of RxQs when needed.
Remove explicit referencing of RxQ from mlx5_ind_table_obj_attach()
because it reuses mlx5_ind_table_obj_modify() code doing this.
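A simplified sketch of the accounting (hypothetical helper on plain
arrays; the real code works with mlx5 RxQ control objects):

  #include <stdint.h>

  /* Take references on the new set first, then drop the references held
   * for the old set, so a queue present in both sets keeps a net count
   * of exactly one. */
  static void
  ind_table_modify_refs(unsigned int *rxq_refcnt,
                        const uint16_t *new_q, unsigned int new_n,
                        const uint16_t *old_q, unsigned int old_n)
  {
      unsigned int i;

      for (i = 0; i < new_n; i++)
          rxq_refcnt[new_q[i]]++;
      for (i = 0; i < old_n; i++)
          rxq_refcnt[old_q[i]]--;
  }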
Fixes: ec4e11d41d ("net/mlx5: preserve indirect actions on restart")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
mlx5_ind_table_obj_setup() was incrementing RxQ reference counters
even when the port was stopped, which prevented RxQ release
and triggered an internal assertion.
Only increment the reference counter when the port is started.
Fixes: ec4e11d41d ("net/mlx5: preserve indirect actions on restart")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
If mlx5_rxq_start() failed and rxq_ctrl was not initialized,
mlx5_rxq_obj_verify() would segfault in an attempt to dereference it.
Add a check that rxq_ctrl is not NULL before accessing its members.
Fixes: 09c2555303 ("net/mlx5: support shared Rx queue")
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Memory regions (MRs) were allocated in one chunk of memory together
with the mempool registration object. However, MRs can be reused among
different mempool registrations.
When the registration that originally allocated the MRs was destroyed,
dangling pointers to those MRs could be left in other registrations
sharing them.
This commit splits the memory allocation of the registration structure
and the MRs, which resolves the dangling pointer issue.
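A minimal sketch of the layout change (hypothetical types, plain malloc
for illustration):

  #include <stdlib.h>

  struct mr; /* may be shared by several registrations */

  struct mempool_reg {
      struct mr **mrs;    /* separate allocation holding shareable pointers */
      unsigned int mrs_n;
  };

  static struct mempool_reg *
  mempool_reg_create(unsigned int mrs_n)
  {
      struct mempool_reg *reg = calloc(1, sizeof(*reg));

      if (reg == NULL)
          return NULL;
      /* MRs are no longer carved out of the registration chunk, so
       * destroying one registration cannot leave dangling MR pointers
       * in other registrations that share the MRs. */
      reg->mrs = calloc(mrs_n, sizeof(*reg->mrs));
      if (reg->mrs == NULL) {
          free(reg);
          return NULL;
      }
      reg->mrs_n = mrs_n;
      return reg;
  }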
Fixes: 690b2a88c2 ("common/mlx5: add mempool registration facilities")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
This patch fixes a segfault triggered when a port with created indirect
actions was closed. The segfault occurred only when
RTE_LIBRTE_MLX5_DEBUG was defined. It was caused by a redundant
decrement of the Rx queues' refcount:
- the refcount was decremented when the port was stopped and indirect
  actions were detached from the Rx queues (port stop),
- the refcount was decremented again when the indirect action objects
  were destroyed (port close or destruction of an indirect action).
This patch fixes the behavior. Rx queues are dereferenced if and only
if the indirect action is explicitly destroyed by the user or detached
on port stop. Whether Rx queues are dereferenced by the action destroy
operation now depends on an argument to the wrapper of the indirect
action destroy operation introduced in this patch.
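A sketch of the wrapper idea (hypothetical types; the real wrapper
operates on mlx5 shared RSS actions and their Rx queue references):

  #include <stdbool.h>
  #include <stdlib.h>

  struct indirect_action {
      unsigned int *rxq_refcnt; /* reference count of the attached Rx queue */
  };

  /* Drop the Rx queue reference only when the caller explicitly asks for
   * it (user-initiated destroy, or detach on port stop), so the count is
   * never decremented twice for the same attachment. */
  static void
  indirect_action_destroy(struct indirect_action *act, bool deref_rxqs)
  {
      if (deref_rxqs && act->rxq_refcnt != NULL)
          (*act->rxq_refcnt)--;
      free(act);
  }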
Fixes: ec4e11d41d ("net/mlx5: preserve indirect actions on restart")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch fixes an assertion failure triggered when the user
configured minimum inline length requirements and the application
transmitted multi-segment packets. The failure was triggered when the
space left in the Tx queue was not enough to cover this requirement.
This patch limits the length of data to be copied to the space
available in the Tx queue.
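A sketch of the clamp (simplified; the real code accounts for WQE space
in the mlx5 Tx queue):

  #include <stdint.h>
  #include <rte_common.h>

  /* Copy no more inline data than the descriptor space that is left. */
  static inline uint32_t
  inline_copy_len(uint32_t required, uint32_t txq_free_bytes)
  {
      return RTE_MIN(required, txq_free_bytes);
  }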
Fixes: cacb44a099 ("net/mlx5: add no-inline Tx flag")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In a meter hierarchy, all the meters are marked as having RSS if
the final meter's termination action is RSS.
When validating a flow rule with a meter hierarchy, the RSS action
should not be fetched from the current meter if it is not the final
one: its fate action union holds the next meter ID instead of a pointer
to the RSS action. By using the final meter in the hierarchy, the flow
rule validation succeeds without any crash caused by accessing an
invalid RSS action pointer.
Fixes: 1ce19ab1f4 ("net/mlx5: fix RSS validation with meter policy")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Reviewed-by: Li Zhang <lizh@nvidia.com>
Reviewed-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
If both the sample action and the meter action are present in the same
flow, the mlx5 PMD performs several levels of splitting. For example,
the sampling feature splits the original flow into a prefix subflow
with the sample action and a suffix subflow with the rest of the
actions. Then, the metering feature splits the sampling suffix subflow
into its own meter subflows.
If a mark action was added before the sample and meter actions, the
flow mark flag was kept in the sample subflows but reset while handling
the metering split, so the flow mark value was lost.
This patch keeps the flow mark flag of the previous subflow, so that
the subsequent meter subflows handle the flow mark correctly.
Fixes: 9ade91dfe8 ("net/mlx5: fix group value of sample suffix flow")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
When the ETH spec is empty the mlx4 PMD doesn't allow matching on other
criteria, which means the flow should be a promiscuous one.
Currently, the PMD validates this by setting the flow->promisc bit when
the ETH spec is empty and checking whether any other rte_flow_item
follows while flow->promisc is on.
However, commit [1] added support for matching traffic on VLAN ID only,
so the above validation logic has to be changed accordingly.
This patch changes the validation logic to skip the flow->promisc
check if the item is VLAN.
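A sketch of the adjusted check (hypothetical helper; the real
validation iterates the rte_flow item list in the mlx4 PMD):

  #include <stdbool.h>
  #include <rte_flow.h>

  /* With an empty ETH spec the flow is a promiscuous one and no other
   * item may follow, except a VLAN item used to match on VLAN ID only
   * (and the END item closing the pattern). */
  static bool
  promisc_item_allowed(bool promisc, enum rte_flow_item_type type)
  {
      if (!promisc)
          return true;
      return type == RTE_FLOW_ITEM_TYPE_VLAN ||
             type == RTE_FLOW_ITEM_TYPE_END;
  }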
[1]:
Fixes: c0d2392631 ("net/mlx4: support flow w/o ETH spec and with VLAN")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Mempool registration was not correctly processing
mempools with the RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF flag set
("pinned mempools" for short), because at registration time it is not
known whether the mempool is a pktmbuf one,
and its elements may not yet be initialized for analysis.
Attempts had been made to recognize such pools,
but there was no robust solution: only the owner of a mempool
(the application or a device) knows its type.
This patch extends the common/mlx5 registration code
to accept a hint that the mempool is a pinned one
and uses this capability from the net/mlx5 driver.
1. Remove all code assuming pktmbuf pool type
or trying to recognize the type of a pool.
2. Register pinned mempools used for Rx
and their external memory on port start.
Populate the MR cache with all their MRs.
3. Change the Tx slow path logic as follows (see the sketch after this
list):
3.1. Search the mempool database for a memory region (MR)
by the mbuf pool and its buffer address.
3.2. If no MR for the address is found for the mempool,
and the mempool contains only pinned external buffers,
perform the mempool registration of the mempool
and its external pinned memory.
3.3. Fall back to using page-based MRs in other cases
(for example, a buffer with externally attached memory,
but not from a pinned mempool).
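A simplified sketch of the Tx slow-path decision described in item 3
(the mr_* helpers are hypothetical stand-ins for the mlx5 mempool MR
cache operations, which return an lkey):

  #include <stdint.h>
  #include <rte_mbuf.h>

  /* Hypothetical helpers standing in for the MR cache operations. */
  uint32_t mr_lookup_by_mempool(struct rte_mempool *mp, uintptr_t addr);
  uint32_t mr_register_pinned_mempool(struct rte_mempool *mp, uintptr_t addr);
  uint32_t mr_lookup_by_page(uintptr_t addr);

  #define LKEY_INVALID UINT32_MAX

  static uint32_t
  tx_slow_path_lkey(struct rte_mbuf *mb)
  {
      uintptr_t addr = (uintptr_t)mb->buf_addr;
      uint32_t lkey = mr_lookup_by_mempool(mb->pool, addr);

      if (lkey != LKEY_INVALID)
          return lkey;
      /* Mempools holding only pinned external buffers can be registered
       * together with their external memory. */
      if (RTE_MBUF_HAS_PINNED_EXTBUF(mb))
          return mr_register_pinned_mempool(mb->pool, addr);
      /* Otherwise fall back to page-based MRs. */
      return mr_lookup_by_page(addr);
  }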
Fixes: 690b2a88c2 ("common/mlx5: add mempool registration facilities")
Fixes: fec28ca0e3 ("net/mlx5: support mempool registration")
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The mlx5 PMD introduced the table id attribute to allow multiple
flow tables on the same table level for flow metering, so there can be
multiple flow table objects with the same table level but different
table ids.
If the extended metadata mode is enabled, all flows containing
destination Queue/RSS actions are split into two subflows - the prefix
one jumps to the MLX5_FLOW_MREG_CP_TABLE_GROUP flow table to copy
the MARK action data, and the suffix one performs the destination
Queue/RSS action. The table_id for the jump in the metadata split
prefix flow is always 0.
If the flow itself was the metering split suffix subflow, the table id
was set to 1 in the flow split structure and the metadata split suffix
subflow was created in a table with the wrong table id, causing the
metadata suffix flow mismatch.
This patch resets the table id to 0 while creating the metadata
suffix flows.
Fixes: 51ec04dc7b ("net/mlx5: connect meter policy to created flows")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In the metadata flow split, the PMD created the prefix subflow
with the Queue or RSS action removed and appended the set tag and
copy table jump actions. If the flow being split for metadata
was the meter prefix subflow, the driver was supposed to share the same
meter split tag action with the metadata split flow. The check for a
preceding meter split tag action was wrong, so a metadata split set tag
action was appended instead, and the meter suffix subflow was missed
due to the tag value mismatch.
This patch adds the check before copying into the extended action list,
to make sure the correct shared tag is used.
Fixes: 8d72fa6689 ("net/mlx5: share tag between meter and metadata")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The size of the svif in the generic tables was incorrectly set to the
Wh+ size of 8 bits. This resulted in incorrect L2 context entries being
associated with a flow once the svif became greater than 255.
Fixes: ad9eed0248 ("net/bnxt: support flow template for Thor")
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Fix driver init when Rx mbuf allocation fails.
Continuing to use the driver with only the rings that were
created successfully can cause unexpected behavior.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
bnxt_stop_rxtx() does not stop data path processing as intended
because it does not update the recently introduced fast-path pointers
'(struct rte_eth_fp_ops)->rx_pkt_burst'. Since the rx/tx burst routines
only use the fast-path pointers, the real burst routines get invoked
instead of the dummy ones set by bnxt_stop_rxtx(), leading to crashes
in the data path (e.g. dereferencing freed structures).
Fix the segfault by updating the fast-path pointers as well.
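A generic sketch of the pattern (hypothetical types; the actual fix
assigns the dummy routines to the port's rte_eth_fp_ops entry in
addition to the eth_dev burst pointers):

  #include <stdint.h>

  typedef uint16_t (*burst_fn)(void *queue, void **pkts, uint16_t nb);

  struct eth_dev_sketch { burst_fn rx_pkt_burst, tx_pkt_burst; };
  struct fp_ops_sketch  { burst_fn rx_pkt_burst, tx_pkt_burst; };

  static uint16_t
  dummy_burst(void *queue, void **pkts, uint16_t nb)
  {
      (void)queue;
      (void)pkts;
      (void)nb;
      return 0; /* data path is stopped: process nothing */
  }

  static void
  stop_rxtx(struct eth_dev_sketch *dev, struct fp_ops_sketch *fpo)
  {
      dev->rx_pkt_burst = dummy_burst;
      dev->tx_pkt_burst = dummy_burst;
      /* The burst API reads the fast-path copy of these pointers, so it
       * must be updated too or the real routines keep being invoked. */
      fpo->rx_pkt_burst = dummy_burst;
      fpo->tx_pkt_burst = dummy_burst;
  }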
Fixes: c87d435a4d ("ethdev: copy fast-path API into separate structure")
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
If autonegotiation was enabled, the driver was not passing the
'auto_pam4_link_speeds' value obtained during init and stored in
bp->link_info to bnxt_hwrm_port_phy_cfg(). This resulted in an
incorrect setting being passed to the HW during PHY configuration,
which in turn resulted in invalid settings being retrieved and
configured on subsequent application loads, causing launch failures.
Bugzilla ID: 791
Fixes: c23f9ded03 ("net/bnxt: support 200G PAM4 link")
Cc: stable@dpdk.org
Reported-by: Charles Brett <cfb@hpe.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
The device iterator RTE_DEV_FOREACH() failed to return devices from a
classifier like "class=vdpa", because matching a name against empty
kvargs returns no result. If no device name is specified in the
kvargs, the function should iterate over all devices.
This patch allows empty devargs or devargs without a name specified.
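A sketch of the intended matching rule (hypothetical helper; the real
change is in the kvargs lookup used by the device iterator):

  #include <stdbool.h>
  #include <string.h>

  /* A device matches when no name was requested at all, or when the
   * requested name equals the device name. */
  static bool
  devargs_name_match(const char *requested_name, const char *dev_name)
  {
      if (requested_name == NULL || requested_name[0] == '\0')
          return true; /* no name given: iterate all devices */
      return strcmp(requested_name, dev_name) == 0;
  }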
Fixes: 6aebb94290 ("kvargs: add function to get from key and value")
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Xueming Li <xuemingl@nvidia.com>
The dpaa, dpaa2 and caam_jr drivers do not support
SA expiry. This results in failures of test cases in the
test app. This patch returns ENOTSUP so that the
SA lifetime test cases are skipped.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The dpaa, dpaa2 and caam_jr drivers decrement the inner IP header
TTL for all packets, ignoring the dec_ttl option of the SA.
With this patch, the dec_ttl option decides whether the
inner IP header TTL of the packets is decremented or not.
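A standalone illustration of the rule (not the drivers' descriptor
setup code; the structure below is a minimal stand-in):

  #include <stdbool.h>
  #include <stdint.h>

  struct ipv4_hdr_sketch { uint8_t ttl; }; /* only the field of interest */

  /* Decrement the inner TTL only when the SA requests it. */
  static void
  maybe_dec_ttl(struct ipv4_hdr_sketch *ip, bool sa_dec_ttl)
  {
      if (sa_dec_ttl && ip->ttl > 0)
          ip->ttl--;
  }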
Fixes: 0a23d4b6f4 ("crypto/dpaa2_sec: support protocol offload IPsec")
Fixes: 3e33486f80 ("crypto/caam_jr: add security offload")
Fixes: 1f14d500bc ("crypto/dpaa_sec: support IPsec protocol offload")
Cc: stable@dpdk.org
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
An indirect mkey is created for each descriptor in a QP; the number of
descriptors per QP is configured by the user in the QP setup callback.
In the mlx cryptodev autotest, the maximum number of QPs (reported by
the driver) is created, and due to mkey resource limits, QP creation
fails, which makes the test fail.
Since there is no capability reporting the maximum number of
descriptors to the user, an exact number of maximum available QPs
cannot be given.
Reduce the maximum number of QPs to 128, which supports standard
descriptor numbers, including the 4K number used in the test.
Fixes: 6152534e21 ("crypto/mlx5: support queue pairs operations")
Cc: stable@dpdk.org
Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When authenticating with SNOW3G, KASUMI or ZUC,
the pointers for the encryption/decryption keys are not set.
If a cipher algorithm such as AES-CBC is also used,
the application would segfault.
Hence, these pointers should be set to some value by default.
Command line to replicate the issue:
./build/app/dpdk-test-crypto-perf -l 4,5 -n 6 --vdev="crypto_aesni_mb" -- \
--devtype="crypto_aesni_mb" --optype=cipher-then-auth --auth-algo \
snow3g-uia2 --auth-key-sz 16 --auth-iv-sz 16 --digest-sz 4 --silent \
--total-ops 1000000 --auth-op generate --burst-sz 32 \
--cipher-algo aes-ctr --cipher-key-sz 16 --cipher-iv-sz 16
Fixes: ae8e085c60 ("crypto/aesni_mb: support KASUMI F8/F9")
Fixes: 6c42e0cf4d ("crypto/aesni_mb: support SNOW3G-UEA2/UIA2")
Fixes: fd8df85487 ("crypto/aesni_mb: support ZUC-EEA3/EIA3")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Set the correct rte_errno variable in the CUDA driver
and return -rte_errno in case of error.
The rte_errno values are compliant with the gpudev library
documentation.
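A sketch of the convention (hypothetical function; the errno value is
illustrative):

  #include <errno.h>
  #include <rte_errno.h>

  static int
  report_failure(void)
  {
      rte_errno = ENOTSUP; /* pick the errno documented for the case */
      return -rte_errno;
  }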
Fixes: 1306a73b19 ("gpu/cuda: introduce CUDA driver")
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
The gpudev free, register and unregister functions now
return gracefully if the input pointer is NULL or the size is 0,
as the API documentation indicates these are accepted no-op values.
The CUDA driver checks are removed because they are redundant
with the checks added in the gpudev library.
Fixes: e818c4e2bf ("gpudev: add memory API")
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
Multiple drivers define their own macros for the VLAN header length.
To remove the redundancy, define the macro in the Ethernet header
instead, and update the drivers to use the new macro.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Jiawen Wu <jiawenwu@trustnetic.com>
If the modify field action requests a field copy/set transaction
from another field, the destination field hardware bit offset was
assigned incorrectly for non-zero byte offsets, causing wrong
translations for fields with sizes larger than 32 bits.
Fixes: 40c8fb1fd3 ("net/mlx5: update modify field action")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The "mlx5_devx_cmd_mkey_create" DevX cmd allocates DevX object using
mlx5_malloc and then creates MKey using glue function.
Compatibly, "mlx5_devx_cmd_destroy" cmd releases first the MKey using
glue function, and then free the DevX object using mlx5_free.
On Windows OS, the reg_mr function creates Mkey using
"mlx5_devx_cmd_mkey_create" cmd, but dereg_mr function using directly
glue function without freeing the object.
This behavior causes memory leak at each MR release.
In addition, the dereg_mr function makes sure that the MR pointer is
valid before destroying its fields, but always calls the memset function
that makes a difference to it.
This patch moves the dereg_mr function to use "mlx5_devx_cmd_destroy"
instead of glue function, and extends the validity test to the whole
function.
Fixes: ba42071982 ("common/mlx5: add reg/dereg MR on Windows")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
To detect the timestamp mode configured on the NIC, the mlx5 PMD uses
the firmware command ACCESS_REGISTER_USER.
The HCA capability command has an attribute flag indicating whether
the firmware supports this command.
However, the HCA capability query command read the flag from the wrong
place in the PRM structure.
This patch moves the flag to the correct place.
Fixes: 972a1bf812 ("common/mlx5: fix user mode register access command")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The mlx5 PMD incorrectly overwrote outer layer fields in MPLS tunnel
rte_flow patterns with the defaults for MPLS tunnels. This included
overwriting the UDP destination port in MPLSoUDP and the GRE protocol
field in MPLSoGRE.
This patch fixes this behavior. If the application provides values in
the flow pattern items preceding the MPLS flow item, the provided
values are used; otherwise the defaults are applied.
Fixes: d1abe664dd ("net/mlx5: add MPLS to Direct Verbs flow engine")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Assuming a user tried to send multi-segment packets, with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set, using a device with
minimum inlining requirements (such as ConnectX-4 Lx, or when the user
specified them explicitly), sending such packets caused a segfault.
The segfault was caused by failed invariants in the
mlx5_tx_packet_multi_inline function.
This patch introduces logic for multi-segment packets, with the
RTE_PMD_MLX5_FINE_GRANULARITY_INLINE flag set, to omit mbuf scanning
for filling the inline buffer and to inline only the minimal amount of
data required.
Fixes: ec837ad0fc ("net/mlx5: fix multi-segment inline for the first segments")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
These edits affect the outermost header in the current processing state
of the packet, which might have been decapsulated by prior action DECAP.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
In a tunnel packet, these actions affect the inner header if
action DECAP is set; otherwise, they affect the outer header.
Adding these actions is done in two steps: add the action to
the action mask and indicate the MAC address entry ID to use.
This allows the user to check the order of actions first and
allocate resources when time comes to enable the action rule.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Add a context structure to simplify handling of action sets.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
The number of counters being non-zero can be detected before
the action set registry traversal, so move the check outside.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Remove the vdev args check for the secondary process which prevents
the secondary from attaching to a device created by the primary
process via the hotplug framework. This check was removed for other
vdevs but was missed for failsafe.
Fixes: 4852aa8f6e ("drivers/net: enable hotplug on secondary process")
Cc: stable@dpdk.org
Signed-off-by: Kumara Parameshwaran <kumaraparamesh92@gmail.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
DMA on SN1022 SoC requires extra mapping of the memory via MCDI.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
The NIC DMA memory regions API allows establishing a mapping of the
DMA addresses used by the NIC to host IOVA understood by the host, for
the case when an IOMMU is absent and the NIC cannot address the entire
host IOVA space because of a too small DMA mask.
The API does not map the entire host IOVA space, but allows mapping
arbitrary regions of the space that are actually used for NIC DMA.
A DMA region needs to be mapped in order to perform MCDI
initialization. Since the NIC has not been probed at that point, its
configuration cannot be accessed and an UNKNOWN mapping type is
assumed.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Fixes: 60e53c078d ("net/sfc: support decrement IP TTL actions in transfer flows")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Fixes: 96fd2bd69b ("net/sfc: support flow action count in transfer rules")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
The driver-internal variable tracking the next consumer index on
the Rx ring was not being set if there was an mbuf allocation
failure. In that scenario, it would eventually fall out of sync
with the actual consumer index and raise a false alarm on Thor,
needlessly causing a segmentation fault with testpmd.
Fixes: 03c8f2fe11 ("net/bnxt: detect bad opaque in Rx completion")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
The ULP context list was not updated when the high availability
feature was deinitialized. This caused the ULP context list
to acquire the lock when it was not supposed to, causing a
deadlock. The fix is to correctly clear the list.
Fixes: 3184b1ef66 ("net/bnxt: add HA support in ULP")
Cc: stable@dpdk.org
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>