Resources should be freed on error conditions, i.e., the VNIC and
VNIC context created in HW and the memory allocated in
bnxt_vnic_grp_alloc() should be freed.
Added a new function bnxt_vnic_destroy() to do the cleanup.
This lightweight function can also be used in the flow destroy/flush
path to avoid duplicating code.
Fixes: d24610f7bf ("net/bnxt: allow flow creation when RSS is enabled")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Also removed a log message which does not convey any
useful information.
Fixes: d24610f7bf ("net/bnxt: allow flow creation when RSS is enabled")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
In bnxt_flow_validate(), when bnxt_get_unused_filter() fails because
no filter resources are available, the driver does not set the flow
error using rte_flow_error_set().
Also fixed the error code.
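For illustration, a minimal sketch of reporting such a failure via
rte_flow_error_set(); the exact error code and message here are
assumptions, not necessarily what the driver uses:

    filter = bnxt_get_unused_filter(bp);
    if (filter == NULL)
        return rte_flow_error_set(error, ENOSPC,
                                  RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
                                  "Not enough filter resources");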
Fixes: 5ef3b79fdf ("net/bnxt: support flow filter ops")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
bnxt_vnic_prep() can fail for multiple reasons, but when it fails,
the PMD does not return the actual error code/string to the
application.
Fix it by moving the rte_flow_error_set() call into bnxt_vnic_prep()
to set the actual error code.
Fixes: d24610f7bf ("net/bnxt: allow flow creation when RSS is enabled")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
There is a HW bug that can result in certain stats being reported as
zero.
Work around this by ignoring stats with a value of zero based on the
previously stored snapshot of the same stat.
This bug mainly manifests in the output of func_qstats, as FW
aggregates each ring's stat value to give the per-function stat; if
one of them is zero, the per-function stat value ends up being lower
than the previous snapshot, which shows up as a zero PPS value in
testpmd.
Eliminate the invocation of func_qstats and aggregate the per-ring
stat values in the driver itself to derive the func_qstats output,
after accounting for the spurious zero stat value.
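A rough sketch of the per-ring aggregation with the zero-value
workaround; the variable names below are invented for illustration
and do not match the driver's actual fields:

    uint64_t total = 0;

    for (unsigned int i = 0; i < nb_rings; i++) {
        uint64_t cur = hw_ring_stat[i];

        /* HW bug workaround: a spurious zero would make the aggregate
         * drop below the previous snapshot, so reuse the stored value. */
        if (cur == 0 && prev_ring_stat[i] != 0)
            cur = prev_ring_stat[i];
        prev_ring_stat[i] = cur;
        total += cur;
    }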
Bugzilla ID: 641
Fixes: f8168ca0e6 ("net/bnxt: support thor controller")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
There is a rare hardware bug that can cause a bad opaque value in the RX
or TPA start completion. When this happens, the hardware may have used the
same buffer twice for 2 Rx packets. In addition, the driver might also
crash later using the bad opaque as an index into the ring.
The Rx opaque value is predictable and is always monotonically increasing.
The workaround is to keep track of the expected next opaque value and
compare it with the one returned by hardware during RX and TPA start
completions. If they miscompare, log it, discard the completion,
schedule a ring reset and move on to the next one.
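A simplified sketch of the workaround; the structure and helper names
below are invented for illustration and do not match the driver's
code:

    if (rxcmp->opaque != rxq->next_expected_opaque) {
        /* Bad opaque from HW: log it, drop the completion and schedule
         * a ring reset instead of indexing the ring with a bad value. */
        log_bad_opaque(rxq, rxcmp->opaque);
        schedule_ring_reset(rxq);
        return;
    }
    rxq->next_expected_opaque++;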
Fixes: 0958d8b643 ("net/bnxt: support LRO")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The burst receive function should return all packets currently
present in the receive ring, up to the requested burst size;
update the vector mode receive functions accordingly.
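A hedged sketch of the idea with invented helper names: keep draining
fixed-size chunks until the requested burst is filled or the ring
runs dry:

    uint16_t nb_rx = 0;

    while (nb_rx < nb_pkts) {
        uint16_t n = recv_vec_chunk(rxq, &rx_pkts[nb_rx], nb_pkts - nb_rx);

        nb_rx += n;
        if (n == 0)
            break;  /* no more completed descriptors in the ring */
    }
    return nb_rx;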
Fixes: 3983583414 ("net/bnxt: support NEON")
Fixes: bc4a000f2f ("net/bnxt: implement SSE vector mode")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Make the definition of the table used to map hardware packet type
information to DPDK packet type more generic.
Add macro definitions for constants used in creating table indices
and use these to eliminate raw constants in the code.
Add build-time assertions to validate ptype mapping constants.
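As an illustration of such a build-time check (the macro names below
are hypothetical, not the ones added by the patch):

    #include <assert.h>

    #define EX_PTYPE_TBL_DIM    16
    #define EX_PTYPE_IDX(flags) (((flags) >> 2) & (EX_PTYPE_TBL_DIM - 1))

    /* Fail the build if the computed index could exceed the table size. */
    static_assert(EX_PTYPE_IDX(0xffffffffu) < EX_PTYPE_TBL_DIM,
                  "ptype table index out of bounds");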
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Check that pointers are valid before using them.
Fixes: 7bc8e9a227 ("net/bnxt: support async link notification")
Cc: stable@dpdk.org
Signed-off-by: Thierry Herbelot <thierry.herbelot@6wind.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The VF reset can be triggered by a PF reset event; the PCI bus master
will then be cleared, and the VF will not be allowed to issue any
Memory or I/O requests.
So after the reset event is detected, always enable the PCI bus
master. If that fails, the device or system may be in an invalid
state, so keep the VF in reset state to mark it as an I/O error.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The VF reset can be triggered by a PF reset event; the PCI bus master
will then be cleared, and the VF will not be allowed to issue any
Memory or I/O requests.
So after the reset event is detected, always enable the PCI bus
master. If that fails, the device or system may be in an invalid
state, so keep the VF in reset state to mark it as an I/O error.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Add an API to enable or disable the 'Bus Master Enable' bit in the
PCI command register.
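A hedged usage sketch from a driver reset path, assuming the new
helper takes the PCI device pointer and a boolean enable flag:

    /* Re-enable bus mastering after a reset cleared it. */
    if (rte_pci_set_bus_master(pci_dev, true) != 0) {
        /* Device may be in an invalid state; treat it as an I/O error. */
        return -EIO;
    }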
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Optimized dequeue using x86 vector instructions was added
in 21.05, but due to limited testing the default has been
changed back to the scalar mode implementation. The vector mode
implementation can be enabled via the devargs option
"vector_opts_enabled=<y/Y>".
Fixes: 000a7b8e75 ("event/dlb2: optimize dequeue operation")
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
The HW scheduling type was not being extracted properly
in the vector optimized dequeue path. It was also not
being recorded in the xstats.
Fixes: 000a7b8e75 ("event/dlb2: optimize dequeue operation")
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Deferred scheduling is a DLB v1.0 feature, and is not valid for
DLB v2.0 or v2.5.
Fixes: bc62748bd7 ("event/dlb2: add private data structures and constants")
Fixes: a2e4f1f5e7 ("event/dlb2: add dequeue and its burst variants")
Cc: stable@dpdk.org
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
MCDI helper routines in libefx include length checks for response
messages, to ensure that short replies and optional fields are
handled correctly.
If the MCDI response message from the firmware is larger than the
caller's buffer then the response length reported to the caller
should be limited to the buffer size. Otherwise length checks in
the caller may allow reading past the end of the buffer.
Fixes: 6f619653b9 ("net/sfc/base: import MCDI implementation")
Cc: stable@dpdk.org
Signed-off-by: Andy Moreton <amoreton@xilinx.com>
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The modify_field flow API assumes that the META item is 32 bits wide.
But the C register that is used for META item can be 16 or 32 bits
wide depending on kernel and firmware configurations.
Take this into consideration and use the appropriate META width.
Fixes: 641dbe4fb0 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In the past, all the queues and other hardware objects were created
through the Verbs interface. Currently, most object creation has been
migrated to the DevX interface by default, including queues. Only
when DV is disabled by a device argument, or when E-Switch is
enabled, are all or some of the objects created through the Verbs
interface.
When using the DevX interface to create queues, the kernel driver
behavior is different from the Verbs case: Tx loopback cannot work
properly even if the Tx and Rx queues are configured with the
loopback attribute. To fix the Tx self-loopback support, a dummy
Verbs queue pair needs to be created to trigger the kernel to enable
the global loopback capability.
This is only required when a TIR is created for Rx and loopback is
needed. Only a CQ and a QP are needed for this case; no WQ (RQ) needs
to be created.
Bugzilla ID: 645
Fixes: 6deb19e1b2 ("net/mlx5: separate Rx queue object creations")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
When the port is in link down state, it is meaningless to display the
port link speed; it should be reported as an undefined state.
Fixes: 59fad0f321 ("net/hns3: support link update operation")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Whether the PFC enable bit ("pfc_en") has changed is one of the
conditions for reconfiguring the DCB. Currently, pfc_en is not rolled
back when the DCB configuration fails. This patch fixes it.
Fixes: 62e3ccc2b9 ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, the DCB configuration takes effect in the dev_start stage,
and the mapping between TCs and queues is also updated in this stage.
However, the DCB configuration is delivered in the dev_configure
stage. If the configuration fails, it should be intercepted in that
stage. If the configuration succeeds, the user should be able to
obtain the corresponding updated information, such as the mapping
between TCs and queues. So this patch moves the DCB configuration to
dev_configure.
Fixes: 62e3ccc2b9 ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Packet buffer allocation and hardware pause configuration normally
fail when a reset occurs. If that execution fails, rolling back the
packet buffer also fails, so this rollback is meaningless.
Fixes: 62e3ccc2b9 ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, the "requested_fc_mode" lacks rollback when enabling link
FC or PFC fails.
For example, this may result an incorrect FC mode after a reset.
Fixes: d4fdb71a0e ("net/hns3: fix flow control mode")
Fixes: 62e3ccc2b9 ("net/hns3: support flow control")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The Rx/Tx queue numbers should be greater than the TC number. This
patch adds this check for the PF before updating the mapping between
TCs and queues.
Fixes: a951c1ed3a ("net/hns3: support different numbers of Rx and Tx queues")
Fixes: 76d794566d ("net/hns3: maximize queue number")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
The vDPA PCI device unplug process should release all the private
device resources and also unregister the device.
The device unregistration was missed, which left invalid device data
in the rte_vhost library.
Unregister the device in the unplug process via the remove operation.
Fixes: 95276abaaf ("vdpa/mlx5: introduce Mellanox vDPA driver")
Cc: stable@dpdk.org
Reported-by: Eli Britstein <elibr@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
Tested-by: Eli Britstein <elibr@nvidia.com>
Acked-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
The net/vhost PMD does not comply with the ethdev offload API, as it
does not report Rx/Tx offload capabilities for TSO and checksum
offloading.
On the other hand, the net/vhost PMD lets the guest negotiate TSO and
checksum offloading.
Changing the behavior of the Rx/Tx offload flags handling won't
improve/fix this situation and will break applications that might
have been relying on the implicit support of TSO in this driver.
Revert this behavior change until we have a complete fix.
Fixes: ca7036b4af ("vhost: fix offload flags in Rx path")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
A meter may hold Rx queue references in its sub-policies.
In the stop operation, all the Rx queues are released.
Wrongly, the meter references were not released before destroying the
Rx queues, which caused an error in stop.
Release the Rx queue meter references in the stop operation.
Fixes: fc6ce56bba ("net/mlx5: prepare sub-policy for flow with meter")
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, counter offset support is discovered by creating a rule
with an invalid counter offset and a drop action in the root table.
If the rule creation fails with the EINVAL errno, that means the
counter offset is not supported in the root table.
However, the drop action may not be supported in the root table by
some rdma-core versions. In this case, the discovery code does not
work properly.
This commit changes the flow attribute to egress. That removes all
the extra fate actions from the flow, so that unsupported fate
actions cannot make the discovery code fail from time to time.
Fixes: 994829e695 ("net/mlx5: remove single counter container")
Cc: stable@dpdk.org
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, when configuring a mlx device, it allocates its own
process-private data in mlx5_proc_priv_init() and only frees it when
closing the device. This leads to a memory leak when a device is
configured repeatedly.
For example:
    for (...) do
        rte_eth_dev_configure
        rte_eth_rx_queue_setup
        rte_eth_tx_queue_setup
        rte_eth_dev_start
        rte_eth_dev_stop
    done
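As an illustration, a C sketch of such a reconfiguration loop using
the standard ethdev calls; the queue counts, descriptor numbers and
mempool are placeholders:

    struct rte_eth_conf conf = { 0 };
    int i;

    for (i = 0; i < 100; i++) {
        rte_eth_dev_configure(port_id, 1, 1, &conf);
        rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
                               NULL, mb_pool);
        rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL);
        rte_eth_dev_start(port_id);
        rte_eth_dev_stop(port_id);
        /* Without the fix, each iteration leaks the process-private
         * data allocated in mlx5_proc_priv_init(). */
    }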
Fixes: 120dc4a7dc ("net/mlx5: remove device register remap")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Currently, when configuring a mlx device, it allocates its own
process-private data in mlx5_proc_priv_init() and only frees it when
closing the device. This leads to a memory leak when a device is
configured repeatedly.
For example:
    for (...) do
        rte_eth_dev_configure
        rte_eth_rx_queue_setup
        rte_eth_tx_queue_setup
        rte_eth_dev_start
        rte_eth_dev_stop
    done
Fixes: 97d37d2c1f ("net/mlx4: remove device register remap")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Depending on the breakout mode of the physical ports the internal NFP
port number might differ from the actual physical port number. Prior to
this patch the physical port number was used when making configuration
changes to the physical ports (enable, admin up, etc.). After this change
the internal port number is now correctly used for configuration
changes.
Fixes: 5e15e799d6 ("net/nfp: create separate entity for PF device")
Cc: stable@dpdk.org
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
GCC 11 complains that some arrays may be uninitialized in
process_zuc_hash_op(). This is because their initialization
depends on num_ops being > 0.
In practice this function is only called with num_ops > 0 because of
checks at its call sites.
To remove the warning, initialize the arrays.
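A minimal sketch of the kind of change, with a hypothetical array
name and size macro:

    /* Zero-initialize so the compiler cannot assume the array is read
     * uninitialized when num_ops happens to be 0. */
    const uint8_t *hash_keys[ZUC_MAX_BURST] = { NULL };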
Fixes: 0b133c36ad ("crypto/zuc: support IPsec Multi-buffer lib v0.54")
Cc: stable@dpdk.org
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
These files were added without required SPDX headers.
Fixes: 7525ebd8eb ("common/mlx5: add glue functions on Windows")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Tal Shnaiderman <talshn@nvidia.com>
The special fast-path for returning completed descriptors without
reporting status or user-handles returns the number of completed ring
slots used, rather than the number of actual user-submitted jobs. This
means that the counts returned are too high, as the batch descriptor
slots would be included in the total. Therefore remove this special
case, and use the normal status-processing path so that the returned
count is correct in all cases.
Fixes: 245efe544d ("raw/ioat: report status of completed jobs")
Reported-by: Jiayu Hu <jiayu.hu@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
When enqueuing a descriptor, when checking that there is at least one
slot free for the current descriptor and a later batch descriptor, we
need to test for both two free and one free, in case the last write
was a batch descriptor which is allowed to use the "spare" slot.
Similarly, when computing the free space in the ring to return to the
user, we need to take account of the same condition, so that we do not
return a "-1" ring space value, by blindly subtracting "2".
Fixes: 245efe544d ("raw/ioat: report status of completed jobs")
Reported-by: Jiayu Hu <jiayu.hu@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
This patch fixes the NULL auth generation case where the request
shouldn't contain the authentication result address. Allows to run
ipsec_autotest with a QAT device.
Fixes: 65beb9abca ("crypto/qat: fix null auth when using VFIO")
Cc: stable@dpdk.org
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
When getting the meter flow_id bits, the case of flow_id being 0 is
not handled correctly.
This patch fixes the issue by treating a flow_id of 0 as 1 bit.
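For illustration, the intended semantics can be expressed as below
(a sketch, not the driver's exact code):

    /* Number of bits needed to represent flow_id; treat 0 as 1 bit. */
    uint8_t bits = flow_id ? (uint8_t)(32 - __builtin_clz(flow_id)) : 1;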
Fixes: 83306d6c46 ("net/mlx5: fix meter statistics")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
One of the user parameters of the flow AGE action is the action
context. This context should be provided back to the user when the
action is aged out.
When this context is NULL, a default value should be provided by the
PMD: the rte_flow pointer in case of the rte_flow_create API, and the
action pointer in case of the rte_flow_action_handle API.
The default for rte_flow_action_handle was set correctly, while in
the rte_flow_create case it wrongly remained NULL.
This patch sets the default value for the rte_flow_create case to the
rte_flow pointer.
Fixes: f9bc5274a6 ("net/mlx5: allow age modes combination")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, the ASO age action is supported in the non-root table, and
the counter-based age action is used in the root table.
The MLX5 PMD skips group 0 in the FDB table by adding an implicit
rule that jumps to a non-root table, but the PMD code uses the
original group value for the check.
This patch adds the transfer check for the ASO age action.
Fixes: f9bc5274a6 ("net/mlx5: allow age modes combination")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, RSS expansion only supports GRE and GRE KEY.
This patch adds RSS expansion for the NVGRE item so the PMD can
expand the flow item correctly.
Fixes: ea81c1b816 ("net/mlx5: fix NVGRE matching")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
When there is a mirror action prior to the meter action in an
E-Switch flow, the packets should first be duplicated to the mirror
port, and then be metered and sent to the original destination.
The MLX5 PMD splits such an E-Switch flow into two sub-flows, similar
to the earlier handling of mirror with modify action.
Fixes: 07627fbf15 ("net/mlx5: support E-Switch mirroring with modify action")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In case of bonding, the orchestrator wants to use the same devargs
for LAG and non-LAG scenarios, probing representors on PF1 using the
PF1 PCI address like "<DBDF_PF1>,representor=pf1vf[0-3]".
This patch changes the PCI address check policy to allow the PF1 PCI
address for representors on PF1.
Note: detaching the PF0 device cannot remove the representors on PF1.
It is recommended to use the primary (PF0) PCI address to probe
representors on both PFs.
Fixes: f926cce3fa ("net/mlx5: refactor bonding representor probing")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The memory barrier is used to ensure that the response is returned
only after the Tx/Rx function is set; it should therefore be placed
after the Rx/Tx function is set.
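A hedged sketch of the ordering this implies in the secondary-process
request handler (the helper names are invented):

    dev->rx_pkt_burst = select_rx_function(dev);
    dev->tx_pkt_burst = select_tx_function(dev);
    rte_mb();                /* publish the burst functions first ... */
    send_mp_response(&req);  /* ... then return the response */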
Fixes: 2aac5b5d11 ("net/mlx5: sync stop/start with secondary process")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The memory barrier is used to ensure that the response is returned
only after the Tx/Rx function is set; it should therefore be placed
after the Rx/Tx function is set.
Fixes: 0203d33a10 ("net/mlx4: support secondary process")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reproduced with '--buildtype=debugoptimized' config,
compiler version: gcc (GCC) 12.0.0 20210509 (experimental)
There are multiple build errors, like:
In file included from ../drivers/net/tap/tap_flow.c:13:
In function ‘rte_jhash_2hashes’,
inlined from ‘rte_jhash’ at ../lib/hash/rte_jhash.h:284:2,
inlined from ‘tap_flow_set_handle’ at
../drivers/net/tap/tap_flow.c:1306:12,
inlined from ‘rss_enable’ at ../drivers/net/tap/tap_flow.c:1909:3,
inlined from ‘priv_flow_process’ at
../drivers/net/tap/tap_flow.c:1228:11:
../lib/hash/rte_jhash.h:238:9:
warning: ‘flow’ may be used uninitialized [-Wmaybe-uninitialized]
238 | __rte_jhash_2hashes(key, length, pc, pb, 1);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../drivers/net/tap/tap_flow.c: In function ‘priv_flow_process’:
../lib/hash/rte_jhash.h:81:1: note: by argument 1 of type ‘const void *’
to ‘__rte_jhash_2hashes.constprop’ declared here
81 | __rte_jhash_2hashes(const void *key, uint32_t length, uint32_t *pc,
| ^~~~~~~~~~~~~~~~~~~
../drivers/net/tap/tap_flow.c:1028:1: note: ‘flow’ declared here
1028 | priv_flow_process(struct pmd_internals *pmd,
| ^~~~~~~~~~~~~~~~~
Fix the strict aliasing rule violation by using a union.
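A hedged sketch of the union-based approach (the exact types and
fields in the driver may differ):

    #include <rte_jhash.h>

    union {
        const struct rte_flow *flow;
        const void *key;
    } tmp;

    tmp.flow = flow;
    /* Hash the pointer value through the union instead of casting it,
     * which avoids the aliasing/uninitialized warning. */
    uint32_t handle = rte_jhash(&tmp, sizeof(tmp.flow), seed);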
Bugzilla ID: 690
Fixes: de96fe68ae ("net/tap: add basic flow API patterns and actions")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Reproduced with '--buildtype=debugoptimized' config,
compiler version: gcc (GCC) 12.0.0 20210509 (experimental)
There are multiple build errors, like:
../drivers/net/ice/base/ice_switch.c: In function ‘ice_add_marker_act’:
../drivers/net/ice/base/ice_switch.c:3727:15:
warning: array subscript ‘struct ice_aqc_sw_rules_elem[0]’
is partly outside array bounds of ‘unsigned char[52]’
[-Warray-bounds]
3727 | lg_act->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LG_ACT);
| ^~
In file included from ../drivers/net/ice/base/ice_type.h:52,
from ../drivers/net/ice/base/ice_common.h:8,
from ../drivers/net/ice/base/ice_switch.h:8,
from ../drivers/net/ice/base/ice_switch.c:5:
../drivers/net/ice/base/ice_osdep.h:209:29:
note: referencing an object of size 52 allocated by ‘rte_zmalloc’
209 | #define ice_malloc(h, s) rte_zmalloc(NULL, s, 0)
| ^~~~~~~~~~~~~~~~~~~~~~~
../drivers/net/ice/base/ice_switch.c:3720:50:
note: in expansion of macro ‘ice_malloc’
lg_act = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rules_size);
These errors are mainly because the allocated memory is cast to
"struct ice_aqc_sw_rules_elem *" but the allocated size is less than
the size of "struct ice_aqc_sw_rules_elem".
"struct ice_aqc_sw_rules_elem" contains multiple other structs as
unions; depending on which one is used, allocating less memory than
the size of "struct ice_aqc_sw_rules_elem" is logically correct, but
the compiler complains about it.
Since the allocation is done explicitly and both producer and
consumer are internal, it is safe to ignore the warnings. Also, to
prevent any side effects, disable the compiler warning for now, until
a proper fix is done.
The warning disable is limited to gcc >= 11 versions.
Bugzilla ID: 678
Fixes: c7dd159311 ("net/ice/base: add virtual switch code")
Fixes: 02acdce2f5 ("net/ice/base: add MAC filter with marker and counter")
Fixes: f89aa3affa ("net/ice/base: support removing advanced rule")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>