The generic RTE_LOGTYPE_PMD is a historical relic and should
be deprecated. Every driver must register its own logtype.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Replace the boilerplate BSD license text with the equivalent
SPDX license id.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Tetsuya Mukawa <mtetsuyah@gmail.com>
Acked-by: Takanari Hayama <taki@igel.co.jp>
Fix some mistakes in the Tx bursts with regard to the TSO flag check.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Cc: stable@dpdk.org
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
For the rte_flow_action_raw_encap, the header definition for
encapsulation must be available, otherwise it will lead to a crash on some
OFED versions, and logically it should be rejected.
Fixes: 8ba9eee4ce ("net/mlx5: add raw data encap/decap to Direct Verbs")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Dekel Peled <dekelp@mellanox.com>
The ops pointers in fm10k_stats_get() are set up by the primary
process; when a secondary process calls these ops pointers,
a segmentation fault will happen.
Fixes: 7223d200c2 ("fm10k: add base driver")
Cc: stable@dpdk.org
Signed-off-by: Lu Qiuwen <luqiuwen@iie.ac.cn>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
The i40evf queues cannot work properly with the kernel PF driver for X722 VF.
E.g. when 8 queue pairs are configured, only 4 queues can receive packets,
and half of the packets are lost when using 2 queue pairs.
This issue is caused by misconfiguration of the look-up table (LUT): the
original LUT configuration code did not work for X722 VF. Use the AQ
command to set up the LUT to make it work properly.
Fixes: cea7a51c17 ("i40evf: support RSS")
Cc: stable@dpdk.org
Acked-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Rather than the global promiscuous mode on all slaves, let's try to use
allmulti.
To do this, we apply the same mechanism as for promiscuous and drop in
rx_burst.
When adding a slave, we first try to use allmulti, else we try
promiscuous.
Because of this, we must be able to handle allmulti on the bonding
interface, so the associated dev_ops is added with checks on which mode
has been applied on each slave.
Rather than add a new flag in the bond private structure, we can remove
promiscuous_en since ethdev already maintains this information.
When starting the bond device, there is no promisc/allmulti re-apply
as, again, ethdev does this itself.
On reception, we must inspect if the packets are multicast or unicast
and take a decision to drop based on which mode has been enabled on the
bonding interface.
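For illustration, a minimal sketch of this per-packet decision (variable
and helper names such as is_lacp_or_own_addr() are illustrative, not the
actual bonding code):
    struct rte_ether_hdr *eh =
        rte_pktmbuf_mtod(bufs[j], struct rte_ether_hdr *);

    if (rte_is_multicast_ether_addr(&eh->d_addr))
        /* multicast frames are kept if allmulti or promisc is
         * enabled on the bonding port */
        keep = bond_dev->data->all_multicast ||
               bond_dev->data->promiscuous;
    else
        /* unicast frames are kept in promisc mode, or when they
         * target the bonding/LACP MAC address */
        keep = bond_dev->data->promiscuous ||
               is_lacp_or_own_addr(internals, &eh->d_addr);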
Note: in the future, if we define an API so that we can add/remove mc
addresses to a port (rather than the current global set_mc_addr_list
API), we can refine this to only register the LACP multicast mac
address.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Chas Williams <chas3@att.com>
By default, the 802.3ad code enables promisc mode on all slaves.
To avoid all packets going to the application (unless the application
asked for promiscuous mode), all frames are supposed to be filtered in
the rx burst handler.
However, the incriminated commit broke this because non-pure Ethernet
frames (basically any unicast Ether()/IP() packet) are not filtered
anymore.
Fixes: 71b7b37ec9 ("net/bonding: use ptype flags for LACP Rx filtering")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Chas Williams <chas3@att.com>
The fast queue Rx burst function is missing checks on promiscuous mode and
the slave collecting state.
Define an inline wrapper to have a common base.
Fixes: 112891cd27 ("net/bonding: add dedicated HW queues for LACP control")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Chas Williams <chas3@att.com>
We'd better consolidate the fast queue and the normal tx burst functions
under a common inline wrapper for maintenance.
But looking closer at the bufs_slave_port_idxs[] mapping array in those
tx burst functions, its size is invalid since up to nb_bufs are handled
here.
A previous patch [1] fixed this issue for balance tx burst function
without mentioning it.
802.3ad and balance modes are functionally equivalent on transmit.
The only difference is on the slave id distribution.
Add an additional inline wrapper to consolidate even more and fix this
issue.
[1]: https://git.dpdk.org/dpdk/commit/?id=c5224f623431
Fixes: 09150784a7 ("net/bonding: burst mode hash calculation")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Chas Williams <chas3@att.com>
This reverts commit aa2c00702b, which
introduced the possibility of an invalid address exception when running
an application with a stopped receive queue. The issues with rxq stop/start
will be revisited in the 19.11 release timeframe.
Fixes: aa2c00702b ("net/bnxt: fix traffic stall on Rx queue stop/start")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The "and" condition offset == 0 && offset == NVM_INVALID_PTR
can never be true.
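The check in question, shown as a sketch (surrounding base-code context
omitted):
    /* before: both equalities can never hold at once, so invalid
     * offsets are not rejected */
    if ((offset == 0) && (offset == NVM_INVALID_PTR))
        return;

    /* intended: reject either invalid offset value */
    if ((offset == 0) || (offset == NVM_INVALID_PTR))
        return;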
Fixes: cf3af5aa56 ("net/ixgbe/base: add functions to get version info")
Cc: stable@dpdk.org
Signed-off-by: Congwen Zhang <zhang.congwen@zte.com.cn>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
The copying of sent mbuf pointers may be deferred to the end of the
tx_burst() routine so they are copied in a single rte_memcpy() call.
For multi-segment packets this optimization is not applicable,
because the number of packets does not match the number of mbufs and
there is no linear array of pointers in the pkts parameter.
The completion request generating routine wrongly took this inconsistent
(for multi-segment packets) deferred pointer copying into account.
Fixes: 5a93e173b8 ("net/mlx5: fix Tx completion request generation")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
gcc-9 is stricter on NULL arguments for printf.
Fix the following build error by avoiding NULL argument to printf.
In file included from drivers/net/memif/memif_socket.c:26:
In function 'memif_socket_create',
inlined from 'memif_socket_init' at net/memif/memif_socket.c:965:12:
net/memif/rte_eth_memif.h:35:2: error:
'%s' directive argument is null [-Werror=format-overflow=]
35 | rte_log(RTE_LOG_ ## level, memif_logtype, \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
36 | "%s(): " fmt "\n", __func__, ##args)
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fixes: 09c7e63a71 ("net/memif: introduce memory interface PMD")
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
The shared Infiniband device context should be included
in the memory event callback list only once, on context creation,
and removed from the list only once, on context destruction.
Multiple insertions of the same object caused an infinite
loop in the list processing.
Fixes: ccb3815346 ("net/mlx5: update memory event callback for shared context")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The RSS expansion function misses the IP-in-IP tunnel type, which
makes creating the following flow fail:
flow create 0 ingress pattern eth / ipv4 proto is 4 /
ipv4 / udp / end actions rss queues 0 1 end level 2
types ip ipv4-other udp ipv4 ipv4-frag end /
mark id 221 / count / end
To make the RSS expansion function work correctly, an IP tunnel is now
detected by checking whether there is a second IPv4/IPv6 item and whether
the first IPv4/IPv6 item's next protocol is IPPROTO_IPIP/IPPROTO_IPV6.
For example:
... pattern eth / ipv4 proto is 4 / ipv4 / ....
Fixes: 5e33bebdd8 ("net/mlx5: support IP-in-IP tunnel")
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function flow_dv_zero_encap_udp_csum() uses a while loop to iterate
over vlan items in flow rule.
Pointer next_hdr is incremented to the next item before it is used,
so the first item is skipped.
This patch moves the incrementing of next_hdr to the correct place.
Fixes: bf1d7d9a03 ("net/mlx5: zero out UDP checksum in encapsulation")
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When the link is down, the link speed returned by ethtool is
UINT32_MAX and the link status is 0.
In this case, the DPDK ethdev link speed should be set to
ETH_SPEED_NUM_NONE.
Otherwise, since the link speed is non-zero but the link status is zero,
the situation is inconsistent and -EAGAIN is returned, which is not right.
Fixes: 1884087198 ("net/mlx5: fix support for newer link speeds")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
There is a limit on completion descriptor fetching to improve
latency. If the burst size is large, completion processing might not
free enough resources. This fix re-iterates the sending loop and allows
multiple completion descriptor fetches and processing.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This patch fixes the default settings for the packet size to inline
with the Enhanced Multi-Packet Write feature, allowing 256B packets
to be inlined with out-of-the-box settings.
Fixes: 50724e1bba ("net/mlx5: update Tx definitions")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
If minimal inline data is required, the data inline feature
must be engaged. Incorrect settings enabled inlining of entire
small packets (up to 82B in size), which may result in a declining
send rate if there are not enough cores. The same problem occurred
if inlining was enabled to support VLAN tag insertion by software.
Fixes: 38b4b397a5 ("net/mlx5: add Tx configuration and setup")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The completion loop speed is optimized for error-free
operation - no CQE field is fetched on each loop
iteration. Also, the code size is optimized - the flush
buffers routine is invoked once.
Fixes: 318ea4cfa1 ("net/mlx5: fix Tx completion descriptors fetching loop")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The debug assert wrongly triggers on inline data of 18B;
this case should pass successfully.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The patch referenced in Fixes sets the default value of the required
minimal inline data to 0 bytes. On some configurations (depending
on switchdev/legacy settings and FW version/settings)
the ConnectX-4LX NIC requires a minimum of 18 bytes of
Tx descriptor inline data to operate correctly.
A default value wrongly set to 0 may prevent the NIC from operating
with out-of-the-box settings, so this patch reverts the default
value for ConnectX-4LX back to 18 bytes (inline L2).
Fixes: 9f350504bb ("net/mlx5: fix ConnectX-4LX minimal inline data limit")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The rte_flow_item_vlan has the inner_type field, which is missing in the
DR/DV flow engine.
With this support added, example testpmd commands could be:
- matching all vlan traffic with id 2:
testpmd> flow create 0 ingress pattern eth / vlan vid is 2 / end
actions queue index 2 / end
- matching all ipv4 traffic in vlan with id 2:
testpmd> flow create 0 ingress pattern eth / vlan vid is 2
inner_type is 0x0800 / end actions queue index 2 / end
Fixes: fc2c498ccb ("net/mlx5: add Direct Verbs translate items")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Some flow rules were not configured.
Part of the code in function flow_dv_matcher_enable() is enclosed in
'#ifdef HAVE_MLX5DV_DR' preprocessor directive.
Using this directive is not needed here, and prevents compilation of
relevant code.
This patch removes the unnecessary preprocessor directive.
Fixes: 4f84a19779 ("net/mlx5: add Direct Rules API")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_flow_validate_item_vlan() validates that the user setting
is supported by the NIC, using a mask with TCI value 0x0fff.
This check will reject a flow rule specifying a vlan pcp item.
This patch updates mlx5_flow_validate_item_vlan() to use mask 0xffff,
so flow rules with vlan pcp item are accepted.
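A sketch of the mask change (only the relevant field of the validation
mask is shown):
    const struct rte_flow_item_vlan nic_mask = {
        /* was RTE_BE16(0x0fff): VID bits only, so any rule matching
         * the PCP bits was rejected; the full TCI is now accepted */
        .tci = RTE_BE16(0xffff),
    };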
Fixes: 23c1d42c71 ("net/mlx5: split flow validation to dedicated function")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
mlx4_dev_info_get calls mlx4_get_ifname, but mlx4_get_ifname
uses priv->ctx which is not a valid pointer in a secondary
process. The fix is to cache the value in the primary process:
get and store the interface index of the device so that the
secondary process can see it.
Bugzilla ID: 320
Fixes: 61cbdd4194 ("net/mlx4: separate device control functions")
Cc: stable@dpdk.org
Reported-by: Suyang Ju <sju@paloaltonetworks.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Matan Azrad <matan@mellanox.com>
The function mlx5_set_min_inline() includes a switch() that checks
various PCI device IDs in order to set the txq_inline_min value. No
value is set when the PCI device ID matches the ConnectX-5 adapters,
resulting in an assert() failure later in the function
mlx5_set_txlimit_params().
This error was encountered on an IBM Power 9 system running RHEL 7.6
w/o Mellanox OFED installed.
Fixes: 38b4b397a5 ("net/mlx5: add Tx configuration and setup")
Signed-off-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The MLX5 PMD limits the number of SW steering tables to 32.
This patch updates the limit to 65535, to allow a wide range of values.
Fixes: e2b4925ef7 ("net/mlx5: support Direct Rules E-Switch")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
On some virtual setups (particularly on ESXi), when SR-IOV and
E-Switch are enabled, there is a problem receiving VLAN traffic on VF
interfaces. The NIC driver in the ESXi hypervisor does not set up the
E-Switch vport settings correctly and VLAN traffic targeted to the VF is
dropped.
The patch provides a temporary workaround - if a rule
containing a VLAN pattern is being installed for a VF, a VLAN
network interface over the VF is created, as this command does:
ip link add link vf.if name mlx5.wa.1.100 type vlan id 100
The PMD in DPDK maintains a database of created VLAN interfaces
for each existing VF and the requested VLAN tags. When all of the RTE
Flows using a given VLAN tag are removed, the created VLAN interface
with this VLAN tag is deleted.
The name of the created VLAN interface follows the format:
evmlx.d1.d2, where d1 is the VF interface ifindex, d2 - the VLAN ifindex
Implementation limitations:
- mask in rules is ignored, rule must specify VLAN tags exactly,
no wildcards (which are implemented by the masks) are allowed
- the virtual environment is detected via the rte_hypervisor() call,
and the type of hypervisor is checked. Currently we engage
the workaround for ESXi and unrecognized hypervisors (which
always happens on platforms other than x86 - meaning the workaround
is applied for Flows over PCI VFs). There are no confirmed data that
the other hypervisors (HyperV, QEMU) need this workaround, and
we are trying to reduce the list of configurations on which the
workaround should be applied.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
This patch fixes a (dereference after null check) Coverity issue.
The address of the first segment of segmented packets was not set
correctly when reassembling packets, which led to this issue.
Coverity issue: 343416
Fixes: fe65e1e1ce ("fm10k: add vector scatter Rx")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
This patch fixes a (dereference after null check) Coverity issue.
The address of the first segment of segmented packets was not set
correctly when reassembling packets, which led to this issue.
Coverity issue: 343447
Fixes: 319c421f38 ("net/avf: enable SSE Rx Tx")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
This patch fixes a (dereference after null check) Coverity issue.
The address of the first segment of segmented packets was not set
correctly when reassembling packets, which led to this issue.
Coverity issue: 343422, 343403
Fixes: ca74903b75 ("net/i40e: extract non-x86 specific code from vector driver")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
This patch fixes a (dereference after null check) Coverity issue.
The address of the first segment of segmented packets was not set
correctly when reassembling packets, which led to this issue.
Coverity issue: 343452, 343407
Fixes: c68a52b8b3 ("net/ice: support vector SSE in Rx")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
This patch fixes a (dereference after null check) Coverity issue.
The address of the first segment of segmented packets was not set
correctly when reassembling packets, which led to this issue.
Coverity issue: 13245
Fixes: 8a44c15aa5 ("net/ixgbe: extract non-x86 specific code from vector driver")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Add return value checking when reading configuration information from the
PCI register to avoid a Coverity issue.
Fixes: 1fc97012 ("net/e1000: fix i219 hang on reset/close")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
The dev parameter is in fact used in ice_dev_configure.
Fixes: 50370662b7 ("net/ice: support device and queue ops")
Cc: stable@dpdk.org
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When running as a secondary process, the PMD uses eth_memif_rx for egress.
It should use eth_memif_tx.
Fixes: c41a04958b ("net/memif: support multi-process")
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Fixed a check in bnxt_alloc_hwrm_rx_ring() while initializing
the Rx ring.
The driver should not change the "deferred_start" status of Rx/Tx queues.
It should get the status in queue_setup_op() and use that value.
Fixes: 9b63c6fd70 ("net/bnxt: support Rx/Tx queue start/stop")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The octeontx2 PMD's mailbox client uses device memory to send messages
to the mailbox server in the admin function Linux kernel driver.
The device memory used for the mailbox communication needs to
be qualified as volatile to avoid unaligned device
memory accesses caused by the compiler coalescing memory accesses.
This patch marks the mailbox requests and responses as volatile;
they were previously non-volatile and were accessed from unaligned
memory addresses, which resulted in bus errors on Fedora 30 with
gcc 9.1.1.
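The essence of the change, shown schematically (only the declarations are
sketched here; the surrounding code is illustrative):
    /* before: plain pointer; the compiler may coalesce adjacent field
     * accesses into a wider, unaligned load/store on device memory */
    struct mbox_msghdr *rsp;

    /* after: volatile qualification keeps every access as issued */
    volatile struct mbox_msghdr *rsp;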
Fixes: 2b71657c86 ("common/octeontx2: add mbox request and response definition")
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Since 18.11, it is suggested that drivers release all their private
resources in the dev_close routine. So all resources previously released
in the remove routine are now released in the dev_close routine, and the
dev_close routine is called from the driver remove routine in order to
support removing a device without closing its ports.
The above behavior changes are enabled by setting the
RTE_ETH_DEV_CLOSE_REMOVE flag during the probe stage.
Signed-off-by: Liron Himi <lironh@marvell.com>
Reviewed-by: Yuri Chipchev <yuric@marvell.com>
Fix the PCIe detach segfault by releasing eth_dev resources,
adding nicvf cleanup support on PCI detach.
Fixes: fdf91e0f2f ("drivers/net: do not use ethdev driver")
Cc: stable@dpdk.org
Signed-off-by: Amit Gupta <agupta3@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Update workarounds for erratas that are fixed on 96xx A1.
This patch also enables CQ drop for all the passes to
maintain performance, along with updating the default
Rx ring size in dev_info.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch extends the minimum supported max_sqb_count devarg value
so that it can limit the max SQB count to 8 buffers, and
also defines NIX_DEF_SQB and uses it to compute the number
of SQE buffers required for egress traffic.
NIX_DEF_SQB is defined as 16, which is optimal across multiple
octeontx2 platforms to scale up the performance proportionally
to the corresponding port/queue to lcore mappings.
Fixes: fb0198b7dc ("net/octeontx2: add devargs parsing functions")
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
From the B0 HW revision onwards, HW can drop Rx and L2 error packets.
Enable this by default if the feature is available.
Since this bit field is reserved in old HW revisions,
there is no need for an additional HW version check.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
During an if-condition evaluation, a 2-bit flag evaluates to 'true' for
'0x1', '0x2' and '0x3'. Thus, from this perspective these flags are
indistinguishable. To make them distinct, respective bits must be
extracted with a mask and then checked for strict equality.
Specifically here, even if `PKT_TX_UDP_CKSUM` (value '0x3') was set, the
expression `mbuf->ol_flags & PKT_TX_TCP_CKSUM` (the second flag, of value
'0x1') is evaluated first and the result is 'true'. In consequence, for
UDP packets the execution flow enters an incorrect branch.
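For illustration, the difference between the two checks using the standard
mbuf L4 checksum flags (the variable receiving the result is a placeholder):
    /* wrong: PKT_TX_UDP_CKSUM (0x3) also has the PKT_TX_TCP_CKSUM (0x1)
     * bit set, so this branch is entered for UDP packets too */
    if (mbuf->ol_flags & PKT_TX_TCP_CKSUM)
        use_tcp_csum = true;

    /* right: extract the 2-bit field, then compare for strict equality */
    if ((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM)
        use_tcp_csum = true;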
Fixes: 56b8b9b7e5 ("net/ena: convert to new Tx offloads API")
Cc: stable@dpdk.org
Reported-by: Eduard Serra <eserra@vmware.com>
Signed-off-by: Maciej Bielski <mba@semihalf.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
When RTE_PKTMBUF_HEADROOM is defined as 0, the dpaa driver throws the
compilation error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM".
This patch changes it into a run-time check.
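The shape of the change, as a sketch (the size value and logging macro are
placeholders for the driver's own definitions):
    /* before, at compile time:
     *   #if (DPAA_ANNOTATION_SIZE > RTE_PKTMBUF_HEADROOM)
     *   #error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM"
     *   #endif
     * after, at run time during initialization: */
    if (dpaa_annotation_size > RTE_PKTMBUF_HEADROOM) {
        RTE_LOG(ERR, PMD,
            "Annotation requirement is more than RTE_PKTMBUF_HEADROOM\n");
        return -1;
    }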
Bugzilla ID: 335
Fixes: beb2a7865d ("bus/fslmc: define hardware annotation area size")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
When RTE_PKTMBUF_HEADROOM is defined as 0, the dpaa driver throws the
compilation error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM".
This patch changes it into a run-time check.
Bugzilla ID: 335
Fixes: ff9e112d78 ("net/dpaa: add NXP DPAA PMD driver skeleton")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
There should not be blank lines at the end of files.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
IOMMU capabilities won't change and must be checked even if no PCI device
seems to be supported yet when EAL is initialised.
This is to accommodate SPDK, which registers its drivers after
rte_eal_init(), especially on the PPC platform where the IOMMU does not
support VA.
Fixes: 703458e19c ("bus/pci: consider only usable devices for IOVA mode")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Tested-by: Jerin Jacob <jerinj@marvell.com>
Tested-by: Takeshi Yoshimura <tyos@jp.ibm.com>
This macro is unused after a previous fix.
Fixes: fe822eb8c5 ("bus/pci: use IOVA DMA mask check when setting IOVA mode")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Fixed returning the checksum status of Rx packets by setting
"ol_flags" correctly in vector mode receive.
These changes already exist for non-vector mode receive.
In vector mode receive, also indicate inner and outer checksum
errors individually in "ol_flags" to report L3 and L4 errors.
Fixes: bc4a000f2f ("net/bnxt: implement SSE vector mode")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
There is a bug in context memory allocation that results in reusing the
context memory allocated for the first port while allocating memory for
the next ports.
Fix it by including the port id in the name field while
allocating context memory.
Fixes: f8168ca0e6 ("net/bnxt: support thor controller")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Add extern to the variable declaration to avoid some compilers treating it
as a variable definition.
build error log:
lib/librte_pmd_virtio.a(vhost_kernel.o):(.rodata+0x110):
multiple definition of `vhost_msg_strings'
lib/librte_pmd_virtio.a(vhost_user.o):(.data.rel.ro.local+0x0):
first defined here
lib/librte_pmd_virtio.a(virtio_user_dev.o):(.rodata+0xe8):
multiple definition of `vhost_msg_strings'
lib/librte_pmd_virtio.a(vhost_user.o):(.data.rel.ro.local+0x0):
first defined here
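The shape of the fix, simplified (the real array is indexed by the vhost
request enum of the virtio-user backend):
    /* vhost.h - declaration only, shared by every translation unit */
    extern const char * const vhost_msg_strings[];

    /* one .c file - the single definition */
    const char * const vhost_msg_strings[] = {
        [VHOST_USER_SET_FEATURES] = "SET_FEATURES",
        [VHOST_USER_SET_MEM_TABLE] = "SET_MEM_TABLE",
    };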
Fixes: 33d24d65fe ("net/virtio-user: abstract backend operations")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The driver names for rawdevs differed between the make and meson builds
and were non-standard in the make version, in that some included "rawdev"
in the name while others didn't.
Therefore, for global consistency of naming, we can use "rte_rawdev" rather
than "rte_pmd" as the prefix for the libraries. While most other driver
categories use "rte_pmd" as a prefix, there is precedent for this in the
mempool drivers, which use "rte_mempool" as a prefix.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The ifpga and skeleton rawdev drivers included "rawdev" in their directory
names, which was superfluous given that they were in the drivers/raw
directory. Shorten the names via this patch.
For meson builds, this will rename the final library .so/.a files
produced, but those will be renamed again later via a patch to
standardize rawdev names.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The OTX2 AP core can sometimes fissure STP instructions when it is more
optimal to send such writes into the pipeline as 2 separate
instructions. However, registers should be excluded from such
optimization. This commit ensures that no CSR write is ever fissured
by introducing a zero-cost workaround: setting the STP pre-index to zero
makes sure the OTX2 AP core does not fissure the write.
Fixes: 8a4f835971 ("common/octeontx2: add IO handling APIs")
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
When a QINT interrupt occurs, SW fails to clear the QINT
line, resulting in recursive interrupts, because currently the interrupt
handler gets the cause of the interrupt by reading
NIX_LF_RQ[SQ/CQ/AURA/POOL]_OP_INT but does not write 1 to clear the
RQ[SQ/CQ/ERR]_INT field in the respective NIX_LF_RQ[SQ/CQ/AURA/POOL]_OP_INT
registers.
Fixes: dc47ba15f6 ("net/octeontx2: handle queue specific error interrupts")
Fixes: 50b95c3ea7 ("mempool/octeontx2: add NPA IRQ handler")
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Packet transmission in mlx5 is performed by building
Tx descriptors (WQEs) and sending the last ones to the NIC.
A descriptor can contain special flags telling the NIC
to generate Tx completion notifications (CQEs). At the beginning
of the tx_burst() routine the PMD checks whether there are some Tx
completions and frees the transmitted packet buffers.
The flags requesting completion generation must be set once
per specified amount of packets to provide a uniform stream
of completions and free the Tx queue in a uniform fashion.
The previous implementation set the completion request
once per burst; if the burst size is big enough it may cause
latency in CQE generation and freeing of a large amount of buffers
in the tx_burst routine on multiple completions, which also
affects the latency and even causes Tx queue overflow
and Tx drops.
This patch enforces that the completion request is set
in the exact Tx descriptor once the specified amount of packets
has been sent.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The Mellanox ConnectX-4LX NIC in configurations with the
E-Switch disabled can operate without minimal required inline data
in the Tx descriptor. There was a hardcoded limit of
18B in the PMD; it is fixed to be no limit (0B).
Fixes: 38b4b397a5 ("net/mlx5: add Tx configuration and setup")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This patch limits the number of completion descriptors
fetched and processed in one tx_burst routine call.
The completion processing involves buffer freeing,
which may be time consuming and introduce significant
latency, so limiting the number of processed completions
mitigates the latency issue.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Enabling the LRO offload per queue makes sense because the user will
probably want to allocate a different mempool for LRO queues - the LRO
mempool mbuf size may be bigger than that of a non-LRO mempool.
Change the LRO offload to be per queue instead of per port.
If one of the queues has LRO enabled, all the queues will be
configured via DevX.
If RSS flows direct TCP packets to queues with different LRO settings,
these flows will not be offloaded with LRO.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When a user configures LRO in the port offloads, they probably want each
TCP packet to have a chance to open an LRO session.
The PMD didn't configure LRO in the flow TIR if the flow did not
explicitly contain a TCP item, even though the flow included TCP traffic.
For example, the next flows were not LRO offloaded:
pattern eth / end, pattern eth / ip / end, pattern eth / ipv6 / end.
Enable LRO configuration for all the TIRs if LRO is configured in the
port.
No performance impact for non-LRO traffic in these TIRs.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When LRO offload is configured in an Rx queue, the HW may coalesce TCP
packets from the same TCP connection into a single packet.
In this case the SW should fix the relevant packet headers because
the HW doesn't update them according to the newly created packet
characteristics but provides the updated values in the CQE.
Add header update code to the regular Rx burst function to support the
LRO feature.
Make sure the first mbuf has enough space to include each TCP header;
otherwise the header update may cross mbufs, which complicates the
operation too much.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The alignment requested by the FW for WQ buffer allocation is 512.
Change it from cache line alignment to 512.
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
LRO support was only for MPRQ, hence the MPRQ Rx burst was selected when
LRO was configured on the port.
The current MPRQ support suffers from bad memory utilization
since an external mempool is allocated by the PMD for the packet data
in addition to the user mempool; besides that, the user may get packet
data addresses which were not configured by them.
Even though MPRQ has the best packet receive performance in
most cases, because of the above facts it is better to remove the
automatic MPRQ selection when LRO is configured.
Move MPRQ to be selected only when the user forces it via the PMD
arguments, including the LRO case.
Allow LRO offload using the regular RQ with the regular Rx burst
function.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When the Rx queue is not in striding RQ mode it should be configured as a
cyclic RQ.
Currently, in this case, the type remains 0, which means linked-list type.
Set the RQ type to cyclic when the queue is not in striding RQ mode.
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The WQ size configuration via DevX didn't take into account the maximum
number of segments per packet, which wrongly caused a bigger
WQE size to be configured than the size expected by the PMD in other
places.
The scatter mode stride size should be the segment size multiplied
by the maximum number of segments per packet.
The number of WQEs per WQ should be the number of descriptors divided by
the maximum number of segments per packet.
Fix the size calculations according to the above rule.
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Patch [1] zeroes the mbuf headroom when the port is configured with LRO
because, when working with more than one stride per packet, the HW cannot
guarantee a headroom in the start stride of each packet.
Change the solution to support mbuf headroom by adding an empty buffer
as the first packet segment; scatter mode must be enabled to support it.
[1] http://patches.dpdk.org/patch/56912/
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When an mbuf is allocated by rte_pktmbuf_alloc, the offload flags are
reset by it, so the data-path function should not do it again.
Remove the above offload flag reset from the MPRQ data-path.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The max_rx_pkt_len field in the Rx configuration indicates the maximum
size of an Rx packet to be received.
There was no field to indicate the maximum size of an LRO packet to be
received by the application.
Assuming the user configures max_rx_pkt_len as the maximum LRO packet
length when LRO is configured on the port, the PMD limits the maximum
LRO packet size received from HW to max_rx_pkt_len.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
If the mbuf size of the Rx mempool supplied by the user in the Rx setup
is unable to contain the maximum Rx packet length in addition to the
mbuf head-room, the Rx scatter offload must be configured. Otherwise,
there is not enough space in single mbuf to contain a packet with size
of the maximum Rx packet length.
The PMD did not return an error in the above mentioned case;
return an error in this case.
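A sketch of the added check in the Rx queue setup path (error handling and
variable names simplified):
    uint32_t mb_len = rte_pktmbuf_data_room_size(mp);
    uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;

    if (max_rx_pkt_len + RTE_PKTMBUF_HEADROOM > mb_len &&
        !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
        /* a single mbuf cannot hold the largest expected packet and
         * scatter was not requested: reject the configuration */
        rte_errno = ENOSPC;
        return NULL;
    }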
Fixes: 7d6bf6b866 ("net/mlx5: add Multi-Packet Rx support")
Fixes: edad38fcd0 ("net/mlx: enhance Rx scatter mode detection")
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This patch fixes the issue that LLDP packets can't be forwarded to the host.
Fixes: 59d151de66 ("net/ice: stop LLDP by default")
Cc: stable@dpdk.org
Signed-off-by: Ying A Wang <ying.a.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Prior to the fix, INFO level (RTE_LOG_INFO) messages were displayed by
default. After the fix, only NOTICE level and higher are displayed by
default and INFO level messages are not. There are INFO level vNIC config
related messages which customers and tech support currently depend on for
debugging and so on, and suddenly hiding these messages is not a good
idea.
This patch changes the default log level to RTE_LOG_INFO for enic so
messages are printed as before the fix.
Fixes: bbd8ecc054 ("net/enic: remove PMD log type references")
Signed-off-by: John Daley <johndale@cisco.com>
Add support to parse the GRE KEY for octeontx2 flows.
Matching on the GRE key will only work if the checksum and routing
bits in the GRE header are equal to 0.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch implements the read clock API, whose purpose is to return
raw clock ticks. Using this API, the real time ticks spent in
processing a packet can be known:
<read_clock value at any time> - mbuf->timestamp
Calling the mbox to read raw clock ticks in the fast path is very
expensive, so its value is derived from the time stamp counter (tsc)
using a freq multiplier (ratio of raw clock ticks and tsc) and a clock
delta (by how much the tsc lags behind the raw clock value).
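A sketch of the fast-path derivation (structure and field names are
illustrative):
    struct clk_info {
        double   clk_freq_mult; /* raw clock ticks per tsc tick */
        uint64_t clk_delta;     /* raw clock offset vs. tsc at calibration */
    };

    static inline uint64_t
    nix_read_clock(const struct clk_info *ci)
    {
        uint64_t tsc = rte_get_tsc_cycles();

        /* both fields are refreshed via the mbox outside the fast path */
        return (uint64_t)(tsc * ci->clk_freq_mult) + ci->clk_delta;
    }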
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
A huge drop in the per-core MPPS value was observed when the PTP stack is
enabled. The reason behind the bottleneck is that HW serialises the
transfer of all SQEs which request timestamp capture on the same
send DMA path. Hence only those packets which require timestamp
capture should set SETTSTAMP in the send mem alg.
With this patch, timestamping is done only for those packets
with PKT_TX_IEEE1588_TMST set.
Fixes: fb3ae0951a ("net/octeontx2: support Tx")
Fixes: 8980a15300 ("event/octeontx2: support PTP for SSO")
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The earlier implementation for enabling PTP via the Rx offload flag was
causing a segmentation fault as it was executed in the
device configuration stage, where the Rx and Tx queues were not yet
configured; in the PTP enable process Rx queues are used for
mbuf setup while Tx queues are used for send descriptor setup.
Move the logic to dev start, where all the resources are
configured.
Fixes: b5dc314044 ("net/octeontx2: support base PTP")
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
A multi-segmented packet may also be spliced with indirect mbufs.
Currently the driver causes a buffer leak for indirect mbufs as they
were not being freed to the packet pool.
This patch fixes the handling of indirect mbufs for the following use cases:
- packet contains all indirect mbufs only.
- packet contains mixed mbufs, i.e. both direct and indirect.
Fixes: cbd5710db4 ("net/octeontx2: add Tx multi segment version")
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
missed_pkts reflects the number of packets that the driver did not manage
to send.
This is a temporary situation, those packets are not freed and the
application can still retry to send them later.
Hence, we can't count them as transmit failed.
Fixes: 5f05e95cd5 ("net/vhost: fix Tx error counting")
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
err_pkts reflects the number of packets that the driver did not manage
to send.
This is a temporary situation, those packets are not freed and the
application can still retry to send them later.
Hence, we can't count them as transmit failed.
Fixes: e1e4017751 ("ring: add new driver")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
n_err reflects the number of packets that the driver did not manage to
send.
This is a temporary situation, those packets are not freed and the
application can still retry to send them later.
Hence, we can't count them as transmit failed.
Fixes: 09c7e63a71 ("net/memif: introduce memory interface PMD")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
err_pkts reflects the number of packets that the driver did not manage to
send.
This is a temporary situation, those packets are not freed and the
application can still retry to send them later.
Hence, we can't count them as transmit failed.
Fixes: 75e2bc54c0 ("net/kni: add KNI PMD")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The delta between what the application asked to receive and what was
indeed received cannot be called an error counter.
This counter is not reported anywhere; remove it.
Fixes: 75e2bc54c0 ("net/kni: add KNI PMD")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
This Tx counter has never been used.
Fixes: 9658d17da2 ("virtio: maintain stats per queue")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
This Tx counter has never been used.
Fixes: c743e50c47 ("null: new poll mode driver")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This Tx counter is now unused.
Fixes: 10edf857fd ("net/af_xdp: make reserve/submit peek/release consistent")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
This Rx counter has never been used.
Fixes: 364e08f2bb ("af_packet: add PMD for AF_PACKET-based virtual devices")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Fill the correct RETA table size in ixgbevf_dev_info_get,
so that RETA table update will be supported for VF ports.
For X540_vf and 82599_vf, since they don't support
RETA table update, set the RETA size to 0.
Fixes: 2144f6630f ("ixgbe: add redirection table size in device info")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This commit fixes an issue with the error checking in flow
MARK action. Previously, (ANY + MARK) would fail, as the
(mark_spec == 0) condition would cause an early error return,
however really it is (mark_spec != 0) that should cause the
early error return.
Flipping the binary comparison corrects the behaviour, and
(ANY + MARK) now succeeds, while (MARK + MARK) fails.
Fixes: 0bbcfc706a ("net/i40e: support MARK and RSS flow action")
Suggested-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: Mesut Ali Ergin <mesut.a.ergin@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Change loop index from uint16_t to uint32_t since max
index 65535 could be exceeded when ring size is 2k+.
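The shape of the fix, shown schematically (the loop resets the descriptor
ring memory; len and rxq stand for the queue reset helper's locals):
    uint32_t i; /* was uint16_t: with a 2K+ ring, len * sizeof(descriptor)
                 * exceeds 65535, so a 16-bit index wraps before reaching
                 * the bound */

    for (i = 0; i < len * sizeof(rxq->rx_ring[0]); i++)
        ((volatile char *)rxq->rx_ring)[i] = 0;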
Fixes: 69dd4c3d08 ("net/avf: enable queue and device")
Cc: stable@dpdk.org
Reported-by: Lei Yao <lei.a.yao@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Two cores can send multi segment packets on two different pcap ports.
Because of this, we can't have one single buffer to linearize packets.
Use rte_pktmbuf_read() to copy the packet into a buffer on the stack
and remove eth_pcap_gather_data() when necessary (if the mbuf is
contiguous, rte_pktmbuf_read() just points at the buffer address).
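A sketch of the pattern (on-stack buffer size and error handling
simplified):
    uint8_t temp_data[RTE_ETHER_MAX_JUMBO_FRAME_LEN];
    uint32_t len = rte_pktmbuf_pkt_len(mbuf);
    const uint8_t *data;

    /* returns a pointer into the mbuf when the data is contiguous,
     * otherwise copies the segments into temp_data and returns it */
    data = rte_pktmbuf_read(mbuf, 0, len, temp_data);
    if (likely(data != NULL))
        pcap_sendpacket(pcap, data, len);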
Fixes: 6db141c91e ("pcap: support jumbo frames")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
When a packet cannot be transmitted, the driver is supposed to free this
packet and report it as handled.
This is to prevent the application from retrying to send the same packet
and ending up in a liveloop since the driver will never manage to send
it.
Fixes: 49a0a2ffd5 ("net/pcap: fix possible mbuf double freeing")
Fixes: 6db141c91e ("pcap: support jumbo frames")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
If the pkt pool contains only buffers smaller than the default headroom,
then the driver will compute an invalid buffer size (negative value cast
to an uint16_t).
Rely on the mbuf api to check how much space is available in the mbuf.
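A sketch of the safer size computation (variable names illustrative):
    /* before: this underflows and wraps to a huge uint16_t value when
     * the pool mbufs are smaller than the headroom */
    buf_size = rte_pktmbuf_data_room_size(mb_pool) - RTE_PKTMBUF_HEADROOM;

    /* after: ask the mbuf itself how much room is actually left */
    len = RTE_MIN(header.caplen, rte_pktmbuf_tailroom(mbuf));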
Fixes: 6eb0ae218a ("pcap: fix mbuf allocation")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Change the verbosity of a message from ERROR to DEBUG.
This is just a debug message.
Fixes: 3e92fd4e4e ("net/bnxt: use dynamic log type")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Use rte_cpu_to_le_16/32 while parsing the hwrm command response.
Fixes: 11e5e19695 ("net/bnxt: support redirecting tunnel packets to VF")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Modified to send the tunnel redirect commands to the Chimp HWRM channel,
as Kong does not support these commands.
Fixes: 11e5e19695 ("net/bnxt: support redirecting tunnel packets to VF")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
We were trying to fill in more Rx extended stats than the size allocated
for stats, causing a segfault. Fixed this by adding an explicit check.
Rearranged the code to return statistic values in xstats_get as per the
names returned in xstats_get_names.
Fixes: f55e12f334 ("net/bnxt: support extended port counters")
Cc: stable@dpdk.org
Signed-off-by: Rahul Gupta <rahul.gupta@broadcom.com>
Signed-off-by: Santoshkumar Karanappa Rastapur <santosh.rastapur@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
This commit enables the creation of a dedicated completion
ring for asynchronous event handling instead of handling these
events on a receive completion ring.
For the stingray platform and other platforms needing tighter
control of resource utilization, we retain the ability to
process async events on a receive completion ring.
For Thor-based adapters, we use a dedicated NQ (notification
queue) ring for async events (async events can't currently
be received on a completion ring due to a firmware limitation).
Rename "def_cp_ring" to "async_cp_ring" to better reflect its
purpose (async event notifications) and to avoid confusion with
VNIC default receive completion rings.
Allow rxq 0 to be stopped when not being used for async events.
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Substitute driver-defined IS_P2ALIGNED() with EFX_IS_P2ALIGNED()
defined in libefx.
Add type argument and cast value and alignment to one specified type.
Fixes: e1b9445985 ("net/sfc: build libefx")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Substitute driver-defined P2ALIGN() with EFX_P2ALIGN() defined in
libefx.
Cast value and alignment to one specified type to guarantee result
correctness.
Fixes: e1b9445985 ("net/sfc: build libefx")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Substitute driver-defined P2ROUNDUP() with EFX_P2ROUNDUP()
defined in libefx.
Cast value and alignment to one specified type to guarantee result
correctness.
Fixes: e1b9445985 ("net/sfc: build libefx")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
Function rxq_release_rq_resources() releases resources of RQ object
created by DevX API.
This patch updates this function to properly clear the released
resources, to avoid repeated release of the same resource.
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_rxq_release() calls mlx5_release_dbr() to release the
doorbell allocated for this Rx queue.
This call is relevant only for Rx queue objects created using
DevX API.
This patch adds the required check, to call mlx5_release_dbr()
only when relevant.
It also updates mlx5_release_dbr() to use the input offset correctly.
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
For VXLAN/NVGRE packets, the VNI/TNI should be included in the matching
keys. This patch fixes this issue.
Fixes: d76116a467 ("net/ice: add generic flow API")
Signed-off-by: Ying A Wang <ying.a.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The memzone's start virtual address pointer (addr) is of type void *,
so there is no need to add a type cast.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
This patch fixes an X722 VF problem where received packets don't have
a HASH value.
1) The packet classifier types update should support X722 VF, not only
the X722 PF;
2) The MAC type is invalid for X722 VF when setting the packet classifier
type, so move it after the MAC type is set correctly.
Fixes: a286ebeb07 ("net/i40e: add dynamic mapping of SW flow types to HW pctypes")
Cc: stable@dpdk.org
Signed-off-by: Peng Huang <peng.huang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When the VF configuration is larger than the number of queues reserved
by the PF, the VF sends the request queues command through the admin
queue. When the PF receives this command, it may reset the VF and send a
notification before resetting. If this notification is read by the timed
task alarm, the request queues task will lose the notification. This patch
prevents the two tasks from running simultaneously.
Fixes: ee653bd800 ("net/i40e: determine number of queues per VF at run time")
Cc: stable@dpdk.org
Signed-off-by: Tao Zhu <taox.zhu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
There was an issue with ice_and_bitmap and ice_or_bitmap when
dealing with bit array sizes that are not even multiples of 32,
where some of the relevant bits in the highest 32 bits were being
cleared. This patch fixes those problems.
Fixes: c9e37832c9 ("net/ice/base: rework on bit ops")
Cc: stable@dpdk.org
Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Clean up hardware register macros in ice_auto_generator.h.
Fixes: 51c7f09f3f ("net/ice/base: add registers for Intel E800 Series NIC")
Cc: stable@dpdk.org
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Use __func__ instead of the function name in ice_debug calls.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Change the ptype variable in the ice_prof_map structure to correctly be
16 bits.
Fixes: 51d04e4933 ("net/ice/base: add flexible pipeline module")
Cc: stable@dpdk.org
Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
We don't free s_rule if ice_aq_sw_rules() returns a non-zero status. If
it returned a zero status, s_rule would be freed right after, so this
implies it should be freed within the scope of the function regardless.
Fixes: c7dd159311 ("net/ice/base: add virtual switch code")
Cc: stable@dpdk.org
Signed-off-by: Jeb Cramer <jeb.j.cramer@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
The dummy packets for GRE were set up for IP, but not inner
TCP or UDP. There are some applications that want to be
able to parse on those inner L4 headers so add them to
the dummy packets.
Also, the GRE dummy packet was formatted differently from
the other dummy packets so change the formatting to match
all the other dummy packets.
Fixes: 839c0a4b77 ("net/ice/base: enable additional switch rules")
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
A unit hang may occur if multiple descriptors are available in the rings
during reset or close. This state can be detected by checking the
configure status bit 8 in the register. If the bit is set and there are
pending descriptors in one of the rings, we must flush them before reset
or close.
Fixes: 805803445a ("e1000: support EM devices (also known as e1000/e1000e)")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Add a missing return after setting the error status in case of an
invalid flush_flag in the operation.
The issue was found by the Coverity scan, as the fin_flush variable,
not initialized in such a case, was used later in the flow.
Coverity issue: 340859
Fixes: c7b436ec95 ("compress/zlib: support burst enqueue/dequeue")
Cc: stable@dpdk.org
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Some builds with clang report an error because '<>' rather than '""' were
used for including the ioat spec header file.
Target: x86_64-native-bsdapp-clang
error: 'rte_ioat_spec.h' file not found with <angled> include; use "quotes" instead
#include <rte_ioat_spec.h>
^~~~~~~~~~~~~~~~~
"rte_ioat_spec.h"
1 error generated.
Since this file should always be in the same directory as the main header,
we can safely change the include line to fix this error.
Fixes: abff4333ec ("raw/ioat: create device on probe and destroy on release")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
NVGRE has a GRE header with c_rsvd0_ver value 0x2000 and protocol
value 0x6558.
These should be matched when item_nvgre is provided.
This patch adds validation function of NVGRE item.
It also updates the translate function of NVGRE item, to add the
required values, if they were not specified.
Original work by Xiaoyu Min <jackmin@mellanox.com>
Fixes: fc2c498ccb ("net/mlx5: add Direct Verbs translate items")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Xiaoyu Min <jackmin@mellanox.com>
An LRO message is contained in MPRQ strides.
While the LRO message size cannot be bigger than 65280 bytes according to
the PRM, the strides which contain it may be bigger than the maximum
buffer size allowed in a DPDK mbuf - 0xFFFF.
Adjust the maximum LRO message size to avoid buffer length overflow.
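A sketch of the clamping (variable names illustrative; the exact constant
used by the PMD may differ):
    /* the PRM allows LRO messages up to 65280B, but an mbuf buffer
     * length field is 16 bits, so never program more than one buffer
     * can describe */
    max_lro_size = RTE_MIN(max_lro_size, (uint32_t)UINT16_MAX);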
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
An LRO packet may consume all the stride memory, hence the PMD cannot
guarantee head-room for the LRO mbuf.
The issue is a lack of HW support to write the packet at an offset from
the stride start.
A new striding RQ feature may be added in CX6 DX to allow head-room and
tail-room for the LRO strides.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When LRO offload is configured in an Rx queue, the HW may coalesce TCP
packets from the same TCP connection into a single packet.
In this case the SW should fix the relevant packet headers because the
HW doesn't update them according to the newly created packet
characteristics.
Add header update code to the MPRQ Rx burst function to support the LRO
feature.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Update the CQE structure to include LRO fields.
Some reserved values were changed, hence the data-path code that used the
reserved values was also updated accordingly.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
In preparation for the LRO support, when a packet can consume all the
stride memory, the external mbuf shared information can no longer reside
at the end of the stride, because the HW may write the packet data to
all the stride memory.
Move the shared information memory from the stride to the control
memory of the external mbuf.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Implement LRO support using a single RQ object per DPDK RxQ.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_rxq_obj_new(), previously called mlx5_rxq_ibv_new(),
supports creating Rx queue objects using verbs.
This patch expands the relevant functions, to support creating
verbs or DevX Rx queue objects:
Function mlx5_rxq_obj_new() updated to create RQ object using DevX.
Function mlx5_ind_table_obj_new() updated to create RQT object using DevX.
Function mlx5_hrxq_new() updated to create TIR object using DevX.
New utility functions added to perform specific operations:
mlx5_devx_rq_new(), mlx5_devx_wq_attr_fill(),
mlx5_devx_create_rq_attr_fill().
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Verbs WQ for RxQ is created inside function mlx5_rxq_obj_new().
This patch moves the creation of verbs WQ to dedicated function
mlx5_ibv_wq_new().
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Verbs CQ for RxQ is created inside function mlx5_rxq_obj_new().
This patch moves the creation of CQ to dedicated function
mlx5_ibv_cq_new().
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_alloc_shared_ibctx() allocates Protection Domain using
verbs API, as part of shared IB device context.
This patch adds reading and storing of pdn value from the created PD
object, using DV API.
The pdn value is required when creating WQ using DevX API.
This patch also updates function flow_dv_create_counter_stat_mem_mng()
which uses the pdn value as well.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_queue_state_modify_primary() was implemented to handle
state change for queues created using Verbs API.
This patch updates function mlx5_queue_state_modify_primary() to
support state change of an RQ object created using the DevX API.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Prepare for introducing the use of the DevX TIR object.
Hash Rx queue is currently created using verbs QP only.
The next patches will add the option to create it with a TIR object
using DevX.
This patch renames hrxq_ibv to hrxq wherever relevant, and adds
the DevX items to relevant structs.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Prepare for introducing the DevX RQT object.
Rx indirection table object is currently created using verbs only.
The next patches will add the option to create an RQT object using
DevX.
This patch renames ind_table_ibv to ind_table_obj wherever relevant,
and adds the DevX items to relevant structs.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Prepare for introducing the DevX RxQ object.
RxQ object is currently created using verbs only.
The next patches will add the option to create RxQ object using DevX.
This patch renames rxq_ibv to rxq_obj wherever relevant, and adds the
DevX items to relevant structs.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When using DevX API, memory for door-bell records should be allocated
by PMD and registered using DevX API.
This patch implements the utility functions to support it:
- Add struct mlx5_devx_dbr_page, containing door-bells page data.
- Add list of struct mlx5_devx_dbr_page door-bell pages to device
private data.
- Implement function mlx5_alloc_dbr_page() to allocate page for
door-bell records, and register it using DevX API.
- Implement function mlx5_get_dbr() to acquire a door-bell record
from the door-bells page, allocating a new page if needed.
- Implement function mlx5_release_dbr() to release a door-bell
record that is no longer needed, freeing the containing page if
it becomes empty.
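A conceptual sketch of such a door-bell record allocator (the structure,
names and sizes are illustrative assumptions, not the actual mlx5
definitions):

#include <stdint.h>

#define DBR_PER_PAGE 512

/* A page holding 64-bit door-bell records plus a bitmap of free slots. */
struct dbr_page {
        uint64_t records[DBR_PER_PAGE];
        uint64_t free_bitmap[DBR_PER_PAGE / 64]; /* 1 = record is free */
};

/* Acquire a record: return its index, or -1 if the page is full. */
static int
dbr_page_get(struct dbr_page *page)
{
        for (unsigned int i = 0; i < DBR_PER_PAGE / 64; i++) {
                if (page->free_bitmap[i] == 0)
                        continue;
                unsigned int bit = __builtin_ctzll(page->free_bitmap[i]);
                page->free_bitmap[i] &= ~(1ULL << bit);
                return (int)(i * 64 + bit);
        }
        return -1; /* caller allocates and registers a new page */
}

/* Release a record so it can be reused; an empty page can be freed. */
static void
dbr_page_put(struct dbr_page *page, unsigned int idx)
{
        page->free_bitmap[idx / 64] |= 1ULL << (idx % 64);
}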
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Implement function mlx5_devx_cmd_create_rqt() to create RQT
object using DevX API.
Add related structs in mlx5.h and mlx5_prm.h.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Implement function mlx5_devx_cmd_create_tir() to create TIR
object using DevX API.
Add related structs in mlx5.h and mlx5_prm.h.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Implement function mlx5_devx_cmd_modify_rq() to modify RQ.
Add related structs in mlx5.h and mlx5_prm.h.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Implement function mlx5_devx_cmd_create_rq() to create RQ object using
DevX API.
Add related structs in mlx5.h and mlx5_prm.h.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Update function mlx5_txq_ibv_new() to query and store the TIS
transport domain value.
It is required later on the Rx side when creating a matching TIR.
Add field in mlx5 data structure to store Transport Domain ID.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Use DevX API to read device LRO capabilities.
Check if LRO is supported and can be enabled.
Check if MPRQ is supported and can be used.
Enable MPRQ for LRO use if not enabled by user.
Added note for mlx5_mprq_enabled(), to emphasize that LRO
enables MPRQ.
Disable CQE compression and CRC stripping if LRO is enabled.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Add compile option HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR, and matching
dest_tir flag in device configuration structure.
Add glue function pointer dv_create_flow_action_dest_devx_tir, and
function mlx5_glue_dv_create_flow_action_dest_devx_tir(),
to invoke API mlx5dv_dr_action_create_dest_devx_tir().
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Add function mlx5_glue_devx_qp_query().
Add glue function pointer devx_qp_query to run it.
Glue version updated to 19.08.0.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Add command-line argument to set LRO session timeout.
Add LRO settings struct in PMD configuration struct.
Add support of LRO offload in port configuration.
Add macros and function to check if LRO is supported and enabled.
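A usage sketch; the devarg key name here (lro_timeout_usec) and the
testpmd --enable-lro option are assumptions for illustration - the
driver documentation added by this patch is the authoritative reference:

testpmd -w 0000:82:00.0,lro_timeout_usec=16 -- -i --enable-lro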
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
A variable of type struct ibv_cq_ex is declared in 2 unions, but
isn't used.
This patch removes the 2 redundant declarations.
Fixes: 6218063b39 ("net/mlx5: refactor Rx data path")
Fixes: 1d88ba1719 ("net/mlx5: refactor Tx data path")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
We should tear down the FDIR when the last flow is destroyed. The
current logic is the opposite of the expected behavior; this patch fixes
the issue.
Fixes: 2e67a7fbf3 ("net/i40e: config flow director automatically")
Cc: stable@dpdk.org
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
i40e FDIR doesn't allow creating a flow with empty spec and mask for
the ethertype pattern. Without this patch, the flow below would be
created successfully, which is unexpected.
> flow create 0 ingress pattern eth / end actions drop / end
Fixes: 7d83c152a2 ("net/i40e: parse flow director filter")
Cc: stable@dpdk.org
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
When configure returned an error, e.g. in case of unsupported offloads
(outer IP and SCTP), the driver released the Rx and Tx queues. Then, on a
subsequent correct configuration, the driver could not start because the
queues had already been released, while the driver thought it was
configured correctly.
Secondly, if the driver returns an error from configuration,
librte_ethdev will release the Rx and Tx queues without changing the
driver configured state.
Fix that by 'releasing' the configuration and changing the driver state
when an error is returned from otx2_nix_configure.
Fixes: 548b5839a3 ("net/octeontx2: add device configure operation")
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Reviewed-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Set RTE_ETH_DEV_CLOSE_REMOVE upon probe so all the resources
for the port can be freed by rte_eth_dev_close()
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
The queue index was incorrectly incremented with the port, which
caused incorrect queue packet placement. This manifested
when the port number was != 0.
Fixes: c33d45af36 ("net/ark: add Tx initial version")
Cc: stable@dpdk.org
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
This patch resets frc and ctrl in sg tx fd to avoid corruption.
Fixes: 774e9ea919 ("net/dpaa2: add support for multi seg buffers")
Cc: stable@dpdk.org
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch adds a check to return an error, as the handling
of external buffer packets with SG is currently missing.
Fixes: 37f9b54bd3 ("net/dpaa: support Tx and Rx queue setup")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The associated device index is retrieved via a Netlink request to the
underlying Infiniband device driver. This network device index
is permanent throughout the lifetime of the device. We do not
spawn the rte_eth_dev ports without an associated network device, and
if the network device is being unbound we get the remove notification
message and the rte_eth_dev port is also detached. So, we may store
the ifindex in the mlx5_device_spawn() routine at rte_eth_dev port
creation and initialization time and use the cached value further on,
instead of doing an actual Netlink request.
Reported-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
This patch fills the tx_desc_lim.nb_seg_max and
tx_desc_lim.nb_mtu_seg_max fields of the rte_eth_dev_info
structure to report the maximal number of packet
segments; the requested inline data configuration is
taken into account in a conservative way.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
This patch adds the implementation of the tx_burst routine template.
The template supports all Tx offloads, and multiple optimized
tx_burst routines can be generated by the compiler from this one.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Mellanox NICs support a wide set of Tx offloads. The supported
offloads are reported by the mlx5 PMD in the rte_eth_dev_info
tx_offload_capa field.
An application may choose any combination of supported offloads
and configure the device appropriately. Some Tx offloads may not
be requested by the application, or even all of them may be omitted.
Most of the Tx offloads require some code branches in the tx_burst
routine to support them. If a Tx offload is not requested, the
tx_burst routine code may be significantly simplified and consume
fewer CPU cycles.
For example, if the application does not engage the TSO offload, this
code can be omitted; if multi-segment packets are not expected, the
tx_burst routine may assume single-mbuf packets only, etc.
Currently, the mlx5 PMD implements multiple tx_burst subroutines
for the most common combinations of requested Tx offloads, each branch
having its own dedicated implementation. It is not very easy to update,
support and develop such code - multiple branches impose
multiple points to maintain. Also, many of the frequently requested
offload combinations are not supported yet. That leads to selecting a
not completely matching tx_burst routine and harms performance.
This patch introduces a new approach for the tx_burst code. It is
proposed to develop a unified template for the tx_burst routine, which
supports all the Tx offloads and takes a compile-time defined parameter
describing the supposed set of supported offloads. On the basis
of this template, the compiler is able to generate multiple tx_burst
routines highly optimized for the statically specified set of
Tx offloads.
Next, at runtime, at Tx queue configuration, the best matching optimized
implementation of tx_burst is chosen.
This patch intentionally omits the template internal implementation
and just introduces the template itself to emphasize the approach of
the multiple specially tuned tx_burst routines.
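A schematic illustration of the approach (names and flags are invented
for the example, not the mlx5 code): a single always-inline template is
specialized with constant offload flags, so the compiler drops the
unused branches in every generated routine.

#include <stdint.h>
#include <rte_common.h>
#include <rte_mbuf.h>

#define OLX_TSO  (1u << 0) /* example offload flags */
#define OLX_MSEG (1u << 1)

static __rte_always_inline uint16_t
txq_burst_tmpl(void *txq, struct rte_mbuf **pkts, uint16_t n,
               const unsigned int olx /* constant per instantiation */)
{
        uint16_t sent = 0;

        for (uint16_t i = 0; i < n; i++) {
                if (olx & OLX_MSEG) {
                        /* multi-segment handling compiled in only here */
                }
                if (olx & OLX_TSO) {
                        /* TSO handling compiled in only here */
                }
                /* descriptor building common to all variants */
                sent++;
        }
        RTE_SET_USED(txq);
        RTE_SET_USED(pkts);
        return sent;
}

/* The compiler emits one optimized routine per requested offload set. */
uint16_t
tx_burst_none(void *q, struct rte_mbuf **p, uint16_t n)
{
        return txq_burst_tmpl(q, p, n, 0);
}

uint16_t
tx_burst_tso_mseg(void *q, struct rte_mbuf **p, uint16_t n)
{
        return txq_burst_tmpl(q, p, n, OLX_TSO | OLX_MSEG);
}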
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
This patch updates the Tx datapath control and configuration
structures and code for managing Tx datapath settings.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
This patch extends the NIC attributes query via DevX.
The appropriate interface structures are borrowed from
kernel driver headers and DevX calls are added to
mlx5_devx_cmd_query_hca_attr() routine.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
This patch updates Tx datapath definitions, mostly hardware related.
The Tx descriptor structures are redefined with the required fields, and
size definitions are renamed to reflect their meanings in a more
appropriate way. This is a preparation step before introducing
the new Tx datapath implementation.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
This patch introduces new mlx5 PMD devarg options:
- txq_inline_min - specifies the minimal amount of data to be inlined into
the WQE during Tx operations. NICs may require this minimal data amount
to operate correctly. The exact value may depend on the NIC operation
mode, requested offloads, etc.
- txq_inline_max - specifies the maximal packet length to be completely
inlined into the WQE Ethernet Segment for the ordinary SEND method. If
the packet is larger than the specified value, the packet data won't be
copied by the driver at all and the data buffer is addressed with a
pointer. If the packet length is less than or equal, all packet data
will be copied into the WQE.
- txq_inline_mpw - specifies the maximal packet length to be completely
inlined into the WQE for the Enhanced MPW method.
Driver documentation is also updated.
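A hypothetical invocation combining these keys on the EAL device
whitelist (the PCI address and values below are placeholders, not
recommended settings):

testpmd -w 0000:82:00.0,txq_inline_min=18,txq_inline_max=256,txq_inline_mpw=256 -- -i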
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
This patch removes the existing Tx datapath code
as preparation step before introducing the new
implementation. The following entities are being
removed:
- deprecated devargs support
- tx_burst() routines
- related PRM definitions
- SQ configuration code
- Tx routine selection code
- incompatible Tx completion code
The following devargs are deprecated and ignored:
- "txq_inline" is going to be converted to "txq_inline_max"
for compatibility reasons
- "tx_vec_en"
- "txqs_max_vec"
- "txq_mpw_hdr_dseg_en"
- "txq_max_inline_len" is going to be converted
to "txq_inline_mpw" for compatibility reasons
The deprecated devarg keys are recognized by the PMD
and ignored/converted to the new ones in order not
to block device probing.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
A spelling mistake was found in a comment.
This patch fixes it.
Fixes: 8e49376400 ("net/mlx4: add external allocator for Verbs object")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
In functions flow_dv_translate() and flow_dv_validate(), the flow
items are scanned and each item is marked in item_flags bitmap.
The code handling some of the items was ported from another project,
where items are marked in a slightly different manner.
This patch fixes the setting of items in bitmap, adapting it to the
required manner.
Fixes: d53aa89aea ("net/mlx5: support matching on ICMP/ICMP6")
Fixes: 5865955ad994 ("net/mlx5: match GRE key and present bits")
Fixes: 2e4c987aad ("net/mlx5: validate Direct Rule E-Switch")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Xiaoyu Min <jackmin@mellanox.com>
Adding support for ipv6_ext header parsing in the octeontx2 flow.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add build-time assertions on fast path fields that
depend on their positions and values.
Fixes: f1eff76ab6 ("net/octeontx2: add Rx vector version")
Fixes: ddc1bc26e9 ("net/octeontx2: add Tx vector version")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
All log messages should use driver logtype. RTE_LOGTYPE_PMD is
planned to be deprecated in the future.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The TAILQ_FOREACH macro is not safe for removing elements
while iterating over tailq lists. Replace it with
TAILQ_FOREACH_SAFE.
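A generic sketch of the safe-iteration pattern (assuming a
TAILQ_FOREACH_SAFE definition is available, as in BSD sys/queue.h; the
list and element types are illustrative):

#include <stdlib.h>
#include <sys/queue.h>

struct flow_entry {
        TAILQ_ENTRY(flow_entry) node;
};
TAILQ_HEAD(flow_list, flow_entry);

#ifndef TAILQ_FOREACH_SAFE /* not provided by every libc queue.h */
#define TAILQ_FOREACH_SAFE(var, head, field, tvar)              \
        for ((var) = TAILQ_FIRST((head));                       \
             (var) && ((tvar) = TAILQ_NEXT((var), field), 1);   \
             (var) = (tvar))
#endif

static void
flush_flows(struct flow_list *list)
{
        struct flow_entry *entry, *next;

        /* Safe against removal: 'next' is read before 'entry' is freed. */
        TAILQ_FOREACH_SAFE(entry, list, node, next) {
                TAILQ_REMOVE(list, entry, node);
                free(entry);
        }
}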
Fixes: d76116a467 ("net/ice: add generic flow API")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Intel® 100/200 Series Chipset platforms reduced the round-trip
latency for LAN Controller DMA accesses, causing in some high
performance cases a buffer overrun while the I219 LAN Connected
Device is processing the DMA transactions. I219LM and I219V devices
can fall into an unrecovered Tx hang under very stressful UDP traffic
and multiple reconnections of the Ethernet cable. This Tx hang of the LAN
Controller is only recovered if the system is rebooted. Slightly slow
down DMA access by reducing the number of outstanding requests.
This workaround could have an impact on TCP traffic performance
on the platform. Disabling TSO eliminates the performance loss for TCP
traffic without a noticeable impact on CPU performance.
Please, refer to I218/I219 specification update:
https://www.intel.com/content/www/us/en/embedded/products/networking/
ethernet-connection-i218-family-documentation.html
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The vector mode transmit path does not currently support VLAN tag
insertion, so we need to disable vector transmit when transmit
VLAN insertion offload is enabled.
Fixes: bc4a000f2f ("net/bnxt: implement SSE vector mode")
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Rearm will inform the hardware that the current interrupts are processed
and it can continue to send more.
Fixes: 1fe427fd08 ("net/bnxt: support enable/disable interrupt")
Cc: stable@dpdk.org
Signed-off-by: Rahul Gupta <rahul.gupta@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The receive interrupt vector should be offset by the constant
RTE_INTR_VEC_RXTX_OFFSET; otherwise setting up some queue interrupts
will fail.
Fixes: 1fe427fd08 ("net/bnxt: support enable/disable interrupt")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Tested-by: Rahul Gupta <rahul.gupta@broadcom.com>
Added support for new thor device.
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
HWRM_CHECK_RESULT macro is used to check the status of HWRM commands.
Fixes: 18c2854b96 ("net/bnxt: configure a default VF VLAN")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
We were adding the VLAN filters to all the VNICs of the function.
Also, we were adding these VLANs to all the existing MAC-only filters.
This was resulting in fewer VLANs getting added. By default we should
allocate a MAC+VLAN filter only to the default VNIC of the function,
using the default MAC address.
Similar logic was followed in the VLAN deletion code. This patch fixes
it. Use the inner VLAN fields instead of the outer VLAN during filter
deletion to be in sync with the VLAN addition code.
Fixes: 246c5cc5f0 ("net/bnxt: use correct flags during VLAN configuration")
Cc: stable@dpdk.org
Signed-off-by: Santoshkumar Karanappa Rastapur <santosh.rastapur@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Avoid overrun in rte_eth_stats struct when the number of tx/rx
rings in use is greater than RTE_ETHDEV_QUEUE_STAT_CNTRS.
Fixes: 57d5e5bc86 ("net/bnxt: add statistics")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The current implementation erroneously passes the address of the
beginning of the RSS table for each 64-entry context instead of the
address appropriate for that context. This results
in only the first 64 receive queues being used. Fix by passing the
correct address for each context.
Fixes: 38412304b5 ("net/bnxt: enable RSS for thor-based controllers")
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
BCM57500-based adapters use a variable number of RSS contexts
depending upon the number of receive rings in use. The current
implementation is erroneously using the maximum possible number
of RSS contexts instead of the actual number allocated when
setting up RSS tables in the adapter. Fix by using the actual
number of allocated contexts.
Fixes: 38412304b5 ("net/bnxt: enable RSS for thor-based controllers")
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
In bnxt_hwrm_vnic_rss_cfg_thor, we were exiting if hash_type is 0.
This was preventing RSS from getting disabled. Fix it by removing the
check for hash_type while configuring RSS.
Fixes: 38412304b5 ("net/bnxt: enable RSS for thor-based controllers")
Signed-off-by: Santoshkumar Karanappa Rastapur <santosh.rastapur@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
HWRM response was parsed after releasing the spinlock.
Fixes: 19e6af01bb ("net/bnxt: support get/set EEPROM")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Fixed the return values of a few routines to return standard error
codes. Also changed a few error logs to be more meaningful.
Fixes: 804e746c7b ("net/bnxt: add hardware resource manager init code")
Fixes: e3d8f1e6a6 ("net/bnxt: cache address of doorbell to subsequent access")
Fixes: 19e6af01bb ("net/bnxt: support get/set EEPROM")
Fixes: b7435d660a ("net/bnxt: add ntuple filtering support")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
HWRM_CHECK_RESULT() checks the return value of the HWRM command and
returns in case the command fails. There is no need for a return value
check after HWRM_CHECK_RESULT().
Fixes: 49947a13ba ("net/bnxt: support Tx loopback, set VF MAC and queues drop")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
rte_intr_callback_unregister() can fail if the handler happens to
be active at the time of the call. Add logic to retry a reasonable
number of times to help ensure that the callback is unregistered
on uninit.
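A sketch of the retry pattern described above (the callback, argument,
retry limit and delay are placeholders, not the actual driver code):

#include <rte_interrupts.h>
#include <rte_cycles.h>

static void
unregister_irq_cb(struct rte_intr_handle *intr_handle,
                  rte_intr_callback_fn cb, void *cb_arg)
{
        int retries = 50;

        do {
                /* A negative value means the callback is still active. */
                if (rte_intr_callback_unregister(intr_handle, cb, cb_arg) >= 0)
                        return;
                rte_delay_ms(1);
        } while (--retries > 0);
}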
Fixes: 7bc8e9a227 ("net/bnxt: support async link notification")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Use the completion ring associated with the default Rx ring
when configuring the default completion ring ID instead
of the async completion ring ID.
Fixes: f8168ca0e6 ("net/bnxt: support thor controller")
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
If interrupt registration fails during device init, driver invokes
uninit which in turn causes error messages while trying to free
vnic filters. Fix this by moving filter initialization call before
interrupt registration.
Fixes: 1b533790f4 ("net/bnxt: avoid invalid vnic id in set L2 Rx mask")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
1. bnxt_dev_init() invokes bnxt_dev_uninit() on failure. So there is
no need to do individual function cleanups in failure path.
2. rearrange the check for primary process to remove an unwanted goto.
3. fix to invoke bnxt_hwrm_func_buf_unrgtr() in bnxt_dev_uninit() when
it is needed.
Fixes: b7778e8a1c ("net/bnxt: refactor to properly allocate resources for PF/VF")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
1. The default filter is tied to VNIC 0 at index 0. After finding the
filter with mac_index 0 and setting the new MAC address, looping through
the remaining filters is unnecessary.
2. Added a check for a NULL MAC address.
3. bnxt_hwrm_set_l2_filter() clears the existing filter configuration
first before applying the new filter settings. Hence there is no need to
invoke bnxt_hwrm_clear_l2_filter() explicitly in
bnxt_set_default_mac_addr_op().
Fixes: d69851df12 ("net/bnxt: support multicast filter and set MAC addr")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
There is an unconditional delay in the link update op.
Fixed it to wait only if waiting for request completion is set.
Fixes: 7bc8e9a227 ("net/bnxt: support async link notification")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
The HWRM command to add a MAC address can fail. The driver should check
the return value of the HWRM command and do the housekeeping properly.
Fixes: 778b759ba1 ("net/bnxt: add MAC address")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
rte_mem_virt2iova() function returns RTE_BAD_IOVA on failure, not zero.
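A minimal sketch of the corrected check (the surrounding function is
hypothetical):

#include <rte_memory.h>

static int
map_buffer(const void *addr)
{
        rte_iova_t iova = rte_mem_virt2iova(addr);

        /* Failure is signalled by RTE_BAD_IOVA, not by 0. */
        if (iova == RTE_BAD_IOVA)
                return -1;
        /* ... program the IOVA into the device ... */
        return 0;
}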
Fixes: 62196f4e09 ("mem: rename address mapping function to IOVA")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
1. During port start, if bnxt_init_chip() returns an error,
bnxt_dev_start_op() invokes bnxt_shutdown_nic() which in turn calls
bnxt_free_all_hwrm_resources() to free up resources. Hence remove the
bnxt_free_all_hwrm_resources() call from the bnxt_init_chip() failure path.
2. Fix to check the return value of rte_intr_enable() as this call
can fail.
3. Set bp->dev_stopped to 0 only when port start succeeds.
4. Handle failure cases in the bnxt_init_chip() routine to do proper
cleanup and return the correct error value.
Fixes: b7778e8a1c ("net/bnxt: refactor to properly allocate resources for PF/VF")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
When the NVM API version is 1.7 or above, the adminq operation to set
TPID is marked as supported. This causes the adminq to be used instead
of register setting.
For SFP X722 with FW4.16, the reported NVM API version is 1.8, which
causes the adminq operation to be marked as supported although it is not
supported on FW4.16. An additional check is added for SFP X722 to not
enable the adminq operation.
Fixes: 73cd7d6dc8 ("net/i40e: use set switch AQ instead of register setting")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Reviewed-by: Haiyue Wang <haiyue.wang@intel.com>
Add return value check for i40e_vsi_delete_mac call in
rte_pmd_i40e_remove_vf_mac_addr as per coverity issue.
Coverity issue: 277224
Fixes: e0cb96204b ("net/i40e: add support for representor ports")
Cc: stable@dpdk.org
Signed-off-by: Herakliusz Lipiec <herakliusz.lipiec@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
When setting a flow IPv6 TC rule, ice_get_flow_field wrongly sets an
error. This patch fixes the issue.
Fixes: d76116a467 ("net/ice: add generic flow API")
Signed-off-by: Ying A Wang <ying.a.wang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Action is a list. We should check each element of the action
rather than only the first one.
This patch fixes the issue.
Fixes: d76116a467 ("net/ice: add generic flow API")
Signed-off-by: Ying A Wang <ying.a.wang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
ice_get_flow_field should not set an error if item->type is
RTE_FLOW_ITEM_TYPE_VOID.
This patch fixes the issue.
Fixes: d76116a467 ("net/ice: add generic flow API")
Signed-off-by: Ying A Wang <ying.a.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When a secondary process is run, the log always contained complaints
about being unable to parse devargs.
It turns out that an empty devargs turns into "" and this
value is not parsable. Change the failsafe secondary to just
skip devargs processing if it is empty.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
The generic RTE_LOGTYPE_PMD is a historical relic and should
not be used. Bonding driver was still using it in one place.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add checking for vnic id before sending message to chimp in
bnxt_hwrm_vnic_plcmode_cfg().
Fixes: db678d5c2b ("net/bnxt: add HWRM VNIC configure")
Cc: stable@dpdk.org
Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
If ntuple filtering is disabled, the FW will return max_vnics=1.
Due to this, only a single Rx queue is created.
Change to max_rx_rings = RTE_MIN(bp->max_rx_rings, bp->max_stat_ctx) to
fix it.
Fixes: 6d8109bcb3 ("net/bnxt: check VF resources if resource manager is enabled")
Cc: stable@dpdk.org
Signed-off-by: Qingmin Liu <qingmin.liu@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
The cleanup/rollback operation post rte_eth_dev_start failure might end
up invoking an HWRM cmd even on an invalid vNIC resulting in error
messages being logged needlessly.
Fix to check for the same before issuing the HWRM cmd.
Fixes: c09f57b49c ("net/bnxt: add start/stop/link update operations")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Avoid null pointer dereference when allocating an insulated
completion ring by basing nq ring allocation on whether an
nq ring was requested instead of whether the device supports
nq rings.
Fixes: f8168ca0e6 ("net/bnxt: support thor controller")
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Simplify nq doorbell handling code by removing redundant db
parameter and consolidating NQ doorbell macro into the inline
function that uses it.
Add "enable interrupt" variant of nq write. This will be used
in a subsequent commit.
When initializing nq doorbell, don't assume that only the
"disable interrupt" form will be used.
Fixes: f8168ca0e6 ("net/bnxt: support thor controller")
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Save the EM flow count returned by the FW in HWRM_FUNC_QCFG
and use it to calculate the overall pool of L2 contexts supported by the FW.
Fixes: 6d8109bcb3 ("net/bnxt: check VF resources if resource manager is enabled")
Cc: stable@dpdk.org
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
For Tx ring #104 and higher, the doorbell register was incorrectly
configured, due to which the FW was not able to receive the notification
of packets to transmit.
With this fix, the user can run traffic on up to 256 rings.
Fixes: 6eb3cc2294 ("net/bnxt: add initial Tx code")
Cc: stable@dpdk.org
Signed-off-by: Rahul Gupta <rahul.gupta@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Move call site of bnxt_rxq_vec_setup() to ensure that rxq->rxrearm_nb
and rxq->rxrearm_start are reinitialized correctly when a port is
restarted.
Fixes: bc4a000f2f ("net/bnxt: implement SSE vector mode")
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Christopher Reder <christopher.reder@broadcom.com>
Initialize the state of the completion valid indicator
when a completion ring is freed, otherwise completions may
not be processed when a new ring is allocated.
Fixes: 5735eb2419 ("net/bnxt: support Tx batching")
Cc: stable@dpdk.org
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
VF driver should not fail probe if the host PF driver has not assigned
any MAC address for the VF. It should generate a random MAC address and
configure the MAC and then continue probing the device.
Fixes: be160484a4 ("net/bnxt: check if MAC address is all zeros")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Fixed a couple of possible segfaults due to NULL pointer
dereference in case of probe failure.
Fixes: c09f57b49c ("net/bnxt: add start/stop/link update operations")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
1. Refactor the stats allocation code into a new routine.
2. The check for extended statistics support depends on "hwrm_spec_code",
which is set in bnxt_hwrm_ver_get called later. Hence we were never
querying extended port stats as the flags field was not updated. Fixed
this by moving the stats allocation after the call to
bnxt_hwrm_ver_get.
3. We were incorrectly passing the host address used for port
statistics to the PORT_QSTATS_EXT command. Fixed this by passing the
correct extended stats address.
Fixes: f55e12f334 ("net/bnxt: support extended port counters")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Previously in the PCAP PMD, queues had to be defined as RxQ and TxQ
pairs, even if the need is only Rx or only Tx:
"--vdev net_pcap0,tx_pcap=tx.pcap,rx_pcap=rx.pcap"
The following commit enabled providing only the Rx queue; if the Tx queue
is not provided the PMD drops the Tx packets automatically:
Commit a3f5252e5c ("net/pcap: enable infinitely Rx a pcap file")
"--vdev net_pcap0,rx_pcap=rx.pcap"
This commit enables the same thing for the Rx queue: the user no longer
has to provide an Rx queue (rx_iface or rx_pcap); in this case a dummy Rx
burst function is used which doesn't return any packets at all:
"--vdev net_pcap0,tx_pcap=tx.pcap"
This makes the use case of only saving packets to a pcap file easy.
When both Rx and Tx queues are missing, the PMD will return an error.
(A single interface is still supported: "--vdev net_pcap0,iface=eth0")
Signed-off-by: Aideen McLoughlin <aideen.mcloughlin@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Recent modifications to the admin command queue polling logic
did not support 32-bit applications. Updated the driver to
work for both 32-bit and 64-bit applications.
Fixes: 3adcba9a89 ("net/ena: update HAL to the newer version")
Cc: stable@dpdk.org
Signed-off-by: David Harton <dharton@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
In case the asynchronous devx commands are not supported in the RDMA
core, fall back to a basic counter management.
Here, the PMD counters cache is redundant and the host thread doesn't
update it. Hence, each counter operation will go to the FW and the
acceleration is reduced.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
All the DV counters are cached in the PMD memory and are contained in
pools which are contained in containers according to the counters
allocation type - batch or single.
Currently, the flow counter query is done synchronously at pool
resolution, meaning that on a user request a FW command is triggered to
read all the counters in the pool.
A new devX feature to asynchronously read a batch of flow counters
allows accelerating the user query operation.
Using the DPDK host thread, the PMD periodically triggers an
asynchronous query at pool resolution for all the counter pools, and an
interrupt is triggered by the FW when the values are updated.
In the interrupt handler the pool counter values raw data is replaced
using a double buffer algorithm (very fast).
On a user query, the PMD just returns the last query values from the
PMD cache - no system calls and FW commands are triggered from the user
control thread on the query operation!
More synchronization is added with the host thread:
Container resize uses a double buffer algorithm.
Pools growing in a container uses an atomic operation.
Pool query buffer replace uses a spinlock.
Pool minimum devX counter ID uses an atomic operation.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
When the counter container has no more space to store more counter
pools, try to resize the container to allow more pools to be created.
So, the only limitation on the maximum counter number is the memory.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
The DevX interface exposes a new feature to the PMD that can allocate a
batch of counters by one FW command. It can improve the flow
transaction rate (with count action).
Add a new counter pools mechanism to manage HW counters in the PMD.
So, for each flow with a counter, the PMD will try to find a free
counter in the PMD pools container, and only if there is no free
counter, it will allocate a new DevX batch of counters.
Currently we cannot support batch counters for a group 0 flow, so
create 2 container types: one which allocates counters one by
one and one which allocates X counters by the batch feature.
The allocated counter objects are never released back to the HW,
assuming the flows maximum number will be close to the actual value of
the flows number.
Later, it can be updated, and a dynamic release mechanism can be added.
The counters are contained in pools, each pool with 512 counters.
The pools are contained in counter containers according to the
allocation resolution type - single or batch.
The cache memory of the counters statistics is saved as raw data per
pool.
All the raw data memory is allocated for all the containers in one
memory allocation and is managed by the counter_stats_mem_mng structure,
which registers all the raw memory to the HW.
Each pool points to one raw data structure.
The query operation is at pool resolution, which updates all the pool
counter raw data in one operation.
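A conceptual sketch of this layout (the structure and field names are
illustrative, not the actual PMD definitions):

#include <stdint.h>

#define COUNTERS_PER_POOL 512

/* Raw statistics for one pool; registered to the HW as one block. */
struct counter_raw {
        uint64_t hits[COUNTERS_PER_POOL];
        uint64_t bytes[COUNTERS_PER_POOL];
};

struct counter_pool {
        uint32_t base_id;        /* first DevX counter ID of the pool */
        struct counter_raw *raw; /* cache updated per pool query      */
        uint64_t free_mask[COUNTERS_PER_POOL / 64];
};

/* One container per allocation resolution: single or batch. */
struct counter_container {
        struct counter_pool **pools;
        uint16_t n_pools;
};

struct counter_containers {
        struct counter_container single; /* group 0 flows, one by one */
        struct counter_container batch;  /* batch-allocated counters  */
};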
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
We need to check the devargs pointer before dereferencing it; if no
devargs are specified then this driver just skips the device.
Fixes: 40ef35f4a5 ("net/ifc: detect if VDPA mode is specified")
Cc: stable@dpdk.org
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
In 2.0.1 ENA, there were patches for:
* assigning NUMA node to the IO queue
commit 4217cb0b7d ("net/ena: fix assigning NUMA node to IO queue")
* statistics counters (Rx checksum errors and per-queue number of the
Tx packets)
commit ef74b5f7b6 ("net/ena: fix Rx checksum errors statistics")
commit 5673e285a6 ("net/ena: fix Tx statistics")
* SMP support
commit 117ba4a604 ("net/ena: get device info statically")
* setting Rx checksum support
commit ef538c1a7f ("net/ena: fix checksum feature flag")
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Should allow the outer input set to be empty.
Fixes: d76116a467 ("net/ice: add generic flow API")
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The iavf driver crashes when forwarding packets with TSO
enabled. The reason is that the Tx context descriptor
configuration is not transferred to the Tx ring. This step is
added in this patch.
Fixes: a2b29a7733 ("net/avf: enable basic Rx Tx")
Cc: stable@dpdk.org
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Because of the commit mentioned below the default case was changed and
this broke single_iface support. This patch adds a check to fix
single_iface support.
Fixes: a3f5252e5c ("net/pcap: enable infinitely Rx a pcap file")
Signed-off-by: Aideen McLoughlin <aideen.mcloughlin@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
In the eth_pcap_tx() and eth_pcap_tx_dumper() functions mbufs were freed
without incrementing num_tx.
This may lead the application to also try to free or use an invalid mbuf.
To fix the issue, the mbuf freeing was removed.
Fixes: 6db141c91e ("pcap: support jumbo frames")
Cc: stable@dpdk.org
Signed-off-by: Aideen McLoughlin <aideen.mcloughlin@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Fix an overrun of the ring group array with BCM5750X-based
adapters by ensuring that the ring group array is not allocated
or accessed for adapters that do not support ring groups.
Fixes: f8168ca0e6 ("net/bnxt: support thor controller")
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The conditional used to determine whether to free RSS
contexts for thor vs. non-thor controllers was reversed.
Fix this; also reset the number of active RSS contexts to
zero after release in the thor case.
Fixes: 38412304b5 ("net/bnxt: enable RSS for thor-based controllers")
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Don't use RTE_LOGTYPE_PMD as it is too general.
Also, just use 1 log type for all of enic PMD (pmd.net.enic)
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
The lgtm tool reported some implicit downcast errors in the Tx offload
information parsing. This patch solves these errors.
Fixes: 64727024d2 ("net/hinic: add device initialization")
Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
Adding support for flags based extraction in octeontx2 flow.
The patch supports extracting data greater than 32 bytes using lflags.
When flags based extraction is enabled, the lower 4 bits (16 flags)
will be considered for indexing the flags, and will be used
for extraction.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
When a pattern has ETH, it may contain two kinds of lookup
parameters: MAC and ethertype.
So increase the item number for the memory malloc in order
to reserve one more memory slot for ETH, which may
consume 2 lookup items.
Fixes: 57c4f26935 ("net/ice: enable switch filter")
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Whether the input set is the outer or inner protocol was distinguished
by checking if the item appears once or twice.
But this does not work when the user doesn't configure the outer
input set; this patch fixes the issue.
Fixes: d76116a467 ("net/ice: add generic flow API")
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The rte_vdev_driver is declared twice.
The first one is not necessary.
Fixes: 3298fa4853 ("raw/dpaa2_cmdif: introduce DPAA2 command interface driver")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The rte_vdev_driver is declared twice.
The first one is not necessary.
Fixes: 61c592a8d0 ("raw/skeleton: introduce skeleton rawdev driver")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The rte_vdev_driver is declared twice.
The first one is not necessary.
Fixes: 050fe6e9ff ("drivers/net: use ethdev allocation helper for vdev")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The rte_vdev_driver is declared twice.
The first one is not necessary.
Fixes: 740feaf349 ("ethdev: remove driver name from device private data")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The rte_vdev_driver is declared twice.
The first one is not necessary.
Fixes: 050fe6e9ff ("drivers/net: use ethdev allocation helper for vdev")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The rte_vdev_driver is declared twice.
The first one is not necessary.
Fixes: 050fe6e9ff ("drivers/net: use ethdev allocation helper for vdev")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The rte_vdev_driver is declared twice.
The first one is not necessary.
Fixes: 050fe6e9ff ("drivers/net: use ethdev allocation helper for vdev")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The rte_vdev_driver is declared twice.
The first one is not necessary.
Fixes: 050fe6e9ff ("drivers/net: use ethdev allocation helper for vdev")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The rte_vdev_drivers are declared twice.
The first one is not necessary.
Fixes: 740feaf349 ("ethdev: remove driver name from device private data")
Fixes: 204d026a39 ("net/tap: support tun")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Keith Wiles <keith.wiles@intel.com>
Print the system error to make diagnosis of errors with af_packet easier.
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Enabled IP-in-IP tunnel type support on DV/DR flow engine.
This includes the following combination:
- IPv4 over IPv4
- IPv4 over IPv6
- IPv6 over IPv4
- IPv6 over IPv6
The MLX5 NIC supports IP-in-IP tunnel via the FLEX Parser, so we
need to make sure the FW is using FLEX Parser profile 0:
mlxconfig -d <mst device> -y set FLEX_PARSER_PROFILE_ENABLE=0
The example testpmd commands would be:
- Match on IPv4 over IPv4 packets and do inner RSS:
testpmd> flow create 0 ingress pattern eth / ipv4 proto is 0x04 /
ipv4 / udp / end actions rss level 2 queues 0 1 2 3 end / end
- Match on IPv6 over IPv4 packets and do inner RSS:
testpmd> flow create 0 ingress pattern eth / ipv4 proto is 0x29 /
ipv6 / udp / end actions rss level 2 queues 0 1 2 3 end / end
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
ice_flow_valid_attr will return zero on success and a negative value
on error.
Current return value check logic is opposite of the expected behavior.
This patch fixes this issue.
Fixes: d76116a467 ("net/ice: add generic flow API")
Cc: stable@dpdk.org
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The phys_addr concept is deprecated in rte_memzone; change it to access
the iova member, and use the type 'rte_iova_t'.
Also rename the rx/tx_ring_phys_addr definitions to rx/tx_ring_dma,
which matches the IOVA concept design.
Fixes: 50370662b7 ("net/ice: support device and queue ops")
Cc: stable@dpdk.org
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Procedure xdp_get_channels_info was returning error code -1 in case the
ioctl command SIOCETHTOOL was not supported. This patch sets the return
value back to 0 as it is a valid case.
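A simplified sketch of the intended handling (not the PMD code; the
helper and its arguments are assumptions):

#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int
get_max_queues(int fd, const char *if_name, int *max_queues)
{
        struct ethtool_channels ch = { .cmd = ETHTOOL_GCHANNELS };
        struct ifreq ifr;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, if_name, IFNAMSIZ - 1);
        ifr.ifr_data = (void *)&ch;

        if (ioctl(fd, SIOCETHTOOL, &ifr) != 0) {
                if (errno == EOPNOTSUPP) {
                        /* Not an error: fall back to a single queue. */
                        *max_queues = 1;
                        return 0;
                }
                return -errno;
        }
        *max_queues = ch.max_combined ? (int)ch.max_combined : 1;
        return 0;
}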
Fixes: 339b88c6a9 ("net/af_xdp: support multi-queue")
Signed-off-by: Július Milan <jmilan.dev@gmail.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Support matching on the present bits (C,K,S)
as well as the optional key field.
If rte_flow_item_gre_key is specified in the pattern,
the K present bit match will be set automatically.
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The DR engine supports matching on the GRE protocol field without MPLS
support. So bypass the MPLS check when DR is enabled.
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When the OS package is not provided, the driver silently goes into safe
mode. Since safe mode is missing most of the advanced features, this may
confuse users.
Instead of going into safe mode silently, add a devarg to enable safe
mode only for users that ask for it.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Ray Kinsella <ray.kinsella@intel.com>
Remove devarg "max_queue_pair_num" related code since
it is not complete implemented.
Fixes: f9cf4f8641 ("net/ice: support device initialization")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
A spin lock is used to protect the critical resources
of sending mgmt messages. This causes high
CPU usage in rte_delay_ms when sending mgmt
messages frequently. We can use a mutex to protect
the critical resources and usleep to reduce CPU
usage while keeping correct functioning.
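A generic sketch of the change in the waiting strategy (illustrative,
not the driver code):

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t mgmt_lock = PTHREAD_MUTEX_INITIALIZER;

/* A mutex plus usleep() yields the CPU between polls, unlike a
 * spinlock combined with rte_delay_ms() which busy-waits.
 */
static int
send_mgmt_msg_and_wait(void (*send)(void), int (*done)(void),
                       int timeout_ms)
{
        int waited_ms = 0;

        pthread_mutex_lock(&mgmt_lock);
        send();
        while (!done() && waited_ms < timeout_ms) {
                usleep(1000); /* sleep instead of busy-waiting */
                waited_ms++;
        }
        pthread_mutex_unlock(&mgmt_lock);
        return done() ? 0 : -1;
}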
Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
Adding PF and VF action support for octeontx2 flow driver.
If RTE_FLOW_ACTION_TYPE_PF action is set from VF, then the packet
will be sent to the parent PF.
If RTE_FLOW_ACTION_TYPE_VF action is set and original is specified,
then the packet will be sent to the original VF, otherwise the packet
will be sent to the VF specified in the vf_id.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
For SRIOV, fastpath status blocks are not allocated, resulting in a
segfault. Separate out fastpath DMA allocation/free from the rest of the
memory allocation/free. It is now done as part of NIC load/unload.
Comment indentation changes in the bnx2x_alloc_hsi_mem() and
bnx2x_free_hsi_mem() APIs.
Fixes: f0219d98de ("net/bnx2x: fix interrupt flood")
Cc: stable@dpdk.org
Signed-off-by: Rasesh Mody <rmody@marvell.com>
We do not need to schedule periodic poll for slowpath link events
for SRIOV. The link events are handled by the PF driver.
Fixes: 6041aa619f ("net/bnx2x: fix poll link status")
Cc: stable@dpdk.org
Signed-off-by: Rasesh Mody <rmody@marvell.com>
The logic to read vf_id used by the ACQUIRE/TEARDOWN_Q/RELEASE TLVs
multiplexed the return value to convey both the vf_id value and the
status of the read vf_id API. This leads to a segfault at dev_start() as
resources are not properly cleaned and re-allocated.
Fix the read vf_id API to differentiate between the vf_id value and the
return status. Adjust the status checking accordingly.
Added the bnx2x_vf_teardown_queue() API and moved the relevant code from
bnx2x_vf_unload() to the new API.
Fixes: 540a211084 ("bnx2x: driver core")
Cc: stable@dpdk.org
Signed-off-by: Rasesh Mody <rmody@marvell.com>
Replace rte_intr_enable() with the rte_intr_ack() API
for acking an interrupt in interrupt handlers and
rx_queue_intr_enable() callbacks of PMDs.
This is in line with the original intent of this change in PMDs,
to ack interrupts after handling is completed, if the
device is backed by UIO, IGB_UIO or VFIO (with INTx).
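A sketch of the resulting pattern in a PMD interrupt handler (device
specifics are placeholders):

#include <rte_interrupts.h>

static void
dev_interrupt_handler(void *param)
{
        struct rte_intr_handle *intr_handle = param;

        /* ... read and handle the device interrupt cause ... */

        /* Ack instead of re-enabling, so UIO/IGB_UIO/VFIO(INTx) can
         * deliver the next interrupt.
         */
        rte_intr_ack(intr_handle);
}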
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Shahed Shaikh <shshaikh@marvell.com>
Tested-by: Shahed Shaikh <shshaikh@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
This reverts commit 89aac60e0b.
"vfio: fix interrupts race condition"
The above mentioned commit moves the interrupt's eventfd setup
to probe time but only enables one interrupt for all types of
interrupt handles i.e VFIO_MSI, VFIO_LEGACY, VFIO_MSIX, UIO.
It works fine in the default case but breaks the cases below,
specifically for MSI-X based interrupt handles:
* Applications like l3fwd-power that request rxq interrupts
during ethdev setup.
* Drivers that need > 1 MSI-X interrupts to be configured for
functionality to work.
VFIO PCI for MSI-X expects all the possible vectors to be set up
when using VFIO_IRQ_SET_ACTION_TRIGGER so that they can be
allocated from the kernel PCI subsystem. The only way to increase the
number of vectors later is to first free them all by using
VFIO_IRQ_SET_DATA_NONE with action trigger and then enable the new
vector count.
The above commit changes the behavior of rte_intr_[enable|disable] to
only mask and unmask, unlike the earlier behavior, thereby
breaking the above two scenarios.
Fixes: 89aac60e0b ("vfio: fix interrupts race condition")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Tested-by: Stephen Hemminger <stephen@networkplumber.org>
Tested-by: Shahed Shaikh <shshaikh@marvell.com>
Tested-by: Lei Yao <lei.a.yao@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
In order to align the name with other PCI driver flags such as
RTE_PCI_DRV_NEED_MAPPING and to reflect its purpose, rename the
RTE_PCI_DRV_IOVA_AS_VA flag to RTE_PCI_DRV_NEED_IOVA_AS_VA.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
The incriminated commit broke the use of RTE_PCI_DRV_IOVA_AS_VA, which
was intended to mean "driver only supports VA" but had been understood
as "driver supports both PA and VA" by most net drivers and was used to
let dpdk processes run as non-root (which do not have access to physical
addresses on recent kernels).
The check on physical addresses actually closed the gap for those
drivers. We don't need to mark them with RTE_PCI_DRV_IOVA_AS_VA and this
flag can retain its intended meaning.
Document its meaning explicitly.
We can check that a driver requirement with regard to the IOVA mode is
fulfilled before trying to probe a device.
Finally, document the heuristic used to select the IOVA mode and hope
that we won't break it again.
Fixes: 703458e19c ("bus/pci: consider only usable devices for IOVA mode")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Tested-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
This reverts commit 0cb86518db.
The PCI bus now reports DC when faced with a device bound to an unknown
driver and, in such a case, the IOVA mode is selected against physical
address availability.
As a consequence, there is no reason for this special case for Mellanox
drivers.
Fixes: 703458e19c ("bus/pci: consider only usable devices for IOVA mode")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
This fixes the following compiler warning about the debug log format
string:
format ‘%llx’ expects argument of type ‘long long unsigned int’,
but argument 6 has type ‘__u64 {aka long unsigned int}’
fslmc_vfio.c:387:36: note: format string is defined here
DPAA2_BUS_DEBUG("VFIO dmamap 0x%llx:0x%llx, size 0x%llx\n",
Fixes: 2b5fa25708 ("mempool/dpaa2: map external memory with VFIO")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
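A common way to resolve this kind of mismatch (a sketch, not
necessarily the exact change applied here) is to print the kernel
__u64 values through PRIx64 with an explicit cast, so the format
matches on every architecture; the dma_map variable name below is
illustrative:

    #include <inttypes.h>
    #include <stdint.h>

    /* illustrative fragment: cast the __u64 fields to uint64_t and
     * let PRIx64 pick the right length modifier */
    DPAA2_BUS_DEBUG("VFIO dmamap 0x%" PRIx64 ":0x%" PRIx64
                    ", size 0x%" PRIx64 "\n",
                    (uint64_t)dma_map.vaddr, (uint64_t)dma_map.iova,
                    (uint64_t)dma_map.size);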
This patch removes the unnecessary error prints seen when using
non-dpaa2 devices.
Fixes: e67a61614d ("bus/fslmc: support device iteration")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch removes the unnecessary error prints seen when using
non-dpaa devices.
Fixes: e79df833d3 ("bus/dpaa: support hotplug ops")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Update the softnic TM function for configuration flexibility of pipe
traffic classes and queue sizes.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Signed-off-by: Abraham Tovar <abrahamx.tovar@intel.com>
Signed-off-by: Lukasz Krakowiak <lukaszx.krakowiak@intel.com>
The session init shall return failure if the internal session
creation fails for any reason.
Fixes: 13273250ee ("crypto/dpaa2_sec: support AES-GCM and CTR")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Block type zero (BT0) padding, as specified in RFC 2313, has been
discontinued.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Shally Verma <shallyv@marvell.com>
The asymmetric nature of the RSA algorithm suggests using an
additional field for the output. In-place operations can still be
done by setting the cipher and message pointers to the same memory
address.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Shally Verma <shallyv@marvell.com>
This patch fixes the fail status returned from the compression PMD
in case the destination buffer is not big enough to store all the
data.
Fixes: 3dc9ef2d23 ("compress/qat: fix returned status on overflow")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Resolve seg-faults observed:
1) in the buffer re-alignment in-place SGL case
2) in the case where the data end is exactly at the end of an SGL
segment.
Also rename a variable and add comments for clearer code.
Fixes: 40002f6c2a ("crypto/qat: extend support for digest-encrypted auth-cipher")
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Damian Nowak <damianx.nowak@intel.com>
This patch changes the key pointer data types in the cipher, auth,
and aead xforms from "uint8_t *" to "const uint8_t *" for more
intuitive and safe session creation.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Liron Himi <lironh@marvell.com>
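With the const qualifier, a key kept in read-only storage can be
referenced directly when building the xform. A small sketch; the
algorithm, key contents and names below are illustrative only:

    #include <rte_crypto_sym.h>

    /* all-zero placeholder key in read-only storage */
    static const uint8_t example_key[16] = { 0 };

    static struct rte_crypto_sym_xform example_cipher_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
        .cipher = {
            .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
            .algo = RTE_CRYPTO_CIPHER_AES_CBC,
            .key = {
                .data = example_key, /* const uint8_t * now accepted */
                .length = sizeof(example_key),
            },
            .iv = { .offset = 0, .length = 16 },
        },
    };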
Since direct register access is used in npa_lf_aura_op_alloc_bulk(),
use __rte_noinline instead of __rte_always_inline to preserve the ABI.
Depending on the compiler, npa_lf_aura_op_alloc_bulk() might be
inlined differently, which may lead to undefined behaviour due to the
hand-coded asm.
Fixes: 29893042c2 ("mempool/octeontx2: fix clang build for arm64")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
When initializing EAL with "-w 0:0.0", this error is blocking:
munmap_chunk(): invalid pointer
ElectricFence reports this root cause:
free(7fffeec25a11): address not from malloc()
Fixes: e67a61614d ("bus/fslmc: support device iteration")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Thomas Monjalon <thomas@monjalon.net>
Putting color escape sequences in the log looks pretty for the
developer but fails in real-world DPDK usage. A real application will
send the DPDK log to syslog, and syslog does not handle escape
sequences.
Fixes: dd543124cd ("common/octeontx2: add runtime log infra")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Jerin Jacob <jerinj@marvell.com>
On the LS1088 platform, CENA operations are causing issues at high
load. CINH (cache inhibited) mode works fine with a minor performance
impact.
This patch enables CINH mode selectively on the LS1088 platform.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
This patch adds the following:
1. The 'g_container' variable name is not the right way to represent
the FSLMC container. Rename it to fslmc_container.
2. Dynamic selection of the IOMMU mode based on the run environment.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Recently (18.11+), the devargs structure was changed, and so was DPDK
port usage in applications like OVS. Applications are now allowed to
plug/unplug (eth) ports using the hotplug APIs based on device
arguments.
This patch enables the plug/unplug functions (which are dummies for
FSLMC) and the iterator function for rte_dev_probe() and similar API
support.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Update platform support for CNF95xx in the documentation and update
the HW capabilities based on the PCI subsystem id and revision id.
This patch also changes HW capability handling to be based on the PCI
Revision ID. The PCI Revision ID contains a unique identifier for the
chip and its major and minor revisions.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Populating the eventfd in rte_intr_enable on each request to VFIO
triggers a reconfiguration of the interrupt handler on the kernel
side.
The problem is that rte_intr_enable is often used to re-enable masked
interrupts from drivers' interrupt handlers.
This reconfiguration leaves a window during which a device could send
an interrupt, and the kernel then logs this (unsolicited, from the
kernel point of view) interrupt:
[158764.159833] do_IRQ: 9.34 No irq handler for vector
The VFIO API makes it possible to set the fd at setup time.
Make use of this; then we only need to ask for masking/unmasking of
legacy interrupts and have nothing to do for MSI/MSI-X.
"rxtx" interrupts are left untouched but are most likely subject to
the same issue.
Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1654824
Fixes: 5c782b3928 ("vfio: interrupts")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Tested-by: Shahed Shaikh <shshaikh@marvell.com>
Use rte_ether_unformat_addr rather than sscanf.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Use rte_ether_unformat_addr rather than sscanf.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Matan Azrad <matan@mellanox.com>
Use rte_ether_unformat_addr rather than sscanf.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Use rte_ether_unformat_addr rather than sscanf.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The cmdline library used to be the only way to parse a
mac address. Now there is rte_ether_unformat_addr.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
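A minimal usage sketch of the new helper (the wrapper function name
below is illustrative):

    #include <rte_ether.h>

    /* Parse a MAC address string without pulling in the cmdline
     * library; rte_ether_unformat_addr() returns 0 on success and a
     * negative value on a malformed string. */
    static int
    example_parse_mac(const char *str, struct rte_ether_addr *mac)
    {
        return rte_ether_unformat_addr(str, mac);
    }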
Formatting an Ethernet address and getting a random value are not in
the critical path, so they should not be inlined.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Rami Rosen <ramirose@gmail.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
XSK_UMEM__DEFAULT_FRAME_SIZE has been changed to 4096 in kernel
commit 123e8da1d330 ("xsk: Change the default frame size to 4096 and
allow controlling it"), but we still need to keep
ETH_AF_XDP_FRAME_SIZE at 2048 to fit most DPDK apps.
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
On the DV/DR flow engine, MLX5 can match on the ICMP/ICMP6 code and
type fields via the FLEX Parser, which can be enabled by configuring
the FW with FLEX Parser profile 2:
mlxconfig -d <mst device> -y set FLEX_PARSER_PROFILE_ENABLE=2
The testpmd commands could be:
testpmd> flow create 0 ingress pattern eth / ipv4 /
icmp type is 8 code is 0 / end
actions rss queues 0 1 end / end
testpmd> flow create 0 ingress pattern eth / ipv6 /
icmp6 type is 128 code is 0 / end
actions rss queues 0 1 end / end
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Mellanox NICs do not support UDP checksum hardware Tx offload over
IPv6. This limitation becomes critical for UDP based tunnels like
VXLAN.
Although a valid UDP checksum is required by IPv6, Linux has an
option to accept a zero UDP checksum (see udp6zerocsumrx in the
iproute2 package).
This patch zeroes out the UDP checksum field for encapsulation headers
in raw encap action.
Signed-off-by: Eli Britstein <elibr@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
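A sketch of the idea, assuming an encapsulation buffer laid out as
eth/ipv6/udp/vxlan (the offsets and function name are illustrative):

    #include <stdint.h>
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_udp.h>

    /* Clear the UDP checksum of the IPv6/UDP encapsulation header
     * before handing the buffer to rte_flow_action_raw_encap, so a
     * receiver configured with udp6zerocsumrx accepts the tunnel
     * packets. */
    static void
    example_clear_encap_udp_cksum(uint8_t *encap_hdr)
    {
        struct rte_udp_hdr *udp = (struct rte_udp_hdr *)
            (encap_hdr + sizeof(struct rte_ether_hdr) +
             sizeof(struct rte_ipv6_hdr));

        udp->dgram_cksum = 0;
    }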
Currently mlx4/mlx5 support only Linux.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Since 18.11, it is suggested that drivers should release all their
private resources in the dev_close routine. So all resources
previously released in the remove routine are now released in the
dev_close routine, and the dev_close routine is called from the
driver remove routine in order to support removing a device without
closing its ports.
Above behavior changes are supported by setting RTE_ETH_DEV_CLOSE_REMOVE
flag during probe stage.
Signed-off-by: Yuri Chipchev <yuric@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
The left shift operation "pkt->vlan_tci << 16" gets vlan_tci promoted
to a signed type and may produce an invalid descriptor. The same
issue exists for the "data_len" field. This patch fixes it by casting
them to uint64_t.
Fixes: 21f13c541e ("fm10k: add vector Tx")
Cc: stable@dpdk.org
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
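The underlying promotion issue in a nutshell (a sketch; the real
descriptor layout in the driver is different):

    #include <stdint.h>

    static uint64_t
    example_build_desc(uint16_t vlan_tci, uint16_t data_len)
    {
        /* problematic: both uint16_t operands are promoted to signed
         * int before the shift, so the intermediate value can
         * overflow and later widen to 64 bits with sign extension:
         *   return (vlan_tci << 16) | data_len;
         */

        /* fixed: cast to uint64_t before shifting and or-ing */
        return ((uint64_t)vlan_tci << 16) | (uint64_t)data_len;
    }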
This patch implements statistics read and reset function for ipn3ke.
Signed-off-by: Andy Pei <andy.pei@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
ipn3ke can work in 10G mode and 25G mode.
10G mode and 25G mode have different MAC register addresses for
statistics.
This patch implements the statistics registers for 10G mode and 25G
mode, and also implements per-mode stats clearing.
Fixes: c01c748e4a ("net/ipn3ke: add new driver")
Cc: stable@dpdk.org
Signed-off-by: Andy Pei <andy.pei@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
The original code is compatible with older devices, whose MAC
register addresses are no longer than 10 bits. Now there are MAC
register addresses longer than 10 bits, so the mask is simply
removed.
Fixes: c01c748e4a ("net/ipn3ke: add new driver")
Cc: stable@dpdk.org
Signed-off-by: Andy Pei <andy.pei@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
In i40e_flow_flush_fdir_filter, i40e_fdir_teardown is called, so
i40e_fdir_setup is required to be called before creating a new fdir
flow.
Bugzilla ID: 265
Fixes: 2e67a7fbf3 ("net/i40e: config flow director automatically")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
The APIs in the rte_bus_vdev.h file were not part of the API
documentation. Add this header file to the doxygen config file under
the name vdev.
Signed-off-by: Aideen McLoughlin <aideen.mcloughlin@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add PTP support for SSO based on rx_offloads of the queue connected to
it.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Fixes: 371a688fc1 ("event/sw: support linking queues to ports")
Cc: stable@dpdk.org
Signed-off-by: Dilshod Urazov <dilshod.urazov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
mbox_unregister_vf_irq and mbox_unregister_pf_irq return a void
value. mbox_unregister_irq also returns void.
Clang with the flags '-Wall -Wextra -pedantic' complains:
void function should not return void expression
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
The functions rte_service_may_be_active(), rte_service_lcore_attr_get(),
and rte_service_attr_reset_all() were introduced nearly a year ago in DPDK
18.08. They can be considered non-experimental for the 19.08 release.
rte_service_may_be_active() is used by the sw PMD, and this commit allows
it to not need any experimental API.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Fix the NPA pool range errors observed while creating a mempool; this
issue happens when the mempool objects come from different mem
segments.
During mempool creation, the octeontx2 mempool driver populates the
pool range fields before enqueuing the buffers. If any enqueue or
dequeue operation reaches the NPA hardware before the range fields'
HW context update, those ops result in NPA range errors. The patch
adds a routine to read back the HW context and verify whether the
range fields have been updated.
Fixes: e5271c507a ("mempool/octeontx2: add remaining slow path ops")
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The existing code enables the Tx queues as per the lcore count, which
causes issues when a secondary process runs on a different number of
cores.
This patch fixes the number of Tx queues to the number of DPAA cores,
which allows using a fixed number of Tx queues across processes.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
1. Need to use the bpool with rte_malloc instead of rte_free.
2. Option to give a portal to the secondary process thread.
Signed-off-by: Radu Bulie <radu-andrei.bulie@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Akhil Goyal <akhil.goyal@nxp.com>
Parse and find_device have specific functions: the former is for
parsing a string passed as an argument, whereas the latter is for
iterating over all the devices on the bus and calling a
callback/handler. They have been corrected with their right
operations to support hotplugging/devargs plug/unplug calls.
Support for plug/unplug has also been added.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The implementation is still based on Intel SDK libraries optimized
for the AVX512 instruction set and 5GNR. It can also be built for
AVX2 for 4G capability, or without the SDK dependency for
maintenance.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Extension to BBDEV operations to support 5G
on top of existing 4G operations.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Rename the enums and structures which were LTE-specific to allow for
extension and support of 5GNR operations.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
This patch adds a condition to be met when using out-of-place
auth-cipher operations. It checks whether the digest location
overlaps with the data to be encrypted or decrypted and, if so,
treats it as a digest-encrypted case.
The patch also checks whether the digest is being encrypted or
decrypted partially and extends the PMD buffers accordingly.
It also adds a feature flag for QuickAssist Technology to emphasize
its support for digest-appended auth-cipher operations.
Signed-off-by: Damian Nowak <damianx.nowak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
In case a big number needs to be freed, the data it contains should
also be cleared beforehand, especially if it is critical data like
private keys.
Fixes: 3e9d6bd447 ("crypto/openssl: add RSA and mod asym operations")
Cc: stable@dpdk.org
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
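OpenSSL already provides a helper for this; a minimal sketch of the
pattern (the wrapper function name is illustrative):

    #include <openssl/bn.h>

    /* Release a BIGNUM holding sensitive material (e.g. an RSA
     * private exponent) with BN_clear_free(), which overwrites the
     * internal data before freeing, instead of plain BN_free(). */
    static void
    example_release_secret_bn(BIGNUM *bn)
    {
        BN_clear_free(bn);
    }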
When ixgbe_crypto_add_sa() is called, it checks whether the IP type
is IPv6 or IPv4 to write the correct addresses to the registers. The
type itself is never specified and acts as IPv4, which is the default
value.
This causes a lack of IPv6 support.
To fix that, the IP type needs to be stored in the device private
data, based on the crypto session IP type field, before the check is
done.
Fixes: ec17993a14 ("examples/ipsec-secgw: support security offload")
Fixes: 9a0752f498 ("net/ixgbe: enable inline IPsec")
Cc: stable@dpdk.org
Signed-off-by: Mariusz Drost <mariuszx.drost@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The turbo_sw PMD now builds with meson/ninja, with or without the SDK
libraries.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add a compile flag to allow building the turbo_sw PMD without
requiring the SDK libraries to be installed.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch fixes the out-of-bounds Coverity issue by adding the
missing algorithms to the array.
Coverity issue: 337683
Fixes: c68d7aa354 ("crypto/aesni_mb: use architecture independent macros")
Cc: stable@dpdk.org
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
ANSI C memcmp is not a constant-time function per the spec, so it
should be avoided in cryptographic usage.
Fixes: d61f70b4c9 ("crypto/libcrypto: add driver for OpenSSL library")
Cc: stable@dpdk.org
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
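A generic constant-time comparison, as a sketch of the kind of
replacement meant here (not necessarily the exact helper used by the
driver):

    #include <stddef.h>
    #include <stdint.h>

    /* The loop always touches every byte and only the accumulated
     * difference is inspected at the end, so the run time does not
     * depend on where the buffers first differ (memcmp may return as
     * soon as it finds a mismatch). */
    static int
    example_ct_memcmp(const void *a, const void *b, size_t len)
    {
        const uint8_t *pa = a, *pb = b;
        uint8_t diff = 0;
        size_t i;

        for (i = 0; i < len; i++)
            diff |= pa[i] ^ pb[i];

        return diff != 0;
    }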