This is an optimization to prefetch the next buffer while the
current one is being processed.
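A minimal sketch of the pattern, not the vhost code itself (process_buffer() is a hypothetical handler): the buffer at index i + 1 is prefetched before the buffer at index i is handled.

    #include <rte_prefetch.h>
    #include <rte_mbuf.h>

    static void process_buffer(struct rte_mbuf *m); /* hypothetical handler */

    static inline void
    process_burst(struct rte_mbuf **bufs, uint16_t count)
    {
            uint16_t i;

            for (i = 0; i < count; i++) {
                    if (i + 1 < count)
                            rte_prefetch0(bufs[i + 1]); /* warm the next buffer */
                    process_buffer(bufs[i]);
            }
    }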
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Tiwei Bie <tiwei.bie@intel.com>
To ease packed ring layout integration, this patch makes
the dequeue path re-use the buffer vectors implemented for
the enqueue path.
With this change, copy_desc_to_mbuf() is now ring layout
agnostic.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Tiwei Bie <tiwei.bie@intel.com>
Relax used ring contention by reusing the shadow used
ring feature already implemented for the enqueue path.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Tiwei Bie <tiwei.bie@intel.com>
Fixes: 3d12dceed2df ("ethdev: add new offload flag to keep CRC")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
The ethdev layer introduced checks for the RSS hash functions requested
by the application and returns an error for those unsupported by the
hardware.
This check breaks some sample applications which blindly configure
RSS hash functions without checking underlying hardware support.
Update the examples to mask out unsupported RSS hash functions during
device configuration, as shown in the sketch below, and print a log
message if the configuration values are updated by this check.
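A minimal sketch of the masking now done in the examples (port_id and the requested hash set are placeholders):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    init_rss(uint16_t port_id, struct rte_eth_conf *port_conf)
    {
            struct rte_eth_dev_info dev_info;
            uint64_t requested = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP;

            rte_eth_dev_info_get(port_id, &dev_info);
            /* keep only the hash functions the hardware supports */
            port_conf->rx_adv_conf.rss_conf.rss_hf =
                    requested & dev_info.flow_type_rss_offloads;
            if (port_conf->rx_adv_conf.rss_conf.rss_hf != requested)
                    printf("Port %u modified RSS hash function based on hardware support\n",
                           port_id);
    }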
Fixes: aa1a6d87f1 ("ethdev: force RSS offload rules again")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Meijuan Zhao <meijuanx.zhao@intel.com>
Tested-by: Yingya Han <yingyax.han@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
When the bond PMD is stopped, the active slave count is reset.
For 802.3ad mode this potentially leaks memory and clears state since
a second sequential activate_slave() will occur when the bond PMD is
restarted and the LSC callback is triggered while the active slave
count is 0. To fix this, don't clear the active slave count when
stopping. Only deactivate_slave() should be used to clear the slaves.
Fixes: 2efb58cbab ("bond: new link bonding library")
Cc: stable@dpdk.org
Signed-off-by: Chas Williams <chas3@att.com>
Set the Rx channel map and ingress queue type properly to allow firmware
to manage the internal mapping correctly.
Fixes: 6c2809628c ("net/cxgbe: improve latency for slow traffic")
Cc: stable@dpdk.org
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Query firmware for max Tx and Rx queues that can be allocated.
Move the code that determines the max queues to a common place for both
PF and VF.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Enable RSS on IPv4 fragmented packets and IPv6 packets with extension
headers based on 2-tuple hash.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add ops to set the link up and down for both PF and VF. If wait_to_complete
is set, poll for a link update for up to 10 seconds.
Original work by Surendra Mobiya <surendra@chelsio.com>
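A hedged sketch of the polling behaviour (the helper name and the 1-second step are assumptions, not the driver's exact code):

    #include <rte_cycles.h>

    #define LINK_POLL_MAX_SECONDS 10

    /* hypothetical helper returning non-zero once firmware reports link up */
    static int link_is_up(uint16_t port_id);

    static void
    wait_for_link(uint16_t port_id, int wait_to_complete)
    {
            int i;

            if (!wait_to_complete)
                    return;
            for (i = 0; i < LINK_POLL_MAX_SECONDS; i++) {
                    if (link_is_up(port_id))
                            break;
                    rte_delay_ms(1000); /* poll roughly once per second */
            }
    }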
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add action to redirect matched packets to specified egress physical
port without sending them to host.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add support to match packets based on ingress physical port.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add interface to enable hit counters for flows offloaded in HASH
region.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add interface to delete offloaded flows in HASH region. Use the
hash index saved during insertion to delete the corresponding flow.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add interface to offload flows to HASH region. Translate internal
filter specification to requests to offload flows to HASH region.
Save the returned hash index of the offloaded flow for deletion later.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
CLIP region holds destination IPv6 addresses to be matched for
corresponding flows. Query firmware for CLIP resources and allocate
table to manage them. Also update the LE-TCAM to use CLIP to reduce
the number of slots needed to offload IPv6 flows.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Fetch supported match items in HASH region. Ensure the mask
is all set for all the supported match items to be offloaded
to HASH region. Otherwise, offload them to LE-TCAM region.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Fetch available HASH filter resources and allocate table for managing
them. Currently only supported on Chelsio T6 family of NICs.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
In DPDK 17.11, the ethdev offloads API has changed:
commit cba7f53b71 ("ethdev: introduce Tx queue offloads API")
commit ce17eddefc ("ethdev: introduce Rx queue offloads API")
The new API is documented in the programmer's guide:
http://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html#hardware-offload
As a reminder, the main concepts in the new API are (see the sketch after this list):
- All offloads are disabled by default
- Distinction between per port and per queue offloads.
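A minimal usage sketch under the new API (port_id and nb_txd are placeholders): offloads start disabled and are requested explicitly, per port in rte_eth_conf and per queue in the queue setup calls.

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    static int
    configure_port(uint16_t port_id, uint16_t nb_txd)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_conf port_conf;
            struct rte_eth_txconf txconf;

            memset(&port_conf, 0, sizeof(port_conf));
            rte_eth_dev_info_get(port_id, &dev_info);
            /* per-port offloads, limited to what the device supports */
            port_conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM &
                                        dev_info.rx_offload_capa;
            port_conf.txmode.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE &
                                        dev_info.tx_offload_capa;
            if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) < 0)
                    return -1;

            /* per-queue offloads inherit the per-port ones here */
            txconf = dev_info.default_txconf;
            txconf.offloads = port_conf.txmode.offloads;
            return rte_eth_tx_queue_setup(port_id, 0, nb_txd,
                                          rte_socket_id(), &txconf);
    }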
The transition bits are now removed:
- Translation of the old API in ethdev
- rte_eth_conf.rxmode.ignore_offload_bitfield
- ETH_TXQ_FLAGS_IGNORE
The old API bits are now removed:
- Rx per-port rte_eth_conf.rxmode.[bit-fields]
- Tx per-queue rte_eth_txconf.txq_flags
- ETH_TXQ_FLAGS_NO*
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Shahaf Shuler <shahafs@mellanox.com>
The macro FM10K_SIMPLE_TX_FLAG was used with the old Tx queue flags.
It is no longer used and was missed when cleaning up the old Tx flags.
Fixes: 1778ef67e2 ("net/fm10k: remove dependence on Tx queue flags")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Some test applications and examples were not converted
to the new offload API introduced in 17.11.
For reference, see "Hardware Offload" in
doc/guides/prog_guide/poll_mode_drv.rst
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The global variables rx_mode and fdir_conf
are not used in this test file.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The example code showing how to use KNI can be found in
examples/kni/
The documentation guide for this example explains the code
to ease understanding of the example.
This documentation contains a lot of code copy/pasted from the
example, which is too much and hard to maintain.
The code inside the documentation is replaced by the names
of the corresponding functions.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add DEV_RX_OFFLOAD_CRC_STRIP to the virtual drivers since they do not
use the CRC at all; when an application requests this offload, the
virtual PMDs should not return an error.
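A hedged sketch of the kind of change involved (the callback name is generic; each virtual PMD has its own dev_infos_get implementation):

    #include <rte_ethdev.h>

    static void
    eth_dev_info(struct rte_eth_dev *dev __rte_unused,
                 struct rte_eth_dev_info *dev_info)
    {
            /* advertise CRC strip: no CRC is handled at all, so an
             * application requesting the offload must not get an error */
            dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_CRC_STRIP;
    }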
Fixes: 3d12dceed2df ("ethdev: add new offload flag to keep CRC")
Signed-off-by: Phil Yang <phil.yang@arm.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Replace the BSD license header with the SPDX tag.
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Implement EF10 ESSB Rx datapath callback to get number of pending
descriptors.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
The number of buffers left in a completed descriptor may be 0. If so,
all buffers of the descriptor are freed once again.
Fixes: 390f9b8d82 ("net/sfc: support equal stride super-buffer Rx mode")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Support Rx of in-direction packets only.
This is useful for apps that also Tx to eth_pcap ports, so that they do
not see their own packets echoed back in as Rx when the out direction is
also captured.
Example:
When using rx_iface and sending a *single* packet to eth1, it will loop
forever: when the packet is sent to tx_iface=eth1 it is captured again
on rx_iface=eth1, and so on.
$RTE_TARGET/app/testpmd -l 0-3 -n 4 \
--vdev 'net_pcap0,rx_iface=eth1,tx_iface=eth1'
…
---------------------- Forward statistics for port 0 ------------
RX-packets: 758 RX-dropped: 0 RX-total: 758
TX-packets: 758 TX-dropped: 0 TX-total: 758
------------------------------------------------------------------
When using rx_iface_in instead, the packet is not captured on the way
out and is forwarded only once:
$RTE_TARGET/app/testpmd -l 0-3 -n 4 \
--vdev 'net_pcap0,rx_iface_in=eth1,tx_iface=eth1'
…
---------------------- Forward statistics for port 0 ------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 1 TX-dropped: 0 TX-total: 1
------------------------------------------------------------------
Signed-off-by: Ido Goshen <ido@cgstowernetworks.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The TCI may contain PCP or DEI bits. Matching of these bits is not
supported, but the bits may still be set in the specification value and
not covered by the mask. So, these bits should be ignored.
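A hedged illustration of the idea, not the driver's exact code: the TCI is reduced to the 12-bit VID so stray PCP/DEI bits in the spec are dropped.

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    static rte_be16_t
    vlan_vid_from_item(const struct rte_flow_item_vlan *spec,
                       const struct rte_flow_item_vlan *mask)
    {
            const rte_be16_t vid_mask = rte_cpu_to_be_16(0x0fff);

            /* bits outside the mask or outside the VID field are ignored */
            return spec->tci & mask->tci & vid_mask;
    }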
Fixes: 894080975e ("net/sfc: support VLAN in flow API filters")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Document the assumption that 'xstats[i].id == i' and that
key = xstats_names[i].name, value = xstats[i].value.
xstats[i].id is still used for the xstats _by_id() APIs.
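A minimal sketch of reading xstats under this assumption (error handling omitted):

    #include <stdio.h>
    #include <stdlib.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    static void
    dump_xstats(uint16_t port_id)
    {
            int i, len = rte_eth_xstats_get_names(port_id, NULL, 0);
            struct rte_eth_xstat_name *names = calloc(len, sizeof(*names));
            struct rte_eth_xstat *values = calloc(len, sizeof(*values));

            rte_eth_xstats_get_names(port_id, names, len);
            rte_eth_xstats_get(port_id, values, len);
            /* key = names[i].name, value = values[i].value */
            for (i = 0; i < len; i++)
                    printf("%s: %" PRIu64 "\n", names[i].name, values[i].value);
            free(names);
            free(values);
    }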
This patch reverts part of commit 6d52d1d4af ("ethdev:
clarify extended statistics documentation").
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: David Marchand <david.marchand@6wind.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Instead of checking multiple Virtio feature bits for
every packet, do the check once at configure time and
store the result in the virtio_hw struct.
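A hedged sketch of the idea (the field and function names are assumptions; vtpci_with_feature() is the PMD's internal feature helper):

    /* called once from the configure path; uses virtio PMD internals */
    static void
    virtio_cache_offload_flags(struct virtio_hw *hw)
    {
            hw->has_rx_offload =
                    vtpci_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM) ||
                    vtpci_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
                    vtpci_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6);
    }

The Rx/Tx handlers then test the cached flag instead of the individual feature bits.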
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
This patch improves the Tx offload feature selection depending
on whether the application requests offloads.
When the application does not request Tx offload features,
the corresponding feature bits are not negotiated.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
The simple Tx path does not comply with the Virtio specification.
Now that VIRTIO_F_IN_ORDER feature is supported by the Virtio PMD,
let's use this optimized path instead.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Starting from rdma-core v19 and Mellanox OFED 4.4, the Verbs resources
cleanup is properly activated in the plug-out process when the
MLX5_DEVICE_FATAL_CLEANUP environment variable is set to 1.
Set the aforementioned variable to 1.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
In the default Rx handler, stop processing packets at the end of
the completion ring so that wrapping does not have to be checked
in the inner while loop.
Also, check the color bit in the completion without using a conditional.
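A hedged illustration (the names and bit position are assumptions, not the driver's actual structures): validity is derived from the color bit arithmetically instead of branching on it.

    #define CQ_COLOR_SHIFT 7 /* assumed bit position of the color flag */

    static inline int
    cq_desc_ready(uint8_t type_color, uint8_t expected_color)
    {
            /* 1 when the descriptor's color matches the expected color */
            return ((type_color >> CQ_COLOR_SHIFT) & 1) == expected_color;
    }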
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
The default Tx handler checks the maximum packet size. Check it in the
prepare handler too. The WQ stops working if the app/driver tries to send
oversized packets, so these checks are unavoidable.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Add a much-simplified handler that works when all offloads are
disabled, except mbuf fast free. When compared against the default
handler, under ideal conditions, cycles per packet drop by 60+%.
The driver tries to use the simple handler first.
The idea of using specialized/simplified handlers is from the Intel
and Mellanox drivers.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Request one completion update per roughly 32 buffers. This saves DMA
resources on the NIC and reduces PCIe utilization and cache miss rates.
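A hedged sketch of the batching idea (the threshold and names are assumptions): only roughly every 32nd posted descriptor asks the NIC to write back a completion.

    #define TX_COMPLETION_BATCH 32

    static inline int
    tx_desc_wants_completion(uint16_t nb_posted_since_last_cq)
    {
            /* request a completion once per TX_COMPLETION_BATCH descriptors */
            return nb_posted_since_last_cq >= TX_COMPLETION_BATCH;
    }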
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The WQ currently uses vnic_wq_buf to store mbuf pointers for Tx
packets, but it contains an unused mempool pointer and the mbuf is
unnecessarily cast to a void pointer. Remove vnic_wq_buf entirely and
use an mbuf pointer array instead.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>