Add zlib PMD feature support and a user guide with
build and run instructions.
Signed-off-by: Sunila Sahu <sunila.sahu@caviumnetworks.com>
Signed-off-by: Shally Verma <shally.verma@caviumnetworks.com>
Signed-off-by: Ashish Gupta <ashish.gupta@caviumnetworks.com>
This patch adds chained mbuf support for input or output buffers
during compression/decompression operations.
Signed-off-by: Lee Daly <lee.daly@intel.com>
Reviewed-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch adds a packet-size-distr mode specific parameter parser
to support threshold packet size values other than the default of
128 bytes.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
This patch adds Scatter-Gather List (SGL) feature to
QAT compression PMD.
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Added support for the 3DES cipher algorithm, which
supports 8, 16 and 24 byte keys and which has also been
added in v0.50 of the IPsec Multi-buffer lib.
Signed-off-by: Marko Kovacevic <marko.kovacevic@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds support for the v0.50 of the IPsec Multi-buffer lib.
The library now exposes its version, with the idea
of maintaining backwards compatibility in the future,
avoiding breaking the compilation of the PMD every time
there is a new version available.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds support for the v0.50 of the IPsec Multi-buffer lib.
The library now exposes its version, with the idea
of maintaining backwards compatibility in the future,
avoiding breaking the compilation of the PMD every time
there is a new version available.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Extend the QAT guide to cover crypto, compression and common
information, particularly about the kernel driver dependency.
Update release note.
Update compression feature list for QAT.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
udp_cksum is duplicated; the second one should be tcp_cksum.
Fixes: c73a907187 ("app/testpmd: add commands to test new offload API")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
This patch adds support for an additional bus type Virtual Machine BUS
(VMBUS) on Microsoft Hyper-V in Windows 10, Windows Server 2016
and Azure. Most of this code was extracted from FreeBSD and some of
this is from earlier code donated by Brocade.
Only Linux is supported at present, but the code is split
to allow future FreeBSD and Windows support.
The bus support relies on the uio_hv_generic driver from Linux
kernel 4.16. Multiple queue support requires additional sysfs
interfaces, which are in kernel 5.0 (a.k.a. 4.17).
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
This patch adds support for building and running mlx5 PMD on
32bit systems such as i686.
The main issue to tackle was handling the 32bit access to the UAR
as quoted from the mlx5 PRM:
QP and CQ DoorBells require 64-bit writes. For best performance, it
is recommended to execute the QP/CQ DoorBell as a single 64-bit write
operation. For platforms that do not support 64 bit writes, it is
possible to issue the 64 bits DoorBells through two consecutive
writes,
each write 32 bits, as described below:
* The order of writing each of the Dwords is from lower to upper
addresses.
* No other DoorBell can be rung (or even start ringing) in the midst
of an on-going write of a DoorBell over a given UAR page.
The last rule implies that in a multi-threaded environment, the access
to a UAR page (which can be accessible by all threads in the process)
must be synchronized (for example, using a semaphore) unless an atomic
write of 64 bits in a single bus operation is guaranteed. Such a
synchronization is not required for when ringing DoorBells on different
UAR pages.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Prior to this patch, all port representors detected on a given device were
probed and Ethernet devices instantiated for each of them.
This patch adds support for the standard "representor" parameter, which
implies that port representors are not probed by default anymore, except
for the list provided through device arguments.
(Patch based on prior work from Yuanhan Liu)
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Reviewed-by: Xueming Li <xuemingl@mellanox.com>
As per deprecation notice [1], move DPDK runtime config to default
DPDK runtime data location. Also, remove the deprecation notice and
update release notes to indicate the changes.
[1] http://dpdk.org/patch/40418
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
For asynchronous requests, user callback may be triggered either from
IPC thread or from interrupt thread. Because of this, delivery of
other interrupt-based events such as alarms may not be possible inside
the asynchronous IPC request callback handler. Document this
limitation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Previously, it was possible to limit the maximum amount of memory
allowed for allocation by creating validator callbacks. Although a
powerful tool, it's a bit of a hassle and requires modifying the
application for it to work with DPDK example applications.
Fix this by adding a new parameter "--socket-limit", with syntax
similar to "--socket-mem", which would set per-socket memory
allocation limits, and set up a default validator callback to deny
all allocations above the limit.
This option is incompatible with legacy mode, as validator callbacks
are not supported there.
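For illustration (a hedged sketch; the app and values are arbitrary), the new
option follows the same comma-separated per-socket syntax as "--socket-mem":
    ./build/app/testpmd --socket-mem=1024,1024 --socket-limit=2048,2048 -- -i
Here allocations would be capped at 2048 MB on each of sockets 0 and 1.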
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Add ability for application to register a callback function
for SW transfers, the callback can decide which packets can
be enqueued to the event device.
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Add support for interrupt driven queues when the eth device is
configured for rxq interrupts and the servicing weight for the
queue is configured to be zero.
An interrupt-driven packet received counter has been added to
rte_event_eth_rx_adapter_stats.
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
This patch updates the programmer guide and testpmd user guide for
UDP/IPv4 GSO.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Set the starting point that all commits on master branch
with Fixes tag should be backported to relevant stable/LTS
branches, and explain that the submitter may indicate it is
not suitable for backport.
Of course there will be exceptions that will crop up from time
to time that need discussion, so also add a sentence for that.
This is to ensure that there is consistency between what is
backported to stable/LTS branches, remove some subjectivity
as to what constitutes "a fix" and avoid possible conflicts
for future backports.
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Aaron Conole <aconole@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
This is the guide for cross compiling ARM64 DPDK from X86 hosts.
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Enable crypto devs to specify the minimum headroom and tailroom they
expect in the mbuf. For net PMDs, standard headroom has to be honoured
by applications, which is not strictly followed for crypto devs. This
prevents crypto devs from using free space in the mbuf (available as
head/tailroom) for internal requirements in crypto operations. The addition
of a head/tailroom requirement will help PMDs to communicate such
requirements to the application.
The availability and use of head/tailroom is an optimization if the
hardware supports use of head/tailroom for crypto-op info. Devices
that do not support using the head/tailroom can continue to operate
without any performance drop.
Signed-off-by: Anoob Joseph <anoob.joseph@caviumnetworks.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The name private_data is confusing in these APIs:
rte_cryptodev_sym_session_set_private_data()
rte_cryptodev_sym_session_get_private_data()
It refers to data added at the end of the session hdr for
use by the application.
The session already contains sess_private_data[index]
which is used to store private pmd data and most references to private
data refer to that.
e.g. external APIs
rte_cryptodev_sym_get_private_session_size() and internal
set/get_session_private_data() refer to sess_private_data[].
So rename to user_data, i.e.
rte_cryptodev_sym_session_set_user_data()
rte_cryptodev_sym_session_get_user_data()
Refers to changes introduced here:
https://patches.dpdk.org/patch/38172/
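A minimal usage sketch of the renamed functions (prototypes assumed from this
change; "sess" is an existing symmetric session and error handling is omitted):
    uint32_t flow_id = 7;
    /* copy application data into the area at the end of the session header */
    rte_cryptodev_sym_session_set_user_data(sess, &flow_id, sizeof(flow_id));
    /* later, e.g. after dequeue, read it back */
    uint32_t *p = rte_cryptodev_sym_session_get_user_data(sess);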
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
As announced in the previous release,
the API to attach/detach a session to a queue pair
is removed, as it was only used in DPAA, and it is not
actually needed.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
The current mbuf scatter gather feature flag is
too ambiguous, as it is not clear if input and/or output
buffers can be scatter gather mbufs or not, plus
if in-place and/or out-of-place is supported.
Therefore, five new flags will replace this flag:
- RTE_CRYPTODEV_FF_IN_PLACE_SGL
- RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT
- RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT
- RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT
- RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT
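A hedged sketch of how an application could test the new flags before chaining
mbufs (dev_id and the chosen flag are illustrative):
    struct rte_cryptodev_info info;
    rte_cryptodev_info_get(dev_id, &info);
    if (!(info.feature_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT)) {
            /* device needs a linear output buffer: fall back accordingly */
    }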
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Removed rte_cryptodev_get_header_session_size
and rte_cryptodev_get_private_session_size functions,
as they have been substituted with functions
specific for symmetric operations, with _sym_ word
after "rte_cryptodev_".
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Removed cryptodev queue start/stop functions,
as they were marked deprecated in 18.05, since they
were not implemented by any driver.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
In release 18.05, a deprecation notice to remove the `sym`
structure in the cryptodev info structure was sent.
However, only one of the fields inside the structure will
be removed, so the notice is not actually correct.
In any case, the notice needs to be removed.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Most crypto PMDs do not have a limitation
on the number of sessions that can be handled
internally. The value that was set before was not
actually used at all, since the sessions are created
at the application level.
Therefore, this value is not parsed from the initial
crypto parameters anymore and it is set to 0,
meaning that there is no actual limit.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Currently, the info structure contains the maximum number
of sessions that a device can manage.
This field was useful when the session mempool was created inside
each device, but now it is created at the application level.
Most PMDs do not have a limitation on the sessions managed,
but a few do, therefore this field must remain in the structure.
However, a new value, 0, can be used to indicate that
a device does not have an actual maximum of sessions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Structure rte_cryptodev_info currently has PCI device
information ("struct rte_pci_device") in it.
This information is not generic to all devices,
so this gets replaced with the generic "rte_device" structure,
compatible with all crypto devices.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
The current mbuf scatter gather feature flag is
too ambiguous, as it is not clear if input and/or output
buffers can be scatter gather mbufs or not.
Therefore, three new flags will replace this flag:
- RTE_COMP_FF_OOP_SGL_IN_SGL_OUT
- RTE_COMP_FF_OOP_SGL_IN_FB_OUT
- RTE_COMP_FF_OOP_LB_IN_SGL_OUT
Note that out-of-place flat buffers are supported by default
and in-place is not supported by the library.
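A sketch of the equivalent capability check on the compressdev side (dev_id and
the chosen flag are illustrative):
    struct rte_compressdev_info info;
    rte_compressdev_info_get(dev_id, &info);
    if (!(info.feature_flags & RTE_COMP_FF_OOP_SGL_IN_SGL_OUT)) {
            /* chained mbufs not supported for this case: use flat buffers */
    }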
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
Renamed feature "Bypass" to "Pass-through",
as it is a more explicit name, meaning that the PMD
is capable of passing the mbufs through it,
without making any modifications (i.e. NULL algorithm).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
In PMD feature matrices (.ini files), it is not required to
have the list of features that are not supported,
just the ones that are.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Lee Daly <lee.daly@intel.com>
Fixes: 3d12dceed2df ("ethdev: add new offload flag to keep CRC")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
In DPDK 17.11, the ethdev offloads API has changed:
commit cba7f53b71 ("ethdev: introduce Tx queue offloads API")
commit ce17eddefc ("ethdev: introduce Rx queue offloads API")
The new API is documented in the programmer's guide:
http://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html#hardware-offload
As a reminder, the main concepts in the new API were:
- All offloads are disabled by default
- Distinction between per port and per queue offloads.
The transition bits are now removed:
- Translation of the old API in ethdev
- rte_eth_conf.rxmode.ignore_offload_bitfield
- ETH_TXQ_FLAGS_IGNORE
The old API bits are now removed:
- Rx per-port rte_eth_conf.rxmode.[bit-fields]
- Tx per-queue rte_eth_txconf.txq_flags
- ETH_TXQ_FLAGS_NO*
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Shahaf Shuler <shahafs@mellanox.com>
Some test applications and examples were not converted
to the new offload API introduced in 17.11.
For reference, see "Hardware Offload" in
doc/guides/prog_guide/poll_mode_drv.rst
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The example code is showing how to use KNI, and can be found in
examples/kni/
The documentation guide for this example is explaining the code
to ease the understanding of the example.
Inside this documentation, there is a lot of example code
which is copy/pasted. It is really too much and hard to maintain.
The code inside this documentation is replaced by the names
of the functions.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Support Rx of in-direction packets only.
Useful for apps that also Tx to eth_pcap ports, in order to not see them
echoed back in as Rx when the out direction is also captured.
Example:
In case of using rx_iface and sending a *single* packet to eth1,
it will loop forever, as when it is sent to tx_iface=eth1
it will be captured again on rx_iface=eth1 and so on:
$RTE_TARGET/app/testpmd -l 0-3 -n 4 \
--vdev 'net_pcap0,rx_iface=eth1,tx_iface=eth1'
…
---------------------- Forward statistics for port 0 ------------
RX-packets: 758 RX-dropped: 0 RX-total: 758
TX-packets: 758 TX-dropped: 0 TX-total: 758
------------------------------------------------------------------
While if using rx_iface_in, it will not be captured on the way out and
will be forwarded only once:
$RTE_TARGET/app/testpmd -l 0-3 -n 4 \
--vdev 'net_pcap0,rx_iface_in=eth1,tx_iface=eth1'
…
---------------------- Forward statistics for port 0 ------------
RX-packets: 1 RX-dropped: 0 RX-total: 1
TX-packets: 1 TX-dropped: 0 TX-total: 1
------------------------------------------------------------------
Signed-off-by: Ido Goshen <ido@cgstowernetworks.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add a new devarg "ig-vlan-rewrite" to allow the user to set
non-default rewrite mode. The UCS VIC may add/remove/modify the VLAN
header of an ingress packet depending on the ingress VLAN rewrite
mode.
By default, the driver sets the pass-through mode, which tells the NIC
"do not touch VLAN header and preserve it as is". This mode is usually
sufficient, but can complicate deployments for certain environments.
For example, OVS-DPDK in UCS blade environments may want to use "untag
default VLAN mode", which removes the VLAN header from an ingress
packet if it matches vNIC's default VLAN.
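For example (a hedged sketch; the PCI address and the mode token are assumed,
not taken from this patch), the devarg would be passed as:
    -w 12:00.0,ig-vlan-rewrite=untag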
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
rte_eth_rx_descriptor_status and rte_eth_tx_descriptor_status
are supported by fm10k.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
For i40evf, the internal Rx interrupt and the admin queue interrupt share the
same source, which causes a lot of CPU cycles to be wasted in the interrupt
handler on the Rx path. This is a complaint from customers which require low
latency (when I40E_ITR_INTERVAL is set to a small value), but have to suffer
tremendous interrupt handling that eats significant CPU resources.
The patch disables the PCI interrupt and removes the interrupt handler,
replacing it with a low frequency (50ms) interrupt polling daemon
implemented by periodically registering an alarm callback. This saves CPU
time significantly: on a typical x86 server with a 2.1GHz CPU, with a low
latency configuration (32us) we saw CPU usage reported by the top command
reduced from 20% to 0% on the management core in testpmd.
Also, with the new method we can remove the compile option I40E_ITR_INTERVAL,
which was previously used to balance between low latency and low CPU usage.
Now we don't need it since we can reach both at the same time.
Suggested-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
After the IN_ORDER Rx/Tx paths were added, the Rx/Tx path selection
logic needs to be updated.
Rx path selection logic: if IN_ORDER and mergeable are enabled, the
IN_ORDER Rx path will be selected. If IN_ORDER is enabled and Rx offload
and mergeable are disabled, the simple Rx path will be selected. Otherwise
the normal Rx path will be selected.
Tx path selection logic: if IN_ORDER is enabled, the IN_ORDER Tx path
will be selected. Otherwise the default Tx path will be selected.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add parameters for configuring VIRTIO_NET_F_MRG_RXBUF and
VIRTIO_F_IN_ORDER feature bits. If a feature is disabled, also update
the corresponding unsupported feature bit.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
DEV_RX_OFFLOAD_KEEP_CRC offload flag is added. PMDs that support
keeping CRC should advertise this offload capability.
The DEV_RX_OFFLOAD_CRC_STRIP flag will remain for one more release;
the default behavior in PMDs is to keep the CRC until this flag is removed.
Until the DEV_RX_OFFLOAD_CRC_STRIP flag is removed:
- Setting both KEEP_CRC & CRC_STRIP is INVALID
- Setting only CRC_STRIP: the PMD should strip the CRC
- Setting only KEEP_CRC: the PMD should keep the CRC
- Setting neither: the PMD should keep the CRC
A helper function rte_eth_dev_is_keep_crc() has been added to be able to
change the no-flag behavior with minimal changes in PMDs.
PMDs that don't report the DEV_RX_OFFLOAD_KEEP_CRC offload can remove
the rte_eth_dev_is_keep_crc() checks in the next release; the related
code is commented to help the maintenance task.
Also, DEV_RX_OFFLOAD_CRC_STRIP has been added to virtual drivers since
they don't use the CRC at all, so when an application requires this offload
virtual PMDs should not return an error.
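A hedged sketch of an application requesting CRC keeping during the transition
(flag and field names as in the ethdev API; error handling omitted):
    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf = { .rxmode = { .offloads = 0 } };
    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_KEEP_CRC)
            port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
    /* do not set DEV_RX_OFFLOAD_CRC_STRIP together with KEEP_CRC */
    rte_eth_dev_configure(port_id, 1, 1, &port_conf);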
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Allain Legacy <allain.legacy@windriver.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The optimal values of several transmission & reception related
parameters, such as burst sizes, descriptor ring sizes, and number
of queues, vary between different network interface devices. This
patch adds the values for the ixgbe PMD.
Signed-off-by: Remy Horton <remy.horton@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Multi-Packet Receive Queue is to receive multiple packets on a single large
buffer. The number of consumed strides in CQE is accumulated to keep track
of the current stride index. However, it is safer to directly use stride
index in CQE to avoid out-of-order situation which can possibly be caused
by introducing LRO in the future.
If Rx CQE compression is enabled, HW can be configured to store the stride
index in a mini-CQE but this will need newer version of library/driver.
Therefore, since this change, MPRQ is only supported with the newer
library/driver and Rx hash result is not supported if MPRQ is enabled along
with Rx CQE compression.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
This feature is used to create a hole between the HEADROOM
and the actual data. The size of the hole is specified in bytes as
a module parameter to the PMD.
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Document the driver and device naming formats.
Changed the underscores alignment.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Add small amount of additional code, use consistent variable names
across code blocks, change the image to represent queues and
CPU cores intuitively. These help improve the eventdev library
documentation.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Make the compiler switch name and document name consistent as ``ifc`` to
avoid confusion. Also rename the map file to standard name for meson
build in the process.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Introduce rte_flow skeleton and implement validate operation.
Parse and convert <item>, <action>, <attributes> into hardware
specification. Perform validation, including basic sanity tests
and underlying device's supported filter capability checks.
Currently add support for:
<item>: IPv4, IPv6, TCP, and UDP.
<action>: Drop, Queue, and Count.
Also add sanity checks to ensure filters are created at specified
index in LE-TCAM region. The index in LE-TCAM region indicates
the filter rule's priority with index 0 having the highest priority.
If no index is specified, filters are created at closest available
free index.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add template release notes for DPDK 18.08 with inline
comments and explanations of the various sections.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: John McNamara <john.mcnamara@intel.com>
Fix grammar, spelling and formatting of DPDK 18.05 release notes.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Also, reference Bugzilla entry for keeping most current information in
one place.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Added known issue of rte_abort taking a long time
on FreeBSD due to recent memory subsystem rework.
Also, reference Bugzilla entry for keeping most
current information in one place.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Added contribution guideline for adding tags
when sending patches that have been raised on
Bugzilla
Signed-off-by: Marko Kovacevic <marko.kovacevic@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Device querying and declaration has been postponed to 18.08.
Additionally, while working on the feature, some changes previously
announced won't be enacted.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
This expands the ISA-L documentation to give a more detailed explanation
of the mapping between the compressdev API levels and PMD levels.
Signed-off-by: Lee Daly <lee.daly@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Deprecate rte_eal_mbuf_default_mempool_ops(), it shall be replaced by
rte_mbuf_best_mempool_ops().
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Most of this work was already done, but the runtime config path is
considered to be part of the public API because of the high likelihood
of it being used by various tools in the DPDK ecosystem, and thus it
requires a deprecation notice.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Document new command-line switches and the principles behind the
new memory subsystem. Also, replace outdated malloc heap picture.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Describe all the capabilities of DPDK IPC, and provide some
insight into how to best make use of it.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
A new default configuration without flexible payload
is now implemented.
Fixes: d2f9fe8ae3 ("net/i40e: turn off flexible payload on driver init")
Cc: stable@dpdk.org
Signed-off-by: Kirill Rybalchenko <kirill.rybalchenko@intel.com>
If the NIC has a queue number larger than 128, then we need to change
the ``MAX_QUEUES`` to a larger number to make sure we allocate a big
enough memory pool for device setup.
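For example (a hedged sketch; the exact value is arbitrary), the define in the
sample's source would simply be raised:
    #define MAX_QUEUES 512   /* raise above 128 when the NIC exposes more queues */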
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Add LINUX VFIO support to feature list.
Add GRE tunneling offload to supported feature list.
Update firmware references to use newer FW image 8.33.12.0. Update
management FW version as well to 8.34.x.x.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
Remove reference to deprecated ethdev offload parameter from octeontx
documentation.
Fixes: a92870896b ("net/octeontx: use the new offload APIs")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Remove reference to deprecated ethdev offload parameter from thunderx
documentation.
Fixes: c97da2cbc2 ("net/thunderx: convert to new offload API")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Some files were left with full license and wrong copyright format.
They are switched to this format:
SPDX-License-Identifier: BSD-3-Clause
Copyright 2017 Mellanox Technologies, Ltd
Fixes: 5feecc57d9 ("align SPDX Mellanox copyrights")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
We have many stable branches being maintained at the same time, and
sometimes it's not clear which branch a patch is being backported for.
Note in the guidelines that it should be specified via the cover letter,
annotation or using --subject-prefix.
Also note to send only to stable@dpdk.org, not dev@dpdk.org.
Signed-off-by: Luca Boccassi <bluca@debian.org>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Added contribution guideline for adding stable
tags when sending fix patches to the
master branch that are candidates for backporting.
Signed-off-by: Marko Kovacevic <marko.kovacevic@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Added contribution guideline for adding tags
when sending patches for issues that have been
raised by Coverity.
Signed-off-by: Marko Kovacevic <marko.kovacevic@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Since we moved to the new offload APIs, IXGBE_SIMPLE_FLAGS is not used.
Fixes: 51215925a3 ("net/ixgbe: convert to new Tx offloads API")
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Add force_link_up devargs to always force link as up for VFs.
This enables VFs on the same NIC to send traffic to each other
even when physical link is down.
Also add RTE_PMD_REGISTER_PARAM_STRING to export all supported
devargs.
Fixes: 011ebc236d ("net/cxgbe: add skeleton VF driver")
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
This patch corrects some spelling issues in i40e.rst and clarifies
which controllers and connections are part of the 700 Series.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Update information for the blacklist and whitelist options: a secondary
process should share the same options as the primary process.
Signed-off-by: Vipin Varghese <vipin.varghese@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Intel's libsso_zuc library has been moved to a
new location, under "Intel Resource & Design Center".
The installation section of this PMD has been updated
to include the new instructions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Intel's libsso_snow3g library has been moved to a
new location, under "Intel Resource & Design Center".
The installation section of this PMD has been updated
to include the new instructions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Intel's libsso_kasumi library has been moved to a
new location, under "Intel Resource & Design Center".
The installation section of this PMD has been updated
to include the new instructions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
rte_cryptodev_get_header_session_size() and
rte_cryptodev_get_private_session_size() functions are
targeting symmetric sessions.
With the future addition of asymmetric operations,
these functions need to be renamed from *cryptodev_*_session_*
to *cryptodev_sym_*_session_* to be symmetric specific.
The two original functions are marked as deprecated
and will be removed in 18.08, so applications can still
use the functions in 18.05.
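A small sketch of switching to the new names (prototypes assumed; dev_id is an
existing crypto device id):
    unsigned int hdr_sz  = rte_cryptodev_sym_get_header_session_size();
    unsigned int priv_sz = rte_cryptodev_sym_get_private_session_size(dev_id);
    /* e.g. size a session mempool element for header plus driver private data */
    unsigned int elt_sz = hdr_sz + priv_sz;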
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Some feature flags, such as RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER
are ambiguous. In this case, it is not clear if Scatter Gather lists
are supported in the input buffer and/or output buffer.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Functions rte_cryptodev_queue_pair_start/stop
are not really used in any of the crypto drivers
(they all just return 0 or -ENOTSUP).
Therefore, this API can be deprecated from 18.05
and removed in 18.08.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Functions rte_cryptodev_queue_pair_attach_sym_session
and rte_cryptodev_queue_pair_detach_sym_sessions
are not really used in any of the crypto drivers
(only one driver implements it and it just returns 0).
Therefore, this API can be deprecated from 18.05
and removed in 18.08.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Cryptodev info structure currently contains
a pointer to an rte_pci_device structure.
This field depends on a specific bus type (PCI),
which is not following a bus independent design.
Following the same approach taken in ethdev, the field
will be replaced with a pointer to an rte_device structure.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Since the API changes made in 17.08, the session mempool
is not created anymore in each crypto device.
Therefore, there is no need to have, in the cryptodev info
structure, the maximum number of sessions supported per device
and per queue pair.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Add following testpmd run-time commands to support test of
new Rx offload API:
show port <port_id> rx_offload capabilities
show port <port_id> rx_offload configuration
port config <port_id> rx_offload <offload> on|off
port <port_id> rxq <queue_id> rx_offload <offload> on|off
The last 2 commands above should be run when the port is stopped.
And <offload> can be one of "vlan_strip", "ipv4_cksum", ...
Add following testpmd run-time commands to support test of
new Tx offload API:
show port <port_id> tx_offload capabilities
show port <port_id> tx_offload configuration
port config <port_id> tx_offload <offload> on|off
port <port_id> txq <queue_id> tx_offload <offload> on|off
The last 2 commands above should be run when the port is stopped.
And <offload> can be one of "vlan_insert", "udp_cksum", ...
Signed-off-by: Wei Dai <wei.dai@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Multi-Packet Rx Queue (MPRQ a.k.a Striding RQ) can further save PCIe
bandwidth by posting a single large buffer for multiple packets. Instead of
posting a buffer per packet, one large buffer is posted in order to
receive multiple packets on the buffer. A MPRQ buffer consists of multiple
fixed-size strides and each stride receives one packet.
Rx packet is mem-copied to a user-provided mbuf if the size of Rx packet is
comparatively small, or PMD attaches the Rx packet to the mbuf by external
buffer attachment - rte_pktmbuf_attach_extbuf(). A mempool for external
buffers will be allocated and managed by PMD.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
This is the new design of Memory Region (MR) for mlx PMD, in order to:
- Accommodate the new memory hotplug model.
- Support non-contiguous Mempool.
There are multiple layers for MR search.
L0 is to look up the last-hit entry which is pointed by mr_ctrl->mru (Most
Recently Used). If L0 misses, L1 is to look up the address in a fixed-sized
array by linear search. L0/L1 is in an inline function -
mlx4_mr_lookup_cache().
If L1 misses, the bottom-half function is called to look up the address
from the bigger local cache of the queue. This is L2 - mlx4_mr_addr2mr_bh()
and it is not an inline function. Data structure for L2 is the Binary Tree.
If L2 misses, the search falls into the slowest path which takes locks in
order to access global device cache (priv->mr.cache) which is also a B-tree
and caches the original MR list (priv->mr.mr_list) of the device. Unless
the global cache is overflowed, it is all-inclusive of the MR list. This is
L3 - mlx4_mr_lookup_dev(). The size of the L3 cache table is limited and
can't be expanded on the fly due to deadlock. Refer to the comments in the
code for the details - mr_lookup_dev(). If L3 is overflowed, the list will
have to be searched directly bypassing the cache although it is slower.
If L3 misses, a new MR for the address should be created -
mlx4_mr_create(). When it creates a new MR, it tries to register adjacent
memsegs as much as possible which are virtually contiguous around the
address. This must take two locks - memory_hotplug_lock and
priv->mr.rwlock. Due to memory_hotplug_lock, there can't be any
allocation/free of memory inside.
In the free callback of the memory hotplug event, freed space is searched
from the MR list and corresponding bits are cleared from the bitmap of MRs.
This can fragment a MR and the MR will have multiple search entries in the
caches. Once there's a change by the event, the global cache must be
rebuilt and all the per-queue caches will be flushed as well. If memory is
frequently freed in run-time, that may cause jitter on dataplane processing
in the worst case by incurring MR cache flush and rebuild. But, it would be
the least probable scenario.
To guarantee the most optimal performance, it is highly recommended to use
an EAL option - '--socket-mem'. Then, the reserved memory will be pinned
and won't be freed dynamically. And it is also recommended to configure
per-lcore cache of Mempool. Even though there're many MRs for a device or
MRs are highly fragmented, the cache of Mempool will be much helpful to
reduce misses on per-queue caches anyway.
'--legacy-mem' is also supported.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
This patch removes current support of Memory Region (MR) in order to
accommodate the dynamic memory hotplug patch. This patch can be compiled
but traffic can't flow and HW will raise faults. Subsequent patches will
add new MR support.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
This is the new design of Memory Region (MR) for mlx PMD, in order to:
- Accommodate the new memory hotplug model.
- Support non-contiguous Mempool.
There are multiple layers for MR search.
L0 is to look up the last-hit entry which is pointed by mr_ctrl->mru (Most
Recently Used). If L0 misses, L1 is to look up the address in a fixed-sized
array by linear search. L0/L1 is in an inline function -
mlx5_mr_lookup_cache().
If L1 misses, the bottom-half function is called to look up the address
from the bigger local cache of the queue. This is L2 - mlx5_mr_addr2mr_bh()
and it is not an inline function. Data structure for L2 is the Binary Tree.
If L2 misses, the search falls into the slowest path which takes locks in
order to access global device cache (priv->mr.cache) which is also a B-tree
and caches the original MR list (priv->mr.mr_list) of the device. Unless
the global cache is overflowed, it is all-inclusive of the MR list. This is
L3 - mlx5_mr_lookup_dev(). The size of the L3 cache table is limited and
can't be expanded on the fly due to deadlock. Refer to the comments in the
code for the details - mr_lookup_dev(). If L3 is overflowed, the list will
have to be searched directly bypassing the cache although it is slower.
If L3 misses, a new MR for the address should be created -
mlx5_mr_create(). When it creates a new MR, it tries to register adjacent
memsegs as much as possible which are virtually contiguous around the
address. This must take two locks - memory_hotplug_lock and
priv->mr.rwlock. Due to memory_hotplug_lock, there can't be any
allocation/free of memory inside.
In the free callback of the memory hotplug event, freed space is searched
from the MR list and corresponding bits are cleared from the bitmap of MRs.
This can fragment a MR and the MR will have multiple search entries in the
caches. Once there's a change by the event, the global cache must be
rebuilt and all the per-queue caches will be flushed as well. If memory is
frequently freed in run-time, that may cause jitter on dataplane processing
in the worst case by incurring MR cache flush and rebuild. But, it would be
the least probable scenario.
To guarantee the most optimal performance, it is highly recommended to use
an EAL option - '--socket-mem'. Then, the reserved memory will be pinned
and won't be freed dynamically. And it is also recommended to configure
per-lcore cache of Mempool. Even though there're many MRs for a device or
MRs are highly fragmented, the cache of Mempool will be much helpful to
reduce misses on per-queue caches anyway.
'--legacy-mem' is also supported.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
This patch removes current support of Memory Region (MR) in order to
accommodate the dynamic memory hotplug patch. This patch can be compiled
but traffic can't flow and HW will raise faults. Subsequent patches will
add new MR support.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
This patch checks whether an input requested offload is valid or not.
Any requested offload must be supported in the device capabilities.
Any offload is disabled by default if it is not set in the parameter
dev_conf->[rt]xmode.offloads to rte_eth_dev_configure() and
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If any offload is enabled in rte_eth_dev_configure() by the application,
it is enabled on all queues, no matter whether it is per-queue or
per-port type and no matter whether it is set or cleared in
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If a per-queue offload hasn't been enabled in rte_eth_dev_configure(),
it can be enabled or disabled for an individual queue in
rte_eth_[rt]x_queue_setup().
A newly added offload is one which hasn't been enabled in
rte_eth_dev_configure() and is requested to be enabled in
rte_eth_[rt]x_queue_setup(); it must be of per-queue type,
otherwise an error log is triggered.
The underlying PMD must be aware that the requested offloads
passed to the PMD specific queue_setup() function only carry those
newly added offloads of per-queue type.
This patch performs the above checking in a common way in the rte_ethdev
layer to avoid the same checking in each underlying PMD.
This patch assumes that all PMDs in 18.05-rc2 have already
converted to the offload API defined in 17.11. It also assumes
that all PMDs can return correct offload capabilities
in rte_eth_dev_infos_get().
In the beginning of [rt]x_queue_setup() of the underlying PMD,
add offloads = [rt]xconf->offloads |
dev->data->dev_conf.[rt]xmode.offloads; to keep the same behavior as
the offload API defined in 17.11 and avoid breaking upper applications
due to the offload API change.
A PMD can use the fact that the input [rt]xconf->offloads only carries
the newly added per-queue offloads to do some optimization or some
code change on top of this patch.
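A sketch of the queue setup snippet described above, as it could look at the
start of a PMD's Rx queue setup callback (the function name is illustrative):
    static int
    xxx_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
                       uint16_t nb_desc, unsigned int socket_id,
                       const struct rte_eth_rxconf *rx_conf,
                       struct rte_mempool *mp)
    {
            uint64_t offloads;

            /* merge per-port and per-queue offloads to keep 17.11 semantics */
            offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;

            /* ... continue with the usual queue setup using "offloads" ... */
            return 0;
    }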
Signed-off-by: Wei Dai <wei.dai@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
There are two capabilities related to CRC stripping:
1. mlx4 HW capability to perform CRC stripping on a received packet.
This capability is built in mlx4 HW. It should be returned by the API
call mlx4_get_rx_queue_offloads().
2. mlx4 driver capability to enable/disable HW CRC stripping. This
capability is dependent on the driver version.
Before this commit the second capability was falsely returned by
the mentioned API. This commit fixes it by returning the first
capability.
mlx4 HW performs CRC stripping by default and this capability is
always reported as "true".
The ability to enable/disable CRC stripping is supported since this
commit and requires OFED version 4.3-1.5.0.0 or rdma-core version v18.
CRC stripping will be done by default regardless of its configuration
when working with OFED or rdma-core versions earlier than those
previously specified or before this commit.
Fixes: de1df14e6e ("net/mlx4: support CRC strip toggling")
Cc: stable@dpdk.org
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Report on TAP supported RSS functions as part of dev_infos_get
callback: ETH_RSS_IP, ETH_RSS_UDP and ETH_RSS_TCP.
Known limitation: TAP supports all of the above hash functions together
and not in partial combinations.
Prior to this commit, RSS support was reported as none. Since the
introduction of [1] it is required that all RSS configurations will be
verified.
[1] commit 8863a1fbfc ("ethdev: add supported hash function check")
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Company policy discourages the mention of unreleased hardware in
guides and release notes.
Fixes: 08df773 ("doc: update enic guide and features")
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Signed-off-by: John Daley <johndale@cisco.com>
Add more descriptions regarding SR-IOV and RSS settings.
Remove 'Multicast MAC filter' and add 'Allmulticast mode' to the
features.
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
After the commit 2a04139f66 ("eal: add single file segments option"),
one hugepage file could contain multiple hugepages which are further
mapped to different memory regions.
The original enumeration implementation cannot handle this situation.
This patch filters out the duplicated files and adjusts the size after
the enumeration.
Fixes: 6a84c37e39 ("net/virtio-user: add vhost-user adapter layer")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In VM2NIC case zero copy may need some tuning to get best performance.
This patch describes the zero copy starved case and provides a tuning
tip.
Signed-off-by: Junjie Chen <junjie.j.chen@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add two new command-line parameters for either enabling or
disabling locking all memory at app startup.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Introduce API to install BPF based filters on ethdev RX/TX path.
Current implementation is pure SW one, based on ethdev RX/TX
callback mechanism.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Introduce rte_bpf_elf_load() function to provide ability to
load eBPF program from ELF object file.
It also adds dependency on libelf.
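A hedged usage sketch (the object file and section names are illustrative; the
prototype is assumed to take load parameters, an ELF file name and a section):
    struct rte_bpf_prm prm = { 0 };  /* describe program input/externals here */
    struct rte_bpf *bpf = rte_bpf_elf_load(&prm, "filter.o", ".text");
    if (bpf == NULL) {
            /* handle load failure */
    }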
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
librte_bpf provides a framework to load and execute eBPF bytecode
inside user-space dpdk based applications.
It supports a basic set of features from the eBPF spec
(https://www.kernel.org/doc/Documentation/networking/filter.txt).
Not currently supported features:
- JIT
- cBPF
- tail-pointer call
- eBPF MAP
- skb
- function calls for 32-bit apps
- mbuf pointer as input parameter for 32-bit apps
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Defined FPGA-BUS for Acceleration Drivers of AFUs.
1. The FPGA PCI scan (1st scan) follows the DPDK UIO/VFIO PCI scan process
and probes the Intel FPGA Rawdev driver; it will be covered in following
patches.
2. The AFU scan (2nd scan) binds DPDK drivers to the FPGA Partial-Bitstream.
This scan is triggered by hotplug of the IFPGA Rawdev probe; in this scan
the AFUs will be created and their drivers are also probed.
This patch introduces rte_afu_device, which describes the AFU devices
listed on the FPGA-BUS.
Signed-off-by: Rosen Xu <rosen.xu@intel.com>
Signed-off-by: Tianfei Zhang <tianfei.zhang@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
The CCP PMD supports authentication offload to either the CCP or the CPU.
The earlier version of the patch provided this option at compile time.
This patch changes this option from compile time to run time.
The user can pass "ccp_auth_opt=1" as an additional argument to the vdev
parameter to enable authentication operations on the CPU.
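For example (a hedged sketch; the vdev name is assumed to be the CCP crypto
vdev), authentication on the CPU would be selected on the command line with:
    --vdev "crypto_ccp,ccp_auth_opt=1"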
Signed-off-by: Ravi Kumar <ravi1.kumar@amd.com>
Change baseband device name:
- from turbo_sw to baseband_turbo_sw
- from bbdev_null to baseband_null
To keep backwards compatibility, the old names are still valid.
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
This adds general compression drivers feature guide
as well as the ISA-L PMD documentation and guide.
Signed-off-by: Lee Daly <lee.daly@intel.com>
Reviewed-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adding basic skeleton of the ISA-L compression driver.
No compression functionality, but lays the foundation for
operations in the rest of the patchset.
The ISA-L compression driver utilizes Intel's ISA-L compression
library and compressdev API.
Signed-off-by: Lee Daly <lee.daly@intel.com>
Reviewed-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Update the test app documentation:
- description of tests added
- usage of test app updated
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Added a note to enable building as a shared lib.
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Creation of new vectors to test and validate BBDevice capabilities
Test app documentation updated accordingly
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Update test vectors directory for Wireless Baseband Device:
- update test vectors names
- python script used for tests execution updated
Update the test app documentation:
- vector test names updated
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Splitting Queue Groups into UL/DL Groups in Turbo Software
Driver. They are independent for Decode/Encode.
Release note updated accordingly.
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Support for optional CRC overlap in decode processing implemented
in Turbo Software driver
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Update Turbo Software driver for Wireless Baseband Device:
- function scaling input LLR values to specific range [-16, 16] added
- new test vectors to check device capabilities added
- release note updated accordingly
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Adjusting BaseBand drivers code to changes in FlexRAN 1.4.0:
- update usage of crc functions after API changes
Update the documentation describing Wireless Baseband Device:
- FlexRAN releases mapping table added
- download and build instructions for BBDEV turbo_sw driver in
compliance with FlexRAN 1.4.0 release added
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Added structures and enums specific to compression,
including the compression operation structure and the
different supported algorithms, checksums and compression
levels.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Signed-off-by: Shally Verma <shally.verma@caviumnetworks.com>
Signed-off-by: Ashish Gupta <ashish.gupta@caviumnetworks.com>
Add basic functions to manage compress devices,
including driver and device allocation, and the basic
interface with compressdev PMDs.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Signed-off-by: Shally Verma <shally.verma@caviumnetworks.com>
Signed-off-by: Ashish Gupta <ashish.gupta@caviumnetworks.com>
Picking a company stock ticker for a PMD name might not be the best approach
in the long run since the name is too generic.
This patch addresses that and renames mrvl to mvsam.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Ethernet port ID data size has been extended to 16 bits since 17.11.
Update the Rx event adapter interface and implementation accordingly.
This commit bumps the library version to reflect the ABI change
caused by extending the ethernet port parameter in Rx adapter
functions from 8 to 16 bits.
Fixes: 9c38b704d2 ("eventdev: add eth Rx adapter implementation")
Cc: stable@dpdk.org
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Add entries in the programmer's guide, API index, maintainer's file
and release notes for the event crypto adapter.
Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch introduces event crypto adapter APIs. It
also provides information on working model/adapter
modes & their usage. Application is expected to use
this interface to transfer packets between the crypto
device & the event device.
Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Signed-off-by: Gage Eads <gage.eads@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Traffic manager provides an API for resuming
an arbitrary node in a hierarchy.
This commit adds support for calling this API
from testpmd.
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Reviewed-by: Jasvinder Singh <jasvinder.singh@intel.com>
Traffic manager provides an API for suspending
an arbitrary node in a hierarchy.
This commit adds support for calling this API from testpmd.
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Reviewed-by: Jasvinder Singh <jasvinder.singh@intel.com>
There are two APIs which are required by the NXP specific Command Interface
Application (AIOP CMDIF). This patch exposes these two APIs.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
The EAL option -m is supported in FreeBSD,
so move it from the non-supported heading
to the supported heading.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
A new version of the packages with dependencies for the szedata2
driver is needed due to the new API of the libsze2 library which
is used in the driver.
The documentation and the release notes are updated to contain
the information about the required versions.
Signed-off-by: Matej Vido <vido@cesnet.cz>
Acked-by: Jan Remes <remes@netcope.com>
The library folder name and the output library name are the same except for
a few cases, including librte_ether.
This library is the network device abstraction layer; the name "ethdev" fits
better than "ether", and the library & header files are already named ethdev.
Also there is a rte_ether.h in the net library which can cause confusion.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Add a device argument to customize the Rx descriptor wait timeout, which
is supported only in the DPDK firmware variant and only in equal stride
super-buffer Rx mode.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
DPDK firmware variant supports equal stride super-buffer Rx mode which
provides higher packet rate and packet marks but requires dedicated
mempool manager with contiguous object block allocation (e.g. bucket).
Also the firmware supports a subvariant without checksumming on Tx which
allows reaching higher packet rates on transmit if checksumming is not
required.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
HW Rx descriptor represents many contiguous packet buffers which
follow each other. Number of buffers, stride and maximum DMA
length are setup-time configurable per Rx queue based on provided
mempool. The mempool must support contiguous block allocation and
get info API to retrieve number of objects in the block.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Add rte_flow_action_count action data structure to enable shared
counters across multiple flows on a single port or across multiple
flows on multiple ports within the same switch domain. Also this enables
multiple count actions to be specified in a single flow action.
This patch also modifies the existing rte_flow_query API to take the
rte_flow_action structure as an input parameter instead of the
rte_flow_action_type enumeration to allow querying a specific action
from a flow rule when multiple actions of the same type are specified.
This patch also contains updates for the bonding, failsafe and mlx5 PMDs
and testpmd application which are affected by this API change.
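For illustration only (not part of the patch text), a shared counter could be
attached and queried roughly as follows, using the modified query prototype;
port_id, flow and error are assumed to exist:
#include <rte_flow.h>
/* sketch: share counter id 7 among flows on this port */
struct rte_flow_action_count count = { .shared = 1, .id = 7 };
struct rte_flow_action_queue queue = { .index = 0 };
struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};
/* rte_flow_query() now takes the action itself rather than its type */
struct rte_flow_query_count stats = { .reset = 0 };
rte_flow_query(port_id, flow, &actions[0], &stats, &error);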
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Introduces a new item type RTE_FLOW_ITEM_TYPE_MARK which enables
flow patterns to match against arbitrary integer values
set by the RTE_FLOW_ACTION_TYPE_MARK action in previously matched
flows.
Add support for specification of new MARK flow item in testpmd's cli.
Update testpmd documentation to describe new MARK flow item support.
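An illustrative testpmd rule (values hypothetical) matching packets previously
marked with 42 could look like:
testpmd> flow create 0 ingress pattern mark id is 42 / end actions queue index 1 / end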
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Add jump action type which defines an action which allows a matched
flow to be redirected to the specified group. This allows physical and
logical flow table/group hierarchies to be defined through rte_flow.
This breaks ABI compatibility for the following public functions (as it
modifies the ordering of the rte_flow_action_type enumeration):
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Add support for specification of new JUMP action to testpmd's flow
cli, and update the testpmd documentation to describe this new
action.
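For illustration, a testpmd rule redirecting matched traffic from group 0 to
group 1 could look like (values hypothetical):
testpmd> flow create 0 group 0 ingress pattern eth / end actions jump group 1 / end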
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Add new flow action types and associated action data structures to
support the encapsulation and decapsulation of VXLAN and NVGRE tunnel
endpoints.
The RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP action will cause the
matching flow to be encapsulated in the tunnel endpoint overlay
defined in the [vxlan/nvgre]_encap action data.
The RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP action will cause all
headers associated with the outermost tunnel endpoint of the specified
type to be removed from the matching flows.
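A minimal sketch of building the VXLAN variant of the encapsulation action
(header values elided; this is an illustration, not code from the patch):
#include <rte_flow.h>
struct rte_flow_item_eth outer_eth = { 0 };   /* fill in outer MAC addresses */
struct rte_flow_item_ipv4 outer_ipv4 = { 0 }; /* fill in outer IP addresses */
struct rte_flow_item_udp outer_udp = { 0 };   /* fill in VXLAN UDP port */
struct rte_flow_item_vxlan vxlan = { 0 };     /* fill in VNI, flags */
/* the tunnel headers are described as a flow item list */
struct rte_flow_item definition[] = {
    { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &outer_eth },
    { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &outer_ipv4 },
    { .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &outer_udp },
    { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action_vxlan_encap encap = { .definition = definition };
struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, .conf = &encap },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};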
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Add support for virtual function representor ports to the ixgbe PF
driver. When SR-IOV virtual function devices are enabled, a
corresponding representor port for each VF can be enabled in the
process in which the ixgbe PMD is running, by specifying the
representor devargs with the list of VF ports that representors
are to be created for.
An example of the devargs which would create VF representors for virtual
functions 0, 2, 4, 5, 6 and 7 is:
-w DBDF,representor=[0,2,4-7]
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Remy Horton <remy.horton@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for virtual function representor ports to the i40e PF
driver. When SR-IOV virtual function devices are enabled, a
corresponding representor port for each VF can be enabled in the
process in which the i40e PMD is running, by specifying the
representor devargs with the list of VF ports that representors
are to be created for.
An example of the devargs which would create VF representors for virtual
functions 0, 2, 4, 5, 6 and 7 is:
-w DBDF,representor=[0,2,4-7]
and to just specify a single representor on virtual function 3 (switch
port id):
-w DBDF,representor=3
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Remy Horton <remy.horton@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Introduces a new structure, rte_eth_devargs, to support generic
ethdev arguments common across net PMDs, with a new API,
rte_eth_devargs_parse, to support PMDs parsing these arguments. The
patch adds support for a representor argument passed with
the EAL -w option. The representor parameter allows the user to specify
which representor ports to initialise on a device.
The argument supports passing a single representor port, a list of
port values or a range of port values.
-w BDF,representor=1 # create representor port 1 on pci device BDF
-w BDF,representor=[1,2,5,6,10] # create representor ports in list
-w BDF,representor=[0-31] # create representor ports in range
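A rough sketch of how a PMD could consume this during probe (exact structure
fields omitted; header and argument semantics assumed from the ethdev driver
interface):
#include <string.h>
#include <rte_ethdev_driver.h>
struct rte_eth_devargs eth_da;
memset(&eth_da, 0, sizeof(eth_da));
if (pci_dev->device.devargs != NULL)
    /* fills eth_da with the representor ports requested by the user */
    rte_eth_devargs_parse(pci_dev->device.devargs->args, &eth_da);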
Signed-off-by: Remy Horton <remy.horton@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Introduces a new attribute to ethdev ports which denotes the
switch domain a port belongs to. By default, all ports' switch
identifiers are set to RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID. Ports
which support the concept of switch domains can be configured with
the same switch domain id.
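For illustration, an application could check whether two ports share a switch
domain roughly as follows (assuming the switch information and the INVALID
macro are visible through the ethdev headers):
#include <stdio.h>
#include <rte_ethdev.h>
struct rte_eth_dev_info info_a, info_b;
rte_eth_dev_info_get(port_a, &info_a);
rte_eth_dev_info_get(port_b, &info_b);
if (info_a.switch_info.domain_id != RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID &&
    info_a.switch_info.domain_id == info_b.switch_info.domain_id)
    printf("ports %u and %u belong to the same switch domain\n",
           port_a, port_b);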
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Add document to describe the model for representing switching capable
devices in DPDK, using a general ethdev port model and through port
representors. This document also details the port model and the
rte_flow semantics required for flow programming, as well as listing
some example use cases.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch supports L3 VXLAN, which has no inner L2 header compared to the
standard VXLAN protocol. L3 VXLAN uses a specific overlay UDP destination
port to distinguish it from standard VXLAN; a device parameter and the FW
have to be configured to support it:
sudo mlxconfig -d <device> -y s IP_OVER_VXLAN_EN=1
sudo mlxconfig -d <device> -y s IP_OVER_VXLAN_PORT=<port>
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Add support for the following OpenFlow-defined actions:
- RTE_FLOW_ACTION_OF_POP_VLAN: pop the outer VLAN tag.
- RTE_FLOW_ACTION_OF_PUSH_VLAN: push a new VLAN tag.
- RTE_FLOW_ACTION_OF_SET_VLAN_VID: set the 802.1q VLAN id.
- RTE_FLOW_ACTION_OF_SET_VLAN_PCP: set the 802.1q priority.
- RTE_FLOW_ACTION_OF_POP_MPLS: pop the outer MPLS tag.
- RTE_FLOW_ACTION_OF_PUSH_MPLS: push a new MPLS tag.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Add VXLAN-GPE support to csum forwarding engine and rte flow.
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
VXLAN-GPE enables VXLAN for all protocols. Protocol link:
https://www.ietf.org/id/draft-ietf-nvo3-vxlan-gpe-05.txt
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
RTE_FLOW_ACTION_TYPE_PORT_ID brings the ability to inject matching traffic
into a different device, as identified by its DPDK port ID.
This is normally only supported when the target port ID has some kind of
relationship with the port ID the flow rule is created against, such as
being exposed by a common physical device (e.g. a different port of an
Ethernet switch).
The converse pattern item, RTE_FLOW_ITEM_TYPE_PORT_ID, makes the resulting
flow rule match traffic whose origin is the specified port ID. Note that
specifying a port ID that differs from the one the flow rule is created
against is normally meaningless (if even accepted), but can make sense if
combined with the transfer attribute.
These must not be confused with their PHY_PORT counterparts, which refer to
physical ports using device-specific indices, but unlike PORT_ID are not
necessarily tied to DPDK port IDs.
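A minimal sketch of the action side (port number hypothetical), injecting
matched traffic into DPDK port 2:
#include <rte_flow.h>
struct rte_flow_action_port_id dst = { .id = 2 }; /* target DPDK port ID */
struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &dst },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};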
This breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch adds the missing action counterpart to the PHY_PORT pattern
item, that is, the ability to directly inject matching traffic into a
physical port of the underlying device.
It breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
While RTE_FLOW_ITEM_TYPE_PORT refers to physical ports of the underlying
device using specific identifiers, these are often confused with DPDK port
IDs exposed to applications in the global namespace.
Since this pattern item is seldom used, rename it RTE_FLOW_ITEM_TYPE_PHY_PORT
for better clarity.
No ABI impact.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Contrary to all other pattern items, the PF and VF pattern items are
inconsistently documented as affecting traffic instead of simply matching
its origin, without provision for the latter.
This commit clarifies documentation and updates PMDs since the original
behavior now has to be explicitly requested using the new transfer
attribute.
It breaks ABI compatibility for the following public functions:
- rte_flow_create()
- rte_flow_validate()
Impacted PMDs are bnxt and i40e, for which the VF pattern item is now only
supported when a transfer attribute is also present.
Fixes: b1a4b4cbc0 ("ethdev: introduce generic flow API")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This new attribute enables applications to create flow rules that do not
simply match traffic whose origin is specified in the pattern (e.g. some
non-default physical port or VF), but actively affect it by applying the
flow rule at the lowest possible level in the underlying device.
It breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
VLAN TCI is a 16-bit field broken down as PCP (3b), DEI (1b) and VID (12b).
The default mask used by PMDs for the VLAN pattern when one isn't provided
by the application comprises the entire TCI, which is problematic because
most devices only support VID matching.
This forces applications to always provide a mask limited to the VID part
in order to successfully apply a flow rule with a VLAN pattern item.
Moreover, applications rarely want to match PCP and DEI intentionally.
Given the above and since VID is what is commonly referred to when talking
about VLAN, this commit excludes PCP and DEI from the default mask.
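The new default is therefore roughly equivalent to a mask covering only the
12 VID bits, e.g.:
/* sketch: VID-only default mask for the VLAN item (PCP and DEI excluded) */
static const struct rte_flow_item_vlan vlan_default_mask = {
    .tci = RTE_BE16(0x0fff),
};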
Fixes: 6de5c0f130 ("ethdev: define default item masks in flow API")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
TPID handling in rte_flow VLAN and E_TAG pattern item definitions is not
consistent with the normal stacking order of pattern items, which is
confusing to applications.
The problem is that when followed by one of these layers, the EtherType
field of the preceding layer keeps its "inner" definition, and the "outer"
TPID is provided by the subsequent layer, the reverse of how a packet looks
on the wire:
Wire: [ ETH TPID = A | VLAN EtherType = B | B DATA ]
rte_flow: [ ETH EtherType = B | VLAN TPID = A | B DATA ]
Worse, when QinQ is involved, the stacking order of VLAN layers is
unspecified. It is unclear whether it should be reversed (innermost to
outermost) as well given TPID applies to the previous layer:
Wire: [ ETH TPID = A | VLAN TPID = B | VLAN EtherType = C | C DATA ]
rte_flow 1: [ ETH EtherType = C | VLAN TPID = B | VLAN TPID = A | C DATA ]
rte_flow 2: [ ETH EtherType = C | VLAN TPID = A | VLAN TPID = B | C DATA ]
While specifying EtherType/TPID is hopefully rarely necessary, the stacking
order in case of QinQ and the lack of documentation remain an issue.
This patch replaces TPID in the VLAN pattern item with an inner
EtherType/TPID as is usually done everywhere else (e.g. struct vlan_hdr),
clarifies documentation and updates all relevant code.
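A hedged sketch of the reworked item, with the inner EtherType carried by the
VLAN item itself (values hypothetical):
#include <rte_flow.h>
struct rte_flow_item_vlan vlan_spec = {
    .tci = RTE_BE16(100),           /* VID 100 */
    .inner_type = RTE_BE16(0x0800), /* IPv4 follows the VLAN header */
};
struct rte_flow_item pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_ETH },
    { .type = RTE_FLOW_ITEM_TYPE_VLAN, .spec = &vlan_spec },
    { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};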
It breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Summary of changes for PMDs that implement ETH, VLAN or E_TAG pattern
items:
- bnxt: EtherType matching is supported with and without VLAN, but TPID
matching is not and triggers an error.
- e1000: EtherType matching is only supported with the ETHERTYPE filter,
which does not support VLAN matching, therefore no impact.
- enic: same as bnxt.
- i40e: same as bnxt with existing FDIR limitations on allowed EtherType
values. The remaining filter types (VXLAN, NVGRE, QINQ) do not support
EtherType matching.
- ixgbe: same as e1000, with additional minor change to rely on the new
E-Tag macro definition.
- mlx4: EtherType/TPID matching is not supported, no impact.
- mlx5: same as bnxt.
- mvpp2: same as bnxt.
- sfc: same as bnxt.
- tap: same as bnxt.
Fixes: b1a4b4cbc0 ("ethdev: introduce generic flow API")
Fixes: 99e7003831 ("net/ixgbe: parse L2 tunnel filter")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
RSS hash types (ETH_RSS_* macros defined in rte_ethdev.h) describe the
protocol header fields of a packet that must be taken into account while
computing RSS.
When facing encapsulated (e.g. tunneled) packets, there is an ambiguity as
to whether these should apply to inner or outer packets. Applications need
the ability to tell exactly "where" RSS must be performed.
This is addressed by adding encapsulation level information to the RSS flow
action. Its default value is 0 and stands for the usual unspecified
behavior. Other values provide a specific encapsulation level.
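For example (a sketch assuming the resulting struct rte_flow_action_rss
layout, not text from the patch), requesting RSS on the inner headers of
tunneled traffic:
#include <stdint.h>
#include <rte_flow.h>
static const uint16_t queues[] = { 0, 1, 2, 3 };
struct rte_flow_action_rss rss = {
    .level = 2,                        /* hash the inner encapsulated packet */
    .types = ETH_RSS_IP | ETH_RSS_UDP, /* header fields to hash */
    .queue_num = 4,
    .queue = queues,
};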
Contrary to the change announced by commit 676b605182 ("doc: announce
ethdev API change for RSS configuration"), this patch does not affect
struct rte_eth_rss_conf but struct rte_flow_action_rss as the former is not
used anymore by the RSS flow action. ABI impact is therefore limited to
rte_flow.
This breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
By definition, RSS involves some kind of hash algorithm, usually Toeplitz.
Until now it could not be modified on a flow rule basis and PMDs had to
always assume RTE_ETH_HASH_FUNCTION_DEFAULT, which remains the default
behavior when unspecified (0).
This breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Since its inception, the rte_flow RSS action has been relying in part on
external struct rte_eth_rss_conf for compatibility with the legacy RSS API.
This structure lacks parameters such as the hash algorithm to use, and more
recently, a method to tell which layer RSS should be performed on [1].
Given struct rte_eth_rss_conf will never be flexible enough to represent a
complete RSS configuration (e.g. RETA table), this patch supersedes it by
extending the rte_flow RSS action directly.
A subsequent patch will add a field to use a non-default RSS hash
algorithm. To that end, a field named "types" replaces the field formerly
known as "rss_hf" and standing for "RSS hash functions" as it was
confusing. Actual RSS hash function types are defined by enum
rte_eth_hash_function.
This patch updates all PMDs and example applications accordingly.
It breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
[1] commit 676b605182 ("doc: announce ethdev API change for RSS
configuration")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch replaces C99-style flexible arrays in struct rte_flow_action_rss
and struct rte_flow_item_raw with standard pointers to the same data.
They proved difficult to use in the field (e.g. no possibility of static
initialization) and unsuitable for C++ applications.
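For illustration, static initialization now becomes possible, e.g. for a raw
pattern item (values hypothetical):
/* sketch: a statically initialized RAW item spec, using the new pointer */
static const uint8_t raw_bytes[] = { 0xde, 0xad, 0xbe, 0xef };
static const struct rte_flow_item_raw raw_spec = {
    .length = sizeof(raw_bytes),
    .pattern = raw_bytes,
};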
Affected PMDs and examples are updated accordingly.
This breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Fixes: b1a4b4cbc0 ("ethdev: introduce generic flow API")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This patch makes the following changes to flow rule actions:
- List order now matters, they are redefined as performed first to last
instead of "all simultaneously".
- Repeated actions are now supported (e.g. specifying QUEUE multiple times
now duplicates traffic among them). Previously only the last action of
any given kind was taken into account.
- No more distinction between terminating/non-terminating/meta actions.
Flow rules themselves are now defined as always terminating unless a
PASSTHRU action is specified.
These changes alter the behavior of flow rules in corner cases in order to
prepare the flow API for actions that modify traffic contents or properties
(e.g. encapsulation, compression) and for which order matters when combined.
Previously one would have to do so through multiple flow rules by combining
PASSTHRU with priority levels; however, this proved overly complex to
implement at the PMD level, hence this simpler approach.
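A sketch of an action list that is now valid and order-sensitive (queue
indices hypothetical); both queues receive the traffic instead of only the
last one:
#include <rte_flow.h>
struct rte_flow_action_queue q1 = { .index = 1 };
struct rte_flow_action_queue q2 = { .index = 2 };
struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q1 },
    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q2 },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};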
This breaks ABI compatibility for the following public functions:
- rte_flow_create()
- rte_flow_validate()
PMDs with rte_flow support are modified accordingly:
- bnxt: no change, implementation already forbids multiple actions and does
not support PASSTHRU.
- e1000: no change, same as bnxt.
- enic: modified to forbid redundant actions, no support for default drop.
- failsafe: no change needed.
- i40e: no change, implementation already forbids multiple actions.
- ixgbe: same as i40e.
- mlx4: modified to forbid multiple fate-deciding actions and drop when
unspecified.
- mlx5: same as mlx4, with other redundant actions also forbidden.
- sfc: same as mlx4.
- tap: implementation already complies with the new behavior except for
the default pass-through modified as a default drop.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Upcoming changes in relation to the handling of actions list will make the
DUP action redundant as specifying several QUEUE actions will achieve the
same behavior. Besides, no PMD implements this action.
By removing an entry from enum rte_flow_action_type, this patch breaks ABI
compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This section has become less relevant since the flow API (rte_flow) is now
a mature DPDK API with applications developed directly on top of it instead
of an afterthought.
This patch removes it for the following reasons:
- It has never been updated to track the latest changes in the legacy
filter types and never will.
- Many provided examples are theoretical and misleading since PMDs do not
implement them. Others are obvious.
- Upcoming work on the flow API will alter the behavior of several pattern
items, actions and in some cases, flow rules, which will in turn cause
existing examples to be wrong.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Although pattern items and actions examples end with "and so on", these
lists include all existing definitions and as a result are updated almost
every time new types are added. This is cumbersome and pointless.
This patch also synchronizes Doxygen and external API documentation wording
with a slight clarification regarding meta pattern items.
No fundamental API change.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
These enable more precise reporting of objects responsible for errors.
This breaks ABI compatibility for the following public functions:
- rte_flow_create()
- rte_flow_destroy()
- rte_flow_error_set()
- rte_flow_flush()
- rte_flow_isolate()
- rte_flow_query()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add new callbacks for eth_dev_ops of i40e to get the information
and data of plugin module EEPROM.
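From the application side, retrieval through the generic ethdev API might look
roughly like this (a sketch; port_id assumed, error handling shortened):
#include <rte_common.h>
#include <rte_ethdev.h>
struct rte_eth_dev_module_info modinfo;
struct rte_dev_eeprom_info eeprom;
uint8_t buf[256];
if (rte_eth_dev_get_module_info(port_id, &modinfo) == 0) {
    eeprom.offset = 0;
    eeprom.length = RTE_MIN(modinfo.eeprom_len, (uint32_t)sizeof(buf));
    eeprom.data = buf;
    rte_eth_dev_get_module_eeprom(port_id, &eeprom);
}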
Signed-off-by: Zijie Pan <zijie.pan@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Add new callbacks for eth_dev_ops of e1000 to get the information and
data of plugin module EEPROM.
Signed-off-by: Zijie Pan <zijie.pan@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Add new callbacks for eth_dev_ops of ixgbe to get the information
and data of plugin module EEPROM.
Signed-off-by: Zijie Pan <zijie.pan@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Add a new command "module-eeprom" to get the data of plugin
module EEPROM.
Signed-off-by: Zijie Pan <zijie.pan@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This change adds a new option to the "port config" command to
add a UDP tunnel port for the VXLAN and GENEVE tunneling protocols.
This command can be extended for other tunneling protocols
listed in "enum rte_eth_tunnel_type" as and when needed.
usage:
port config <port_id> udp_tunnel_port add|rm vxlan|geneve <udp_port>
Signed-off-by: Shahed Shaikh <shahed.shaikh@cavium.com>
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Ethernet devices which are grouped by the bonding PMD, aka slaves, share
the same queues and RSS configurations, and their Rx burst
functions must be managed by the bonding PMD according to the bonding
architecture.
So, it makes sense to configure the same flow rules for all the bond
slaves to allow consistency in packet flow management.
Add rte_flow support to the bonding PMD to manage all flow
configuration for the bonded slaves.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Expose the runtime queue configuration capability and enhance
i40e_dev_[rx|tx]_queue_setup to handle the situation when the
device has already been started.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add a command to change a specific queue's ring size configuration.
The new value will only take effect after a command that restarts
the device (port stop <port_id>/port start <port_id>) or a command
that sets up the queue (port <port_id> rxq <qid> setup) at runtime.
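For instance (illustrative values only):
port config 0 rxq 0 ring_size 512
port stop 0
port start 0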
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add a new command to set up a queue; rte_eth_[rx|tx]_queue_setup will
be called correspondingly.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
It's not possible to set up a queue when the port is started
because of a check in the ethdev layer. New capability flags are
added in order to relax this check for devices which support
queue setup at runtime. The functions rte_eth_[rx|tx]_queue_setup
will raise an error only if the port is started and runtime setup
of the queue is not supported.
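A sketch of an application-side check before reconfiguring a queue on a
running port (flag and field names assumed from the ethdev headers; the
queue-setup arguments are placeholders):
#include <rte_ethdev.h>
struct rte_eth_dev_info dev_info;
rte_eth_dev_info_get(port_id, &dev_info);
if (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP)
    rte_eth_rx_queue_setup(port_id, queue_id, nb_desc, socket_id, NULL, mp);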
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
If the netvsc driver starts in blacklist mode, it does not
automatically probe IP associated netvsc devices. Therefore, the only
way to probe them is to specify them by the EAL command line, using the
"force" parameter to skip the IP check in the driver.
From now on, the user does not need to add the "force" parameter if they
specify an IP associated netvsc device on the EAL command line, and the
responsibility for the IP check is now in the user's hands.
However, in the absence of any specification, the driver still skips IP
associated netvsc devices.
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
This patch adds a "default" parameter to the "port config all rss" command.
"default" means all supported hash types reported by the device info.
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Update link status related feature documentation items and make minor
updates in some link status related functions.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Users cannot override the default RSS settings when entering an RSS action;
only a list of queues can be provided.
This patch enables them to set an RSS hash key and types for a flow rule.
Fixes: 05d34c6e9d ("app/testpmd: add queue actions to flow command")
Cc: stable@dpdk.org
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Recent NIC models support overlay offload. The overlay offload
feature enables the following on the NIC.
- Rx/Tx checksum offloads for both inner and outer packets.
- Rx inner packet type classification.
- TSO.
- Inner RSS.
TX descriptors do not require any changes, except the header length
for TSO. The NIC parses outer/inner packets and performs offloads on
them as necessary. The header length for tunneled TSO includes both
inner and outer headers.
The NIC actually parses and performs the above for NVGRE as well. DPDK
currently has no offload flags for NVGRE, and the hardware has no
controls to individually enable tunnel types either. So do nothing for
now.
The driver enables overlay offload by default. Add a devargs
'disable-overlay=<0|1>' to allow the app to disable it.
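An illustrative whitelist devargs to turn the feature off (PCI address
hypothetical) is:
-w 0000:13:00.0,disable-overlay=1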
Also update the enic guide doc.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
According to
commit 315ee8374e ("doc: reduce initial offload API rework scope
to drivers"),
all PMDs should have moved to the new offloads API. Therefore it is safe
to remove the new->old conversion helpers.
The old->new helpers will remain to support applications which still use
the old API.
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
If we want a virtio device to work in vDPA (vhost data path acceleration)
mode, we could add a "vdpa=1" devarg for this device to specify the mode.
This patch lets the virtio PMD skip device probe when detecting this parameter.
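For example (PCI address hypothetical):
-w 0000:05:00.0,vdpa=1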
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
If the mempool manager supports object blocks (a physically and virtually
contiguous set of objects), it is sufficient to get the first
object only, and the function makes it possible to avoid filling in
information about each block member.
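A rough sketch of using the block dequeue together with the get info API
(function and field names as exposed by the mempool library; counts
hypothetical):
#include <rte_mempool.h>
struct rte_mempool_info info;
void *first_obj[8];
/* retrieve the number of objects per contiguous block */
rte_mempool_ops_get_info(mp, &info);
/* dequeue 8 contiguous blocks; only the first object of each is returned */
if (rte_mempool_get_contig_blocks(mp, first_obj, 8) == 0) {
    /* info.contig_block_size objects follow each first_obj[i] contiguously */
}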
Signed-off-by: Artem V. Andreev <artem.andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The manager provides a way to allocate a physically and virtually
contiguous set of objects.
Signed-off-by: Artem V. Andreev <artem.andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
rte_lcore_has_role() returns 0 if the role of the lcore matches the
requested role. The return value of the API is confusing, and this is a
known problem with a deprecation notice announcing the change to more
intuitive semantics:
Commit 064518f68d ("doc: announce EAL API change to lcore role function")
Implement the changes announced in the deprecation notice, and remove it.
Also, fix usages of this API to reflect the change. The control thread
patches expected the new behavior and were broken before; now they are
fixed as well.
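After the change the function behaves as a boolean predicate; a minimal
sketch of the new semantics:
#include <stdio.h>
#include <rte_lcore.h>
unsigned int lcore_id;
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
    /* now returns non-zero (true) when the lcore has the requested role */
    if (rte_lcore_has_role(lcore_id, ROLE_SERVICE))
        printf("lcore %u runs services\n", lcore_id);
}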
Fixes: d651ee4919 ("eal: set affinity for control threads")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
A .svg extension was added instead of .*, which caused
the PDF docs to not build. This change fixes that.
Fixes: a5e1231f09 ("net/szedata2: do not affect Ethernet interfaces")
Signed-off-by: Marko Kovacevic <marko.kovacevic@intel.com>
This commit removes the experimental tags from the
service cores functions, they now become part of the
main DPDK API/ABI.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Regular expressions are not the best way to match a hierarchical
pattern like dynamic log levels, and the separator for dynamic
log levels is a period, which is the regex wildcard character.
A better solution is to use filename matching 'globbing' so
that log levels match like file paths. For compatibility,
use a colon to separate pattern-match-style arguments. For
example:
--log-level 'pmd.net.virtio.*:debug'
This also makes the documentation match what really happens
internally.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
The previous symbols were deprecated for two releases.
They are now marked as such and cannot be used anymore.
They are replaced by ones respecting the new namespace that are marked
experimental.
As a result, eth_dev attach and detach are slightly reworked to follow
the changes.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>