This updates mlx4 documentation and DPDK release notes
to reflect the PMD support for rdma-core from linux-rdma.
- The PMD no longer requires Mellanox OFED and now depends only on the
  public rdma-core package (v15 and above).
(see https://github.com/linux-rdma/rdma-core/releases)
This PMD should run under Linux v4.14 and above.
- In case any of the above requirements can't be satisfied,
Mellanox OFED v4.2 and above also provide an updated rdma-core
as well as back-ported kernel modules for most Linux distributions
and previous Linux versions.
(see http://www.mellanox.com/page/products_dyn?product_family=26).
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Since the release notes have a new section for removed items,
the dom0 removal notice can be moved there.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Move the vdev bus from lib/librte_eal to drivers/bus.
As the crypto vdev helper functions refer to data structures
in rte_vdev.h, those helper functions are moved into drivers/bus
as well.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Remove rte_cryptodev_create_vdev() as it duplicates existing functionality.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
QEMU versions from v2.7.0 to v2.9.0 have a broken reply-ack protocol
feature implementation when used with multiqueue. The reply-ack
protocol feature is optional, except when the IOMMU feature is used.
This patch introduces a new RTE_VHOST_USER_IOMMU_SUPPORT flag to
enable VIRTIO_F_IOMMU_PLATFORM virtio feature.
By default, the IOMMU support is now disabled.
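As a hedged illustration, an application opts in at socket registration
time; the socket path below is hypothetical:

    /* Enable VIRTIO_F_IOMMU_PLATFORM negotiation for this vhost-user socket. */
    uint64_t flags = RTE_VHOST_USER_IOMMU_SUPPORT;

    if (rte_vhost_driver_register("/tmp/vhost-user.sock", flags) < 0)
        rte_exit(EXIT_FAILURE, "vhost-user socket registration failed\n");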
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yliu@fridaylinux.org>
Tested-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
The API and ABI of the mempool library have been changed in 17.11.
Fixes: 02604520b2f2 ("mempool: remove unused flags argument")
Fixes: 0cc0f8aaa35d ("mempool: change flags from int to unsigned int")
Fixes: 6eac187bff30 ("mempool: add flags arg in xmem size and usage")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
The wording changes have been done in the API without breaking
the ABI. The deprecated fields and symbols can be removed later
when another ABI change is required.
The deprecation notice can be removed.
The release notes describe the newly available API with IOVA wording.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
The PCI lib defines the types and methods used to handle PCI elements.
The PCI bus implements a bus driver for PCI devices by constructing
rte_bus elements using the PCI lib.
Move the relevant code out of the EAL to its expected place.
Libraries, drivers, unit tests and applications are updated to use the
new rte_bus_pci.h header when necessary.
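For code that previously pulled the PCI bus structures from the EAL, the
change is essentially an include swap (illustrative sketch only):

    /* before: rte_pci_device, rte_pci_driver, ... came from the EAL header */
    #include <rte_pci.h>

    /* after: the bus-level definitions live in the PCI bus driver header,
     * while rte_pci.h keeps only the generic PCI lib types and helpers */
    #include <rte_bus_pci.h>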
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Do not expose the specialized, low-level implementations of PCI parsing.
This leaves only the all-purpose rte_pci_addr_parse, which is simpler to
use.
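A minimal sketch of the remaining parser (the address string is only an
example); it accepts both the short BDF and the full DBDF notations:

    struct rte_pci_addr addr;

    if (rte_pci_addr_parse("0000:08:00.0", &addr) != 0)
        printf("invalid PCI address\n");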
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Some devices may not support, or may fail to apply, VLAN offload
configuration depending on dynamic circumstances, so the
vlan_offload_set_t vector is modified to return an int so that
the caller can determine success or failure.
rte_eth_dev_set_vlan_offload is updated to return the
value provided by the vector when called, and to restore
the original offload configuration on failure.
Existing vlan_offload_set_t vectors are modified to return
an int. The majority of cases return 0, but a few that can
actually fail now return their failure codes.
Finally, a vlan_offload_set_t vector is added to virtio
to facilitate dynamically turning VLAN strip on or off.
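A short sketch of how a caller can now act on the result (port id and
offload mask are illustrative):

    /* Request VLAN stripping on port 0 and check whether the device
     * actually accepted the new configuration. */
    int ret = rte_eth_dev_set_vlan_offload(0, ETH_VLAN_STRIP_OFFLOAD);

    if (ret != 0)
        printf("VLAN strip not applied on port 0: %d\n", ret);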
Signed-off-by: David Harton <dharton@cisco.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
PKT_RX_VLAN_PKT and PKT_RX_QINQ_PKT have been deprecated for a while.
As explained in [1], these flags were kept to let the applications and
PMDs move to the new flag. There is also a need to support Rx vlan
offload without vlan strip (at least for the ixgbe driver).
This patch renames the old flags for this feature, knowing that some
PMDs were using PKT_RX_VLAN_PKT and PKT_RX_QINQ_PKT to indicate that
the vlan tci has been saved in the mbuf structure.
It is likely that some PMDs do not set the proper flags when doing vlan
offload, and it would be worth making a pass on all of them.
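Assuming the renamed flags are PKT_RX_VLAN and PKT_RX_QINQ (m below is a
struct rte_mbuf *), an application reading a saved VLAN TCI would look
roughly like:

    /* PKT_RX_VLAN only means the TCI was saved in the mbuf;
     * PKT_RX_VLAN_STRIPPED still indicates the tag was stripped. */
    if (m->ol_flags & PKT_RX_VLAN)
        printf("vlan tci: %u\n", m->vlan_tci);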
Link: [1] http://dpdk.org/ml/archives/dev/2017-June/067712.html
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This flag is not necessary at the ether layer anymore.
Buses are able to advertise their hotplug support. The ether layer can
rely upon this capability instead of a special flag.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Removes any dependency of librte_cryptodev on the PCI device
infrastructure code and removes the functions which were PCI
device specific.
Updates QAT crypto PMD to remove dependencies on rte_cryptodev_pci.h
and replaces those calls with the new bus independent functions.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Removes any dependency of librte_cryptodev on the virtual device
infrastructure code and removes the functions which were virtual
device specific.
Updates all virtual PMDs to remove dependencies on rte_cryptodev_vdev.h
and replaces those calls with the new bus independent functions.
Due to these changes, the cryptodev ABI version gets bumped.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Tested-by: Tomasz Duszynski <tdu@semihalf.com>
Adds new bus-independent PMD assist functions for drivers to
create and destroy new device instances.
Also includes a function to parse parameters which can be passed
to the driver on device initialisation.
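A rough sketch of a vdev probe path using these helpers; the init
parameter fields and the vdev accessors are recalled from the 17.11
headers and should be treated as assumptions:

    struct rte_cryptodev_pmd_init_params init_params = {
        .private_data_size = 0,       /* size of the PMD-private struct */
        .socket_id = rte_socket_id(),
        .max_nb_queue_pairs = 8,
        .max_nb_sessions = 2048,
    };
    struct rte_cryptodev *dev;

    /* Override the defaults with "key=value" args passed on the vdev. */
    rte_cryptodev_pmd_parse_input_args(&init_params,
                                       rte_vdev_device_args(vdev));

    dev = rte_cryptodev_pmd_create(rte_vdev_device_name(vdev),
                                   &vdev->device, &init_params);
    if (dev == NULL)
        return -ENODEV;

    /* The remove path undoes this with rte_cryptodev_pmd_destroy(dev). */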
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The following APIs are implemented in the
librte_flow_classify library:
rte_flow_classifier_create
rte_flow_classifier_free
rte_flow_classifier_query
rte_flow_classify_table_create
rte_flow_classify_table_entry_add
rte_flow_classify_table_entry_delete
The following librte_table APIs are used:
f_create to create a table.
f_add to add a rule to the table.
f_del to delete a rule from the table.
f_free to free a table.
f_lookup to match packets with the rules.
The library supports counting of IPv4 five-tuple packets only,
i.e. IPv4 UDP, TCP and SCTP packets.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Remove rte_set_log_level(), rte_get_log_level(),
rte_set_log_type(), and rte_get_log_type().
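A hedged example of the replacement calls (global level setters/getters
plus per-type control through rte_log_set_level()):

    uint32_t level;

    rte_log_set_global_level(RTE_LOG_DEBUG);          /* was rte_set_log_level() */
    level = rte_log_get_global_level();               /* was rte_get_log_level() */
    rte_log_set_level(RTE_LOGTYPE_EAL, RTE_LOG_INFO); /* per-type replacement */
    printf("global log level: %u\n", level);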
Also update the librte_eal.so version in the documentation.
The LIBABIVER variable in eal has already been modified in
commit f26ab687a74f ("eal: remove Xen dom0 support").
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The libraries which have their ABI version increased in this release
must be prepended with a + sign to make them appear clearly.
Fixes: f8244c6399d9 ("ethdev: increase port id range")
Fixes: ec51443cc99a ("gso: add Generic Segmentation Offload API framework")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Zhiyong Yang <zhiyong.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for AES-CCM, for 128, 192 and 256-bit keys.
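A hedged sketch of an AEAD transform requesting AES-CCM with a 256-bit
key; key material, IV offset and digest/AAD lengths are illustrative and
must match what the PMD reports in its capabilities:

    uint8_t key_data[32] = { 0 };   /* illustrative key material */

    struct rte_crypto_sym_xform aead_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_AEAD,
        .aead = {
            .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
            .algo = RTE_CRYPTO_AEAD_AES_CCM,
            .key = { .data = key_data, .length = 32 },  /* 16/24/32 bytes */
            .iv = {
                .offset = sizeof(struct rte_crypto_op) +
                          sizeof(struct rte_crypto_sym_op),
                .length = 13,
            },
            .digest_length = 16,
            .aad_length = 8,
        },
    };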
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
The Multi-buffer library now supports DES-CBC
and DES-DOCSISBPI algorithms, so this commit
adds support for them in the PMD.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Radu Nicolau <radu.nicolau@intel.com>
Since the crypto perf application is flexible enough
to cover all the crypto performance tests, the old tests are not needed
anymore, so they are removed to avoid duplication.
Besides, the crypto perf application gives the user more options
to measure performance, for every single supported algorithm,
such as varying the buffer size as desired.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch adds a new benchmarking mode, which is intended for
microbenchmarking individual parts of the cryptodev framework,
specifically crypto ops alloc-build-free, cryptodev PMD enqueue
and cryptodev PMD dequeue.
It works by first benchmarking crypto operation alloc-build-free
loop (no enqueues/dequeues happening), and then benchmarking
enqueue and dequeue separately, by first completely filling up the
TX queue, and then completely draining the RX queue.
Results are shown as cycle counts per alloc/build/free, PMD enqueue
and PMD dequeue.
One new test mode is added: "pmd-cyclecount"
(called with --ptest=pmd-cyclecount)
A new command-line argument is also added:
--pmd-cyclecount-delay-ms: this is a pmd-cyclecount-specific parameter
that controls the delay between enqueue and dequeue. This is
useful for benchmarking hardware acceleration, as hardware may
not be able to keep up with enqueued packets. This parameter
can be increased if there are large amounts of dequeue
retries.
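An illustrative invocation (device options are elided; only the flags
described above are shown, and the binary name is as built from
app/test-crypto-perf):

    $ ./dpdk-test-crypto-perf [EAL and device options] -- \
          --ptest pmd-cyclecount --pmd-cyclecount-delay-ms 50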
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Don't write the CSR tail until we have processed enough TX descriptors.
To avoid crypto operations sitting in the TX ring indefinitely,
the "force write" threshold is used:
- on TX, no tail write coalescing will occur if the number of inflights
  is below the force write threshold
- on RX, check whether the number of crypto ops that have been enqueued
  but not yet submitted to processing is below the force write threshold.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Don't write the CSR head until we have processed enough RX descriptors.
Also delay marking them as free until we are writing CSR head.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Replace atomics in the QAT driver with simple 16-bit integers for
the number of inflight packets.
This adds a new limitation to the QAT driver: each queue pair is
now explicitly single-threaded.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
When registering a crypto driver, a cryptodev driver
structure was being allocated using malloc.
Since this call may fail, it is safer to allocate
this memory statically in each PMD, so driver registration
will never fail.
Coverity issue: 158645
Fixes: 7a364faef185 ("cryptodev: remove crypto device type enumeration")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Reviewed-by: Kirill Rybalchenko <kirill.rybalchenko@intel.com>
The stats_get dev op API doesn't include a return value, so a PMD
cannot return an error in case of failure when getting the stats.
Since PCI devices can be removed and there is a window of time between
the physical removal and the RMV interrupt, the user may get invalid
stats without any indication.
This patch changes the stats_get API return value to be int instead of
void.
All the net PMDs stats_get dev ops are adjusted by this patch.
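Since rte_eth_stats_get() already returns an int, callers can check for
the failure (port id is illustrative):

    struct rte_eth_stats stats;
    uint16_t port_id = 0;

    /* A non-zero return signals the counters could not be read,
     * e.g. because the device has already been removed. */
    if (rte_eth_stats_get(port_id, &stats) != 0)
        printf("port %u: stats unavailable\n", port_id);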
Signed-off-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Some devices do not support resetting eth stats. An application may
need to know this so it does not clear its shadow stats when the
device cannot.
rte_eth_stats_reset is updated to provide a return code to share
whether the device supports reset or not.
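A short sketch of the new check; the shadow counters are a hypothetical
application-side structure:

    struct rte_eth_stats shadow_stats;
    uint16_t port_id = 0;

    /* Only clear the application's shadow counters if the device
     * actually reset its own. */
    if (rte_eth_stats_reset(port_id) == 0)
        memset(&shadow_stats, 0, sizeof(shadow_stats));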
Signed-off-by: David Harton <dharton@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Allow sufficient space for UUID in string form (36+1).
Needed to use UUID with Hyper-V.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add mrvl net PMD driver skeleton providing a base for further
development. Besides the basic functionality, QoS configuration is
introduced as well.
Signed-off-by: Jacek Siuda <jck@semihalf.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
This patch adds GSO support for GRE-tunneled packets. Supported GRE
packets must contain an outer IPv4 header, and inner TCP/IPv4 headers.
They may also contain a single VLAN tag. GRE GSO doesn't check if all
input packets have correct checksums and doesn't update checksums for
output packets. Additionally, it doesn't process IP fragmented packets.
As with VxLAN GSO, GRE GSO uses a two-segment MBUF to organize each
output packet, which requires multi-segment mbuf support in the TX
functions of the NIC driver. Also, if a packet is GSOed, GRE GSO reduces
its MBUF refcnt by 1. As a result, when all of its GSOed segments are
freed, the packet is freed automatically.
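As a hedged illustration (flag and field names are recalled from the mbuf
and GSO headers, and pkt/gso_ctx are assumed to be set up elsewhere),
requesting GRE segmentation looks like the TSO case plus the tunnel bits:

    /* Mark an outer-IPv4/GRE/inner-TCP/IPv4 packet for segmentation. */
    pkt->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IPV4 |
                     PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_GRE;

    /* Enable GRE handling in the GSO context. */
    gso_ctx.gso_types |= DEV_TX_OFFLOAD_GRE_TNL_TSO;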
Signed-off-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch adds a framework that allows GSO on tunneled packets.
Furthermore, it leverages that framework to provide GSO support for
VxLAN-encapsulated packets.
Supported VxLAN packets must have an outer IPv4 header (prepended by an
optional VLAN tag), and contain an inner TCP/IPv4 packet (with an optional
inner VLAN tag).
VxLAN GSO doesn't check if input packets have correct checksums and
doesn't update checksums for output packets. Additionally, it doesn't
process IP fragmented packets.
As with TCP/IPv4 GSO, VxLAN GSO uses a two-segment MBUF to organize each
output packet, which mandates support for multi-segment mbufs in the TX
functions of the NIC driver. Also, if a packet is GSOed, VxLAN GSO
reduces its MBUF refcnt by 1. As a result, when all of its GSO'd segments
are freed, the packet is freed automatically.
Signed-off-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch adds GSO support for TCP/IPv4 packets. Supported packets
may include a single VLAN tag. TCP/IPv4 GSO doesn't check if input
packets have correct checksums, and doesn't update checksums for
output packets (the responsibility for this lies with the application).
Additionally, TCP/IPv4 GSO doesn't process IP fragmented packets.
TCP/IPv4 GSO uses two chained MBUFs, one direct MBUF and one indirect
MBUF, to organize an output packet. Note that we refer to these two
chained MBUFs as a two-segment MBUF. The direct MBUF stores the packet
header, while the indirect mbuf simply points to a location within the
original packet's payload. Consequently, use of the GSO library requires
multi-segment MBUF support in the TX functions of the NIC driver.
If a packet is GSO'd, TCP/IPv4 GSO reduces its MBUF refcnt by 1. As a
result, when all of its GSOed segments are freed, the packet is freed
automatically.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Signed-off-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Lei Yao <lei.a.yao@intel.com>
Generic Segmentation Offload (GSO) is a SW technique to split large
packets into small ones. Akin to TSO, GSO enables applications to
operate on large packets, thus reducing per-packet processing overhead.
To enable more flexibility to applications, DPDK GSO is implemented
as a standalone library. Applications explicitly use the GSO library
to segment packets. Segmenting a packet requires two steps. The first
is to set the proper flags in mbuf->ol_flags, where the flags are the
same as those of TSO. The second is to call the segmentation API,
rte_gso_segment(). This patch introduces the GSO API framework to DPDK.
rte_gso_segment() splits an input packet into small ones in each
invocation. The GSO library refers to these small packets generated
by rte_gso_segment() as GSO segments. Each of the newly-created GSO
segments is organized as a two-segment MBUF, where the first segment is a
standard MBUF, which stores a copy of packet header, and the second is an
indirect MBUF which points to a section of data in the input packet.
rte_gso_segment() reduces the refcnt of the input packet by 1. Therefore,
when all GSO segments are freed, the input packet is freed automatically.
Additionally, since each GSO segment has multiple MBUFs (i.e. 2 MBUFs),
the driver of the interface to which the GSO segments are sent should
support transmitting multi-segment packets.
The GSO framework clears the PKT_TX_TCP_SEG flag for both the input
packet, and all produced GSO segments in the event of success, since
segmentation in hardware is no longer required at that point.
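A hedged end-to-end sketch of the two steps; the rte_gso_ctx field names
and the exact rte_gso_segment() signature are recalled from the 17.11
headers and should be treated as assumptions, and pkt, pool and port_id
are assumed to be set up by the surrounding code:

    /* GSO context, normally initialized once per TX path. */
    struct rte_gso_ctx gso_ctx = {
        .direct_pool = pool,          /* mbufs holding the copied headers */
        .indirect_pool = pool,        /* indirect mbufs pointing at payload */
        .gso_types = DEV_TX_OFFLOAD_TCP_TSO,
        .gso_size = 1500,             /* max length of each output segment */
    };
    struct rte_mbuf *segs[64];
    int nb_segs;

    /* Step 1: flag the packet exactly as for TSO. */
    pkt->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IPV4;
    pkt->l2_len = sizeof(struct ether_hdr);
    pkt->l3_len = sizeof(struct ipv4_hdr);
    pkt->l4_len = sizeof(struct tcp_hdr);

    /* Step 2: segment; on success the GSO segments land in segs[] and
     * the input packet's refcnt has been decremented by one. */
    nb_segs = rte_gso_segment(pkt, &gso_ctx, segs, RTE_DIM(segs));
    if (nb_segs > 0)
        rte_eth_tx_burst(port_id, 0, segs, nb_segs);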
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Signed-off-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This commit bumps the library version to reflect the ABI change
caused by removing the individual rte_event_port_count, queue_count,
and other get functions. These functions are superseded by the
get-attribute style API, which allows fetching values without API/ABI
changes.
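For example, the former rte_event_port_count() lookup now goes through
the attribute API (the attribute id is recalled from rte_eventdev.h):

    uint8_t dev_id = 0;
    uint32_t nb_ports = 0;

    if (rte_event_dev_attr_get(dev_id, RTE_EVENT_DEV_ATTR_PORT_COUNT,
                               &nb_ports) < 0)
        printf("failed to query port count\n");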
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>