As per the deprecation notice, in view of enabling a unified driver
for octeontx2 (cn9k)/octeontx3 (cn10k), remove the drivers/octeontx2
drivers and replace them with drivers/cnxk/, which
supports both octeontx2 (cn9k) and octeontx3 (cn10k) SoCs.
This patch does the following:
- Replace drivers/common/octeontx2/ with drivers/common/cnxk/
- Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
- Replace drivers/net/octeontx2/ with drivers/net/cnxk/
- Replace drivers/event/octeontx2/ with drivers/event/cnxk/
- Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
- Rename config/arm/arm64_octeontx2_linux_gcc to
config/arm/arm64_cn9k_linux_gcc
- Update the documentation and MAINTAINERS to reflect the same.
- Change references to OCTEONTX2 to OCTEON 9. Old release notes and
kernel-related documentation are not updated as part of this change.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Remove the rawdev-based octeontx2-dma driver, as the common/octeontx2
code it depends on will soon be going away. A new DMA driver will
come in its place once the rte_dmadev library is in.
Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
The release notes comments mention how to generate the documentation
with the old, removed build system.
Rather than fixing these comments, all old release notes comments
are removed, because they are useful only for the current release.
A few extra blank lines are removed for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Spell-checked and corrected the documentation.
If there are any errors, or I have changed something that wasn't an
error, please reach out to me so I can update the dictionary.
Cc: stable@dpdk.org
Signed-off-by: Henry Nadeau <hnadeau@iol.unh.edu>
Add tested Marvell integrated NIC platforms to v19.08 release note.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Add tested Intel platforms with Intel NICs to v19.08 release note.
Signed-off-by: Wenjie Li <wenjiex.a.li@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
The driver names for rawdevs differed between the make and meson builds
and were non-standard in the make version, in that some included "rawdev"
in the name while others didn't.
Therefore, for global consistency of naming, we can use "rte_rawdev"
rather than "rte_pmd" as the prefix for these libraries. While most other
driver categories use "rte_pmd" as a prefix, there is precedent for this:
the mempool drivers use "rte_mempool" as a prefix.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
When an IOMMU is not available, /sys/kernel/iommu_groups will not be
populated. This has been the case since at least kernel 3.6, when VFIO
support was added. If the directory is empty, EAL should not pick IOVA as
VA as the default IOVA mode.
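The check amounts to looking for at least one entry under that directory;
a hypothetical helper illustrating the idea (not the exact EAL code):

#include <dirent.h>
#include <stdbool.h>
#include <string.h>

static bool
iommu_groups_present(void)
{
        DIR *dir = opendir("/sys/kernel/iommu_groups");
        struct dirent *d;
        bool found = false;

        if (dir == NULL)
                return false;
        while ((d = readdir(dir)) != NULL) {
                if (strcmp(d->d_name, ".") == 0 ||
                    strcmp(d->d_name, "..") == 0)
                        continue;
                found = true;
                break;
        }
        closedir(dir);
        return found;
}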
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Update release notes for recently supported features in IPsec library and
IPsec Security Gateway application.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Add a command-line argument to set the LRO session timeout.
Add an LRO settings struct in the PMD configuration struct.
Add support for the LRO offload in port configuration (a rough usage
sketch follows below).
Add macros and a function to check if LRO is supported and enabled.
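A rough sketch of how an application might request the LRO offload when
configuring a port (port_id, nb_rxq and nb_txq are assumed to be defined
elsewhere; this is illustrative only):

struct rte_eth_dev_info dev_info;
struct rte_eth_conf port_conf = { 0 };

rte_eth_dev_info_get(port_id, &dev_info);
/* Request LRO only if the PMD reports the capability. */
if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
        port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);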
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This patch introduces new mlx5 PMD devarg options:
- txq_inline_min - specifies the minimal amount of data to be inlined into
the WQE during Tx operations. NICs may require this minimal data amount
to operate correctly. The exact value may depend on the NIC operation
mode, requested offloads, etc.
- txq_inline_max - specifies the maximal packet length to be completely
inlined into the WQE Ethernet Segment for the ordinary SEND method. If a
packet is larger than the specified value, the packet data won't be copied
by the driver at all; the data buffer is addressed with a pointer. If the
packet length is less than or equal to it, all packet data will be copied
into the WQE.
- txq_inline_mpw - specifies the maximal packet length to be completely
inlined into the WQE for the Enhanced MPW method.
Driver documentation is also updated.
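As an illustration, these options could be passed on the EAL command line
as device arguments, e.g. (PCI address and values are examples only):

-w 0000:03:00.0,txq_inline_min=18,txq_inline_max=204,txq_inline_mpw=172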
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Previously in the PCAP PMD, queues had to be defined as RxQ and TxQ
pairs, even if only Rx or only Tx was needed:
"--vdev net_pcap0,tx_pcap=tx.pcap,rx_pcap=rx.pcap"
The following commit enabled providing only an Rx queue; if a Tx queue is
not provided, the PMD drops the Tx packets automatically:
Commit a3f5252e5c ("net/pcap: enable infinitely Rx a pcap file")
"--vdev net_pcap0,rx_pcap=rx.pcap"
This commit enables the same for the Rx queue: the user no longer has to
provide an Rx queue (rx_iface or rx_pcap). In this case a dummy Rx
burst function is used which doesn't return any packets at all:
"--vdev net_pcap0,tx_pcap=tx.pcap"
This makes the use case of only saving packets to a pcap file easy.
When both Rx and Tx queues are missing, the PMD will return an error.
(A single interface is still supported: "--vdev net_pcap0,iface=eth0")
Signed-off-by: Aideen McLoughlin <aideen.mcloughlin@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
All the DV counters are cached in the PMD memory and are contained in
pools, which are contained in containers according to the counter
allocation type - batch or single.
Currently, the flow counter query is done synchronously at pool
resolution, meaning that on a user request a FW command is triggered to
read all the counters in the pool.
A new devX feature to asynchronously read a batch of flow counters
allows accelerating the user query operation.
Using the DPDK host thread, the PMD periodically triggers an asynchronous
query at pool resolution for all the counter pools, and an interrupt is
triggered by the FW when the values are updated.
In the interrupt handler the pool counter raw values are replaced
using a double-buffer algorithm (very fast).
On a user query, the PMD just returns the last query values from the
PMD cache - no system calls or FW commands are triggered from the user
control thread on the query operation!
More synchronization is added with the host thread:
Container resize uses a double-buffer algorithm.
Pool growth within a container uses an atomic operation.
Pool query buffer replacement uses a spinlock.
Pool minimum devX counter ID uses an atomic operation.
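From the application's point of view the query path is unchanged. A
minimal sketch of reading a counter of a flow created with a COUNT
action (port_id and flow assumed to exist):

struct rte_flow_query_count count = { .reset = 0 };
struct rte_flow_action action = { .type = RTE_FLOW_ACTION_TYPE_COUNT };
struct rte_flow_error error;

/* Served from the PMD counter cache - no FW command on this path. */
if (rte_flow_query(port_id, flow, &action, &count, &error) == 0)
        printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n",
               count.hits, count.bytes);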
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Enabled IP-in-IP tunnel type support on the DV/DR flow engine.
This includes the following combinations:
- IPv4 over IPv4
- IPv4 over IPv6
- IPv6 over IPv4
- IPv6 over IPv6
The MLX5 NIC supports IP-in-IP tunnels via the FLEX Parser, so
we need to make sure FW is using FLEX Parser profile 0:
mlxconfig -d <mst device> -y set FLEX_PARSER_PROFILE_ENABLE=0
The example testpmd commands would be:
- Match on IPv4 over IPv4 packets and do inner RSS:
testpmd> flow create 0 ingress pattern eth / ipv4 proto is 0x04 /
ipv4 / udp / end actions rss level 2 queues 0 1 2 3 end / end
- Match on IPv6 over IPv4 packets and do inner RSS:
testpmd> flow create 0 ingress pattern eth / ipv4 proto is 0x29 /
ipv6 / udp / end actions rss level 2 queues 0 1 2 3 end / end
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Support matching on the present bits (C, K, S)
as well as the optional key field.
If rte_flow_item_gre_key is specified in the pattern,
the K-present match is set automatically.
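A testpmd rule matching on a specific GRE key might look like the
following (the key value is arbitrary):

testpmd> flow create 0 ingress pattern eth / ipv4 / gre /
         gre_key value is 0x12345678 / end
         actions queue index 1 / end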
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Remove unused macros from the library, and update release
notes.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Signed-off-by: Abraham Tovar <abrahamx.tovar@intel.com>
Signed-off-by: Lukasz Krakowiak <lukaszx.krakowiak@intel.com>
This patch changes the key pointer data types in the cipher, auth,
and AEAD xforms from "uint8_t *" to "const uint8_t *" for
more intuitive and safe session creation.
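For instance, a key kept in a const array can now be referenced directly
when filling a cipher transform; the algorithm, key size and IV_OFFSET
below are purely illustrative:

static const uint8_t cipher_key[16] = { 0 }; /* real key bytes here */

struct rte_crypto_sym_xform xform = {
        .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
        .cipher = {
                .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
                .algo = RTE_CRYPTO_CIPHER_AES_CBC,
                .key = { .data = cipher_key, .length = sizeof(cipher_key) },
                .iv = { .offset = IV_OFFSET, .length = 16 },
        },
};
/* No cast away from const is needed when creating the session. */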
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Liron Himi <lironh@marvell.com>
This patch adds a benchmark part to the
compression-perf tool as a separate test case, which can be
executed multi-threaded.
It also updates the release notes.
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Artur Trybula <arturx.trybula@intel.com>
Acked-by: Shally Verma <shallyv@marvell.com>
The pdump tool works as a secondary process. When the primary process
exits and the residual secondary process keeps running, the primary
process can't start up again: since the old fbarray files are still
attached by the secondary pdump process, the 'new' primary process can't
get these files locked.
This patch sets up an alarm which runs periodically every 0.5s in pdump
to monitor the primary process. Once the primary exits,
so will pdump.
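A simplified sketch of the mechanism (quit_signal is an application
global; the actual patch may differ in details):

#define MONITOR_INTERVAL (500 * 1000) /* 500 ms, in microseconds */

static void
monitor_primary(void *arg __rte_unused)
{
        if (!rte_eal_primary_proc_alive(NULL)) {
                /* Primary is gone - ask pdump to quit. */
                quit_signal = 1;
                return;
        }
        /* Re-arm for the next check. */
        rte_eal_alarm_set(MONITOR_INTERVAL, monitor_primary, NULL);
}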
Signed-off-by: Suanming Mou <mousuanming@huawei.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
On the DV/DR flow engine, MLX5 can match on the ICMP/ICMPv6 code and type
fields via the FLEX Parser, which can be enabled by configuring FW to use
FLEX Parser profile 2:
mlxconfig -d <mst device> -y set FLEX_PARSER_PROFILE_ENABLE=2
The testpmd commands could be:
testpmd> flow create 0 ingress pattern eth / ipv4 /
icmp type is 8 code is 0 / end
actions rss queues 0 1 end / end
testpmd> flow create 0 ingress pattern eth / ipv6 /
icmp6 type is 128 code is 0 / end
actions rss queues 0 1 end / end
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently PKT_TX_IP_CKSUM is implicitly set in mbuf->ol_flags
during fragmentation and reassembly operations.
Because of this, the application is forced to use checksum offload
whether it is supported by the platform or not.
Also, the documentation does not provide any expected value of ol_flags
in the returned mbuf (reassembled or fragmented), so the application never
knows which offloads are enabled, and transmission may fail
on platforms which do not support checksum offload.
Furthermore, IPv6 does not contain a checksum field in its header, so
setting PKT_TX_IP_CKSUM in mbuf->ol_flags is itself invalid.
Therefore, remove the mentioned flag from the library.
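With the flag gone, an application that wants the offload sets it
explicitly, and only when the port supports it; a rough sketch (m, frags,
mtu, the mempools and tx_offloads are assumed to be set up elsewhere):

int32_t i, n;

n = rte_ipv4_fragment_packet(m, frags, RTE_DIM(frags), mtu,
                             direct_pool, indirect_pool);
if (n > 0 && (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)) {
        for (i = 0; i < n; i++)
                frags[i]->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
}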
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
If there are multiple threads contending, they all attempt to take the
spinlock at the same time once it is released. This results in a
huge amount of processor bus traffic, which is a huge performance
killer. Thus, if we somehow order the lock takers so that they know who
is next in line for the resource, we can vastly reduce the amount of bus
traffic.
This patch adds an MCS lock library. It provides scalability by spinning
on a CPU/thread-local variable, which avoids expensive cache bouncing.
It provides fairness by maintaining a list of acquirers and passing the
lock to each CPU/thread in the order they acquired the lock.
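Basic usage looks roughly like this (the tail pointer is the shared lock;
each locking thread supplies its own queue node):

#include <rte_mcslock.h>

static rte_mcslock_t *ml_tail; /* shared lock, initially NULL */

static void
critical_section(void)
{
        rte_mcslock_t node; /* per-thread queue node */

        rte_mcslock_lock(&ml_tail, &node);
        /* ... protected work ... */
        rte_mcslock_unlock(&ml_tail, &node);
}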
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
This patch updates the ipsec-secgw application to support
header reconstruction. In addition, a series of tests has
been added to prove the implementation's correctness.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The implementation is still based on Intel SDK libraries
optimized for the AVX512 instruction set and 5GNR.
It can also be built for AVX2 for 4G capability, or
without the SDK dependency for maintenance.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Extend the BBDEV operations to support 5G
on top of the existing 4G operations.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
This patch adds a condition to be met when using
out-of-place auth-cipher operations. It checks
whether the digest location overlaps with the data to
be encrypted or decrypted and, if so, treats it as a
digest-encrypted case.
The patch adds a check for whether the digest is being
partially encrypted or decrypted, and extends PMD
buffers accordingly.
It also adds a feature flag for QuickAssist
Technology to emphasize its support for digest-appended
auth-cipher operations.
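An application can check for this capability at runtime before relying on
the digest-encrypted behaviour; a small sketch (cdev_id assumed valid):

struct rte_cryptodev_info dev_info;

rte_cryptodev_info_get(cdev_id, &dev_info);
if (dev_info.feature_flags & RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED) {
        /* Overlapping digest in out-of-place auth-cipher is handled. */
}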
Signed-off-by: Damian Nowak <damianx.nowak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Add a new field, ff_disable, to allow applications to control the
features enabled on the crypto device. This allows for efficient
usage of HW/SW offloads.
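For example, a symmetric-crypto-only application could disable the
asymmetric feature set when configuring the device (the other values
are illustrative):

struct rte_cryptodev_config conf = {
        .socket_id = rte_socket_id(),
        .nb_queue_pairs = 1,
        .ff_disable = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO,
};

rte_cryptodev_configure(cdev_id, &conf);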
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch implements additional packet header modification
actions.
Add actions:
- INC_TCP_SEQ - Increase sequence number in the outermost TCP header.
- DEC_TCP_SEQ - Decrease sequence number in the outermost TCP header.
- INC_TCP_ACK - Increase acknowledgment number in the outermost TCP
header.
- DEC_TCP_ACK - Decrease acknowledgment number in the outermost TCP
header.
Original work by Xiaoyu Min.
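A testpmd rule using one of the new actions could look like the following
(the value and queue are arbitrary, and the exact supported action
combinations depend on the device):

testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end
         actions inc_tcp_seq value 1 / queue index 0 / end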
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This feature was added in the following commit:
commit a3f5252e5c ("net/pcap: enable infinitely Rx a pcap file")
Signed-off-by: Cian Ferriter <cian.ferriter@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Now that everything that has ever accessed the shared memory
config is doing so through the public APIs, we can make it
internal. Since we're removing quite a few headers from
rte_eal_memconfig.h, we need to add them back in places
where this header was used.
This bumps the ABI, so also change all build files and
update the documentation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
Introduce rawdev driver support for NTB (Non-transparent Bridge), which
can help connect two separate hosts with each other.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Add stubs for ioat rawdev driver support in DPDK, specifically:
* makefile and meson build hooks
* initial public header file
* rawdev main C file, with probe and release functions
* release note update announcing the driver
* initial documentation for the new section in the rawdev doc
* unit test stubs for device unit tests
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Jiayu Hu <jiayu.hu@intel.com>
Tested-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch adds ice_flow_create, ice_flow_destroy,
ice_flow_flush and ice_flow_validate support;
these are used to handle all the generic filters.
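These hook into the generic rte_flow ops, so from the application side a
filter on an ice port is still driven through the usual calls; a minimal
sketch (attr, pattern and actions assumed to be filled in already):

struct rte_flow_error err;
struct rte_flow *flow = NULL;

if (rte_flow_validate(port_id, &attr, pattern, actions, &err) == 0)
        flow = rte_flow_create(port_id, &attr, pattern, actions, &err);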
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Replace the mbuf pointer array in the event eth Rx adapter
callback with an event array. Using an event array allows
the application to change attributes of the events enqueued
by the SW adapter.
The callback can drop packets and populate a callback
argument with the number of dropped packets. Add an Rx adapter
stats field to keep track of the total number of dropped packets.
This commit removes the experimental tags from
the callback and stats APIs; the experimental tag from eventdev
is also removed, and eventdev functions become part of the
main DPDK API/ABI.
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The function rte_malloc_set_limit was defined but never implemented.
Mark it as deprecated for now, and remove it in the next release.
There is no point in keeping dead code.
"You Aren't Going to Need It"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
This patch enables the need_wakeup flag for the Tx and fill rings. When
this flag is set by the driver, it means that the userspace application
has to explicitly wake up the kernel Rx or kernel Tx processing by issuing
a syscall. poll() can wake up both, while sendto() or its alternatives
will wake up Tx processing only.
This feature provides efficient support for the case where the application
and driver execute on the same core.
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add build and doc files along with hinic_pmd_ethdev.c,
which just includes PMD registration and log initialization,
for compilation.
Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add a function rte_rand_max() which generates a uniformly distributed
pseudo-random number less than a user-specified upper bound.
The commonly used pattern rte_rand() % SOME_VALUE creates biased
results (i.e. some values in the range occur more frequently
than others) if SOME_VALUE is not a power of 2.
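A small sketch of the unbiased replacement for that pattern (RING_SIZE
and idx are illustrative):

/* Biased when RING_SIZE is not a power of two: */
idx = rte_rand() % RING_SIZE;

/* Uniform over [0, RING_SIZE): */
idx = rte_rand_max(RING_SIZE);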
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This commit replaces rte_rand()'s use of lrand48() with a DPDK-native
combined Linear Feedback Shift Register (LFSR) (also known as
Tausworthe) pseudo-random number generator.
This generator is faster and produces better-quality random numbers
than the linear congruential generator (LCG) of libc's lrand48(). The
implementation, as opposed to lrand48(), is multi-thread safe with
regard to concurrent rte_rand() calls from different lcore threads.
An LCG is still used, but only to seed the five per-lcore LFSR
sequences.
In addition, this patch also addresses the issue of the legacy
implementation only producing 62 bits of pseudo-randomness, while the
API requires all 64 bits to be random.
This pseudo-random number generator is not cryptographically secure -
just like lrand48().
Bugzilla ID: 114
Bugzilla ID: 276
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Add new telemetry mode support for l3fwd-power.
This is a standalone mode; in this mode l3fwd-power
does simple l3 forwarding along with calculating
empty polls, full polls, and busy percentage for
each forwarding core. The aggregate of these
values across all cores is reported as application-level
telemetry to the metrics library every 500ms from the
master core.
The busy percentage is calculated by recording the poll_count;
when the count reaches a defined value, the total
cycles it took are measured and compared with minimum and maximum
reference cycles, and the busy rate is set accordingly to either
0%, 50% or 100%.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Telemetry has support for fetching port-based stats
from the metrics library.
The metrics library also has global stats which are
not fetched by telemetry, so extend telemetry to
fetch the global metrics.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
When Rx interrupts are disabled, we simply disable rearming when
the interrupt fires the next time. So, the next packet will
trigger the interrupt (if it has not happened yet after the previous Rx
burst processing).
Signed-off-by: Georgiy Levashov <georgiy.levashov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>