When the dpdk-ioat app was renamed to dpdk-dma, this example command
was missed; this patch corrects that issue.
Fixes: bb4141dbe5 ("examples/dma: rename ioat application example")
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Enabling trace points at runtime was not working if no trace point had
been enabled first at rte_eal_init() time. The reason was that
trace.args reflected the arguments passed to the --trace= EAL option.
To fix this:
- the trace subsystem initialisation is updated: trace directory
creation is deferred to when traces are dumped (to avoid creating
directories that may not be used),
- per-lcore memory allocation still relies on rte_trace_is_enabled(), but
this helper now tracks whether any trace point is enabled. The
documentation is updated accordingly,
- cleanup helpers must always be called in rte_eal_cleanup() since some
trace points might have been enabled and disabled during the lifetime of
the DPDK application.
With this fix, we can update the unit test and check that a trace point
callback is invoked when expected.
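A minimal sketch of the now-working flow, using the public trace API
(the pattern string below is purely illustrative):

    #include <rte_trace.h>

    /* After rte_eal_init() without any --trace= argument,
     * rte_trace_is_enabled() reports false because no trace point
     * is enabled yet. */

    /* Enable trace points at runtime by name pattern. */
    rte_trace_pattern("lib.eal.*", true);

    /* rte_trace_is_enabled() now returns true, since it tracks
     * whether any trace point is enabled. */

    /* The trace directory is only created when traces are dumped,
     * e.g. explicitly here or implicitly in rte_eal_cleanup(). */
    rte_trace_save();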
Note:
- the 'trace' global variable might be shadowed by the argument
passed to the functions dealing with trace point handles.
'tp' has been used to refer to trace_point objects;
prefer 't' for referring to handles.
Fixes: 84c4fae462 ("trace: implement operation APIs")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Sunil Kumar Kori <skori@marvell.com>
dpdk-pmdinfo.py does not produce any parseable output. The -r/--raw flag
merely prints multiple independent JSON lines which cannot be fed
directly to any JSON parser. Moreover, the script complexity is rather
high for such a simple task: extracting PMD_INFO_STRING from .rodata ELF
sections. Rewrite it so that it can produce valid JSON.
Remove the PCI database parsing for PCI-ID to Vendor-Device names
conversion. This should be done by external scripts (if really needed).
The script passes flake8, black, isort and pylint checks.
I have tested this with a matrix of python/pyelftools versions:

                 pyelftools
                 0.22  0.23  0.24  0.25  0.26  0.27  0.28  0.29
  Python  3.6     ok    ok    ok    ok    ok    ok    ok    ok
          3.7     ok    ok    ok    ok    ok    ok    ok    ok
          3.8     ok    ok    ok    ok    ok    ok    ok    ok
          3.9     ok    ok    ok    ok    ok   *ok    ok    ok
          3.10   fail  fail  fail  fail   ok    ok    ok    ok

* Also tested on FreeBSD
All failures with python 3.10 are related to the same issue:
File "elftools/construct/lib/container.py", line 5, in <module>
from collections import MutableMapping
ImportError: cannot import name 'MutableMapping' from 'collections'
Python 3.10 support is only available since pyelftools 0.26. The script
will only work with Python 3.6 and later.
Update the minimal system requirements, docs and release notes.
Signed-off-by: Robin Jarry <rjarry@redhat.com>
Tested-by: Ferruh Yigit <ferruh.yigit@amd.com>
Tested-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The reason for not building is updated
to be consistent with other drivers.
libibverbs was not detected through pkg-config;
the dependency() method needs to be used first.
The support in rdma-core and Linux is not released yet,
so the documentation is updated.
Fixes: 517ed6e2d5 ("net/mana: add basic driver with build environment")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
When KNI is being used at runtime, output a warning message about its
deprecated status. This is part of the deprecation process for KNI
agreed by the DPDK technical board.[1]
[1] https://mails.dpdk.org/archives/dev/2022-June/243596.html
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
To ensure all users are aware of KNI's deprecated status at build time,
this library is marked as a deprecated library: the library is disabled
by default. It can be re-enabled by setting disabled_libs to the empty
string (or to another string not including 'kni').
The dependent NIC driver, drivers/net/kni, is disabled accordingly as it
depends on the library.
NOTE: This is part of the deprecation process for KNI agreed by the DPDK
technical board.[1]
[1] https://mails.dpdk.org/archives/dev/2022-June/243596.html
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This patchset bumps the minimum meson version from 0.49.2 to 0.53.2.
Ideally, the minimum version should be 0.53 without a point release, but
some DPDK builds (mingw) are broken with 0.53.0 due to issue[1], fixed
by commit [2] in 0.53.1. Therefore we use the latest point release from
the 0.53 branch, i.e. 0.53.2.
Some new features of interest which can now be used in DPDK with this
new minimum meson version:
* can do header-file checks directly inside find_library calls, rather
than needing a separate check.[v0.50].
* can pass multiple cross-files at the same time when cross-compiling
[v0.51].
* "alias_target" function, to allow use to give better/shorter names
for particular build objects [v0.52].
* auto-generation of clang-format [v0.50] and clang-tidy[v0.52] targets
when those tools are present and config dotfiles are present.
Similarly ctags and cscope are added as targets when those tools are
present [v0.53]
* meson module for filesystem operations, so meson can now check for the
presence of particular files or directories [v0.53].
* "summary" function to provide a configuration summary at the end of
the meson run [v0.53].
Plus many other features. See [3] for full details of each version.
[1] https://github.com/mesonbuild/meson/issues/6442
[2] https://github.com/mesonbuild/meson/pull/6457/commits/8e7a7c36b579
[3] https://mesonbuild.com/Release-notes.html
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Add an option for setting the uncore frequency min/max/index through the
uncore API.
This will be set for each package and die on the SKU.
On exit, the uncore min and max frequencies will be reverted
to their previous values.
Signed-off-by: Tadhg Kearney <tadhg.kearney@intel.com>
Reviewed-by: David Hunt <david.hunt@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
Add API to allow uncore frequency adjustment.
Uncore is a term used by Intel to describe the functions
of a microprocessor that are closely connected
to the core to achieve high performance.
This is done through manipulating related uncore frequency control
sysfs entries to adjust the minimum and maximum uncore frequency values
and works on Linux for Intel hardware.
Signed-off-by: Tadhg Kearney <tadhg.kearney@intel.com>
Reviewed-by: David Hunt <david.hunt@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
The dumpcap application supports an interface parameter via the
`-i` option; however, the current documentation uses a `-I` flag.
Fixes: cbb44143be ("app/dumpcap: add new packet capture application")
Cc: stable@dpdk.org
Signed-off-by: Ben Magistro <koncept1@gmail.com>
Sketching algorithms provide high-fidelity approximate measurements and
appear to be a promising alternative to traditional approaches such as
packet sampling.
NitroSketch [1] is a software sketching framework that optimizes
performance, provides accuracy guarantees, and supports a variety of
sketches.
This commit adds a new data structure called sketch into the
membership library. This new data structure is an efficient
way to profile the traffic for heavy hitters. A min-heap
structure is also used to maintain the top-k flow keys.
[1] Zaoxing Liu, Ran Ben-Basat, Gil Einziger, Yaron Kassner, Vladimir
Braverman, Roy Friedman, Vyas Sekar, "NitroSketch: Robust and General
Sketch-based Monitoring in Software Switches", in ACM SIGCOMM 2019.
https://dl.acm.org/doi/pdf/10.1145/3341302.3342076
Signed-off-by: Alan Liu <zaoxingliu@gmail.com>
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Tested-by: Yu Jiang <yux.jiang@intel.com>
Add support for protocol based buffer split in normal Rx
data paths. When the Rx queue is configured with a specific protocol type,
received packets will be directly split into protocol header and
payload parts, and the two parts will be put into different mempools.
Currently, protocol based buffer split is not supported in vectorized
paths.
A new API ice_buffer_split_supported_hdr_ptypes_get() has been
introduced; it returns the header protocols supported by the ice PMD
to the app for splitting.
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Add command line parameter:
--rxhdrs=eth[,ipv4]
Set the protocol_hdr of segments to scatter packets on receiving if
the split feature is engaged and the queues are configured with the
BUFFER_SPLIT flag.
Add interactive mode command:
testpmd>set rxhdrs eth,ipv4,ipv4-udp
(protocol sequence should be valid)
The protocol split feature is off by default. To enable protocol split,
you need:
1. Start testpmd with multiple mempools. E.g. --mbuf-size=2048,2048
2. Configure Rx queue with rx_offload buffer split on.
3. Set the protocol type of buffer split. E.g. set rxhdrs eth,eth-ipv4
(default protocols of testpmd : eth|ipv4|ipv6|ipv4-tcp|ipv6-tcp|
ipv4-udp|ipv6-udp|ipv4-sctp|ipv6-sctp|grenat|inner-eth|
inner-ipv4|inner-ipv6|inner-ipv4-tcp|inner-ipv6-tcp|
inner-ipv4-udp|inner-ipv6-udp|inner-ipv4-sctp|inner-ipv6-sctp)
The above protocols can be configured in testpmd, but the configuration
can only be applied when it is supported by the specific PMD.
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Currently, Rx buffer split supports length based split. With Rx queue
offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and Rx packet segment
configured, PMD will be able to split the received packets into
multiple segments.
However, length based buffer split is not suitable for NICs that do split
based on protocol headers. Given an arbitrarily variable length in an Rx
packet segment, it is almost impossible to pass a fixed protocol header to
the driver. Besides, with tunneling the composition of a packet varies,
which makes the situation even worse.
This patch extends current buffer split to support protocol header based
buffer split. A new proto_hdr field is introduced in the reserved field
of rte_eth_rxseg_split structure to specify protocol header. The proto_hdr
field defines the split position of packet, splitting will always happen
after the protocol header defined in the Rx packet segment. When Rx queue
offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is enabled and corresponding
protocol header is configured, driver will split the ingress packets into
multiple segments.
Examples of proto_hdr field definitions:
To split after ETH-IPV4-UDP, it should be defined as
proto_hdr = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
RTE_PTYPE_L4_UDP
For inner ETH-IPV4-UDP, it should be defined as
proto_hdr = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP
If the protocol header is repeated with the previously defined one,
the repeated part should be omitted. For example, to split after ETH, ETH-IPV4
and ETH-IPV4-UDP, they should be defined as
proto_hdr0 = RTE_PTYPE_L2_ETHER
proto_hdr1 = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN
proto_hdr2 = RTE_PTYPE_L4_UDP
If protocol header split can be supported by a PMD, the
rte_eth_buffer_split_get_supported_hdr_ptypes function can
be used to obtain a list of these protocol headers.
For example, let's suppose we configured the Rx queue with the
following segments:
seg0 - pool0, proto_hdr0=RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
off0=2B
seg1 - pool1, proto_hdr1=RTE_PTYPE_L4_UDP, off1=128B
seg2 - pool2, proto_hdr2=0, off2=0B
A packet consisting of ETH_IPV4_UDP_PAYLOAD will be split as
follows:
seg0 - ipv4 header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
seg1 - udp header @ 128 in mbuf from pool1
seg2 - payload @ 0 in mbuf from pool2
Now buffer split can be configured in two modes. Users can choose length
or protocol header to configure buffer split according to the NIC's
capability. For length based buffer split, the mp, length and offset fields
in the Rx packet segment should be configured, while the proto_hdr field
must be 0. For protocol header based buffer split, the mp, offset and
proto_hdr fields in the Rx packet segment should be configured, while the
length field must be 0.
Note: When protocol header split is enabled, the NIC may receive packets
which do not match all the protocol headers within the Rx segments.
In this case, the NIC has two possible split behaviors according to the
matching result: exact match and longest match.
The split result of the NIC must belong to one of them.
Exact match means the NIC only does the split when the packet exactly
matches all the protocol headers in the segments. Otherwise, the whole
packet will be put into the last valid mempool. Longest match means the
NIC will do the split until a packet mismatches a protocol header in the
segments. The rest will be put into the last valid pool.
Pseudo-code for exact match:
FOR each seg in segs except last one
IF proto_hdr is not matched THEN
BREAK
END IF
END FOR
IF loop was broken THEN
put whole pkt in last seg
ELSE
put protocol header in each seg
put everything else in last seg
END IF
Pseudo-code for longest match:
FOR each seg in segs except last one
IF proto_hdr is matched THEN
put protocol header in seg
ELSE
BREAK
END IF
END FOR
put everything else in last seg
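For reference, a minimal C sketch of the segment configuration from the
example above (a sketch only: pool0/pool1/pool2 and port_id are
placeholders, and error handling is omitted):

    union rte_eth_rxseg rx_useg[3];
    struct rte_eth_rxconf rxconf = dev_info.default_rxconf;

    memset(rx_useg, 0, sizeof(rx_useg));
    rx_useg[0].split = (struct rte_eth_rxseg_split){
            .mp = pool0, .offset = 2,
            .proto_hdr = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
    };
    rx_useg[1].split = (struct rte_eth_rxseg_split){
            .mp = pool1, .offset = 128,
            .proto_hdr = RTE_PTYPE_L4_UDP,
    };
    rx_useg[2].split = (struct rte_eth_rxseg_split){
            .mp = pool2, /* length == 0 and proto_hdr == 0: payload */
    };

    rxconf.offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
    rxconf.rx_seg = rx_useg;
    rxconf.rx_nseg = RTE_DIM(rx_useg);
    /* the mempool argument is NULL because per-segment pools are used */
    rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(), &rxconf, NULL);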
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Add a new ethdev API to retrieve supported protocol headers
of a PMD, which helps to configure protocol header based buffer split.
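A hedged usage sketch (the signature is assumed to fill a caller-provided
array and return the number of supported ptypes):

    uint32_t ptypes[32];
    int i, num;

    num = rte_eth_buffer_split_get_supported_hdr_ptypes(port_id, ptypes,
                                                         RTE_DIM(ptypes));
    for (i = 0; i < num; i++)
            printf("supported split ptype: 0x%x\n", ptypes[i]);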
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
These actions have been deprecated since DPDK 21.11 as
ambiguous and hard-to-use, but their removal might not
be popular because net drivers i40e, ixgbe and txgbe
employ these actions in complicated "PF/VF + QUEUE"
tunnel rule support. Maintainers of these drivers
should voice their position on the said problem.
For now, document the status in deprecation notes.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
In commit [1], it was announced that the DPAA2 cmdif
raw driver would be removed as there was no known active user at that time.
But now, one of the DPAA2 users has objected to this driver
removal, so this patch removes the deprecation notice
for the driver.
[1] commit 10f0e51554 ("doc: announce removal of DPAA2 cmdif raw driver")
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
The IPsec SA expiry events were added as per the patch below,
but the deprecation notice was not removed. This patch removes it.
Fixes: d1ce79d14b ("ethdev: add IPsec SA expiry event subtypes")
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Added function to parse algorithm for TDES CBC and ECB tests in JSON.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Acked-by: Brian Dooley <brian.dooley@intel.com>
Added parameters in rte_bbdev_queue_data to expose information
about any queue-related failure and warning
which cannot be conveyed through the existing API.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Extended bbdev operations to support FFT based operations.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Added more options in the API to report the number
of queues exposed and the related priority.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Added device status information, so that the PMD can
expose information related to the underlying accelerator device status.
Minor order change in structure to fit into padding hole.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Mingshan Zhang <mingshan.zhang@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Updated the rte_bbdev_op_type enum
to keep the ABI compatible when new enum values are inserted,
by adding a padded maximum value for array needs.
RTE_BBDEV_OP_TYPE_COUNT is removed and
RTE_BBDEV_OP_TYPE_SIZE_MAX is exposed instead.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Enabled the flag pmd_supports_disable_iova_as_pa in cnxk driver build
files as they work with IOVA as VA. Updated cn9k and cn10k soc build
configurations to disable the IOVA as PA build by default.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Swapped the positions of the mbuf next pointer and the second dynamic field
(dynfield2) if the build is configured to disable IOVA as PA.
This is to move the mbuf next pointer to the first cache line.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Some HW has support for choosing memory pools based on the
packet's size.
This is often useful for saving memory, where the application
can create different pools to steer specific packet sizes,
thus enabling more efficient usage of memory.
For example, let's say HW has a capability of three pools,
- pool-1 size is 2K
- pool-2 size is > 2K and < 4K
- pool-3 size is > 4K
Here,
pool-1 can accommodate packets with sizes < 2K
pool-2 can accommodate packets with sizes > 2K and < 4K
pool-3 can accommodate packets with sizes > 4K
With the multiple mempool capability enabled in SW, an application may
create three pools of different sizes and pass them to the PMD, allowing
the PMD to program the HW based on the packet lengths, so that packets
shorter than 2K are received on pool-1, packets with lengths between 2K
and 4K are received on pool-2 and, finally, packets greater than 4K
are received on pool-3.
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
This patch extends the hairpin-mode command line option of the testpmd
application with an ability to configure whether Rx/Tx hairpin queues
should use locked device memory or RTE memory.
For the purposes of this configuration, the following bits of the 32-bit
hairpin-mode are reserved (a worked example follows the list):
- Bit 8 - If set, then force_memory flag will be set for hairpin RX
queue.
- Bit 9 - If set, then force_memory flag will be set for hairpin TX
queue.
- Bits 12-15 - Memory options for hairpin Rx queue:
- Bit 12 - If set, then use_locked_device_memory will be set.
- Bit 13 - If set, then use_rte_memory will be set.
- Bit 14 - Reserved for future use.
- Bit 15 - Reserved for future use.
- Bits 16-19 - Memory options for hairpin Tx queue:
- Bit 16 - If set, then use_locked_device_memory will be set.
- Bit 17 - If set, then use_rte_memory will be set.
- Bit 18 - Reserved for future use.
- Bit 19 - Reserved for future use.
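For example (a worked reading of the bits above, to be combined with any
other hairpin-mode bits in use): forcing the memory setting for both
directions (bits 8 and 9), locked device memory for the Rx queue (bit 12)
and RTE memory for the Tx queue (bit 17) gives
0x100 + 0x200 + 0x1000 + 0x20000, i.e.:
    dpdk-testpmd ... -- --hairpin-mode=0x21300 ...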
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
This patch adds a capability to place hairpin Rx queue in locked device
memory. This capability is equivalent to storing hairpin RQ's data
buffers in locked internal device memory.
Hairpin Rx queue creation is extended with requesting that RQ is
allocated in locked internal device memory. If allocation fails and
force_memory hairpin configuration is set, then hairpin queue creation
(and, as a result, device start) fails. If force_memory is unset, then
the PMD will fall back to allocating memory for the hairpin RQ in unlocked
internal device memory.
To allow such allocation, the user must set HAIRPIN_DATA_BUFFER_LOCK
flag in FW using mlxconfig tool.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch adds a capability to place hairpin Tx queue in host memory
managed by DPDK. This capability is equivalent to storing hairpin SQ's
WQ buffer in host memory.
Hairpin Tx queue creation is extended with allocating a memory buffer of
proper size (calculated from required number of packets and WQE BB size
advertised in HCA capabilities).
force_memory flag of hairpin queue configuration is also supported.
If it is set and:
- allocation of memory buffer fails,
- or hairpin SQ creation fails,
then device start will fail. If it is unset, the PMD will fall back to
creating the hairpin SQ with the WQ buffer located in unlocked device
memory.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Before this patch, implementation details and configuration of hairpin
queues were decided internally by the PMD. Applications had no control
over the configuration of Rx and Tx hairpin queues, aside from the number
of descriptors, explicit Tx flow mode and disabling automatic binding.
This patch addresses that by adding:
- Hairpin queue capabilities reported by PMDs.
- New configuration options for Rx and Tx hairpin queues.
Main goal of this patch is to allow applications to provide
configuration hints regarding placement of hairpin queues.
These hints specify whether buffers of hairpin queues should be placed
in host memory or in dedicated device memory. Different memory options
may have different performance characteristics and hairpin configuration
should be fine-tuned to the specific application and use case.
This patch introduces new hairpin queue configuration options through
rte_eth_hairpin_conf struct, allowing to tune Rx and Tx hairpin queues
memory configuration. Hairpin configuration is extended with the
following fields:
- use_locked_device_memory - If set, PMD will use specialized on-device
memory to store RX or TX hairpin queue data.
- use_rte_memory - If set, PMD will use DPDK-managed memory to store RX
or TX hairpin queue data.
- force_memory - If set, the PMD will be forced to use the provided memory
settings. If no appropriate resources are available, then device start
will fail. If unset and no resources are available, the PMD will fall back
to using the default type of resource for the given queue.
If application chooses to use PMD default memory configuration, all of
these flags should remain unset.
Hairpin capabilities are also extended, to allow verification of support
of given hairpin memory configurations. Struct rte_eth_hairpin_cap is
extended with two additional fields of type rte_eth_hairpin_queue_cap:
- rx_cap - memory capabilities of hairpin RX queues.
- tx_cap - memory capabilities of hairpin TX queues.
Struct rte_eth_hairpin_queue_cap exposes whether given queue type
supports use_locked_device_memory and use_rte_memory flags.
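A minimal sketch of how an application might use the new fields (the
capability bit names below are assumed from the description above; error
handling omitted):

    struct rte_eth_hairpin_cap cap;
    struct rte_eth_hairpin_conf conf = {
            .peer_count = 1,
            .use_locked_device_memory = 1,
            .force_memory = 1, /* fail device start if not available */
    };

    rte_eth_dev_hairpin_capability_get(port_id, &cap);
    if (!cap.rx_cap.locked_device_memory)
            conf.use_locked_device_memory = 0; /* fall back to PMD default */

    conf.peers[0].port = peer_port_id;
    conf.peers[0].queue = peer_queue_id;
    rte_eth_rx_hairpin_queue_setup(port_id, queue_id, nb_desc, &conf);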
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
libbpf v0.8.0 deprecates the bpf_get_link_xdp_id() and
bpf_set_link_xdp_fd() functions. Use meson to detect if
bpf_xdp_attach() is available and if so, use the recommended
replacement functions bpf_xdp_query_id(), bpf_xdp_attach()
and bpf_xdp_detach().
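For reference, a sketch of the replacement calls (libbpf >= 0.8.0; ifindex,
prog_fd and xdp_flags are placeholders):

    #include <bpf/libbpf.h>

    __u32 prog_id = 0;

    /* previously: bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags) */
    bpf_xdp_attach(ifindex, prog_fd, xdp_flags, NULL);

    /* previously: bpf_get_link_xdp_id(ifindex, &prog_id, xdp_flags) */
    bpf_xdp_query_id(ifindex, xdp_flags, &prog_id);

    /* detach on teardown */
    bpf_xdp_detach(ifindex, xdp_flags, NULL);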
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
The DPAA driver has a dependency on the kernel to perform various
functionalities, so the kernel and DPDK versions should be compatible
for proper working. This patch updates the DPAA guide with the
information that users can refer to in order to find the compatible
kernel version.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
NIC HW controllers often come with congestion management support on
various HW objects such as Rx queue depth or mempool queue depth.
They can also support various modes of operation such as RED
(Random Early Discard), WRED, etc. on those HW objects.
Add a framework to express such modes (enum rte_cman_mode) and
introduce enum rte_eth_cman_obj to enumerate the different
objects the modes can operate on.
Add RTE_CMAN_RED mode of operation and RTE_ETH_CMAN_OBJ_RX_QUEUE,
RTE_ETH_CMAN_OBJ_RX_QUEUE_MEMPOOL objects.
Introduce reserved fields in configuration structure
backed by rte_eth_cman_config_init() to add new configuration
parameters without ABI breakage.
Add rte_eth_cman_info_get() API to get the information such as
supported modes and objects.
Add rte_eth_cman_config_init(), rte_eth_cman_config_set() APIs
to configure congestion management on those objects with the associated mode.
Finally, add rte_eth_cman_config_get() API to retrieve the
applied configuration.
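A hedged sketch of the configuration flow (structure field names are
assumptions; check the ethdev headers for the exact layout):

    struct rte_eth_cman_info info;
    struct rte_eth_cman_config cfg;

    rte_eth_cman_info_get(port_id, &info);
    /* inspect info for supported modes/objects here */

    rte_eth_cman_config_init(port_id, &cfg);
    cfg.obj = RTE_ETH_CMAN_OBJ_RX_QUEUE;
    cfg.obj_param.rx_queue = 0;
    cfg.mode = RTE_CMAN_RED;
    cfg.mode_param.red.min_th = 32;
    cfg.mode_param.red.max_th = 128;
    cfg.mode_param.red.maxp_inv = 10;

    rte_eth_cman_config_set(port_id, &cfg);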
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Sunil Kumar Kori <skori@marvell.com>
Added the ethdev Rx/Tx desc dump API which provides functions for querying
descriptors from the device. HW descriptor info differs across NICs.
The information reflects the I/O process, which is important for debugging.
As the information differs between NICs, a new API is introduced.
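A hedged usage sketch (the signature is assumed to dump a range of
descriptors of a given queue to a FILE stream):

    /* dump the first 4 Rx and Tx descriptors of queue 0 to stdout */
    rte_eth_rx_descriptor_dump(port_id, 0, 0, 4, stdout);
    rte_eth_tx_descriptor_dump(port_id, 0, 0, 4, stdout);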
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
mana can receive Rx interrupts from the kernel through the RDMA verbs
interface. Implement Rx interrupts in the driver.
Signed-off-by: Long Li <longli@microsoft.com>
MANA allocates device queues through the IB layer when starting Tx queues.
When the device is stopped, all the queues are unmapped and freed.
Signed-off-by: Long Li <longli@microsoft.com>
Currently this PMD supports RSS configuration when the device is stopped.
Configuring RSS in running state will be supported in the future.
Signed-off-by: Long Li <longli@microsoft.com>
MANA supports PCI hot plug events. Add this interrupt to DPDK core so its
parent PMD can detect device removal during Azure servicing or live
migration.
Signed-off-by: Long Li <longli@microsoft.com>
MANA is a PCI device. It uses IB verbs to access hardware through the
kernel RDMA layer. This patch introduces build environment and basic
device probe functions.
Signed-off-by: Long Li <longli@microsoft.com>
For the Rx logic, fallback packets are multiplexed to the
correct representor port based on the prepended metadata.
For the Tx logic, because fallback packets are prepended
with metadata, the start of the packet has to be adjusted
in the Tx descriptor.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Adds the framework to support flower representors. The number of VF
representors is parsed from the command line. For physical port
representors the current logic aims to create a representor for
each physical port present on the hardware.
An eth_dev is created for each physical port and VF, and flower
firmware requires a MAC repr cmsg to be transmitted to firmware
with info about the number of physical ports configured.
Reify messages are sent to hardware for each physical port representor.
An rte_ring is also created per representor so that traffic can be
pushed and pulled to this interface.
To up and down the real device represented by a flower representor port
a port mod message is used to convey that info to the firmware. This
message will be used in the dev_ops callbacks of flower representors.
Each cmsg generated by the driver is prepended with a cmsg header.
This commit also adds the logic to fill in the header of cmsgs.
Also add the Rx and Tx paths for flower representors. For Rx, packets are
dequeued from the representor ring and passed to the eth_dev. For Tx,
the first queue of the PF vNIC is used. Metadata about the representor
is added before the packet is sent down to the firmware.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Adds the Rx and Tx function for the ctrl VNIC. The logic is mostly
identical to the normal Rx and Tx functionality of the NFP PMD.
Make use of the ctrl VNIC service logic to service the ctrl vNIC Rx
path.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Adds the basic probing infrastructure to support the flower firmware
application.
Adds the cpp service, used for some user tools.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Instead of a one-liner describing each vdev argument, add a description
and example for each. Move the information describing preferred busy
polling from the "Limitations" section to the "Options" section where it
is better placed. Also make general grammar improvements.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
The relation between the isolated mode in ethdev flow API
and bifurcated driver behaviour was not clearly explained.
It is made clear in the how-to guide that isolated mode is required
for flow bifurcation to the kernel.
On the other hand, the impact of the isolated mode on a bifurcated
driver is made more explicit.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
The TAP device only lasts as long as the DPDK application that opened
it is running. This behavior is bad if the DPDK application needs
to be updated transparently without disturbing other services
using the tap device.
Add a persist feature to the TAP device. If this flag is set, the
kernel network device remains even after the application has exited.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The dev->device.numa_node field is set by each bus driver for
every device it manages to indicate on which NUMA node this device lies.
When this information is unknown, the assigned value is not consistent
across the bus drivers.
All bus drivers now set the default value to SOCKET_ID_ANY (-1)
when the NUMA information is unavailable. This change impacts
rte_eth_dev_socket_id() in the same manner.
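Applications can now rely on a single "unknown" value, e.g.:

    int socket = rte_eth_dev_socket_id(port_id);

    /* SOCKET_ID_ANY (-1) consistently means the NUMA node is unknown */
    if (socket == SOCKET_ID_ANY)
            socket = rte_socket_id(); /* e.g. fall back to caller's socket */
    mp = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
                                 RTE_MBUF_DEFAULT_BUF_SIZE, socket);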
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The rate parameter is modified to uint32_t, so that it can work
for more than 64 Gbps.
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
As announced in the deprecation note, remove the Rx offload flag
'RTE_ETH_RX_OFFLOAD_HEADER_SPLIT' and 'split_hdr_size' field from
the structure 'rte_eth_rxmode'. The places where the examples
and apps initialize the 'split_hdr_size' field, and where the drivers
check if the 'split_hdr_size' value is 0, are also removed.
User can still use `RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT` for per-queue packet
split offload, which is configured by 'rte_eth_rxseg_split'.
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
In some cases the application may receive a packet that should have been
received by the kernel. In this case the application uses KNI or other
means to transfer the packet to the kernel.
With bifurcated driver we can have a rule to route packets matching
a pattern (example: IPv4 packets) to the DPDK application and the rest
of the traffic will be received by the kernel.
But if we want to receive most of the traffic in DPDK except specific
pattern (example: ICMP packets) that should be processed by the kernel,
then it's easier to re-route these packets with a single rule.
This commit introduces a new rte_flow action which allows the application to
re-route packets directly to the kernel without software involvement.
Add new testpmd rte_flow action 'send_to_kernel'. The application
may use this action to route the packet to the kernel while still
in the HW.
Example with testpmd command:
flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
type mask 0xffff / end actions send_to_kernel / end
Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
As part of DPDK 21.11 release, it was announced that the
use of attributes 'ingress' and 'egress' in 'transfer'
rules was deprecated. The transition period is over.
Starting from DPDK 22.11, the use of direction attributes
with attribute 'transfer' is not allowed. To enforce that,
a generic check is added to flow rule validate API.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
These actions are supported by no drivers.
The patch breaks ABI.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
The action is supported by no drivers.
The patch breaks ABI.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
The action is supported by no drivers.
The patch breaks ABI.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
The action is supported by no drivers.
The patch breaks ABI.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
Using rte_mtr_color_in_protocol_set(), the user can configure which
combination of protocol headers, like outer_vlan and outer_ip,
is enabled on a given meter object.
But rte_mtr_meter_vlan_table_update() and
rte_mtr_meter_dscp_table_update() do not have the information about
which table needs to be updated for the corresponding protocol header,
i.e. inner or outer.
Adding a protocol parameter allows the user to provide the required
protocol information so that the corresponding inner or outer table
can be updated for that protocol header.
If the user wishes to configure both the inner and outer tables, the
API must be called twice with the correct protocol information.
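A hedged sketch of updating both tables with the new parameter (argument
order and enum value names are assumptions based on the color-in-protocol
API):

    /* update the DSCP table for the outer IP header ... */
    rte_mtr_meter_dscp_table_update(port_id, mtr_id,
            RTE_MTR_COLOR_IN_PROTO_OUTER_IP, dscp_table, &error);
    /* ... and call again for the inner IP header */
    rte_mtr_meter_dscp_table_update(port_id, mtr_id,
            RTE_MTR_COLOR_IN_PROTO_INNER_IP, dscp_table, &error);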
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Create a new Flow API action: METER_MARK.
It meters a packet stream and marks its packets with colors.
The marking is done in metadata, not in a packet field.
Unlike the METER action, it performs no policing at all.
A user has the flexibility to create any policies with the help of
the METER_COLOR item later; only the meter profile is mandatory here.
Add testpmd command line to match for METER_MARK action:
flow create ... actions meter_mark mtr_profile 20 / end
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Introduce a new Meter API to retrieve a Meter profile and policy
objects using the profile/policy ID previously created with
meter_profile_add() and meter_policy_create() functions.
That allows saving the pointer and avoiding any lookups in the
corresponding lists for quick access during flow rule creation.
Also, it eliminates the need for CIR, CBS and EBS calculations
and conversion to a PMD-specific format when the profile is used.
Pointers are destroyed and cannot be used after the corresponding
meter_profile_delete() or meter_policy_delete() are called.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Extend modify_field Flow API with support of Meter Color Marker
modifications. It allows setting the packet's metadata to any
color marker: green, yellow or red. A user is able to specify
an initial packet color for the Meter API or create simple Metering
and Marking flow rules based on their own coloring algorithm.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Provide an ability to use a Color Marker set by a Meter
as a matching item in Flow API. The Color Marker reflects
the metering result by setting the metadata for a
packet to a particular codepoint: green, yellow or red.
Add testpmd command line to match on a meter color:
flow create 0 ingress group 0 pattern meter color is green / end
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Similar to RISC-V, the current version for LoongArch does not support
vector instructions. Re-use the vector processing stubs in the ixgbe PMD
defined for PPC for LoongArch. This enables ixgbe PMD usage in scalar
mode on LoongArch.
The ixgbe PMD driver was validated with an Intel X520-DA2 NIC and the
testpmd application and the l2fwd and l3fwd examples.
Signed-off-by: Min Zhou <zhoumin@loongson.cn>
Add all necessary elements for DPDK to compile and run EAL on
LoongArch64 Soc.
This includes:
- EAL library implementation for LoongArch ISA.
- meson build structure for 'loongarch' architecture.
RTE_ARCH_LOONGARCH define is added for architecture identification.
- xmm_t structure operation stubs as there is no vector support in
the current version for LoongArch.
Compilation was tested on Debian and CentOS using a loongarch64
cross-compile toolchain on x86 build hosts. Functions were tested
on Loongnix and Kylin, which are two Linux distributions supporting
LoongArch hosts, based on Linux 4.19 and maintained by Loongson
Corporation.
We also tested DPDK on LoongArch with some external applications,
including: Pktgen-DPDK, OVS, VPP.
The platform is currently marked as Linux-only because no OS
other than Linux currently supports LoongArch hosts.
The i40e PMD driver is disabled on LoongArch because of the absence
of vector support in the current version.
Similar to RISC-V, the compilation of following modules has been
disabled by this commit and will be re-enabled in later commits as
fixes are introduced:
net/ixgbe, net/memif, net/tap, example/l3fwd.
Signed-off-by: Min Zhou <zhoumin@loongson.cn>
RTE_TEST_[RT]X_DESC_DEFAULT and RTE_TEST_[RT]X_DESC_MAX macros have been
copied in a lot of app/ and examples/ code.
Those macros are local to each program.
They are not related to a DPDK public header/API, drop the RTE_TEST_
prefix.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
The function return type is changed to fixed-width uint32_t
to be consistent with what appears to be the original author's intent.
It doesn't make much sense to return signed integers from these functions.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Structure rte_security_session is moved to internal
headers which are not visible to applications.
The only field which should be used by app is opaque_data.
This field can now be accessed via set/get APIs added in this
patch.
Subsequent changes in app and lib are made to compile the code.
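A hedged sketch of the accessors (names are assumed to follow the usual
set/get pattern for opaque data; 'sess' and 'app_cookie' are placeholders):

    /* 'sess' is the opaque handle returned by rte_security_session_create() */
    rte_security_session_opaque_data_set(sess, app_cookie);
    uint64_t cookie = rte_security_session_opaque_data_get(sess);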
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Gagandeep Singh <g.singh@nxp.com>
Tested-by: David Coyle <david.coyle@intel.com>
Tested-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
Structure rte_cryptodev_sym_session is moved to internal
headers which are not visible to applications.
The only field which should be used by app is opaque_data.
This field can now be accessed via set/get APIs added in this
patch.
Subsequent changes in app and lib are made to compile the code.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Tested-by: Gagandeep Singh <g.singh@nxp.com>
Tested-by: David Coyle <david.coyle@intel.com>
Tested-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
This function was never implemented and has been deprecated for a long
time. We can remove it.
Signed-off-by: David Marchand <david.marchand@redhat.com>
As part of the agreed process for deprecating KNI in DPDK, the example
app is scheduled for removal as part of the 22.11 release.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
NVIDIA acquired Mellanox Technologies in 2020.
The DPDK documentation and code might still include instances
of or references to Mellanox trademarks (like BlueField and ConnectX)
that are now NVIDIA trademarks.
The PCI IDs and copyrights are unchanged.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Gal Cohen <galco@nvidia.com>
Introduce the ability to aggregate crypto operations processed by the event
crypto adapter into a single event containing an rte_event_vector whose
event type is RTE_EVENT_TYPE_CRYPTODEV_VECTOR.
The application should set RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR in
rte_event_crypto_adapter_queue_conf::flag and provide the vector
configuration, with respect to rte_event_crypto_adapter_vector_limits,
which can be obtained by calling rte_event_crypto_adapter_vector_limits_get,
to enable vectorization.
The event crypto adapter would be responsible for vectorizing the crypto
operations based on provided response information in
rte_event_crypto_metadata::response_info.
Updated drivers and tests according to the new API.
Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
ShangMi 3 (SM3) is a cryptographic hash function used in
the Chinese National Standard.
- Added SM3 algorithm
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
ShangMi 4 (SM4) is a block cipher used in the
Chinese National Standard for Wireless LAN WAPI and also
used with Transport Layer Security.
Added SM4 encryption algorithm in ECB, CBC and CTR modes.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The API rte_security_get_userdata() was unused by most of
the drivers and it merely retrieved userdata from an mbuf dynamic field.
Hence, the API was removed and the application can directly get the
userdata from the dynamic field. This helps in removing extra checks
in the datapath.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
DPDK libraries should never call rte_exit on failure, so change the
function return type of rte_metrics_init to "int" to allow returning an
error code to the application rather than exiting the whole app on init
failure.
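Applications should now check the return value instead of relying on the
library to exit, e.g. (assuming a negative error code on failure):

    int ret = rte_metrics_init(rte_socket_id());
    if (ret < 0)
            printf("metrics init failed: %s\n", rte_strerror(-ret));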
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Library functions should not cause the app to exit or panic. Replace the
existing panic call in the EAL remote launch functions with an error
code return instead.
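Callers should therefore check the return value, e.g.:

    int ret = rte_eal_remote_launch(lcore_main, NULL, worker_id);
    if (ret != 0)
            /* e.g. -EBUSY if the worker lcore is not in WAIT state */
            printf("cannot launch on lcore %u: %s\n",
                   worker_id, rte_strerror(-ret));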
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
The event vector changes announced in deprecation notices and targeted
for v22.11 have been merged in the following commits; remove the
deprecation notices.
Fixes: 0fbb55efa5 ("eventdev: add element offset to event vector")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Replace *u64s with u64s in the rte_event_vector structure, as
*ptrs already serves the purpose of holding pointers
and the intention of u64s is to hold an array of uint64_t
values.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add `rte` prefix to stop flush callback function pointer
declaration to avoid conflicts with application functions,
``eventdev_stop_flush_t`` is renamed to
``rte_eventdev_stop_flush_t``.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
For best performance, applications running on certain cores should use
the DLB device locally available on the same tile along with other
resources. To allocate optimal resources, probing is done for each
producer port (PP) for a given CPU and the best performing ports are
allocated to producers. The CPU used for probing is either the first
core of producer coremask (if present) or the second core of EAL
coremask. This will be extended later to probe for all CPUs in the
producer coremask or EAL coremask.
Producer coremask can be passed along with the BDF of the DLB devices.
"-a xx:y.z,producer_coremask=<core_mask>"
Applications also need to pass RTE_EVENT_PORT_CFG_HINT_PRODUCER during
rte_event_port_setup() for producer ports for optimal port allocation.
For optimal load balancing, ports that map to one or more QIDs in common
should not be in numerical sequence. The port->QID mapping is application
dependent, but the driver interleaves port IDs as much as possible to
reduce the likelihood of sequential ports mapping to the same QID(s).
Hence, DLB uses an initial allocation of port IDs to maximize the
average distance between an ID and its immediate neighbors. This
initial port allocation option can be passed through the devarg
"default_port_allocation=y" (or "Y").
When events are dropped by workers or consumers that use LDB ports,
completions are sent which are just ENQs and may impact the latency.
To address this, probing is done for LDB ports as well. Probing is
done on ports per 'cos'. When default cos is used, ports will be
allocated from best ports from the best 'cos', else from best ports of
the specific cos.
Signed-off-by: Abdullah Sevincer <abdullah.sevincer@intel.com>
rte_flow_action_handle_create/destroy/update() have their own
asynchronous rte_flow_async_action_handle_create/destroy/update()
version functions to accelerate indirect action operations in the
queue based flow engine. However, the asynchronous version of the query
function for indirect actions was missing.
Add rte_flow_async_action_handle_query() function corresponding
to rte_flow_action_handle_query(). The new asynchronous version
function enables enqueueing the query to the hardware, similar to what
asynchronous flow management does, and returns immediately to free
the CPU for other tasks. The application can get the query results from
rte_flow_pull() when the hardware completes its work.
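A hedged sketch of the intended usage (argument order is assumed to mirror
the other asynchronous flow calls; 'handle' and 'user_data' are
placeholders):

    struct rte_flow_op_attr attr = { .postpone = 0 };
    struct rte_flow_query_count count = { 0 };
    struct rte_flow_op_result res[1];
    struct rte_flow_error err;

    rte_flow_async_action_handle_query(port_id, queue_id, &attr,
                                       handle, &count, user_data, &err);
    /* ... later, collect the completion and read the result */
    while (rte_flow_pull(port_id, queue_id, res, 1, &err) == 0)
            ;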
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
In the queue based async flow engine, in order to optimize the flow
insertion rate, the PMD can use hints from the application to
pre-allocate resources during the initialization phase for actions
such as count/meter/aging.
This commit adds the connection tracking action hints.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
If there is no param in the represented_port item, it will be treated as
matching all ports by default. But there are some limitations when using it
with meter hierarchy.
This patch adds the limitation that when matching all ports, the meter
hierarchy should not contain any meter having drop count.
Fixes: e8146c63 ("net/mlx5: support represented port item in flow rules")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In case of higher order (greater than 99) logical cores, the name was
truncated (the length is restricted to 16 characters, including the
terminating null byte ('\0')), which makes it hard to follow threads.
Before this fix, the issue could be reproduced using the following arguments:
--lcores=0,10@1,100@2
Then we had:
lcore-worker-10
lcore-worker-10
Signed-off-by: Abdullah Ömer Yamaç <omer.yamac@ceng.metu.edu.tr>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Those helpers have been marked as deprecated for a long time and have
documented equivalent helpers.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
This patch enables scalar path inner and outer Tx checksum offload
for tunnel packets by configuring ol_flags.
Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Tested-by: Ke Xu <ke1.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add a known issue: configuring VLAN filters from VF is unsupported
for i40e driver 2.17.15.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Move all VF-related limitations and known issues from i40e.rst to
intel_vf.rst. As i40evf has been removed from i40e, i40e.rst should only
cover PF information.
The patch also fixes a couple of typos and refines the wording to be more
accurate.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Add support for sending traffic to the original DCF port
with 'port_representor' action by using DCF port id as 'port_id'.
For example:
testpmd> flow create 0 ingress pattern any
/ end actions port_representor port_id 0 / end
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add flow subscription pattern support for AVF.
The supported patterns are listed below:
eth/vlan/ipv4
eth/ipv4(6)
eth/ipv4(6)/udp
eth/ipv4(6)/tcp
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The ice hardware has the ability to extract protocol fields into the flex
descriptor by per-queue programming. However, since the dynamic fields for
proto_ext are allocated by the PMD, it is the responsibility of the
application to reserve the field before starting DPDK.
The application passes the offset and proto_ext name to the PMD with devargs.
Remove the related private API from the 'rte_pmd_ice.h' file.
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Tested-by: Jin Ling <jin.ling@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Support disabling DCF ACL engine via devarg "acl=off" in cmdline, aiming to
shorten the DCF startup time.
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
According to the ABI and API deprecation notices, remove the deprecated
VF action as hard-to-use / ambiguous.
Action REPRESENTED_PORT should be used instead.
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add support for action REPRESENTED_PORT in DCF. It is supposed to send
matching traffic to the entity (VF) represented by the given ethdev, at
the embedded switch level.
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Added support for MACsec in rte_security for offloading
MACsec Protocol operation to inline NIC device or a crypto device.
To support MACsec we cannot just create one security session and
send it with the packet to process it. The MACsec specifications define
3 different entities - the SECY entity, SC (secure channel) and
SA (security association). The same SA can be used by multiple SCs and,
similarly, many SECYs can share the same SCs. Hence, in order to support
these many-to-one relationships between all entities, 2 new APIs are
created - rte_security_macsec_sc_create and rte_security_macsec_sa_create.
The flow of execution of the APIs is:
- rte_security_macsec_sa_create
- rte_security_macsec_sc_create
- rte_security_session_create (for the SECY)
In case of inline protocol processing, an rte_flow can be created with
the rte_security action. A new flow item will be added for the MACsec header.
New APIs are also created for getting SC and SA stats.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Add support to start or stop a particular queue
that is associated with the adapter.
The start function enables the Tx adapter to start enqueueing
packets to the Tx queue.
The stop function stops the Tx adapter from enqueueing any
packets to the Tx queue. The stop API also frees any packets
that may have been buffered for this queue. All in-flight packets
destined for the queue are freed by the adapter runtime until the
queue is started again.
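A hedged usage sketch (function names are assumed to follow the adapter
naming convention):

    /* stop enqueueing to Tx queue 3 of ethdev 0; buffered packets are freed */
    rte_event_eth_tx_adapter_queue_stop(0, 3);
    /* ... reconfigure ... */
    rte_event_eth_tx_adapter_queue_start(0, 3);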
Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Removed support for limiting XAQ via devargs. If XAQ is limited, newly
added work could run out of XAQ entries and disable the queue.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
The ioat rawdev driver has been superseded by the ioat and idxd dmadev
drivers, and has been deprecated for some time, so remove it.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Clarify that for Outbound Inline IPsec processing, the L2 header
needs to be up to date with the ether type which will be applicable
post IPsec processing, as the IPsec offload only touches L3 and above.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Clarify the mbuf metadata needed for Outbound Inline IPsec processing.
The application needs to provide mbuf.l3_len and the L3 type in
mbuf.ol_flags so that, like tunnel mode using mbuf.l2_len, transport mode
can make use of l3_len and l3_type to perform
proper transport mode IPsec processing.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add support for offloading RTE_CRYPTO_CIPHER_AES_DOCSISBPI and
RTE_CRYPTO_CIPHER_DES_DOCSISBPI algorithms to symmetric crypto session.
Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The Arm port of the ipsec_mb library [1] has a different header file name
than the Intel ipsec_mb library. The proper header name is picked according
to the architecture so that the code compiles when ipsec_mb is installed
on an Arm platform.
The Arm port currently supports ZUC and SNOW3G. Calls to other
algorithms will be blocked.
[1] https://gitlab.arm.com/arm-reference-solutions/ipsec-mb/-/tree/main
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Ashwin Sekhar T K <asekhar@marvell.com>
Added new fields to represent event queue weight and affinity in the
rte_event_queue_conf structure. The internal op to get a queue attribute is
removed as it is no longer needed. Updated the driver to use the new fields.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support to configure and use periodic event timers in
software timer adapter.
The structure ``rte_event_timer_adapter_stats`` is extended
by adding a new field, ``evtim_drop_count``. This stat
represents the number of times an event_timer expiry event
is dropped by the event timer adapter.
Updated the software eventdev pmd timer_adapter_caps_get
callback function to report the support of periodic
event timer capability.
Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Acked-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
Added rte_event_eth_tx_adapter_instance_get() to get the
adapter instance id for specified ethernet device id and
tx queue index.
Added testcase for rte_event_eth_tx_adapter_instance_get().
Added rte_event_eth_tx_adapter_instance_get() details in
prog_guide/event_ethernet_tx_adapter.rst
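A hedged usage sketch (the output parameter type is assumed):

    uint8_t txa_inst_id;

    if (rte_event_eth_tx_adapter_instance_get(eth_dev_id, tx_queue_id,
                                              &txa_inst_id) == 0)
            printf("queue %u is serviced by Tx adapter %u\n",
                   tx_queue_id, txa_inst_id);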
Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
Reviewed-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Added rte_event_eth_rx_adapter_instance_get() to get
adapter instance id for specified ethernet device id and
rx queue index.
Added telemetry handler for rte_event_eth_rx_adapter_instance_get().
Added test case for rte_event_eth_rx_adapter_instance_get()
Added rte_event_eth_rx_adapter_instance_get() details in
prog_guide/event_ethernet_rx_adapter.rst
Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
Reviewed-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Such deprecation was commenced in DPDK 21.11.
Since then, no parties have objected. Remove.
The patch breaks ABI.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Such deprecation was commenced in DPDK 21.11.
Since then, no parties have objected. Remove.
The patch breaks ABI.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Such deprecation was commenced in DPDK 21.11.
Since then, no parties have objected. Remove.
The patch breaks ABI.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Such deprecation was commenced in DPDK 21.11.
Since then, no parties have objected. Remove.
The patch breaks ABI.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
The paragraph describing flow operation without representors
shows the use of traffic direction attributes in combination
with attribute "transfer". Such a scenario has been deprecated.
Also, the paragraph mentions the use of deprecated action VF.
Drop irrelevant parts, adjust remaining text and the diagram.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Items PORT_REPRESENTOR and REPRESENTED_PORT as well as their
action counterparts have been a part of the flow library for
a year already. However, these haven't been described in the
switch representation guide. Provide the missing description.
Also, update relevant testpmd flow rule examples accordingly.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
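As an illustrative sketch of the described primitives via the C API (the
port ids and the omission of an explicit item mask are assumptions of this
example, not part of the patch):

#include <rte_flow.h>

/* Sketch: a transfer rule matching traffic entering the embedded switch
 * from the port represented by ethdev 1 and delivering it to the port
 * represented by ethdev 0. Proxy port selection via
 * rte_flow_pick_transfer_proxy() is omitted for brevity. */
static struct rte_flow *
steer_between_represented_ports(uint16_t proxy_port_id,
				struct rte_flow_error *error)
{
	static const struct rte_flow_attr attr = { .transfer = 1 };
	static const struct rte_flow_item_ethdev src = { .port_id = 1 };
	static const struct rte_flow_action_ethdev dst = { .port_id = 0 };
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .spec = &src },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &dst },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(proxy_port_id, &attr, pattern, actions, error);
}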
There are testpmd examples which demonstrate flow alteration
and steering between endpoints using outdated action PORT_ID.
Revisit these examples to make use of new port-based actions.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
There has been support for similar action PORT_ID for
some time already, but this action will be deprecated.
Support action REPRESENTED_PORT before the transition.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
There's been support for similar actions PHY_PORT and PORT_ID
for some time already, but these actions are being deprecated.
Support action REPRESENTED_PORT to prepare for the transition.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
rte_flow_pick_transfer_proxy() was first added to DPDK 21.11.
Since then, no one has requested any fixes. At the same time,
the API is required by series [1] in OvS for the new release.
[1] http://patchwork.ozlabs.org/project/openvswitch/list/?series=310415
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
The following set of primitives has been introduced in 21.11:
- RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR
- RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT
- RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR
- RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT
Since then, no one has requested any fixes. At the same time,
the set is required by series [1] in OvS for the new release.
[1] http://patchwork.ozlabs.org/project/openvswitch/list/?series=310415
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
rte_eth_rx_metadata_negotiate() was introduced in DPDK 21.11.
Since then, no one has requested any fixes. At the same time,
the API is required by series [1] in OvS for the new release.
[1] http://patchwork.ozlabs.org/project/openvswitch/list/?series=310415
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
MEMPOOL_PG_NUM_DEFAULT and MEMPOOL_PG_SHIFT_MAX defines are unused
since the xmem API removal.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Replacement RTE_MEMPOOL_REGISTER_OPS() should be used instead.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
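A minimal sketch of the replacement macro in use; the ops callbacks here
are non-functional stubs for brevity, and the names are made up for this
example:

#include <errno.h>
#include <rte_mempool.h>

static int
stub_alloc(struct rte_mempool *mp)
{
	mp->pool_data = NULL;
	return 0;
}

static void
stub_free(struct rte_mempool *mp)
{
	(void)mp;
}

static int
stub_enqueue(struct rte_mempool *mp, void * const *obj_table, unsigned int n)
{
	(void)mp; (void)obj_table; (void)n;
	return -ENOBUFS;
}

static int
stub_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
{
	(void)mp; (void)obj_table; (void)n;
	return -ENOBUFS;
}

static unsigned int
stub_get_count(const struct rte_mempool *mp)
{
	(void)mp;
	return 0;
}

/* Register the ops with the RTE_-prefixed macro. */
static const struct rte_mempool_ops stub_mempool_ops = {
	.name = "stub_ops",
	.alloc = stub_alloc,
	.free = stub_free,
	.enqueue = stub_enqueue,
	.dequeue = stub_dequeue,
	.get_count = stub_get_count,
};

RTE_MEMPOOL_REGISTER_OPS(stub_mempool_ops);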
MEMPOOL_HEADER_SIZE() is removed. The replacement with the RTE_ prefix
is internal only since it is an implementation detail that is not
required in applications.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Limit the telemetry command characters to the minimum set needed for
current implementations. This prevents issues with invalid JSON
characters needing to be escaped in replies.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Add an FC check in the vector event Tx path; the check needs to be
performed after the head wait, right before the LMTST is issued.
Since SQB pool FC updates are delayed w.r.t. the actual utilization of
the pool, add sufficient slack to avoid overflow.
Added a new device argument to override the default configured SQB
slack; it can be used as follows:
-a 0002:02:00.0,sqb_slack=32
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Added functionality to update link speed, duplex mode and link state.
Signed-off-by: Sathesh Edara <sedara@marvell.com>
Acked-by: Veerasenareddy Burru <vburru@marvell.com>
This patch renames the Octeon endpoint driver from octeontx_ep to
octeon_ep, reflecting a single unified driver that supports current
OcteonTX and future Octeon PCI endpoint NICs.
Signed-off-by: Sathesh Edara <sedara@marvell.com>
Acked-by: Veerasenareddy Burru <vburru@marvell.com>
Remove the deprecated fdir_conf from the device configuration.
Assume that the mode is equal to RTE_FDIR_MODE_NONE.
Add an internal Flow Director configuration copy in ixgbe and txgbe device
private data since the flow API support requires it. Initialize the mode to
the mode of the first flow rule on rule validation or creation.
Since Flow Director configuration data types are still used by some
drivers internally, move them from the public API to the ethdev driver
internal API.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Dongdong Liu <liudongdong3@huawei.com>
Introduce a new command and remove the last part of specific port init
from testpmd.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Move related specific testpmd commands into this driver directory.
The bypass init is left in testpmd at this point and can be moved later.
While at it, fix checkpatch warnings.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Remove deprecated ``ETH_VLAN_*`` and ``ETH_QINQ_*`` defines.
Use corresponding defines with ``RTE_`` prefix instead.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Remove deprecated ``DEV_RX_OFFLOAD_*`` and ``DEV_TX_OFFLOAD_*`` defines.
Use corresponding defines with ``RTE_ETH_RX_OFFLOAD_`` and
``RTE_ETH_TX_OFFLOAD_`` prefix instead.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Remove deprecated ``ETH_RSS_*`` defines used for hash function and RETA
size specification. Use corresponding defines with ``RTE_`` prefix
instead.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Remove deprecated ``ETH_MQ_RX_*`` and ``ETH_MQ_TX_*`` defines.
Use corresponding defines with ``RTE_`` prefix instead.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Remove deprecated ``ETH_LINK_SPEED_``, ``ETH_SPEED_NUM_`` and
``ETH_LINK_`` defines. Use corresponding defines with ``RTE_`` prefix
instead.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Make rte_device opaque for non internal users.
This will make extending this object possible without breaking the ABI.
Some applications may have been dereferencing rte_device objects, so mark
this object's accessors as stable.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Make rte_driver opaque for non internal users.
This will make extending this object possible without breaking the ABI.
Introduce a new driver header and move rte_driver definition.
Update drivers and library to use the internal header.
Some applications may have been dereferencing rte_driver objects, so mark
this object's accessors as stable.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Make rte_bus opaque for non internal users.
This will make extending this object possible without breaking the ABI.
Introduce a new driver header and move rte_bus definition and helpers.
Update drivers and library to use the internal header.
Some applications may have been dereferencing rte_bus objects, so mark
this object's accessors as stable.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
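As a hedged sketch of what application code might look like once these
structures are opaque; the accessor names rte_dev_name(), rte_dev_bus() and
rte_bus_name() are assumptions to be checked against rte_dev.h and
rte_bus.h:

#include <stdio.h>
#include <rte_bus.h>
#include <rte_dev.h>
#include <rte_ethdev.h>

/* Sketch: replace direct field dereferences with stable accessors. */
static void
print_device_info(uint16_t port_id)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0 || info.device == NULL)
		return;
	/* Instead of info.device->name / info.device->bus->name: */
	printf("port %u: device %s on bus %s\n", port_id,
	       rte_dev_name(info.device),
	       rte_bus_name(rte_dev_bus(info.device)));
}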
The vmbus bus interface is for drivers only.
Mark as internal and move the header to the driver headers list.
While at it, cleanup the code:
- fix indentation,
- remove unneeded reference to bus specific singleton object,
- remove unneeded list head structure type,
- reorder the definitions and macro manipulating the bus singleton object,
- remove inclusion of rte_bus.h and fix the code that relied on implicit
inclusion,
Signed-off-by: David Marchand <david.marchand@redhat.com>
The vdev bus interface is for drivers only.
Mark as internal and move the header to the driver headers list.
While at it, cleanup the code:
- fix indentation,
- remove unneeded reference to bus specific singleton object,
- remove unneeded list head structure type,
- reorder the definitions and macro manipulating the bus singleton object,
- remove inclusion of rte_bus.h and fix the code that relied on implicit
inclusion,
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
The pci bus interface is for drivers only.
Mark as internal and move the header to the driver headers list.
While at it, cleanup the code:
- fix indentation,
- remove unneeded reference to bus specific singleton object,
- remove unneeded list head structure type,
- reorder the definitions and macro manipulating the bus singleton object,
- remove inclusion of rte_bus.h and fix the code that relied on implicit
inclusion,
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
The ifpga bus interface is for drivers only.
Mark as internal and move the header to the driver headers list.
While at it, cleanup the code:
- remove unneeded list head structure type,
- reorder the definitions and macro manipulating the bus singleton object,
- remove inclusion of rte_bus.h and fix the code that relied on implicit
inclusion,
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
The auxiliary bus interface is for drivers only.
Mark as internal and move the header to the driver headers list.
While at it, cleanup the code:
- fix indentation,
- remove unneeded reference to bus specific singleton object,
- remove unneeded list head structure type,
- reorder the definitions and macro manipulating the bus singleton object,
- remove inclusion of rte_bus.h and fix the code that relied on implicit
inclusion,
Signed-off-by: David Marchand <david.marchand@redhat.com>
Those macros have no real value and are easily replaced with a simple
if() block.
Existing users have been converted using a new cocci script.
Deprecate them.
Signed-off-by: David Marchand <david.marchand@redhat.com>
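As an illustration of the replacement pattern (RTE_FUNC_PTR_OR_ERR_RET() is
assumed here as an example of such a macro, and the struct is made up for
the sketch):

#include <errno.h>
#include <stddef.h>

struct ops {
	int (*configure)(void);
};

/* Before: the deprecated helper hid a conditional return, e.g.
 *   RTE_FUNC_PTR_OR_ERR_RET(ops->configure, -ENOTSUP);
 * After conversion by the cocci script, the check is an explicit if(): */
static int
call_configure(const struct ops *ops)
{
	if (ops->configure == NULL)
		return -ENOTSUP;
	return ops->configure();
}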
Those commands date back to the early stages of DPDK when only PCI
devices were supported.
At the time, developers may have used those commands to help in
debugging their buggy^Wwork in progress drivers.
By removing them, we can drop the dependency on the PCI bus and library and
make testpmd bus agnostic.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
There is no in-tree user for this accessor that returns the PCI bus
object.
On the other hand, a bus object can be retrieved by name using
rte_bus_find_by_name.
We can remove this driver specific API.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Start a new release cycle with empty release notes.
The ABI version becomes 23.0.
The map files are updated to the new ABI major number (23).
The ABI exceptions are dropped and CI ABI checks are disabled because
compatibility is not preserved.
Special handling of removed drivers is also dropped in check-abi.sh and
a note has been added in libabigail.abignore as a reminder.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The structure rte_event_queue_conf will be extended to include fields to
support weight and affinity attributes. Once these get added in DPDK 22.11,
the eventdev internal op queue_attr_get can be removed.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
The structure ``rte_event_vector`` will be modified to include
``elem_offset:12`` bits taken from ``rsvd:15``.
The ``elem_offset`` defines the offset into the vector array from
which valid elements are present.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
The field `*u64s` in the structure `rte_event_vector` will
be replaced with `u64s`.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
The stop flush callback is missing the `rte_` prefix
and might conflict with application declarations.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
The structure ``rte_event_timer_adapter_stats`` will be
extended by adding a new field ``evtim_drop_count``.
This stat will represent the number of times an event_timer expiry event
is dropped by the event timer adapter.
Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Reviewed-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
The rte_cryptodev_cb_fn function prototype will be extended
with a new parameter, qp_id, to return to the application the ID of the
queue pair that got the error interrupt, so that the application can
reset that particular queue pair.
https://mails.dpdk.org/archives/dev/2022-June/245428.html
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
MACsec support is planned for DPDK 22.11, which would
result in ABI breakage in some of the rte_security structures.
This patch gives a deprecation notice for the affected structures.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
New event subtypes need to be added for notifying expiry events
upon reaching IPsec SA soft packet expiry and hard packet/byte
expiry limits. These would be added in DPDK 22.11.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Announce the intent to resolve in DPDK 22.11 historical usage which
prevents graceful extension of enums and API without troublesome ABI
breakage, as well as to extend the API with RTE_BBDEV_OP_FFT for a new
operation type in bbdev and with other new members in existing structures.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
RTE_ETH_RX_OFFLOAD_HEADER_SPLIT offload was introduced some time ago to
substitute the bit-field header_split in struct rte_eth_rxmode. It allows
enabling per-port header split offload with the header size controlled
using split_hdr_size in the same structure.
Right now, no PMD actually supports RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
with the above definition. Many examples and test apps initialize the field
to 0 explicitly. Most drivers simply ignore split_hdr_size since
the offload is not advertised, but some double-check that its value is 0.
So the RTE_ETH_RX_OFFLOAD_HEADER_SPLIT offload and the split_hdr_size field
will be removed in DPDK 22.11. After DPDK 22.11 LTS,
RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT can still be used for per-queue Rx
packet split offload, which is configured by rte_eth_rxseg_split.
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
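For reference, a minimal sketch of the remaining per-queue Rx packet split
configuration; the pool choices, segment length and descriptor count are
illustrative:

#include <rte_ethdev.h>

/* Sketch: per-queue Rx packet split via RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT
 * and rte_eth_rxseg_split. The offload must also be enabled for the port
 * at rte_eth_dev_configure() time. */
static int
setup_split_rx_queue(uint16_t port_id, uint16_t queue_id,
		     struct rte_mempool *hdr_pool,
		     struct rte_mempool *pay_pool)
{
	union rte_eth_rxseg rx_seg[2] = {
		{ .split = { .mp = hdr_pool, .length = 128, .offset = 0 } },
		{ .split = { .mp = pay_pool, .length = 0, .offset = 0 } },
	};
	struct rte_eth_rxconf rxconf = {
		.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT,
		.rx_seg = rx_seg,
		.rx_nseg = 2,
	};

	/* When rx_seg is used, the mempool argument must be NULL. */
	return rte_eth_rx_queue_setup(port_id, queue_id, 512,
				      rte_eth_dev_socket_id(port_id),
				      &rxconf, NULL);
}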
The rte_eth_set_queue_rate_limit argument rate will be modified to uint32_t
to support more than 64 Gbps.
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
To enable a single unified driver to support current OcteonTX and
future Octeon PCI endpoint NICs, the octeontx_ep driver will be renamed
to octeon_ep, reflecting a common driver for all Octeon based
PCI endpoint NICs.
Signed-off-by: Veerasenareddy Burru <vburru@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
rte_pmd_ifpga_get_pci_bus() documentation is vague and it is unclear
what could be done with it.
On the other hand, EAL provides a standard API to retrieve a bus object
by name.
Announce removal of this driver specific API for v22.11.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Wei Huang <wei.huang@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The dpaa2_cmdif raw driver is no longer in use,
so it will be removed in v22.11.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Following discussion on-list [1], we will look to limit the allowed
characters in names for items in telemetry. This will simplify the
escaping needed for JSON output, or any future output formats. The lists
will initially be minimal, since expansion to allow more characters can
be done without affecting compatibility, while reducing the set cannot.
[1] http://inbox.dpdk.org/dev/20220623164245.561371-1-bruce.richardson@intel.com/#r
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
rte_driver and rte_device are unnecessarily exposed in the public API/ABI.
Announce that they will be made opaque in the public API and mark
associated API as internal.
This impacts all buses, as their driver registration mechanism will be
made internal.
Note: the PCI bus had a similar deprecation notice that we can remove as
the new one is more generic.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
rte_bus is unnecessarily exposed in the public API/ABI.
Besides, we had cases where extending rte_bus was necessary.
Announce that rte_bus will be made opaque in the public API and mark
associated API as internal.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
In case NUMA node of a device is unknown,
the default value must be consistently -1.
Link: https://patches.dpdk.org/project/dpdk/patch/20211026090610.10823-1-houssem.bouhlel@6wind.com/
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Announce the deprecation plan for KNI kernel module, library, PMD
and example.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Fix grammar, spelling and formatting of DPDK 22.07 release notes.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
This updates the doc to include new supported devices like ConnectX-7,
and updates the description of older ones.
Signed-off-by: Raslan Darawsheh <rasland@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Previously, QinQ was enabled by default and could not be disabled,
but there is a performance drop when QinQ is enabled.
So, disable QinQ by default and also update the known VLAN
issue for this configuration.
Fixes: 5bd74df1db ("net/i40e: fix QinQ enablement")
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
To help encourage use of virtio-user in place of KNI, put a reference to
the relevant howto section at the top of the KNI doc.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The HOWTO guide for using virtio-user as an exception path to the kernel
only provided an example of how testpmd may be used for that purpose.
However, a real application wanting to use virtio-user as an exception path
would likely want to create such devices from code within the app
itself. Therefore, we update the doc with instructions and a code
snippet showing how this may be done.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
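A hedged sketch of the kind of snippet meant here, creating a virtio-user
port from application code; the device name and devargs values are
illustrative:

#include <rte_dev.h>
#include <rte_ethdev.h>

/* Sketch: create a virtio-user exception-path device at runtime
 * instead of passing --vdev on the command line. */
static int
create_virtio_user_port(uint16_t *port_id)
{
	const char *name = "virtio_user0";
	const char *args =
		"path=/dev/vhost-net,queues=1,queue_size=1024,iface=dpdk-tap0";
	int ret;

	ret = rte_eal_hotplug_add("vdev", name, args);
	if (ret < 0)
		return ret;
	/* The resulting ethdev carries the vdev name. */
	return rte_eth_dev_get_port_by_name(name, port_id);
}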
This patch extensively reworks the howto guide on using virtio-user for
exception packets. Changes include:
* rename "exceptional path" to "exception path"
* remove references to uio and just reference vfio-pci
* simplify testpmd command-lines, giving a basic usage example first
before adding on detail about checksum or TSO parameters
* give a complete working example showing traffic flowing through the
whole system from a testpmd loopback using the created TAP netdev
* replace use of "ifconfig" with Linux standard "ip" command
* other general rewording.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
A negative integrity item refers to the condition when the item value mask
is set, but the value spec is cleared:
... integrity value mask l4_ok value spec 0 ...
The ethdev library defines integrity bits `l3_ok` and `l4_ok` as accumulators
for all hardware L3 and L4 integrity verifications respectively.
Hardware `l3_ok` and `l4_ok` integrity bits refer to the L3 and L4
network headers only.
Integrity bits `l3_ok` and `l4_ok` are therefore not compatible between
the ethdev library and the hardware.
PMD translations for ethdev `l3_ok` are:
IPv4: `l3_ok` and `l3_csum_ok`
IPv6: `l3_ok`
ethdev `l4_ok` is translated into the PMD `l4_ok` and `l4_csum_ok` bits.
A positive IPv4 `l3_ok` flow item configuration is translated into
a single matcher that ANDs the corresponding hardware bits.
A negative IPv4 `l3_ok` is translated into 2 hardware conditions where
each condition probes a single integrity bit:
ethdev::l3_ok is 0 => MLX5::l3_ok is 0 OR MLX5::l3_csum_ok is 0
MLX5 hardware cannot express an OR condition within a flow rule item, so
a negative IPv4 `l3_ok` must be translated into 2 flow rules.
Similarly, a negative ethdev `l4_ok` condition is also translated into 2
hardware rules.
The current PMD roadmap does not allow implicit flow rule splitting.
Bugzilla ID: 948
Cc: stable@dpdk.org
Suggested-by: Raja Zidane <rzidane@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
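For reference, a hedged sketch of the negative integrity match expressed
with the rte_flow C API; the designated initializers assume the bit-field
layout of struct rte_flow_item_integrity:

#include <rte_flow.h>

/* Sketch: mask selects l3_ok, spec leaves it at 0, i.e. "match packets
 * whose L3 checks failed". Per the limitation above, mlx5 cannot offload
 * this as a single rule. */
static const struct rte_flow_item_integrity integrity_spec = {
	.l3_ok = 0,
};
static const struct rte_flow_item_integrity integrity_mask = {
	.l3_ok = 1,
};
static const struct rte_flow_item negative_l3_pattern[] = {
	{
		.type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
		.spec = &integrity_spec,
		.mask = &integrity_mask,
	},
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};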
Add mlx5 internal test for map and unmap external RxQs.
This patch adds a runtime command to the testpmd app to test the mapping
API.
testpmd> mlx5 port (port_id) ext_rxq map (sw_queue_id) (hw_queue_id)
testpmd> mlx5 port (port_id) ext_rxq unmap (sw_queue_id)
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Matan Azrad <matan@nvidia.com>
Add an mlx5 internal option in testpmd similar to the run-time function
"port attach"; it adds another parameter named "socket" for attaching a
port and adds 2 devargs beforehand.
The arguments are "cmd_fd" and "pd_handle", used to import a device
created outside of the PMD. The testpmd application imports them using
IPC and updates the devargs list before attaching.
These arguments were added in
the commit 9d936f4f1a ("common/mlx5: support remote PD and CTX")
The syntax is:
testpmd> mlx5 port attach (identifier) socket=(path)
Where "path" is the IPC socket path agreed on the remote process.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Matan Azrad <matan@nvidia.com>
Enable double VLAN by default after firmware v8.3,
and disabling double VLAN is not allowed in subsequent
operations.
Fixes: 38e9762be1 ("net/i40e: add outer VLAN processing")
Signed-off-by: Kevin Liu <kevinx.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>