Add event vector support in pipeline tests. By default this mode
is disabled; it can be enabled with the option --enable_vector.
example:
dpdk-test-eventdev -l 7-23 -s 0xff00 -- --prod_type_ethdev
--nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=a
--wlcores=20-23 --enable_vector
Additional options to configure the vector size and vector timeout are
also implemented and can be set with --vector_size and --vector_tmo_ns.
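For instance, the command above can be extended as follows (the size
and timeout values here are only illustrative):
dpdk-test-eventdev -l 7-23 -s 0xff00 -- --prod_type_ethdev
--nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=a
--wlcores=20-23 --enable_vector --vector_size 512 --vector_tmo_ns 100000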
This patch also adds a new option to set the number of Rx queues
configured per event eth rx adapter.
example:
dpdk-test-eventdev -l 7-23 -s 0xff00 -- --prod_type_ethdev
--nb_pkts=0 --verbose 2 --test=pipeline_atq --stlist=a
--wlcores=20-23 --nb_eth_queues 4
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add event vector support for the event eth Rx adapter; the
implementation creates vector flows based on the port and queue
identifiers of the received mbufs.
For simplicity, when a custom flow id is not set, the flow id used for
SW Rx event vectorization is built from 12 bits of the queue identifier
and 8 bits of the port identifier.
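For reference, a minimal sketch of that default composition (the exact
bit placement inside the adapter is an implementation detail and is
assumed here):
#include <stdint.h>

/* Illustrative only: a 20-bit flow id built from 12 bits of the Rx
 * queue identifier and 8 bits of the ethdev port identifier. */
static inline uint32_t
sw_vec_flow_id(uint16_t eth_port_id, uint16_t rx_queue_id)
{
        return ((uint32_t)(rx_queue_id & 0xfff) << 8) | (eth_port_id & 0xff);
}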
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Introduce event vector transmit capability for event eth
tx adapter.
The capability indicates that the Tx adapter is capable of
transmitting event vectors.
When rte_event_vector::union_valid is set, the Tx adapter should
transmit all the packets to the rte_event_vector::port using the
rte_event_vector::queue.
If rte_event_vector::union_valid is not set then the Tx adapter
should peek into each mbuf to get the destination port and queue
pair.
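On the adapter/driver side, the dispatch rule above could look roughly
like the sketch below (a sketch only: retry handling for
rte_eth_tx_burst() is omitted and the vector field names follow this
description):
#include <rte_ethdev.h>
#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

/* Sketch: transmit one event vector of mbufs per the rules above. */
static void
txa_transmit_vector(struct rte_event_vector *vec)
{
        uint16_t i;

        if (vec->union_valid) {
                /* Single destination (port, queue) for the whole vector. */
                (void)rte_eth_tx_burst(vec->port, vec->queue,
                                       vec->mbufs, vec->nb_elem);
                return;
        }
        /* Otherwise peek into each mbuf for its destination pair. */
        for (i = 0; i < vec->nb_elem; i++) {
                struct rte_mbuf *m = vec->mbufs[i];

                (void)rte_eth_tx_burst(m->port,
                                       rte_event_eth_tx_adapter_txq_get(m),
                                       &m, 1);
        }
}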
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Introduce event ethernet Rx adapter event vector capability.
If an event eth Rx adapter has the capability of
RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR then a given Rx queue
can be configured to enable event vectorization by passing the
flag RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR to
rte_event_eth_rx_adapter_queue_conf::rx_queue_flags when configuring
the Rx adapter through rte_event_eth_rx_adapter_queue_add().
The maximum vector size, the vector timeout and the mempool used for
allocating vector events are configured through
rte_event_eth_rx_adapter_queue_add(). The element size of the vector
pool should be equal to
sizeof(struct rte_event_vector) + (vector_sz * sizeof(uintptr_t))
The application can use rte_event_vector_pool_create() to create the
vector mempool used for
rte_event_eth_rx_adapter_queue_conf::vector_mp.
The Rx adapter is responsible for vectorizing the mbufs based on the
flow and the vector limits configured by the application, and for
adding the vector event of mbufs to the event queue set via
rte_event_eth_rx_adapter_queue_conf::ev::queue_id.
It should also set rte_event_vector::union_valid and fill
rte_event_vector::port and rte_event_vector::queue.
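Putting the above together, the application-side setup could look like
the sketch below. The vector parameter field names (vector_sz,
vector_timeout_ns) and the rte_event_vector_pool_create() signature are
inferred from this description and may differ slightly in the released
headers; the numeric values are illustrative.
#include <errno.h>
#include <string.h>
#include <rte_event_eth_rx_adapter.h>
#include <rte_eventdev.h>
#include <rte_lcore.h>

static int
setup_vector_rx_queue(uint8_t evdev_id, uint8_t rxa_id, uint16_t eth_port)
{
        struct rte_event_eth_rx_adapter_queue_conf qconf;
        struct rte_mempool *vec_pool;
        uint16_t vector_sz = 64;        /* illustrative */
        uint32_t caps;

        if (rte_event_eth_rx_adapter_caps_get(evdev_id, eth_port, &caps) != 0)
                return -EINVAL;
        if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR))
                return -ENOTSUP;

        /* Pool element size is sizeof(struct rte_event_vector) +
         * vector_sz * sizeof(uintptr_t); the helper takes care of it. */
        vec_pool = rte_event_vector_pool_create("rxa_vec_pool", 8192, 0,
                                                vector_sz, rte_socket_id());
        if (vec_pool == NULL)
                return -ENOMEM;

        memset(&qconf, 0, sizeof(qconf));
        qconf.rx_queue_flags = RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR;
        qconf.ev.queue_id = 0;                  /* destination event queue */
        qconf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
        qconf.vector_mp = vec_pool;
        qconf.vector_sz = vector_sz;            /* assumed field name */
        qconf.vector_timeout_ns = 100 * 1000;   /* assumed field name */

        /* -1 applies the configuration to all Rx queues of the port. */
        return rte_event_eth_rx_adapter_queue_add(rxa_id, eth_port, -1, &qconf);
}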
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Introduce the rte_event_vector data structure, which is capable of
holding multiple uintptr_t values of the same flow, thereby allowing
applications to vectorize their pipeline and reducing the complexity of
pipelining events across multiple stages.
This approach also reduces the scheduling overhead on an event device.
Add an event vector mempool create handler to create mempools based on
the best mempool ops available on a given platform.
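As a usage illustration only (the event-type check and the vec union
member below are assumptions; the other vector fields follow the
structure described above), a worker stage could process a vector of
mbufs like this:
#include <rte_common.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

static void
process_mbuf(struct rte_mbuf *m)
{
        RTE_SET_USED(m);        /* application per-packet work goes here */
}

/* Sketch: handle either a single-mbuf event or an event vector. */
static void
worker_handle_event(struct rte_event *ev)
{
        if (ev->event_type & RTE_EVENT_TYPE_VECTOR) {
                struct rte_event_vector *vec = ev->vec;
                uint16_t i;

                for (i = 0; i < vec->nb_elem; i++)
                        process_mbuf(vec->mbufs[i]);
        } else {
                process_mbuf(ev->mbuf);
        }
}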
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
A timer adapter in periodic mode can be used to arm periodic timers.
This patch adds flags used to advertise the capability and to
configure a timer adapter in periodic mode. The capability flag should
be set for adapters which support periodic mode.
Below is a programming sequence showing the usage:
/* check for periodic mode support by reading capability. */
rte_event_timer_adapter_caps_get(...);
/* create adapter in periodic mode by setting periodic flag
(RTE_EVENT_TIMER_ADAPTER_F_PERIODIC) and resolution. */
rte_event_timer_adapter_create_ext(...);
/* arm periodic timer of configured resolution */
rte_event_timer_arm_burst(...);
/* timer event will be periodically generated at configured
resolution till cancel is called. */
while (running) { rte_event_dequeue_burst(...); }
/* cancel periodic timer which stops generating events */
rte_event_timer_cancel_burst(...);
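For completeness, a minimal sketch of the create step from the sequence
above (values are illustrative; rte_event_timer_adapter_create() is
shown instead of create_ext() for brevity, and the interaction of
max_tmo_ns with periodic mode is not covered here):
#include <rte_event_timer_adapter.h>
#include <rte_lcore.h>

static struct rte_event_timer_adapter *
create_periodic_adapter(uint8_t evdev_id)
{
        const struct rte_event_timer_adapter_conf conf = {
                .event_dev_id = evdev_id,
                .timer_adapter_id = 0,
                .socket_id = rte_socket_id(),
                .clk_src = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
                .timer_tick_ns = 1000 * 1000,   /* 1 ms resolution */
                .nb_timers = 1024,
                /* Periodic mode plus resolution adjustment by the driver. */
                .flags = RTE_EVENT_TIMER_ADAPTER_F_PERIODIC |
                         RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES,
        };

        return rte_event_timer_adapter_create(&conf);
}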
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Remove the event/dlb driver from the DPDK code base.
Update the release notes' removal section to reflect the same.
Also update doc/guides/rel_notes/release_20_11.rst to fix the
missing link issue due to the removal of doc/guides/eventdevs/dlb.rst.
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Currently, unique data is supported only by compiling with the
config.h variable FIXED_VALUES set to 0. Since this is a compile-time
setting, the user may use only a single mode per compilation.
Starting with this commit, the user can enable this feature at runtime
by using the new option:
--unique-data
Example of unique data usage:
Insert many rules with different encap data for flows that
have an encap action in them.
Signed-off-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
The port_id action was not supported until now.
With this patch, the port_id action supports forwarding from an input
PF port to an output port which is one of that PF's respective VFs.
Signed-off-by: Smadar Fuks <smadarf@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Fix the documentation about the MODIFY_FIELD flow action.
1. Include the rte_flow_field_id enumeration reference to point
to the full list of all supported Field IDs available.
2. Correct the formatting of the MODIFY_FIELD action and the
destination/source field definition tables.
Fixes: 73b68f4c54a0 ("ethdev: introduce generic modify flow action")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for rte_flow_action_nvgre_encap as a sample action.
Example of the testpmd commands:
1. set nvgre ip-version ... tni ... ip-src ... ip-dst ...
set raw_encap 1 eth src... / ipv4... /...
set sample_actions 2 nvgre / port_id id 0 / end
flow create 0 ... pattern eth / end actions
sample ratio 1 index 2 / raw_encap index 1 / port_id id 0...
The flow will result in all the matched egress packets being
encapsulated and sent to the wire, and also mirrored using the NVGRE
encapsulation data and sent to the wire.
Signed-off-by: Salem Sol <salems@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for rte_flow_action_vxlan_encap as a sample action.
Example of the testpmd commands:
1. set vxlan ip-version ... vni ... udp-src ...
set raw_encap 1 eth src.../ ipv4.../...
set sample_actions 2 vxlan_encap / port_id id 0 / end
flow create 0 ... pattern eth / end actions
sample ratio 1 index 2 / raw_encap index 1 / port_id id 0...
The flow will result in all the matched egress packets being
encapsulated and sent to the wire, and also mirrored using the VXLAN
encapsulation data and sent to the wire.
Signed-off-by: Salem Sol <salems@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for NVGRE encap as a sample action
and validate it.
Signed-off-by: Salem Sol <salems@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add support for VXLAN encap as a sample action
and validate it.
Signed-off-by: Salem Sol <salems@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Kernel driver 2.13.10 is removed, so update the recommended matching
list for i40e.
Cc: stable@dpdk.org
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Disable loading of external DDP package as it is not
supported on Windows.
Signed-off-by: Pallavi Kadam <pallavi.kadam@intel.com>
Reviewed-by: Ranjit Menon <ranjit.menon@intel.com>
Acked-by: Jie Zhou <jizh@microsoft.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
- Add Intel ice PMD support on Windows.
- Remove #include sys/ioctl header file as it is not needed.
- Replace x86intrin.h with rte_vect.h to avoid __m_prefetchw conflicting
types.
- Replace POSIX usleep() API with rte API.
- Add a new macro for the access() API as the original function
has been deprecated on Windows.
- Add extra cflags '-fno-asynchronous-unwind-tables'
to avoid MinGW build error:
Error: invalid register for .seh_savexmm
- Add documentation to support ice PMD on Windows.
Update the release notes and features list for the same.
Signed-off-by: Pallavi Kadam <pallavi.kadam@intel.com>
Reviewed-by: Ranjit Menon <ranjit.menon@intel.com>
Acked-by: Jie Zhou <jizh@microsoft.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
PMDs use RTE_LIBRTE_<PMD_NAME>_DEBUG_RX|TX as build options to wrap
data path debug code. As .config has been removed since the switch to
the meson build, it is not easy for new DPDK users to discover those
debug options.
This patch introduces the build options below for data path debug, so
PMDs can choose to reuse them and avoid maintaining their own.
- RTE_ETHDEV_DEBUG_RX
- RTE_ETHDEV_DEBUG_TX
All the build options are documented in the programming guide section
"3.1 Driver Option", so users can easily find them.
The original undocumented RTE_LIBRTE_ETHDEV_DEBUG now aliases to
both RTE_ETHDEV_DEBUG_RX and RTE_ETHDEV_DEBUG_TX for backward
compatibility.
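A PMD reusing these options might wrap its Rx data path logging like
the sketch below (the macro, logtype and names are placeholders, not
taken from any particular driver):
#include <rte_log.h>

extern int mypmd_logtype_rx;    /* placeholder logtype registered elsewhere */

#ifdef RTE_ETHDEV_DEBUG_RX
#define MYPMD_RX_LOG(level, fmt, ...) \
        rte_log(RTE_LOG_ ## level, mypmd_logtype_rx, \
                "%s(): " fmt "\n", __func__, ##__VA_ARGS__)
#else
#define MYPMD_RX_LOG(level, fmt, ...) do { } while (0)
#endif

/* e.g. MYPMD_RX_LOG(DEBUG, "port %u queue %u", port_id, queue_id); */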
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Update documentation for sample action usage in testpmd and
show the command line example.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Marvell CN10k mempool supports batch enqueue/dequeue which can
dequeue up to 512 pointers and enqueue up to 15 pointers using
a single instruction.
These batch operations require DMA memory to enqueue/dequeue
pointers. This patch adds the initialization of this DMA memory.
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
Add Marvell CN10k mempool ops and implement CN10k mempool alloc.
CN10k has a 64-byte L1D cache line size. Hence, the CN10k mempool
alloc does not make the element size an odd multiple of the L1D cache
line size, as the NPA requires element sizes to be multiples of
128 bytes.
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
Add Marvell CN9k mempool enqueue/dequeue. Marvell CN9k
supports burst dequeue, which allows dequeuing up to 32
pointers using pipelined casp instructions.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
Add the implementation for CNXk mempool device
probe and remove.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
Add the meson based build infrastructure for Marvell
CNXK mempool driver along with stub implementations
for mempool device probe.
Also add Marvell CNXK mempool base documentation.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
Platform specific guide for Marvell OCTEON CN9K/CN10K SoC is added.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
The current machine='default' build name is not descriptive. The actual
default build is machine='native'. Add an alternative string which does
the same build and better describes what we're building:
machine='generic'. Leave machine='default' for backwards compatibility.
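For example (the build directory name is illustrative):
meson -Dmachine=generic build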
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
To allow support for additional build checks and tests only really
relevant for developers, we add support for a developer mode option to
DPDK. The default, "auto", value for this enables developer mode if a
".git" folder is found at the root of the source tree - as was the case
with the previous "make" build system. There is also support for
explicitly enabling or disabling this option using "meson configure" if
so desired.
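For example, assuming the option is exposed as "developer_mode", it
can be forced on with:
meson configure build -Ddeveloper_mode=enabled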
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
A UMR (User-Mode Memory Registration) WQE can present data buffers
scattered within multiple mbufs with a single indirect mkey. Taking
advantage of the UMR WQE, a scattered mbuf can be presented to an
indirect mkey in one operation. The RegEx engine, which only accepts
one mkey, can now process the whole scattered mbuf in one operation.
The maximum number of scattered mbufs supported in one UMR WQE is
defined as 64. The mbufs from multiple operations can be combined
into one UMR WQE as well if there is enough space in the KLM array,
since the operations can address their own mbuf's content by the
mkey's address and length. However, one operation's scattered mbufs
can't be placed in two different UMR WQEs' KLM arrays; if a UMR
WQE's KLM array does not have enough free space for one operation, an
extra UMR WQE will be engaged.
In case the UMR WQE's indirect mkey will be overwritten as the SQ's
WQEs wrap around, the mkey index used by the UMR WQE should be the
index of the last RegEx WQE in the operations. As one operation
consumes one WQE set, building the RegEx WQEs in reverse helps address
the mkey more efficiently. Once the operations in one burst consume
multiple mkeys, when the mkey KLM array is full, the reverse WQE set
index will always be the last of the new mkey's for the new UMR WQE.
In GGA mode, the SQ WQE's memory layout becomes UMR/NOP and RegEx
WQEs interleaved. The UMR and RegEx WQE pair can be called a WQE set.
The SQ's pi and ci will also be increased per WQE set, not per WQE.
For operations that don't have a scattered mbuf, the mbuf's mkey is
used directly and the WQE set combination is NOP + RegEx.
For operations that have a scattered mbuf but share the UMR WQE with
others, the WQE set combination is NOP + RegEx.
For operations that complete the UMR WQE, the WQE set combination is
UMR + RegEx.
Signed-off-by: John Hurley <jhurley@nvidia.com>
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
The log levels are configured by using the name of the logs.
Some drivers are aligned to follow a common log name standard:
pmd.class.driver[.sub]
Some "common" drivers skip the "class" part:
pmd.driver.sub
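With the standard names, the logs of a single driver can then be
selected at runtime with something like the following (the driver name
is only an example):
dpdk-testpmd --log-level=pmd.net.ice:debug -- -i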
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
The name of the constant PCI_ANY_ID was missing RTE_ prefix.
It is renamed, and the old name becomes a deprecated alias.
While renaming, the duplicate definitions in rte_bus_pci.h
are removed to keep only those in rte_pci.h.
Note: rte_pci.h is included in rte_bus_pci.h
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
If the rtd theme is available, passing it by name is enough to select
it. Sphinx itself recognises the "sphinx_rtd_theme" name as a special
case and tries to find its path automatically.
On the other hand, passing a html_theme_path makes sphinx parse all
themes available in this path, which in some environments (like GHA) is
/usr/share, and makes sphinx error on the first zipfile it finds (in
GHA, some Azure CLI thingy) that has no sphinx theme in it.
Fixes: 46562be65094 ("doc: import sphinx rtd theme when available")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Aaron Conole <aconole@redhat.com>
Modification of the 802.1Q Tag Identifier, VXLAN Network
Identifier or GENEVE Network Identifier is not supported.
Reject attempt to modify these fields via the MODIFY_FIELD
action and document this mlx5 driver limitation.
Fixes: 641dbe4fb053 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To probe representors from different kernel bonding PFs, two separate
devargs had to be specified, like this:
-a 03:00.0,representor=pf0vf[0-3] -a 03:00.0,representor=pf1vf[0-3]
This patch supports a range or list for the PF section in devargs, so
the shorter alternative to the above is:
-a 03:00.0,representor=pf[0-1]vf[0-3]
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To probe a representor on the 2nd PF of a kernel bonding device, the
PF1 BDF had to be specified in the devargs:
<PF1_BDF>,representor=0
When closing the bonding device, all representors have to be closed
together, which implies that all representors have to use the primary
PF of the bonding device. So after probing a representor port on the
2nd PF, when locating the newly probed device using the device
argument, the filter used the 2nd PF as the PCI address and failed to
locate the new device.
A conflict arose from the current representor devargs:
- PCI BDF is used to specify the representor owner PF.
- PCI BDF is used to locate the probed representor device.
- The PMD uses the primary PCI BDF as the PCI device.
To resolve such conflicts, a new representor syntax is introduced
here:
<primary BDF>,representor=pfXvfY
All representors must use the primary PF as the owner PCI device; the
PMD internally locates the owner PCI address by checking the "pfX"
part of the representor. To EAL, all representors are registered to
the primary PCI device and the 2nd PF is hidden from EAL, so all
searches are consistent.
As with VF representors, the HPF (host PF on BlueField) uses the same
syntax to probe, for example: representor=pf1vf[0-3,-1]
This patch also adds the PF index to the kernel bonding representor
port name:
<BDF>_<ib_name>_representor_pf<X>vf<Y>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch adds support for SF representors. Similar to VF
representors, the switch port name of an SF representor in the
phys_port_name sysfs key is "pf<x>sf<y>".
The device representor argument is "representors=sf[list]"; a list
member can be a mix of instances and ranges. Example:
representors=sf[0,2,4,8-12,-1]
To probe VF representors and SF representors together, they need to be
separated into 2 device arguments:
-a <BDF>,representor=vf[list] -a <BDF>,representor=sf[list]
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add device arguments to support runtime options, and use these
configurations to control the link setup flow, adapting to different
NIC constructions. Use the firmware version to control the impact of
firmware updates, and fix some remaining bugs.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Since rte_flow is the only API for filtering operations,
the legacy driver interface filter_ctrl was too complicated
for the simple task of getting the struct rte_flow_ops.
The filter type RTE_ETH_FILTER_GENERIC and
the filter operation RTE_ETH_FILTER_GET are removed.
The new driver callback flow_ops_get replaces filter_ctrl.
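On the driver side, the new callback reduces to a one-liner; a
hypothetical PMD sketch (names are placeholders, and the PMD's usual
driver-internal headers are assumed):
#include <rte_flow_driver.h>

static const struct rte_flow_ops mypmd_flow_ops; /* validate/create/... */

static int
mypmd_flow_ops_get(struct rte_eth_dev *dev __rte_unused,
                   const struct rte_flow_ops **ops)
{
        *ops = &mypmd_flow_ops;
        return 0;
}

/* Registered via the eth_dev_ops structure:
 *      .flow_ops_get = mypmd_flow_ops_get,
 */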
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Remove OS restriction and update release notes.
For the record, tested on the following setup with Windows Server 2019 in
QEMU (-device vmxnet3) :
[ping ] [ ] [ ping]
[OS ] [ dpdk-skeleton ] [ OS]
[virtio---]--sockets--[---vmxnet3 vmxnet3---]--sockets--[---virtio]
[Debian VM] [ Windows VM ] [Debian VM]
Debian VMs successfully ping'd each other with Windows forwarding.
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Updates the documentation for supported sample actions in the NIC Rx
and E-Switch steering flow.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add support for rte_flow_item_raw to parse custom L2 and L3 protocols.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for querying the Rx descriptor status in the hns3 driver:
check the specified descriptor and provide the status information of
the corresponding descriptor.
Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Add support for querying the Tx descriptor status in the hns3 driver:
check the specified descriptor and provide the status information of
the corresponding descriptor.
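Both the Rx and Tx checks above are reachable through the generic
ethdev descriptor status API; a brief usage sketch (port, queue and
offset values are illustrative):
#include <stdio.h>
#include <rte_ethdev.h>

static void
show_descriptor_status(uint16_t port_id, uint16_t queue_id)
{
        /* Status of the descriptor 32 entries ahead of the next one the
         * driver will process/use on each ring. */
        int rx = rte_eth_rx_descriptor_status(port_id, queue_id, 32);
        int tx = rte_eth_tx_descriptor_status(port_id, queue_id, 32);

        if (rx == RTE_ETH_RX_DESC_DONE)
                printf("Rx: descriptor done, packet awaiting processing\n");
        if (tx == RTE_ETH_TX_DESC_FULL)
                printf("Tx: descriptor still in use by the device\n");
}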
Signed-off-by: Hongbo Zheng <zhenghongbo3@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Kunpeng930 supports outer UDP checksum; this patch adds support for it.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Currently, the driver supports multiple IO burst functions and
auto-selection of the most appropriate function based on the offload
configuration.
Most applications such as l2fwd/l3fwd don't provide the means to
change the offload configuration, so they will use the auto-selected
IO burst function.
This patch supports runtime configuration to select the IO burst
function, adding two configs: rx_func_hint and tx_func_hint, each of
which can be set to vec/sve/simple/common.
The driver uses the following rules to select the IO burst function:
a. if the hint equals vec and the vec Rx/Tx usage conditions are met,
use the neon function.
b. if the hint equals sve and the sve Rx/Tx usage conditions are met,
use the sve function.
c. if the hint equals simple and the simple Rx/Tx usage conditions are
met, use the simple function.
d. if the hint equals common, use the common function.
e. if the hint is not set:
e.1. if the vec Rx/Tx usage conditions are met, use the neon function.
e.2. if the simple Rx/Tx usage conditions are met, use the simple
function.
e.3. else use the common function.
Note: the sve Rx/Tx usage conditions are based on the vec Rx/Tx usage
conditions plus the runtime environment (which must support SVE).
In previous versions, the driver preferred the sve function when the
sve Rx/Tx usage conditions were met, but in that case the driver could
get better performance by using the neon function.
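Example of selecting the sve hints at startup via devargs (the PCI
address is illustrative):
-a 0000:7d:00.0,rx_func_hint=sve,tx_func_hint=sve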
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>