Add Rx/Tx support for the GQI_QPL and GQI_RDA queue formats.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Support device init and add the following dev_ops skeleton:
- dev_configure
- dev_start
- dev_stop
- dev_close
Note that the build system (including doc) is also added in this patch.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Add the corresponding logic to support the offload of the
IPv4 NVGRE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the offload support of decap action for IPv4 GENEVE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload of the
IPv4 GENEVE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the offload support of encap action for IPv4 GENEVE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the offload support of decap action for IPv4 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the related data structures and functions, preparing for
the decap action of IPv4 UDP tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the offload support of encap action for IPv4 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the IPv4 VXLAN item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload of the
set IPv6 DSCP action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload of the
set IPv4 DSCP action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the set TTL action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload of the
set TP dest port action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the set TP source port action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload of the
set dest IPv6 address action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the set source IPv6 address action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload of the
set dest IPv4 address action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the set source IPv4 address action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the push_vlan action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the pop_vlan action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload of the
set dest MAC action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the set source MAC action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload
of the SCTP item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload
of the UDP item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the TCP item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the IPv6 item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the IPv4 item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structures and logic to support
the offload of the VLAN item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the offload support of very basic actions: count, drop
and output.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the offload support of very basic items: ethernet and
port id.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
The jumbo offload was removed in patch [1], but testpmd still
contained jumbo-offload-related code. This patch removes it and also
updates the rst file.
[1] ethdev: remove jumbo offload flag
Fixes: b563c14212 ("ethdev: remove jumbo offload flag")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The code is very similar, but the simple case can skip a few branches
in the hot path. This improves PPS when 10KB mbufs are used.
S/G is enabled on the Rx side by offload DEV_RX_OFFLOAD_SCATTER.
S/G is enabled on the Tx side by offload DEV_TX_OFFLOAD_MULTI_SEGS.
S/G is automatically enabled on the Rx side if the provided mbufs are
too small to hold the maximum possible frame.
To enable S/G in testpmd, add these args:
--rx-offloads=0x2000 --tx-offloads=0x8000
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Signed-off-by: R Mohamed Shah <mohamedshah.r@amd.com>
When 'ionic_cmb' is set to '1', queue memory will be allocated from
the device's onboard memory (Controller Memory Buffer). In some
configurations, this will dramatically reduce packet latency and
increase PPS.
Add the WC_ACTIVATE flag to the PCI driver flags.
Write combining must be enabled to achieve the maximum PPS.
When the queue is in the CMB, descriptors cannot be prefetched.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Signed-off-by: Neel Patel <neel.patel@amd.com>
Linearize RX mbuf chains in the expanded info array.
Clean one and fill one per CQE (completions are not coalesced).
Touch the mbufs as little as possible in the fill stage.
When touching the mbuf in the clean stage, use the rearm_data unions.
Ring the doorbell once at the end of the bulk clean/fill.
Signed-off-by: Neel Patel <neel.patel@amd.com>
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
As the stop action is forbidden in secondary processes,
the reset action should also not be allowed.
Fixes: a550baf24a ("app/testpmd: support multi-process")
Cc: stable@dpdk.org
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
Some PMDs (e.g. hns3) can detect hardware or firmware errors. One
error recovery mode is to report the RTE_ETH_EVENT_INTR_RESET event
and wait for the application to invoke rte_eth_dev_reset() to recover
the port. However, this mode has the following weaknesses:
1) Due to different hardware and software designs, some NIC port
recovery processes require multiple handshakes with the firmware and
the PF (when the port is a VF). It takes a long time to complete the
entire operation for one port, and if multiple ports (for example,
multiple VFs of a PF) are reset at the same time, other VFs may fail
to be reset. (Because the reset processing is serial, the previous
VFs must be processed before the subsequent VFs.)
2) The impact on the application layer is significant: it has to stop
its working queues, stop calling the Rx and Tx functions, call
rte_eth_dev_reset(), and then set everything up again.
This patch introduces a proactive error handling mode in which the
PMD tries to recover from errors itself. In this process, the PMD
sets the data path pointers to dummy functions (which prevents
crashes) and also makes sure the control path operations fail with
return code -EBUSY.
Because the PMD recovers automatically, the application can only
observe that the data flow is disconnected for a while and that the
control API returns an error during this period.
In order to sense the error happening/recovering, three events are
introduced:
1) RTE_ETH_EVENT_ERR_RECOVERING: notifies the application that an
error was detected and that recovery is being started. Upon receiving
the event, the application should not invoke any control path APIs
until it receives the RTE_ETH_EVENT_RECOVERY_SUCCESS or
RTE_ETH_EVENT_RECOVERY_FAILED event.
2) RTE_ETH_EVENT_RECOVERY_SUCCESS: notifies the application that the
PMD recovered successfully from the error; the PMD has already
re-configured the port, and the effect is the same as that of a
restart operation.
3) RTE_ETH_EVENT_RECOVERY_FAILED: notifies the application that the
PMD failed to recover from the error and that the port is no longer
usable. The application should close the port.
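As a minimal sketch (the callback name and handler body are
illustrative only; registration uses the existing ethdev event API),
an application might react to these events as follows:

    static int
    recovery_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                      void *cb_arg, void *ret_param)
    {
        RTE_SET_USED(port_id);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);

        switch (event) {
        case RTE_ETH_EVENT_ERR_RECOVERING:
            /* Pause control path usage until recovery finishes. */
            break;
        case RTE_ETH_EVENT_RECOVERY_SUCCESS:
            /* Port was re-configured by the PMD; resume normal work. */
            break;
        case RTE_ETH_EVENT_RECOVERY_FAILED:
            /* Port is unusable; schedule rte_eth_dev_close(). */
            break;
        default:
            break;
        }
        return 0;
    }

    /* Register once per event type during initialization. */
    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_ERR_RECOVERING,
                                  recovery_event_cb, NULL);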
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The 'has_vlan' attribute is only supported by sfc, mlx5 and cnxk;
other drivers don't support it. Most of them (like i40e) just
ignore it silently. Some drivers (like mlx4) never had full
support of the eth item even before the introduction of 'has_vlan'
(mlx4 allows matching on the destination MAC only).
The same applies to the 'has_more_vlan' flag of the vlan item.
'has_vlan' is part of 'rte_flow_item_eth', so the 'eth' field is
changed to 'partial support' in the documentation for all such
drivers. 'has_more_vlan' is part of 'rte_flow_item_vlan', so 'vlan'
is changed to 'partial support' as well.
This doesn't solve the issue, but at least marks the problematic
drivers.
Some details are available in:
https://bugs.dpdk.org/show_bug.cgi?id=958
Fixes: 09315fc838 ("ethdev: add VLAN attributes to ethernet and VLAN items")
Cc: stable@dpdk.org
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Increase the xstats ID width from 32 to 64 bits. This also
fixes the xstats ID datatype discrepancy between reset and
the rest of the xstats family.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Reviewed-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Fixed release notes for changes made in the eventdev library.
Also updated the eventdev guide, which had the type of the
rte_event_vector struct's u64s union field wrong.
Fixes: 5fa63911e4 ("eventdev: replace padding type in event vector")
Fixes: 0fbb55efa5 ("eventdev: add element offset to event vector")
Fixes: d986276f9b ("eventdev: add prefix to public symbol")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
When the dpdk-ioat app was renamed to dpdk-dma, this example command
was missed; this patch corrects that issue.
Fixes: bb4141dbe5 ("examples/dma: rename ioat application example")
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Enabling trace points at runtime was not working if no trace point had
been enabled first at rte_eal_init() time. The reason was that
trace.args reflected the arguments passed to --trace= EAL option.
To fix this:
- the trace subsystem initialisation is updated: trace directory
creation is deferred to when traces are dumped (to avoid creating
directories that may not be used),
- per-lcore memory allocation still relies on rte_trace_is_enabled(), but
this helper now tracks whether any trace point is enabled. The
documentation is updated accordingly,
- cleanup helpers must always be called in rte_eal_cleanup() since some
trace points might have been enabled and disabled in the lifetime of
the DPDK application.
With this fix, we can update the unit test and check that a trace point
callback is invoked when expected.
Note:
- the 'trace' global variable might be shadowed by the argument
passed to the functions dealing with trace point handles.
'tp' has been used for referring to trace_point objects;
prefer 't' for referring to handles.
Fixes: 84c4fae462 ("trace: implement operation APIs")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Sunil Kumar Kori <skori@marvell.com>
dpdk-pmdinfo.py does not produce any parseable output. The -r/--raw flag
merely prints multiple independent JSON lines, which cannot be fed
directly to any JSON parser. Moreover, the script complexity is rather
high for such a simple task: extracting PMD_INFO_STRING from .rodata ELF
sections. Rewrite it so that it can produce valid JSON.
Remove the PCI database parsing for PCI-ID to Vendor-Device names
conversion. This should be done by external scripts (if really needed).
The script passes flake8, black, isort and pylint checks.
I have tested this with a matrix of python/pyelftools versions:
                           pyelftools
            0.22  0.23  0.24  0.25  0.26  0.27  0.28  0.29
       3.6    ok    ok    ok    ok    ok    ok    ok    ok
       3.7    ok    ok    ok    ok    ok    ok    ok    ok
Python 3.8    ok    ok    ok    ok    ok    ok    ok    ok
       3.9    ok    ok    ok    ok    ok   *ok    ok    ok
       3.10 fail  fail  fail  fail    ok    ok    ok    ok
* Also tested on FreeBSD
All failures with python 3.10 are related to the same issue:
File "elftools/construct/lib/container.py", line 5, in <module>
from collections import MutableMapping
ImportError: cannot import name 'MutableMapping' from 'collections'
Python 3.10 support is only available since pyelftools 0.26. The script
will only work with Python 3.6 and later.
Update the minimal system requirements, docs and release notes.
Signed-off-by: Robin Jarry <rjarry@redhat.com>
Tested-by: Ferruh Yigit <ferruh.yigit@amd.com>
Tested-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The reason for not building the driver is updated
to be consistent with other drivers.
libibverbs was not detected through pkg-config;
the dependency() method needs to be used first.
The support in rdma-core and Linux is not released yet,
so the documentation is updated accordingly.
Fixes: 517ed6e2d5 ("net/mana: add basic driver with build environment")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
When KNI is being used at runtime, output a warning message about its
deprecated status. This is part of the deprecation process for KNI
agreed by the DPDK technical board.[1]
[1] https://mails.dpdk.org/archives/dev/2022-June/243596.html
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
To ensure all users are aware of KNI's deprecated status at build time,
this library is marked as a deprecated library: the library is disabled
by default. It can be re-enabled by setting disabled_libs to the empty
string (or other string not including 'kni').
The NIC driver drivers/net/kni is disabled accordingly, as it
depends on the library.
NOTE: This is part of the deprecation process for KNI agreed by the DPDK
technical board.[1]
[1] https://mails.dpdk.org/archives/dev/2022-June/243596.html
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This patchset bumps the minimum meson version from 0.49.2 to 0.53.2.
Ideally, the minimum version should be 0.53 without a point release, but
some DPDK builds (mingw) are broken with 0.53.0 due to issue [1], fixed
by commit [2] in 0.53.1. Therefore we use the latest point release from
the 0.53 branch, i.e. 0.53.2.
Some new features of interest which can now be used in DPDK with this
new minimum meson version:
* can do header-file checks directly inside find_library calls, rather
than needing a separate check [v0.50].
* can pass multiple cross-files at the same time when cross-compiling
[v0.51].
* "alias_target" function, to allow us to give better/shorter names
for particular build objects [v0.52].
* auto-generation of clang-format [v0.50] and clang-tidy [v0.52] targets
when those tools are present and config dotfiles are present.
Similarly, ctags and cscope are added as targets when those tools are
present [v0.53].
* meson module for filesystem operations, so meson can now check for the
presence of particular files or directories [v0.53].
* "summary" function to provide a configuration summary at the end of
the meson run [v0.53].
Plus many other features. See [3] for full details of each version.
[1] https://github.com/mesonbuild/meson/issues/6442
[2] https://github.com/mesonbuild/meson/pull/6457/commits/8e7a7c36b579
[3] https://mesonbuild.com/Release-notes.html
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Add an option for setting uncore frequency min/max/index through the
uncore API. This will be set for each package and die on the SKU.
On exit, the uncore min and max frequencies will be reverted
to the previous frequencies.
Signed-off-by: Tadhg Kearney <tadhg.kearney@intel.com>
Reviewed-by: David Hunt <david.hunt@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
Add an API to allow uncore frequency adjustment.
Uncore is a term used by Intel to describe functions
of a microprocessor that are closely connected
to the core in order to achieve high performance.
The adjustment is done by manipulating the related uncore frequency
control sysfs entries to set the minimum and maximum uncore frequency
values; it works on Linux for Intel hardware.
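As a usage sketch (function names as added in rte_power_intel_uncore.h;
treat the exact names and frequency-index ordering as assumptions to
verify against the power library headers):

    unsigned int pkg = 0, die = 0;

    if (rte_power_intel_uncore_init(pkg, die) == 0) {
        /* Pin the uncore of pkg 0 / die 0 to its maximum frequency. */
        rte_power_intel_uncore_freq_max(pkg, die);
        /* ... application workload runs ... */
        /* Restore the original min/max uncore frequencies. */
        rte_power_intel_uncore_exit(pkg, die);
    }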
Signed-off-by: Tadhg Kearney <tadhg.kearney@intel.com>
Reviewed-by: David Hunt <david.hunt@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
The dumpcap application supports an interface parameter via the
`-i` option; however, the current documentation uses a `-I` flag.
Fixes: cbb44143be ("app/dumpcap: add new packet capture application")
Cc: stable@dpdk.org
Signed-off-by: Ben Magistro <koncept1@gmail.com>
Sketching algorithms provide high-fidelity approximate measurements and
appear to be a promising alternative to traditional approaches such as
packet sampling.
NitroSketch [1] is a software sketching framework that optimizes
performance, provides accuracy guarantees, and supports a variety of
sketches.
This commit adds a new data structure called sketch to the
membership library. This new data structure is an efficient
way to profile the traffic for heavy hitters. A min-heap
structure is also used to maintain the top-k flow keys.
[1] Zaoxing Liu, Ran Ben-Basat, Gil Einziger, Yaron Kassner, Vladimir
Braverman, Roy Friedman, Vyas Sekar, "NitroSketch: Robust and General
Sketch-based Monitoring in Software Switches", in ACM SIGCOMM 2019.
https://dl.acm.org/doi/pdf/10.1145/3341302.3342076
Signed-off-by: Alan Liu <zaoxingliu@gmail.com>
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Tested-by: Yu Jiang <yux.jiang@intel.com>
Add support for protocol based buffer split in normal Rx
data paths. When the Rx queue is configured with a specific protocol
type, received packets are directly split into protocol header and
payload parts, and the two parts are put into different mempools.
Currently, protocol based buffer split is not supported in vectorized
paths.
A new API, ice_buffer_split_supported_hdr_ptypes_get(), has been
introduced; it returns the header protocols supported by the ice PMD
to the app for splitting.
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Add command line parameter:
--rxhdrs=eth[,ipv4]
This sets the protocol_hdr of segments to scatter packets on receiving
if the split feature is engaged, on queues configured with the
BUFFER_SPLIT flag.
Add interactive mode command:
testpmd>set rxhdrs eth,ipv4,ipv4-udp
(protocol sequence should be valid)
The protocol split feature is off by default. To enable protocol split,
you need to:
1. Start testpmd with multiple mempools, e.g. --mbuf-size=2048,2048
2. Configure the Rx queue with the buffer split Rx offload on.
3. Set the protocol types of buffer split, e.g. set rxhdrs eth,eth-ipv4
(default protocols of testpmd : eth|ipv4|ipv6|ipv4-tcp|ipv6-tcp|
ipv4-udp|ipv6-udp|ipv4-sctp|ipv6-sctp|grenat|inner-eth|
inner-ipv4|inner-ipv6|inner-ipv4-tcp|inner-ipv6-tcp|
inner-ipv4-udp|inner-ipv6-udp|inner-ipv4-sctp|inner-ipv6-sctp)
The above protocols can be configured in testpmd, but the configuration
only takes effect when it is supported by the specific PMD.
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Currently, Rx buffer split supports length based split. With Rx queue
offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and Rx packet segment
configured, PMD will be able to split the received packets into
multiple segments.
However, length based buffer split is not suitable for NICs that do split
based on protocol headers. Given an arbitrarily variable length in Rx
packet segment, it is almost impossible to pass a fixed protocol header to
driver. Besides, the existence of tunneling results in the composition of
a packet is various, which makes the situation even worse.
This patch extends current buffer split to support protocol header based
buffer split. A new proto_hdr field is introduced in the reserved field
of rte_eth_rxseg_split structure to specify protocol header. The proto_hdr
field defines the split position of packet, splitting will always happen
after the protocol header defined in the Rx packet segment. When Rx queue
offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is enabled and corresponding
protocol header is configured, driver will split the ingress packets into
multiple segments.
Examples of proto_hdr field definitions:
To split after ETH-IPV4-UDP, it should be defined as
proto_hdr = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
RTE_PTYPE_L4_UDP
For inner ETH-IPV4-UDP, it should be defined as
proto_hdr = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP
If a protocol header repeats the previously defined one,
the repeated part should be omitted. For example, to split after ETH,
ETH-IPV4 and ETH-IPV4-UDP, the fields should be defined as
proto_hdr0 = RTE_PTYPE_L2_ETHER
proto_hdr1 = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN
proto_hdr2 = RTE_PTYPE_L4_UDP
If protocol header split can be supported by a PMD, the
rte_eth_buffer_split_get_supported_hdr_ptypes function can
be used to obtain a list of these protocol headers.
For example, let's suppose we configured the Rx queue with the
following segments:
seg0 - pool0, proto_hdr0=RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
off0=2B
seg1 - pool1, proto_hdr1=RTE_PTYPE_L4_UDP, off1=128B
seg2 - pool2, proto_hdr2=0, off2=0B
A packet consisting of ETH_IPV4_UDP_PAYLOAD will be split as
follows:
seg0 - ipv4 header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
seg1 - udp header @ 128 in mbuf from pool1
seg2 - payload @ 0 in mbuf from pool2
Now buffer split can be configured in two modes. The user can choose
length based or protocol header based buffer split according to the
NIC's capability. For length based buffer split, the mp, length and
offset fields in the Rx packet segment should be configured, while the
proto_hdr field must be 0. For protocol header based buffer split, the
mp, offset and proto_hdr fields should be configured, while the
length field must be 0.
Note: when protocol header split is enabled, the NIC may receive packets
which do not match all the protocol headers within the Rx segments.
In that case, the NIC has two possible split behaviors according to
the matching result: exact match and longest match.
The split result of the NIC must belong to one of them.
Exact match means the NIC only splits when a packet exactly matches all
the protocol headers in the segments; otherwise, the whole packet is
put into the last valid mempool. Longest match means the NIC splits
until a packet mismatches a protocol header in the segments; the rest
is put into the last valid pool.
Pseudo-code for exact match:
FOR each seg in segs except last one
IF proto_hdr is not matched THEN
BREAK
END IF
END FOR
IF the loop broke early THEN
put whole pkt in last seg
ELSE
put protocol header in each seg
put everything else in last seg
END IF
Pseudo-code for longest match:
FOR each seg in segs except last one
IF proto_hdr is matched THEN
put protocol header in seg
ELSE
BREAK
END IF
END FOR
put everything else in last seg
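Tying this together, the sketch below mirrors the example segments
above (pool pointers, descriptor count and socket are illustrative;
offsets and error handling omitted):

    union rte_eth_rxseg rx_segs[3] = {
        { .split = { .mp = pool0, .proto_hdr = RTE_PTYPE_L2_ETHER |
                                               RTE_PTYPE_L3_IPV4 } },
        { .split = { .mp = pool1, .proto_hdr = RTE_PTYPE_L4_UDP } },
        { .split = { .mp = pool2 } },  /* proto_hdr 0: payload */
    };
    struct rte_eth_rxconf rxconf = dev_info.default_rxconf;

    rxconf.offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
    rxconf.rx_seg = rx_segs;
    rxconf.rx_nseg = RTE_DIM(rx_segs);
    /* mb_pool is NULL: buffers come from the per-segment pools. */
    ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
                                 &rxconf, NULL);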
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Add a new ethdev API to retrieve supported protocol headers
of a PMD, which helps to configure protocol header based buffer split.
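For instance (a minimal sketch; the array size is arbitrary), an
application can query the list before configuring the segments:

    uint32_t ptypes[32];
    int i, n;

    n = rte_eth_buffer_split_get_supported_hdr_ptypes(port_id, ptypes,
                                                      RTE_DIM(ptypes));
    for (i = 0; i < n; i++)
        printf("supported split ptype: 0x%08" PRIx32 "\n", ptypes[i]);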
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
These actions have been deprecated since DPDK 21.11 as
ambiguous and hard to use, but their removal might not
be popular because the net drivers i40e, ixgbe and txgbe
employ these actions in complicated "PF/VF + QUEUE"
tunnel rule support. Maintainers of these drivers
should voice their position on the said problem.
For now, document the status in the deprecation notes.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
In commit [1], it was announced that the DPAA2 cmdif
raw driver would be removed as there was no known active user
at that time. But now one of the DPAA2 users has objected to
this driver removal, so this patch removes the deprecation notice
for the driver.
[1] commit 10f0e51554 ("doc: announce removal of DPAA2 cmdif raw driver")
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
The IPsec SA expiry events were added in the patch below,
but the deprecation notice was not removed. This patch removes it.
Fixes: d1ce79d14b ("ethdev: add IPsec SA expiry event subtypes")
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Added a function to parse the algorithm for TDES CBC and ECB tests in JSON.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Acked-by: Brian Dooley <brian.dooley@intel.com>
Added parameters in rte_bbdev_queue_data to expose information
about any queue related failure and warning
which cannot be conveyed through the existing API.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Extended bbdev operations to support FFT based operations.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Added more options in the API to report the number
of queues exposed and their related priority.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Added device status information, so that the PMD can
expose information related to the underlying accelerator device status.
A minor ordering change in the structure fits the new field into a
padding hole.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Mingshan Zhang <mingshan.zhang@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Updated the rte_bbdev_op_type enum to keep the ABI
compatible when new operation types are inserted, while providing a
padded maximum value for array sizing needs:
RTE_BBDEV_OP_TYPE_COUNT is removed and
RTE_BBDEV_OP_TYPE_SIZE_MAX is exposed instead.
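For example, an array indexed by operation type can now be dimensioned
with the padded maximum (a small illustrative sketch):

    /* Sized with the padded maximum, so inserting new op types
     * later does not change the array's ABI footprint.
     */
    struct rte_mempool *op_pools[RTE_BBDEV_OP_TYPE_SIZE_MAX];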
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Enabled the flag pmd_supports_disable_iova_as_pa in cnxk driver build
files as they work with IOVA as VA. Updated cn9k and cn10k soc build
configurations to disable the IOVA as PA build by default.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Swapped the positions of the mbuf next pointer and the second dynamic
field (dynfield2) if the build is configured to disable IOVA as PA.
This moves the mbuf next pointer to the first cache line.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Some HW has support for choosing memory pools based on the
packet's size.
This is often useful for saving memory: the application
can create different pools to steer specific packet
sizes, enabling more efficient usage of memory.
For example, let's say HW has a capability of three pools,
- pool-1 size is 2K
- pool-2 size is > 2K and < 4K
- pool-3 size is > 4K
Here,
pool-1 can accommodate packets with sizes < 2K
pool-2 can accommodate packets with sizes > 2K and < 4K
pool-3 can accommodate packets with sizes > 4K
With the multiple mempool capability enabled in SW, an application may
create three pools of different sizes and pass them to the PMD,
allowing the PMD to program the HW based on the packet lengths, so that
packets smaller than 2K are received on pool-1, packets with lengths
between 2K and 4K are received on pool-2, and packets greater than 4K
are received on pool-3.
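A rough sketch of how an application could pass such pools to one Rx
queue (pool creation, names and error handling are illustrative):

    struct rte_mempool *pools[3] = { pool_2k, pool_4k, pool_8k };
    struct rte_eth_rxconf rxconf = dev_info.default_rxconf;

    rxconf.rx_mempools = pools;
    rxconf.rx_nmempool = RTE_DIM(pools);
    /* mb_pool is NULL: buffers come from the rx_mempools array. */
    ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
                                 &rxconf, NULL);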
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
This patch extends the hairpin-mode command line option of the test-pmd
application with the ability to configure whether an Rx/Tx hairpin queue
should use locked device memory or RTE memory.
For the purposes of this configuration, the following bits of the 32-bit
hairpin-mode are reserved:
- Bit 8 - If set, then force_memory flag will be set for hairpin RX
queue.
- Bit 9 - If set, then force_memory flag will be set for hairpin TX
queue.
- Bits 12-15 - Memory options for hairpin Rx queue:
- Bit 12 - If set, then use_locked_device_memory will be set.
- Bit 13 - If set, then use_rte_memory will be set.
- Bit 14 - Reserved for future use.
- Bit 15 - Reserved for future use.
- Bits 16-19 - Memory options for hairpin Tx queue:
- Bit 16 - If set, then use_locked_device_memory will be set.
- Bit 17 - If set, then use_rte_memory will be set.
- Bit 18 - Reserved for future use.
- Bit 19 - Reserved for future use.
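For example, combining the bits above: --hairpin-mode=0x21300 sets
force_memory for both hairpin queue types, places Rx hairpin queues in
locked device memory (bit 12) and Tx hairpin queues in RTE memory
(bit 17).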
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
This patch adds a capability to place hairpin Rx queue in locked device
memory. This capability is equivalent to storing hairpin RQ's data
buffers in locked internal device memory.
Hairpin Rx queue creation is extended with requesting that the RQ be
allocated in locked internal device memory. If the allocation fails and
the force_memory hairpin configuration is set, then hairpin queue
creation (and, as a result, device start) fails. If force_memory is
unset, then the PMD will fall back to allocating memory for the hairpin
RQ in unlocked internal device memory.
To allow such allocation, the user must set HAIRPIN_DATA_BUFFER_LOCK
flag in FW using mlxconfig tool.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch adds a capability to place hairpin Tx queue in host memory
managed by DPDK. This capability is equivalent to storing hairpin SQ's
WQ buffer in host memory.
Hairpin Tx queue creation is extended with allocating a memory buffer of
proper size (calculated from required number of packets and WQE BB size
advertised in HCA capabilities).
force_memory flag of hairpin queue configuration is also supported.
If it is set and:
- allocation of memory buffer fails,
- or hairpin SQ creation fails,
then device start will fail. If it is unset, the PMD will fall back to
creating the hairpin SQ with the WQ buffer located in unlocked device
memory.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Before this patch, implementation details and configuration of hairpin
queues were decided internally by the PMD. Applications had no control
over the configuration of Rx and Tx hairpin queues, apart from the
number of descriptors, explicit Tx flow mode and disabling automatic
binding.
This patch addresses that by adding:
- Hairpin queue capabilities reported by PMDs.
- New configuration options for Rx and Tx hairpin queues.
The main goal of this patch is to allow applications to provide
configuration hints regarding the placement of hairpin queues.
These hints specify whether buffers of hairpin queues should be placed
in host memory or in dedicated device memory. Different memory options
may have different performance characteristics and hairpin configuration
should be fine-tuned to the specific application and use case.
This patch introduces new hairpin queue configuration options through
rte_eth_hairpin_conf struct, allowing to tune Rx and Tx hairpin queues
memory configuration. Hairpin configuration is extended with the
following fields:
- use_locked_device_memory - If set, PMD will use specialized on-device
memory to store RX or TX hairpin queue data.
- use_rte_memory - If set, PMD will use DPDK-managed memory to store RX
or TX hairpin queue data.
- force_memory - If set, the PMD will be forced to use the provided
memory settings. If no appropriate resources are available, then device
start will fail. If unset and no resources are available, the PMD will
fall back to using the default type of resource for the given queue.
If application chooses to use PMD default memory configuration, all of
these flags should remain unset.
Hairpin capabilities are also extended to allow verifying support
for a given hairpin memory configuration. Struct rte_eth_hairpin_cap is
extended with two additional fields of type rte_eth_hairpin_queue_cap:
- rx_cap - memory capabilities of hairpin RX queues.
- tx_cap - memory capabilities of hairpin TX queues.
Struct rte_eth_hairpin_queue_cap exposes whether given queue type
supports use_locked_device_memory and use_rte_memory flags.
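A minimal sketch of how an application could use these fields (peer
port/queue variables are illustrative; error handling omitted):

    struct rte_eth_hairpin_cap cap;
    struct rte_eth_hairpin_conf conf = {
        .peer_count = 1,
        .peers[0] = { .port = peer_port, .queue = peer_queue },
    };

    rte_eth_dev_hairpin_capability_get(port_id, &cap);
    if (cap.rx_cap.locked_device_memory) {
        conf.use_locked_device_memory = 1;
        conf.force_memory = 1;  /* fail device start instead of fallback */
    }
    ret = rte_eth_rx_hairpin_queue_setup(port_id, queue_id, nb_desc,
                                         &conf);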
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
libbpf v0.8.0 deprecates the bpf_get_link_xdp_id() and
bpf_set_link_xdp_fd() functions. Use meson to detect if
bpf_xdp_attach() is available and if so, use the recommended
replacement functions bpf_xdp_query_id(), bpf_xdp_attach()
and bpf_xdp_detach().
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
The DPAA driver has a dependency on the kernel to perform various
functionalities, so the kernel and DPDK versions should be compatible
for proper working.
This patch updates the DPAA guide with the information users can
refer to in order to find the compatible kernel version.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
NIC HW controllers often come with congestion management support on
various HW objects such as Rx queue depth or mempool queue depth.
They can also support various modes of operation, such as RED
(Random Early Discard) and WRED, on those HW objects.
Add a framework to express such modes (enum rte_cman_mode) and
introduce enum rte_eth_cman_obj to enumerate the different
objects that the modes can operate on.
Add RTE_CMAN_RED mode of operation and RTE_ETH_CMAN_OBJ_RX_QUEUE,
RTE_ETH_CMAN_OBJ_RX_QUEUE_MEMPOOL objects.
Introduce reserved fields in configuration structure
backed by rte_eth_cman_config_init() to add new configuration
parameters without ABI breakage.
Add rte_eth_cman_info_get() API to get the information such as
supported modes and objects.
Add rte_eth_cman_config_init(), rte_eth_cman_config_set() APIs
to configure congestion management on those object with associated mode.
Finally, add rte_eth_cman_config_get() API to retrieve the
applied configuration.
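A minimal usage sketch (the RED threshold fields are left to
PMD-specific tuning; treat the exact parameter layout as something to
check in rte_ethdev.h):

    struct rte_eth_cman_info info;
    struct rte_eth_cman_config cfg;

    if (rte_eth_cman_info_get(port_id, &info) == 0 &&
        (info.modes_supported & RTE_BIT64(RTE_CMAN_RED)) &&
        (info.objs_supported & RTE_BIT64(RTE_ETH_CMAN_OBJ_RX_QUEUE))) {
        rte_eth_cman_config_init(port_id, &cfg);  /* fill defaults */
        cfg.mode = RTE_CMAN_RED;
        cfg.obj = RTE_ETH_CMAN_OBJ_RX_QUEUE;
        /* Select the Rx queue and tune rte_cman_red_params here. */
        rte_eth_cman_config_set(port_id, &cfg);
    }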
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Sunil Kumar Kori <skori@marvell.com>
Added the ethdev Rx/Tx descriptor dump API, which provides functions to
query descriptors from the device. HW descriptor info differs between
NICs; the information demonstrates the I/O process, which is important
for debugging. As the information differs between NICs, a new API is
introduced.
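For example (a sketch; the queue and descriptor range are illustrative):

    /* Dump the first four descriptors of Rx/Tx queue 0 to stdout. */
    rte_eth_rx_descriptor_dump(port_id, 0, 0, 4, stdout);
    rte_eth_tx_descriptor_dump(port_id, 0, 0, 4, stdout);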
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
mana can receive Rx interrupts from kernel through RDMA verbs interface.
Implement Rx interrupts in the driver.
Signed-off-by: Long Li <longli@microsoft.com>
MANA allocates device queues through the IB layer when starting Tx
queues. When the device is stopped, all queues are unmapped and freed.
Signed-off-by: Long Li <longli@microsoft.com>
Currently this PMD supports RSS configuration when the device is stopped.
Configuring RSS in running state will be supported in the future.
Signed-off-by: Long Li <longli@microsoft.com>
MANA supports PCI hot plug events. Add this interrupt to the DPDK core
so its parent PMD can detect device removal during Azure servicing or
live migration.
Signed-off-by: Long Li <longli@microsoft.com>
MANA is a PCI device. It uses IB verbs to access hardware through the
kernel RDMA layer. This patch introduces build environment and basic
device probe functions.
Signed-off-by: Long Li <longli@microsoft.com>
For the Rx logic, fallback packets are multiplexed to the
correct representor port based on the prepended metadata.
For the Tx logic, because fallback packets have metadata
prepended, the start of the packet has to be adjusted for
in the Tx descriptor.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Adds the framework to support flower representors. The number of VF
representors is parsed from the command line. For physical port
representors, the current logic aims to create a representor for
each physical port present on the hardware.
An eth_dev is created for each physical port and VF, and flower
firmware requires a MAC repr cmsg to be transmitted to firmware
with info about the number of physical ports configured.
Reify messages are sent to hardware for each physical port representor.
An rte_ring is also created per representor so that traffic can be
pushed to and pulled from this interface.
To bring the real device represented by a flower representor port
up or down, a port mod message is used to convey that info to the
firmware. This message will be used in the dev_ops callbacks of flower
representors.
Each cmsg generated by the driver is prepended with a cmsg header.
This commit also adds the logic to fill in the header of cmsgs.
Also add the Rx and Tx paths for flower representors. For Rx, packets
are dequeued from the representor ring and passed to the eth_dev. For
Tx, the first queue of the PF vNIC is used. Metadata about the
representor is added before the packet is sent down to the firmware.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Adds the Rx and Tx functions for the ctrl vNIC. The logic is mostly
identical to the normal Rx and Tx functionality of the NFP PMD.
Make use of the ctrl vNIC service logic to service the ctrl vNIC Rx
path.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Adds the basic probing infrastructure to support the flower firmware
application.
Adds the cpp service, used for some user tools.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Instead of a one-liner describing each vdev argument, add a description
and example for each. Move the information describing preferred busy
polling from the "Limitations" section to the "Options" section where it
is better placed. Also make general grammar improvements.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
The relation between the isolated mode in ethdev flow API
and bifurcated driver behaviour was not clearly explained.
It is made clear in the how-to guide that isolated mode is required
for flow bifurcation to the kernel.
On the other side, the impact of the isolated mode on a bifurcated
driver is made more explicit.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
The TAP device only lasts as long as the DPDK application that opened
it is running. This behavior is bad if the DPDK application needs
to be updated transparently without disturbing other services
using the tap device.
Add a persist feature to the TAP device. If this flag is set, the
kernel network device remains even if after the application has exited.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The dev->device.numa_node field is set by each bus driver for
every device it manages, to indicate on which NUMA node this device
lies.
When this information is unknown, the assigned value is not consistent
across the bus drivers.
Set the default value to SOCKET_ID_ANY (-1) in all bus drivers
when the NUMA information is unavailable. This change impacts
rte_eth_dev_socket_id() in the same manner.
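Applications can handle the unknown case explicitly, for example:

    int socket_id = rte_eth_dev_socket_id(port_id);

    /* NUMA node unknown: fall back to the caller's socket. */
    if (socket_id == SOCKET_ID_ANY)
        socket_id = rte_socket_id();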
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The rate parameter is modified to uint32_t, so that it can work
for more than 64 Gbps.
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
As announced in the deprecation note, remove the Rx offload flag
'RTE_ETH_RX_OFFLOAD_HEADER_SPLIT' and the 'split_hdr_size' field from
the structure 'rte_eth_rxmode'. The places where the examples
and apps initialize the 'split_hdr_size' field, and where the drivers
check that the 'split_hdr_size' value is 0, are also removed.
Users can still use RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT for per-queue packet
split offload, which is configured by 'rte_eth_rxseg_split'.
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
In some cases an application may receive a packet that should have been
received by the kernel. In this case the application uses KNI or other
means to transfer the packet to the kernel.
With a bifurcated driver we can have a rule to route packets matching
a pattern (example: IPv4 packets) to the DPDK application while the rest
of the traffic is received by the kernel.
But if we want to receive most of the traffic in DPDK except for a
specific pattern (example: ICMP packets) that should be processed by the
kernel, then it's easier to re-route these packets with a single rule.
This commit introduces a new rte_flow action which allows the application
to re-route packets directly to the kernel without software involvement.
Add a new testpmd rte_flow action 'send_to_kernel'. The application
may use this action to route the packet to the kernel while it is still
in the HW.
Example with testpmd command:
flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
type mask 0xffff / end actions send_to_kernel / end
Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
As part of DPDK 21.11 release, it was announced that the
use of attributes 'ingress' and 'egress' in 'transfer'
rules was deprecated. The transition period is over.
Starting from DPDK 22.11, the use of direction attributes
with attribute 'transfer' is not allowed. To enforce that,
a generic check is added to flow rule validate API.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
These actions are supported by no drivers.
The patch breaks ABI.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
The action is supported by no drivers.
The patch breaks ABI.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
The action is supported by no drivers.
The patch breaks ABI.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
The action is supported by no drivers.
The patch breaks ABI.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
Using rte_mtr_color_in_protocol_set(), the user can configure
which combination of protocol headers, like outer_vlan and outer_ip,
is enabled on a given meter object.
But rte_mtr_meter_vlan_table_update() and
rte_mtr_meter_dscp_table_update() do not carry the information about
which table, i.e. inner or outer, needs to be updated corresponding
to the protocol header.
Adding a protocol parameter allows the user to provide the required
protocol information so that the corresponding inner or outer table
can be updated for the protocol header.
If the user wishes to configure both inner and outer tables, then the
API must be called twice with the correct protocol information.
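A sketch of an update for the outer-IP DSCP table (the 64-entry table
and color values are illustrative; check the updated rte_mtr.h
prototype):

    enum rte_color dscp_table[64];
    struct rte_mtr_error error;
    int i, ret;

    for (i = 0; i < 64; i++)
        dscp_table[i] = RTE_COLOR_GREEN;
    ret = rte_mtr_meter_dscp_table_update(port_id, mtr_id,
                                          RTE_MTR_COLOR_IN_PROTO_OUTER_IP,
                                          dscp_table, &error);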
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Create a new Flow API action: METER_MARK.
It meters a packet stream and marks its packets with colors.
The marking is done in metadata, not in a packet field.
Unlike the METER action, it performs no policing at all.
A user has the flexibility to create any policies with the help of
the METER_COLOR item later; only the meter profile is mandatory here.
Add testpmd command line to match for METER_MARK action:
flow create ... actions meter_mark mtr_profile 20 / end
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Introduce a new meter API to retrieve the meter profile and policy
objects using the profile/policy ID previously created with the
meter_profile_add() and meter_policy_create() functions.
This allows saving the pointer and avoiding any lookups in the
corresponding lists for quick access during flow rule creation.
It also eliminates the need for CIR, CBS and EBS calculations
and conversion to a PMD-specific format when the profile is used.
Pointers are destroyed and cannot be used after the corresponding
meter_profile_delete() or meter_policy_delete() are called.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Extend the modify_field Flow API with support for Meter Color Marker
modifications. It allows setting the packet's metadata to any
color marker: green, yellow or red. A user is able to specify
an initial packet color for the Meter API or create simple metering
and marking flow rules based on their own coloring algorithm.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Provide an ability to use a Color Marker set by a Meter
as a matching item in Flow API. The Color Marker reflects
the metering result by setting the metadata for a
packet to a particular codepoint: green, yellow or red.
Add testpmd command line to match on a meter color:
flow create 0 ingress group 0 pattern meter color is green / end
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Similar to RISC-V, the current version for LoongArch does not support
vector instructions. Re-use the vector processing stubs in the ixgbe
PMD defined for PPC for LoongArch. This enables ixgbe PMD usage in
scalar mode on LoongArch.
The ixgbe PMD was validated with an Intel X520-DA2 NIC and the
test-pmd application and the l2fwd and l3fwd examples.
Signed-off-by: Min Zhou <zhoumin@loongson.cn>
Add all necessary elements for DPDK to compile and run EAL on the
LoongArch64 SoC.
This includes:
- EAL library implementation for LoongArch ISA.
- meson build structure for 'loongarch' architecture.
RTE_ARCH_LOONGARCH define is added for architecture identification.
- xmm_t structure operation stubs as there is no vector support in
the current version for LoongArch.
Compilation was tested on Debian and CentOS using a loongarch64
cross-compile toolchain from x86 build hosts. Functionality was tested
on Loongnix and Kylin, two Linux distributions supporting LoongArch
hosts, based on Linux 4.19 and maintained by Loongson Corporation.
We also tested DPDK on LoongArch with some external applications,
including Pktgen-DPDK, OVS and VPP.
The platform is currently marked as Linux-only because no OS other
than Linux currently supports LoongArch hosts.
The i40e PMD is disabled on LoongArch because of the absence
of vector support in the current version.
Similar to RISC-V, the compilation of following modules has been
disabled by this commit and will be re-enabled in later commits as
fixes are introduced:
net/ixgbe, net/memif, net/tap, example/l3fwd.
Signed-off-by: Min Zhou <zhoumin@loongson.cn>
The RTE_TEST_[RT]X_DESC_DEFAULT and RTE_TEST_[RT]X_DESC_MAX macros have
been copied into a lot of app/ and examples/ code.
Those macros are local to each program.
They are not related to a DPDK public header/API, so drop the RTE_TEST_
prefix.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
The function return type is changed to fixed-width uint32_t
to be consistent with what appears to be the original author's intent.
It doesn't make much sense to return signed integers from these functions.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The rte_security_session structure is moved to internal
headers which are not visible to applications.
The only field which should be used by the app is opaque_data.
This field can now be accessed via the set/get APIs added in this
patch.
Subsequent changes in app and lib are made to compile the code.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Gagandeep Singh <g.singh@nxp.com>
Tested-by: David Coyle <david.coyle@intel.com>
Tested-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
The rte_cryptodev_sym_session structure is moved to internal
headers which are not visible to applications.
The only field which should be used by the app is opaque_data.
This field can now be accessed via the set/get APIs added in this
patch.
Subsequent changes in app and lib are made to compile the code.
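A small sketch of the new accessors (the cookie value is illustrative;
'sess' is the opaque pointer returned by
rte_cryptodev_sym_session_create()):

    rte_cryptodev_sym_session_opaque_data_set(sess, 0xdeadbeef);
    uint64_t cookie = rte_cryptodev_sym_session_opaque_data_get(sess);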
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Tested-by: Gagandeep Singh <g.singh@nxp.com>
Tested-by: David Coyle <david.coyle@intel.com>
Tested-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
This function was never implemented and has been deprecated for a long
time. We can remove it.
Signed-off-by: David Marchand <david.marchand@redhat.com>
As part of the agreed process for deprecating KNI in DPDK, the example
app is scheduled for removal as part of the 22.11 release.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
NVIDIA acquired Mellanox Technologies in 2020.
The DPDK documentation and code might still include instances
of or references to Mellanox trademarks (like BlueField and ConnectX)
that are now NVIDIA trademarks.
The PCI IDs and copyrights are unchanged.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Gal Cohen <galco@nvidia.com>
Introduce the ability to aggregate crypto operations processed by the
event crypto adapter into a single event containing an rte_event_vector
whose event type is RTE_EVENT_TYPE_CRYPTODEV_VECTOR.
The application should set RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR in
rte_event_crypto_adapter_queue_conf::flag and provide the vector
configuration with respect to rte_event_crypto_adapter_vector_limits,
which can be obtained by calling
rte_event_crypto_adapter_vector_limits_get(), to enable vectorization.
The event crypto adapter is responsible for vectorizing the crypto
operations based on the response information provided in
rte_event_crypto_metadata::response_info.
Drivers and tests are updated according to the new API.
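A configuration sketch (device ids, event queue and the vector mempool
are illustrative assumptions; field names should be checked against the
new queue_conf definition):

    struct rte_event_crypto_adapter_vector_limits limits;
    struct rte_event_crypto_adapter_queue_conf conf = {
        .flags = RTE_EVENT_CRYPTO_ADAPTER_EVENT_VECTOR,
        .ev = { .queue_id = ev_qid, .sched_type = RTE_SCHED_TYPE_ATOMIC },
    };

    rte_event_crypto_adapter_vector_limits_get(evdev_id, cdev_id,
                                               &limits);
    conf.vector_sz = limits.max_sz;
    conf.vector_timeout_ns = limits.min_timeout_ns;
    conf.vector_mp = vector_pool;  /* pool of rte_event_vector */
    rte_event_crypto_adapter_queue_pair_add(adapter_id, cdev_id, qp_id,
                                            &conf);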
Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
ShangMi 3 (SM3) is a cryptographic hash function used in
the Chinese National Standard.
- Added SM3 algorithm
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
ShangMi 4 (SM4) is a block cipher used in the
Chinese National Standard for Wireless LAN WAPI and also
used with Transport Layer Security.
Added SM4 encryption algorithm in ECB, CBC and CTR modes.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The API rte_security_get_userdata() was unused by most of
the drivers, and it retrieved userdata from an mbuf dynamic field.
Hence, the API is removed and the application can get the
userdata directly from the dynamic field. This helps in removing extra
checks in the datapath.
Signed-off-by: Srujana Challa <schalla@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
DPDK libraries should never call rte_exit on failure, so change the
function return type of rte_metrics_init to "int" to allow returning an
error code to the application rather than exiting the whole app on init
failure.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Library functions should not cause the app to exit or panic. Replace the
existing panic call in the EAL remote launch functions with an error
code return instead.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
The deprecation notices for the event vector changes targeted for
v22.11 have been merged in the following commits; remove the
deprecation notices.
Fixes: 0fbb55efa5 ("eventdev: add element offset to event vector")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Replace *u64s with u64s in the rte_event_vector structure, as
*ptrs already serves the purpose of holding pointers
and the intention of u64s is to hold an array of uint64_t
values.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add `rte` prefix to stop flush callback function pointer
declaration to avoid conflicts with application functions,
``eventdev_stop_flush_t`` is renamed to
``rte_eventdev_stop_flush_t``.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
For best performance, applications running on certain cores should use
the DLB device locally available on the same tile along with other
resources. To allocate optimal resources, probing is done for each
producer port (PP) for a given CPU and the best performing ports are
allocated to producers. The CPU used for probing is either the first
core of producer coremask (if present) or the second core of EAL
coremask. This will be extended later to probe for all CPUs in the
producer coremask or EAL coremask.
Producer coremask can be passed along with the BDF of the DLB devices.
"-a xx:y.z,producer_coremask=<core_mask>"
Applications also need to pass RTE_EVENT_PORT_CFG_HINT_PRODUCER during
rte_event_port_setup() for producer ports for optimal port allocation.
For optimal load balancing, ports that map to one or more QIDs in common
should not be in numerical sequence. The port->QID mapping is application
dependent, but the driver interleaves port IDs as much as possible to
reduce the likelihood of sequential ports mapping to the same QID(s).
Hence, DLB uses an initial allocation of port IDs to maximize the
average distance between an ID and its immediate neighbors. This
initial port allocation option can be passed through the devarg
"default_port_allocation=y(or Y)".
When events are dropped by workers or consumers that use LDB ports,
completions are sent, which are just ENQs and may impact latency.
To address this, probing is done for LDB ports as well. Probing is
done on ports per 'cos'. When the default cos is used, ports will be
allocated from the best ports of the best 'cos'; otherwise, from the
best ports of the specific cos.
Signed-off-by: Abdullah Sevincer <abdullah.sevincer@intel.com>
rte_flow_action_handle_create/destroy/update() have their own
asynchronous versions, rte_flow_async_action_handle_create/destroy/
update(), to accelerate indirect action operations in the queue based
flow engine. However, the asynchronous query function for indirect
actions was missing.
Add the rte_flow_async_action_handle_query() function corresponding
to rte_flow_action_handle_query(). The new asynchronous version
enqueues the query to the hardware, similarly to what asynchronous
flow management does, and returns immediately to free the CPU for
other tasks. The application can get the query results from
rte_flow_pull() when the hardware completes its work.
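As a usage sketch (handle creation, queue setup and error handling are
omitted; querying a counter action here is illustrative):

    struct rte_flow_query_count count = { .reset = 0 };
    struct rte_flow_op_attr attr = { .postpone = 0 };
    struct rte_flow_op_result res[1];
    struct rte_flow_error err;
    int n;

    rte_flow_async_action_handle_query(port_id, queue_id, &attr, handle,
                                       &count, NULL, &err);
    n = rte_flow_pull(port_id, queue_id, res, RTE_DIM(res), &err);
    if (n == 1 && res[0].status == RTE_FLOW_OP_SUCCESS)
        printf("hits: %" PRIu64 "\n", count.hits);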
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
In the queue based async flow engine, in order to optimize the flow
insertion rate, the PMD can use hints from the application to
pre-allocate resources during the initialization phase for actions
such as count/meter/aging.
This commit adds the connection tracking action hints.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
If there's no param in the represented_port item, it will be treated as
matching all ports by default. But there are some limitations when using
it with meter hierarchy.
This patch adds the limitation that, when matching all ports, the meter
hierarchy should not contain any meter having a drop count.
Fixes: e8146c63 ("net/mlx5: support represented port item in flow rules")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In case of higher order (greater than 99) logical cores, the thread
name was truncated (the length is restricted to 16 characters, including
the terminating null byte ('\0')), which makes it hard to follow
threads.
Before this fix, the issue could be reproduced using the following
arguments:
--lcores=0,10@1,100@2
Then we had:
lcore-worker-10
lcore-worker-10
Signed-off-by: Abdullah Ömer Yamaç <omer.yamac@ceng.metu.edu.tr>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Those helpers have been marked as deprecated for a long time and have
documented equivalent helpers.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
This patch enables scalar path inner and outer Tx checksum offload
for tunnel packets by configuring ol_flags.
Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Tested-by: Ke Xu <ke1.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add a known issue: configuring VLAN filters from VF is unsupported
for i40e driver 2.17.15.
Signed-off-by: Steve Yang <stevex.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Move all VF related limitations and known issues from i40e.rst to
intel_vf.rst; as i40evf has been removed from i40e, i40e.rst should only
cover the PF's information.
The patch also fixes a couple of typos and refines the wording to be
more accurate.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Add support for sending traffic to the original DCF port
with 'port_representor' action by using DCF port id as 'port_id'.
For example:
testpmd> flow create 0 ingress pattern any
/ end actions port_representor port_id 0 / end
Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add flow subscription pattern support for AVF.
The supported patterns are listed below:
eth/vlan/ipv4
eth/ipv4(6)
eth/ipv4(6)/udp
eth/ipv4(6)/tcp
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>