From Documentation/admin-guide/kernel-parameters.txt, specifically the
last sentence:
nohz_full= [KNL,BOOT,SMP,ISOL]
The argument is a cpu list, as described above.
In kernels built with CONFIG_NO_HZ_FULL=y, set
the specified list of CPUs whose tick will be stopped
whenever possible. The boot CPU will be forced outside
the range to maintain the timekeeping. Any CPUs
in this list will have their RCU callbacks offloaded,
just as if they had also been called out in the
rcu_nocbs= boot parameter.
The kernel ORs the nohz_full cpumask into the rcu_nocbs cpumask at
startup, and uses the result.
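As an illustrative command line, booting with

    nohz_full=2-7

alone is therefore sufficient to both stop the tick and offload RCU
callbacks on CPUs 2-7; adding "rcu_nocbs=2-7" would be redundant.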
Signed-off-by: Tudor Brindus <me@tbrindus.ca>
The document roadmap section was missing any mention of the individual
driver guides, which are important for users. Add them to the list.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The document roadmap section called out the titles of other documents,
but these are better as hyperlinks.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
There are two warnings in the VFIO section about limitations of VFIO and
limitations on who can bind/unbind devices. Since these don't actually
describe any unsafe conditions, and are more informational, we can
change these to notes. This also helps emphasise the other warnings in
the document which flag genuine security concerns.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Rather than having separate sections for VFIO and VFIO no-iommu mode, as
well as a separate section further down the document on troubleshooting
VFIO, we can consolidate all these as subsections into a primary VFIO
section. This section starts with the basics of VFIO use, then covers
no-iommu mode, before moving on to the more advanced topics such as
creating VFs and ending with the troubleshooting subsection.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
To further de-emphasise UIO over the alternatives, we can move the UIO
section of the drivers page to the end of the document, giving more
prominence to VFIO and bifurcated drivers.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The VFIO section of the page about Linux drivers was rather long and
unstructured. This can be improved by splitting it up into subsections,
to cover the specifics of memory limits and creating VFs. When moving
the various text notes into the relevant subsections, we can drop the
note about kernels earlier than 3.6, since DPDK no longer supports
kernels that old.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
VFIO is to be strongly preferred over UIO-based modules, so update our
text and examples to only refer to VFIO, giving an initial reference at
the start to UIO as a fallback option.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
While the details of VFIO and UIO may be of interest to some, most users
of the doc are likely primarily interested in how to bind their devices
to the kernel driver and then move on to running the app. Therefore, the
most important part of the "Linux Drivers" section of the GSG is the
subsection on "Binding and Unbinding", so put that first.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The GSG has a note warning that use of UIO is inherently unsafe due to
lack of IOMMU protection. However, this was only flagged as a "NOTE",
meaning it could easily be missed. Changing the rst tag from "note" to
"warning" and moving it to the top of the UIO subsection makes this a
lot more visible to users.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The docs on binding drivers were updated as part of the removal of the
igb_uio module from the main DPDK repo. As part of that update, a note
about uio_pci_generic requiring legacy interrupts was removed, but
should have been kept.
Fixes: 56bb5841fd ("kernel/linux: remove igb_uio")
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Some IDEs, such as Eclipse, complained on save about the use of special
characters for the (R) symbol in the Linux GSG doc. We can replace those
with the equivalent "|reg|" text and include isonum.txt.
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The "Linux Drivers" section of the GSG already notes that, for use of
UIO, the IOMMU must be disabled or put into pass-through mode.
Therefore, there is no need to duplicate this information in the
"additional functionality" section. Also the kernel configuration
options documented in the section are enabled as standard on all common
distro kernels, so the information should not be needed in a GSG doc.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The KNI library is disabled by default in DPDK and is already documented
in the programmers guide and also in the sample application guide. There
are also in-kernel alternatives to it. Therefore, we can drop the
(already fairly minimal) reference to it from the Linux GSG.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
As best we can tell, the HPET timers are not commonly used, so there is
little need to give extensive detail and commentary on them in the Linux
GSG. As such, we can reduce the GSG section to just a single subsection
and also move it down the page below items which are likely of greater
importance.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Not all directories were given in the GSG document, but many of those
omitted would be of interest to users, e.g. the "doc", "license" and
"usertools" directories. Adding these leaves just "devtools" and
"kernel" undocumented, so add them in too for completeness.
When updating the section, add "including" to the line leading up to the
directory list, indicating that, while the list is currently complete,
it is not guaranteed to always be.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
When building end-applications linked with DPDK, the only additional
tool needed is pkg-config/pkgconf. However, the standard development
tools meta-packages on most distros include this as standard, meaning
it does not really require its own section. The one outlier in the
existing text is Alpine, where it is not present when using the
"libc-dev" package. However, changing "gcc" and "libc-dev" to the
"alpine-sdk" metapackage aligns Alpine with the other distros in this
regard.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
A note about secure boot not allowing UIO is present in both the system
requirements section and the driver binding section. This fits better in
the driver binding section, so the copy in system requirements can be
removed. The document in general now also emphasises VFIO over UIO more
than when this note was first added, reducing the need for this warning
to be repeated.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Some minor updates for the section on building DPDK in the GSG:
* update Python 3.7 package name to the 3.8 version
* note that the pyelftools needs to be tied to the python version
* drop reference to jansson library for legacy telemetry
* replace special characters for (R)
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Multi-Packet Rx queue uses PMD-managed buffers to store packets.
These buffers are externally attached to user mbufs.
This conflicts with the feature that allows using user-managed
externally attached buffers in an application.
Add the corresponding limitation to MLX5 documentation that MPRQ
and external data buffers cannot be used together.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
This patch adds an mlx5-specific description of how the modify field
action handles the Ethernet type for VLAN-tagged traffic.
Fixes: 641dbe4fb0 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
A flow rule with a sample action is split into two sub-flows, and a tag
action is added implicitly in the sample prefix sub-flow; the reserved
metadata regC index is used for this tag action.
The reserved metadata regC is shared with the metering action. For
ConnectX-5 trusted devices (VF/SF), the reserved metadata regC was
invalid since the PF only supported legacy metering.
This patch adds a check for the tag index and falls back to using the
application tag if that check fails.
Fixes: a9b6ea45be ("net/mlx5: fix tag ID conflict with sample action")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
A trusted VF is needed to offload rte_flow rules to a group greater
than 0.
The configuration is done in two parts: driver and FW.
This patch adds the needed steps to configure a VF to be trusted.
Signed-off-by: Asaf Penso <asafp@nvidia.com>
Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
When E-Switch mode was enabled, NIC egress flows were implicitly
appended with a source vport match. If metadata register C0 was used
to maintain the source vport, it was initialized to zero on entry to
the packet steering engine, so a flow could be hit only if the source
vport was zero. The register C0 of the packet was therefore not correct
to match on the Tx side, and this caused egress flow misses.
This patch:
- removes the implicit source vport match for NIC egress flow.
- rejects the NIC egress flows on the representor ports at validation.
- allows the internal NIC egress flows containing the TX_QUEUE items in
order to not impact hairpins.
Fixes: ce777b147b ("net/mlx5: fix E-Switch flow without port item")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add support for defining IPv4 and IPv6 forwarding tables by reading
from a config file for EM, with a format similar to the l3fwd-acl one.
Users can now use the default hardcoded route tables or, optionally,
config files for 'l3fwd_em'. Default config files have been provided
for use with EM.
Related l3fwd docs have been updated to reflect these
changes.
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This fixes typos and punctuation in the rte flow API guide.
Fixes: 2f82d143fb ("ethdev: add group jump action")
Fixes: 4d73b6fb99 ("doc: add generic flow API guide")
Cc: stable@dpdk.org
Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
To enable the gpudev rte_gpu_mem_cpu_map feature to expose
GPU memory to the CPU, the GPU CUDA driver library needs
the GDRCopy library and driver.
If DPDK is built without GDRCopy, the GPU CUDA driver returns an
error if rte_gpu_mem_cpu_map is invoked.
All other GPU CUDA driver functionalities are not affected by
the absence of GDRCopy; thus, this is an optional functionality
that can be enabled in the GPU CUDA driver.
CUDA driver documentation has been updated accordingly.
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
The features list was missed when introducing the driver.
Fixes: 1306a73b19 ("gpu/cuda: introduce CUDA driver")
Cc: stable@dpdk.org
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
Add support for the queue/RSS action for external Rx queues.
In indirection table creation, the queue index is taken from the
mapping array.
This feature supports neither LRO nor hairpin.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Add option to probe common device using import CTX/PD functions instead
of create functions.
This option requires accepting the context FD and the PD handle as
devargs.
This sharing can be useful for applications that use PMD for only some
operations. For example, an app that generates queues itself and uses
PMD just to configure flow rules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This patch adds matching on the optional fields (checksum/key/sequence)
of the GRE header. Matching on the checksum and sequence fields requires
support from rdma-core with the misc5 capability and tunnel_header 0-3.
For patterns without checksum and sequence specified, keep using misc for
matching as before; but for patterns with checksum or sequence, validate
the capability first and then use misc5 for the matching.
Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
'gre_option' flow item was missing in the feature list, adding it.
Fixes: f61490bdf2 ("ethdev: support GRE optional fields")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Add CNF95xx B0 variant to the list of supported models.
Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Currently, inline inbound device usage is not the default for eventdev.
This patch renames the force_inl_dev devarg to no_inl_dev and enables
the inline inbound device by default.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The cnxk platform supports red/yellow packet marking based on TM
configuration. This patch adds hooks to enable/disable packet
marking for VLAN DEI, IP DSCP and IP ECN. Marking is enabled only
in scalar mode.
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Added capability and support for inline inbound IP reassembly
in the cnxk driver. The IP reassembly offload is supported only
when the inline IPsec security offload is enabled.
If IP reassembly is incomplete, the mbufs are attached
via the mbuf dynamic field and a dynamic flag is set accordingly.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The HW steering uses an async queue-based flow rules management
mechanism. The matcher and part of the actions are prepared during
flow table creation. Some remaining actions are constructed during
flow creation if needed.
A flow postpone attribute bit describes whether flow management
should be applied to the HW directly. An extra push function is
provided to force pushing all cached flows to the HW.
Once the flow has been applied to the HW, the pull function is
called to get the queued creation/destruction flows.
The DR rule flow memory is managed in the PMD layer instead of
being allocated from the HW steering layer. While destroying the
flow, the flow rule memory can only be freed after the CQE is
received.
The HW queue job descriptor is introduced to convey the flow
information and operation type between flow insertion/destruction
and the pull function.
This commit adds the basic flow queue operation for:
rte_flow_async_create();
rte_flow_async_destroy();
rte_flow_push();
rte_flow_pull();
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The new hardware steering engine relies on using dedicated steering WQEs
instead of writing to the low-level steering table entries directly.
In the first implementation, the hardware steering engine supports the
new queue-based Flow API; the existing synchronous non-queue-based Flow
API is not supported.
A new dv_flow_en value 2 is added to manage mlx5 PMD steering engine:
    dv_flow_en    rte_flow API    rte_flow_async API
    ------------------------------------------------
         0           support         not support
         1           support         not support
         2         not support         support
This commit introduces the extra dv_flow_en = 2 to specify the new
flow initialization and management operation routine.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Hardware starting from ConnectX-7 supports waiting until a
specified moment of time with a newly introduced wait
descriptor. A timestamp can be placed directly into the
descriptor and pushed to the send queue. Once the hardware
encounters the wait descriptor, the queue operation is suspended
until the specified moment of time. This patch updates the Tx
datapath to handle this new hardware wait capability.
PMD documentation and release notes updated accordingly.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add testpmd support for the rte_flow_async_action_handle API.
Provide the command line interface for enqueueing indirect action
operations.
Usage example:
flow queue 0 indirect_action 0 create action_id 9
ingress postpone yes action rss / end
flow queue 0 indirect_action 0 update action_id 9
action queue index 0 / end
flow queue 0 indirect_action 0 destroy action_id 9
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_pull API.
Provide the command line interface for pulling operations results.
Usage example: flow pull 0 queue 0
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_push API.
Provide the command line interface for pushing operations.
Usage example: flow queue 0 push 0
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_async_create/rte_flow_async_destroy
API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
testpmd> flow queue 0 create 0 postpone no
template_table 6 pattern_template 0 actions_template 0
pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
testpmd> flow queue 0 destroy 0 postpone yes rule 0
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_template_table API.
Provide the command line interface for the flow
table creation/destruction. Usage example:
testpmd> flow template_table 0 create table_id 6
group 9 priority 4 ingress mode 1
rules_number 64 pattern_template 2 actions_template 4
testpmd> flow template_table 0 destroy table 6
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for the template creation/destruction. Usage example:
testpmd> flow pattern_template 0 create pattern_template_id 2
template eth dst is 00:16:3e:31:15:c3 / end
testpmd> flow actions_template 0 create actions_template_id 4
template drop / end mask drop / end
testpmd> flow actions_template 0 destroy actions_template 4
testpmd> flow pattern_template 0 destroy pattern_template 2
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add testpmd support for the rte_flow_configure API.
Provide the command line interface for flow management.
Usage example: flow configure 0 queues_number 8 queues_size 256
Implement rte_flow_info_get API to get available resources:
Usage example: flow info 0
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Queue-based flow rules management mechanism is suitable
not only for flow rules creation/destruction, but also
for speeding up other types of Flow API management.
Indirect action object operations may be executed
asynchronously as well. Provide async versions for all
indirect action operations, namely:
rte_flow_async_action_handle_create,
rte_flow_async_action_handle_destroy and
rte_flow_async_action_handle_update.
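As a minimal sketch of the new calling convention (assuming the 22.03
prototypes; "rss_action" is a placeholder prepared by the caller):

    #include <stddef.h>
    #include <rte_flow.h>

    /* Sketch: enqueue creation of an indirect action on flow queue 0.
     * The completion must still be fetched later via rte_flow_pull(). */
    static struct rte_flow_action_handle *
    async_handle_create(uint16_t port_id,
                        const struct rte_flow_action *rss_action)
    {
        const struct rte_flow_op_attr op_attr = { .postpone = 0 };
        const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
        struct rte_flow_error err;

        return rte_flow_async_action_handle_create(port_id, 0, &op_attr,
                                                   &conf, rss_action,
                                                   NULL /* user_data */,
                                                   &err);
    }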
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe and the queue
should be accessed from the same thread for all queue operations.
It is the responsibility of the app to sync the queue functions in case
of multi-threaded access to the same queue.
The rte_flow_async_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_async_destroy() function
enqueues a flow destruction to the requested queue.
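A minimal usage sketch (assuming the 22.03 prototypes; "table",
"pattern" and "actions" stand for objects built with the template APIs
described in the related patches):

    #include <stdio.h>
    #include <rte_flow.h>

    /* Sketch: enqueue one rule on flow queue 0 and drain completions. */
    static void
    enqueue_and_pull(uint16_t port_id, struct rte_flow_template_table *table,
                     const struct rte_flow_item pattern[],
                     const struct rte_flow_action actions[])
    {
        const struct rte_flow_op_attr op_attr = { .postpone = 0 };
        struct rte_flow_op_result res[32];
        struct rte_flow_error err;

        /* Non-blocking: the rule is offloaded asynchronously. */
        struct rte_flow *flow = rte_flow_async_create(port_id, 0, &op_attr,
                        table, pattern, 0, actions, 0, NULL, &err);
        if (flow == NULL)
            return;

        /* Completion status must be retrieved with rte_flow_pull(). */
        int n = rte_flow_pull(port_id, 0, res, 32, &err);
        for (int i = 0; i < n; i++)
            if (res[i].status != RTE_FLOW_OP_SUCCESS)
                printf("flow operation failed\n");
    }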
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.
The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.
A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.
The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.
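A sketch of the creation sequence (attribute field names follow the
22.03 API as we understand it and may need adjusting):

    #include <rte_flow.h>

    /* Sketch: one pattern template + one actions template -> one table. */
    static struct rte_flow_template_table *
    make_table(uint16_t port_id, const struct rte_flow_item pattern[],
               const struct rte_flow_action actions[],
               const struct rte_flow_action masks[])
    {
        struct rte_flow_error err;
        const struct rte_flow_pattern_template_attr pt_attr = {
            .relaxed_matching = 0, /* items must match exactly */
        };
        const struct rte_flow_actions_template_attr at_attr = { 0 };
        struct rte_flow_pattern_template *pt =
            rte_flow_pattern_template_create(port_id, &pt_attr,
                                             pattern, &err);
        struct rte_flow_actions_template *at =
            rte_flow_actions_template_create(port_id, &at_attr,
                                             actions, masks, &err);
        const struct rte_flow_template_table_attr tbl_attr = {
            .flow_attr = { .group = 1, .ingress = 1 },
            .nb_flows = 64 << 10, /* rule capacity fixed at creation */
        };
        return rte_flow_template_table_create(port_id, &tbl_attr,
                                              &pt, 1, &at, 1, &err);
    }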
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.
In order to optimize the insertion rate, a PMD may use some hints
provided by the application at the initialization phase. The
rte_flow_configure() function allows pre-allocating all the needed
resources beforehand. These resources can be used at a later stage
without costly allocations.
Every PMD may use only a subset of the hints and ignore unused ones,
or fail if the requested configuration is not supported.
The rte_flow_info_get() function is available to retrieve information
about supported pre-configurable resources. Both these functions must
be called before any other usage of the flow API engine.
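A minimal sketch of the expected call order (sizes are illustrative
hints only):

    #include <rte_flow.h>

    /* Sketch: query limits, then pre-allocate one flow queue. */
    static int
    preconfigure_flow_engine(uint16_t port_id)
    {
        struct rte_flow_port_info port_info;
        struct rte_flow_queue_info queue_info;
        struct rte_flow_error err;

        /* Must precede any other flow API usage on this port. */
        if (rte_flow_info_get(port_id, &port_info, &queue_info, &err) != 0)
            return -1;

        const struct rte_flow_port_attr port_attr = { .nb_counters = 128 };
        const struct rte_flow_queue_attr queue_attr = { .size = 256 };
        const struct rte_flow_queue_attr *queue_attrs[] = { &queue_attr };

        return rte_flow_configure(port_id, &port_attr, 1, queue_attrs, &err);
    }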
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Add support for an inline inbound SPI range via devargs
instead of just a max SPI value with the range being 0..max.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Adds priority flow control support for CNXK platforms.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The "tx_db_nc" devarg forces doorbell register mapping to non-cached
region eliminating the extra write memory barrier. This argument was
used in creating the UAR for Tx and thus affected its performance.
Recently [1] its use has been extended to all UAR creation in all mlx5
drivers, and now its name is no longer so accurate.
This patch changes its name to "sq_db_nc" to suit any send queue that
uses it. The old name will still work for backward compatibility.
[1] commit 5dfa003db5 ("common/mlx5: fix post doorbell barrier")
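Usage example (illustrative; the accepted values keep the old tx_db_nc
semantics):
./app -a <bdf>,sq_db_nc=1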
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Adds new documentation for the MLX5 common driver that contains:
- Its features list (which did not previously exist).
- Its devargs description.
- Device configuration information and tutorial.
- Quick Start Guide for Mellanox OFED/EN.
Move all shared information from the other MLX5 PMD docs into this doc,
and add references from them to the new common doc.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Vectorized routines were removed as a result of the Tx datapath
refactoring, and the devarg keys documentation was updated.
However, more updating should have been done. The environment variables
doc contained an explanation relating to vectorized Tx which is no
longer relevant.
This patch removes this irrelevant explanation.
Fixes: a6bd4911ad ("net/mlx5: remove Tx implementation")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The default missing Tx completion timeout was set to 5 seconds.
In order to provide users with an interface to control this timeout
and align it with the application's watchdog, a device argument for
controlling this value was added.
The parameter is called 'miss_txc_to' and can be modified using the
devargs interface:
./app -a <bdf>,miss_txc_to=UINT_NUMBER
This parameter accepts values from 0 to 60 and indicates the number of
seconds after which the Tx packet will be considered missing.
HW hints for the Tx completion timeout were removed so as not to
overwrite the parameter from the user. Also, assignment of the default
Tx completion timeout value was moved from the configuration phase to
the init phase in order to simplify it.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
ENA only supported retrieval of all the xstats names and wasn't
implementing the eth_xstats_get_names_by_id API.
As this API may be more efficient than retrieving all the names, it
tries to avoid excessive string copying.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
The ENA driver did not allow applications to call tx_cleanup. Freeing
Tx mbufs was always done by the driver, and it was not possible to
manually request the driver to free mbufs.
Modify the ena_tx_cleanup function to accept a maximum number of
packets to free and to return the number of packets that were freed.
Signed-off-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
Originally, the ena_com memzone counter was shared by ports, which
made the memzones harder to identify and could potentially lead to
races; because of that, the counter had to be atomic.
This atomic counter was a global variable and couldn't work in the
multi-process implementation.
The memzone is now identified by a per-port memzone counter and the
port ID. Both pieces of information can be found in the shared data,
so it can be probed easily.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
Due to how the ena_com compatibility layer is written, all functions
triggering AQ commands use the stack to save the AQ results and then
copy them to a user-provided destination.
Therefore, to keep the compatibility layer common, introduce the
ENA_PROXY macro. It either calls the wrapped function directly (in the
primary process) or proxies it to the primary via the DPDK IPC
mechanism. Since all proxied calls are taken under a lock, share the
result data through shared memory (in struct ena_adapter) to work
around the 256B IPC parameter size limit.
New proxy calls can be added by
1. Adding a new message type at the end of enum ena_mp_req
2. Adding new message arguments to the struct ena_mp_body if needed
3. Defining a proxy request descriptor with ENA_PROXY_DESC. Its arguments
include handlers for request preparation and response processing.
Any of those may be empty (aside from marking arguments as used).
4. Adding request handling logic to ena_mp_primary_handle()
5. Replacing proxied function calls with ENA_PROXY(adapter, <func>, ...)
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
As the default behavior for arm64 is to alias rte_memcpy as memcpy, ENA
cannot redefine memcpy as rte_memcpy, as that would cause a nested
declaration.
To make it possible to use optimized memcpy in the ena_com layer on Arm,
the driver now redefines memcpy when it is beneficial:
* For arm64, only when the flag RTE_ARCH_ARM64_MEMCPY is defined
* For arm, only when the flag RTE_ARCH_ARM_NEON_MEMCPY is defined
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
ENA uses AENQ for notification about various events, like LSC, keep
alive, etc. By default it was enabling all AENQ groups that were
supported by both the driver and the device. As a result, the LSC was
always processed even if the application turned it off explicitly.
As DPDK provides the application with the possibility to configure the
LSC, ENA should respect that. AENQ groups are now updated upon the
configure step, thus the LSC can be activated or disabled between ENA
PMD reconfigurations. Moreover, the LSC capability for the device is
now determined dynamically.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
* Split 'bad_csum' Rx statistic into 'l3_csum_bad' and 'l4_csum_bad' to
be able to check which checksum was not calculated properly.
* Add l4_csum_good statistic, which shows how many times L4 Rx checksum
was properly offloaded.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Dawid Gorecki <dgr@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
When an AF_XDP PMD is created without specifying the 'start_queue', the
default Rx queue associated with the socket will be Rx queue 0. A common
scenario encountered by users new to AF_XDP is that they create the
socket on queue 0, however their interface is configured with many more
queues. In this case, traffic might land on, for example, queue 18,
which means it will never reach the socket.
This commit updates the AF_XDP documentation with instructions on how to
configure the interface to ensure the traffic will land on queue 0 and
thus reach the socket successfully.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support for L2TPv2 (including PPP over L2TPv2) protocols FDIR
based on outer MAC src/dst address and L2TPv2 session ID.
Add support for PPPoL2TPv2oUDP protocols FDIR based on inner IP
src/dst address and UDP/TCP src/dst port.
Patterns are listed below:
eth/ipv4(6)/udp/l2tpv2
eth/ipv4(6)/udp/l2tpv2/ppp
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/udp
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/tcp
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Add support for L2TPv2 (including PPP over L2TPv2) protocols RSS based
on outer MAC src/dst address and L2TPv2 session ID.
Patterns are listed below:
eth/ipv4/udp/l2tpv2
eth/ipv4/udp/l2tpv2/ppp
eth/ipv6/udp/l2tpv2
eth/ipv6/udp/l2tpv2/ppp
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Updated the AESNI MB, AESNI GCM, KASUMI, ZUC and SNOW3G PMD
documentation guides with information about the latest supported Intel
IPsec Multi-buffer library.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added missing step for converting SHA request files to correct
format. Replaced AES_GCM with GCM to follow the correct
naming format.
Fixes: 3d0fad56b7 ("examples/fips_validation: add crypto FIPS application")
Cc: stable@dpdk.org
Signed-off-by: Jakub Poczatek <jakub.poczatek@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Add support to enable a per-port packet pool and also to override the
vector pool size from command line args. This is useful on some HW to
tune performance based on the use case.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This commit refactors the asymmetric crypto functions
in the Intel QuickAssist Technology PMD.
The functions are now shorter and far more readable,
and the refactoring facilitates the addition of new algorithms.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
In crypto producer mode, the producer core enqueues software-generated
crypto ops to the cryptodev and the worker core dequeues crypto
completion events from the eventdev. Event crypto metadata used for the
above processing is pre-populated in each crypto session.
Parameter --prod_type_cryptodev can be used to enable crypto producer
mode. Parameter --crypto_adptr_mode can be set to select the crypto
adapter mode, 0 for OP_NEW and 1 for OP_FORWARD.
This mode can be used to measure the performance of crypto adapter.
Example:
./dpdk-test-eventdev -l 0-2 -w <EVENTDEV> -w <CRYPTODEV> -- \
--prod_type_cryptodev --crypto_adptr_mode 1 --test=perf_atq \
--stlist=a --wlcores 1 --plcores 2
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Grinder configuration is now moved to the sched library.
The number of grinders can also be modified by specifying
RTE_SCHED_PORT_N_GRINDERS=N in CFLAGS, where N is the number of
grinders.
Signed-off-by: Megha Ajmera <megha.ajmera@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
The documentation on how to configure device instances using
accel-config can be improved by a number of changes:
* For initial example, when only configuring one queue, omit
configuration of a second engine, which is unused later.
* Add the "max-batch-size" setting to the options being configured for
each queue
* Add a final, more complete example, showing configuration of multiple
queues on a device.
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
For DMA device 0000:7d:0.0, the original generated dmadev name starts
with "7d:0.0", which is not expected.
This patch uses the rte_pci_device_name API to generate the dmadev name.
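A sketch of the approach (assuming a probed rte_pci_device is at hand):

    #include <rte_bus_pci.h>
    #include <rte_dev.h>

    /* Sketch: derive the dmadev name from the full PCI address, giving
     * "0000:7d:00.0" rather than the truncated "7d:0.0". */
    static void
    make_dmadev_name(const struct rte_pci_device *pci_dev,
                     char name[RTE_DEV_NAME_MAX_LEN])
    {
        rte_pci_device_name(&pci_dev->addr, name, RTE_DEV_NAME_MAX_LEN);
    }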
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
The Kunpeng930 DMA devices have the same PCI device id as the
Kunpeng920, but a different PCI revision and register layout. This patch
introduces the basic initialization for Kunpeng930 DMA devices.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
rte_gpu_mem_cpu_map() exposes a GPU memory area to the CPU.
In gpudev communication list this is useful to store the
status flag.
A communication list status flag allocated on GPU memory
and mapped for CPU visibility can be updated by CPU and polled
by a GPU workload.
The polling operation is more frequent than the CPU update operation.
Having the status flag in GPU memory reduces the GPU workload polling
latency.
If the CPU mapping feature is not enabled, the status flag resides in
CPU memory registered so that it's visible from the GPU.
To facilitate interaction with the status flag, this patch also
provides set/get functions for it.
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
Add a PMD parameter that allows selecting only a subset of available
GPIOs.
This might be useful in cases where some GPIOs are reserved yet
still available for userspace access, but a particular app should not
touch them.
Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Add support for custom interrupt handlers. Custom interrupt
handlers bypass the kernel completely and are meant for fast
and low-latency access to GPIO state.
Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Add initial support for a PMD that allows controlling particular pins
from userspace. Moreover, the PMD allows attaching custom interrupt
handlers to controllable GPIOs.
The main users of this PMD are dataplane applications requiring fast
and low-latency access to pin state.
Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Add support for an allow or block list for devices bound
to the kernel driver.
When used, the allow or block list applies as an additional
condition to the name prefix.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
Add useful debug prints in the DPAA driver for
easy debugging. A devarg is added to enable various levels
of prints.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
This patch supports ordered queue for DPAA2 platform.
A devarg is added to enable strict ordering.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
A few useful debug prints are added in the dequeue function.
These are controlled via PMD devargs. Details of using the
devarg are updated in dpaa2_sec.rst.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Rather than the asym session create function returning a session on
success, and a NULL value on error, it is modified to now return int
values - 0 on success or -EINVAL/-ENOTSUP/-ENOMEM on failure.
The session to be used is passed as input.
This adds clarity on the failure of the create function, which enables
treating the -ENOTSUP return as TEST_SKIPPED in test apps.
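A sketch of the resulting calling convention (assuming the new
prototype takes the device ID, xform, mempool and an output session
pointer; error handling is illustrative):

    #include <errno.h>
    #include <rte_cryptodev.h>

    /* Sketch: the int return distinguishes "unsupported" from real
     * errors, so a test app can skip instead of failing. */
    static int
    try_create_asym_session(uint8_t dev_id,
                            struct rte_crypto_asym_xform *xform,
                            struct rte_mempool *sess_mp, void **sess)
    {
        int ret = rte_cryptodev_asym_session_create(dev_id, xform,
                                                    sess_mp, sess);
        if (ret == -ENOTSUP)
            return 1; /* caller treats this as TEST_SKIPPED */
        return ret;  /* 0 on success, -EINVAL/-ENOMEM otherwise */
    }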
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
A user data field is added to the asymmetric session structure,
and relevant APIs are added to get/set the field.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The rte_cryptodev_asym_session structure is now moved to an internal
header. This will no longer be used directly by apps,
private session data can be accessed via get API.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Rather than using a session buffer that contains pointers to private
session data elsewhere, have a single session buffer.
This session is created for a driver ID, and the mempool element
contains space for the max session private data needed for any driver.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The programmer's guide for cryptodev included sample code for using
asymmetric crypto. This is now replaced with code taken directly from
the test application, using literal includes. It is broken into
snippets as the test application didn't have all of the required code
in one function.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The "feature" BBDEV API is useless as all baseband drivers
must implement it by definition.
The non-implemented features should not be marked with "N".
Keeping them blank is clearer to read in the resulting matrix.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
When libxdp is used, the LIBXDP_OBJECT_PATH environment variable must be
set to the location of where libxdp placed its bpf object files. This is
usually in /usr/local/lib/bpf or /usr/local/lib64/bpf. Failure to do so
will result in the PMD not initialising correctly as the bpf program is
not found. Document this requirement.
Also, mention that the following logs which are generated on application
launch can be ignored:
libbpf: elf: skipping unrecognized data section(7) .xdp_run_config
libbpf: elf: skipping unrecognized data section(8) xdp_metadata
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add the gre_option command for matching optional fields
(checksum/key/sequence) in the GRE header. The item must follow the gre
item, and it does not change the flags in the gre item; the application
should set the flags in the gre item correspondingly.
Applications can still use the gre_key item 'gre_key value is xx' for
key matching; the effect is the same as using 'gre_option key is xx'.
The examples for gre_option are as follows:
To match on checksum field with value 0x11:
testpmd> ... pattern / eth / gre c_bit is 1 / gre_option checksum is
0x11 / end ..
To match on checksum field with value 0x11 and any value of key:
testpmd> ... pattern / eth / gre c_bit is 1 k_bit is 1 / gre_option
checksum is 0x11 / end ..
To match on checksum field with value 0x11 and no key field in packet:
testpmd> ... pattern / eth / gre c_bit is 1 k_bit is 0 / gre_option
checksum is 0x11 / end ..
The invalid patterns for gre_option are as follows:
testpmd> ... pattern / eth / gre / gre_option checksum is 0x11 / end ..
(c_bit in gre item not present)
testpmd> ... pattern / eth / gre c_bit is 0 / gre_option checksum is 0x11 /
end .. (c_bit is unset for gre item, but checksum is
specified by gre_option item)
Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Add flow pattern items and a header format for matching optional fields
(checksum/key/sequence) in the GRE header. The flags in the gre item
should be set correspondingly with the newly added items.
Signed-off-by: Sean Zhang <xiazhang@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Secondary process support had been disabled for the AF_XDP PMD because
there was no logic in place to share the AF_XDP socket file descriptors
between the processes. This commit introduces this logic using the IPC
APIs.
Rx and Tx are disabled in the secondary process due to memory mapping of
the AF_XDP rings being assigned by the kernel in the primary process only.
However other operations including retrieval of stats are permitted.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Support configuring the LED in firmware. The driver commands firmware
to turn the LED on and off, and OEMs customize their LED solutions in
firmware.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Support getting OEM-customized LED configuration information from
firmware. The driver needs to adjust the process of PHY setup link
based on this LED configuration.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Added the ethdev dump API, which provides querying of private info from
a device. There exist many private properties in different PMDs, such
as adapter state and Rx/Tx function algorithm in the hns3 PMD. The
information on these properties is important for debug. As the
information is private, the new API is introduced.
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
MPRQ cannot be used in multi-process applications because of
externally attached MPRQ buffers. A callback is registered by
a primary process to free MPRQ buffers once they are no longer
needed. But this information is shared among all the processes.
The virtual address of the mlx5_mprq_buf_free_cb function is
different in a secondary process, which leads to a segmentation
fault. Document that MPRQ is not supported in a multi-process
app, since there is no way to find out whether the current process is
the one that registered the callback.
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Since dmadev is introduced in 21.11, to avoid the overhead of vhost DMA
abstraction layer and simplify application logics, this patch integrates
dmadev in asynchronous data path.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Signed-off-by: Sunil Pai G <sunil.pai.g@intel.com>
Tested-by: Yvonne Yang <yvonnex.yang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch supports drop any and steer all to queue in the switch
filter. Support the new rte_flow pattern any to handle all packets.
The usage is listed below.
1. drop any:
flow create 0 ingress pattern any / end actions drop / end
All packets received in port 0 will be dropped.
2. steer all to queue:
flow create 0 ingress pattern any / end actions queue index 3 / end
All packets received in port 0 will be steered to queue 3.
Signed-off-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
IP Reassembly is a costly operation if it is done in software.
The operation becomes even more costlier if IP fragments are encrypted.
However, if it is offloaded to HW, it can considerably save application
cycles.
Hence, a new offload feature is exposed in eth_dev ops for devices which
can attempt IP reassembly of packets in hardware.
- rte_eth_ip_reassembly_capability_get() - to get the maximum values
of reassembly configuration which can be set.
- rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
and to enable the feature in the PMD (to be called before
rte_eth_dev_start()).
- rte_eth_ip_reassembly_conf_get() - to get the current configuration
set in PMD.
Now when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
the resulting reassembled IP packet would be a typical segmented mbuf in
case of success.
And if reassembly of the IP fragments fails or is incomplete (if
fragments do not arrive before the reass_timeout, overlap, etc.), the
mbuf dynamic flags can be updated by the PMD. This is updated in a
subsequent patch.
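A minimal sketch of the expected call order (reusing the capability
struct as the configuration is illustrative):

    #include <rte_ethdev.h>

    /* Sketch: query reassembly limits, then enable the offload before
     * starting the port. */
    static int
    enable_ip_reassembly(uint16_t port_id)
    {
        struct rte_eth_ip_reassembly_params capa;

        if (rte_eth_ip_reassembly_capability_get(port_id, &capa) != 0)
            return -1; /* offload not supported on this port */

        /* Use the maximum supported values as the configuration. */
        if (rte_eth_ip_reassembly_conf_set(port_id, &capa) != 0)
            return -1;

        return rte_eth_dev_start(port_id);
    }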
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
This patch adds L2TPv2 control message and 5 types of data message
support for testpmd.
The added L2TPv2 message types are listed below:
1. L2TPv2 control
2. L2TPv2
3. L2TPv2 + length option
4. L2TPv2 + sequence option
5. L2TPv2 + offset option
6. L2TPv2 + length option + sequence option
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
This patch defines a new RSS offload type for L2TPv2, which
is required when users want to distribute packets based on
the L2TPv2 session ID field.
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Enable the possibility to expose a GPU memory area and make it
accessible from the CPU.
GPU memory has to be allocated via rte_gpu_mem_alloc().
This patch allows the gpudev library to map (and unmap),
through the GPU driver, a chunk of GPU memory and to return
a memory pointer usable by the CPU to access the GPU memory area.
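A minimal sketch (the exact rte_gpu_mem_alloc() prototype has varied
across releases; a size-only variant is assumed here):

    #include <stddef.h>
    #include <rte_gpudev.h>

    /* Sketch: allocate GPU memory and expose it to the CPU. */
    static void *
    gpu_mem_visible_from_cpu(int16_t dev_id, size_t size)
    {
        void *gpu_ptr = rte_gpu_mem_alloc(dev_id, size); /* assumed variant */
        if (gpu_ptr == NULL)
            return NULL;

        /* Returns a CPU-usable pointer to the same GPU memory area. */
        void *cpu_ptr = rte_gpu_mem_cpu_map(dev_id, size, gpu_ptr);
        if (cpu_ptr == NULL)
            rte_gpu_mem_free(dev_id, gpu_ptr);
        return cpu_ptr;
    }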
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
DPDK 21.11 adds VFIO support for DMA devices in vhost. This patch
updates the recommended IOVA mode in the async datapath.
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Patch adds command line options to configure queue based
priority flow control.
- Syntax command is given as below:
set pfc_queue_ctrl <port_id> rx <on|off> <tx_qid> <tx_tc> \
tx <on|off> <rx_qid> <rx_tc> <pause_time>
- Example command to configure queue based priority flow control
on rx and tx side for port 0, Rx queue 0, Tx queue 0 with pause
time 2047
testpmd> set pfc_queue_ctrl 0 rx on 0 0 tx on 0 0 2047
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Based on device support and use-case needs, there are two different
ways to enable PFC. The first case is port-level PFC configuration; in
this case, the rte_eth_dev_priority_flow_ctrl_set() API shall be used
to configure PFC, and PFC frames will be generated based on the VLAN
TC value.
The second case is queue-level PFC configuration; in this case, any
packet field content can be used to steer the packet to a specific
queue using rte_flow or RSS, and then
rte_eth_dev_priority_flow_ctrl_queue_configure() is used to configure
the TC mapping on each queue.
Based on congestion detected on the specific queue, the configured TC
shall be used to generate PFC frames.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Csum forwarding mode only supports software UDP/TCP checksum
calculation for single-segment packets when hardware offload is not
enabled.
This patch enables software UDP/TCP checksum calculation over multiple
segments.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Tested-by: Sunil Pai G <sunil.pai.g@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add functions that call rte_raw_cksum_mbuf() to calculate the IPv4/6
UDP/TCP checksum in an mbuf, which can span multiple segments.
Acked-by: Aman Singh <aman.deep.singh@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Sunil Pai G <sunil.pai.g@intel.com>
This patch enables a method to provide the key and mask for raw rules
as hexadecimal values. A new parameter, pattern_mask, is added to
support this.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
AF_XDP support is deprecated in libbpf since v0.7.0 [1]. The libxdp library
now provides the functionality which once was in libbpf and which the
AF_XDP PMD relies on. This commit updates the AF_XDP meson build to use the
libxdp library if a version >= v1.2.2 is available. If it is not available,
only versions of libbpf prior to v0.7.0 are allowed, as they still contain
the required AF_XDP functionality.
libbpf still remains a dependency even if libxdp is present, as we use
libbpf APIs for program loading.
The minimum required kernel version for libxdp for use with AF_XDP is v5.3.
For the library to be fully-featured, a kernel v5.10 or newer is
recommended. The full compatibility information can be found in the libxdp
README.
v1.2.2 of libxdp includes an important fix required for linking with DPDK
which is why this version or greater is required. Meson uses pkg-config to
verify the version of libxdp on the system, so it is necessary that the
library is discoverable using pkg-config in order for the PMD to use it. To
verify this, you can run: pkg-config --modversion libxdp
[1] https://github.com/libbpf/libbpf/commit/277846bc6c15
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
An eCPRI message can be over the Ethernet layer (.1Q supported also) or
over the UDP layer. The message header formats are the same in these
two variants.
Only up through the first packet header in the PDU can be matched.
RSS on the eCPRI payload is not supported.
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
Expose the Linux EAL ability to reuse existing hugepage files
via the --huge-unlink=never switch.
The default behavior is unchanged; it can also be specified
using --huge-unlink=existing for consistency.
The old --huge-unlink switch is kept;
it is an alias for --huge-unlink=always.
Add a test case for the --huge-unlink=never mode.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
The EAL malloc layer assumed all free elements' content
is filled with zeros ("clean"), as opposed to uninitialized ("dirty").
This assumption was ensured in two ways:
1. EAL memalloc layer always returned clean memory.
2. Freed memory was cleared before returning into the heap.
Clearing the memory can be as slow as around 14 GiB/s.
To save doing so, the memalloc layer is now allowed to return dirty
memory, with such segments being marked with RTE_MEMSEG_FLAG_DIRTY.
The allocator tracks elements that contain dirty memory
using the new flag in the element header.
When clean memory is requested via rte_zmalloc*()
and the suitable element is dirty, it is cleared on allocation.
When memory is deallocated, the freed element is joined
with adjacent free elements, and the dirty flag is updated:
a) If the joint element contains dirty parts, it is dirty:

       dirty + freed + dirty = dirty  =>  no need to clean
               freed + dirty = dirty      the freed memory

   Dirty parts may be large (e.g. initial allocation),
   so clearing them could create unpredictable slowdown.

b) If the only dirty part of the joint element
   is the freed memory, the joint element can be made clean:

       clean + freed + clean = clean  =>  freed memory
       clean + freed         = clean      must be cleared
               freed + clean = clean
               freed         = clean
This logic naturally reproduces the old behavior
and always applies in modes when EAL memalloc layer
returns only clean segments.
As a result, memory is either cleared on free, as before,
or it will be cleared on allocation if need be, but never twice.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Hugepage mapping is a layer that EAL malloc builds upon.
There were implicit references to its details,
like mentions of segment file descriptors,
but no explicit description of its modes and operation.
Add an overview of the mechanics used on each supported OS.
Convert memory management subsections from list items
to level 4 headers: they are big and important enough.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
The KNI PMD name should be "net_kni".
Fixes: 75e2bc54c0 ("net/kni: add KNI PMD")
Cc: stable@dpdk.org
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The KNI kthreads seem to be re-scheduled at a granularity of roughly
1 millisecond right now, which seems to be insufficient for performing
tests involving a lot of control plane traffic.
Even if KNI_KTHREAD_RESCHEDULE_INTERVAL is set to 5 microseconds, it
seems that the existing code cannot reschedule at the desired
granularity, due to precision constraints of
schedule_timeout_interruptible().
In our use case, we leverage the Linux Kernel for control plane, and
it is not uncommon to have 60K - 100K pps for some signaling protocols.
Since we are not in atomic context, the usleep_range() function seems to be
more appropriate for being able to introduce smaller controlled delays,
in the range of 5-10 microseconds. Upon reading the existing code, it would
seem that this was the original intent. Adding sub-millisecond delays,
seems unfeasible with a call to schedule_timeout_interruptible().
    #define KNI_KTHREAD_RESCHEDULE_INTERVAL 5 /* us */

    schedule_timeout_interruptible(
            usecs_to_jiffies(KNI_KTHREAD_RESCHEDULE_INTERVAL));
Below, we attempt a brief comparison between the existing
implementation, which uses schedule_timeout_interruptible(), and
usleep_range().
We attempt to measure the CPU usage and RTT between two KNI interfaces,
which are created on top of vmxnet3 adapters, connected by a vSwitch.
insmod rte_kni.ko kthread_mode=single carrier=on
schedule_timeout_interruptible(usecs_to_jiffies(5))
kni_single CPU Usage: 2-4 %
[root@localhost ~]# ping 1.1.1.2 -I eth1
PING 1.1.1.2 (1.1.1.2) from 1.1.1.1 eth1: 56(84) bytes of data.
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=2.70 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=1.00 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=1.99 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=0.985 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=1.00 ms
usleep_range(5, 10)
kni_single CPU usage: 50%
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=0.338 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=0.150 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=0.123 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=0.139 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=0.159 ms
usleep_range(20, 50)
kni_single CPU usage: 24%
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=0.202 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=0.170 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=0.171 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=0.248 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=0.185 ms
usleep_range(50, 100)
kni_single CPU usage: 13%
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=0.537 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=0.257 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=0.231 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=0.143 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=0.200 ms
usleep_range(100, 200)
kni_single CPU usage: 7%
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=0.716 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=0.167 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=0.459 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=0.455 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=0.252 ms
usleep_range(1000, 1100)
kni_single CPU usage: 2%
64 bytes from 1.1.1.2: icmp_seq=1 ttl=64 time=2.22 ms
64 bytes from 1.1.1.2: icmp_seq=2 ttl=64 time=1.17 ms
64 bytes from 1.1.1.2: icmp_seq=3 ttl=64 time=1.17 ms
64 bytes from 1.1.1.2: icmp_seq=4 ttl=64 time=1.17 ms
64 bytes from 1.1.1.2: icmp_seq=5 ttl=64 time=1.15 ms
Upon testing, usleep_range(1000, 1100) seems roughly equivalent in
latency and cpu usage to the variant with schedule_timeout_interruptible(),
while usleep_range(100, 200) seems to give a decent tradeoff between
latency and cpu usage, while allowing users to tweak the limits for
improved precision if they have such use cases.
Interestingly, disabling RTE_KNI_PREEMPT_DEFAULT seems to lead to a
softlockup on my kernel:
Kernel panic - not syncing: softlockup: hung tasks
CPU: 0 PID: 1226 Comm: kni_single Tainted: G W O 3.10 #1
<IRQ> [<ffffffff814f84de>] dump_stack+0x19/0x1b
[<ffffffff814f7891>] panic+0xcd/0x1e0
[<ffffffff810993b0>] watchdog_timer_fn+0x160/0x160
[<ffffffff810644b2>] __run_hrtimer.isra.4+0x42/0xd0
[<ffffffff81064b57>] hrtimer_interrupt+0xe7/0x1f0
[<ffffffff8102cd57>] smp_apic_timer_interrupt+0x67/0xa0
[<ffffffff8150321d>] apic_timer_interrupt+0x6d/0x80
This patch also attempts to remove the RTE_KNI_PREEMPT_DEFAULT option.
References:
[1] https://www.kernel.org/doc/Documentation/timers/timers-howto.txt
Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>
Acked-by: Padraig Connolly <Padraig.J.Connolly@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Starting in meson 0.56, the functions meson.source_root() and
meson.build_root() are deprecated, to be replaced by the [more
descriptive] functions: project_source_root()/global_source_root() and
project_build_root()/global_build_root(). Unfortunately, these new
replacement functions were only added in the 0.56 release too, so to
use them we would need version checks for old/new functions to remove
the deprecation warnings.
However, the functions "current_build_dir()" and "current_source_dir()"
remain unaffected by all this, so we can bypass the versioning problem
by saving off these values to "dpdk_source_root" and "dpdk_build_root"
in the top-level meson.build file.
Bugzilla ID: 926
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Jerin Jacob <jerinj@marvell.com>
Users can create the desired number of RxQs and TxQs in DPDK. For
example, if the number of RxQs = 2 and the number of TxQs = 5,
a total of 8 file descriptors will be created for a tap device,
including RxQs, TxQs, and one for keepalive. The RxQ and TxQ
with the same ID are paired by dup(2).
In this scenario, the kernel will have 3 RxQs where packets are
incoming but never read. The reason for this is that only 2 RxQs
are polled by DPDK, while there are 5 queues in the kernel.
This patch adds a check that DPDK has the appropriate number of
queues to avoid unexpected packet drops.
Signed-off-by: Nobuhiro Miki <nmiki@yahoo-corp.jp>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
If the hardware supports IEEE 1588 PTP, the PTP capability will be set.
Currently, vec and sve burst modes are unsupported when the PTP
capability is set. For the sake of Rx/Tx performance, IEEE 1588 PTP is
not supported in sve or vec burst mode. When enabling IEEE 1588 PTP,
the Rx/Tx burst mode should be simple or common. The Rx/Tx burst mode
could be set like this, for example:
-a 0000:35:00.0,rx_func_hint=common,tx_func_hint=common
This patch supports vec and sve burst when PTP is disabled, and only
supports simple or common burst when PTP is enabled.
Fixes: 38b539d96e ("net/hns3: support IEEE 1588 PTP")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Add changes to configure the switch header type pre_l2 for cnxk.
pre_l2 headers are custom headers placed before the Ethernet
header. Along with the switch header type, the user needs to provide
the offset within the custom header that holds the size of the
custom header, and a mask for the size within the size offset.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Reviewed-by: Satheesh Paul <psatheesh@marvell.com>
Provide ethdev callback support for rx_queue_count,
rx_descriptor_status and tx_descriptor_status.
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Recent VIC models can parse GENEVE, including options, and inner
packet headers. Enable GENEVE header and option flow items. Currently,
only the first option that follows the GENEVE header can be matched,
and the GENEVE header item must specify option length.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
This patch adds support for level 2 for QoS shaping.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Old macros are kept for backward compatibility, but this causes old
macro usage to sneak in silently.
Mark the old macros as deprecated. The downside is that this will
cause some noise for applications that are using the old macros.
Fixes: 295968d174 ("ethdev: add namespace")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Add the missing capability for outer UDP Tx checksum.
Also fix the feature list in ice_dcf.ini.
Fixes: bf89db4409 ("net/ice: complete device info get in DCF")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
This patch adds support to configure channel mask which will
be used by rte flow when adding flow rules on SDP interfaces.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>