Packet mirroring is configured through slots and sessions, with the
number of slots and sessions set at initialization time.
The new "mirror" instruction assigns one of the existing sessions to a
specific slot, which schedules a mirror operation for the current
packet; the mirror copy is produced later, when the packet is either
transmitted or dropped.
Several copies of the same input packet can be mirrored to different
output ports by using multiple slots.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Kamalakannan R <kamalakannan.r@intel.com>
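For illustration, a minimal init-time sketch is given below; the struct
and function names follow my reading of the rte_swx_pipeline mirroring
API added by this series and should be treated as assumptions rather
than a reference.

    #include <rte_swx_pipeline.h>

    /* Assumed names: rte_swx_pipeline_mirroring_config() and its params
     * struct. The number of slots and sessions is fixed at init time. */
    static int
    pipeline_mirroring_init(struct rte_swx_pipeline *p)
    {
        struct rte_swx_pipeline_mirroring_params params = {
            .n_slots = 4,     /* max copies of one packet */
            .n_sessions = 16, /* mirror destinations that can be set up */
        };

        return rte_swx_pipeline_mirroring_config(p, &params);
    }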
Add packet clone operation to the output ports in order to support
packet mirroring.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Kamalakannan R <kamalakannan.r@intel.com>
Add support for arguments to the default action of regular and learner
tables at initialization time. Until now, only default actions with no
arguments were accepted in the .spec file.
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Yogesh Jangra <yogesh.jangra@intel.com>
Fix the emit instruction for the pathological case where all the
headers to be emitted are invalid. In this case, the for loop was
essentially skipped and the last emitted header (or an invalid memory
location) was corrupted: its size was set to 0 by the assignment to
ho->n_bytes right after the loop.
Fixes: d60dbdc88a ("pipeline: create inline functions for emit instruction")
Cc: stable@dpdk.org
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Signed-off-by: Venkata Suresh Kumar P <venkata.suresh.kumar.p@intel.com>
Signed-off-by: Yogesh Jangra <yogesh.jangra@intel.com>
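A minimal sketch of the intended guard is shown below; the names are
illustrative and do not reflect the actual pipeline internals.

    #include <stdint.h>

    struct hdr { int valid; uint32_t n_bytes; };

    /* Only update the tracked header-out entry when at least one valid
     * header was emitted; the bug was the unconditional write after the
     * loop, which clobbered state when every header was invalid. */
    static void
    emit_headers(struct hdr *ho, const struct hdr *h, uint32_t n)
    {
        uint32_t i, n_bytes = 0, n_emitted = 0;

        for (i = 0; i < n; i++) {
            if (!h[i].valid)
                continue;
            n_bytes += h[i].n_bytes;
            n_emitted++;
        }

        if (n_emitted != 0)
            ho->n_bytes = n_bytes;
    }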
On NUMA systems, default cores (0 and 1) might be on different memory
nodes. Double the amount of memory.
Fixes: 9e6b36c34c ("app/testpmd: reduce memory consumption")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
The shell scripts used in documentation generation could not run on
Windows. Rewrite them in Python.
The new scripts use proper path separators and handle paths with spaces.
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
API documentation index had spaces between link caption and URL,
which may be unsupported by some Markdown implementations.
That is, "[caption](URL)" is valid but "[caption] (URL)" is not.
The problematic behavior is observed with Doxygen on Windows.
Remove the spaces.
Unfortunately, Markdown syntax is not formally specified.
Fixes: 9bf486e606 ("doc: generate HTML for API with doxygen")
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
The CSS for the API documentation was customized by a shell script
that modified the file Doxygen produces. That approach kept the CSS
code inside a script and added an extra build step.
Move the custom style to a plain CSS file.
Use the Doxygen capability to attach an extra stylesheet.
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Enable printing of the outer VLAN if flags indicate it is present.
Cc: stable@dpdk.org
Signed-off-by: Ben Magistro <koncept1@gmail.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Also mark some conditional functions as const.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
If a /32 route is entered in the RIB, the traversal code will not see
the end of the tree. This is due to attempting a shift by a negative
amount, which is undefined behaviour in C.
Fix it by checking for the maximum depth, as is already done in rib6.
Fixes: 5a5793a5ff ("rib: add RIB library")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
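For illustration only (this is not the library code), the class of fix
looks like the guard below for a 32-bit IPv4 key:

    #include <stdint.h>

    /* At depth 32 the shift amount would be negative, which is undefined
     * behaviour, so the maximum depth is handled explicitly, mirroring
     * what rib6 already does. */
    static uint32_t
    bit_below_prefix(uint8_t depth)
    {
        if (depth == 32)
            return 0; /* a /32 has no bits below the prefix */
        return UINT32_C(1) << (31 - depth);
    }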
According to RFC 791, options may or may not appear in datagrams.
They must be implemented by all IP modules (hosts and gateways).
What is optional is their transmission in any particular datagram,
not their implementation. So options have to be dealt with during the
fragmenting process.
Add some test data for fragmenting IPv4 headers that carry options.
Signed-off-by: Huichao Cai <chcchc88@163.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
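A small sketch (not the library code) of why the fragmenting path must
account for options:

    #include <rte_ip.h>

    /* A header carrying options has IHL > 5, so each fragment's header
     * length must be derived from IHL instead of assuming 20 bytes. */
    static inline uint8_t
    ipv4_hdr_len(const struct rte_ipv4_hdr *hdr)
    {
        return (hdr->version_ihl & 0x0f) * 4; /* 20..60 bytes */
    }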
Added a new API flag to enable or disable TC oversubscription (TC OV)
for the best-effort traffic class at the subport level.
By default, TC OV is enabled.
Signed-off-by: Marcin Danilewicz <marcinx.danilewicz@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
This patch adds support for the copy, submit, completed and
completed-status functionality of the DMA driver.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
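For context, an illustrative use of the generic dmadev data path that
these driver ops implement (dev_id, vchan and the IOVA addresses are
placeholders):

    #include <stdbool.h>
    #include <rte_dmadev.h>

    static int
    dma_copy_one(int16_t dev_id, uint16_t vchan, rte_iova_t src,
                 rte_iova_t dst, uint32_t len)
    {
        uint16_t last_idx;
        bool has_error = false;

        if (rte_dma_copy(dev_id, vchan, src, dst, len, 0) < 0)
            return -1;
        rte_dma_submit(dev_id, vchan);

        /* poll until the single copy completes */
        while (rte_dma_completed(dev_id, vchan, 1, &last_idx,
                                 &has_error) == 0)
            ;
        return has_error ? -1 : 0;
    }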
Add additional PMD APIs for the DPAA2 QDMA driver to configure RBP,
Ultra Short format, and Scatter Gather support.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch adds support for basic DMA operations, which include
device capability query and channel setup.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
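For context, an illustrative generic dmadev setup that these driver ops
back (dev_id and the sizes are placeholders):

    #include <rte_dmadev.h>

    static int
    dma_dev_init(int16_t dev_id)
    {
        struct rte_dma_info info;
        struct rte_dma_conf conf = { .nb_vchans = 1 };
        struct rte_dma_vchan_conf vconf = {
            .direction = RTE_DMA_DIR_MEM_TO_MEM,
            .nb_desc = 512,
        };

        /* query the device capability, then set up one channel */
        if (rte_dma_info_get(dev_id, &info) != 0 ||
            !(info.dev_capa & RTE_DMA_CAPA_MEM_TO_MEM))
            return -1;
        if (rte_dma_configure(dev_id, &conf) != 0)
            return -1;
        if (rte_dma_vchan_setup(dev_id, 0, &vconf) != 0)
            return -1;
        return rte_dma_start(dev_id);
    }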
The DPAA2 DMA driver is an implementation of the dmadev APIs that
provide the means to initiate a DMA transaction from the CPU.
Earlier this was part of the RAW driver, but with DMA drivers added
as a separate flavor of drivers, this driver is being moved to the
DMA drivers.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
With DMA devices supported as a separate flavor of devices, the
DPAA2 QDMA driver is moved to the DMA devices.
This change removes the DPAA2 QDMA driver from the raw devices.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Within an ACL rule, an IPv6 address can be represented in different
ways: either as 4x4B fields, or as 2x8B fields.
Until now, only the first format was supported.
Extend test-acl to support both formats, mainly for testing and
demonstration purposes.
To control the desired behavior, the '--ipv6' command-line option is
extended to accept an optional argument. To be more precise:
'--ipv6' - use 4x4B fields format (default behavior)
'--ipv6=4B' - use 4x4B fields format (default behavior)
'--ipv6=8B' - use 2x8B fields format
Also replace the home-brewed IPv4/IPv6 address parsing with
inet_pton() calls.
Signed-off-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
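The two layouts view the same 128-bit address differently, roughly as
in this illustrative sketch:

    #include <stdint.h>

    union ipv6_acl_addr {
        uint32_t u32[4]; /* --ipv6 / --ipv6=4B: four 4-byte fields */
        uint64_t u64[2]; /* --ipv6=8B: two 8-byte fields */
    };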
In theory, the ACL library allows fields 8 bytes long.
In practice, though, they are usually not used or tested, and as
revealed by Ido, this functionality does not work properly.
There are a few places inside the ACL build code path that need to be
addressed.
Bugzilla ID: 673
Fixes: dc276b5780 ("acl: new library")
Cc: stable@dpdk.org
Reported-by: Ido Goshen <ido@cgstowernetworks.com>
Signed-off-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Tested-by: Ido Goshen <ido@cgstowernetworks.com>
The AltiVec header file defines "vector", except in C++ builds.
The keyword "vector" may easily conflict.
As a rule, it is better to use the alternative keyword "__vector",
so that it becomes possible to #undef vector after including the
AltiVec header.
Later it may become possible to #undef vector in rte_altivec.h, at
the cost of a compatibility breakage.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
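A minimal sketch of the rule being applied (assuming a POWER build with
AltiVec enabled):

    #include <rte_altivec.h>

    /* Prefer the alternative keyword, so that "vector" can eventually be
     * #undef'ed after including the AltiVec header without breaking this. */
    static inline __vector unsigned int
    splat_zero(void)
    {
        return vec_splats(0U);
    }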
Currently, pmd_perf_autotest() in continuous mode tries to enqueue
MAX_TRAFFIC_BURST packets completely before starting the test. Some
drivers cannot accept the complete MAX_TRAFFIC_BURST even though the
Rx+Tx descriptor count can fit it.
This patch changes the behaviour to stop enqueuing after a few
retries.
Fixes: 002ade70e9 ("app/test: measure cycles per packet in Rx/Tx")
Cc: stable@dpdk.org
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
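A minimal sketch of the bounded-retry enqueue (the retry limit here is
illustrative, not the value used by the test):

    #include <rte_ethdev.h>

    #define MAX_RETRIES 8

    static uint16_t
    enqueue_with_retries(uint16_t port, uint16_t queue,
                         struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        uint16_t sent = 0;
        unsigned int retries = 0;

        /* stop after a few retries instead of insisting that the whole
         * burst is accepted */
        while (sent < nb_pkts && retries++ < MAX_RETRIES)
            sent += rte_eth_tx_burst(port, queue, pkts + sent,
                                     nb_pkts - sent);
        return sent;
    }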
Enable the GPU_REGISTERED flag in the gpu/cuda driver in the memory
list. If a CPU-mapped GPU memory address is freed before being
unmapped, the CUDA driver unmaps it before freeing the memory.
Signed-off-by: Elena Agostini <eagostini@nvidia.com>
FreeBSD has updated its CPU macros to align more with the definitions
used on Linux[1]. Unfortunately, while this improves compatibility in
the future, it means we need to support both the legacy and the newer
definitions. Use a meson check to determine which set of macros is
used.
[1] https://cgit.freebsd.org/src/commit/?id=e2650af157bc
Bugzilla ID: 1014
Fixes: c3568ea376 ("eal: restrict control threads to startup CPU affinity")
Fixes: b6be16acfe ("eal: fix control thread affinity with --lcores")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Daxue Gao <daxuex.gao@intel.com>
Forcing inlining in test_ring_enqueue and test_ring_dequeue can cause
the compiled code to grow extensively when compiled with no
optimization (-O0 or -Og), which is the default in meson's debug
configuration. This can collide with compiler bugs and cause issues
during linking of the unit tests, where api_type and esize are
non-const variables, causing an inlining cascade. This is not the case
in the perf tests, as esize and api_type are const values there.
One such case was discovered when porting DPDK to RISC-V. GCC 11.2
(with no fix yet in 12.1) generates a short relative jump instruction
(J <offset>) for goto and for loops. When the loop body grows
extensively in the ring test, the target offset goes beyond the
supported offset of +/- 1MB from PC. This is an obvious bug in GCC, as
RISC-V has a two-instruction construct to jump to any absolute address
(AUIPC+JALR). However, there is no reason to force inlining, as the
test code works perfectly fine without it.
GCC has a bug report for a similar case (with conditionals):
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93062
Fixes: a9fe152363 ("test/ring: add custom element size functional tests")
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
lpm_process_event_pkt() can process a packet using either an
architecture-specific path (defined for x86/SSE, Arm/NEON and
PPC64/AltiVec) or a scalar one. The choice, however, is made using an
#ifdef pre-processor macro, so the scalar version was apparently not
widely exercised or compiled.
Due to some copy/paste errors, the scalar logic in
lpm_process_event_pkt() retained a "continue" statement where it
should use rfc1812_process() and return the port or BAD_PORT.
Fixes: 99fc91d180 ("examples/l3fwd: add event lpm main loop")
Signed-off-by: Stanislaw Kardach <kda@semihalf.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Caught by ASan: if a secondary process tried to attach a device with
an incorrect driver name, the devargs were leaked.
Fixes: 64051bb1f1 ("devargs: unify scratch buffer storage")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Calls to rte_memcpy for 1 < n < 16 could result in unaligned
loads/stores, which is undefined behaviour according to the C
standard, and strict aliasing violations.
The code was changed to use a packed structure that allows aliasing
(using the __may_alias__ attribute) to perform the load/store
operations. This results in code that has the same performance as the
original code and that is also C standards-compliant.
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Luc Pelletier <lucp.at.work@gmail.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
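A minimal sketch of the technique (the real rte_memcpy covers more
sizes):

    #include <stdint.h>

    /* A packed, __may_alias__ wrapper lets the compiler emit a potentially
     * unaligned 4-byte load/store without violating strict aliasing. */
    typedef struct {
        uint32_t v;
    } __attribute__((__packed__, __may_alias__)) unaligned_u32;

    static inline void
    copy4(void *dst, const void *src)
    {
        ((unaligned_u32 *)dst)->v = ((const unaligned_u32 *)src)->v;
    }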
Establish a unit test for the thread API, with initial unit tests
for rte_thread_{get,set}_affinity_by_id().
Signed-off-by: Narcisa Vasile <navasile@linux.microsoft.com>
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Implement functions for getting/setting thread affinity.
Threads can be pinned to specific cores by setting their
affinity attribute.
Windows error codes are translated to errno-style error codes.
The possible return values are chosen so that we have as
much semantic compatibility between platforms as possible.
Note: convert_cpuset_to_affinity has the limitation that all CPUs of
the set must belong to the same processor group.
Signed-off-by: Narcisa Vasile <navasile@linux.microsoft.com>
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
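An illustrative use of the new calls (signatures as I read them from
this series; treat the exact names as assumptions):

    #include <rte_thread.h>

    /* pin the calling thread to CPU 2 */
    static int
    pin_self_to_cpu2(void)
    {
        rte_cpuset_t cpuset;

        CPU_ZERO(&cpuset);
        CPU_SET(2, &cpuset);
        return rte_thread_set_affinity_by_id(rte_thread_self(), &cpuset);
    }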
Provide a portable, type-safe thread identifier.
Provide rte_thread_self() for obtaining the current thread identifier.
Signed-off-by: Narcisa Vasile <navasile@linux.microsoft.com>
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Merge the CRC32 hash calculation public API implementations for x86
and Arm. Select the best available CRC32 algorithm when an application
requests an algorithm that is unsupported on the given CPU
architecture.
Previously, if an application directly included `rte_crc_arm64.h`
without including `rte_hash_crc.h`, it would fail to compile.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
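An illustrative caller, assuming the fallback behaviour described
above:

    #include <rte_hash_crc.h>

    /* request the Arm algorithm; the library silently falls back to the
     * best available implementation on other CPUs */
    static uint32_t
    crc_of(const void *data, uint32_t len)
    {
        rte_hash_crc_set_alg(CRC32_ARM64);
        return rte_hash_crc(data, len, 0xffffffff);
    }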
Split x86 and SW hash crc intrinsics into separate files.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
If an event queue flush does not complete after a fixed number of tries,
remaining queues are flushed before retrying the one with incomplete
flush.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Added an API to set queue attributes at runtime and an API to get
weight and affinity.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Since the mailbox is now accessed from multiple threads, use a lock
to synchronize access.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Added test cases to test changing the queue QoS attributes (priority,
weight and affinity) at runtime.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Extended the eventdev queue QoS attributes to support weight and
affinity.
If queues are of the same priority, events from the queue with the
highest weight will be scheduled first. Affinity indicates the number
of times subsequent schedule calls from an event port will use the
same event queue. A schedule call selects another queue if the current
queue goes empty or the schedule count reaches the affinity count.
To avoid an ABI break, the weight and affinity attributes are not yet
added to the queue config structure; managing them is left to the PMD.
The new eventdev op queue_attr_get can be used to get them from the
PMD.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Added a new eventdev API, rte_event_queue_attr_set(), to change event
queue attributes at runtime from the values set during initialization
using rte_event_queue_setup(). PMDs supporting this feature should
expose the capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
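An illustrative caller (the attribute ID name follows my reading of
this series and should be treated as an assumption):

    #include <errno.h>
    #include <rte_eventdev.h>

    /* raise a queue's weight at runtime, if the PMD supports it */
    static int
    set_queue_weight(uint8_t dev_id, uint8_t queue_id, uint64_t weight)
    {
        struct rte_event_dev_info info;

        rte_event_dev_info_get(dev_id, &info);
        if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
            return -ENOTSUP;
        return rte_event_queue_attr_set(dev_id, queue_id,
                                        RTE_EVENT_QUEUE_ATTR_WEIGHT,
                                        weight);
    }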
Quiesce the event ports used by the worker cores on exit to free up
any outstanding resources.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Quiesce the event ports used by the worker cores on exit to free up
any outstanding resources.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add a function to quiesce any core-specific resources consumed by the
event port.
When the application decides to migrate the event port to another
lcore or tear down the current lcore, it may call
`rte_event_port_quiesce` to make sure that all the data associated
with the event port is released from the lcore; this might also
include any prefetched events.
While releasing the event port from the lcore, this function calls the
user-provided flush callback once per event.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
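An illustrative teardown sketch (the flush callback signature follows
my reading of the new API and is an assumption):

    #include <rte_eventdev.h>
    #include <rte_mbuf.h>

    /* invoked once per event still held by the port */
    static void
    port_flush_cb(uint8_t dev_id, struct rte_event ev, void *arg)
    {
        RTE_SET_USED(dev_id);
        RTE_SET_USED(arg);
        rte_pktmbuf_free(ev.mbuf); /* drop whatever was prefetched */
    }

    static void
    worker_teardown(uint8_t dev_id, uint8_t port_id)
    {
        rte_event_port_quiesce(dev_id, port_id, port_flush_cb, NULL);
    }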
Event ports are configured to implicitly release the scheduler
contexts currently held, in the next call to
rte_event_dequeue_burst().
A worker core might still hold a scheduling context during exit, as
the next call to rte_event_dequeue_burst() is never made.
This might lead to a deadlock, depending on the worker exit timing,
when there are very few flows.
Add a cleanup function to release any scheduling contexts held by the
worker, using RTE_EVENT_OP_RELEASE.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
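A sketch of the worker-exit cleanup described above (the real examples
also process or free the drained events):

    #include <rte_eventdev.h>

    static void
    worker_cleanup(uint8_t dev_id, uint8_t port_id)
    {
        struct rte_event ev;

        /* drain events still held by the port and hand their scheduling
         * contexts back with RTE_EVENT_OP_RELEASE */
        while (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) != 0) {
            ev.op = RTE_EVENT_OP_RELEASE;
            rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
        }
    }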
Event ports are configured to implicitly release the scheduler
contexts currently held, in the next call to
rte_event_dequeue_burst().
A worker core might still hold a scheduling context during exit, as
the next call to rte_event_dequeue_burst() is never made.
This might lead to a deadlock, depending on the worker exit timing,
when there are very few flows.
Add a cleanup function to release any scheduling contexts held by the
worker, using RTE_EVENT_OP_RELEASE.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Event ports are configured to implicitly release the scheduler
contexts currently held, in the next call to
rte_event_dequeue_burst().
A worker core might still hold a scheduling context during exit, as
the next call to rte_event_dequeue_burst() is never made.
This might lead to a deadlock, depending on the worker exit timing,
when there are very few flows.
Add a cleanup function to release any scheduling contexts held by the
worker, using RTE_EVENT_OP_RELEASE.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Event ports are configured to implicitly release the scheduler
contexts currently held, in the next call to
rte_event_dequeue_burst().
A worker core might still hold a scheduling context during exit, as
the next call to rte_event_dequeue_burst() is never made.
This might lead to a deadlock, depending on the worker exit timing,
when there are very few flows.
Add a cleanup function to release any scheduling contexts held by the
worker, using RTE_EVENT_OP_RELEASE.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>