It seems sphinx >= 2.0 is inserting a <p> tag in each table cell.
The feature table (matrix) style needs to be updated to avoid
cells being too big.
The margin, padding and line height are overridden.
The percentage-based font size is replaced with an equivalent pixel size.
The border is made explicit because it disappeared for th cells.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
When a log type is registered, the level can be picked
by matching saved options.
The check of the fnmatch() globbing result was reversed.
The same bug was already fixed in a similar function;
this occurrence is in the log type registration function.
Note: the function rte_log_register_type_and_pick_level()
is not widely used and could be merged with rte_log_register().
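As a reminder of the semantics (a minimal sketch; opt->pattern, opt->level,
name and level are illustrative names, not the actual fields), fnmatch()
returns 0 on a match, so the saved level must be applied only when the
result is zero:

    #include <fnmatch.h>

    /* Apply the saved level only when the registered type name
     * matches the saved glob pattern; fnmatch() returns 0 on match.
     */
    if (fnmatch(opt->pattern, name, 0) == 0)
        level = opt->level;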
Fixes: 6ff0f81d0e ("log: fix pattern matching")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The keyword __attribute__ will emit a warning,
because it is preferred to use or define a common __rte macro.
The centralized macros may help to control or work around some compilers.
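For illustration, the preferred style looks like this (a sketch;
__rte_unused stands in for any of the centralized wrappers):

    /* Preferred: centralized DPDK wrapper from rte_common.h */
    static __rte_unused int debug_counter;

    /* Discouraged: the raw keyword, which now triggers the warning */
    static __attribute__((unused)) int legacy_counter;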
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The new macro __rte_noreturn, for compiler hinting,
is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The new macro __rte_cold, for compiler hinting,
is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
The new macro __rte_used, forcing a symbol to be generated,
is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
There is a common macro __rte_unused, avoiding warnings,
which is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
There is a macro __rte_noinline, preventing functions from being inlined,
which is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
There is a macro __rte_always_inline, forcing functions to be inlined,
which is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
There is a common macro __rte_packed for packing structs,
which is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
There is a common macro __rte_aligned for alignment,
which is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
The keyword alignas can be replaced with the __rte_aligned macro
for consistency and to allow compiler compatibility control.
The macro __rte_cache_aligned is a shortcut combining __rte_aligned
and the RTE_CACHE_LINE_SIZE constant.
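A minimal sketch of the replacement (the struct and buffer names are
made up):

    #include <stdint.h>
    #include <rte_memory.h>   /* __rte_aligned, __rte_cache_aligned */

    /* Instead of: alignas(RTE_CACHE_LINE_SIZE) in the definition */
    struct ring_stats {
        uint64_t enq;
        uint64_t deq;
    } __rte_cache_aligned;  /* i.e. __rte_aligned(RTE_CACHE_LINE_SIZE) */

    /* The underlying macro can take any alignment */
    static uint8_t scratch[128] __rte_aligned(16);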
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The macros RTE_MARKER and __rte_cache_aligned can be used,
for consistency, to describe MEMIF_CACHELINE_ALIGN_MARK.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
There is a macro RTE_FINI for destructors,
which is now used where appropriate for consistency.
The destructor function mlx5_pmd_socket_uninit does not need
to be declared separately in mlx5.h.
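For reference, the macro lets the destructor live next to its
implementation (a sketch; the body is illustrative):

    #include <rte_common.h>

    /* Registered as a destructor and run automatically at exit,
     * so no separate declaration in a header is needed.
     */
    RTE_FINI(mlx5_pmd_socket_uninit)
    {
        /* close the PMD socket, remove the socket file, ... */
    }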
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
rte_ipsec has a dependency on rte_hash,
so librte_hash needs to be compiled before librte_ipsec.
Add the DEPDIRS entries to make sure of this.
Fixes: 3feb23609c ("ipsec: add SAD create/destroy implementation")
Cc: stable@dpdk.org
Reported-by: Raslan Darawsheh <rasland@mellanox.com>
Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Add redundant stack variable initialization to work around
false-positive warnings in older versions of GCC.
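The change is of this general shape (the variable name is made up; the
actual variables are in the DSW migration code):

    /* Older GCC versions cannot prove 'target_port' is assigned on
     * every path before use and emit -Wmaybe-uninitialized, so give
     * it a redundant initial value.
     */
    uint8_t target_port = 0;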
Fixes: 1f2b99e8d9 ("event/dsw: improve migration mechanism")
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Currently, when using a bitmap as a resource allocator, all the
bitmap bits should be set after bitmap creation to indicate that the
bits are available. Every time one bit is allocated, the set bits are
searched and the chosen bit is cleared to mark it as in use.
Add a new rte_bitmap_init_with_all_set() function to quickly
fill up the bitmap bits.
Compared with creating the bitmap empty and setting the bits
one by one, the new function costs fewer cycles.
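A usage sketch, assuming the new function takes the same arguments as
rte_bitmap_init():

    #include <rte_bitmap.h>
    #include <rte_malloc.h>

    uint32_t n_bits = 1024;
    uint32_t size = rte_bitmap_get_memory_footprint(n_bits);
    void *mem = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);

    /* All bits start set, i.e. every resource is free, without a
     * per-bit rte_bitmap_set() loop after rte_bitmap_init().
     */
    struct rte_bitmap *bmp =
        rte_bitmap_init_with_all_set(n_bits, mem, size);

    /* Allocate one resource: scan for a set bit and clear it. */
    uint32_t pos;
    uint64_t slab;
    if (rte_bitmap_scan(bmp, &pos, &slab)) {
        uint32_t idx = pos + __builtin_ctzll(slab);
        rte_bitmap_clear(bmp, idx);
    }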
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Meson detects the path /proc/sys/vm/nr_hugepages in the call to cat
in app/test/meson.build and then adds it as a build dependency.
This causes a build loop if the timestamp of this file keeps changing.
It is fixed by hiding the hugepage check in a shell script.
Fixes: 77784ef0fb ("test: allow no-huge mode for fast-tests")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Tested-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Reviewed-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: Aaron Conole <aconole@redhat.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The function add_stylesheet() is deprecated since sphinx 1.8.
It will be removed in sphinx 4.0.
It is replaced by add_css_file().
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Use C11 atomics with RELAXED ordering instead of rte_atomic ops, which
enforce unnecessary barriers on arm64.
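The pattern of the change, roughly (the counter name is illustrative;
the field type also moves from rte_atomic64_t to a plain uint64_t):

    /* Before: legacy API, full barriers on arm64 */
    rte_atomic64_add(&stats->tx_pkts, count);

    /* After: compiler builtin with relaxed memory ordering */
    __atomic_fetch_add(&stats->tx_pkts, count, __ATOMIC_RELAXED);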
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Currently, the l2fwd-event application statically configures adjacent
ports as destination ports for forwarding the traffic.
Add a config option to pass the forwarding port pair mapping, which allows
the user to configure the destination port for each port.
If no config argument is specified, the destination port map is not
changed and traffic gets forwarded with the existing mapping.
To align the port/queue configuration of each lcore with the destination
port map, the port/queue configuration of each lcore is modified when the
config option is specified.
Ex: ./l2fwd-event -c 0xff -- -p 0x3f -q 2 --config="(0,3)(1,4)(2,5)"
With the above config option, traffic received on port 0 gets forwarded
to port 3 and vice versa; similarly, traffic gets forwarded on the other
port pairs (1,4) and (2,5).
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
DSW keeps an internal port load estimate, used by the load balancing
mechanism. As a side effect, it keeps track of the total number of
busy cycles since startup. This metric is indirectly exposed in the
form of DSW xstats' "port_<n>_event_proc_latency", which is the total
number of busy cycles divided by the total number of events processed
on a particular port.
An external application can take (event_latency * dequeued) to go back
to busy_cycles. One reason for doing this is to measure the port's
load during a longer time period, without resorting to sampling
"port_<n>_load". However, as the number dequeued events grows, a
rounding error in event_latency renders the application-calculated
busy_cycles inaccurate.
Thus, it makes sense to directly expose the number of busy cycles as a
DSW xstats, even though it might seem redundant.
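A rough, made-up illustration of that error:

    true busy cycles:  250.4 cycles/event * 100000000 events = 25040000000
    reconstructed:     250   cycles/event * 100000000 events = 25000000000
    error:             40000000 cycles, growing with the event count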
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
On dequeue, polling the control ring once is enough.
Fixes: f6257b22e7 ("event/dsw: add load balancing")
Cc: stable@dpdk.org
Suggested-by: Ola Liljedahl <ola.liljedahl@arm.com>
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
DSW limits the rate of migrations on a per-port basis. Hence, as the
number of cores grows, so does the total migration capacity.
In high core-count systems, this allows for a situation where flows
are migrated to a lightly loaded port which recently already received
a number of new flows (from other ports). The processing load
generated by these new flows may not yet be reflected in the lightly
loaded port's load estimate. The result is that the previously lightly
loaded port is now overloaded.
This patch adds a rough estimate of the size of the inbound migrations
to a particular port, which can be factored into the migration logic,
avoiding the above problem.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Allow moving multiple flows in one migration transaction, to
rebalance the load more quickly.
Introduce a threshold to avoid migrating flows between ports with very
similar load.
Simplify logic for selecting which flow to migrate. The aim is now to
move flows in such a way that the receiving port is as lightly-loaded
as possible (after receiving the flow), while still migrating enough
flows from the source port to reduce its load. This is essentially how
the legacy strategy worked as well, but the code is more readable.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
To allow visualization of migrations, track the number of flow
immigrations in "port_<N>_immigrations". The "port_<N>_migrations"
xstat retains its legacy semantics, but is renamed "port_<N>_emigrations".
Expose the number of events currently undergoing processing
(i.e. pending releases) at a particular port.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Reduce the maximum number of DSW flows from 32k to 8k, to be able
to rebalance the load faster.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
In DSW, in case a port can't produce any events for the application to
consume, the port is considered idle.
To slightly reduce wall-time latency, flush the port's output buffer
in case of such an empty dequeue.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Each workslot is always bound to a specific lcore, so there is no
multi-core contention to cause cache thrashing; as a result, it is safe
to remove the WFE. Also, in dual-workslot dequeue, work will most likely
be available on the pair workslot, making WFE impractical.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Remove setting ALLOW_EXPERIMENTAL_API individually for each Makefile and
meson.build. Instead, enable ALLOW_EXPERIMENTAL_API flag across app, lib
and drivers.
This change reduces the clutter across the project while still
maintaining the functionality of ALLOW_EXPERIMENTAL_API, i.e. warning
external applications about experimental API usage.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Added cpu-crypto fallback option parsing, as well as tests for it.
Signed-off-by: Mariusz Drost <mariuszx.drost@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
In nitrox_sym_pmd_create(), the name array will overflow if the PCI
device name is longer than 57 bytes. To fix this issue, subtract the
PCI device name length from the array length while appending the
substring to the name.
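A minimal sketch of the bounded append (the suffix and the device
pointer name are illustrative):

    char name[RTE_CRYPTODEV_NAME_MAX_LEN];

    /* Copy the PCI device name, truncating if it is too long... */
    snprintf(name, sizeof(name), "%s", pdev->name);
    /* ...then append the suffix using only the space actually left,
     * i.e. the array length minus the PCI device name length.
     */
    strncat(name, "_sym", sizeof(name) - strlen(name) - 1);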
Coverity issue: 349926
Fixes: 9fdef0cc23 ("crypto/nitrox: create symmetric cryptodev")
Cc: stable@dpdk.org
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
If the NPS_PKT ring/port is greater than 8191, the NPS_PKT*() macros
will evaluate to incorrect values due to unintended sign extension from
int to unsigned long. To fix this, add the UL suffix to the constants in
these macros. The AQMQ_QSZX() macro has the same problem.
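An illustration of the overflow (the register offsets below are made up,
not the real Nitrox layout):

    /* Before: every operand is a plain int, so for ring >= 8192 the
     * product (ring) * 0x40000 exceeds INT_MAX; the negative int is
     * then sign-extended when used as a 64-bit register offset.
     */
    #define PKT_RING_OFF(ring)       (0x10000 + (ring) * 0x40000)

    /* After: the UL suffix makes the arithmetic unsigned long */
    #define PKT_RING_OFF_FIXED(ring) (0x10000UL + (ring) * 0x40000UL)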
Coverity issue: 349899, 349905, 349911, 349921, 349923
Fixes: 32e4930d5a ("crypto/nitrox: add hardware queue management")
Fixes: 0a8fc2423b ("crypto/nitrox: introduce Nitrox driver")
Cc: stable@dpdk.org
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch adds handling of the mixed hash-cipher algorithms
available on GEN2 QAT with particular firmware versions.
The documentation is also updated to show that the mixed crypto
algorithms are supported on QAT GEN2.
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
This patch adds a function for retrieving the QAT firmware
version, required to check the internal capabilities that
depend on the FW version.
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Modification to the vector parameters used for unit testing,
coverage and performance testing of bbdev drivers
across all devices.
Updating and reducing the list for focused coverage of relevant
code blocks for 4G and 5G. There is less focus on 4G TB mode, as there
is some question about how to best support it given mbuf limitations,
and it is in effect not used.
Removing scenarios with negative LLR assumptions, which are not
used with any PMD and are historical only.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Adding support for the offload latency tests when
using the LDPC encoder and decoder operations.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Adding the missing implementation of the interrupt tests
for LDPC encoders and decoders.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Includes support for BLER (Block Error Rate) wireless
performance test with new arguments for SNR and number
of iterations for 5G. This generates LLRs for a given
SNR level, then measures the ratio of code blocks that are
successfully decoded or not.
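For reference, the reported ratio follows the standard definition:

    BLER = (code blocks not decoded successfully) / (total code blocks)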
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Adding functionality to validate HARQ for different device
implementations.
Adding the capability to fetch HARQ data when required as
part of this validation.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Self-contained and cosmetic renaming of a macro
so that it is more explicit for future extension.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This is to support cases where the input data for
decoding a code block is larger than 64 kB and would
not fit as a contiguous block of data in one
mbuf. In that case, the length from the operation
supersedes the mbuf default structure.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
The actual LLR representation was incorrectly assumed to be 2
instead of 4. This would impact wireless performance but is not
critical enough to be back-ported to LTS branches.
Fixes: c769c71175 ("baseband/turbo_sw: extend for 5G")
Cc: stable@dpdk.org
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Dave Burley <dave.burley@accelercomm.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>