rte_security has been experimental since the DPDK 17.11 release.
The library has now matured, and the experimental tag is removed
in this patch.
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Anoob Joseph <anoob.joseph@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Boris Pismenny <borisp@mellanox.com>
When a device is hot-unplugged, the hot-unplug handler is invoked by the
uio remove event and the device is detached; the kernel then sends
another pci remove event. If any unlock is missed along this path, a
deadlock will occur. This patch adds the missing unlock to the
hot-unplug handler.
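A minimal sketch of the pattern (purely illustrative names, not the
actual EAL symbols): every exit path of the handler, including the early
ones, must release the lock.
  #include <pthread.h>
  static pthread_mutex_t failure_handle_lock = PTHREAD_MUTEX_INITIALIZER;
  /* detach_device() stands in for the real detach logic */
  static int detach_device(void *dev) { (void)dev; return 0; }
  static void handle_hot_unplug(void *dev)
  {
          pthread_mutex_lock(&failure_handle_lock);
          if (detach_device(dev) != 0)
                  goto unlock;    /* this early path used to skip the unlock */
          /* ... notify the application, schedule removal ... */
  unlock:
          pthread_mutex_unlock(&failure_handle_lock);
  }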
Fixes: 0fc54536b14a ("eal: add failure handling for hot-unplug")
Signed-off-by: Jeff Guo <jia.guo@intel.com>
In rte_efd_create(), the write lock is already released before the ring
creation itself, so the second unlock after the ring creation has been
removed.
Fixes: 56b6ef874f80 ("efd: new Elastic Flow Distributor library")
Cc: stable@dpdk.org
Signed-off-by: Chaitanya Babu Talluri <tallurix.chaitanya.babu@intel.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Removed the use of MAP_HUGETLB for anonymous mapping on ppc64.
MAP_HUGETLB had previously been added to work around issues on IBM
Power8 systems when mapping /dev/zero.
In the current code, the MAP_HUGETLB flag causes the anonymous mapping
to fail on Power9.
Note that Power8 currently fails to mmap hugepages correctly, with and
without this change.
Fixes: 284ae3e9ff9a ("eal/ppc: fix mmap for memory initialization")
Signed-off-by: David Wilder <dwilder@us.ibm.com>
Reviewed-by: Pradeep Satyanarayana <pradeep@us.ibm.com>
Two memory locations may be adjacent in virtual memory yet belong to
different segment lists. With the current code, such segments will be
concatenated. Fix the adjacency check to also verify that the adjacent
malloc elements belong to the same memseg list.
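A minimal sketch of the strengthened check (field names modeled on the
malloc element layout, simplified here):
  #include <stddef.h>
  struct rte_memseg_list;                 /* opaque in this sketch */
  struct malloc_elem {
          struct rte_memseg_list *msl;    /* list the element belongs to */
          size_t size;                    /* total element size */
          /* ... */
  };
  /* join only if virtually adjacent AND from the same memseg list */
  static int elems_can_join(const struct malloc_elem *elem,
                  const struct malloc_elem *next)
  {
          return (const char *)elem + elem->size == (const char *)next &&
                  elem->msl == next->msl;
  }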
Fixes: 66cc45e293ed ("mem: replace memseg with memseg lists")
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
For IOVA as VA mode, we assume that memory is contiguous. However,
for external segments that assumption may not necessarily hold.
Fix the code to not assume that external memory segments are
contiguous even in IOVA as VA mode.
Fixes: 5282bb1c3695 ("mem: allow memseg lists to be marked as external")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
The rte_eal_get_runtime_dir() function is currently being declared in two
header files.
This API was made public in commit 6911c9fd8fbe ("eal: export function to
get runtime directory"), adding it to rte_eal.h. To make it public, the
'rte' prefix was added to the function, so it needed to be modified in
the original location of the declaration, eal_filesystem.h. Because the
declaration was only modified there, and not removed, it is now a
duplicate.
This patch removes the declaration from eal_filesystem.h.
Fixes: 6911c9fd8fbe ("eal: export function to get runtime directory")
Reported-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
This patch fixes the Coverity issue of logically dead code.
Coverity issue: 323523
Fixes: 96303217a606 ("pipeline: add symmetric crypto table action")
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
The `local_conf` variable was needed for offload conversions but is no
longer required. There is no functional difference; only the interim
variable is eliminated.
Fixes: ab3ce1e0c193 ("ethdev: remove old offload API")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
The device information cannot be retrieved correctly before the
configuration is set, because on some NICs the information depends on
the configuration.
Fixes: 3be82f5cc5e3 ("ethdev: support PMD-tuned Tx/Rx parameters")
Cc: stable@dpdk.org
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
The new configuration is stored during the rte_eth_dev_configure() API
call, but the API may fail. After a failure, the stored configuration is
invalid since it is not fully applied to the device.
We had better roll the configuration back after a failure.
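A minimal sketch of the roll-back, assuming the usual ethdev layout
where the active configuration lives in dev->data->dev_conf
(apply_configuration() is a hypothetical stand-in for the real setup):
  struct rte_eth_conf orig_conf;
  /* keep a copy of the last known-good configuration */
  memcpy(&orig_conf, &dev->data->dev_conf, sizeof(orig_conf));
  memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
  ret = apply_configuration(dev);
  if (ret != 0)
          /* roll back so the stored state matches the device again */
          memcpy(&dev->data->dev_conf, &orig_conf, sizeof(orig_conf));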
Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
The caller will guarantee that msg won't be null. Remove
the unneeded null pointer check which caused a Coverity
warning.
Coverity issue: 323484
Fixes: 8f972312b8f4 ("vhost: support vhost-user")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch fixes the incorrect packet content copy in chaining mode.
Originally, the content before the cipher offset was overwritten with
zeros. This patch fixes the problem by ensuring the correct write-back
source and destination settings during setup.
Fixes: 3bb595ecd682 ("vhost/crypto: add request handler")
Cc: stable@dpdk.org
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
We should request RO access when receiving packets from the VM, and RW
access when sending packets to the VM.
Fixes: a922401f35cc ("vhost: add Rx support for packed ring")
Fixes: ae999ce49dcb ("vhost: add Tx support for packed ring")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This updates the license on the rte_rtm.h file to be the standard
BSD-3-Clause license used for the rest of DPDK, thus bringing the file in
compliance with the DPDK licensing policy.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
When TSX transactions abort, it is generally worth retrying a number of
times before falling back to the traditional locking path, as the
parallelism benefits from TSX can be worth it when a transaction does
succeed. For cases with multiple threads and high contention rates, it
can be useful to have increasing delays between retry attempts, so as to
avoid having the same threads repeatedly collide.
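A minimal sketch of such a retry loop using the RTM intrinsics from
immintrin.h (the retry count, delays, and helper functions are
illustrative, not the actual implementation):
  #include <immintrin.h>  /* _xbegin()/_xend(); build with -mrtm */
  #define MAX_RETRIES 10
  static void critical_section(void) { /* the protected work */ }
  static void fallback_lock_path(void) { /* traditional locking */ }
  static void do_with_tsx_retry(void)
  {
          int delay = 64;
          for (int i = 0; i < MAX_RETRIES; i++) {
                  if (_xbegin() == _XBEGIN_STARTED) {
                          critical_section();
                          _xend();
                          return;
                  }
                  /* aborted: back off for an increasing delay, then retry */
                  for (volatile int j = 0; j < delay; j++)
                          ;
                  delay *= 2;
          }
          fallback_lock_path();
  }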
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
gcc 7 and 8 with -O3 generate a vzeroupper instruction from rte_memcpy
inside the TSX region, which may abort the TSX transaction.
This fix changes rte_memcpy to memcpy, which does not insert an extra
vzeroupper into the library.
Fixes: f2e3001b53ec ("hash: support read/write concurrency")
Cc: stable@dpdk.org
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
EAL should not crash when setting alarm fails. Also, remove the
profanity in error message.
Fixes: daf9bfca717e ("ipc: remove thread for async requests")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
The x86 JIT can generate invalid code for (BPF_LD | BPF_IMM | EBPF_DW)
instructions when the immediate value is bigger than INT32_MAX.
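One classic failure mode for this class of JIT bug is x86 immediate
encoding: the 32-bit mov immediate is sign-extended to 64 bits, so
values above INT32_MAX need the full 64-bit movabs form. A tiny
demonstration of the sign-extension effect in C:
  #include <stdint.h>
  #include <stdio.h>
  int main(void)
  {
          /* what a sign-extending 32-bit immediate produces */
          uint64_t wrong = (uint64_t)(int32_t)0x80000000u;
          /* what the BPF program actually asked for */
          uint64_t right = UINT64_C(0x80000000);
          printf("%#llx vs %#llx\n",
                  (unsigned long long)wrong,   /* 0xffffffff80000000 */
                  (unsigned long long)right);  /* 0x80000000 */
          return 0;
  }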
Fixes: cc752e43e079 ("bpf: add JIT compilation for x86_64 ISA")
Cc: stable@dpdk.org
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
If the last part of the PCI address (the function number) is missing,
parsing was successful, assuming function 0.
The call to strtoul does not return an error in such a case, so an
explicit check is inserted beforehand.
This bug has always been there in older parsing macros:
- GET_PCIADDR_FIELD
- GET_BLACKLIST_FIELD
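A minimal sketch of the added guard (function name illustrative): check
that the field is non-empty before calling strtoul(), since strtoul()
returns 0 rather than an error for an empty string.
  #include <stdlib.h>
  /* parse one dot/colon-separated PCI address field, base 16 */
  static int parse_pci_field(const char *in, unsigned long *val)
  {
          char *end;
          if (*in == '\0')
                  return -1;      /* e.g. "0000:01:02." with no function */
          *val = strtoul(in, &end, 16);
          return (end == in) ? -1 : 0;
  }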
Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org
Reported-by: Wisam Jaddo <wisamm@mellanox.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
The lock-free algorithm has caused a significant lookup performance
regression for certain use cases. The regression is attributed to the
use of non-relaxed memory orderings. Two versions of the lookup
functions are created: one that uses the RW lock and one that is
lock-free. This fixes the regression for use cases that used the RW
lock version of the lookup function.
Fixes: e605a1d36 ("hash: add lock-free r/w concurrency")
Suggested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
When calling __atomic_compare_exchange_n, use relaxed ordering for the
success case: multiple producers/consumers do not release updates to
each other here, so no acquire or release ordering is needed.
Because the thread fence is in place, ordering for the first iteration
can be relaxed.
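A minimal sketch of the change (simplified from the head-update loop;
publication of the ring entries still happens later, at the tail
store-release):
  #include <stdbool.h>
  #include <stdint.h>
  struct ht { uint32_t head, tail; };
  /* producers do not release data to each other at this point, so the
   * CAS itself can be fully relaxed */
  static bool move_head(struct ht *prod, uint32_t old_head, uint32_t n)
  {
          return __atomic_compare_exchange_n(&prod->head, &old_head,
                          old_head + n, 0 /* strong */,
                          __ATOMIC_RELAXED /* success */,
                          __ATOMIC_RELAXED /* failure */);
  }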
Run the ring perf test on the following testbed:
HW: ThunderX2 B0 CPU CN9975 v2.0, 2 sockets, 28core,4 threads/core,2.5GHz
OS: Ubuntu 16.04.5 LTS, Kernel: 4.15.0-36-generic
DPDK: 18.08, Configuration: arm64-armv8a-linuxapp-gcc
gcc: 8.1.0
$sudo ./test/test/test -l 16-19,44-47,72-75,100-103 -n 4 \
--socket-mem=1024 -- -i
Without the patch:
*** Testing using two physical cores ***
SP/SC bulk enq/dequeue (size: 8): 5.75
MP/MC bulk enq/dequeue (size: 8): 10.18
SP/SC bulk enq/dequeue (size: 32): 1.80
MP/MC bulk enq/dequeue (size: 32): 2.34
With the patch:
*** Testing using two physical cores ***
SP/SC bulk enq/dequeue (size: 8): 5.59
MP/MC bulk enq/dequeue (size: 8): 10.54
SP/SC bulk enq/dequeue (size: 32): 1.73
MP/MC bulk enq/dequeue (size: 32): 2.38
Neither a significant improvement nor a regression was seen, as the
optimisation is not on the critical path.
Fixes: 39368ebfc6 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Use case scenario:
1) Thread 1 is enqueuing. It reads prod.head and gets stalled for some
reason (running out of CPU time, preempted, ...).
2) Thread 2 is enqueuing. It succeeds in enqueuing and moves prod.head
forward.
3) Thread 3 is dequeuing. It succeeds in dequeuing and moves the cons.tail
beyond the prod.head read by thread 1.
4) Thread 1 is re-scheduled. It reads cons.tail.
cpu1(producer) cpu2(producer) cpu3(consumer)
load r->prod.head
^ load r->prod.head
| load r->cons.tail
| store r->prod.head(+n)
stalled <-- enqueue ----->
| store r->prod.tail(+n)
| load r->cons.head
| load r->prod.tail
| store r->cons.head(+n)
| <...dequeue.....>
v store r->cons.tail(+n)
load r->cons.tail
For thread 1, __atomic_compare_exchange_n detects the outdated prod.head
and retries the flow with the new one. This retry flow works fine on
strongly ordered platforms (e.g. x86). But on weakly ordered platforms
(arm, ppc), the loads of cons.tail and prod.head might be reordered:
prod.head is new, but cons.tail has become too old. The retry flow,
which is based on detecting an outdated head, then does not trigger as
expected, and the stale cons.tail produces a wrong free_entries count.
Similarly, for dequeuing, an outdated prod.tail leads to a wrong
avail_entries count.
The fix is to keep the two loads in a deterministic order, allowing the
retry to work.
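A minimal sketch of the fix (simplified): a thread fence between the two
loads pins their order, so the retry always pairs prod.head with a
cons.tail that is at least as fresh.
  old_head = __atomic_load_n(&r->prod.head, __ATOMIC_RELAXED);
  /* keep the cons.tail load after the prod.head load on weakly
   * ordered CPUs */
  __atomic_thread_fence(__ATOMIC_ACQUIRE);
  cons_tail = __atomic_load_n(&r->cons.tail, __ATOMIC_ACQUIRE);
  free_entries = capacity + cons_tail - old_head;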
Run the ring perf test on the following testbed:
HW: ThunderX2 B0 CPU CN9975 v2.0, 2 sockets, 28core, 4 threads/core, 2.5GHz
OS: Ubuntu 16.04.5 LTS, Kernel: 4.15.0-36-generic
DPDK: 18.08, Configuration: arm64-armv8a-linuxapp-gcc
gcc: 8.1.0
$sudo ./test/test/test -l 16-19,44-47,72-75,100-103 -n 4 \
--socket-mem=1024 -- -i
Without the patch:
*** Testing using two physical cores ***
SP/SC bulk enq/dequeue (size: 8): 5.64
MP/MC bulk enq/dequeue (size: 8): 9.58
SP/SC bulk enq/dequeue (size: 32): 1.98
MP/MC bulk enq/dequeue (size: 32): 2.30
With the patch:
*** Testing using two physical cores ***
SP/SC bulk enq/dequeue (size: 8): 5.75
MP/MC bulk enq/dequeue (size: 8): 10.18
SP/SC bulk enq/dequeue (size: 32): 1.80
MP/MC bulk enq/dequeue (size: 32): 2.34
The results show that the thread fence degrades performance slightly,
but it is required for correctness.
Fixes: 39368ebfc6 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Some toolchains have an fls() definition in strings.h with an int
argument type, which conflicts with the uint32_t argument type:
/export/dpdk.org/lib/librte_eal/common/rte_reciprocal.c:47:19:
error: conflicting types for ‘fls’
static inline int fls(uint32_t x)
^~~
/opt/marvell-tools-201/aarch64-marvell-elf/include/strings.h:59:6:
note: previous declaration of ‘fls’ was here
int fls(int) __pure2;
FreeBSD strings.h also has fls() with an int argument type.
https://www.freebsd.org/cgi/man.cgi?query=fls&sektion=3
Fix the conflict by using the rte version of fls.
Fixes: ffe3ec811ef5 ("sched: introduce reciprocal divide")
Fixes: faf2b25c9f80 ("fm10k: support VMDQ in multi-queue configuration")
Cc: stable@dpdk.org
Suggested-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The function returns the position of the last (most-significant) bit
set. A unit test case was added to verify rte_fls_u32().
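For reference, a minimal implementation with that semantic (1-based
position; returns 0 when no bit is set):
  #include <stdint.h>
  static inline uint32_t fls_u32(uint32_t x)
  {
          /* e.g. fls_u32(0) == 0, fls_u32(1) == 1, fls_u32(5) == 3 */
          return (x == 0) ? 0 : 32 - (uint32_t)__builtin_clz(x);
  }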
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The use of rte_memcpy_ptr was removed in the revert below, but the file
arch/x86/rte_memcpy.c was not removed.
Fixes: d35cc1fe6a7a ("eal/x86: revert select optimized memcpy at run-time")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The devargs of a device can be replaced by a newly allocated one when
trying to probe the same device again (multi-process or multi-port
scenarios). This breaks some pointer references.
It can be avoided by copying the new content, freeing the new devargs,
and returning the already inserted pointer.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Tested-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The current code has different max DMA mask width values for 32- and
64-bit systems. IOMMU hardware could report a higher supported width
than the current MAX_DMA_MASK_BITS when RTE_ARCH_64 is not defined. This
is actually true with a 32-bit kernel running on a 64-bit server with
IOMMU hardware. It could also be a problem on embedded systems using an
IOMMU designed for 64 bits in a 32-bit system.
This patch leaves a single max DMA mask width, which ensures the mask
width is within the range of the 64-bit variables used for DMA masks.
It also avoids wrong values, because any width higher than 64 bits is
likely wrong.
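The constraint in a nutshell: the mask is built by shifting a 64-bit
value, so a width above 63 cannot be represented (and a shift count of
64 or more is undefined behavior in C). A hedged sketch:
  #include <stdint.h>
  #define MAX_DMA_MASK_BITS 63    /* single limit for all architectures */
  static int check_dma_width(uint8_t maskbits)
  {
          if (maskbits > MAX_DMA_MASK_BITS)
                  return -1;      /* cannot fit the 64-bit mask variables */
          /* e.g. maskbits = 40 -> mask = 0xffffff0000000000; any IOVA
           * with bits set in the mask is out of the device's reach */
          uint64_t mask = ~((UINT64_C(1) << maskbits) - 1);
          (void)mask;
          return 0;
  }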
Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Adding an additional failure path to the DMA mask check has exposed an
issue where the `hugepage` pointer may point to memory that has already
been unmapped while the pointer value is still not NULL, so the failure
handler will attempt to unmap it a second time if the DMA mask check
fails. Fix it by setting the `hugepage` pointer to NULL once it is no
longer needed.
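A minimal sketch of the fix, with names modeled on the legacy-memory
init code:
  /* the metadata mapping is no longer needed past this point */
  munmap(hugepage, nr_hugefiles * sizeof(struct hugepage_file));
  hugepage = NULL;        /* the 'fail' path must not unmap it again */
  if (rte_mem_check_dma_mask(maskbits) != 0)
          goto fail;      /* the newly added failure path */
  /* ... */
  fail:
          if (hugepage != NULL)
                  munmap(hugepage,
                          nr_hugefiles * sizeof(struct hugepage_file));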
Coverity issue: 325730
Fixes: 165c89b84538 ("mem: use DMA mask check for legacy memory")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Right now, the reassembly code relies on src_dst[] being all zeroes to
determine whether an entry in the fragment table is free or occupied.
This is suboptimal and error-prone: a user can crash the DPDK
ip_reassembly app with something like the following scapy script:
x=Ether(src=...,dst=...)/IP(dst='0.0.0.0',src='0.0.0.0',id=0)/('X'*1000)
frags=fragment(x, fragsize=500)
sendp(frags, iface=...)
To overcome that issue, and to reduce the overhead of the
'key invalidate' and 'key is empty' operations, add key_len to the key
comparison procedure.
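A minimal sketch of the comparison (struct layout modeled on the ip_frag
key; a free entry keeps key_len == 0 and so can never match a genuine
all-zero address key):
  #include <stdint.h>
  struct ip_frag_key {
          uint64_t src_dst[4];    /* source/destination addresses */
          uint32_t id;            /* fragment ID */
          uint32_t key_len;       /* nb of src_dst entries; 0 == free */
  };
  /* returns zero when the keys match */
  static uint64_t key_cmp(const struct ip_frag_key *k1,
                  const struct ip_frag_key *k2)
  {
          uint64_t val = k1->id ^ k2->id;
          uint32_t i;
          for (i = 0; i < k1->key_len; i++)
                  val |= k1->src_dst[i] ^ k2->src_dst[i];
          return val | (k1->key_len ^ k2->key_len);
  }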
Fixes: 4f1a8f633862 ("ip_frag: add IPv6 reassembly")
Cc: stable@dpdk.org
Reported-by: Ryan E Hall <ryan.e.hall@intel.com>
Reported-by: Alexander V Gutkin <alexander.v.gutkin@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Under some conditions, ill-formed fragments might cause the reassembly
code to corrupt mbufs and/or crash. For instance, the following fragment
sequence:
<ofs=0,len=100, flags=MF>
<ofs=96,len=100, flags=MF>
<ofs=200,len=0,flags=MF>
<ofs=200,len=100,flags=0>
can trigger the problem.
To handle such situations, a check was added that the incoming fragment
length is greater than zero.
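A minimal sketch of the added check (names illustrative):
  #include <stdint.h>
  /* drop ill-formed input before it reaches the reassembly table; a
   * zero-length fragment, e.g. <ofs=200,len=0,flags=MF>, is rejected */
  static int frag_sanity_check(uint16_t ip_ofs, uint16_t ip_len)
  {
          (void)ip_ofs;
          return ip_len > 0 ? 0 : -1;
  }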
Fixes: 601e279df074 ("ip_frag: move fragmentation/reassembly headers into a library")
Fixes: 4f1a8f633862 ("ip_frag: add IPv6 reassembly")
Cc: stable@dpdk.org
Reported-by: Ryan E Hall <ryan.e.hall@intel.com>
Reported-by: Alexander V Gutkin <alexander.v.gutkin@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
After removing the function rte_eth_dev_attach(),
there are two replacement solutions possible:
one using probe event notification, and one using a new iterator.
So the application can get the newly probed ports either asynchronously
or synchronously.
The iterator API is new in DPDK 18.11, so it got the experimental tag by
policy. This causes an issue for strict applications which do not use
experimental functions and want to use the synchronous method.
The replacement for a removed API should not be experimental.
That is why the experimental status of the ethdev iterator is removed.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Tested-by: Kevin Traynor <ktraynor@redhat.com>
The functions rte_dev_probe() and rte_dev_remove() are new in DPDK
18.11, so they got the experimental tag by policy.
However, they are too basic to be skipped by strict applications which
do not use experimental functions.
The alternative is to use rte_eal_hotplug_add() and
rte_eal_hotplug_remove(), but their API requires the application
to parse the devargs string in order to provide bus name,
device name and driver arguments.
The new function rte_dev_probe() is much simpler to use and more
flexible, accepting any devargs string.
Let's encourage applications to use it.
The old rte_eal_hotplug_* functions may be deprecated later.
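A usage comparison (PCI address and arguments are illustrative):
  /* old way: the application must split bus name, device name and
   * driver arguments itself */
  rte_eal_hotplug_add("pci", "0000:06:00.0", "rxq=4");
  /* new way: a single devargs string, parsed by EAL */
  rte_dev_probe("0000:06:00.0,rxq=4");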
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Tested-by: Kevin Traynor <ktraynor@redhat.com>
When adding memory to an external heap, do not go to the unlock failure
handler, because the memory hotplug lock has not been taken yet.
Fixes: 7d75c31014f7 ("malloc: allow adding memory to named heaps")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
For the packed ring layout, we need to save the avail index and its wrap
counter value. At restore time, the used index and its wrap counter are
set to the avail ones, as ring processing is stopped when the vring base
is fetched.
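A hedged sketch of the save/restore, with field names modeled on the
vhost virtqueue (the wrap counter is folded into bit 15 of the saved
base):
  /* save, at GET_VRING_BASE time */
  msg.payload.state.num = vq->last_avail_idx |
                          (vq->avail_wrap_counter << 15);
  /* restore, at SET_VRING_BASE time */
  vq->last_avail_idx = msg.payload.state.num & 0x7fff;
  vq->avail_wrap_counter = !!(msg.payload.state.num & (1 << 15));
  vq->last_used_idx = vq->last_avail_idx;         /* resync used to avail */
  vq->used_wrap_counter = vq->avail_wrap_counter;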
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
The following error popped up when compiling with -pedantic:
In file included from
drivers/net/mlx5/mlx5_flow_dv.c:28:0:
include/rte_gre.h:20:2:
error: type of bit-field 'res2' is a GCC extension [-Werror=pedantic]
uint16_t res2:4; /**< Reserved */
Fix it by adding the __extension__ attribute.
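A sketch of the fix (fields abridged): marking the declaration as a GCC
extension keeps -pedantic builds clean while preserving the sub-int
bit-fields.
  __extension__
  struct gre_hdr {
          uint16_t res2:4;        /**< Reserved */
          uint16_t s:1;           /**< Sequence Number Present bit */
          /* ... remaining flag bits, version and protocol fields ... */
  };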
Fixes: 894f71a3805d ("net: add GRE header structure")
Cc: stable@dpdk.org
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
In __rte_ring_move_prod_head, move the __atomic_load_n up and out of the
do {} while loop: upon failure, old_head is updated by the CAS, so
another load is costly and not necessary.
This helps latency a little, about 1~5%.
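A minimal sketch of the reshaped loop (simplified): the CAS writes the
fresh head back into old_head on failure, so one load before the loop
suffices.
  old_head = __atomic_load_n(&r->prod.head, __ATOMIC_ACQUIRE);
  do {
          new_head = old_head + n;
          /* on failure, old_head is refreshed by the CAS itself */
  } while (!__atomic_compare_exchange_n(&r->prod.head, &old_head,
                  new_head, 0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE));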
Test result with the patch(two cores):
SP/SC bulk enq/dequeue (size: 8): 5.64
MP/MC bulk enq/dequeue (size: 8): 9.58
SP/SC bulk enq/dequeue (size: 32): 1.98
MP/MC bulk enq/dequeue (size: 32): 2.30
Fixes: 39368ebfc606 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Jia He <justin.he@arm.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Synchronize the load-acquire of the tail with the store-release within
update_tail: the store-release ensures that all ring operations, enqueue
or dequeue, are seen by observers on the other side as soon as they see
the updated tail. The load-acquire is needed because data dependency is
not a reliable way of enforcing ordering, as the compiler might break it
by saving to temporary values to boost performance.
When computing free_entries and avail_entries, use atomic semantics to
load the heads and tails instead.
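A minimal sketch of the pairing (simplified from the enqueue path):
  /* consumer side: publish the finished dequeue with a store-release */
  __atomic_store_n(&r->cons.tail, new_tail, __ATOMIC_RELEASE);
  /* producer side: a matching load-acquire when sizing free space */
  cons_tail = __atomic_load_n(&r->cons.tail, __ATOMIC_ACQUIRE);
  free_entries = capacity + cons_tail - old_head;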
The patch was benchmarked with test/ring_perf_autotest: it decreases the
enqueue/dequeue latency by 5% ~ 27.6% with two lcores; the real gains
depend on the number of lcores, the depth of the ring, and whether the
ring is SP/SC or MP/MC. For one lcore, it also improves a little, about
3 ~ 4%.
The improvement can be substantial: in the MP/MC case, with two lcores
and a ring size of 32, it cuts latency by up to (3.26-2.36)/3.26 = 27.6%.
This patch is a bug fix; the improvement is a bonus. In our analysis,
the improvement comes from cache-line pre-filling after hoisting the
load-acquire out of __atomic_compare_exchange_n.
The test command:
$sudo ./test/test/test -l 16-19,44-47,72-75,100-103 -n 4 --socket-mem=\
1024 -- -i
Test result with this patch(two cores):
SP/SC bulk enq/dequeue (size: 8): 5.86
MP/MC bulk enq/dequeue (size: 8): 10.15
SP/SC bulk enq/dequeue (size: 32): 1.94
MP/MC bulk enq/dequeue (size: 32): 2.36
In comparison of the test result without this patch:
SP/SC bulk enq/dequeue (size: 8): 6.67
MP/MC bulk enq/dequeue (size: 8): 13.12
SP/SC bulk enq/dequeue (size: 32): 2.04
MP/MC bulk enq/dequeue (size: 32): 3.26
Fixes: 39368ebfc6 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Jia He <justin.he@arm.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Fixed bad logic in rte_comp_op_alloc() when checking the return value of
rte_comp_op_raw_bulk_alloc(). This could have resulted in a segfault in
the error case.
Also made the rte_comp_op_bulk_alloc() code consistent with
rte_comp_op_alloc().
Fixes: 96086db5a369 ("compressdev: add operation management")
Cc: stable@dpdk.org
Reported-by: Sabyasachi Sengupta <sabyasg@hpe.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
Add a note on the usage of the op structure, and on when it can be
accessed and freed.
Fixes: 63f4bfd5328b ("compressdev: add enqueue/dequeue functions")
Cc: stable@dpdk.org
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
During memory initialization, calling rte_mem_check_dma_mask leads to a
deadlock: memory_hotplug_lock is already held by a writer, the code
currently executing, and rte_memseg_walk tries to take it as a reader.
This patch adds a thread-unsafe version which calls the final function
specifying that memory_hotplug_lock does not need to be acquired. The
patch also turns rte_mem_check_dma_mask into an intermediate step which
calls the final function as before, implying memory_hotplug_lock will be
acquired.
PMDs should always use the version that acquires the lock; the
thread-unsafe one is just for internal EAL memory code.
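A sketch of the resulting split (entry-point names as described above;
internals simplified away):
  #include <stdbool.h>
  #include <stdint.h>
  static int check_dma_mask(uint8_t maskbits, bool thread_unsafe)
  {
          /* walk the memsegs, either taking memory_hotplug_lock or,
           * when thread_unsafe is set, assuming the caller context
           * already holds it, and validate every IOVA against the
           * mask */
          (void)maskbits;
          (void)thread_unsafe;
          return 0;       /* simplified */
  }
  int rte_mem_check_dma_mask(uint8_t maskbits)
  {
          return check_dma_mask(maskbits, false); /* acquires the lock */
  }
  int rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits)
  {
          return check_dma_mask(maskbits, true);  /* lock already held */
  }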
Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
If a device reports addressing limitations through a DMA mask, the IOVAs
for mapped memory need to be checked to ensure correct functionality.
Previous patches introduced this DMA check for the main memory code
currently in use, but other options, like legacy memory and the
no-hugepages option, also need to be considered.
This patch adds the DMA check for those cases.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>