The external heaps API already implicitly expects the start address
of the external memory area to be page-aligned, but this is neither
enforced nor documented. Fix this by adding parameter checks to the
memory add call, and document the page alignment requirement
explicitly.
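A minimal sketch of the kind of check added (the exact placement and
error handling inside rte_malloc_heap_memory_add() may differ):

    /* reject unaligned or improperly sized external memory */
    if (va_addr == NULL || page_sz == 0 ||
            !rte_is_power_of_2(page_sz) ||
            RTE_PTR_ALIGN(va_addr, page_sz) != va_addr ||
            len == 0 || len % page_sz != 0) {
        rte_errno = EINVAL;
        return -1;
    }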
Fixes: 7d75c31014 ("malloc: allow adding memory to named heaps")
Cc: stable@dpdk.org
Suggested-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
We already trigger a mem event notification inside the walk function;
there is no need to do it twice.
Fixes: f32c7c9de9 ("malloc: enable event callbacks for external memory")
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
When a secondary process hotplugs memory, it sends a request
to the primary, which then performs the real mmap() and sends
sync requests to all secondary processes. Upon receiving
such a sync request, each secondary process notifies the
upper layers of the hotplugged memory (and calls all
locally registered event callbacks).
In the end, memory event callbacks are fired in all the
processes except the primary, which is a bug.
This becomes critical if memory is hotplugged while a VFIO
device is attached, as the VFIO memory registration -
which is done from a memory event callback present in the
primary process only - is never called.
After this patch, a primary process fires memory event
callbacks before secondary processes start their
synchronizations - both for hotplug and hotremove.
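The resulting ordering in the primary, sketched with the internal EAL
helpers (call sites simplified; the hotremove path is symmetric with
RTE_MEM_EVENT_FREE):

    /* primary: fire locally registered callbacks first... */
    eal_memalloc_mem_event_notify(RTE_MEM_EVENT_ALLOC, va_addr, len);

    /* ...then ask all secondaries to synchronize, which triggers
     * their locally registered callbacks in turn */
    if (request_sync() != 0) {
        /* roll back on failure (elided) */
    }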
Fixes: 07dcbfe010 ("malloc: support multiprocess memory hotplug")
Cc: stable@dpdk.org
Signed-off-by: Seth Howell <seth.howell@intel.com>
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
SPDK uses the rte_mem_event_callback_register API to
create RDMA memory regions (MRs) for newly allocated regions
of memory. This is used in both the SPDK NVMe-oF target
and the NVMe-oF host driver.
DPDK creates internal malloc_elem structures for these
allocated regions. As users malloc and free memory, DPDK
will sometimes merge malloc_elems that originated from
different allocations that were notified through the
registered mem_event callback routine. This results
in subsequent allocations that can span multiple
RDMA MRs, requiring SPDK to check each DPDK buffer to
see if it crosses an MR boundary and, if so, to add
considerable logic and complexity to describe that
buffer before it can be accessed by the RNIC. This is
somewhat analogous to rte_malloc returning a buffer
that is not IOVA-contiguous.
As a malloc_elem gets split and some of these elements
get freed, it can also result in DPDK sending an
RTE_MEM_EVENT_FREE notification for a subset of the
original RTE_MEM_EVENT_ALLOC notification. This is also
problematic for RDMA memory regions, since unregistering
the memory region is all-or-nothing. It is not possible
to unregister part of a memory region.
To support these types of applications, this patch adds
a new --match-allocations EAL init flag. When this
flag is specified, malloc elements from different
hugepage allocations will never be merged. Memory will
also only be freed back to the system (with the requisite
memory event callback) exactly as it was originally
allocated.
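For reference, the callback pattern this flag is designed to support
looks roughly like this (the rdma_* helpers are hypothetical
placeholders for the application's MR handling):

    static void
    mr_event_cb(enum rte_mem_event event, const void *addr, size_t len,
            void *arg)
    {
        if (event == RTE_MEM_EVENT_ALLOC)
            rdma_register_mr(addr, len);    /* hypothetical */
        else
            rdma_unregister_mr(addr, len);  /* all-or-nothing */
    }

    rte_mem_event_callback_register("rdma-mr", mr_event_cb, NULL);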
Since part of this patch is extending the size of struct
malloc_elem, we also fix up the malloc autotests so they
do not assume its size exactly fits in one cacheline.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
The RTE_PROC_PRIMARY error path was missing an unlock statement in
the current code. Fix it by unlocking and returning in one place.
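A sketch of the resulting flow (fbarray arguments elided):

    rte_rwlock_write_lock(&mcfg->mlock);

    if (rte_eal_process_type() == RTE_PROC_PRIMARY)
        ret = rte_fbarray_init(&mcfg->memzones, /* ... */);
    else
        ret = rte_fbarray_attach(&mcfg->memzones);

    /* the primary's error path used to return here with the lock
     * still held; now both paths reach the single unlock below */
    rte_rwlock_write_unlock(&mcfg->mlock);

    return ret < 0 ? -1 : 0;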
Fixes: 49df3db848 ("memzone: replace memzone array with fbarray")
Cc: stable@dpdk.org
Signed-off-by: Gao Feng <davidfgao@tencent.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
With a larger PAGE_SIZE it is possible for the MSI table to be very
close to the end of the BAR, such that when the start and end
of the MSI table are aligned to PAGE_SIZE, the end offset of the MSI
table falls outside the PCI BAR boundary.
This patch addresses the issue by comparing both the start and the
end offset of the MSI table with the BAR size, and skipping the
mapping if either is outside the BAR.
With the fix, the debug log below is printed:
EAL: Skipping BAR0
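A sketch of the added check (offset variable names are illustrative):

    /* table_start/table_end are the PAGE_SIZE-aligned offsets of
     * the MSI table within the BAR */
    if (table_start >= bar_size || table_end >= bar_size) {
        RTE_LOG(DEBUG, EAL, "Skipping BAR%d\n", bar_index);
        return 0;    /* MSI table exceeds the BAR; skip mapping */
    }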
Signed-off-by: Tone Zhang <tone.zhang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Add a check for a NULL peer pointer, like the existing check for the
bundle pointer, in the mp request handlers; they should follow the
same style. Also add some logs for the no-memory cases.
Signed-off-by: Gao Feng <davidfgao@tencent.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
When rte_eal_alarm_set() fails, the bundle memory needs to be freed
in the error handlers of handle_primary_request() and
handle_secondary_request().
Fixes: 244d513071 ("eal: enable hotplug on multi-process")
Fixes: ac9e4a1737 ("eal: support attach/detach shared device from secondary")
Cc: stable@dpdk.org
Signed-off-by: Gao Feng <davidfgao@tencent.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Missing brackets around the if body mean that the loop will end at
its first iteration.
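The bug pattern, simplified:

    TAILQ_FOREACH(option, &rte_option_list, next) {
        if (strcmp(opt->opt_str, option->opt_str) == 0)
            RTE_LOG(INFO, EAL, "Option %s already registered.\n",
                    opt->opt_str);
            return;   /* not governed by the if: runs unconditionally,
                       * ending the loop on its first iteration */
    }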
Fixes: 2395332798 ("eal: add option register infrastructure")
Cc: stable@dpdk.org
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
The same issue was fixed for the IPv4 version of this routine in
commit 8d4d3a4f73 ("ip_frag: handle MTU sizes not aligned to 8 bytes").
Briefly, the size of an IPv6 header is always 40 bytes. With an MTU of
1500, this never produces a multiple of 8 bytes for the frag_size,
and this routine can never succeed. Since RTE_ASSERTs are disabled by
default, this failure is typically ignored.
To fix this, round down to the nearest 8 bytes and use this when
producing the fragments.
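With mtu_size = 1500, frag_size = 1500 - 40 = 1460, which rounds down
to 1456, a valid multiple of 8. A sketch of the change (struct name
per this era's rte_ip.h):

    /* fragment offsets are expressed in 8-byte units, so the usable
     * fragment size must be a multiple of 8 */
    frag_size = mtu_size - sizeof(struct ipv6_hdr);
    frag_size = RTE_ALIGN_FLOOR(frag_size, 8);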
Fixes: 0aa31d7a59 ("ip_frag: add IPv6 fragmentation support")
Cc: stable@dpdk.org
Signed-off-by: Chas Williams <chas3@att.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This information is useless and is better removed so as not to
confuse users.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Increase the number of addressable cores from 64 to 256. Also remove
the warning that increasing this number beyond 64 will cause problems
(a consequence of the previous use of uint64_t masks). Now this number
can be increased significantly without causing problems.
Signed-off-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Extend the functionality to allow VMs to power-manage cores beyond 63.
Signed-off-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Since we are moving to allowing more than 64 cores, the mask functions
that use uint64_t to operate on a masked set of cores are no longer
needed, so remove them.
Signed-off-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
vm_power_manager currently makes use of uint64_t masks to keep track
of cores in use, which limits the app to managing only the first 64
cores in a multi-core system. Many modern systems have core counts
greater than 64, so this limitation needs to be removed.
This patch converts the relevant 64-bit masks to character arrays.
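The shape of the conversion, with hypothetical names:

    /* before: one bit per core, so at most 64 cores */
    uint64_t mask = 0;
    mask |= 1ULL << core_id;

    /* after: one flag byte per core, bounded only by the
     * configurable maximum core count */
    char core_array[POWER_MGR_MAX_CPUS] = {0};
    core_array[core_id] = 1;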
Signed-off-by: David Hunt <david.hunt@intel.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Add a few functional and performance tests
for rte_rwlock_read_trylock() and rte_rwlock_write_trylock().
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
rte_timer_manage() adds expired timers to a "run list", and walks the
list, transitioning each timer from the PENDING to the RUNNING state.
If another lcore resets or stops the timer at precisely this
moment, the timer state is instead set to CONFIG by that other
lcore, which causes timer_manage() to skip over it. This is
expected behavior.
However, if a timer expires quickly enough, there exists the
following race condition that causes the timer_manage() routine to
misinterpret a timer in CONFIG state, resulting in lost timers:
- Thread A:
- starts a timer with rte_timer_reset()
- the timer is moved to CONFIG state
- the spinlock associated with the appropriate skiplist is acquired
- timer is inserted into the skiplist
- the spinlock is released
- Thread B:
- executes rte_timer_manage()
- finds the above timer expired and adds it to the run list
- walks the run list, sees the above timer still in CONFIG state,
unlinks it from the run list, and continues on
- Thread A:
- moves the timer to PENDING state
- returns from rte_timer_reset()
- the timer is now in PENDING state, but not actually linked into a
pending list or a run list, and will never get processed further
by rte_timer_manage()
This commit fixes this race condition by only releasing the spinlock
after the timer state has been transitioned from CONFIG to PENDING,
which prevents rte_timer_manage() from seeing an incorrect state.
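Simplified sketch of the corrected reset path:

    rte_spinlock_lock(&priv_timer[tim_lcore].list_lock);
    timer_add(tim, tim_lcore);    /* insert into the skiplist */

    /* transition CONFIG -> PENDING while still holding the list
     * lock, so rte_timer_manage() can never observe the window
     * where the timer is linked but still in CONFIG state */
    status.state = RTE_TIMER_PENDING;
    status.owner = (int16_t)tim_lcore;
    tim->status.u32 = status.u32;

    rte_spinlock_unlock(&priv_timer[tim_lcore].list_lock);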
Fixes: 9b15ba895b ("timer: use a skip list")
Cc: stable@dpdk.org
Signed-off-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Refactor the code to separate the validation part from the
benchmarking part. Also add checking of each op's status after the
rte_compressdev_dequeue_burst() function.
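The status check follows the usual dequeue pattern, e.g.:

    num_deq = rte_compressdev_dequeue_burst(dev_id, qp_id,
            deq_ops, num_ops);
    for (i = 0; i < num_deq; i++) {
        if (deq_ops[i]->status != RTE_COMP_OP_STATUS_SUCCESS) {
            RTE_LOG(ERR, USER1, "Some operations failed\n");
            /* abort the benchmark run */
        }
    }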
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Lee Daly <lee.daly@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
Added:
- initial version of the compression performance test description file
- release note in release_18_11.rst
Updated the index.rst file.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Lee Daly <lee.daly@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
Added the performance measurement part to the compression perf test.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Lee Daly <lee.daly@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
Added the parser part to the compression perf test.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Lee Daly <lee.daly@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
This patch increments the error counter (stats.dequeue_err_count)
when any error is detected during the qat_comp_process_response()
function.
Fixes: 3cc14fc48e ("compress/qat: check that correct firmware is in use")
Fixes: 32842f2a6d ("compress/qat: create FW request and process response")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
This patch fixes the error status which should be set inside the
qat_comp_build_request() function in case any errors are detected.
In these cases op.status is set to
RTE_COMP_OP_STATUS_INVALID_ARGS to help applications debug.
Fixes: 1947bd1858 ("compress/qat: support scatter-gather buffers")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
This reverts commit d09973f6c4 ("common/qat: fix for invalid
response from firmware") due to incorrectly reporting failures
on some older firmware versions.
Fixes: d09973f6c4 ("common/qat: fix for invalid response from firmware")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
The NULL algo does not set the counter flag, so it should be zeroed.
Fixes: db0e952a5c ("crypto/qat: add NULL capability")
Cc: stable@dpdk.org
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
The AES-CCM algo does not set the counter flag, so it should be zeroed.
Fixes: ab56c4d9ed ("crypto/qat: support AES-CCM")
Cc: stable@dpdk.org
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
The error code of qat_hash_get_block_size() needs to be handled
properly.
Fixes: 10b49880e3 ("crypto/qat: make the session struct variable in size")
Cc: stable@dpdk.org
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Tested-by: Marko Kovacevic <marko.kovacevic@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Building the Turbo Software driver as a shared library for AVX512
failed due to the wrong order of libraries in the library list (LDLIBS).
Fixes: b8cfe2c9ae ("bb/turbo_sw: add software turbo driver")
Cc: stable@dpdk.org
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Improvements added to the interrupt test:
- the test is run in a loop (the number of iterations is specified by
the TEST_REPETITIONS define), which ensures more accurate results
- the mapping of cores to thread parameters was put in order:
the master core is always set at the first index. This fixes a
problem with running the test for only one core
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
The test application and Turbo Software driver were adapted
to support chained mbufs for bigger TB sizes.
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Improvements added to the throughput test:
- the test is run in a loop (the number of iterations is specified by
the TEST_REPETITIONS define), which ensures more accurate results
- the length of input data is calculated based on the number of CBs
in the TB
- the maximum number of decoding iterations is gathered from the results
- new functions responsible for printing results were added
- small fixes for memory management
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Offload cost test was improved in order to collect
more accurate results.
Signed-off-by: Kamil Chalupnik <kamilx.chalupnik@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Fix an incorrect comment on a compressdev rte_comp_op structure
element. The comment needed to be updated to be consistent with the
use of chained mbufs.
Fixes: f87bdc1ddc ("compressdev: add compression specific data")
Cc: stable@dpdk.org
Signed-off-by: Lee Daly <lee.daly@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Remove if() condition prior to calling BN_free() as
BN_free(a) does nothing if a is NULL.
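That is:

    /* before */
    if (bn != NULL)
        BN_free(bn);

    /* after: BN_free() is documented as NULL-safe */
    BN_free(bn);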
Signed-off-by: Akash Saxena <akash.saxena@caviumnetworks.com>
Signed-off-by: Shally Verma <shally.verma@caviumnetworks.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Use the new rte_comp_op_bulk_free API.
Add trace to catch any mempool elements not freed at test end.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
Acked-by: Lee Daly <lee.daly@intel.com>
There's an API to bulk allocate operations;
this adds a corresponding bulk free API.
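Usage sketch of the alloc/free pair (op_pool setup and error handling
elided; NUM_OPS is an assumed constant):

    struct rte_comp_op *ops[NUM_OPS];

    rte_comp_op_bulk_alloc(op_pool, ops, NUM_OPS);
    /* ... enqueue/dequeue the ops ... */
    rte_comp_op_bulk_free(ops, NUM_OPS);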
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
Acked-by: Lee Daly <lee.daly@intel.com>
This removes the magic number from the assignment of the engine variable,
which is used in the debug trace.
Signed-off-by: Lee Daly <lee.daly@intel.com>
Acked-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
rte_event_eth_tx_adapter_queue_add() - add a check
that returns an error if the ethdev has zero Tx queues
configured.
rte_event_eth_tx_adapter_queue_del() - remove the
checks for the ethdev queue count; instead, check for
queues added to the adapter, which may be different
from the current ethdev queue count.
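Sketch of the added queue_add check (log wording illustrative):

    /* reject devices with no Tx queues configured */
    if (eth_dev->data->nb_tx_queues == 0) {
        RTE_EDEV_LOG_ERR("No Tx queues configured");
        return -EINVAL;
    }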
Fixes: a3bbf2e097 ("eventdev: add eth Tx adapter implementation")
Cc: stable@dpdk.org
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
The eventdev extended stats documentation referred to two non-existent
functions, rte_eventdev_xstats_get and rte_eventdev_get_xstats_by_name.
Fixes: 3ed7fc039a ("eventdev: add extended stats")
Cc: stable@dpdk.org
Signed-off-by: Gage Eads <gage.eads@intel.com>
If timer events get dropped for some reason, the thread that launched
producer and worker cores will never exit, because the deadlock check
doesn't currently apply to the event timer adapter case. This commit
fixes that.
Fixes: d008f20bce ("app/eventdev: add event timer adapter as a producer")
Cc: stable@dpdk.org
Signed-off-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
When the packet length is smaller than the header length,
the calculated payload length overflows and results
in incorrect reassembly behavior.
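The failure mode is unsigned underflow, illustrated below (variable
names differ in the actual GRO code):

    /* wraps around to a huge value when pkt_len < hdr_len */
    payload_len = pkt->pkt_len - hdr_len;

    /* fix: reject such packets up front */
    if (pkt->pkt_len <= hdr_len)
        return -1;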
Fixes: 1e4cf4d6d4 ("gro: cleanup")
Fixes: 9e0b9d2ec0 ("gro: support VxLAN GRO")
Cc: stable@dpdk.org
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
When creating process data structures, EAL will create many files
in the EAL runtime directory. Because we allow multiple secondary
processes to run, each secondary process gets its own unique
file. With many secondary processes running and exiting on the
system, the runtime directory will, over time, accumulate enormous
amounts of sockets, fbarray files and other data that just sits
there unused, because the process that allocated it died a long
time ago. This may lead to exhaustion of disk (or RAM) space in
the runtime directory.
Fix this by removing, at initialization, every unlocked file that
matches either the socket or the fbarray naming convention. We
cannot be sure about any other files, so we'll leave them alone.
Also, remove similar code from the mp socket code.
We do this at the end of init, rather than at the beginning, because
a secondary process will use the primary process' data structures
even if the primary itself has died, and we don't want to remove
those before we lock them.
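The "unlocked" test is a non-blocking try-lock, along these lines:

    fd = open(path, O_RDONLY);
    if (fd < 0)
        return;
    if (flock(fd, LOCK_EX | LOCK_NB) == 0) {
        /* no one holds the lock: the owner is gone, safe to remove */
        unlink(path);
    }
    close(fd);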
Bugzilla ID: 106
Cc: stable@dpdk.org
Reported-by: Vipin Varghese <vipin.varghese@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>