Currently, when deallocating pages, malloc will fix up other
elements' headers if there is not enough space to store a full
element in the leftover space. This leads to race conditions,
because some functions check the pad size with an unlocked heap,
expecting the pad size to be constant.
Fix it by being more conservative and only freeing pages when
there is enough space before and after the page to store a free
element.
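A minimal sketch of the conservative check (the threshold and names
are illustrative, not the actual malloc internals):

    #include <stdbool.h>
    #include <stddef.h>

    /* illustrative threshold: header plus minimum usable space */
    #define MIN_FREE_ELEM_SIZE 64

    /* Only hand pages back when the leftover space on both sides can
     * hold a complete free element, so that no neighbouring element's
     * header needs to be fixed up. */
    static bool
    can_free_pages(size_t space_before, size_t space_after)
    {
        return space_before >= MIN_FREE_ELEM_SIZE &&
                space_after >= MIN_FREE_ELEM_SIZE;
    }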
Fixes: 1403f87d4f ("malloc: enable memory hotplug support")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
The pad value is not used unless the element is in a pad state, but
it will show up in heap dumps and may be confusing.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
After the commit below, we encountered some strange issues:
1) A deadlock, as described here:
http://dpdk.org/ml/archives/dev/2018-April/099806.html
2) A SIGSEGV when starting testpmd in a VM.
Considering that the commit below changed to use dynamic memory
instead of the stack for the memory barrier, we suspect the issues
are caused by a use-after-free.
Fixes: 3d09a6e26d ("eal: fix threads block on barrier")
Reported-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reported-by: Lei Yao <lei.a.yao@intel.com>
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
params is not freed if pthread_create() fails. The fix is
straightforward.
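A minimal sketch of the fix, under illustrative names (the actual
code lives in the EAL control thread helper):

    #include <pthread.h>
    #include <stdlib.h>

    struct ctrl_params { void *arg; };

    static void *ctrl_thread(void *arg)
    {
        struct ctrl_params *params = arg;
        /* on success, the new thread owns and eventually frees params */
        free(params);
        return NULL;
    }

    static int create_ctrl_thread(pthread_t *tid)
    {
        struct ctrl_params *params = malloc(sizeof(*params));

        if (params == NULL)
            return -1;
        if (pthread_create(tid, NULL, ctrl_thread, params) != 0) {
            /* the thread never started, so nobody else frees params */
            free(params);
            return -1;
        }
        return 0;
    }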
Fixes: 3d09a6e26d ("eal: fix threads block on barrier")
Reported-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Many sample applications fail because of the
dev_info.flow_type_rss_offloads check in rte_eth_dev_configure().
The sample applications need to be fixed/updated before an error is
returned from rte_eth_dev_configure() and rte_eth_dev_rss_hash_update().
This patch keeps the error logs but no longer returns an error.
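The shape of the change, as a hedged fragment (field names are from
rte_eth_dev_info; the log macro and surrounding variables are
illustrative, as in the rte_eth_dev_configure() context):

    uint64_t rss_hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;

    if ((rss_hf & dev_info.flow_type_rss_offloads) != rss_hf)
        /* keep the diagnostic, but no longer return an error */
        RTE_LOG(ERR, PMD,
            "port %u: invalid rss_hf 0x%" PRIx64 ", valid: 0x%" PRIx64 "\n",
            port_id, rss_hf, dev_info.flow_type_rss_offloads);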
Fixes: 8863a1fbfc ("ethdev: add supported hash function check")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
With the legacy build system, MODULE_CFLAGS can be set to pass
compiler flags specific to kernel module builds.
This is currently used by Ubuntu and Debian.
Set ccflags-y in the Kbuild file to achieve the same result with
Meson, and to keep backward compatibility with older scripts.
This fixes a regression on Ubuntu/Debian when the Kbuild file is
included in the DKMS source package, as DKMS will silently pick it
up by default if present, causing MODULE_CFLAGS to be ignored.
Fixes: a52f4574f7 ("igb_uio: build with meson")
Cc: stable@dpdk.org
Signed-off-by: Luca Boccassi <bluca@debian.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The port_init function calls the rte_eth_dev_is_valid_port function,
which now returns 1 if the port state is attached. A return value
of 1 now means a valid port.
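A short sketch of the adjusted call site in port_init() (simplified
from the sample code):

    /* returns 1 when the port is valid (state attached), 0 otherwise */
    if (!rte_eth_dev_is_valid_port(port))
        return -1;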
Fixes: a9dbe18022 ("fix ethdev port id validation")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
When the heap initializes, we need to add already allocated segments
to the heap. However, in doing so, we never increased the total heap
size. Fix it by adding the segment length to the total heap length
when initializing the heap.
Fixes: 66cc45e293 ("mem: replace memseg with memseg lists")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
At hugepage info initialization, EAL takes out a write lock on
hugetlbfs directories and drops it after the memory init is
finished. However, in non-legacy mode, if the "-m" or "--socket-mem"
switches are passed, this leads to a deadlock, because EAL tries
to allocate pages (and thus take out a write lock on the hugedir)
while still holding a separate hugedir write lock.
Fix it by checking whether the write lock in the hugepage info is
active, and not trying to lock the directory if the hugedir fd is
valid.
Fixes: 1a7dc2252f ("mem: revert to using flock and add per-segment lockfiles")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Shahaf Shuler <shahafs@mellanox.com>
Tested-by: Andrew Rybchenko <arybchenko@solarflare.com>
The EAL option -m is supported on FreeBSD, so move it from the
non-supported heading to the supported heading.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Commits for the bbdev and security libraries are merged
into the Next Crypto subtree.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Claim maintainership of all areas of EAL memory init, including
the OS-specific parts.
Also, claim maintainership of fbarray: although it is not related
to memory allocation, it is heavily used by it, and its primary
purpose is to serve the memory allocation functions, so it will
appear under the "memory allocation" banner.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The original implementation used flock() locks, but was later
switched to using fcntl() locks for page locking, because
fcntl() locks allow locking parts of a file, which is useful
for single-file segments mode, where locking the entire file
isn't as useful because we still need to grow and shrink it.
However, according to fcntl()'s Ubuntu manpage [1], the semantics of
fcntl() locks have a giant oversight:
This interface follows the completely stupid semantics of System
V and IEEE Std 1003.1-1988 (“POSIX.1”) that require that all
locks associated with a file for a given process are removed
when any file descriptor for that file is closed by that process.
This semantic means that applications must be aware of any files
that a subroutine library may access.
Basically, closing *any* fd with an fcntl() lock (which we do because
we don't want to leak fd's) will drop the lock completely.
So, in this commit, we will be reverting back to using flock() locks
everywhere. However, that still leaves the problem of locking parts
of a memseg list file in single-file segments mode, and we will be
solving it by creating a separate lock file for each page, and
tracking those with flock().
We will also be removing all of this tailq business and replacing it
with a simple array - saving a few bytes is not worth the extra
hassle of dealing with pointers and potential memory allocation
failures. Also, remove the tailq lock, since it is not needed - these
fd lists are per-process, and within a given process, there is always
only one thread handling access to hugetlbfs.
So, the first process to allocate a segment will create a lock file
and take out a shared lock on it. When we're shrinking the page file,
we will be trying to take out a write lock on that lock file, which
will fail if any other process is holding onto it as well. This way,
we can know whether we can shrink the segment file. Also, if no other
locks are found in the lock list for a given memseg list, the memseg
list fd is automatically closed.
One other thing to note is that, according to the flock() Ubuntu
manpage [2], upgrading a lock from shared to exclusive is implemented
by dropping and reacquiring the lock, which is not atomic and would
thus have created race conditions. So, when attempting to perform
operations in hugetlbfs, we will take out a write lock on the
hugetlbfs directory, so that only one process can perform hugetlbfs
operations concurrently.
[1] http://manpages.ubuntu.com/manpages/artful/en/man2/fcntl.2freebsd.html
[2] http://manpages.ubuntu.com/manpages/bionic/en/man2/flock.2.html
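A hedged sketch of the per-page lock file protocol described above
(paths, names and error handling are simplified, not the actual EAL
code):

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* On allocation: create/open the page's lock file and take a
     * shared lock, held for as long as this process uses the page. */
    static int lock_page(const char *lockpath)
    {
        int fd = open(lockpath, O_CREAT | O_RDWR, 0600);

        if (fd < 0)
            return -1;
        if (flock(fd, LOCK_SH) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    /* On deallocation: shrinking the segment file is only safe if no
     * other process holds a shared lock, i.e. if an exclusive lock
     * can be taken without blocking. */
    static int can_shrink(int lock_fd)
    {
        return flock(lock_fd, LOCK_EX | LOCK_NB) == 0;
    }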
Fixes: 66cc45e293 ("mem: replace memseg with memseg lists")
Fixes: 582bed1e1d ("mem: support mapping hugepages at runtime")
Fixes: a5ff05d60f ("mem: support unmapping pages at runtime")
Fixes: 2a04139f66 ("eal: add single file segments option")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Currently, memseg lists for secondary processes are allocated on
sync (triggered by init), when they are accessed for the first
time. Move this initialization to a separate init stage for
memalloc.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
For non-legacy mode, we are preallocating space for hugepages, so
we know in advance which pages we will be able to allocate and
which we won't. However, the init procedure was using hugepage
counts gathered from sysfs, paid no attention to which hugepage
sizes were actually available for reservation, and failed on
attempts to reserve unavailable pages.
Fix this by limiting total page counts to the number of pages
actually preallocated.
Also, the VA preallocation procedure only looks at mountpoints that
are available, and expects pages to exist if a mountpoint exists.
That might not necessarily be the case, so also check whether there
are hugepages available for a particular page size on a particular
NUMA node.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Jananee Parthasarathy <jananeex.m.parthasarathy@intel.com>
Previously, if we couldn't preallocate VA space on 32-bit for
one page size, we simply bailed out, even though we could have
tried allocating VA space with other page sizes.
For example, if the user had both 1G and 2M pages enabled and
had asked DPDK to allocate memory on both sockets, DPDK would
have tried to allocate VA space for a 1x1G page on both sockets,
failed, and never tried again, even though it could have
allocated the same 1G of VA space as 512x2M pages.
Fix this by retrying with different page sizes if VA space
reservation failed.
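A simplified sketch of the retry loop (illustrative, not the actual
EAL code):

    #include <stddef.h>
    #include <sys/mman.h>

    /* try each enabled page size in turn instead of bailing out
     * after the first failed reservation */
    static void *
    reserve_va(const size_t *page_sz, const unsigned int *n_pages, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            size_t sz = (size_t)n_pages[i] * page_sz[i];
            void *va = mmap(NULL, sz, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (va != MAP_FAILED)
                return va; /* reserved using this page size */
        }
        return NULL;
    }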
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Jananee Parthasarathy <jananeex.m.parthasarathy@intel.com>
32-bit mode has an upper limit on the amount of VA space it can
preallocate, but the original implementation used the wrong constant,
resulting in a failure to initialize due to integer overflow. Fix it
by using the correct constant.
Fixes: 66cc45e293 ("mem: replace memseg with memseg lists")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Jananee Parthasarathy <jananeex.m.parthasarathy@intel.com>
The previous code checked whether both the first and last elements
were NULL, but if they weren't, it expected both to be non-NULL,
which will be the case under normal conditions but may not be the
case due to heap structure corruption.
Coverity issue: 272566
Fixes: bb372060da ("malloc: make heap a doubly-linked list")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Technically, while the pointer would have been invalid if msl_idx
were invalid, we wouldn't have actually attempted to access the
pointer until after verifying the index. Fix it by moving the array
access to after we have verified the validity of the index.
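The fixed ordering, as a short fragment (mcfg and memsegs as in the
EAL's rte_mem_config; surrounding variables are assumed):

    /* validate the index first... */
    if (msl_idx < 0 || msl_idx >= RTE_MAX_MEMSEG_LISTS)
        return -1;
    /* ...and only then use it for the array access */
    msl = &mcfg->memsegs[msl_idx];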
Coverity issue: 272574
Fixes: 66cc45e293 ("mem: replace memseg with memseg lists")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
If the user has specified a flag to unmap the area right after
mapping it, we were passing an already-unmapped pointer to RTE_LOG.
This is not an issue, since RTE_LOG doesn't actually dereference the
pointer, but fix it anyway by moving the RTE_LOG call to before the
unmap.
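The reordering, sketched as a fragment (the message text and the
unmap flag are illustrative):

    /* log while addr is still mapped; only then unmap if requested */
    RTE_LOG(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)\n",
        addr, size);
    if (unmap)
        munmap(addr, size);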
Coverity issue: 272584
Fixes: b7cc54187e ("mem: move virtual area function in common directory")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Coverity reports these lines as having no effect. Technically, we do
want those lines to have no effect; however, they would likely have
been optimized out. Add volatile qualifiers to ensure the code has
an effect.
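The fault-in pattern with the added qualifier, roughly (addr is the
page being mapped; the surrounding loop is elided):

    /* fault the page in; the dead read/write must not be optimized
     * away, hence the volatile qualifier */
    *(volatile int *)addr = *(volatile int *)addr;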
Coverity issue: 272608
Fixes: 582bed1e1d ("mem: support mapping hugepages at runtime")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Previously, if mmap failed to map a page at the requested address,
we were attempting to unmap the wrong address. Fix it by unmapping
the address we actually mapped, and jump further to avoid unmapping
memory that was never allocated.
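A minimal sketch of the corrected error path (flags and names are
simplified, not the actual allocator code):

    #include <stddef.h>
    #include <sys/mman.h>

    static void *
    map_page_at(void *requested_addr, size_t page_sz, int fd)
    {
        void *va = mmap(requested_addr, page_sz, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);

        if (va == MAP_FAILED)
            return NULL; /* nothing was mapped, nothing to unmap */
        if (va != requested_addr) {
            /* unmap what we actually mapped, not the requested address */
            munmap(va, page_sz);
            return NULL;
        }
        return va;
    }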
Coverity issue: 272602
Fixes: 582bed1e1d ("mem: support mapping hugepages at runtime")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The previous code had an old rebase leftover from the time when
oldpolicy was an actual int instead of a pointer. Fix it to do the
comparison by dereferencing the pointer.
Coverity issue: 272589
Fixes: 582bed1e1d ("mem: support mapping hugepages at runtime")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Normally, a tailq entry should have a valid fd by the time we attempt
to map the segment. However, in case it doesn't, we're leaking the
fd, so fix it.
Coverity issue: 272570
Fixes: 2a04139f66 ("eal: add single file segments option")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
We close the fd if we managed to find it in the list of allocated
segment lists (which should always be the case under normal
conditions), but if we didn't, the fd was leaked. Close it if we
couldn't find it in the segment list. This is not an issue because,
if the segment is zero-length, we're getting rid of it anyway, so
there's no harm in not storing the fd anywhere.
Coverity issue: 272568
Fixes: 2a04139f66 ("eal: add single file segments option")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
We were closing the descriptor before checking whether the mapping
had failed, but if it had, we did a second close afterwards. Fix
it by closing the descriptor only after all error checks have been
done.
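The reordered close, as a sketch (simplified from the single-file
segments path; names are illustrative):

    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *
    map_seg(int fd, size_t size)
    {
        void *va = mmap(NULL, size, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);

        /* all error checks happen before the descriptor is closed... */
        if (va == MAP_FAILED) {
            close(fd); /* first and only close on the error path */
            return NULL;
        }
        /* ...so the success path also closes it exactly once */
        close(fd);
        return va;
    }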
Coverity issue: 272560
Fixes: 2a04139f66 ("eal: add single file segments option")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
resize_hugefile() returns either 0 (which indicates success) or -1
(which indicates failure), but we failed to check its return value
when the --single-file-segments option is used.
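The added check, roughly (resize_hugefile() is the internal helper
named above; its parameter names here are illustrative):

    /* the helper's return value was previously ignored */
    if (resize_hugefile(fd, offset, size, grow) < 0)
        return -1; /* propagate the failure */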
Fixes: 2a04139f66 ("eal: add single file segments option")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
The commit below introduced a pthread barrier for synchronization,
but the two IPC threads block on the barrier and never wake up.
(gdb) bt
#0 futex_wait (private=0, expected=0, futex_word=0x7fffffffcff4)
at ../sysdeps/unix/sysv/linux/futex-internal.h:61
#1 futex_wait_simple (private=0, expected=0, futex_word=0x7fffffffcff4)
at ../sysdeps/nptl/futex-internal.h:135
#2 __pthread_barrier_wait (barrier=0x7fffffffcff0) at pthread_barrier_wait.c:184
#3 rte_thread_init (arg=0x7fffffffcfe0)
at ../dpdk/lib/librte_eal/common/eal_common_thread.c:160
#4 start_thread (arg=0x7ffff6ecf700) at pthread_create.c:333
#5 clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
Through analysis, we find that the barrier being defined on the
stack could be the root cause. This patch changes the barrier to use
heap memory instead.
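A minimal standalone example of the pattern (not the actual EAL
code):

    #include <pthread.h>
    #include <stdlib.h>

    static void *worker(void *arg)
    {
        pthread_barrier_t *barrier = arg;

        pthread_barrier_wait(barrier); /* safe: barrier is on the heap */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        /* heap allocation instead of a stack variable, which could go
         * out of scope while the other thread still waits on it */
        pthread_barrier_t *barrier = malloc(sizeof(*barrier));

        if (barrier == NULL ||
                pthread_barrier_init(barrier, NULL, 2) != 0)
            return 1;
        if (pthread_create(&tid, NULL, worker, barrier) != 0)
            return 1;
        pthread_barrier_wait(barrier);
        pthread_join(tid, NULL);
        pthread_barrier_destroy(barrier);
        free(barrier);
        return 0;
    }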
Fixes: d651ee4919 ("eal: set affinity for control threads")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
With memory hotplugging support, the ordering of memsegs has changed
from physically contiguous to virtually contiguous. The DPAA bus and
drivers depend on PA-to-VA address conversion for I/O.
This patch creates a list of the blocks requested to be pinned to
the DPAA mempool. A physical address being searched is expected to
belong to this list (as it comes from the hardware pool), which
makes the lookup less expensive than a memseg walk. There is,
however, a marginal drop in performance vis-a-vis the legacy mode
with physically contiguous memsegs.
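An illustrative lookup over such a pinned-block list (the structure
and names are hypothetical, not the actual dpaa code):

    #include <stddef.h>
    #include <stdint.h>

    struct pinned_block {
        uint64_t paddr; /* physical start of the block */
        void *vaddr;    /* corresponding virtual start */
        size_t len;
    };

    /* Search the short list of blocks pinned to the mempool instead
     * of walking all memsegs. */
    static void *
    blocklist_pa_to_va(const struct pinned_block *blk, int n, uint64_t pa)
    {
        int i;

        for (i = 0; i < n; i++) {
            if (pa >= blk[i].paddr && pa < blk[i].paddr + blk[i].len)
                return (char *)blk[i].vaddr + (pa - blk[i].paddr);
        }
        return NULL; /* not from the hardware pool: fall back to a walk */
    }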
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
With memory hotplugging support, the ordering of memsegs has changed
from physically contiguous to virtually contiguous. The FSLMC bus
and dpaa2 drivers depend on PA-to-VA address conversion when in
physical addressing mode.
This patch creates a list of the blocks requested to be pinned to
the DPAA2 mempool. A physical address being searched is expected to
belong to this list (as it comes from the hardware pool), which
makes the lookup less expensive than a memseg walk. However, this
has a marginal impact on performance vis-a-vis the legacy mode with
physically contiguous memsegs.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Crypto requires physical-to-virtual address conversion for
descriptors. Prior to memory hotplugging, this was based on memseg
iteration: assuming all memsegs are physically contiguous, fast
calculations could be done using a cached start address. This
assumption is now invalid with memory hotplugging support.
In preparation for supporting hotplugged memory, this patchset
removes the optimized PA-VA conversion based on a physical address
offset stored in the pool context.
This adversely affects performance, as complete memsegs now need
to be parsed, but a rework containing the necessary optimizations
will be posted on top of this.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
This patch is to accommodate an experimental mbuf feature - external
buffer attachment. If an mbuf is attached to an external buffer, its
ol_flags will have EXT_ATTACHED_MBUF set. Without enabling/using the
feature, everything remains the same.
If a PMD delivers Rx packets with non-direct mbufs, ol_flags should
not be overwritten. For the mlx5 PMD, if Multi-Packet RQ is enabled,
Rx packets could be carried in externally attached mbufs.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch introduces a new way of attaching an external buffer to
an mbuf. Attaching an external buffer is quite similar to mbuf
indirection in that it replaces the buffer address and length of an
mbuf, but with a few differences:
- When an indirect mbuf is attached, the refcnt of the direct mbuf
would be 2 as long as the direct mbuf itself isn't freed after the
attachment. In such cases, the buffer area of the direct mbuf must
be read-only. An external buffer, however, has its own refcnt, which
starts from 1. Unless multiple mbufs are attached to an mbuf having
an external buffer, the external buffer is writable.
- There's no need to allocate the buffer from a mempool. Any buffer
can be attached with an appropriate free callback.
- Less metadata is required to maintain shared data such as the
refcnt.
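A hedged usage sketch of the new API (the helper and attach
signatures as introduced by this patch; the external buffer, its
IOVA, the free callback and the mbuf m are assumed to be provided by
the caller):

    struct rte_mbuf_ext_shared_info *shinfo;
    uint16_t buf_len = ext_buf_len;

    /* carve the shared info (refcnt starting at 1, free callback)
     * out of the tail of the external buffer */
    shinfo = rte_pktmbuf_ext_shinfo_init_helper(ext_buf_addr, &buf_len,
            ext_buf_free_cb, fcb_opaque);

    /* attach: the mbuf's buf_addr/buf_iova now point at the external
     * buffer and EXT_ATTACHED_MBUF is set in ol_flags */
    rte_pktmbuf_attach_extbuf(m, ext_buf_addr, ext_buf_iova, buf_len,
            shinfo);
    rte_pktmbuf_reset_headroom(m);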
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch fixes the final condition check when moving virtqueue
descriptors.
Fixes: 3bb595ecd6 ("vhost/crypto: add request handler")
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch fixes the missing head descriptor correction for
indirect descriptors.
Fixes: 0aee242841 ("vhost/crypto: move to safe GPA translation API")
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
We should call the set_features callback after setting the features
in the virtio_net structure; otherwise the vDPA driver cannot get
the right features.
Fixes: 07718b4f87 ("vhost: adapt library for selective datapath")
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Zhihong Wang <zhihong.wang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This reverts commit 394313fff3.
While the patch did solve a concurrency issue, it induces more
page copies, as some clean pages are marked as dirty for
performance reasons. Moreover, as there is no more contention
when doing the logging, the rate of packets that can be processed
is higher, leading to even more pages being dirtied.
It has been reported that with more than one queue pair, and at a
relatively low packet rate (1 Mpps), live migration never
converges until the flow is stopped.
Until a better solution is found, it is better to return to the
old behaviour, i.e. using atomic operations for dirty page
logging.
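The restored logging primitive looks roughly like this (simplified
from the vhost code; the inline attribute and naming may differ):

    #include <stdint.h>

    /* set the dirty bit atomically, so concurrent processing threads
     * cannot lose each other's updates to the same byte of the log */
    static inline void
    vhost_log_page(uint8_t *log_base, uint64_t page)
    {
        __sync_fetch_and_or(&log_base[page / 8], 1 << (page % 8));
    }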
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
A new version of the dependency packages for the szedata2 driver is
needed due to the new API of the libsze2 library, which is used by
the driver.
The documentation and the release notes are updated to contain the
information about the required versions.
Signed-off-by: Matej Vido <vido@cesnet.cz>
Acked-by: Jan Remes <remes@netcope.com>
Missing "return -ENOTSUP" will always lead to illegal offload
passing through offload checking.
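The added check, sketched as a fragment (the offload mask variables
are illustrative):

    /* reject requests for Tx offloads the port does not support */
    if ((tx_offloads & ~supported_offloads) != 0)
        return -ENOTSUP; /* this return was missing */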
Fixes: 7497d3e2f7 ("net/i40e: convert to new Tx offloads API")
Signed-off-by: Yanglong Wu <yanglong.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The JUMBO_FRAME offload capability should be exposed, since i40e
does support it.
Fixes: c3ac7c5b0b ("net/i40e: convert to new Rx offloads API")
Signed-off-by: Yanglong Wu <yanglong.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The library folder name and the output library name are the same,
with a few exceptions, one of which is librte_ether.
This library is the network device abstraction layer, so the name
"ethdev" fits better than "ether"; the library and header files are
already named ethdev.
Also, there is an rte_ether.h in the net library, which can cause
confusion.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Use dynamic log type (instead of PMD) in vhost.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>