This patch fixes the incorrect packet content copy in the
chaining mode. Originally, the content before the cipher offset
was overwritten with all zeros. This patch fixes the problem by
making sure the correct write-back source and destination are
set during setup.
Fixes: 3bb595ecd682 ("vhost/crypto: add request handler")
Cc: stable@dpdk.org
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
We should request RO access when receiving packets from the
VM and request RW access when sending packets to the VM.
Fixes: a922401f35cc ("vhost: add Rx support for packed ring")
Fixes: ae999ce49dcb ("vhost: add Tx support for packed ring")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This updates the license on the rte_rtm.h file to be the standard
BSD-3-Clause license used for the rest of DPDK, thus bringing the file in
compliance with the DPDK licensing policy.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
When TSX transactions abort, it is generally worth retrying a number of
times before falling back to the traditional locking path, as the
parallelism benefits from TSX can be worth it when a transaction does
succeed. For cases with multiple threads and high contention rates, it
can be useful to have increasing delays between retry attempts, so as to
avoid having the same threads repeatedly colliding.
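As a rough illustration only (the constant name and return convention are
assumptions, not the actual lock-elision code), the retry loop with an
increasing back-off could look like this:

#include <immintrin.h>
#include <rte_pause.h>

#define MAX_RETRIES 10 /* illustrative value */

/* Returns 1 if a transaction was successfully started (the caller runs
 * the critical section transactionally and commits with _xend()), or 0
 * if the caller should take the fallback lock instead. */
static inline int
try_begin_tsx_with_backoff(void)
{
	int retries = MAX_RETRIES;
	int delay = 1, i;

	while (retries--) {
		if (_xbegin() == _XBEGIN_STARTED)
			return 1;
		/* aborted: wait an increasing time before the next attempt */
		for (i = 0; i < delay; i++)
			rte_pause();
		delay *= 2;
	}
	return 0; /* fall back to the traditional locking path */
}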
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
gcc 7 and 8 with -O3 can generate a vzeroupper instruction from rte_memcpy
inside the TSX region, which may abort the TSX transaction.
This fix changes rte_memcpy to memcpy, which does not insert the
extra vzeroupper into the library.
Fixes: f2e3001b53ec ("hash: support read/write concurrency")
Cc: stable@dpdk.org
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
EAL should not crash when setting an alarm fails. Also, remove the
profanity from the error message.
Fixes: daf9bfca717e ("ipc: remove thread for async requests")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
The x86 JIT can generate invalid code for (BPF_LD | BPF_IMM | EBPF_DW)
instructions when the immediate value is bigger than INT32_MAX.
Fixes: cc752e43e079 ("bpf: add JIT compilation for x86_64 ISA")
Cc: stable@dpdk.org
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
If the last part of the PCI address (the function number) is missing,
parsing was still considered successful, silently assuming function 0.
The call to strtoul does not return an error in such a case,
so an explicit check is inserted before it, as sketched below.
This bug has always been present in the older parsing macros:
- GET_PCIADDR_FIELD
- GET_BLACKLIST_FIELD
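A minimal sketch of the added check (the helper name and context are
illustrative, not the actual parsing code):

#include <errno.h>
#include <stdlib.h>

/* Reject an empty field before calling strtoul(), since strtoul("")
 * returns 0 without reporting an error. */
static int
parse_pci_field(const char *in, unsigned long *val)
{
	char *end;

	if (*in == '\0') /* e.g. missing function number */
		return -1;
	errno = 0;
	*val = strtoul(in, &end, 16);
	if (errno != 0 || end == in)
		return -1;
	return 0;
}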
Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org
Reported-by: Wisam Jaddo <wisamm@mellanox.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
The lock-free algorithm has caused a significant lookup
performance regression for certain use cases. The
regression is attributed to the use of non-relaxed
memory orderings. Two versions of the lookup functions
are created: one that uses the RW lock and one that
is lock-free. This fixes the performance regression
for use cases that used the RW-lock version of the
lookup function.
Fixes: e605a1d36 ("hash: add lock-free r/w concurrency")
Suggested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
When calling __atomic_compare_exchange_n, use relaxed ordering for the
success case, as multiple producers/consumers do not release updates to
each other through it, so there is no need for acquire or release ordering.
Because a thread fence is already in place, ordering for the first
iteration can also be relaxed.
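For illustration, the move-head loop with the relaxed success ordering
looks roughly like this (a simplified sketch, not the exact ring code):

/* Sketch of the CAS in __rte_ring_move_prod_head (simplified). */
do {
	/* ... compute free_entries and new_head from old_head ... */
	success = __atomic_compare_exchange_n(&r->prod.head,
			&old_head, new_head,
			0,                  /* strong CAS */
			__ATOMIC_RELAXED,   /* success: no data released here */
			__ATOMIC_RELAXED);  /* failure */
} while (success == 0);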
Run the ring perf test on the following testbed:
HW: ThunderX2 B0 CPU CN9975 v2.0, 2 sockets, 28 cores, 4 threads/core, 2.5GHz
OS: Ubuntu 16.04.5 LTS, Kernel: 4.15.0-36-generic
DPDK: 18.08, Configuration: arm64-armv8a-linuxapp-gcc
gcc: 8.1.0
$sudo ./test/test/test -l 16-19,44-47,72-75,100-103 -n 4 \
--socket-mem=1024 -- -i
Without the patch:
*** Testing using two physical cores ***
SP/SC bulk enq/dequeue (size: 8): 5.75
MP/MC bulk enq/dequeue (size: 8): 10.18
SP/SC bulk enq/dequeue (size: 32): 1.80
MP/MC bulk enq/dequeue (size: 32): 2.34
With the patch:
*** Testing using two physical cores ***
SP/SC bulk enq/dequeue (size: 8): 5.59
MP/MC bulk enq/dequeue (size: 8): 10.54
SP/SC bulk enq/dequeue (size: 32): 1.73
MP/MC bulk enq/dequeue (size: 32): 2.38
No significant improvement nor regression was seen, as the optimisation
is not on the critical path.
Fixes: 39368ebfc6 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Use case scenario:
1) Thread 1 is enqueuing. It reads prod.head and gets stalled for some
reason (running out of CPU time, being preempted, ...)
2) Thread 2 is enqueuing. It succeeds in enqueuing and moves prod.head
forward.
3) Thread 3 is dequeuing. It succeeds in dequeuing and moves the cons.tail
beyond the prod.head read by thread 1.
4) Thread 1 is re-scheduled. It reads cons.tail.
cpu1(producer) cpu2(producer) cpu3(consumer)
load r->prod.head
^ load r->prod.head
| load r->cons.tail
| store r->prod.head(+n)
stalled <-- enqueue ----->
| store r->prod.tail(+n)
| load r->cons.head
| load r->prod.tail
| store r->cons.head(+n)
| <...dequeue.....>
v store r->cons.tail(+n)
load r->cons.tail
For thread 1, __atomic_compare_exchange_n detects the outdated prod.head
and retries the flow with the new one. This retry flow works fine on
strongly ordered platforms (e.g. x86). But on weakly ordered platforms
(arm, ppc), the loads of cons.tail and prod.head might be reordered:
prod.head is new but cons.tail has become too old. The retry flow, based
on the detection of an outdated head, then does not trigger as expected,
and the outdated cons.tail causes a wrong free_entries value.
Similarly, for dequeuing, an outdated prod.tail leads to a wrong
avail_entries value.
The fix is to enforce a deterministic order of the two loads, allowing
the retry to work.
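The fix can be sketched as an explicit barrier between the head load and
the load of the opposite index (a simplified sketch, not the exact code):

/* Producer side (simplified): make sure cons.tail is not loaded
 * before prod.head. */
old_head = __atomic_load_n(&r->prod.head, __ATOMIC_RELAXED);
/* Barrier: the following cons.tail load cannot be reordered before
 * the prod.head load above. */
__atomic_thread_fence(__ATOMIC_ACQUIRE);
cons_tail = __atomic_load_n(&r->cons.tail, __ATOMIC_ACQUIRE);
free_entries = capacity + cons_tail - old_head;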
Run the ring perf test on the following testbed:
HW: ThunderX2 B0 CPU CN9975 v2.0, 2 sockets, 28 cores, 4 threads/core, 2.5GHz
OS: Ubuntu 16.04.5 LTS, Kernel: 4.15.0-36-generic
DPDK: 18.08, Configuration: arm64-armv8a-linuxapp-gcc
gcc: 8.1.0
$sudo ./test/test/test -l 16-19,44-47,72-75,100-103 -n 4 \
--socket-mem=1024 -- -i
Without the patch:
*** Testing using two physical cores ***
SP/SC bulk enq/dequeue (size: 8): 5.64
MP/MC bulk enq/dequeue (size: 8): 9.58
SP/SC bulk enq/dequeue (size: 32): 1.98
MP/MC bulk enq/dequeue (size: 32): 2.30
With the patch:
*** Testing using two physical cores ***
SP/SC bulk enq/dequeue (size: 8): 5.75
MP/MC bulk enq/dequeue (size: 8): 10.18
SP/SC bulk enq/dequeue (size: 32): 1.80
MP/MC bulk enq/dequeue (size: 32): 2.34
The results showed the thread fence degrades the performance slightly,
but it is required for correctness.
Fixes: 39368ebfc6 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Some toolchains have an fls() definition in strings.h with an int
argument type, which conflicts with the uint32_t argument type.
/export/dpdk.org/lib/librte_eal/common/rte_reciprocal.c:47:19:
error: conflicting types for ‘fls’
static inline int fls(uint32_t x)
^~~
/opt/marvell-tools-201/aarch64-marvell-elf/include/strings.h:59:6:
note: previous declaration of ‘fls’ was here
int fls(int) __pure2;
FreeBSD strings.h also declares fls() with an int argument type:
https://www.freebsd.org/cgi/man.cgi?query=fls&sektion=3
Fix the conflict by using the rte version of fls.
Fixes: ffe3ec811ef5 ("sched: introduce reciprocal divide")
Fixes: faf2b25c9f80 ("fm10k: support VMDQ in multi-queue configuration")
Cc: stable@dpdk.org
Suggested-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The function returns the position of the last (most significant) bit set.
A unit test case is added to verify rte_fls_u32().
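rte_fls_u32() can be implemented with a count-leading-zeros builtin; a
minimal sketch of such an implementation and a few test checks (the
0-for-0 semantics are an assumption here):

#include <stdint.h>

/* Returns the 1-based position of the most significant set bit,
 * or 0 when x is 0. */
static inline uint32_t
fls_u32(uint32_t x)
{
	return (x == 0) ? 0 : 32 - (uint32_t)__builtin_clz(x);
}

/* Example unit test checks:
 * fls_u32(0) == 0, fls_u32(1) == 1, fls_u32(0x80000000) == 32 */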
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The use of rte_memcpy_ptr was removed in the revert below,
but removing the file arch/x86/rte_memcpy.c was missed.
Fixes: d35cc1fe6a7a ("eal/x86: revert select optimized memcpy at run-time")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The devargs of a device can be replaced by a newly allocated one
when trying to probe the same device again (multi-process or
multi-port scenarios). This breaks some pointer references.
It can be avoided by copying the new content, freeing the new devargs,
and returning the already inserted pointer.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Tested-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Current code has different max DMA mask width values for 32-bit and
64-bit systems. IOMMU hardware could report a higher supported width
than the current MAX_DMA_MASK_BITS when RTE_ARCH_64 is not defined. This
is actually the case with a 32-bit kernel running on a 64-bit server
with IOMMU hardware. It could also be a problem on embedded systems
using an IOMMU designed for 64 bits in a 32-bit system.
This patch leaves a single max DMA mask width, which makes sure the
mask width is within the range of the 64-bit variables used for DMA masks.
This also avoids wrong values, because any value higher than
64 bits is likely wrong.
Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Adding an additional failure path in the DMA mask check has exposed an
issue where the `hugepage` pointer may point to memory that has already
been unmapped, but the pointer value is still not NULL, so the failure
handler will attempt to unmap it a second time if the DMA mask check
fails. Fix it by setting the `hugepage` pointer to NULL once it is no
longer needed.
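The essence of the fix (a simplified sketch of the cleanup path; the size
expression is illustrative):

/* Clear the pointer right after the final unmap so a later failure
 * path cannot munmap() the same region a second time. */
munmap(hugepage, nr_hugefiles * sizeof(struct hugepage_file));
hugepage = NULL;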
Coverity issue: 325730
Fixes: 165c89b84538 ("mem: use DMA mask check for legacy memory")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Right now the reassembly code relies on src_dst[] being all zeroes to
determine whether an entry in the fragments table is free or occupied.
This is suboptimal and error prone - a user can crash the DPDK
ip_reassembly app with something like the following scapy script:
x=Ether(src=...,dst=...)/IP(dst='0.0.0.0',src='0.0.0.0',id=0)/('X'*1000)
frags=fragment(x, fragsize=500)
sendp(frags, iface=...)
To overcome that issue and reduce the overhead of the
'key invalidate' and 'key is empty' operations,
add key_len to the key comparison procedure.
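An illustrative sketch of such a comparison (the structure and field
names here are assumptions, not the exact ip_frag definitions):

/* Keys only match when key_len also matches, so an all-zero IPv4 key
 * can no longer match an empty (invalidated) table entry. */
static inline int
frag_key_equal(const struct frag_key *k1, const struct frag_key *k2)
{
	uint64_t diff;
	uint32_t i;

	diff = k1->id ^ k2->id;
	diff |= k1->key_len ^ k2->key_len;
	for (i = 0; i < k1->key_len; i++)
		diff |= k1->src_dst[i] ^ k2->src_dst[i];
	return diff == 0;
}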
Fixes: 4f1a8f633862 ("ip_frag: add IPv6 reassembly")
Cc: stable@dpdk.org
Reported-by: Ryan E Hall <ryan.e.hall@intel.com>
Reported-by: Alexander V Gutkin <alexander.v.gutkin@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Under some conditions, ill-formed fragments might cause the
reassembly code to corrupt mbufs and/or crash.
For example, the following fragment sequence:
<ofs=0,len=100, flags=MF>
<ofs=96,len=100, flags=MF>
<ofs=200,len=0,flags=MF>
<ofs=200,len=100,flags=0>
can trigger the problem.
To overcome such a situation, add a check that the length of the
incoming fragment is greater than zero.
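A minimal sketch of the idea (names are assumptions, not the exact
ip_frag code):

/* A fragment must actually carry payload data beyond its IP header
 * before it is inserted into the fragments table. */
static inline int
frag_len_is_valid(uint16_t total_len, uint16_t hdr_len)
{
	return total_len > hdr_len;
}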
Fixes: 601e279df074 ("ip_frag: move fragmentation/reassembly headers into a library")
Fixes: 4f1a8f633862 ("ip_frag: add IPv6 reassembly")
Cc: stable@dpdk.org
Reported-by: Ryan E Hall <ryan.e.hall@intel.com>
Reported-by: Alexander V Gutkin <alexander.v.gutkin@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
After removing the function rte_eth_dev_attach(),
there are two replacement solutions possible:
one using probe event notification, and one using a new iterator.
So the application can get the new probed ports either asynchronously
or synchronously.
The iterator API is new in DPDK 18.11, so it got the experimental
tag by policy. That causes an issue for strict applications which do
not use experimental functions but want to use the synchronous method.
The replacement for removed API should not be experimental.
That's why the experimental status of the ethdev iterator is removed.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Tested-by: Kevin Traynor <ktraynor@redhat.com>
The functions rte_dev_probe() and rte_dev_remove() are new
in DPDK 18.11 so they got the experimental tag by policy.
However, they are too basic to be skipped
by strict applications which do not use experimental functions.
The alternative is to use rte_eal_hotplug_add() and
rte_eal_hotplug_remove(), but their API requires the application
to parse the devargs string in order to provide bus name,
device name and driver arguments.
The new function rte_dev_probe() is really simpler to use and
more flexible by accepting any devargs string.
Let's encourage applications to use it.
The old functions rte_eal_hotplug_* may be deprecated later.
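For example, attaching the same PCI device with each API could look like
this (the device address is illustrative only):

/* New API: the whole devargs string is parsed internally. */
ret = rte_dev_probe("0000:03:00.0");

/* Hotplug API: the application must split up the devargs itself. */
ret = rte_eal_hotplug_add("pci", "0000:03:00.0", "");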
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Tested-by: Kevin Traynor <ktraynor@redhat.com>
When adding memory to an external heap, do not go to unlock failure
handler because the memory hotplug lock hasn't been taken out yet.
Fixes: 7d75c31014f7 ("malloc: allow adding memory to named heaps")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
For the packed ring layout, we need to save the avail index and its wrap
counter value. At restore time, the used index and its wrap counter
are set to the avail ones, as ring processing is stopped
at the time the vring base is fetched.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
The following error popped when compiling with -pedantic:
In file included from
drivers/net/mlx5/mlx5_flow_dv.c:28:0:
include/rte_gre.h:20:2:
error: type of bit-field 'res2' is a GCC extension [-Werror=pedantic]
uint16_t res2:4; /**< Reserved */
Fix it by adding the __extension__ attribute.
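The essence of the fix (a simplified sketch, not the full rte_gre.h
header):

/* __extension__ tells GCC not to emit the -pedantic warning for the
 * non-standard uint16_t bit-field type in this declaration. */
__extension__
struct gre_hdr_sketch {
	uint16_t res2:4; /* Reserved */
	uint16_t s:1;
	uint16_t k:1;
	uint16_t c:1;
};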
Fixes: 894f71a3805d ("net: add GRE header structure")
Cc: stable@dpdk.org
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
In __rte_ring_move_prod_head, move the __atomic_load_n up and out of
the do {} while loop, as upon failure old_head is already updated by the
compare-exchange; another load is costly and not necessary.
This helps a little on latency, about 1~5%.
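In essence (a simplified sketch, not the exact code):

old_head = __atomic_load_n(&r->prod.head, __ATOMIC_ACQUIRE);
do {
	/* ... compute free_entries and new_head from old_head ...
	 * no reload of r->prod.head here: on failure the CAS below
	 * has already refreshed old_head. */
	success = __atomic_compare_exchange_n(&r->prod.head,
			&old_head, new_head, 0,
			__ATOMIC_RELAXED, __ATOMIC_RELAXED);
} while (success == 0);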
Test result with the patch(two cores):
SP/SC bulk enq/dequeue (size: 8): 5.64
MP/MC bulk enq/dequeue (size: 8): 9.58
SP/SC bulk enq/dequeue (size: 32): 1.98
MP/MC bulk enq/dequeue (size: 32): 2.30
Fixes: 39368ebfc606 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Jia He <justin.he@arm.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Synchronize the load-acquire of the tail with the store-release
within update_tail. The store-release ensures all the ring operations,
enqueue or dequeue, are seen by the observers on the other side as soon
as they see the updated tail. The load-acquire is needed here as the
data dependency is not a reliable way of ordering: the compiler might
break it by saving to temporary values to boost performance.
When computing the free_entries and avail_entries, use atomic semantics
to load the heads and tails instead.
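A simplified sketch of the acquire/release pairing (not the exact ring
code):

/* Consumer side, in update_tail(): publish the dequeue. */
__atomic_store_n(&r->cons.tail, new_val, __ATOMIC_RELEASE);

/* Producer side, when computing free_entries: observe the dequeue
 * (and everything before it) before reusing the slots. */
cons_tail = __atomic_load_n(&r->cons.tail, __ATOMIC_ACQUIRE);
free_entries = capacity + cons_tail - old_head;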
The patch was benchmarked with test/ring_perf_autotest; it decreases
the enqueue/dequeue latency by 5% ~ 27.6% with two lcores. The real gains
depend on the number of lcores, the depth of the ring, and SPSC vs MPMC.
For 1 lcore, it also improves a little, about 3 ~ 4%.
The biggest improvement is in the MPMC case: with two lcores and a ring
size of 32, it cuts latency by up to (3.26-2.36)/3.26 = 27.6%.
This patch is a bug fix; the improvement is a bonus. In our analysis
the improvement comes from the cacheline pre-filling after hoisting the
load-acquire up above __atomic_compare_exchange_n.
The test command:
$sudo ./test/test/test -l 16-19,44-47,72-75,100-103 -n 4 --socket-mem=\
1024 -- -i
Test result with this patch(two cores):
SP/SC bulk enq/dequeue (size: 8): 5.86
MP/MC bulk enq/dequeue (size: 8): 10.15
SP/SC bulk enq/dequeue (size: 32): 1.94
MP/MC bulk enq/dequeue (size: 32): 2.36
In comparison of the test result without this patch:
SP/SC bulk enq/dequeue (size: 8): 6.67
MP/MC bulk enq/dequeue (size: 8): 13.12
SP/SC bulk enq/dequeue (size: 32): 2.04
MP/MC bulk enq/dequeue (size: 32): 3.26
Fixes: 39368ebfc6 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Jia He <justin.he@arm.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Fixed bad logic in rte_comp_op_alloc() when checking the return
value from rte_comp_op_raw_bulk_alloc(). This
could have resulted in a seg-fault in the error case.
Made the rte_comp_op_bulk_alloc() code consistent
with rte_comp_op_alloc().
Fixes: 96086db5a369 ("compressdev: add operation management")
Cc: stable@dpdk.org
Reported-by: Sabyasachi Sengupta <sabyasg@hpe.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
Add a note on usage of the op structure and on when it can be
accessed and freed.
Fixes: 63f4bfd5328b ("compressdev: add enqueue/dequeue functions")
Cc: stable@dpdk.org
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
During memory initialization, calling rte_mem_check_dma_mask
leads to a deadlock because memory_hotplug_lock is already held by a
writer, the current code in execution, and rte_memseg_walk
tries to take it as a reader.
This patch adds a thread-unsafe version which calls the final
function specifying that the memory_hotplug_lock does not need to be
acquired. The patch also modifies rte_mem_check_dma_mask as an
intermediate step which calls the final function as before,
implying memory_hotplug_lock will be acquired.
PMDs should always use the version acquiring the lock, with the
thread-unsafe one being just for internal EAL memory code.
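A sketch of the resulting call pattern (the internal helper name is an
assumption; only the two public prototypes are taken from the API):

/* Both entry points share one implementation and only differ in
 * whether memory_hotplug_lock is taken. */
int
rte_mem_check_dma_mask(uint8_t maskbits)
{
	return check_dma_mask(maskbits, 0); /* takes the lock */
}

int
rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits)
{
	return check_dma_mask(maskbits, 1); /* lock already held */
}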
Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
If a device reports addressing limitations through a DMA mask,
the IOVAs of mapped memory need to be checked to ensure
correct functionality.
Previous patches introduced this DMA check for the main memory code
currently being used, but other options like legacy memory and the
no-hugepages option also need to be considered.
This patch adds the DMA check for those cases.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
If the DMA mask check shows mapped memory outside the supported range
specified by the DMA mask, nothing can be done but to report the error
and return. This can imply the app not being executed at all,
or dynamic memory allocation being precluded once the app is running.
In any case, we can advise the user to force IOVA as PA if IOVA is
currently VA and the user is root.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds the possibility of setting a DMA mask to be applied
once the memory initialization is done.
This is currently needed when the IOVA mode is set by PCI-related
code and an x86 IOMMU hardware unit is present. Current code calls
rte_mem_check_dma_mask, but it is wrong to do so at that point
because the memory has not been initialized yet.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Current name rte_eal_check_dma_mask does not follow the naming
used in the rest of the file.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
The parameter needs to be the maskbits and not the mask itself.
Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
build error:
In function ‘eal_plugin_add’,
.../lib/librte_eal/common/eal_common_options.c:225:2:
error: ‘strncpy’ output may be truncated copying 4095 bytes from a
string of length 4095 [-Werror=stringop-truncation]
strncpy(solib->name, path, PATH_MAX-1);
strncpy may result in a non-null-terminated string;
replace it with strlcpy.
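The change in essence (a sketch; the actual size argument may differ):

/* before: may leave solib->name without a terminating NUL */
strncpy(solib->name, path, PATH_MAX-1);
/* after: always NUL-terminated within PATH_MAX bytes */
strlcpy(solib->name, path, PATH_MAX);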
Fixes: f9a08f650211 ("eal: add support for shared object drivers")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
The errno_autotest test case has been failing since
commit 5d7b673d5fd6 ("mk: build with _GNU_SOURCE defined by default")
RTE>>errno_autotest
rte_strerror: 'Unknown error 11',
strerror: 'Resource temporarily unavailable'
Test Failed
There are two different versions of strerror_r(), selected based on the
_GNU_SOURCE definition.
/* XSI-compliant */
int strerror_r(int errnum, char *buf, size_t buflen);
/* GNU-specific */
char *strerror_r(int errnum, char *buf, size_t buflen);
Since the GNU-specific version returns char* the exiting "if"
condition around the strerror_r fails.
Switching back to XSI-compliant version to allow
a) Portable strerror_r() usage as musl c library uses
non GNU speficic version
https://git.musl-libc.org/cgit/musl/tree/src/string/strerror_r.c
b) Based on strerror_r(3) man page, it is possible that GNU-specific
version need not use char *buf to fill error message instead it
can use the immutable static string from the library and return it.
Note from the strerror_r(3) man page:
The GNU-specific strerror_r() returns a pointer to a string containing
the error message. This may be either a pointer to a string that the
function stores in buf, or a pointer to some (immutable)
static string (in which case buf is unused).
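With the XSI-compliant prototype, the int return value can be checked
directly; a minimal sketch of the usage pattern (the fallback message is
illustrative):

#include <stdio.h>
#include <string.h>

static const char *
safe_strerror(int errnum, char *buf, size_t len)
{
	/* XSI strerror_r() returns 0 on success; non-zero means the
	 * message could not be produced, so fall back to a generic one. */
	if (strerror_r(errnum, buf, len) != 0)
		snprintf(buf, len, "Unknown error %d", errnum);
	return buf;
}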
Fixes: 5d7b673d5fd6 ("mk: build with _GNU_SOURCE defined by default")
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
If a device is unplugged while an interrupt is pending, the
read call to the uio device to remove it from the poll wait list
can fail, resulting in it being continually polled forever. This
change checks for the read failing and, if so, unregisters the device
as an interrupt source and causes the wait list to be rebuilt.
This race has been reported and observed in production.
Fixes: 0a45657a6794 ("pci: rework interrupt handling")
Cc: stable@dpdk.org
Signed-off-by: Brian Russell <brussell@brocade.com>
Signed-off-by: Luca Boccassi <bluca@debian.org>
rte_mp_request_sync() says that the caller is responsible
for freeing one of its parameters afterwards. EAL didn't
do that, causing a memory leak.
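For reference, the expected usage is roughly (a sketch; the request
message and timeout are illustrative):

struct rte_mp_reply reply = {0};
struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };

if (rte_mp_request_sync(&req, &reply, &ts) == 0) {
	/* ... process reply.nb_received entries in reply.msgs ... */
	free(reply.msgs); /* allocated by EAL, freed by the caller */
}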
Fixes: 244d5130719c ("eal: enable hotplug on multi-process")
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Some global variables can be eliminated; since they are not part of the
public interface, they are free to be removed.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Some global variables can indeed be static; add the static keyword to them.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>