For the packed ring layout, we need to save the avail index and its wrap
counter value. At restore time, the used index and its wrap counter
are set to the avail ones, as ring processing is stopped
when the vring base is fetched.
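A minimal sketch of the scheme, assuming the wrap counter is packed into
the top bit of the saved index; the struct and function names below are
illustrative, not the actual vhost code:

#include <stdbool.h>
#include <stdint.h>

struct vq_state {                       /* illustrative stand-in */
	uint16_t last_avail_idx;
	uint16_t last_used_idx;
	bool     avail_wrap_counter;
	bool     used_wrap_counter;
};

static uint16_t
save_vring_base(const struct vq_state *vq)
{
	/* pack the avail wrap counter into bit 15 of the saved index */
	return vq->last_avail_idx | ((uint16_t)vq->avail_wrap_counter << 15);
}

static void
restore_vring_base(struct vq_state *vq, uint16_t saved)
{
	vq->last_avail_idx = saved & 0x7fff;
	vq->avail_wrap_counter = !!(saved & (1 << 15));
	/* processing was stopped at save time, so used follows avail */
	vq->last_used_idx = vq->last_avail_idx;
	vq->used_wrap_counter = vq->avail_wrap_counter;
}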
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
The following error popped up when compiling with -pedantic:
In file included from
drivers/net/mlx5/mlx5_flow_dv.c:28:0:
include/rte_gre.h:20:2:
error: type of bit-field 'res2' is a GCC extension [-Werror=pedantic]
uint16_t res2:4; /**< Reserved */
Fix it by adding __extension__.
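The fix likely resembles the following sketch (header contents abbreviated
for illustration): prefixing the struct with __extension__ silences the
-pedantic warning about the non-standard bit-field base type:

#include <stdint.h>

__extension__
struct gre_hdr_sketch {          /* abbreviated illustration of rte_gre.h */
	uint16_t res2:4;         /**< Reserved */
	uint16_t s:1;            /**< Sequence Number Present bit */
	uint16_t k:1;            /**< Key Present bit */
	uint16_t res1:1;         /**< Reserved */
	uint16_t c:1;            /**< Checksum Present bit */
	uint16_t ver:3;          /**< Version Number */
	uint16_t res3:5;         /**< Reserved */
	uint16_t proto;          /**< Protocol Type */
};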
Fixes: 894f71a3805d ("net: add GRE header structure")
Cc: stable@dpdk.org
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
In __rte_ring_move_prod_head, move the __atomic_load_n up and out of
the do {} while loop: upon failure, old_head is already updated, so
another load is costly and not necessary.
This helps latency a little, by about 1~5%.
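A minimal sketch of the pattern (not the actual ring code): the initial
load is hoisted out of the loop, and on failure
__atomic_compare_exchange_n() writes the observed value back into
old_head, so no explicit reload is needed inside the loop:

#include <stdint.h>

static uint32_t
move_head(uint32_t *head, uint32_t n)
{
	uint32_t old_head, new_head;
	int success;

	old_head = __atomic_load_n(head, __ATOMIC_RELAXED);  /* single load */
	do {
		new_head = old_head + n;
		/* on failure, old_head is updated with the current value */
		success = __atomic_compare_exchange_n(head, &old_head,
				new_head, 0, __ATOMIC_RELAXED,
				__ATOMIC_RELAXED);
	} while (success == 0);

	return old_head;
}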
Test result with the patch (two cores):
SP/SC bulk enq/dequeue (size: 8): 5.64
MP/MC bulk enq/dequeue (size: 8): 9.58
SP/SC bulk enq/dequeue (size: 32): 1.98
MP/MC bulk enq/dequeue (size: 32): 2.30
Fixes: 39368ebfc606 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Jia He <justin.he@arm.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Synchronize the load-acquire of the tail with the store-release
within update_tail: the store-release ensures all the ring operations,
enqueue or dequeue, are seen by observers on the other side as soon
as they see the updated tail. The load-acquire is needed here because a
data dependency is not a reliable way of ordering; the compiler might
break it by saving to temporary values to boost performance.
When computing free_entries and avail_entries, use atomic semantics
to load the heads and tails instead.
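As a sketch of the producer side (simplified, not the actual ring code),
the opposite tail is read with acquire semantics so it pairs with the
consumer's store-release in update_tail:

#include <stdint.h>

static uint32_t
ring_free_entries(const uint32_t *cons_tail, uint32_t prod_head,
		uint32_t capacity)
{
	/* pairs with __atomic_store_n(..., __ATOMIC_RELEASE) in update_tail */
	uint32_t tail = __atomic_load_n(cons_tail, __ATOMIC_ACQUIRE);

	return capacity + tail - prod_head;
}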
The patch was benchmarked with test/ring_perf_autotest and it decreases
the enqueue/dequeue latency by 5% ~ 27.6% with two lcores; the real gains
depend on the number of lcores, the depth of the ring, and SPSC vs MPMC.
For 1 lcore, it also improves a little, about 3 ~ 4%.
It is a big improvement: in the MPMC case, with two lcores and a ring size
of 32, it saves latency up to (3.26-2.36)/3.26 = 27.6%.
This patch is a bug fix, while the improvement is a bonus. In our analysis
the improvement comes from cacheline pre-filling after hoisting the
load-acquire above __atomic_compare_exchange_n.
The test command:
$sudo ./test/test/test -l 16-19,44-47,72-75,100-103 -n 4 --socket-mem=1024 -- -i
Test result with this patch (two cores):
SP/SC bulk enq/dequeue (size: 8): 5.86
MP/MC bulk enq/dequeue (size: 8): 10.15
SP/SC bulk enq/dequeue (size: 32): 1.94
MP/MC bulk enq/dequeue (size: 32): 2.36
For comparison, the test result without this patch:
SP/SC bulk enq/dequeue (size: 8): 6.67
MP/MC bulk enq/dequeue (size: 8): 13.12
SP/SC bulk enq/dequeue (size: 32): 2.04
MP/MC bulk enq/dequeue (size: 32): 3.26
Fixes: 39368ebfc6 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Jia He <justin.he@arm.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Fixed bad logic in rte_comp_op_alloc() when checking the return
value from rte_comp_op_raw_bulk_alloc(). This
could have resulted in a seg-fault in the error case.
Made rte_comp_op_bulk_alloc() code consistent
with rte_comp_op_alloc().
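A hedged sketch of the intended check, assuming the library-internal
rte_comp_op_raw_bulk_alloc() helper reports how many ops were actually
taken from the mempool; only dereference the op when the single requested
allocation really succeeded:

#include <rte_comp.h>
#include <rte_mempool.h>

static struct rte_comp_op *
op_alloc_sketch(struct rte_mempool *mempool)
{
	struct rte_comp_op *op = NULL;

	/* raw_bulk_alloc is internal to the library; assumed to return the
	 * number of ops allocated */
	if (rte_comp_op_raw_bulk_alloc(mempool, &op, 1) != 1)
		return NULL;	/* allocation failed, do not touch op */

	return op;
}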
Fixes: 96086db5a369 ("compressdev: add operation management")
Cc: stable@dpdk.org
Reported-by: Sabyasachi Sengupta <sabyasg@hpe.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
Add a note on usage of the op structure and when it can be
accessed and freed.
Fixes: 63f4bfd5328b ("compressdev: add enqueue/dequeue functions")
Cc: stable@dpdk.org
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Shally Verma <shally.verma@caviumnetworks.com>
During memory initialization, calling rte_mem_check_dma_mask
leads to a deadlock: memory_hotplug_lock is already held by a
writer, the code currently executing, while rte_memseg_walk
tries to take it as a reader.
This patch adds a thread_unsafe version which calls the final
function specifying that memory_hotplug_lock does not need to be
acquired. The patch also turns rte_mem_check_dma_mask into an
intermediate step which calls the final function as before,
implying memory_hotplug_lock will be acquired.
PMDs should always use the version acquiring the lock; the
thread_unsafe one is just for internal EAL memory code.
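A sketch of the resulting split; check_dma_mask() below is a stand-in for
the internal helper that takes the thread-unsafe flag:

#include <stdbool.h>
#include <stdint.h>

/* stand-in stub: the real helper walks the memsegs and validates IOVAs,
 * taking memory_hotplug_lock only when thread_unsafe is false */
static int
check_dma_mask(uint8_t maskbits, bool thread_unsafe)
{
	(void)maskbits;
	(void)thread_unsafe;
	return 0;
}

int
rte_mem_check_dma_mask(uint8_t maskbits)
{
	/* external callers (PMDs): the lock is taken while walking */
	return check_dma_mask(maskbits, false);
}

int
rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits)
{
	/* internal EAL init code: the caller already holds the lock */
	return check_dma_mask(maskbits, true);
}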
Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
If a device reports addressing limitations through a DMA mask,
the IOVAs for mapped memory need to be checked to ensure
correct functionality.
Previous patches introduced this DMA check for the main memory code
currently being used, but other options like legacy memory and the
no-hugepages option also need to be considered.
This patch adds the DMA check for those cases.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
If the DMA mask check shows mapped memory outside the supported range
specified by the DMA mask, nothing can be done but return and report
the error. This can imply the app not being executed at
all, or precluding dynamic memory allocation once the app is running.
In any case, we can advise the user to force IOVA as PA if IOVA mode
is currently VA and the user is root.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds the possibility of setting a DMA mask to be used
once the memory initialization is done.
This is currently needed when the IOVA mode is set by PCI-related
code and an x86 IOMMU hardware unit is present. The current code calls
rte_mem_check_dma_mask, but it is wrong to do so at that point
because the memory has not been initialized yet.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Current name rte_eal_check_dma_mask does not follow the naming
used in the rest of the file.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
The parameter needs to be the mask bits, not the mask.
Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
build error:
In function ‘eal_plugin_add’,
.../lib/librte_eal/common/eal_common_options.c:225:2:
error: ‘strncpy’ output may be truncated copying 4095 bytes from a
string of length 4095 [-Werror=stringop-truncation]
strncpy(solib->name, path, PATH_MAX-1);
strncpy may produce a string that is not null-terminated;
replace it with strlcpy.
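A minimal sketch of the change, with a stand-in struct; strlcpy() copies
at most the buffer size minus one characters and always null-terminates:

#include <limits.h>
#include <rte_string_fns.h>	/* strlcpy() */

struct shared_driver_sketch { char name[PATH_MAX]; };	/* stand-in */

static void
plugin_set_name(struct shared_driver_sketch *solib, const char *path)
{
	/* was: strncpy(solib->name, path, PATH_MAX-1); */
	strlcpy(solib->name, path, sizeof(solib->name));
}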
Fixes: f9a08f650211 ("eal: add support for shared object drivers")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
The errno_autotest test case has been failing since
commit 5d7b673d5fd6 ("mk: build with _GNU_SOURCE defined by default")
RTE>>errno_autotest
rte_strerror: 'Unknown error 11',
strerror: 'Resource temporarily unavailable'
Test Failed
There are two different versions of strerror_r(), selected based on
the _GNU_SOURCE definition.
/* XSI-compliant */
int strerror_r(int errnum, char *buf, size_t buflen);
/* GNU-specific */
char *strerror_r(int errnum, char *buf, size_t buflen);
Since the GNU-specific version returns char *, the existing "if"
condition around the strerror_r call fails.
Switch back to the XSI-compliant version to allow:
a) Portable strerror_r() usage, as the musl C library uses the
non-GNU-specific version
https://git.musl-libc.org/cgit/musl/tree/src/string/strerror_r.c
b) Per the strerror_r(3) man page, the GNU-specific version
need not use char *buf to fill the error message; instead it
may return an immutable static string from the library.
note from strerror_r(3) man page:
The GNU-specific strerror_r() returns a pointer to a string containing
the error message. This may be either a pointer to a string that the
function stores in buf, or a pointer to some (immutable)
static string (in which case buf is unused).
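A sketch of the XSI-compliant usage assumed after this change (i.e. with
the GNU-specific prototype not in effect for this translation unit): the
int return value can be tested directly and the message always lands in
buf:

#include <stdio.h>
#include <string.h>

static const char *
safe_strerror(int errnum, char *buf, size_t buflen)
{
	/* XSI-compliant strerror_r(): returns 0 on success */
	if (strerror_r(errnum, buf, buflen) != 0)
		snprintf(buf, buflen, "Unknown error %d", errnum);

	return buf;
}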
Fixes: 5d7b673d5fd6 ("mk: build with _GNU_SOURCE defined by default")
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
If a device is unplugged while an interrupt is pending, the
read call to the uio device to remove it from the poll wait list
can fail resulting in it being continually polled forever. This
change checks for the read failing and if so, unregisters the device
as an interrupt source and causes the wait list to be rebuilt.
This race has been reported and observed in production.
Fixes: 0a45657a6794 ("pci: rework interrupt handling")
Cc: stable@dpdk.org
Signed-off-by: Brian Russell <brussell@brocade.com>
Signed-off-by: Luca Boccassi <bluca@debian.org>
rte_mp_request_sync() says that the caller is responsible
for freeing one of its parameters afterwards. EAL didn't
do that, causing a memory leak.
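A sketch of the documented contract (caller code is illustrative): on
success the reply message array is allocated by the IPC layer and must be
freed by the caller:

#include <stdlib.h>
#include <time.h>
#include <rte_eal.h>

static int
request_sketch(struct rte_mp_msg *req, const struct timespec *ts)
{
	struct rte_mp_reply reply = {0};
	int ret = rte_mp_request_sync(req, &reply, ts);

	/* ... inspect reply.nb_received / reply.msgs[] here ... */

	free(reply.msgs);	/* the step EAL itself was missing */
	return ret;
}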
Fixes: 244d5130719c ("eal: enable hotplug on multi-process")
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Some global variables can be eliminated; since they are not part of
the public interface, they are free to be removed.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Some global variables can indeed be static; add the static keyword to them.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
So far each process in MP used to have a separate container
and relied on the primary process to register all memsegs.
Mapping external memory via rte_vfio_container_dma_map()
in secondary processes was broken, because the default
(process-local) container had no groups bound. There was
even no way to bind any groups to it, because the container
fd was deeply encapsulated within EAL.
This patch introduces a new SOCKET_REQ_DEFAULT_CONTAINER
message type for MP synchronization, makes all processes
within a multi-process setup use a single default container, and hence
fixes rte_vfio_container_dma_map() for secondary processes.
From what I checked, this behavior was always the same, but it
started to be invalid/insufficient once mapping external
memory was allowed.
While here, fix up the comment on rte_vfio_get_container_fd().
This function always opens a new container, never reuses
an old one.
Fixes: 73a639085938 ("vfio: allow to map other memory regions")
Cc: stable@dpdk.org
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
We were reading some memory just after freeing it.
Fixes: 83a73c5fef66 ("vfio: use generic multi-process channel")
Cc: stable@dpdk.org
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Always attempt to find an already opened fd for an iommu
group, as subsequent attempts to open it will fail.
There's no public API to check if a group was already
bound and has a container, so rte_vfio_container_group_bind()
shouldn't fail in such a case.
Fixes: ea2dc1066870 ("vfio: add multi container support")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
This patch uses EAL option "--iova-mode" to force the IOVA mode to a
particular value. There exist virtual devices that are not directly
attached to the PCI bus, and therefore the auto detection of the IOVA
mode based on probing the PCI bus and IOMMU configuration may not
report the required addressing mode. Using the EAL option permits the
mode to be explicitly configured in this scenario.
Signed-off-by: Eric Zhang <eric.zhang@windriver.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Marko Kovacevic <marko.kovacevic@intel.com>
For the case where the user does not want to use the bus IOVA scheme
and wants to override it,
add the EAL option --iova-mode=<string>, where the valid input
string is 'pa' or 'va'.
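For example (the application and core list below are illustrative):
$ ./testpmd -l 0-3 -n 4 --iova-mode=va -- -i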
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Eric Zhang <eric.zhang@windriver.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Commit 73a639085938 ("vfio: allow to map other memory regions")
introduced a bug in sPAPR IOMMU mapping. The commit removed necessary
ioctl with VFIO_IOMMU_SPAPR_REGISTER_MEMORY. Also, vfio_spapr_map_walk
should call vfio_spapr_dma_do_map instead of vfio_spapr_dma_mem_map.
Fixes: 73a639085938 ("vfio: allow to map other memory regions")
Cc: stable@dpdk.org
Signed-off-by: Takeshi Yoshimura <tyos@jp.ibm.com>
The Linux kernel uses a really high address as the starting address
for serving mmap calls. If addressing limitations exist and the
IOVA mode is VA, this starting address is likely too high for
those devices. However, it is possible to use a lower address in
the process virtual address space, as with 64 bits there is a lot
of available space.
This patch adds an address hint as the starting address for 64-bit
systems and increments the hint for subsequent invocations. If the mmap
call does not use the hint address, the call is repeated using
the hint address incremented by the page size.
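A minimal sketch of the retry logic (names and the retry bound are
illustrative, not the actual EAL code):

#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

static void *
map_with_hint(void *hint, size_t size)
{
	size_t page_sz = (size_t)sysconf(_SC_PAGESIZE);
	int retries = 64;	/* arbitrary bound for the sketch */

	while (retries-- > 0) {
		void *va = mmap(hint, size, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (va == MAP_FAILED)
			return NULL;
		if (va == hint)
			return va;	/* kernel honoured the hint */
		/* hint ignored: drop this mapping, try the next page up */
		munmap(va, size);
		hint = (char *)hint + page_sz;
	}
	return NULL;
}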
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
A device can suffer from addressing limitations. This function checks
that memsegs have IOVAs within the supported range based on the DMA mask.
PMDs should use this function during initialization if the device
suffers from addressing limitations, returning an error if the function
reports memsegs out of range.
Another usage is for emulated IOMMU hardware with addressing
limitations.
It is necessary to save the most restrictive DMA mask for checking
memory allocated dynamically after initialization.
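A sketch of the intended PMD usage, assuming a device limited to 40-bit
addressing (the mask width and function name are illustrative):

#include <rte_memory.h>

#define SKETCH_DMA_MASK_BITS 40	/* illustrative device limitation */

static int
pmd_check_dma_sketch(void)
{
	/* returns 0 when all memseg IOVAs fit within the mask */
	if (rte_mem_check_dma_mask(SKETCH_DMA_MASK_BITS) != 0)
		return -1;	/* some memory is out of the device's reach */

	return 0;
}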
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
RTE_MEMZONE_SIZE_HINT_ONLY wasn't checked in any way,
causing size hints to be parsed as hard requirements.
This resulted in some allocations failing prematurely.
Fixes: 68b6092bd3c7 ("malloc: allow reserving biggest element")
Cc: stable@dpdk.org
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
This patch fixes the memory leak of logid.
We found this leak with the ASAN test in SPDK when
integrating DPDK.
Fixes: d8a2bc71dfc2 ("log: remove app path from syslog id")
Cc: stable@dpdk.org
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
In struct ip_frag_key, the type of src_dst[] is uint64_t, but
"val", which stores the computed result, is uint32_t, so the
high 32 bits of the key may be lost. Also, the function's return
type is int, but it never returns a value < 0.
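A hedged sketch of the comparison with the fix applied (struct fields
simplified): the XOR of the 64-bit key words is accumulated in a uint64_t
so the high 32 bits are kept, and the result is collapsed to 0/1 for the
int return:

#include <stdint.h>

struct frag_key_sketch {		/* simplified stand-in */
	uint64_t src_dst[4];
	uint32_t id_key_len;
	uint32_t key_len;
};

static inline int
frag_key_cmp(const struct frag_key_sketch *k1,
		const struct frag_key_sketch *k2)
{
	uint32_t i;
	uint64_t val = k1->id_key_len ^ k2->id_key_len;

	for (i = 0; i != k1->key_len; i++)
		val |= k1->src_dst[i] ^ k2->src_dst[i];

	return val != 0;
}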
Signed-off-by: Li Han <han.li1@zte.com.cn>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
As part of the effort of consolidating the DPDK installation bits and
pieces across distros, set the default directory of lib/ where PMDs get
installed to dpdk/pmds-XX.YY. It's necessary to have a versioned
subdirectory as multiple ABI revisions might be installed at the same
time, so having a fixed name will cause trouble with the autoload
feature.
Small refactor with parsing and saving the major version to a variable,
since it's now used in 3 different places.
Signed-off-by: Luca Boccassi <bluca@debian.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Timothy Redaelli <tredaelli@redhat.com>
This patch adds telemetry as a dependency to all applications. Without these
changes, the --telemetry flag will not be recognised and applications will
fail to run if they want to enable telemetry.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch adds functionality to enable/disable the selftest.
This functionality will be extended in the future to make the
enabling/disabling more dynamic and remove this 'hardcoded' approach. We
are temporarily using this approach due to the design changes (vdev vs eal)
made to the library.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Archbold <brian.archbold@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch adds functionality to create a JSON message in
order to send it to a client socket.
When stats are requested by a client, they are retrieved from
the metrics library and encoded in JSON format.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Archbold <brian.archbold@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch adds functionality to update the statistics in
the metrics library with values from the ethdev stats.
Values need to be updated before they are encoded into a JSON
message and sent to the client that requested them. The JSON encoding
will be added in a subsequent patch.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Archbold <brian.archbold@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch adds the parser file. This is used to parse any
messages that are received on any of the client sockets.
Currently, the unregister functionality works using the parser.
Functionality relating to getting statistic values for certain ports
will be added in a subsequent patch; however, the parsing involved
for that command is added in this patch.
Some of the parser code included is in preparation for future
functionality, that is not implemented yet in this patchset.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Archbold <brian.archbold@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch introduces clients to the telemetry API.
When a client makes a connection through the initial telemetry
socket, they can send a message through the socket to be
parsed. Register messages are expected through this socket, to
enable clients to register and have a client socket setup for
future communications.
A TAILQ is used to store all clients' information. Using this, the
client sockets are polled for messages, which will later be parsed
and dealt with accordingly.
Functionality that makes use of the client sockets is also introduced
in this patch, such as writing to client sockets and sending
error responses.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Archbold <brian.archbold@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch adds the telemetry UNIX socket. It is used to
allow connections from external clients.
On the initial connection from a client, ethdev stats are
registered in the metrics library, to allow for their retrieval
at a later stage.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Archbold <brian.archbold@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch adds the infrastructure and initial code for the telemetry
library.
The telemetry init is registered with eal_init(). We can then check to see
if --telemetry was passed as an eal option. If --telemetry was parsed, then
we call telemetry init at the end of eal init.
Control threads are used to get CPU cycles for telemetry, which are
configured in this patch also.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Brian Archbold <brian.archbold@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch makes the eal_get_runtime_dir() API public so it can be used
from outside EAL.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds infrastructure to EAL that allows an application to
register its init function with EAL. This allows libraries to be
initialized at the end of EAL init.
This infrastructure allows libraries that depend on EAL to be initialized
as part of EAL init, removing circular dependency issues.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
The offload name functions are useful, but since they are
marked experimental they cannot be used by upstream projects.
For example, VPP duplicates the same table in its code.
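For example, once stable the helpers can replace such private tables
(the flags shown are illustrative single-bit offloads):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_offload_names(void)
{
	printf("rx: %s\n",
		rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_IPV4_CKSUM));
	printf("tx: %s\n",
		rte_eth_dev_tx_offload_name(DEV_TX_OFFLOAD_TCP_TSO));
}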
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add a new rte_delay_us_sleep() function that uses nanosleep().
This function can be used by applications so they do not have to
implement their own nanosleep()-based callback, and by internal DPDK
code when a delay that does not block the CPU is needed.
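For example, a sketch of both uses:

#include <rte_cycles.h>

static void
delay_example(void)
{
	/* one-off 100 us delay that sleeps instead of busy-polling */
	rte_delay_us_sleep(100);

	/* or install it as the rte_delay_us() callback */
	rte_delay_us_callback_register(rte_delay_us_sleep);
	rte_delay_us(100);	/* now also sleeps */
}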
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The iterator was matching all representors if it was not specified
in the devargs string. It was a wrong default behaviour.
If there is no representor parameter in the devargs, the iterator
should not match any representor port.
The implementation of the default behaviour would be simpler
if a "no match" handler were added to rte_kvargs_process().
As that requires an API breakage, it will be reworked later.
Fixes: a7d3c6271d55 ("ethdev: support representor id as iterator filter")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>