We observed an rte panic in mbuf_autotest on our Qualcomm ARM64 server
(Amberwing).
Root cause:
In __rte_ring_move_cons_head()
...
do {
/* Restore n as it may change every loop */
n = max;
*old_head = r->cons.head; //1st load
const uint32_t prod_tail = r->prod.tail; //2nd load
On weakly ordered architectures (PowerPC, ARM), the 2nd load might be
reordered before the 1st load, which makes *entries bigger than intended.
This nasty reordering messes up enqueue/dequeue.
cpu1(producer)          cpu2(consumer)          cpu3(consumer)
                        load r->prod.tail
in enqueue:
load r->cons.tail
load r->prod.head
store r->prod.tail
                                                load r->cons.head
                                                load r->prod.tail
                                                ...
                                                store r->cons.{head,tail}
                        load r->cons.head
Then r->cons.head will be bigger than prod_tail, which makes *entries very
large, and the consumer will go forward incorrectly.
After this patch, the old cons.head will be recalculated after a failure of
rte_atomic32_cmpset().
There is no such issue on x86, because x86 has a strong memory ordering model.
But rte_smp_rmb() has no impact on runtime performance on x86, so the same
code is kept without architecture-specific concerns.
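A hedged sketch of the fix described above: a load/load barrier between the
two loads in __rte_ring_move_cons_head(); the elided computation and the
loop-condition placeholder are abbreviations, not the exact ring code.

do {
	/* Restore n as it may change every loop */
	n = max;
	*old_head = r->cons.head;                 /* 1st load */

	/* load/load barrier: prod.tail must not be read before
	 * cons.head on weakly ordered CPUs; a no-op on x86 */
	rte_smp_rmb();

	const uint32_t prod_tail = r->prod.tail;  /* 2nd load */

	/* ... *entries = prod_tail - *old_head; try cmpset on
	 * r->cons.head and retry on failure ... */
} while (success == 0);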
Fixes: 50d7690548 ("ring: add burst API")
Cc: stable@dpdk.org
Signed-off-by: Jia He <jia.he@hxt-semitech.com>
Signed-off-by: Jie Liu <jie2.liu@hxt-semitech.com>
Signed-off-by: Bing Zhao <bing.zhao@hxt-semitech.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Jianbo Liu <jianbo.liu@arm.com>
If pdump_pktmbuf_copy_data() fails, it's possible to leak a segment, as
rte_pktmbuf_free() only handles the m_dup chain, not the segment that was
just allocated and not yet chained.
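A hedged sketch of the error path being described; m_seg and the surrounding
copy loop are assumptions for illustration, only seg, m_dup and
pdump_pktmbuf_copy_data() come from the text above.

if (pdump_pktmbuf_copy_data(seg, m_seg) < 0) {
	/* seg is not chained to m_dup yet, free it explicitly */
	rte_pktmbuf_free_seg(seg);
	/* then free the segments already chained to m_dup */
	rte_pktmbuf_free(m_dup);
	return NULL;
}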
Fixes: 278f945402 ("pdump: add new library for packet capture")
Signed-off-by: Ilya V. Matveychikov <matvejchikov@gmail.com>
Fixes: c261d1431b ("security: introduce security API and framework")
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
More errors were reported for the device reset in release() [1]:
when the device is passed through to a guest, the host kernel crashes on
guest exit. Remove the reset completely.
This is close to reverting commit b58eedfc7d [2], taking into account the
previous fix to remove the reset in open() as well [3], but not exactly the same.
With the latest code, interrupts are enabled in the uio open() callback and
disabled in the uio release() callback, so when a DPDK application exits,
device interrupts are disabled. Previously, interrupts were only enabled
once at igb_uio module insertion and disabled at module removal.
Also with the latest code the device is set as bus master in open() and bus
mastering is cleared in release(); clearing bus master should prevent further
DMA, which was one of the targets of the initial patch.
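A minimal sketch of the open()/release() behaviour described above, using the
standard kernel pci_set_master()/pci_clear_master() helpers; the structure
layout and callback names are assumptions for illustration.

#include <linux/pci.h>
#include <linux/uio_driver.h>

struct rte_uio_pci_dev {            /* assumed layout for illustration */
	struct uio_info info;
	struct pci_dev *pdev;
};

static int igbuio_pci_open(struct uio_info *info, struct inode *inode)
{
	struct rte_uio_pci_dev *udev = info->priv;

	pci_set_master(udev->pdev);     /* enable DMA (bus mastering) */
	/* ... enable interrupts ... */
	return 0;
}

static int igbuio_pci_release(struct uio_info *info, struct inode *inode)
{
	struct rte_uio_pci_dev *udev = info->priv;

	/* ... disable interrupts ... */
	pci_clear_master(udev->pdev);   /* prevent further DMA */
	return 0;
}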
The initial intention was also to reset the device to be sure it has
been left in a proper state, but currently that part is missing because of
the reported problem(s).
Still, igb_uio should be safer compared to the pre-b58eedfc7d state.
[1]
http://dpdk.org/ml/archives/dev/2017-November/081459.html
[2]
b58eedfc7d ("igb_uio: issue FLR during open and release of device file")
[3]
f73b38e924 ("igb_uio: remove device reset in open")
Fixes: e3a64deae2 ("igb_uio: prevent reset for bnx2x devices")
Fixes: b58eedfc7d ("igb_uio: issue FLR during open and release of device file")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Move the vdev bus from lib/librte_eal to drivers/bus.
As the crypto vdev helper functions refer to data structures
in rte_vdev.h, we move those helper functions into drivers/bus
too.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Remove rte_cryptodev_create_vdev() as it is a duplication.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Qemu versions from v2.7.0 to v2.9.0 have their reply-ack protocol
feature implementation broken with multiqueue. The reply-ack
protocol feature is optional, except with the IOMMU feature.
This patch introduces a new RTE_VHOST_USER_IOMMU_SUPPORT flag to
enable the VIRTIO_F_IOMMU_PLATFORM virtio feature.
By default, IOMMU support is now disabled.
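A hedged usage sketch of the new flag; the socket path is illustrative and
the header name assumes the DPDK 17.11 vhost library.

#include <rte_vhost.h>

/* illustrative: register a vhost-user socket with IOMMU support enabled */
static int
register_vhost_socket_with_iommu(void)
{
	const char *path = "/tmp/vhost-user.sock";   /* illustrative path */

	/* IOMMU support is off by default; opt in explicitly */
	return rte_vhost_driver_register(path, RTE_VHOST_USER_IOMMU_SUPPORT);
}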
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yliu@fridaylinux.org>
Tested-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
If the application has disabled VIRTIO_F_IOMMU_PLATFORM, disable the
VHOST_USER_PROTOCOL_F_REPLY_ACK protocol feature, which is only
mandatory with IOMMU for now.
This is done to provide a way for the application to support
multiqueue with old Qemu versions (v2.7.0 to v2.9.0) that have a
broken reply-ack feature.
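A hedged sketch of how an application might keep the IOMMU feature (and
therefore reply-ack) disabled for such Qemu versions; the path is whatever
was passed to rte_vhost_driver_register(), and VIRTIO_F_IOMMU_PLATFORM is
assumed to come from linux/virtio_config.h on recent kernels.

#include <rte_vhost.h>
#include <linux/virtio_config.h>    /* VIRTIO_F_IOMMU_PLATFORM */

/* illustrative: opt out of VIRTIO_F_IOMMU_PLATFORM for one socket */
static int
disable_iommu_feature(const char *path)
{
	return rte_vhost_driver_disable_features(path,
			1ULL << VIRTIO_F_IOMMU_PLATFORM);
}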
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yliu@fridaylinux.org>
Tested-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
Acked-by: Mark Kavanagh <mark.b.kavanagh@intel.com>
If multiple queue pairs are created but not all of them are used, the
device is never started, as unused queues aren't enabled and
their ring addresses aren't translated. The device is changed
to the running state only when all ring addresses are translated.
This patch fixes this by postponing ring address translation to
kick time unconditionally, whether VHOST_USER_F_PROTOCOL_FEATURES
is negotiated or not.
Reported-by: Lei Yao <lei.a.yao@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Lei Yao <lei.a.yao@intel.com>
Acked-by: Yuanhan Liu <yliu@fridaylinux.org>
Unsuccessful memory allocation for elements inside the cfgfile
structure could result in a resource leak.
Fixed by verifying the pointer after each malloc();
if malloc() fails, the error branch frees the already allocated memory.
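A minimal sketch of the allocation pattern being described, not the actual
cfgfile code; the structure and sizes are illustrative.

#include <stdlib.h>

struct kv {
	char *name;
	char *value;
};

static struct kv *
kv_alloc(void)
{
	struct kv *e = malloc(sizeof(*e));

	if (e == NULL)
		return NULL;
	e->name = malloc(64);
	if (e->name == NULL)
		goto err;               /* nothing else allocated yet */
	e->value = malloc(256);
	if (e->value == NULL)
		goto err_name;          /* free what was already allocated */
	return e;

err_name:
	free(e->name);
err:
	free(e);
	return NULL;
}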
Coverity issue: 195032
Fixes: d4cb819758 ("cfgfile: support runtime modification")
Signed-off-by: Jacek Piasecki <jacekx.piasecki@intel.com>
Acked-by: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
memchr() can return NULL, which would then be assigned to the split[1] pointer.
An additional check and error handling are added after the memchr() call.
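A minimal sketch of the kind of check being described; the real cfgfile
parsing code differs, and the names are illustrative apart from split[1]
and memchr().

#include <string.h>

/* illustrative: split "key=value" at '=' and fail cleanly if it is missing */
static int
split_kv(char *buf, size_t len, char *split[2])
{
	split[0] = buf;
	split[1] = memchr(buf, '=', len);
	if (split[1] == NULL)
		return -1;      /* no '=' found: report the error instead of
				 * dereferencing a NULL pointer later */
	*split[1]++ = '\0';
	return 0;
}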
Coverity issue: 195004
Fixes: a6a47ac9c2 ("cfgfile: rework load function")
Signed-off-by: Jacek Piasecki <jacekx.piasecki@intel.com>
Acked-by: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
This commit fixes a possible race condition if an application
uses the service-cores infrastructure and the function to run
a service on an application lcore at the same time.
The fix is to change the num_mapped_cores variable to an
atomic variable. Concurrent accesses by multiple threads to a
service through rte_service_run_iter_on_app_lcore() then detect
whether another core is currently mapped to the service, and the
call refuses to run the service if it is not multi-thread safe.
The run-iteration-on-app-lcore function takes two arguments: the id
of the service to run, and whether atomics should be used to serialize
access to multi-thread-unsafe services. This allows applications to choose
whether they wish to use the service-cores feature, or to take
responsibility themselves for serializing service invocations.
See the doxygen documentation for more details, and the usage sketch below.
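A hedged usage sketch of the function discussed above; the caller context
and error handling are illustrative.

#include <rte_service.h>

/* illustrative: run one iteration of a service from an application lcore,
 * asking the library to serialize access for multi-thread-unsafe services */
static int
run_service_once(uint32_t service_id)
{
	const uint32_t serialize_mt_unsafe = 1;

	return rte_service_run_iter_on_app_lcore(service_id,
			serialize_mt_unsafe);
}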
Two unit tests were added to verify the behaviour of the
function to run a service on an application core, testing both
a multi-thread safe service, and a multi-thread unsafe service.
The doxygen API documentation for the function has been updated
to reflect the current and correct behaviour.
Fixes: e9139a32f6 ("service: add function to run on app lcore")
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The check for the existence of the default plugin directory calls stat
using an incorrect variable, which will cause a NULL pointer dereference
error.
Coverity issue: 198440
Fixes: d6a4399cdf ("eal: avoid error for non-existent default PMD path")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Aaron Conole <aconole@redhat.com>
For a virtual device, the rte_intr_handle struct is
initialized by the virtual device driver, including
the event fd assignment. If the event fd needs to be
read to clear the event, an argument is required so that
the read uses the proper size.
This patch adds an efd_counter_size field to the rte_intr_handle
struct to tell the Rx interrupt process the read size.
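A minimal sketch of the kind of read the new field enables; the buffer
handling and function name are illustrative.

#include <stdint.h>
#include <unistd.h>

/* illustrative: clear an event fd using the driver-provided counter size */
static void
clear_event_fd(int efd, uint8_t efd_counter_size)
{
	uint64_t buf = 0;    /* big enough for eventfd-style 8-byte counters */

	if (efd_counter_size > 0 && efd_counter_size <= sizeof(buf))
		(void)read(efd, &buf, efd_counter_size);
}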
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Reviewed-by: Jianfeng Tan <jianfeng.tan@intel.com>
Revert the run-time linking support patchset, including the following
3 commits:
Fixes: 84cc318424 ("eal/x86: select optimized memcpy at run-time")
Fixes: c7fbc80fe6 ("test: select memcpy alignment unit at run-time")
Fixes: 5f180ae329 ("efd: move AVX2 lookup in its own compilation unit")
The patchset caused a performance drop in the vhost/virtio loopback
performance test, because run-time dispatch costs at least a function call
compared to compile-time dispatch, and the reference CPU cycle count is
small. In the test, 128-256 byte packets showed a 16%-20% performance drop
with the mergeable path, and 256 byte packets showed a 13% performance drop
with the vector path.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Some devices have problems with the device reset that happens during DPDK
application exit [1].
Create a static list of such devices and exclude them from the device reset.
[1]
http://dpdk.org/ml/archives/dev/2017-November/080927.html
Fixes: b58eedfc7d ("igb_uio: issue FLR during open and release of device file")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
The function pci_get_sysfs_path was moved from the EAL to the PCI driver.
The namespace is now fixed by adding the "rte_" prefix.
The map files are fixed by removing the symbol from the EAL and adding
it to the PCI driver.
This is an API break, but the function is probably not used by applications.
Anyway, this API was already broken by the move to a new header file.
Fixes: c752998b5e ("pci: introduce library and driver")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Fix a kernel crash with KNI, which requires physical addresses.
When IOVA VA mode is used, the memzone and mbuf physical address fields
contain virtual addresses, but KNI relies on these fields to enable
kernel access to the buffers. Those fields holding virtual addresses cause
a crash in the kernel.
This is a workaround until KNI is fixed properly to work with virtual
addresses.
Fixes: 72d013644b ("mem: honor IOVA mode in malloc virt2phy")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Before this commit, the EXPERIMENTAL version of the ABI
derived from the DPDK_17.08 tag, while in parallel there
was a DPDK_17.11 tag.
The experimental map should always derive from the latest ABI,
so this patch moves the 17.11 section above EXPERIMENTAL
and updates EXPERIMENTAL to derive from the 17.11 map.
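A hedged sketch of the resulting chain in the linker version script; the
symbols are omitted and only the inheritance structure is shown.

DPDK_17.08 {
	global:
	/* ... symbols ... */
	local: *;
};

DPDK_17.11 {
	global:
	/* ... symbols added in 17.11 ... */
} DPDK_17.08;

EXPERIMENTAL {
	global:
	/* ... experimental symbols ... */
} DPDK_17.11;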
Fixes: aadc3eb002 ("pci: export match function")
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
The API and ABI of the mempool library have been changed in 17.11.
Fixes: 02604520b2 ("mempool: remove unused flags argument")
Fixes: 0cc0f8aaa3 ("mempool: change flags from int to unsigned int")
Fixes: 6eac187bff ("mempool: add flags arg in xmem size and usage")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Renamed data type from phys_addr_t to rte_iova_t.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Renamed data type from phys_addr_t to rte_iova_t.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The following inline functions and macros have been renamed to be
consistent with the IOVA wording:
rte_mbuf_data_dma_addr -> rte_mbuf_data_iova
rte_mbuf_data_dma_addr_default -> rte_mbuf_data_iova_default
rte_pktmbuf_mtophys -> rte_pktmbuf_iova
rte_pktmbuf_mtophys_offset -> rte_pktmbuf_iova_offset
The deprecated functions and macros are kept to avoid breaking the API.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Rename buf_physaddr to buf_iova.
Keep the deprecated name in an anonymous union to avoid breaking
the API.
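A hedged sketch of the anonymous-union pattern described, not the full
struct rte_mbuf definition.

#include <stdint.h>

typedef uint64_t rte_iova_t;       /* as defined by the DPDK headers */

struct example_mbuf {              /* illustrative, not the real rte_mbuf */
	void *buf_addr;
	union {
		rte_iova_t buf_iova;       /* new name */
		rte_iova_t buf_physaddr;   /* deprecated name, same storage */
	};
	/* ... */
};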
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The functions rte_mempool_populate_phys() and
rte_mempool_populate_phys_tab() are renamed to
rte_mempool_populate_iova() and rte_mempool_populate_iova_tab().
The deprecated functions are kept as aliases to avoid breaking the API.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The function rte_mempool_virt2phy() is renamed to rte_mempool_virt2iova().
The new function has one parameter less, as that parameter was unused.
The deprecated function is kept as an alias to avoid breaking the API.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The struct fields phys_addr_t rte_mempool_objhdr.physaddr and
rte_mempool_memhdr.phys_addr are renamed to rte_iova_t iova.
The deprecated names are kept in an anonymous union to avoid breaking
the API.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The struct rte_memzone field .phys_addr is renamed to .iova.
The deprecated name is kept in an anonymous union to avoid breaking
the API.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
The function rte_malloc_virt2phy() is renamed to rte_malloc_virt2iova().
The deprecated name is kept as an alias to avoid breaking the API.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
The function rte_mem_virt2phy() is kept and used in functions which
work only with physical addresses.
For all other calls, this function is replaced by rte_mem_virt2iova(),
which does a direct mapping (no conversion) in the VA case.
Note: the new rte_mem_virt2iova() function matches the behaviour
implemented in rte_mem_virt2phy() by the commit
680f6c1260 ("mem: honor IOVA mode in virt2phy")
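A hedged sketch of the VA-case behaviour described above; the real
implementation may differ in details.

#include <stdint.h>
#include <rte_eal.h>
#include <rte_memory.h>

/* illustrative: direct mapping in VA mode, physical lookup otherwise */
static rte_iova_t
example_virt2iova(const void *virt)
{
	if (rte_eal_iova_mode() == RTE_IOVA_VA)
		return (uintptr_t)virt;     /* no conversion needed */
	return rte_mem_virt2phy(virt);
}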
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Rename the rte_memseg field .phys_addr to .iova.
Keep the deprecated name in an anonymous union to avoid breaking
the API.
Use rte_iova_t and RTE_BAD_IOVA where appropriate in
memory segment handling.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
IO virtual addresses may be used instead of physical addresses.
As IOVA is more generic, it should be used in most places instead
of physical address wording.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
If the IOVA mode is not using physical addresses,
no need to log an error about physical address issue.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
The function rte_mem_phy2mch() was removed along with the Xen dom0 support.
Fixes: a7cb2e20d2 ("mem: remove API to get physical address in dom0")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
The memzone header is often included without good reason.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The file rte_config.h is generated and automatically included
with the -include option.
The explicit includes in drivers and libraries are useless.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
It is easier to find all constructor functions when they use
the same macros RTE_INIT or RTE_INIT_PRIO.
The macro definitions are moved from rte_eal.h to rte_common.h.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Some symbols were introduced with the wrong prefix.
Add the usual "rte_" prefix when needed.
Fixes: c752998b5e ("pci: introduce library and driver")
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Exposed VFIO functions simply use a "vfio" prefix.
Use the proper "rte_vfio" prefix for those symbols.
Fixes: 279b581c89 ("vfio: expose functions")
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Revert back to using VFIO_PRESENT as a marker to enable compilation
of VFIO-related segments.
VFIO_PRESENT is the combination of the RTE_EAL_VFIO user configuration and
a kernel version support check.
The VFIO_PRESENT-related check in eal_vfio.h is ordered to be compatible
with the one in rte_vfio.h; no functional modification.
Fixes: 279b581c89 ("vfio: expose functions")
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Bruce Richardson <bruce.richardson@intel.com>
The compilation with gcc-6.3.0 and EXTRA_CFLAGS=-Og gives the following
error:
CC rte_lpm6.o
rte_lpm6.c: In function ‘rte_lpm6_add_v1705’:
rte_lpm6.c:442:11: error: ‘tbl_next’ may be used uninitialized in
this function [-Werror=maybe-uninitialized]
if (!tbl[tbl_index].valid) {
^
rte_lpm6.c:521:29: note: ‘tbl_next’ was declared here
struct rte_lpm6_tbl_entry *tbl_next;
^~~~~~~~
This is a false positive from gcc. Fix it by initializing tbl_next
to NULL.
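A minimal sketch of the fix described, with the name taken from the compiler
output above.

	/* initialize to NULL to silence the -Wmaybe-uninitialized warning */
	struct rte_lpm6_tbl_entry *tbl_next = NULL;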
Fixes: 5c510e13a9 ("lpm: add IPv6 support")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Remove the eventdev schedule API and make the sw driver use the service
cores feature for event scheduling.
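A hedged sketch of how an application might drive the sw driver's scheduling
via a service core after this change; the device id, lcore id and error
handling are illustrative.

#include <rte_eventdev.h>
#include <rte_service.h>

/* illustrative: map the eventdev scheduling service to a service lcore */
static int
setup_sched_service(uint8_t dev_id, uint32_t service_lcore)
{
	uint32_t service_id;
	int ret;

	ret = rte_event_dev_service_id_get(dev_id, &service_id);
	if (ret)
		return ret;    /* this device does not need a service core */

	rte_service_lcore_add(service_lcore);
	rte_service_map_lcore_set(service_id, service_lcore, 1);
	rte_service_runstate_set(service_id, 1);
	return rte_service_lcore_start(service_lcore);
}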
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>