Add IOVA versions of dirty page logging functions.
Note that the externally facing API, rte_vhost_log_write, is not modified,
so make it explicit that it expects the address in GPA space.
Fixes: 69c90e98f483 ("vhost: enable IOMMU support")
Cc: stable@dpdk.org
Signed-off-by: Adrian Moreno <amorenoz@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
When IOMMU is enabled, the incoming log address is in IOVA space. In that
case, look it up in the IOTLB table and translate the resulting HVA to a GPA.
If IOMMU is not enabled, the incoming log address is already a GPA, so no
translation is needed.
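A minimal sketch of that translation path, assuming the vhost library
internals (vhost_iova_to_vva() is the existing IOTLB lookup; hva_to_gpa()
stands in for the HVA-to-GPA walk over the guest memory regions):

    static uint64_t
    log_addr_to_gpa(struct virtio_net *dev, struct vhost_virtqueue *vq,
                    uint64_t log_addr)
    {
            uint64_t hva, len = sizeof(uint64_t);

            if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
                    return log_addr; /* no IOMMU: already a GPA */

            /* IOVA -> HVA through the IOTLB ... */
            hva = vhost_iova_to_vva(dev, vq, log_addr, &len, VHOST_ACCESS_RW);
            if (hva == 0)
                    return 0;

            /* ... then HVA -> GPA through the guest memory regions */
            return hva_to_gpa(dev, hva, len);
    }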
Fixes: 69c90e98f483 ("vhost: enable IOMMU support")
Cc: stable@dpdk.org
Signed-off-by: Adrian Moreno <amorenoz@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In case VIRTIO_F_ORDER_PLATFORM (bit 36) is not negotiated, the frontend
and backend are assumed to be implemented in software, that is, they can
run on identical CPUs in an SMP configuration.
Thus a weak form of memory barriers like rte_smp_r/wmb, rather than
rte_cio_r/wmb, is sufficient for this case (vq->hw->weak_barriers == 1)
and yields better performance.
For the above case, this patch yields even better performance by
replacing the two-way barriers with C11 one-way barriers for the used
flags in the packed ring.
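Roughly, the change amounts to the following pattern (descriptor field
names are illustrative, not the PMD code verbatim):

    /* before: two-way barrier followed by a plain store of the flags */
    rte_smp_wmb();
    vq->desc_packed[head].flags = flags;

    /* after: a one-way store-release publishes the flags, ordering only
     * the writes that precede it */
    __atomic_store_n(&vq->desc_packed[head].flags, flags, __ATOMIC_RELEASE);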
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
In case VIRTIO_F_ORDER_PLATFORM (bit 36) is not negotiated, the frontend
and backend are assumed to be implemented in software, that is, they can
run on identical CPUs in an SMP configuration.
Thus a weak form of memory barriers like rte_smp_r/wmb, rather than
rte_cio_r/wmb, is sufficient for this case (vq->hw->weak_barriers == 1)
and yields better performance.
For the above case, this patch yields even better performance by
replacing the two-way barriers with C11 one-way barriers for the avail
flags in the packed ring.
Meanwhile, a read barrier is required to ensure ordering between the
descriptor's flags and content reads [1]. With C11, a load-acquire can
enforce that ordering instead of an rmb barrier.
[1] https://patchwork.dpdk.org/patch/49109/
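A sketch of the read side described above (field names are illustrative):

    /* before: plain load of the flags, then a read barrier so the
     * descriptor contents are not read ahead of the flags */
    flags = desc->flags;
    rte_smp_rmb();

    /* after: a load-acquire of the flags gives the same ordering
     * guarantee without the two-way barrier */
    flags = __atomic_load_n(&desc->flags, __ATOMIC_ACQUIRE);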
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Some PMDs have more than one Rx/Tx burst path. Add an ethdev API that
allows an application to retrieve information about the Rx/Tx packet
burst mode, such as Scalar or Vector, and the Vector technology used,
like AVX2.
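A minimal usage sketch, assuming port 0 / queue 0 are already configured
(how the returned struct rte_eth_burst_mode is decoded is left to the
final API):

    struct rte_eth_burst_mode mode;

    if (rte_eth_rx_burst_mode_get(0, 0, &mode) == 0) {
            /* mode now describes the Rx burst path, e.g. scalar or
             * vector (AVX2); rte_eth_tx_burst_mode_get() does the
             * same for the Tx path */
    }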
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Use the correct flag for indicating the QinQ strip Rx offload.
Fixes: dfebfc9882fb ("ethdev: support dynamic configuration of QinQ strip")
Cc: stable@dpdk.org
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add support to enable the GTPU eth flow type for RSS hash
index calculation.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds definitions of the maximal data lengths in module EEPROM;
the values are compatible with include/uapi/linux/ethtool.h.
These definitions can be used by an application to validate data length.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add new rte_flow_item_ah in order to match the Authentication Header
based on RFC 2402.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add new rte_flow_item_igmp in order to match the Internet Group
Management Protocol based on RFC 2236.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add new rte_flow_item_nsh in order to match the network service header
based on RFC 8300.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
We are using '--base-virtaddr' in a few places. We have a define for that,
so use it instead.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
According to our docs, only Linuxapp supports the base-virtaddr option.
That is, strictly speaking, not true because most of the things
that are attempting to respect base-virtaddr are in common files,
so FreeBSD already *mostly* supports this option in practice.
This commit fixes the remaining bits to explicitly support
base-virtaddr option, and moves the arg parsing from EAL to common
options parsing code. Documentation is also updated to reflect
that all platforms now support base-virtaddr.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Distributor and worker threads rely on data structs in a cache line
for synchronization. The shared data structs were not protected.
This caused a deadlock issue on weaker memory ordering platforms such
as aarch64.
Fix this issue by adding memory barriers to ensure synchronization
among cores.
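A sketch of the pattern used for the fix; the handshake field, flag and
helper names here are illustrative, not the distributor internals:

    /* worker: publish the returned mbufs, then set the flag with a
     * store-release so the distributor cannot see the flag before
     * the data it guards */
    __atomic_store_n(&buf->retptr64[0], flags | RETURN_BUF, __ATOMIC_RELEASE);

    /* distributor: load-acquire the flag before touching the data */
    if (__atomic_load_n(&buf->retptr64[0], __ATOMIC_ACQUIRE) & RETURN_BUF)
            handle_returns(d, wkr);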
Bugzilla ID: 342
Fixes: 775003ad2f96 ("distributor: add new burst-capable library")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: David Hunt <david.hunt@intel.com>
Replace the rte_ipsec_sad_add(), rte_ipsec_sad_del() and
rte_ipsec_sad_lookup() stubs with an actual implementation.
It uses three librte_hash tables, each of which contains
entries for a specific SA type (addressed by SPI only,
by SPI+DIP, or by SPI+DIP+SIP).
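One way to realize the longest-match rule over those three tables
(struct and table names are illustrative; rte_hash_lookup_data() is the
real librte_hash call):

    struct key3 { uint32_t spi, dip, sip; };
    struct key2 { uint32_t spi, dip; };
    struct sad_tables { struct rte_hash *by_spi, *by_spi_dip, *by_spi_dip_sip; };

    static void *
    sad_lookup_one(struct sad_tables *t, uint32_t spi, uint32_t dip, uint32_t sip)
    {
            struct key3 k3 = { spi, dip, sip };
            struct key2 k2 = { spi, dip };
            void *sa;

            /* most specific rule first: SPI + DIP + SIP ... */
            if (rte_hash_lookup_data(t->by_spi_dip_sip, &k3, &sa) >= 0)
                    return sa;
            /* ... then SPI + DIP ... */
            if (rte_hash_lookup_data(t->by_spi_dip, &k2, &sa) >= 0)
                    return sa;
            /* ... finally SPI only */
            if (rte_hash_lookup_data(t->by_spi, &spi, &sa) >= 0)
                    return sa;
            return NULL;
    }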
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Replace rte_ipsec_sad_create(), rte_ipsec_sad_destroy() and
rte_ipsec_sad_find_existing() API stubs with actual
implementation.
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
According to RFC 4301 IPSec implementation needs an inbound SA database
(SAD).
For each incoming inbound IPSec-protected packet (ESP or AH) it has to
perform a lookup within its SAD.
Lookup should be performed by:
Security Parameters Index (SPI) + destination IP (DIP) + source IP (SIP)
or SPI + DIP
or SPI only
and an implementation has to return the 'longest' existing match.
This patch extends the DPDK IPsec library with an inbound security
association database (SAD) API implementation that:
- conforms to the RFC requirements above
- can scale up to millions of entries
- supports fast lookups
- supports incremental updates
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Each cryptodev is indexed with dev_id in the global rte_crypto_devices
variable. nb_devs is incremented / decremented each time a cryptodev is
created / deleted. The goal of nb_devs was to prevent the user from
getting an invalid dev_id.
Let's imagine DPDK has configured N cryptodevs. If cryptodev=1 is
removed at runtime, the last cryptodev N becomes inaccessible, because
nb_devs=N-1 with the current implementation.
In order to prevent this kind of behavior, let's remove the check against
nb_devs and iterate over all the rte_crypto_devices elements: if data is
not NULL, a valid cryptodev is available.
Also, remove the max_devs field and use RTE_CRYPTO_MAX_DEVS in order to
unify the code.
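A sketch of the resulting validity check (the real helper in the library
may be named differently):

    /* a dev_id is valid iff its slot in the fixed-size global array is
     * populated, regardless of how many other devices exist */
    static uint8_t
    crypto_dev_id_is_valid(uint8_t dev_id)
    {
            return dev_id < RTE_CRYPTO_MAX_DEVS &&
                   rte_crypto_devices[dev_id].data != NULL;
    }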
Fixes: d11b0f30df88 ("cryptodev: introduce API and framework for crypto devices")
Cc: stable@dpdk.org
Signed-off-by: Julien Meunier <julien.meunier@nokia.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This commit adds an asymmetric session-less option to
rte_crypto_asym_op. A feature flag for session-less is added
to rte_cryptodev.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
RTE_EAL_ALLOW_INV_SOCKET_ID had been introduced and documented as used
with xen dom0 support (dropped for some time now).
Closely looking at this, the code was changed later and ensures that the
socket id is in the [0..RTE_MAX_NUMA_NODES] range anyway.
Let's drop this dead code and the build option with it.
Fixes: 94ef2964148a ("eal/linux: fix numa node detection")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
An ifdef present in eal_memory.c references "RTE_ARCH_PPC64" when
it should actually use "RTE_ARCH_PPC_64". Simple testing revealed
that both the PPC_64 and non-PPC_64 versions of the code involved
work, but the PPC_64 version of the code is retained to be
consistent with other instances in the same file where mmapped
memory is accessed in reverse order on Power platforms.
Fixes: 66cc45e293ed ("mem: replace memseg with memseg lists")
Cc: stable@dpdk.org
Signed-off-by: David Christensen <drc@linux.vnet.ibm.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add a function for freeing a bulk of mbufs.
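A minimal usage sketch, assuming a typical Tx path where unsent packets
must be dropped (BURST_SIZE and port_id are placeholders):

    struct rte_mbuf *pkts[BURST_SIZE];
    uint16_t nb_tx;

    /* ... pkts[] filled with BURST_SIZE packets ... */
    nb_tx = rte_eth_tx_burst(port_id, 0, pkts, BURST_SIZE);

    /* free everything the NIC did not accept in a single call */
    if (nb_tx < BURST_SIZE)
            rte_pktmbuf_free_bulk(&pkts[nb_tx], BURST_SIZE - nb_tx);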
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
DPDK currently compiles with implicit-fallthrough=2 warning level. With gcc
-Wextra flag, the default level is 3, so some minor changes are needed to
support this in DPDK.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
If we want to add support for turning off components because of missing
dependencies, then we need to check for those dependencies before we
make a determination as to whether a component should be built or not,
assuming that the component says it should be built.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
To help developers to get the correct dependency name e.g. when creating a
new example that depends on a specific component, print out the dependency
name for each lib/driver as it is processed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This patch introduces a `flag` argument in the Eth Tx adapter enqueue API.
Some drivers may support burst functionality only when all packets
have the same destination device and queue.
The flag `RTE_EVENT_ETH_TX_ADAPTER_ENQUEUE_SAME_DEST` can be used
to indicate this to the underlying driver, so that drivers can utilize
burst functionality appropriately.
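A usage sketch, assuming all events in ev[] really do target the same
ethdev port and Tx queue:

    uint16_t sent;

    sent = rte_event_eth_tx_adapter_enqueue(dev_id, port_id, ev, nb_events,
                    RTE_EVENT_ETH_TX_ADAPTER_ENQUEUE_SAME_DEST);
    /* pass 0 instead when the destinations may differ */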
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
When the writer is checking the quiescent state status, it is not
deleting any entries in the data structure. This means the readers
do not need to update their quiescent state during that period.
Readers update the quiescent state only when there are updates
available from the writer.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
When the rte_rcu_qsbr_check API is called, it is possible to
calculate the least valued token acknowledged by all the readers.
When the API is called next time, the readers' token counters do
not need to be scanned if the value of the token being queried is
less than the last least token acknowledged. This avoids the
cache line bounces between readers and writer.
Fixes: 64994b56cfd7 ("rcu: add RCU library supporting QSBR mechanism")
Cc: stable@dpdk.org
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Bare metal support has been gone for quite some time but we still had
some checks on system includes.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Enable both C11 atomic and non C11 atomic lock-free stack for aarch64.
Introduced a new header to reduce the ifdef clutter across generic and C11
files. The rte_stack_lf_stubs.h contains stub implementations of
__rte_stack_lf_count, __rte_stack_lf_push_elems and
__rte_stack_lf_pop_elems.
Suggested-by: Gage Eads <gage.eads@intel.com>
Suggested-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Tested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch adds the implementation of the 128-bit atomic compare
exchange API on aarch64. The operation can be performed with 64-bit
'ldxp/stxp' instructions. Moreover, on platforms with the LSE atomic
extension, it is implemented with 'casp' instructions for better
performance.
Since the '__ARM_FEATURE_ATOMICS' flag is only supported from GCC 9, this
patch adds a new config flag 'RTE_ARM_FEATURE_ATOMICS' to enable
the 'cas' version with older compilers.
For octeontx2, we make sure that the lse (and other) extensions are
enabled even if the compiler does not know of the octeontx2 target
cpu.
Since the code uses the x0 register directly, and cas_op_name() and
rte_atomic128_cmp_exchange() are inline functions, the x0 register may
be corrupted depending on the parent function's register usage, i.e. the
aarch64 ABI could be broken. Define the CAS operations as rte_noinline
functions to avoid an ABI break [1].
1: https://git.dpdk.org/dpdk/commit/?id=5b40ec6b9662
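A usage sketch of the API (caller-side variable names are placeholders;
it is assumed rte_int128_t exposes its two 64-bit halves as val[0]/val[1]):

    rte_int128_t expected = { .val = { old_lo, old_hi } };
    rte_int128_t desired = { .val = { new_lo, new_hi } };
    int ok;

    /* strong CAS: on success 'shared' holds 'desired', on failure
     * 'expected' is updated with the current value of 'shared' */
    ok = rte_atomic128_cmp_exchange(&shared, &expected, &desired,
                    0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);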
Suggested-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Tested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This ensures secondary processes never have to calculate the TSC rate
themselves, which can be noticeable in VMs that don't have access to
arch-specific detection mechanisms (such as CPUID leaf 0x15 or MSR 0xCE
on x86).
Since rte_mem_config is now internal to the EAL library, we can add
tsc_hz without ABI breakage concerns.
Reduces rte_eal_init() execution time in a secondary process from 165ms
to 66ms on my test system.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Thread unregister returns success even though the unregister has not
been performed. This is due to an incorrect thread registration status
check. Fix this issue by correcting the bitmap check.
Fixes: 64994b56cfd7 ("rcu: add RCU library supporting QSBR mechanism")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
For a valid service, the core mask of the service
is checked against the current core and the corresponding
entry in the active_on_lcore array is set or reset.
Up to 8 cores share the same cache line for their
service active_on_lcore array entries since each entry is a uint8_t.
Some number of these entries also share the cache line with
the internal_flags member of struct rte_service_spec_impl,
hence this false sharing also makes the service_valid() check
expensive.
Eliminate false sharing by moving the active_on_lcore array to
a per-core data structure. The array is now indexed by service id.
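A sketch of the new layout (names are illustrative of the approach, not
the exact code):

    /* per-lcore state, cache aligned so no two lcores share a line */
    struct core_state {
            uint8_t service_active_on_lcore[RTE_SERVICE_NUM_MAX];
            /* ... other per-lcore bookkeeping ... */
    } __rte_cache_aligned;

    static struct core_state lcore_states[RTE_MAX_LCORE];

    /* activity checks now index by (lcore, service id) instead of
     * touching rte_service_spec_impl */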
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: Gage Eads <gage.eads@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This code was added 7+ years ago in
commit fb022b85bae4 ("timer: check TSC reliability")
presumably when variant TSCs were still somewhat common.
But this code doesn't do anything except print a warning,
and the warning doesn't give any kind of advice to the user,
so let's just remove it.
While the warning has no functional meaning, the /proc/cpuinfo
parsing consumes a non-trivial amount of time which is especially
noticeable in secondary processes.
On my test system, it consumes 21ms out of the 66ms total execution
time for rte_eal_init() in a secondary process.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The rte_atomic64_exchange operation for ppc_64 incorrectly linked
back to a 32 bit generic operation (__atomic_exchange_4) rather than
the 64 bit generic operation (__atomic_exchange_8). As a result,
applications that used rte_eth_link_get_nowait() would only receive
the link speed; they would not receive the link state, link duplex,
or link autoneg properties.
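What the corrected mapping amounts to (sketch; the actual ppc_64 header
may express it differently):

    static inline uint64_t
    rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
    {
            /* resolves to the 8-byte builtin (__atomic_exchange_8);
             * the bug mapped this to __atomic_exchange_4, truncating
             * the exchanged value */
            return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
    }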
Fixes: ff2863570fcc ("eal: introduce atomic exchange operation")
Cc: stable@dpdk.org
Signed-off-by: David Christensen <drc@linux.vnet.ibm.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This is a commonly used operation that, surprisingly, DPDK has not
supported. The new rte_pktmbuf_copy() does a deep copy of a packet.
This is a complete copy, including meta-data.
It handles the case where the source mbuf comes from a pool
with a larger data area than the destination pool. The routine
also has options for skipping data, or truncating at a fixed
length.
This patch also introduces an internal inline helper to copy the
metadata fields of an mbuf.
Add a test for this new function, based on the clone tests.
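A minimal usage sketch (copy_pool is a placeholder mempool; passing
UINT32_MAX as the length is assumed to request the full packet):

    struct rte_mbuf *dup;

    /* deep copy of the whole packet, metadata included */
    dup = rte_pktmbuf_copy(m, copy_pool, 0, UINT32_MAX);
    if (dup == NULL) {
            /* copy_pool exhausted or segment allocation failed */
    }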
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Cloning mbufs requires allocations and iteration
and therefore should not be an inline function.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The copy part of this function is too big to be inlined.
The only places it is used are special exception paths
where a highly fragmented mbuf arrives at a device that can't handle it.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
No functional change.
Clarify the populate function to make future changes easier to
understand.
Rename the variables:
- to avoid negation in the name
- to have more understandable names
Remove a useless variable (no_pageshift is equivalent to pg_sz == 0).
Remove the duplicate assignment of the "external" variable.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch adds support to allow users to enable/disable allmulticast
mode for a KNI interface.
This requirement comes from Bugzilla 312; more details can be found at:
https://bugs.dpdk.org/show_bug.cgi?id=312
Bugzilla ID: 312
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
EAL should always use rte_log instead of writing errors to
stderr (which may be redirected to /dev/null in a daemon).
Also, checks for NULL before rte_free are unnecessary.
Minor code consistency improvements.
Fixes: 21698354c832 ("service: introduce service cores concept")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Have rte_eal_config_reattach clean up the mapped address, which is a
valid address but not the intended one.
Coverity issue: 343439
Fixes: 4e8854ae89fa ("eal: do not panic on shared memory init")
Fixes: b149a7064261 ("eal/freebsd: add config reattach in secondary process")
Cc: stable@dpdk.org
Signed-off-by: Arnon Warshavsky <arnon@qwilt.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
The code checks both rte_mp_request_sync() return code and that the number
of messages in the reply equals 1. If rte_mp_request_sync() succeeds but
there was more than one message, those messages would get leaked.
Found via code review by Anatoly Burakov of patches that used the vhost
code as a template for using rte_mp_request_sync().
Fixes: 83a73c5fef66 ("vfio: use generic multi-process channel")
Cc: stable@dpdk.org
Reported-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
RTE_BPF_ARG_PTR_STACK is used as internal program
arg type. Rename to RTE_BPF_ARG_RESERVED to
avoid exposing internal program type.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add branch and call operations.
The jump_offset_* APIs are used to find the relative jump offset
w.r.t. the current eBPF program PC.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Implement the XADD eBPF instruction using the arm64 STADD instruction.
If the given platform does not have atomics support,
use an LDXR and STXR pair for the critical section instead of STADD.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>