Update the software event device documentation to include the use of
service cores for event distribution.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Remove the eventdev schedule API and require the sw driver to use the
service core feature for event scheduling.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Update the sample app eventdev_pipeline_sw_pmd to use service run iter for
event scheduling in the case of the sw eventdev.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Use the service run iter API for event scheduling instead of calling the
event schedule API directly.
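A minimal sketch of the pattern, with placeholder device and port ids:
when the device lacks RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (as with the sw
PMD), the application runs one iteration of the device's scheduling
service on its own lcore before dequeuing.

#include <rte_eventdev.h>
#include <rte_service.h>

static void
schedule_and_dequeue(uint8_t dev_id, uint8_t port_id)
{
        uint32_t service_id;
        struct rte_event ev;

        /* Only needed for devices without distributed scheduling. */
        if (rte_event_dev_service_id_get(dev_id, &service_id) == 0)
                rte_service_run_iter_on_app_lcore(service_id, 1);

        (void)rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0);
}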
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Use service cores to offload event scheduling in the case of centralized
scheduling, instead of calling the schedule API directly. This removes the
dependency on a dedicated scheduler core specified via the --slcore
command line option.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Extend the service capability of the sw event device by exposing its
service id to the application.
The application can use the service id to configure service cores to run
event scheduling.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
In the case of the sw event device, scheduling can be done on a service
core using the service registered at probe time.
This patch adds a helper function to get the service id, which the
application can use to assign an lcore for the service to run on.
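A hedged usage sketch of the helper, with the service lcore id as an
arbitrary example:

#include <rte_eventdev.h>
#include <rte_service.h>

static int
setup_sched_service(uint8_t dev_id, uint32_t service_lcore)
{
        uint32_t service_id;

        /* Non-zero return means the device needs no scheduling service. */
        if (rte_event_dev_service_id_get(dev_id, &service_id) != 0)
                return 0;

        if (rte_service_lcore_add(service_lcore) != 0)
                return -1;
        if (rte_service_map_lcore_set(service_id, service_lcore, 1) != 0)
                return -1;
        if (rte_service_runstate_set(service_id, 1) != 0)
                return -1;
        return rte_service_lcore_start(service_lcore);
}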
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Update the guide with event queue configuration and event enqueue
operation.
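A short enqueue sketch matching the guide update, assuming a device, port
and queue have already been set up; the ids and flow id are placeholders.

#include <rte_eventdev.h>
#include <rte_mbuf.h>

static void
enqueue_one(uint8_t dev_id, uint8_t port_id, uint8_t queue_id,
            struct rte_mbuf *m)
{
        struct rte_event ev = {
                .queue_id = queue_id,
                .op = RTE_EVENT_OP_NEW,
                .sched_type = RTE_SCHED_TYPE_ATOMIC,
                .event_type = RTE_EVENT_TYPE_CPU,
                .flow_id = 0,
                .mbuf = m,
        };

        /* Retry until the port accepts the event. */
        while (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) != 1)
                ;
}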
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Add a schedule type queue attribute so that it can be queried along with
the queue config structure.
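For example (a sketch assuming the attribute is exposed as
RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE):

#include <rte_eventdev.h>

static int
queue_is_atomic(uint8_t dev_id, uint8_t queue_id)
{
        uint32_t sched_type;

        if (rte_event_queue_attr_get(dev_id, queue_id,
                        RTE_EVENT_QUEUE_ATTR_SCHEDULE_TYPE, &sched_type) != 0)
                return -1;
        return sched_type == RTE_SCHED_TYPE_ATOMIC;
}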
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
With the current scheme of event queue configuration, the cfg schedule
type macros (RTE_EVENT_QUEUE_CFG_*_ONLY) are inconsistent with the
event schedule types (RTE_SCHED_TYPE_*). This requires unnecessary
conversion between the fastpath and slowpath APIs while scheduling
events or configuring event queues.
This patch fixes the inconsistency by using the event schedule
types (RTE_SCHED_TYPE_*) for event queue configuration.
This patch also fixes examples/eventdev_pipeline_sw_pmd, which does not
convert RTE_EVENT_QUEUE_CFG_*_ONLY to RTE_SCHED_TYPE_* and therefore
enqueues improper events to the eventdev.
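A minimal sketch of the new-style configuration, with placeholder ids and
sizes: the schedule type is given directly as an RTE_SCHED_TYPE_* value
instead of a CFG_*_ONLY flag.

#include <rte_eventdev.h>

static int
setup_atomic_queue(uint8_t dev_id, uint8_t queue_id)
{
        struct rte_event_queue_conf conf = {
                .schedule_type = RTE_SCHED_TYPE_ATOMIC,
                .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
                .nb_atomic_flows = 1024,
                .nb_atomic_order_sequences = 1024,
        };

        return rte_event_queue_setup(dev_id, queue_id, &conf);
}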
Fixes: adb5d5486c ("examples/eventdev_pipeline_sw_pmd: add sample app")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Set log_level to RTE_LOG_INFO.
The RTE_LIBRTE_CLASSIFY_DEBUG macro has been removed from the
config file; use log_level instead.
Fixes: be41ac2a33 ("flow_classify: introduce flow classify library")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
The original code used movl instead of xchgl; this caused
rte_atomic64_cmpset to use ebx as the lower dword of the source
to cmpxchg8b, instead of the lower dword of the function argument "src".
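For context, a sketch of the 64-bit compare-and-set semantics involved:
cmpxchg8b expects the comparand in edx:eax and the new value in ecx:ebx,
so a stale ebx supplies the wrong low dword of "src". The equivalent GCC
builtin is shown here instead of the EAL's inline assembly.

#include <stdint.h>

static inline int
cmpset64_semantics(volatile uint64_t *dst, uint64_t exp, uint64_t src)
{
        /* Atomically: if (*dst == exp) { *dst = src; return 1; } return 0; */
        return __sync_bool_compare_and_swap(dst, exp, src);
}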
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Reported-by: Job Abraham <job.abraham@intel.com>
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Tested-by: Job Abraham <job.abraham@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
When checking whether any devices are bound to uio, we did not exclude
those which are blacklisted (or, when a whitelist is specified, those not
whitelisted).
This patch fixes it by checking only whitelisted devices, or only
non-blacklisted devices, depending on the bus scan mode.
Fixes: 815c7deaed ("pci: get IOMMU class on Linux")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Reviewed-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
The PCI lib defines the types and methods needed to use PCI elements.
The PCI bus implements a bus driver for PCI devices by constructing
rte_bus elements using the PCI lib.
Move the relevant code out of the EAL to its expected place.
Libraries, drivers, unit tests and applications are updated to use the
new rte_bus_pci.h header when necessary.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
The current name conflicts with the librte_pci naming convention.
Additionally, it is easier to use gdb when even private functions are
prefixed.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Make the following functions private, as there is no point in using them
outside of the rte_bus framework:
+ rte_pci_detach
+ rte_pci_probe
+ rte_pci_probe_one
+ rte_pci_scan
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Do not expose the minute implementations of PCI parsing.
This leaves only the all-purpose rte_pci_addr_parse, which is simpler to
use.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Add a new single function that is able to parse all currently supported
formats (a usage sketch follows the list):
* Domain-Bus-Device-Function
* Bus-Device-Function
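A brief usage sketch (the address strings are examples only):

#include <rte_pci.h>

static int
parse_examples(void)
{
        struct rte_pci_addr dbdf, bdf;

        /* Domain-Bus-Device-Function */
        if (rte_pci_addr_parse("0000:01:00.0", &dbdf) != 0)
                return -1;
        /* Bus-Device-Function (the domain defaults to 0) */
        return rte_pci_addr_parse("01:00.0", &bdf);
}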
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Using a macro here helps write the code to the detriment of the reader.
This is backward: write once, read many.
The few LOCs gained are not worth the opacity of the implementation.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Parsing operations should not happen in performance-critical sections.
Headers should not provide implementations unless duly required.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
These symbols are only relevant to PCI operations.
Move them to a private PCI-related header, allowing the PCI subsystem's
dependency on the private eal_vfio.h to be removed.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
The following symbols are used by vfio implementations within the PCI bus.
They need to be publicly available for the PCI bus to be outside the
EAL.
+ vfio_enable;
+ vfio_is_enabled;
+ vfio_noiommu_is_enabled;
+ vfio_release_device;
+ vfio_setup_device;
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Some internal configuration elements set by the user on the command line
are needed outside the EAL once the PCI bus is detached from it.
Expose:
+ rte_eal_create_uio_dev
+ rte_eal_has_pci
+ rte_eal_vfio_intr_mode
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
This function was previously private to the EAL layer.
Other subsystems, such as the PCI bus, require it.
In order not to force other components to include stdbool.h, which is
incompatible with several NIC drivers, the return type has been changed
from bool to int.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
The macro RTE_SET_USED is defined in rte_common.h.
This header is included through eal_private.h, which in turn includes
rte_pci.h.
Once the PCI subsystem is out of the EAL, this will break the
compilation (seen on FreeBSD).
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
This header is included through rte_pci.h, which will be removed once
the PCI bus is moved out of the EAL.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Fixes: cbc12b0a96 ("mk: do not generate LDLIBS from directory dependencies")
Fixes: b677d4c6d2 ("net/dpaa2: add API for event Rx adapter")
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Various symbols used by the DPAA crypto driver were not exposed from the
DPAA bus in its initial version. This breaks the shared build.
This patch also adds the LDLIBS line required after the cbc12b0a9 patch.
Fixes: c3e85bdcc6 ("crypto/dpaa_sec: add crypto driver for NXP DPAA platform")
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Build fails when rte_security is disabled; make rte_security mandatory
Fixes: 0a23d4b6f4 ("crypto/dpaa2_sec: support protocol offload IPsec")
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Build fails when rte_security is disabled; make rte_security mandatory
Fixes: ec17993a14 ("examples/ipsec-secgw: support security offload")
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Tested-by: David Marchand <david.marchand@6wind.com>
Some devices may not support, or may fail to set, the VLAN offload
configuration under dynamic circumstances, so the vlan_offload_set_t
vector is modified to return an int so that the caller can determine
success or failure.
rte_eth_dev_set_vlan_offload is updated to return the value provided by
the vector and to restore the original offload configuration on failure.
Existing vlan_offload_set_t vectors are modified to return an int. The
majority of cases return 0, but the few that can actually fail now
return their failure codes.
Finally, a vlan_offload_set_t vector is added to virtio
to facilitate dynamically turning VLAN strip on or off.
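A hedged sketch of how a caller can now react to a failure; the port id
and error handling are illustrative only.

#include <rte_ethdev.h>

static int
enable_vlan_strip(uint16_t port_id)
{
        int mask = rte_eth_dev_get_vlan_offload(port_id);
        int ret;

        if (mask < 0)
                return mask;

        ret = rte_eth_dev_set_vlan_offload(port_id,
                        mask | ETH_VLAN_STRIP_OFFLOAD);
        if (ret != 0) {
                /* The device rejected the change; the original offload
                 * configuration has been restored by the ethdev layer. */
                return ret;
        }
        return 0;
}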
Signed-off-by: David Harton <dharton@cisco.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
The result of slave link properties validation is not used when a new
slave is added.
This patch uses the return value of link_properties_valid() to determine
whether the slave can be used in the bonding. If the function fails, an
error is returned, preventing a slave with invalid link properties from
being added.
Coverity issue: 158661
Fixes: deba8a2f8b ("net/bonding: fix link properties management")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The LIST macros are not safe when LIST_REMOVE() is called inside a
LIST_FOREACH() to remove an entry; this behavior is undefined and causes
some entries to disappear from the list.
Fixes: 6e78005a9b ("net/mlx5: add reference counter on DPDK Tx queues")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
When copying VLAN tags from the RX descriptor to the vlan_tci field
in the mbuf header, igb_rxtx.c:eth_igb_recv_pkts() and
eth_igb_recv_scattered_pkts() both assume that the VLAN tag is always
little endian. While i350, i354 and i350vf non-loopback packet VLAN
tags are stored little endian, VLAN tags in loopback (LB) packets
for those devices are big endian.
For i350, i354 and i350vf VLAN loopback packets, swap the tag when
copying from the RX descriptor to the mbuf header. This will ensure
that the mbuf vlan_tci is always little endian.
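A simplified sketch of the handling described above; the descriptor field
and loopback test are placeholders, not the exact igb_rxtx.c identifiers.

#include <rte_byteorder.h>
#include <rte_mbuf.h>

static inline void
set_vlan_tci(struct rte_mbuf *mb, uint16_t desc_vlan, int is_loopback)
{
        /* Loopback (LB) packets on i350/i354/i350vf carry the tag big
         * endian; all other packets already carry it little endian. */
        mb->vlan_tci = is_loopback ?
                        rte_be_to_cpu_16(desc_vlan) :
                        rte_le_to_cpu_16(desc_vlan);
}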
Signed-off-by: Roger Melton <rmelton@cisco.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Configuring the UAR as IO-mapped makes maximum throughput decline by a
noticeable amount. If the UAR is configured as a write-combining
register, a write memory barrier is needed when ringing a doorbell.
rte_wmb() is mostly effective when the size of a burst is comparatively
small. Revert the register back to write-combining and enforce a write
memory barrier instead, except for vectorized Tx burst routines.
An application can change this by setting MLX5_SHUT_UP_BF as needed.
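A minimal sketch of the ordering requirement, with an illustrative
register pointer and value rather than mlx5 internals: with a
write-combining UAR, all descriptor stores must be flushed before the
doorbell write.

#include <stdint.h>
#include <rte_atomic.h>

static inline void
ring_doorbell(volatile uint64_t *uar_reg, uint64_t doorbell)
{
        /* Make all previously posted descriptor writes visible to the
         * NIC before ringing the doorbell. */
        rte_wmb();
        *uar_reg = doorbell;
}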
Fixes: 9f9bebae55 ("net/mlx5: don't map doorbell register to write combining")
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>