Replace the BSD license header with the SPDX tag for files
with only an Intel copyright on them.
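As a sketch (assuming a BSD-3-Clause file; copyright years vary per
file), the multi-line BSD text:

    /*-
     *   BSD LICENSE
     *
     *   Copyright(c) 2017 Intel Corporation. All rights reserved.
     *   [full license text follows]
     */

becomes the one-line SPDX form:

    /* SPDX-License-Identifier: BSD-3-Clause
     * Copyright(c) 2017 Intel Corporation
     */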
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This commit fixes a possible race condition if an application
uses the service-cores infrastructure and the function to run
a service on an application lcore at the same time.
The fix is to make the num_mapped_cores variable atomic. Concurrent
accesses by multiple threads to a service through
rte_service_run_iter_on_app_lcore() can then detect whether another
core is currently mapped to the service, and the call refuses to run
the service if it is not multi-thread safe.
The rte_service_run_iter_on_app_lcore() function takes two arguments:
the id of the service to run, and whether atomics should be used to
serialize access to multi-thread unsafe services. This allows
applications to choose whether to use the service-cores feature, or
to take responsibility themselves for serializing invocation of a
service. See the doxygen documentation for more details.
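A minimal sketch of both modes, assuming service_id was obtained
earlier (e.g. via rte_service_get_by_name()):

    /* Multiple threads may invoke the service: pass 1 so atomics
     * serialize access to a multi-thread unsafe service. The call
     * returns -EBUSY if another lcore is already serving it. */
    int32_t ret = rte_service_run_iter_on_app_lcore(service_id, 1);

    /* The application itself guarantees single-threaded access:
     * pass 0 to skip the atomic serialization overhead. */
    ret = rte_service_run_iter_on_app_lcore(service_id, 0);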
Two unit tests were added to verify the behaviour of the
function to run a service on an application core, testing both
a multi-thread safe service, and a multi-thread unsafe service.
The doxygen API documentation for the function has been updated
to reflect the current and correct behaviour.
Fixes: e9139a32f6e8 ("service: add function to run on app lcore")
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Update the sample app eventdev_pipeline_sw_pmd to use the service
run-iteration API for event scheduling when the sw eventdev PMD is
in use.
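A minimal sketch of the resulting scheduling loop; the lookup via
rte_event_dev_service_id_get() is an assumption about how the service
id is obtained, and dev_id/done are illustrative:

    uint32_t service_id;

    /* The sw eventdev exposes its scheduler as a service. */
    if (rte_event_dev_service_id_get(dev_id, &service_id) == 0) {
        while (!done) {
            /* One scheduling iteration per loop; serialize
             * access since the service is multi-thread unsafe. */
            rte_service_run_iter_on_app_lcore(service_id, 1);
        }
    }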
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
With the current scheme of event queue configuration, the config
schedule type macros (RTE_EVENT_QUEUE_CFG_*_ONLY) are inconsistent
with the event schedule types (RTE_SCHED_TYPE_*). This requires
unnecessary conversion between the fast-path and slow-path APIs while
scheduling events or configuring event queues.
This patch aims to fix such inconsistency by using event schedule
types (RTE_SCHED_TYPE_*) for event queue configuration.
This patch also fixes examples/eventdev_pipeline_sw_pmd, which does
not convert RTE_EVENT_QUEUE_CFG_*_ONLY to RTE_SCHED_TYPE_*, leading
to improper events being enqueued to the eventdev.
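A minimal sketch of queue setup after this change; dev_id, queue_id
and the counts are illustrative:

    struct rte_event_queue_conf conf = {
        /* Schedule type is now given directly, with no
         * RTE_EVENT_QUEUE_CFG_*_ONLY conversion needed. */
        .schedule_type = RTE_SCHED_TYPE_ATOMIC,
        .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
        .nb_atomic_flows = 1024,
        .nb_atomic_order_sequences = 1024,
    };

    rte_event_queue_setup(dev_id, queue_id, &conf);

    /* The same constant is then used on the fast path, e.g.
     * ev.sched_type = RTE_SCHED_TYPE_ATOMIC. */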
Fixes: adb5d5486c39 ("examples/eventdev_pipeline_sw_pmd: add sample app")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Events sent through single-link queues are naturally in-order and
atomic, requiring neither reordering nor atomic scheduling.
Logically, the nb_atomic_flows and nb_atomic_order_sequences
arguments don't apply to a single-link queue, but applications must
set them (depending on the queue config type) to bypass the
is_valid_{ordered, atomic}_queue_conf() checks in the eventdev layer.
This commit updates those is_valid_* functions to ignore queues with the
SINGLE_LINK flag, to simplify their configuration.
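A sketch of the shape of the change (not the literal diff; the
surrounding validation logic is elided):

    /* In both is_valid_atomic_queue_conf() and
     * is_valid_ordered_queue_conf(): queues flagged as single-link
     * skip the nb_atomic_flows / nb_atomic_order_sequences checks. */
    if (queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_SINGLE_LINK)
        return 1; /* valid: those arguments don't apply */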
Signed-off-by: Gage Eads <gage.eads@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
During the refactoring of the sample app to be more generic, the
option to set -n0 and process a huge number of packets was lost.
This commit re-adds -n0, which causes the app to process INT64_MAX
packets.
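A sketch of the likely handling; num_packets is an illustrative
field name:

    /* -n0 means "run (effectively) forever": substitute the
     * largest representable packet count. */
    if (opts.num_packets == 0)
        opts.num_packets = INT64_MAX;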
Fixes: adb5d5486c39 ("examples/eventdev_pipeline_sw_pmd: add sample app")
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds a sample app for the eventdev library.
The app has been tested with DPDK 17.05-rc2, hence this
release (or later) is recommended.
The sample app showcases a pipeline processing use-case,
with event scheduling and processing defined per stage.
The application receives traffic as normal, with each
packet traversing the pipeline. Once the packet has
been processed by each of the pipeline stages, it is
transmitted again.
The app provides a framework to utilize cores for a single
role or multiple roles. Examples of roles are the RX core,
TX core, Scheduling core (in the case of the event/sw PMD),
and worker cores.
Various flags are available to configure the number of stages, the
cycles of work at each stage, the type of scheduling, the number of
worker cores, queue depths, etc. For a full explanation, please refer
to the documentation. An illustrative invocation is shown below.
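An invocation of roughly this shape (the core masks and values are
illustrative, and the exact flag set should be checked against the
sample app guide):

    ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- \
        -r1 -t1 -e4 -w F0 -s3 -n0 -c32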
Signed-off-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>