Defining the value 0 as the default value for the dequeue timeout helps
the application reduce configuration setup when it is interested only in
the default timeout value.
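A minimal configuration sketch under this scheme (all field values other
than dequeue_timeout_ns are illustrative, not part of this patch):

	#include <string.h>
	#include <rte_eventdev.h>

	static int
	configure_with_default_timeout(uint8_t dev_id)
	{
		struct rte_event_dev_config cfg;

		memset(&cfg, 0, sizeof(cfg));
		/* 0 requests the device default dequeue timeout; no need to
		 * consult min/max_dequeue_timeout_ns from dev_info first. */
		cfg.dequeue_timeout_ns = 0;
		cfg.nb_events_limit = 4096;
		cfg.nb_event_queues = 1;
		cfg.nb_event_ports = 1;
		cfg.nb_event_queue_flows = 1024;
		cfg.nb_event_port_dequeue_depth = 16;
		cfg.nb_event_port_enqueue_depth = 16;

		return rte_event_dev_configure(dev_id, &cfg);
	}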
removed "min_dequeue_limit" negative testcase as
min_dequeue_limit value could be zero(which is
default timeout now) if driver has
dev_info->min_dequeue_timeout_ns = 1.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Introduce a burst mode capability flag to express that the event device
is capable of operating in burst mode for enqueue (forward, release) and
dequeue operations. If the device is not capable, the application still
uses rte_event_dequeue_burst() and rte_event_enqueue_burst(), but the
PMD accepts only one event at a time, which is in any case transparent
with the current rte_event_*_burst API semantics.
It serves two purposes:
1) Fix a performance regression on PMDs which support only non-burst
mode; this issue is two-fold.
Typically the burst_worker main loop consists of the following pseudo code:
	while (1) {
		uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id, ev, ..);

		for (i = 0; i < nb_rx; i++) {
			process(ev[i]);
			if (is_release_required(ev[i]))
				release_the_event(ev[i]);
		}

		uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
							 ev, nb_rx);
		while (nb_tx < nb_rx)
			nb_tx += rte_event_enqueue_burst(dev_id, port_id,
						ev + nb_tx, nb_rx - nb_tx);
	}
Typically the non_burst_worker main loop consists of the following pseudo code:
	while (1) {
		uint16_t nb_rx = rte_event_dequeue_burst(dev, port, &ev, 1, ..);

		if (!nb_rx)
			continue;

		process(ev);
		while (rte_event_enqueue_burst(dev, port, &ev, 1) != 1)
			;
	}
The following overhead has been seen on non-burst mode capable PMDs when
running the burst mode worker:
- An extra explicit release (a non-burst PMD releases the event
implicitly on the next dequeue), which adds the cost of an additional
driver function call.
- An extra "for" loop for event processing, whose overhead the compiler
cannot optimize away because nb_rx is only known at runtime.
2) Simplify the application configuration by not requiring the
application to find the correct enqueue and dequeue depth across
different PMDs.
If burst mode is not supported, the PMD can ignore the depth fields.
This enables writing portable applications and makes the RFC
eventdev_pipeline application work on the OCTEONTX PMD
http://dpdk.org/dev/patchwork/patch/23799/
If an application wishes to get the maximum performance on a non-burst
capable PMD, it can keep the packet processing functions as inline
functions and launch the workers based on the capability, as sketched
below.
The generic burst based worker still works on those PMDs without any
code change, but this scheme is needed only when the application wants
to get the maximum performance out of non-burst capable PMDs.
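A minimal sketch of such capability-based worker selection (the launch
helper is hypothetical; burst_worker() and non_burst_worker() stand for
the loops shown above, kept as separate lcore functions calling the
inline process()):

	#include <rte_eventdev.h>
	#include <rte_launch.h>

	static int
	launch_worker(uint8_t dev_id, unsigned int lcore_id)
	{
		struct rte_event_dev_info info;

		rte_event_dev_info_get(dev_id, &info);

		/* Pick the worker variant that matches the device capability;
		 * both variants inline process(), so no performance is lost. */
		if (info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE)
			return rte_eal_remote_launch(burst_worker, NULL, lcore_id);
		return rte_eal_remote_launch(non_burst_worker, NULL, lcore_id);
	}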
This patch is based on the real-world test cases in
http://dpdk.org/dev/patchwork/patch/24832/, where without this scheme a
20.9% performance drop was observed per core.
See the worker_wrapper(), perf_queue_worker() and perf_queue_worker_burst()
functions for how to use this scheme in a portable way without losing
performance on either set of PMDs:
http://dpdk.org/dev/patchwork/patch/24832/
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Made the libeventdev library independent of the VDEV bus by moving the
vdev PMD specific functions to the rte_eventdev_pmd_vdev.h header file.
An eventdev VDEV PMD can include that header to enable the generic
eventdev VDEV init and uninit functions.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Made the libeventdev library independent of the PCI bus by moving the
PCI PMD specific functions to the rte_eventdev_pmd_pci.h header file.
An eventdev PCI PMD can include that header to enable the generic
eventdev PCI probe and remove functions.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Remove rte_event_dev_close() from the rte_event_pmd_release() function
so that rte_event_pmd_release() can be used in a stateless way. This
enables the rte_event_pmd_vdev_uninit() function to avoid using the
eventdev_globals global variable and removes the need for exposing a
global variable to PMDs.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Remove the PCI dependency from the generic data structures and move the
PCI specific code to rte_event_pmd_pci*.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
If the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set,
the device is centralized and thus needs a dedicated scheduling thread
that repeatedly calls rte_event_schedule().
Update the worker thread code snippet to match
the description.
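A minimal sketch of such a dedicated scheduling thread (the stop flag
and launch mechanics are illustrative, not part of this patch):

	#include <stdbool.h>
	#include <rte_eventdev.h>

	static volatile bool done;	/* set by the application to stop */

	/* Run on one dedicated lcore when RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED
	 * is not set; the worker lcores then only enqueue and dequeue. */
	static int
	schedule_loop(void *arg)
	{
		uint8_t dev_id = *(uint8_t *)arg;

		while (!done)
			rte_event_schedule(dev_id);
		return 0;
	}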
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The nb_atomic_flows and nb_atomic_order_sequences fields are only inspected
if the queue is configured for atomic or ordered scheduling, respectively.
This commit updates the documentation to reflect that.
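A hedged illustration of that behavior (the helper and the field values
are illustrative; queue config flag names per this API version):

	static int
	setup_atomic_queue(uint8_t dev_id, uint8_t queue_id)
	{
		struct rte_event_queue_conf qconf = {
			/* Inspected because the queue is atomic: */
			.nb_atomic_flows = 1024,
			/* Only read for ordered queues, so ignored here: */
			.nb_atomic_order_sequences = 0,
			.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
		};

		return rte_event_queue_setup(dev_id, queue_id, &qconf);
	}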
Signed-off-by: Gage Eads <gage.eads@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The PCI code will move to the bus drivers directory.
Rename functions from rte_eal_pci_ to rte_pci_
to prepare the move of the driver out of EAL.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Exported headers must allow compilation with the strictest flags. This
commit addresses the following errors:
In file included from build/include/rte_eventdev_pmd.h:55:0,
from /tmp/check-includes.sh.25816.c:1:
build/include/rte_eventdev.h:908:8: error: struct has no named members
[-Werror=pedantic]
[...]
In file included from /tmp/check-includes.sh.25816.c:1:0:
build/include/rte_eventdev_pmd.h:65:35: error: ISO C does not permit named
variadic macros [-Werror=variadic-macros]
[...]
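Illustrative of the kind of change such errors require (not the exact
diff): use the ISO C variadic macro form and mark unnamed union/struct
members with the standard C11 extension keyword. The struct name below
is hypothetical.

	#include <rte_common.h>

	/* ISO C form; a named 'args...' parameter list is a GNU extension
	 * rejected under -pedantic: */
	#define RTE_PMD_DEBUG_TRACE(...) do { } while (0)

	/* Unnamed members need RTE_STD_C11 so -pedantic builds accept them: */
	struct example_event {
		RTE_STD_C11
		union {
			uint64_t event;
			uint64_t u64;
		};
	};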
Fixes: 71f2384328 ("eventdev: introduce event driven programming model")
Fixes: 4f0804bbdf ("eventdev: implement the northbound APIs")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The RTE_FUNC_*_RET() and RTE_PROC_*_RET() macro definitions in rte_dev.h
require RTE_PMD_DEBUG_TRACE(). This macro is defined as needed by users of
rte_dev.h since its value depends on their own debug settings.
It may be defined multiple times as a result when including files from
various components simultaneously. Worse, these redefinitions may be
inconsistent. This causes the following compilation errors:
In file included from /tmp/check-includes.sh.13890.c:27:0:
build/include/rte_eventdev_pmd.h:58:0: error: "RTE_PMD_DEBUG_TRACE"
redefined [-Werror]
[...]
In file included from build/include/rte_ethdev_pci.h:39:0,
from /tmp/check-includes.sh.13890.c:13:
build/include/rte_ethdev.h:1042:0: note: this is the location of the
previous definition
[...]
In file included from /tmp/check-includes.sh.13890.c:83:0:
build/include/rte_cryptodev_pmd.h:65:0: error: "RTE_PMD_DEBUG_TRACE"
redefined [-Werror]
[...]
In file included from /tmp/check-includes.sh.13890.c:27:0:
build/include/rte_eventdev_pmd.h:58:0: note: this is the location of
the previous definition
[...]
This commit moves the RTE_PMD_DEBUG_TRACE() definition to rte_dev.h where
it is enabled consistently depending on global configuration settings and
removes redundant definitions.
Also, when disabled, RTE_PMD_DEBUG_TRACE() is now defined as (void)0 to
avoid empty-statement warnings if used outside { } blocks.
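A sketch of the resulting pattern in rte_dev.h (the configuration guard
names and the helper shown are assumptions):

	#if defined(RTE_LIBRTE_ETHDEV_DEBUG) || \
	    defined(RTE_LIBRTE_CRYPTODEV_DEBUG) || \
	    defined(RTE_LIBRTE_EVENTDEV_DEBUG)
	#define RTE_PMD_DEBUG_TRACE(...) \
		rte_pmd_debug_trace(__func__, __VA_ARGS__)
	#else
	/* (void)0 keeps 'RTE_PMD_DEBUG_TRACE(...);' a valid statement even
	 * outside { } blocks when tracing is disabled. */
	#define RTE_PMD_DEBUG_TRACE(...) (void)0
	#endif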
Fixes: b974e4a40c ("ethdev: make error checking macros public")
Cc: stable@dpdk.org
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
build error:
.../lib/librte_eventdev/rte_eventdev.c:371:6:
error: logical not is only applied to the left hand side of this
bitwise operator [-Werror,-Wlogical-not-parentheses]
if (!dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
^
Added parentheses after the '!' to evaluate the bitwise operator first.
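The check before and after the fix:

	/* Before: '!' is applied only to dev_conf->event_dev_cfg */
	if (!dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)

	/* After: negate the result of the bitwise AND */
	if (!(dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT))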
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
This commit documents two error return values for the
rte_event_dev_start() function.
-ESTALE indicates that not all ports are configured.
-ENOLINK indicates that not all queues are linked to ports. If an
application enqueues to such a queue, it can lead to deadlock.
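A minimal sketch of acting on these return values at startup (error
handling via rte_exit() is illustrative):

	int ret = rte_event_dev_start(dev_id);

	if (ret == -ESTALE)
		rte_exit(EXIT_FAILURE, "not all ports are configured\n");
	else if (ret == -ENOLINK)
		rte_exit(EXIT_FAILURE, "not all queues are linked to ports\n");
	else if (ret < 0)
		rte_exit(EXIT_FAILURE, "failed to start eventdev %d\n", dev_id);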
Suggested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds rte_errno return values to rte_event_enqueue_burst() and
rte_event_dequeue_burst().
These return values allow user software to differentiate between an
invalid argument (such as an invalid queue_id or sched_type in an
enqueued event) and backpressure from the event device.
The port and device ID checks are placed in RTE_LIBRTE_EVENTDEV_DEBUG
header guards to avoid the performance hit in non-debug execution.
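A hedged usage sketch (the specific rte_errno values, -EINVAL for a bad
argument and -ENOSPC for backpressure, are assumptions here; dev_id,
port_id, ev and nb_rx are as in the worker loops above):

	#include <rte_errno.h>

	uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);

	if (nb_tx < nb_rx) {
		if (rte_errno == -EINVAL) {
			/* Invalid event at ev[nb_tx]: fix or drop it. */
		} else {
			/* Backpressure (-ENOSPC): retry the remainder. */
		}
	}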
Signed-off-by: Gage Eads <gage.eads@intel.com>
Add APIs for extended stats so that eventdev implementations can report
information on their internal state. The APIs are based on, but not
identical to, the equivalent ethdev functions.
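A rough usage sketch of the device-level xstats calls (signatures and
the count-query convention are assumptions based on the eventdev xstats
API; error handling trimmed):

	#include <stdio.h>
	#include <stdlib.h>
	#include <inttypes.h>
	#include <rte_eventdev.h>

	static void
	dump_dev_xstats(uint8_t dev_id)
	{
		/* First call with NULL arrays to learn how many stats exist. */
		int n = rte_event_dev_xstats_names_get(dev_id,
				RTE_EVENT_DEV_XSTATS_DEVICE, 0, NULL, NULL, 0);
		if (n <= 0)
			return;

		struct rte_event_dev_xstats_name *names = calloc(n, sizeof(*names));
		unsigned int *ids = calloc(n, sizeof(*ids));
		uint64_t *values = calloc(n, sizeof(*values));

		rte_event_dev_xstats_names_get(dev_id,
				RTE_EVENT_DEV_XSTATS_DEVICE, 0, names, ids, n);
		rte_event_dev_xstats_get(dev_id,
				RTE_EVENT_DEV_XSTATS_DEVICE, 0, ids, values, n);

		for (int i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n", names[i].name, values[i]);

		free(names);
		free(ids);
		free(values);
	}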
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
PMDs that only do a specific type of scheduling cannot provide
CFG_ALL_TYPES, so the Eventdev infrastructure should not demand
that every PMD supports CFG_ALL_TYPES.
By not overriding the default configuration of the queue as
suggested by the PMD, the eventdev_common unit tests can pass
on all PMDs, regardless of their capabilities.
RTE_EVENT_QUEUE_CFG_DEFAULT is no longer used by the eventdev layer, so
it can be removed now. Applications should use CFG_ALL_TYPES if they
require enqueue of all event types to a queue, or specify which type of
queue they require.
The CFG_DEFAULT value is changed to CFG_ALL_TYPES in event/skeleton,
to not break the compile.
A capability flag is added that indicates if the underlying PMD
supports creating queues of ALL_TYPES.
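A hedged sketch of choosing the queue type from the new capability flag
(the helper and the fallback queue type are illustrative):

	static int
	setup_queue(uint8_t dev_id, uint8_t queue_id)
	{
		struct rte_event_dev_info info;
		struct rte_event_queue_conf qconf;

		rte_event_dev_info_get(dev_id, &info);
		rte_event_queue_default_conf_get(dev_id, queue_id, &qconf);

		/* Request an all-types queue only when the PMD can create one;
		 * otherwise fall back to a specific queue type. */
		if (info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
			qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
		else
			qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;

		return rte_event_queue_setup(dev_id, queue_id, &qconf);
	}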
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
An eventdev driver may return an error on dequeue timeout tick
conversion. Change the PMD callback interface to address this.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch initializes the links_map array entries to
EVENT_QUEUE_SERVICE_PRIORITY_INVALID, as expected by
rte_event_port_links_get(). This is necessary for the sw eventdev PMD,
which does not initialize links_map when rte_event_port_setup() calls
rte_event_port_unlink().
Fixes: 4f0804bbdf ("eventdev: implement the northbound APIs")
Signed-off-by: Gage Eads <gage.eads@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
rte_device is a generic device which is available to the applications
and EAL. This patch replaces rte_pci_device in 'struct rte_eventdev'
and in 'struct rte_event_dev_info' with common rte_device.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Improve the documentation of the return values of the
rte_event_dequeue_timeout_ticks() function, adding a
-ENOTSUP value for eventdevs that do not support waiting.
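A minimal usage sketch handling the documented return values (dev_id,
port_id, ev, nb_rx and BATCH_SIZE as in the worker loops above; the
requested nanosecond value is illustrative):

	uint64_t ticks = 0;
	int ret = rte_event_dequeue_timeout_ticks(dev_id, 100000 /* ns */, &ticks);

	if (ret == -ENOTSUP)
		ticks = 0;	/* device cannot wait; dequeue without blocking */
	else if (ret < 0)
		rte_exit(EXIT_FAILURE, "timeout conversion failed\n");

	nb_rx = rte_event_dequeue_burst(dev_id, port_id, ev, BATCH_SIZE, ticks);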
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Large port enqueue sizes were not supported because the value was stored
in a uint8_t. Using uint8_t to save space in config APIs makes no sense;
increase the three instances of uint8_t enqueue/dequeue depths to more
appropriate types (based on the context around them).
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
This commit clarifies the usage of nb_links and nb_unlinks when passing
a NULL pointer as the queues argument.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Updated the comments on 'nb_events_limit' of 'struct rte_event_dev_config'
and 'new_event_threshold' of 'struct rte_event_port_conf'.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
On port_setup, the links_map is updated only for the configured number
of event queues.
Limit the port_links_get scan to the configured number of event queues.
Also, limit the port link and unlink queue validation to the configured
number of event queues.
Fixes: 4f0804bbdf ("eventdev: implement the northbound APIs")
Reported-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
Added eventdev vdev uninit support to release the resources
allocated in eventdev vdev init.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
- Removed uninitialized max_devs value
- Corrected dev assignment
Fixes: 4f0804bbdf ("eventdev: implement the northbound APIs")
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Since eventdev uses event structures rather than working directly on
mbufs, there is no actual dependency on the mbuf library. The inclusion
of an mbuf pointer element inside the event itself does not require the
inclusion of the mbuf header file. Similarly, the PCI header is not
needed, but following their removal, rte_memory.h is needed for the
definition of the __rte_cache_aligned macro.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Added a pointer to the rte_eventdev type in the event port link and
unlink callbacks. This device pointer will be used by some of the event
drivers to fetch queue-related information.
Also, update the skeleton eventdev driver with corresponding changes.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
This patch adds infrastructure for registering the vdev or
the PCI based event device.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch implements the northbound eventdev API interface using the
southbound driver interface.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
In a polling model, lcores poll ethdev ports and associated Rx queues
directly to look for packets. In an event driven model, by contrast,
lcores call the scheduler, which selects packets for them based on
programmer-specified criteria. The eventdev library adds support for an
event driven programming model, which offers applications automatic
multicore scaling, dynamic load balancing, pipelining, packet ingress
order maintenance and synchronization services to simplify application
packet processing.
By introducing an event driven programming model, DPDK can support both
polling and event driven programming models for packet processing, and
applications are free to choose whichever model (or combination of the
two) best suits their needs.
This patch adds the eventdev specification header file.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>