Some internal toolchain versions generate an unaligned memory access
fault when copying from a 17-31 byte buffer using memcpy.
Subsequent patches in this series will use 17-31 byte mbox messages.
Since the mailbox message copy is on the slow path, change memcpy to a
byte-by-byte copy to work around the issue.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
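As an illustration of the workaround described above, a byte-wise copy
along these lines avoids the wide, potentially unaligned loads and
stores that some toolchains emit for memcpy (a minimal sketch; the
helper name is hypothetical):

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical helper: copy a 17-31 byte mailbox message one byte
     * at a time so the compiler cannot emit wide, unaligned accesses.
     */
    static inline void
    mbox_msg_copy(void *dst, const void *src, size_t len)
    {
            uint8_t *d = dst;
            const uint8_t *s = src;
            size_t i;

            for (i = 0; i < len; i++)
                    d[i] = s[i];
    }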
The naming convention for event drivers is "rte_pmd_<name>_event_version.map".
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Initially, DPAA2 objects (except ETH and CRYPTO) were defined in the VFIO
layer. This patch moves them into the bus definition.
It also realigns the object types with the new device types.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Reviewed-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This commit reworks the API, moving from two separate start
and stop functions to a "runstate" API which allows setting
the runstate. The is_running API is replaced with a function
to query the runstate. The runstate functions take an id value
identifying the service. Unit tests and the eventdev sw pmd are updated.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
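A hedged usage sketch of the runstate API described above, assuming the
calls take the rte_service_runstate_set()/rte_service_runstate_get()
form found in rte_service.h:

    #include <rte_service.h>

    /* Enable a service by id, check it, and later disable it again. */
    static void
    toggle_service(uint32_t service_id)
    {
            /* set runstate to running (1) */
            rte_service_runstate_set(service_id, 1);

            if (rte_service_runstate_get(service_id) == 1) {
                    /* service is runnable; service lcores may execute it */
            }

            /* set runstate to stopped (0) */
            rte_service_runstate_set(service_id, 0);
    }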
This commit reworks the service register function to accept
an extra parameter. The parameter is a uint32_t *, which, when
provided, is set to the integer service_id that represents the
newly registered service.
This is useful for services that wish to validate settings at
a later point in time - they need to know their own service id.
This commit updates the eventdev sw pmd, as well as the unit tests,
to use the new register API.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
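A minimal sketch of registering a service and capturing its id through
the two-argument register call described above. The function and header
names used here (rte_service_component_register() in
rte_service_component.h) are the present-day spellings and are an
assumption for this illustration:

    #include <rte_service.h>
    #include <rte_service_component.h>

    static uint32_t my_service_id;

    /* Service callback: an rte_service_func takes a void * argument
     * and returns int32_t.
     */
    static int32_t
    my_service_run(void *userdata)
    {
            (void)userdata;
            /* do one iteration of work */
            return 0;
    }

    /* Register the service and keep the returned id for later
     * runstate or validation calls.
     */
    static int
    my_service_setup(void)
    {
            struct rte_service_spec spec = {
                    .name = "my_service",
                    .callback = my_service_run,
            };

            return rte_service_component_register(&spec, &my_service_id);
    }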
The burst mode capability flag was introduced in 73e6b8c9 for event drivers.
The DPAA2 event driver supports burst mode, so this patch adds this
capability flag to the DPAA2 event driver.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
This commit shows how easy it is to enable a specific
DPDK component with a service callback, in order to get
CPU cycles for it.
The beauty of this method is that the service is unaware
of how much CPU time it is getting - the application can
decide how to split and slice cores and map them to the
registered services.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
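From the application side, splitting cores and mapping them to
registered services might look like the following sketch. The calls
shown (rte_service_lcore_add(), rte_service_map_lcore_set(),
rte_service_lcore_start()) are the present-day API names and are an
assumption for this illustration:

    #include <rte_service.h>

    /* Dedicate one lcore to service work and map a registered service
     * onto it; the ids are illustrative.
     */
    static int
    run_service_on_lcore(uint32_t service_id, uint32_t lcore_id)
    {
            if (rte_service_lcore_add(lcore_id) < 0)
                    return -1;

            /* enable this service on that lcore */
            if (rte_service_map_lcore_set(service_id, lcore_id, 1) < 0)
                    return -1;

            /* the lcore now loops over its mapped services */
            return rte_service_lcore_start(lcore_id);
    }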
The NXP copyright has been wrongly worded with '(c)' in various places.
This patch removes these extra characters. It also removes
"All rights reserved".
Only the NXP copyright syntax is changed; the Freescale copyright is not
modified.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Replace the incorrect "Cavium Networks" and "Cavium Ltd" company
names with the correct "Cavium, Inc" company name in
copyright headers.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Now that we have a standard event ring implementation for passing events
core-to-core, use that in place of the custom event rings in the software
eventdev.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
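For reference, a hedged sketch of the standard event ring API adopted
above (assuming the rte_event_ring.h create/enqueue/dequeue burst
calls; the ring name, size and flags are illustrative):

    #include <rte_lcore.h>
    #include <rte_ring.h>
    #include <rte_event_ring.h>

    /* Pass a burst of events core-to-core through a standard event ring. */
    static void
    event_ring_example(struct rte_event *evs, unsigned int n)
    {
            struct rte_event_ring *r;
            uint16_t free_space, available;

            r = rte_event_ring_create("worker_ring", 4096, rte_socket_id(),
                            RING_F_SP_ENQ | RING_F_SC_DEQ);
            if (r == NULL)
                    return;

            /* producer side */
            rte_event_ring_enqueue_burst(r, evs, n, &free_space);

            /* consumer side */
            rte_event_ring_dequeue_burst(r, evs, n, &available);
    }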
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Gage Eads <gage.eads@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
OCTEONTX can handle events in an optimized way if the PMD
knows in advance that it is a producer pattern, and it can support
burst mode if all the events have op == RTE_EVENT_OP_NEW.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Gage Eads <gage.eads@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Introduce rte_event_enqueue_forward_burst() to give the PMD an
opportunity to optimize when all the events in the enqueue burst
have the op type RTE_EVENT_OP_FORWARD.
If a PMD does not have any optimization opportunity
for this operation, it can choose the generic enqueue
burst PMD callback as the fallback.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Gage Eads <gage.eads@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Introduce rte_event_enqueue_new_burst() to give the PMD an
opportunity to optimize when all the events in the enqueue burst
have the op type RTE_EVENT_OP_NEW.
If a PMD does not have any optimization opportunity
for this operation, it can choose the generic enqueue
burst PMD callback as the fallback.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Gage Eads <gage.eads@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
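A brief, hedged example of how a producer might pick between the
generic and the specialized enqueue calls introduced above (device and
port ids, and the all_new flag, are illustrative):

    #include <rte_eventdev.h>

    static uint16_t
    producer_enqueue(uint8_t dev_id, uint8_t port_id,
                     struct rte_event ev[], uint16_t nb, int all_new)
    {
            /* All events are RTE_EVENT_OP_NEW: let the PMD skip the
             * per-event op checks. rte_event_enqueue_forward_burst()
             * plays the same role for all-FORWARD bursts.
             */
            if (all_new)
                    return rte_event_enqueue_new_burst(dev_id, port_id,
                                    ev, nb);

            /* Mixed op types: use the generic enqueue. */
            return rte_event_enqueue_burst(dev_id, port_id, ev, nb);
    }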
This patch adds support for interrupt handling on the event port.
These interrupts facilitate managing timeout ticks in the
event dequeue functions.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
This patch adds all the configuration APIs for the DPAA2 eventdev,
including device configure, start, and stop, as well as the port and
queue related APIs.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
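For orientation, the generic eventdev configuration flow these driver
ops plug into looks roughly as follows (a minimal sketch; sizes and
ids are illustrative and error handling is reduced to a return code):

    #include <rte_eventdev.h>

    static int
    evdev_setup(uint8_t dev_id)
    {
            struct rte_event_dev_config cfg = {
                    .nb_event_queues = 1,
                    .nb_event_ports = 1,
                    .nb_events_limit = 4096,
                    .nb_event_queue_flows = 1024,
                    .nb_event_port_dequeue_depth = 16,
                    .nb_event_port_enqueue_depth = 16,
            };

            if (rte_event_dev_configure(dev_id, &cfg) < 0)
                    return -1;

            /* NULL conf selects the driver's default queue/port config */
            if (rte_event_queue_setup(dev_id, 0, NULL) < 0 ||
                rte_event_port_setup(dev_id, 0, NULL) < 0)
                    return -1;

            /* link port 0 to all queues, then start the device */
            if (rte_event_port_link(dev_id, 0, NULL, NULL, 0) < 0)
                    return -1;

            return rte_event_dev_start(dev_id);
    }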
Defining the value 0 as the default value for the dequeue timeout
helps the application reduce its configuration setup if it is
interested only in the default timeout value.
Removed the "min_dequeue_limit" negative test case, as the
min_dequeue_limit value could be zero (which is the
default timeout now) if the driver has
dev_info->min_dequeue_timeout_ns = 1.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Typically, RTE_EVENT_OP_NEW is issued by the producer
lcore. To make the write changes issued by the producer
lcore visible on the worker lcore, an SMP write barrier
is required on producer enqueue. Fix the missing
rte_smp_wmb() on enqueue with RTE_EVENT_OP_NEW.
Fixes: f10d322eff ("event/octeontx: support worker enqueue")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Gage Eads <gage.eads@intel.com>
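Conceptually, the fix amounts to issuing the barrier before the new
event is handed to the hardware, along these lines (a simplified
sketch; the surrounding enqueue logic is driver specific):

    #include <rte_atomic.h>
    #include <rte_eventdev.h>

    /* Before publishing an RTE_EVENT_OP_NEW event, make the producer
     * lcore's prior stores (e.g. to the mbuf/payload) visible to the
     * worker lcore that will receive the event.
     */
    static inline void
    producer_publish(const struct rte_event *ev)
    {
            if (ev->op == RTE_EVENT_OP_NEW)
                    rte_smp_wmb();

            /* ... hand the event to the device here (driver specific) ... */
    }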
A switch tag wait is a costly operation, as it may
translate to an IOB read if the core's swtag cache is not updated.
Do the tag switch wait only when there is a tag request on
the same hardware work slot.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Gage Eads <gage.eads@intel.com>
Made the libeventdev library independent of the VDEV bus by moving the
vdev PMD specific functions to the rte_eventdev_pmd_vdev.h header file.
An eventdev VDEV PMD can include that header to enable the generic
eventdev VDEV init and uninit functions.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Made the libeventdev library independent of the PCI bus by moving the
PCI PMD specific functions to the rte_eventdev_pmd_pci.h header file.
An eventdev PCI PMD can include that header to enable the generic
eventdev PCI probe and remove functions.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Remove the PCI dependency from the generic data structures
and move the PCI specific code to rte_event_pmd_pci*.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
This commit fixes the counting of queues mapped to a port
when the queue type is PARALLEL. Not incrementing
the count here could lead to an underflow of the count when
unlinking at a later date.
Fixes: 371a688fc1 ("event/sw: support linking queues to ports")
Reported-by: Jesse Bruni <jesse.bruni@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Single-link optimized ports previously did not correctly track
credits when events were dequeued and re-enqueued as a FORWARD type.
This could "inflate" the number of credits in the system.
A unit test is added to reproduce and verify the issue, and the
fixed implementation counts FORWARD packets and reduces the
number of credits the port holds if it is of the single-link type.
Fixes: 656af91800 ("event/sw: add worker core functions")
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Gage Eads <gage.eads@intel.com>
This commit adds a new statistic to the SW eventdev PMD.
The statistic shows how many packets were sent from a
queue to a port. This provides information on how traffic
from a specific queue is being load-balanced to worker cores.
Note that these numbers should be compared across all queue
stages - the load-balancing does not try to perfectly share
each queue's traffic; rather, it balances the overall traffic
from all queues across the ports.
The statistic is printed from the rte_eventdev_dump() function,
as well as being made available via the xstats API.
Unit tests have been updated to expect more per-queue statistics,
and the correctness of counts and counts after reset is verified.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
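A hedged sketch of pulling these per-queue statistics through the
xstats API, using the standard rte_event_dev_xstats_* calls (buffer
sizes are illustrative):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_eventdev.h>

    /* Fetch and print the per-queue xstats, which now include the new
     * queue-to-port counts.
     */
    static void
    dump_queue_xstats(uint8_t dev_id, uint8_t queue_id)
    {
            struct rte_event_dev_xstats_name names[64];
            unsigned int ids[64];
            uint64_t values[64];
            int n, i;

            n = rte_event_dev_xstats_names_get(dev_id,
                            RTE_EVENT_DEV_XSTATS_QUEUE, queue_id,
                            names, ids, 64);
            if (n <= 0)
                    return;
            if (n > 64)
                    n = 64;

            n = rte_event_dev_xstats_get(dev_id,
                            RTE_EVENT_DEV_XSTATS_QUEUE, queue_id,
                            ids, values, n);

            for (i = 0; i < n; i++)
                    printf("%s: %" PRIu64 "\n", names[i].name, values[i]);
    }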
Different drivers use internal macros like force_inline for the
compiler's always-inline feature.
Standardize them through the __rte_always_inline macro.
The change was verified by comparing the output binary files;
no difference was found with this change.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
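For example, a driver-local force-inline definition now reduces to the
common macro (illustrative snippet):

    #include <stdint.h>
    #include <rte_common.h>

    /* Instead of a driver-local wrapper such as
     *   #define force_inline inline __attribute__((always_inline))
     * use the common macro from rte_common.h:
     */
    static __rte_always_inline uint32_t
    add_one(uint32_t x)
    {
            return x + 1;
    }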
When taking events from a port, we checked the history list to see if the
event needed to be put back in order, i.e. it originally came from a
reordered queue type. The check for reordering involved checking whether
the reorder buffer entry pointer was null. However, after that pointer was
used, it was never cleared back to null.
This caused problems when we had mixed reordered and atomic or parallel
events, as events from the latter two queue types were misidentified as
needing reordering. This led in some cases to crashes, but mostly led to
dropped events and then an application lock-up.
Fixes: 617995dfc5 ("event/sw: add scheduling logic")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
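In effect, the fix is to null the reorder-buffer pointer once the entry
has been consumed, so later atomic or parallel events reusing the same
history slot are not misidentified (a simplified sketch; the struct and
field names are hypothetical):

    /* Hypothetical history-list entry; names are illustrative only. */
    struct reorder_buffer_entry;

    struct hist_entry {
            struct reorder_buffer_entry *rob_entry;
            /* ... */
    };

    static inline void
    complete_event(struct hist_entry *hist)
    {
            if (hist->rob_entry != NULL) {
                    /* event came from a reordered queue: restore order */
                    /* ... reorder handling ... */

                    /* clear the pointer so a later atomic/parallel event
                     * reusing this slot is not seen as needing reordering
                     */
                    hist->rob_entry = NULL;
            }
    }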
This patch returns a credit when an rte_event is
enqueued with an invalid queue_id. Previously a
credit was leaked from the system.
Note that the eventdev instance does not attempt
to free any resources that the rte_event owns. As
a result, resources owned by the rte_event are leaked.
E.g. if the rte_event represents an rte_mbuf, the mbuf
will not be freed, causing a leak from the mempool.
Fixes: 656af91800 ("event/sw: add worker core functions")
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
The flow id of packets was not being hashed on ingress
to an ordered queue. Fix this by applying the same hashing as is
applied in the atomic queue case. The hashing itself is
broken out into a macro to avoid code duplication.
Fixes: 617995dfc5 ("event/sw: add scheduling logic")
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This is a preparation for embedding the generic rte_device into the
rte_eth_dev for virtual devices as well.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
build error:
.../event/sw/sw_evdev_worker.c: In function ‘sw_event_release’:
.../event/sw/sw_evdev_worker.c:52:3: error: unknown field ‘op’ specified
in initializer
Fixed by updating the struct initialization.
Fixes: 656af91800 ("event/sw: add worker core functions")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
build error:
.../drivers/event/octeontx/ssovf_worker.c(212):
error #592: variable "get_work0" is used before its value is set
RTE_SET_USED(get_work0);
^
.../drivers/event/octeontx/ssovf_worker.c(213):
error #592: variable "get_work1" is used before its value is set
RTE_SET_USED(get_work1);
^
On x86 these variables are set but not used; move the macros below
where the values are assigned.
Fixes: f61808eaa9 ("event/octeontx: add start function")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
If the device is configured with the RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
option, use a different fast path dequeue handler that waits for the
requested number of nanoseconds if an event is not available.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
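A hedged sketch of the application-side usage that exercises this path:
configure the device with RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT, then
pass a timeout at dequeue time (the 10 microsecond wait and ids below
are illustrative):

    #include <rte_eventdev.h>

    /* Convert a per-dequeue wait of 10 microseconds to device ticks
     * and use it on the dequeue call.
     */
    static uint16_t
    dequeue_with_wait(uint8_t dev_id, uint8_t port_id,
                      struct rte_event ev[], uint16_t nb)
    {
            uint64_t ticks = 0;

            rte_event_dequeue_timeout_ticks(dev_id, 10 * 1000, &ticks);

            return rte_event_dequeue_burst(dev_id, port_id, ev, nb, ticks);
    }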