Based on the EAL bus APIs, PCI bus callbacks and support functions are
introduced in this patch.
EAL continues to have direct PCI init/scan calls as well. These will be
removed in subsequent patches to enable bus-only PCI devices.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Reviewed-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
rte_eal_pci_detach calls pci_detach_all_drivers, which loops over all
PCI drivers to detach the device. This is unnecessary, as the device
already holds a reference to its PCI driver, which can be used directly.
Remove pci_detach_all_drivers and restructure rte_eal_pci_detach
and rte_eal_pci_detach_dev to work without looping over the driver list.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Reviewed-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Matching of the PCI device address against the driver ID table is done at
two discrete locations (rte_eal_pci_probe_one_driver and
rte_eal_pci_detach_dev), duplicating the code.
Refactor the match logic into a single function.
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Today eth_dev_attach_secondary is defined as static and can only be
called by PCI drivers. However, the functionality is also required for
non-PCI drivers, so the patch exports the function.
Signed-off-by: Ami Sabo <amis@radware.com>
The error return code for the rte_ring_dequeue() function should be -ENOENT
rather than -ENOBUFS (which is the error value from the enqueue() function).
Fixes: cfa7c9e6fc1f ("ring: make bulk and burst return values consistent")
Reported-by: Zhihong Wang <zhihong.wang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
When a packet carries the PKT_TX_OUTER_IPV6 flag, it also needs to be
treated as a tunnel packet, in order to calculate the correct
checksum value.
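A minimal sketch of the intent (not the driver's actual code); the helper
name is made up for illustration:

    #include <stdbool.h>
    #include <rte_mbuf.h>

    /* A packet must take the tunnel checksum path when either outer
     * flag is present. */
    static bool
    tx_needs_tunnel_ctx(uint64_t ol_flags)
    {
        return (ol_flags & (PKT_TX_OUTER_IP_CKSUM | PKT_TX_OUTER_IPV6)) != 0;
    }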
Fixes: 2b76648872c9 ("net/e1000: add Tx preparation")
Cc: stable@dpdk.org
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Remove the include to "rte_timer.h" which is not needed
by latencystats library (only "rte_cycles.h" is used).
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
If probe of the whitelisted PCI device fails, reset ret to zero
to silently skip non-whitelisted PCI devices.
Fixes: 10f6c93cea38 ("eal: do not panic on PCI failures")
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
It's likely that this function isn't used anywhere, but since it was part
of the public API, mark the function for deprecation for at least one
release.
Signed-off-by: Aaron Conole <aconole@redhat.com>
The function rte_cpu_is_supported() is now part of the public ABI,
so it should be advertised as such.
Fixes: 37e97ad2c56a ("eal: do not panic when CPU is not supported")
Signed-off-by: Aaron Conole <aconole@redhat.com>
This patch changes the prototype of the callback function
(rte_intr_callback_fn) by removing an unnecessary parameter.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Remove the inappropriate modification of the get_max_intr
field, so that the intr_source is kept read-only.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Add note to cryptodev API that chained mbufs
are not supported in DOCSISBPI mode.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
HW-based crypto drivers may support only a limited number of
sessions per queue pair. This requires support for attaching
sessions to a specific queue pair. New APIs are introduced to
attach/detach a session to/from a particular queue pair.
These are optional APIs.
An application can call the attach API after creating a session
and the detach API before deleting a session.
The application needs to check whether max_nb_sessions_per_qp > 0;
if so, it should call the attach API.
max_nb_sessions_per_qp = 0 means an unlimited number of sessions per queue pair.
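A usage sketch under the description above; the attach function name and the
location of max_nb_sessions_per_qp inside rte_cryptodev_info are assumptions
taken from this text, not verified prototypes:

    #include <rte_cryptodev.h>

    /* Hypothetical helper: attach a session only when the device limits
     * sessions per queue pair (max_nb_sessions_per_qp > 0). */
    static int
    bind_session_to_qp(uint8_t dev_id, uint16_t qp_id,
                       struct rte_cryptodev_sym_session *sess)
    {
        struct rte_cryptodev_info info;

        rte_cryptodev_info_get(dev_id, &info);
        if (info.sym.max_nb_sessions_per_qp == 0)
            return 0; /* unlimited sessions per qp, no attach needed */

        /* Assumed attach API introduced by this patch. */
        return rte_cryptodev_queue_pair_attach_sym_session(qp_id, sess);
    }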
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
This patch changes the function prototype of the device configuration API
in rte_cryptodev_ops and updates all cryptodev PMDs for this change.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Extend the DPDK cryptodev API to enable processing of packets according
to the Baseline Privacy Interface Plus (BPI+) Specification described in
the security specification of the Cablelabs Data-over-Cable Service
Interface Specification (DOCSIS).
Brief summary of BPI+ symmetric cryptography requirements:
BPI+ cryptography uses a block cipher (AES-CBC/DES-CBC) to encrypt/decrypt
all whole blocks in the packet. However, the data length is not always
a block multiple, so where there is a final block smaller than the full block
size, this residual block requires special handling using AES-CFB/DES-CFB
mode. Similar special handling is specified where there is only one block,
smaller than the block size of the cipher. See the spec for further details.
https://apps.cablelabs.com/specification/docsis-3-1-security-specification/
Two new elements are added to the enum rte_crypto_cipher_algorithm.
Note that elements of this enum are actually a combination of an algorithm
(AES, 3DES, etc.) and a mode (CBC, CTR, etc.). The new DOCSISBPI mode is used
to convey to the PMD that the mode applied should be the specific combination
of CBC and CFB required by the DOCSIS Baseline Privacy Plus Spec.
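For illustration, a minimal sketch of selecting the new mode through a
symmetric cipher transform; the key length and direction shown are
placeholders:

    #include <rte_crypto_sym.h>

    /* AES-CBC for full blocks with CFB handling of the residual block,
     * selected via the new DOCSISBPI enum element. */
    static struct rte_crypto_sym_xform
    make_docsis_cipher_xform(uint8_t *key, size_t key_len)
    {
        struct rte_crypto_sym_xform xform = {
            .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
            .cipher = {
                .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
                .algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,
                .key = { .data = key, .length = key_len },
            },
        };

        return xform;
    }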
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Add functions to get the cipher/authentication
algorithm enums, given a string. This is useful for applications
which get the required algorithm from the user, to have a common
string-to-enum mapping.
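A short usage sketch, assuming the helpers follow the description above and
the usual algorithm strings ("aes-cbc", "sha1-hmac"):

    #include <stdio.h>
    #include <rte_cryptodev.h>

    static void
    parse_algos(const char *cipher_str, const char *auth_str)
    {
        enum rte_crypto_cipher_algorithm cipher;
        enum rte_crypto_auth_algorithm auth;

        if (rte_cryptodev_get_cipher_algo_enum(&cipher, cipher_str) < 0)
            printf("unknown cipher algorithm: %s\n", cipher_str);
        if (rte_cryptodev_get_auth_algo_enum(&auth, auth_str) < 0)
            printf("unknown auth algorithm: %s\n", auth_str);
    }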
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
DES-CBC and AUTH NULL algorithms were missing in
the array of algorithm strings.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
build error:
include/rte_ring.h:459:22: error: invalid conversion from ‘void*’
to ‘void**’ [-fpermissive]
ENQUEUE_PTRS(r, &r[1], prod_head, obj_table, n, void *);
Implicit casts of void* to void** are considered warnings in some
compilers, e.g. g++ version 5.8. Cast directly to the object type instead.
Fixes: a6619414 ("ring: make struct and macros type agnostic")
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The function rte_eth_find_next is missing in the map file, which causes
errors with shared library builds.
.../test-pmd/testpmd.c:1693: undefined reference to `rte_eth_find_next'
Adding function to map file fixes the issue.
Fixes: 5588909af21b ("ethdev: add device iterator")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
APIs for selecting the architecture-specific implementation and computing
the CRC (16-bit and 32-bit CRCs) are added. For CRC calculation, scalar
as well as x86 intrinsic (SSE4.2) versions are implemented.
The scalar version is based on a generic look-up table (LUT) algorithm,
while the x86 intrinsic version uses carry-less multiplication for
fast CRC computation.
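A minimal sketch of how such an API is typically used; the function and enum
names below reflect my reading of the patch and should be treated as
assumptions:

    #include <rte_net_crc.h>

    static uint32_t
    compute_crc32_eth(const void *data, uint32_t len)
    {
        /* Select the SSE4.2 implementation; the scalar LUT path remains
         * available as a fallback. */
        rte_net_crc_set_alg(RTE_NET_CRC_SSE42);

        return rte_net_crc_calc(data, len, RTE_NET_CRC32_ETH);
    }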
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This iterator helps applications iterate over the device list and skip
holes caused by invalid or detached devices.
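A small usage sketch (the port id type has changed width across releases;
uint8_t matches this era):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    list_valid_ports(void)
    {
        uint8_t port_id;

        /* RTE_ETH_FOREACH_DEV() walks the device list and skips holes. */
        RTE_ETH_FOREACH_DEV(port_id)
            printf("port %u is valid\n", port_id);
    }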
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
The hotplug API introduced multiple states for a device with possible
values defined internally, while the related field in struct rte_eth_dev
was made public.
Exposing those states improves consistency because applications have to
deal with the device list directly.
"DEV_DETACHED" is renamed "RTE_ETH_DEV_UNUSED" to better reflect that
the emptiness of a slot is not necessarily the result of detaching a
device.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
build error:
In file included from .../lib/librte_ring/rte_ring.c(90):
.../lib/librte_ring/rte_ring.h(162):
error #1366: a reduction in alignment without the "packed" attribute
is ignored
} __rte_cache_aligned;
^
The alignment attribute is moved to the first element of the struct.
Fixes: a6619414e0a9 ("ring: make struct and macros type agnostic")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Add a library designed to calculate latency statistics and report them
to the application when queried. The library measures minimum, average and
maximum latencies, and jitter in nanoseconds. The current implementation
supports global latency stats, i.e. per-application stats.
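A minimal bring-up sketch; the init parameters shown (sampling interval,
optional per-flow callback) follow my understanding of the library and are
assumptions:

    #include <rte_latencystats.h>

    static int
    start_latency_stats(void)
    {
        /* Sampling interval of 1, no per-flow callback. */
        return rte_latencystats_init(1, NULL);
    }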
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Signed-off-by: Remy Horton <remy.horton@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch adds a library that calculates peak and average data-rate
statistics for Ethernet devices. These statistics are reported using
the metrics library.
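A usage sketch, with the function names as I recall them for this library
(treat them as assumptions): allocate the shared data, register the metrics,
then periodically recalculate per port.

    #include <rte_bitrate.h>

    static struct rte_stats_bitrates *bitrate_data;

    static void
    bitrate_setup(void)
    {
        bitrate_data = rte_stats_bitrate_create();
        rte_stats_bitrate_reg(bitrate_data);
    }

    static void
    bitrate_poll(uint8_t port_id)
    {
        /* Typically called once per sampling period, e.g. from a timer. */
        rte_stats_bitrate_calc(bitrate_data, port_id);
    }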
Signed-off-by: Remy Horton <remy.horton@intel.com>
This patch adds a new information metrics library. This Metrics
library implements a mechanism by which producers can publish
numeric information for later querying by consumers. Metrics
themselves are statistics that are not generated by PMDs, and
hence are not reported via ethdev extended statistics.
Metric information is populated using a push model, where
producers update the values contained within the metric
library by calling an update function on the relevant metrics.
Consumers receive metric information by querying the central
metric data, which is held in shared memory.
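A producer-side sketch of the push model described above; the metric name is
made up for illustration:

    #include <rte_metrics.h>
    #include <rte_lcore.h>

    static int example_metric; /* key returned by registration */

    static void
    metrics_setup(void)
    {
        rte_metrics_init(rte_socket_id());
        example_metric = rte_metrics_reg_name("example_count");
    }

    static void
    metrics_push(uint64_t value)
    {
        /* RTE_METRICS_GLOBAL publishes a device-independent value. */
        if (example_metric >= 0)
            rte_metrics_update_value(RTE_METRICS_GLOBAL, example_metric, value);
    }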
Signed-off-by: Remy Horton <remy.horton@intel.com>
Deprecate the following functions:
- rte_set_log_level(), replaced by rte_log_set_global_level()
- rte_get_log_level(), replaced by rte_log_get_global_level()
- rte_set_log_type(), replaced by rte_log_set_level()
- rte_get_log_type(), replaced by rte_log_get_level()
The new functions provide better control of the per-type log level,
and have a better name prefix (rte_log_).
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Example of use:
./app/test-pmd --log-level='pmd\.i40e.*,8'
This enables debug logs for all dynamic logs whose type starts with
'pmd.i40e'.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Introduce 2 new functions to support dynamic log types (a usage sketch follows the list):
- rte_log_register(): register a log name, and return a log type id
- rte_log_set_level(): set the log level of a given log type
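A small usage sketch; the "pmd.mydrv" type name is made up for illustration:

    #include <rte_log.h>

    static int mydrv_logtype;

    static void
    mydrv_log_init(void)
    {
        mydrv_logtype = rte_log_register("pmd.mydrv");
        if (mydrv_logtype >= 0)
            rte_log_set_level(mydrv_logtype, RTE_LOG_NOTICE);
    }

    static void
    mydrv_log_example(unsigned int qid)
    {
        rte_log(RTE_LOG_DEBUG, mydrv_logtype, "queue %u configured\n", qid);
    }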
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The reorganization of the mbuf structure induces an ABI breakage.
Bump the library version, and update the documentation accordingly.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Move this field to the second cache line, since no driver uses it
in the Rx path. The freed space will be used by a timestamp in the next
commit.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Change the size of m->port and m->nb_segs to 16 bits. It is now possible
to reference a port identifier larger than 256 and have an mbuf chain
with more than 256 segments.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
To avoid multiple stores on the fast path, Ethernet drivers
aggregate the writes to data_off, refcnt, nb_segs and port
into a single uint64_t value and write it in one shot
through a uint64_t pointer at the &mbuf->rearm_data address.
Some non-IA platforms have store operation overhead
if the store address is not naturally aligned. This patch
fixes the performance issue on those targets.
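A hedged sketch of that single store; how the 64-bit value is composed is
driver and endianness specific and is not shown here:

    #include <rte_mbuf.h>

    static void
    rearm_mbuf(struct rte_mbuf *m, uint64_t rearm_value)
    {
        /* One naturally aligned 64-bit store covering data_off, refcnt,
         * nb_segs and port. */
        *(uint64_t *)&m->rearm_data = rearm_value;
    }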
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Set the value of m->refcnt to 1, m->nb_segs to 1 and m->next
to NULL when the mbuf is stored inside the mempool (unused).
This is done in rte_pktmbuf_prefree_seg(), before freeing or
recycling an mbuf.
Before this patch, the value of m->refcnt was expected to be 0
while in pool.
The objectives are:
- to avoid drivers having to set m->next to NULL in the early Rx path, since
this field is in the second 64B of the mbuf and its access could
trigger a cache miss
- to rationalize the behavior of raw_alloc/raw_free: one is now the
symmetric of the other, and refcnt is never changed in these functions.
To optimize the freeing of the segments, we try to only update
m->refcnt, m->next, and m->nb_segs when it is required (idea from
Konstantin Ananyev <konstantin.ananyev@intel.com>).
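A sketch of the driver-side Tx free path this targets, assuming the common
pattern rather than any particular driver:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static void
    tx_free_segment(struct rte_mbuf *m)
    {
        /* rte_pktmbuf_prefree_seg() returns the mbuf when this caller
         * holds the last reference, with refcnt/nb_segs/next already
         * reset as described above. */
        m = rte_pktmbuf_prefree_seg(m);
        if (m != NULL)
            rte_mempool_put(m->pool, m);
    }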
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Rename __rte_mbuf_raw_free() as rte_mbuf_raw_free() and make
it public. The old function is kept for compatibility but is marked as
deprecated.
The next commit changes the behavior of rte_mbuf_raw_free() to
make it more consistent with rte_mbuf_raw_alloc().
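A minimal sketch pairing the renamed function with rte_mbuf_raw_alloc():

    #include <rte_mbuf.h>

    static void
    raw_alloc_free_example(struct rte_mempool *mp)
    {
        struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);

        if (m != NULL)
            rte_mbuf_raw_free(m); /* put the mbuf back into its mempool */
    }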
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Document the function and make it public, since it is used in several
places in the drivers. The old one is marked as deprecated.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
This commit documents two error return values for the
rte_event_dev_start() function (see the sketch below):
-ESTALE indicates that not all ports are configured.
-ENOLINK indicates that not all queues are linked to ports. If an
application enqueues to such a queue, it can lead to deadlock.
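A minimal sketch of reacting to these codes:

    #include <errno.h>
    #include <stdio.h>
    #include <rte_eventdev.h>

    static int
    start_eventdev(uint8_t dev_id)
    {
        int ret = rte_event_dev_start(dev_id);

        if (ret == -ESTALE)
            printf("not all ports are configured\n");
        else if (ret == -ENOLINK)
            printf("not all queues are linked to ports\n");
        return ret;
    }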
Suggested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds rte_errno return values to rte_event_enqueue_burst() and
rte_event_dequeue_burst().
These return values allow user software to differentiate between an
invalid argument (such as an invalid queue_id or sched_type in an enqueued
event) and backpressure from the event device.
The port and device ID checks are placed in RTE_LIBRTE_EVENTDEV_DEBUG
header guards to avoid the performance hit in non-debug execution.
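A sketch of the caller-side check; the specific rte_errno values are defined
by the API and are not restated here:

    #include <rte_errno.h>
    #include <rte_eventdev.h>

    static uint16_t
    enqueue_with_diagnosis(uint8_t dev_id, uint8_t port_id,
                           struct rte_event ev[], uint16_t n)
    {
        uint16_t done = rte_event_enqueue_burst(dev_id, port_id, ev, n);

        if (done < n) {
            /* Inspect rte_errno to tell an invalid argument (e.g. a bad
             * queue_id or sched_type) apart from device backpressure. */
        }
        return done;
    }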
Signed-off-by: Gage Eads <gage.eads@intel.com>
Add in APIs for extended stats so that eventdev implementations can report
out information on their internal state. The APIs are based on, but not
identical to, the equivalent ethdev functions.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
PMDs that only do a specific type of scheduling cannot provide
CFG_ALL_TYPES, so the Eventdev infrastructure should not demand
that every PMD supports CFG_ALL_TYPES.
By not overriding the default configuration of the queue as
suggested by the PMD, the eventdev_common unit tests can pass
on all PMDs, regardless of their capabilities.
RTE_EVENT_QUEUE_CFG_DEFAULT is no longer used by the eventdev layer,
so it can be removed now. Applications should use CFG_ALL_TYPES
if they require enqueuing events of all types to a queue, or specify which
type of queue they require.
The CFG_DEFAULT value is changed to CFG_ALL_TYPES in event/skeleton,
so as not to break the compile.
A capability flag is added that indicates if the underlying PMD
supports creating queues of ALL_TYPES.
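A configuration sketch, assuming the capability and queue config flags are
named as below:

    #include <rte_eventdev.h>

    static int
    setup_queue(uint8_t dev_id, uint8_t queue_id)
    {
        struct rte_event_dev_info info;
        struct rte_event_queue_conf qconf;

        rte_event_dev_info_get(dev_id, &info);
        /* Start from the default configuration suggested by the PMD. */
        rte_event_queue_default_conf_get(dev_id, queue_id, &qconf);

        /* Only request an all-types queue when the PMD supports it. */
        if (info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
            qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;

        return rte_event_queue_setup(dev_id, queue_id, &qconf);
    }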
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
An eventdev driver may return an error on dequeue timeout tick conversion.
Change the PMD callback interface to address this.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This patch initializes the links_map array entries to
EVENT_QUEUE_SERVICE_PRIORITY_INVALID, as expected by
rte_event_port_links_get(). This is necessary for the sw eventdev PMD,
which does not initialize links_map when rte_event_port_setup() calls
rte_event_port_unlink().
Fixes: 4f0804bbdfb9 ("eventdev: implement the northbound APIs")
Signed-off-by: Gage Eads <gage.eads@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
rte_device is a generic device which is available to the applications
and EAL. This patch replaces rte_pci_device in 'struct rte_eventdev'
and in 'struct rte_event_dev_info' with the common rte_device.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Improve the documentation of the return values of the
rte_event_dequeue_timeout_ticks() function, adding a
-ENOTSUP value for eventdevs that do not support waiting.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>