The rte_service_lcore_reset_all function stops execution of services
on all lcores and switches them back from ROLE_SERVICE to ROLE_RTE.
However, the thread loop for slave lcores (eal_thread_loop) distinguishes
between these roles when setting the lcore state after processing the
delegated function: it sets the WAIT state for ROLE_SERVICE, but FINISHED
for ROLE_RTE. So changing the role to RTE before stopping work in the
slave lcores causes the lcores to end up in the FINISHED state. That is
why rte_eal_wait_lcore must be run after rte_service_lcore_reset_all to
bring the lcores back to the launchable (WAIT) state.
This has been fixed in the test app and clarified in the API documentation.
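A minimal sketch of this ordering (slcore_id is a placeholder for a
service lcore added earlier by the application):

    rte_service_lcore_reset_all();  /* lcores switch back to ROLE_RTE */
    rte_eal_wait_lcore(slcore_id);  /* joins the FINISHED lcore, state -> WAIT */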
Setting the state to WAIT in rte_service_runner_func is premature, as
rte_service_runner_func is still part of the lcore function delegated
to the slave lcore. The state is overwritten anyway in the slave lcore
thread loop. This premature setting of the state to WAIT might,
however, cause rte_eal_wait_lcore, called by the application, to
return before the slave lcore thread has set the FINISHED state. That
is why it is removed from the librte_eal rte_service_runner_func function.
Bugzilla ID: 464
Fixes: 21698354c8 ("service: introduce service cores concept")
Fixes: f038a81e1c ("service: add unit tests")
Cc: stable@dpdk.org
Reported-by: Sarosh Arif <sarosh.arif@emumba.com>
Signed-off-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
The impl_opaque field is shared between the timer arm and cancel
operations. Meanwhile, the state flag acts as a guard variable to
make sure the update of impl_opaque is synchronized. The original
code uses rte_smp barriers to achieve that. This patch uses C11
atomics with an explicit one-way memory barrier instead of the full
rte_smp_w/rmb() barriers to avoid unnecessary barriers on aarch64.
Since compilers generate the same instructions for volatile and
non-volatile variables when using the C11 __atomic built-ins, the
volatile keyword is kept in front of the state enum to avoid an ABI
break.
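A simplified sketch of the resulting pattern (flow abridged, variable
names illustrative):

    /* arm: impl_opaque becomes visible no later than the state flag */
    evtim->impl_opaque[0] = (uint64_t)(uintptr_t)tim;
    __atomic_store_n(&evtim->state, RTE_EVENT_TIMER_ARMED,
            __ATOMIC_RELEASE);

    /* cancel: the acquire load orders the impl_opaque read after it */
    if (__atomic_load_n(&evtim->state, __ATOMIC_ACQUIRE) ==
            RTE_EVENT_TIMER_ARMED)
        tim = (struct rte_timer *)(uintptr_t)evtim->impl_opaque[0];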
Cc: stable@dpdk.org
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
No thread will access the impl_opaque data after the timer is
canceled. When a new timer is armed, the data is refilled. So the
cleanup process is unnecessary.
Cc: stable@dpdk.org
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
The in_use flag is a per-core variable which is not shared between
lcores in the normal case, and accesses to this variable only need to
be ordered on the same core. However, if a non-EAL thread picks the
highest lcore to insert timers into, there is a possibility of
conflicts on this flag between threads, so an atomic compare-and-swap
operation is needed.
Use the C11 atomics instead of the generic rte_atomic operations to
avoid the unnecessary barrier on aarch64.
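A sketch of the pattern (in_use, lcore_id and the poll-list helper are
illustrative names, not the adapter's actual symbols):

    static uint8_t in_use[RTE_MAX_LCORE];

    uint8_t expected = 0;

    /* only the first thread to flip the flag registers the lcore */
    if (__atomic_compare_exchange_n(&in_use[lcore_id], &expected, 1,
            0 /* strong */, __ATOMIC_RELAXED, __ATOMIC_RELAXED))
        add_lcore_to_poll_list(lcore_id); /* hypothetical helper */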
Cc: stable@dpdk.org
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
The n_poll_lcores counter and poll_lcore array are shared between
lcores, and the updates of these variables are not protected by the
spinlock of each per-lcore timer list. The read-modify-write
operations on the counter are not atomic, so there is a potential
race condition between lcores.
Use C11 atomics with RELAXED ordering to prevent conflicts.
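A sketch of the relaxed read-modify-write (variable names illustrative,
surrounding code abridged):

    /* atomically reserve a slot so two lcores never get the same index */
    uint16_t n = __atomic_fetch_add(&n_poll_lcores, 1, __ATOMIC_RELAXED);

    poll_lcores[n] = lcore_id;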
Fixes: cc7b73ea9e ("eventdev: add new software timer adapter")
Cc: stable@dpdk.org
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
This patch adds traces to some Cryptodev functions that are used
in primary/secondary context.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch adds a function that can check if a queue pair was
already set up. This may be useful when dealing with a multi-process
approach in cryptodev.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Add a note to the rte_crypto_sym_op->auth.data fields to state that,
for the DOCSIS security protocol, these are used to specify the offset
and length of the data over which the CRC is calculated.
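A hedged sketch of how a DOCSIS operation could fill these fields
(crc_offset and crc_len are placeholders derived from the DOCSIS frame
layout; op is a struct rte_crypto_op pointer):

    /* for DOCSIS, auth.data describes the region covered by the CRC */
    op->sym->auth.data.offset = crc_offset;
    op->sym->auth.data.length = crc_len;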
Signed-off-by: David Coyle <david.coyle@intel.com>
Signed-off-by: Mairtin o Loingsigh <mairtin.oloingsigh@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Add support for DOCSIS protocol to rte_security library. This support
currently comprises the combination of Crypto and CRC operations.
Signed-off-by: David Coyle <david.coyle@intel.com>
Signed-off-by: Mairtin o Loingsigh <mairtin.oloingsigh@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch adds verification of the element size of the mempool
provided for session creation. An error is returned if the element
size is too small to hold the session object.
Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
The multiprocess feature has been implicitly enabled so far.
Applications might want to explicitly disable it, for example when
using the non-EAL threads registration API.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add a helper to iterate all lcores.
The iterator callback is read-only with respect to the lcores list.
Implement a dump function on top of this for debugging.
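A hedged usage sketch, assuming a callback of the form
int (*cb)(unsigned int lcore_id, void *arg):

    static int
    count_lcore_cb(unsigned int lcore_id, void *arg)
    {
        (void)lcore_id;
        (*(unsigned int *)arg)++;
        return 0; /* a non-zero return would stop the iteration */
    }

    unsigned int count = 0;

    rte_lcore_iterate(count_lcore_cb, &count);
    rte_lcore_dump(stdout); /* the debug dump built on top of it */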
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
DPDK components and applications can have their say when a new lcore is
initialized. For this, they can register a callback for initializing and
releasing their private data.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
DPDK allows calling some parts of its API from a non-EAL thread, but
this has some limitations.
OVS (and other applications) have their own thread management but
still want to avoid such limitations by hacking RTE_PER_LCORE(_lcore_id)
and faking EAL threads, potentially unknown to some DPDK components.
Introduce a new API to register non-EAL threads and associate them to
a free lcore with a new NON_EAL role.
This role denotes lcores that do not run the DPDK mainloop and as such
prevents use of rte_eal_wait_lcore() and the like.
Multiprocess is not supported, as the need for cohabitation with this
new feature is unclear at the moment.
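A hedged sketch of the registration flow from an application-managed
thread:

    static void *
    app_thread(void *arg)
    {
        if (rte_thread_register() < 0) /* takes a free lcore, NON_EAL role */
            return NULL;
        printf("running with lcore id %u\n", rte_lcore_id());
        /* per-lcore DPDK facilities can be used here */
        rte_thread_unregister(); /* releases the lcore slot */
        return arg;
    }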
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
For consistency's sake, move all lcore role code into the dedicated
compilation unit / header.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This is a preparation step for dynamically unregistering threads.
Since we explicitly allocate a per thread trace buffer in
__rte_thread_init, add an internal helper to free this buffer.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Introduce a helper responsible for initialising the per-thread
context.
We can then have a unified context for EAL and non-EAL threads and
remove copy/pasted OS-specific helpers.
The per-EAL-thread CPU affinity setting is separated from the thread
init. This is to accommodate the Windows EAL, where CPU affinity is
not set at the moment.
Besides, having affinity set by the master lcore in FreeBSD and Linux
will make it possible to detect errors rather than panic in the child
thread. But the cleanup when such an event happens is left for later.
A side-effect of this patch is that control threads can now use
recursive locks (rte_gettid() was not called before).
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Because of the inline accessor + static declaration in rte_gettid(),
we end up with multiple symbols for RTE_PER_LCORE(_thread_id).
Each compilation unit will pay a cost when accessing this information
for the first time.
$ nm build/app/dpdk-testpmd | grep per_lcore__thread_id
0000000000000054 d per_lcore__thread_id.5037
0000000000000040 d per_lcore__thread_id.5103
0000000000000048 d per_lcore__thread_id.5259
000000000000004c d per_lcore__thread_id.5259
0000000000000044 d per_lcore__thread_id.5933
0000000000000058 d per_lcore__thread_id.6261
0000000000000050 d per_lcore__thread_id.7378
000000000000005c d per_lcore__thread_id.7496
000000000000000c d per_lcore__thread_id.8016
0000000000000010 d per_lcore__thread_id.8431
Make it global as part of the DPDK_21 stable ABI.
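An illustration of the cause in plain C (not the DPDK header itself):
a static TLS variable defined in a header is instantiated once per
compilation unit that uses the inline accessor, which produces the
duplicated symbols listed above.

    /* example_gettid.h */
    #include <unistd.h>
    #include <sys/syscall.h>

    static __thread int per_unit_tid = -1; /* one copy per compilation unit */

    static inline int
    example_gettid(void)
    {
        if (per_unit_tid == -1)
            per_unit_tid = (int)syscall(SYS_gettid);
        return per_unit_tid;
    }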
Fixes: ef76436c68 ("eal: get unique thread id")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
We have per-lcore thread symbols scattered in the OS implementations,
but common code relies on them.
Move all of them to the common code.
RTE_PER_LCORE(_socket_id) and RTE_PER_LCORE(_cpuset) have public
accessors and are not exported through the library map, so they can
be made static.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Change the barrier APIs for IO to reflect that Armv8-a has an
other-multi-copy atomicity memory model.
The Armv8-a memory model has been strengthened to require
other-multi-copy atomicity. This property requires memory accesses
from an observer to become visible to all other observers
simultaneously [3]. This means
a) A write arriving at an endpoint shared between multiple CPUs is
visible to all CPUs
b) A write that is visible to all CPUs is also visible to all other
observers in the shareability domain
This allows for using cheaper DMB instructions in the place of DSB
for devices that are visible to all CPUs (i.e. devices that DPDK
caters to).
Please refer to [1], [2] and [3] for more information.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=22ec71615d824f4f11d38d0e55a88d8956b7e45f
[2] https://www.youtube.com/watch?v=i6DayghhA8Q
[3] https://www.cl.cam.ac.uk/~pes20/armv8-mca/
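A hedged sketch of the relaxation (not the exact DPDK macros): an
outer-shareable DMB is enough to order CPU stores before an MMIO
doorbell write, where a DSB was used before.

    #define io_wmb_relaxed() asm volatile("dmb oshst" : : : "memory")
    #define io_rmb_relaxed() asm volatile("dmb oshld" : : : "memory")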
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Tested-by: Ruifeng Wang <ruifeng.wang@arm.com>
The service core list is populated but not used, so the states of
incorrect lcores are examined for a service.
Use the populated list to iterate over the service cores.
Fixes: e484ccddbe ("service: avoid false sharing on core state")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
All include files should be safe for inclusion from C++ code.
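The standard guard pattern applied to the headers:

    #ifdef __cplusplus
    extern "C" {
    #endif

    /* declarations */

    #ifdef __cplusplus
    }
    #endif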
Fixes: 5a5793a5ff ("rib: add RIB library")
Fixes: f7e861e21c ("rib: support IPv6")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
The max_nodes field in the config is signed, but a negative value
makes no sense. Also get rid of the extra BSD-style parentheses.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
The getter functions should take a constant pointer
to make it clear that the node is not modified.
The rib create functions do not modify their config structure.
Mark the config as constant so that programs can pass
simple constant data.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
The rte_rawdev_dump function was missing from the map file,
meaning it was unavailable for use when linking dynamically.
Fixes: c88b3f2558 ("rawdev: introduce raw device library")
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The rawdev info struct has a socket_id field which was not filled in.
We can also omit the checks for the parameter struct being NULL, since
that is already checked earlier in the function.
Fixes: c88b3f2558 ("rawdev: introduce raw device library")
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
To call the rte_rawdev_info_get() function, the user currently has to know
the underlying type of the device in order to pass an appropriate structure
or buffer as the dev_private pointer in the info structure. By allowing a
NULL value for this field, we can skip getting the device-specific info and
just return the generic info - including the device name and driver, which
can be used to determine the device type - to the user.
This ensures that basic info can be retrieved for all rawdevs, without
knowing the type, and even if the info driver API call has not been
implemented for the device.
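A hedged usage sketch (error handling and surrounding code abridged;
dev_id is a placeholder):

    struct rte_rawdev_info info;

    memset(&info, 0, sizeof(info));
    info.dev_private = NULL; /* skip the device-specific part */
    if (rte_rawdev_info_get(dev_id, &info) == 0)
        printf("driver %s, socket %d\n", info.driver_name, info.socket_id);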
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The Linux kernel vfio-pci module introduces the VF token to enable
SR-IOV support since kernel 5.7.
The VF token can be set by a vfio-pci based PF driver and must be
known by the vfio-pci based VF driver in order to gain access to the
device.
Since the vfio-pci module uses the VF token as internal data to
coordinate between the SR-IOV PF and VFs, DPDK can use the same VF
token for all PF devices by specifying the related EAL option.
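A hedged example of the option (the UUID value is arbitrary and only
has to match between the PF and VF applications):

    dpdk-testpmd -l 0-1 --vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d -- -i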
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tested-by: Harman Kalra <hkalra@marvell.com>
Add the dependent header files explicitly, so that the user just
needs to include the 'rte_uuid.h' header file directly, avoiding
compile errors such as:
(1). rte_uuid.h:97:55: error: unknown type name ‘size_t’
(2). rte_uuid.h:58:2: error: implicit declaration of function ‘memcpy’
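The fix boils down to pulling in the headers that provide those
symbols, e.g.:

    #include <stddef.h> /* size_t */
    #include <string.h> /* memcpy */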
Fixes: 6bc67c497a ("eal: add uuid API")
Cc: stable@dpdk.org
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
The startup of VFIO is too noisy. Logging is expensive on some
systems and distracting to the user.
It should not be logging at NOTICE level; reduce it to INFO level.
It really should be DEBUG here, but that would hide it by default.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
The 'group_status' has never been used and can be removed.
Fixes: 94c0776b1b ("vfio: support hotplug")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
If rte_lcore_index() is asked for the index of the current lcore
(argument -1) and is called from a non-EAL thread, it returns an
invalid result. The result would come from lcore_config[-1].core_index,
which is some other data in the per-thread area.
The resolution is to return -1, which is what rte_lcore_index()
returns when handed an invalid lcore.
The same issue existed with rte_lcore_to_cpu_id().
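A simplified sketch of the added guard (not the exact EAL code):

    int
    rte_lcore_index(int lcore_id)
    {
        if (lcore_id >= RTE_MAX_LCORE)
            return -1;
        if (lcore_id < 0) {
            if (rte_lcore_id() == LCORE_ID_ANY)
                return -1; /* non-EAL thread: no current lcore */
            lcore_id = (int)rte_lcore_id();
        }
        return lcore_config[lcore_id].core_index;
    }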
Bugzilla ID: 446
Fixes: 26cc3bbe4d ("eal: add lcore accessors")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
Change the inline functions to use __rte_always_inline to be
consistent with the rest of the inline functions.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
get_tsc_freq uses the 'nanosleep' system call to calculate the CPU
frequency. However, 'nanosleep' results in the process getting
unscheduled. The kernel saves and restores the PMU state, which
ensures that PMU cycles are not counted towards a sleeping process.
When RTE_ARM_EAL_RDTSC_USE_PMU is defined, this results in an
incorrect CPU frequency calculation. This logic is replaced with a
generic counter based loop.
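A hedged sketch of a counter-based measurement using raw aarch64
system registers (the actual EAL helpers differ): spin on the generic
timer instead of sleeping, so the PMU cycle counter keeps counting.

    #include <stdint.h>

    static inline uint64_t
    read_cntvct(void)
    {
        uint64_t v;

        asm volatile("mrs %0, cntvct_el0" : "=r" (v));
        return v;
    }

    static inline uint64_t
    read_cntfrq(void)
    {
        uint64_t v;

        asm volatile("mrs %0, cntfrq_el0" : "=r" (v));
        return v;
    }

    static inline uint64_t
    read_pmccntr(void)
    {
        uint64_t v;

        asm volatile("mrs %0, pmccntr_el0" : "=r" (v));
        return v;
    }

    static uint64_t
    measure_cpu_hz(void)
    {
        uint64_t start = read_cntvct();
        uint64_t cycles = read_pmccntr();

        /* busy-wait for one second worth of generic-timer ticks */
        while (read_cntvct() - start < read_cntfrq())
            ;
        return read_pmccntr() - cycles;
    }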
Bugzilla ID: 450
Fixes: f91bcbb2d9 ("eal/armv8: use high-resolution cycle counter")
Cc: stable@dpdk.org
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The experimental tags were removed, but the comment still classifies
the API as EXPERIMENTAL.
Fixes: 931cc531aa ("rawdev: remove experimental tag")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
The following libraries are experimental; all of their functions can
be changed or removed:
- librte_bbdev
- librte_bpf
- librte_compressdev
- librte_fib
- librte_flow_classify
- librte_graph
- librte_ipsec
- librte_node
- librte_rcu
- librte_rib
- librte_stack
- librte_telemetry
Their status is properly announced in MAINTAINERS.
Remind this status in their headers in a common fashion (aligned to ABI
docs).
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Having a special versioning for experimental/internal libraries adds
a maintenance cost, while this status is already announced in
MAINTAINERS and in the library headers/documentation.
Following discussions and vote at 05/20 TB meeting [1], use a single
versioning for all libraries in DPDK.
Note: for the ABI check, an exception [2] had been added when tweaking
this special versioning [3].
Prefer explicit libabigail rules (which will be dropped in 20.11).
1: https://mails.dpdk.org/archives/dev/2020-May/168450.html
2: https://git.dpdk.org/dpdk/commit/?id=23d7ad5db41c
3: https://git.dpdk.org/dpdk/commit/?id=ec2b8cd7ed69
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Inclusion of the endian.h header is now limited to Linux.
Windows endianness will be determined by the predefined
__BYTE_ORDER__ macro.
Signed-off-by: Tal Shnaiderman <talshn@mellanox.com>
Some EAL functions used by the mempool library were not exported on
Windows. These functions are now exported.
Mempool was added to the supported libraries for Windows compilation.
Signed-off-by: Fady Bader <fady@mellanox.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The function versioning implementation is not supported on Windows,
so function versioning is disabled there.
Signed-off-by: Fady Bader <fady@mellanox.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The QoS scheduler works off port time that is computed from the number
of CPU cycles that have elapsed since the last time the port was
polled. It divides the number of elapsed cycles by the number of
cycles per byte to calculate how many bytes can be sent; however,
this division can generate rounding errors, where some fraction of a
byte may be lost.
Lose enough of these fractional bytes and the QoS scheduler
underperforms. The problem is worse with low bandwidths.
To compensate for this rounding error, the fix does not advance the
port's time_cpu_cycles by the number of cycles that have elapsed, but
by the computed number of bytes that can be sent (which has been
rounded down) multiplied by the number of cycles per byte.
This means that the port's time_cpu_cycles will momentarily lag
behind the CPU cycles. At the next poll, the lag is taken into
account.
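A simplified sketch of the compensation (variable names and the plain
division are illustrative):

    uint64_t cycles = rte_get_tsc_cycles();
    uint64_t cycles_diff = cycles - port->time_cpu_cycles;
    uint64_t bytes_diff = cycles_diff / port->cycles_per_byte; /* rounded down */

    /* advance by whole bytes' worth of cycles; the remainder is
     * carried over and accounted for at the next poll */
    port->time_cpu_cycles += bytes_diff * port->cycles_per_byte;
    port->time_cpu_bytes += bytes_diff;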
Fixes: de3cfa2c98 ("sched: initial import")
Cc: stable@dpdk.org
Signed-off-by: Alan Dewar <alan.dewar@att.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
This commit introduces the API that is needed by RegEx devices in
order to work with the RegEx library.
During the probe of a RegEx device, the device should configure
itself and allocate the resources it requires.
On completion of the device init, it should call
rte_regex_dev_register in order to register itself as a RegEx device.
Signed-off-by: Ori Kam <orika@mellanox.com>
Signed-off-by: Parav Pandit <parav@mellanox.com>
Acked-by: Guy Kaneti <guyk@marvell.com>
This commit introduces the rte_regexdev_core.h file.
This file holds the internal structures and APIs that are used by
the regexdev.
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Guy Kaneti <guyk@marvell.com>
As RegEx becomes more widely used by DPDK applications, for example:
* Next Generation Firewalls (NGFW)
* Deep Packet and Flow Inspection (DPI)
* Intrusion Prevention Systems (IPS)
* DDoS Mitigation
* Network Monitoring
* Data Loss Prevention (DLP)
* Smart NICs
* Grammar based content processing
* URL, spam and adware filtering
* Advanced auditing and policing of user/application security policies
* Financial data mining - parsing of streamed financial feeds
* Application recognition.
* Memory introspection.
* Natural Language Processing (NLP)
* Sentiment Analysis.
* Big data database acceleration.
* Computational storage.
A number of PMD providers have started to work on HW implementations,
alongside SW implementations.
This lib adds support for these kinds of devices.
The RegEx Device API is composed of two parts:
- The application-oriented RegEx API that includes functions to setup
a RegEx device (configure it, setup its queue pairs and start it),
update the rule database and so on.
- The driver-oriented RegEx API that exports a function allowing
a RegEx poll Mode Driver (PMD) to simultaneously register itself as
a RegEx device driver.
RegEx device components and definitions:
+-----------------+
| |
| o---------+ rte_regexdev_[en|de]queue_burst()
| PCRE based o------+ | |
| RegEx pattern | | | +--------+ |
| matching engine o------+--+--o | | +------+
| | | | | queue |<==o===>|Core 0|
| o----+ | | | pair 0 | | |
| | | | | +--------+ +------+
+-----------------+ | | |
^ | | | +--------+
| | | | | | +------+
| | +--+--o queue |<======>|Core 1|
Rule|Database | | | pair 1 | | |
+------+----------+ | | +--------+ +------+
| Group 0 | | |
| +-------------+ | | | +--------+ +------+
| | Rules 0..n | | | | | | |Core 2|
| +-------------+ | | +--o queue |<======>| |
| Group 1 | | | pair 2 | +------+
| +-------------+ | | +--------+
| | Rules 0..n | | |
| +-------------+ | | +--------+
| Group 2 | | | | +------+
| +-------------+ | | | queue |<======>|Core n|
| | Rules 0..n | | +-------o pair n | | |
| +-------------+ | +--------+ +------+
| Group n |
| +-------------+ |<-------rte_regexdev_rule_db_update()
| | | |<-------rte_regexdev_rule_db_compile_activate()
| | Rules 0..n | |<-------rte_regexdev_rule_db_import()
| +-------------+ |------->rte_regexdev_rule_db_export()
+-----------------+
RegEx: A regular expression is a concise and flexible means for matching
strings of text, such as particular characters, words, or patterns of
characters. A common abbreviation for this is "RegEx".
RegEx device: A hardware or software-based implementation of RegEx
device API for PCRE based pattern matching syntax and semantics.
PCRE RegEx syntax and semantics specification:
http://regexkit.sourceforge.net/Documentation/pcre/pcrepattern.html
RegEx queue pair: Each RegEx device should have one or more queue
pairs to transmit a burst of pattern matching requests and receive a
burst of pattern matching responses. The pattern matching
request/response is embedded in the *rte_regex_ops* structure.
Rule: A pattern matching rule expressed in PCRE RegEx syntax along with
Match ID and Group ID to identify the rule upon the match.
Rule database: The RegEx device accepts regular expressions and converts
them into a compiled rule database that can then be used to scan data.
Compilation allows the device to analyze the given pattern(s) and
pre-determine how to scan for these patterns in an optimized fashion that
would be far too expensive to compute at run-time. A rule database
contains a set of rules compiled into a device-specific binary form.
Match ID or Rule ID: A unique identifier provided at the time of rule
creation for the application to identify the rule upon match.
Group ID: Rules can be grouped under one group ID to enable rule
isolation and effective pattern matching. A unique group identifier
provided at the time of rule creation for the application to identify
the rule upon match.
Scan: A pattern matching request through *enqueue* API.
It is possible that a given RegEx device may not support all the
features of PCRE. The application may probe unsupported features
through struct rte_regexdev_info::pcre_unsup_flags.
By default, all the functions of the RegEx Device API exported by a
PMD are lock-free functions which are assumed not to be invoked in
parallel on different logical cores to work on the same target
object. For instance, the dequeue function of a PMD cannot be invoked
in parallel on two logical cores to operate on the same RegEx queue
pair. Of course, this function can be invoked in parallel by
different logical cores on different queue pairs. It is the
responsibility of the upper-level application to enforce this rule.
In all functions of the RegEx API, the RegEx device is
designated by an integer >= 0 named the device identifier *dev_id*.
At the RegEx driver level, RegEx devices are represented by a generic
data structure of type *rte_regexdev*.
RegEx devices are dynamically registered during the PCI/SoC device
probing phase performed at EAL initialization time.
When a RegEx device is being probed, a *rte_regexdev* structure and
a new device identifier are allocated for that device. Then, the
regexdev_init() function supplied by the RegEx driver matching the
probed device is invoked to properly initialize the device.
The role of the device init function consists of resetting the hardware
or software RegEx driver implementations.
If the device init operation is successful, the correspondence between
the device identifier assigned to the new device and its associated
*rte_regexdev* structure is effectively registered.
Otherwise, both the *rte_regexdev* structure and the device identifier
are freed.
The functions exported by the application RegEx API to setup a device
designated by its device identifier must be invoked in the following
order:
- rte_regexdev_configure()
- rte_regexdev_queue_pair_setup()
- rte_regexdev_start()
Then, the application can invoke, in any order, the functions
exported by the RegEx API to enqueue pattern matching jobs, dequeue
pattern matching responses, get the stats, update the rule database,
get/set device attributes, and so on.
If the application wants to change the configuration (i.e. call
rte_regexdev_configure() or rte_regexdev_queue_pair_setup()), it must
call rte_regexdev_stop() first to stop the device and then do the
reconfiguration before calling rte_regexdev_start() again. The enqueue and
dequeue functions should not be invoked when the device is stopped.
Finally, an application can close a RegEx device by invoking the
rte_regexdev_close() function.
Each function of the application RegEx API invokes a specific function
of the PMD that controls the target device designated by its device
identifier.
For this purpose, all device-specific functions of a RegEx driver are
supplied through a set of pointers contained in a generic structure of
type *regexdev_ops*.
The address of the *regexdev_ops* structure is stored in the
*rte_regexdev* structure by the device init function of the RegEx driver,
which is invoked during the PCI/SoC device probing phase, as explained
earlier.
In other words, each function of the RegEx API simply retrieves the
*rte_regexdev* structure associated with the device identifier and
performs an indirect invocation of the corresponding driver function
supplied in the *regexdev_ops* structure of the *rte_regexdev*
structure.
For performance reasons, the addresses of the fast-path functions of
the RegEx driver are not contained in the *regexdev_ops* structure.
Instead, they are stored directly at the beginning of the *rte_regexdev*
structure to avoid an extra indirect memory access during their
invocation.
RTE RegEx device drivers do not use interrupts for enqueue or dequeue
operation. Instead, RegEx drivers export Poll-Mode enqueue and dequeue
functions to applications.
The *enqueue* operation submits a burst of RegEx pattern matching
requests to the RegEx device, and the *dequeue* operation gets a
burst of pattern matching responses for the ones submitted through
the *enqueue* operation.
A typical application's use of the RegEx device API follows this
programming flow (see the sketch after the list):
- rte_regexdev_configure()
- rte_regexdev_queue_pair_setup()
- rte_regexdev_rule_db_update(): needs to be invoked if a precompiled
  rule database is not provided in rte_regexdev_config::rule_db for
  rte_regexdev_configure() and/or the application needs to update the
  rule database.
- rte_regexdev_rule_db_compile_activate(): needs to be invoked if the
  rte_regexdev_rule_db_update function was used.
- Create or reuse an existing mempool for *rte_regex_ops* objects.
- rte_regexdev_start()
- rte_regexdev_enqueue_burst()
- rte_regexdev_dequeue_burst()
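A hedged end-to-end sketch of that flow (configuration values, the
rules array, the ops array and error handling are placeholders):

    uint8_t dev_id = 0;
    struct rte_regexdev_config dev_conf = {
        .nb_queue_pairs = 1,
        .nb_max_matches = 1,
    };
    struct rte_regexdev_qp_conf qp_conf = { .nb_desc = 128 };

    rte_regexdev_configure(dev_id, &dev_conf);
    rte_regexdev_queue_pair_setup(dev_id, 0, &qp_conf);
    rte_regexdev_rule_db_update(dev_id, rules, nb_rules);
    rte_regexdev_rule_db_compile_activate(dev_id);
    rte_regexdev_start(dev_id);

    nb_ops = rte_regexdev_enqueue_burst(dev_id, 0, ops, nb_ops);
    nb_ops = rte_regexdev_dequeue_burst(dev_id, 0, ops, nb_ops);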
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Ori Kam <orika@mellanox.com>
RTE_TRACE_POINT_DEFINE and RTE_TRACE_POINT_REGISTER must come in pairs.
Merge them and let RTE_TRACE_POINT_REGISTER handle the constructor part.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>