Flags for IEEE1588 with the ``PKT_*`` prefix have been renamed to
``RTE_MBUF_F_*``. Update the driver to use the new flag names.
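For illustration, a minimal sketch of the rename in a Tx path; only the
flag names are real, the helper around them is hypothetical:

#include <rte_mbuf.h>

/* before: PKT_TX_IEEE1588_TMST, after: RTE_MBUF_F_TX_IEEE1588_TMST */
static inline int
is_ptp_tx_timestamp_requested(const struct rte_mbuf *m)
{
        return (m->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST) != 0;
}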
Fixes: b9b509246d ("mbuf: remove deprecated offload flags")
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Tested-by: Yu Jiang <yux.jiang@intel.com>
Flags for IEEE1588 with the ``PKT_*`` prefix have been renamed to
``RTE_MBUF_F_*``. Update the driver to use the new flag names.
Fixes: b9b509246d ("mbuf: remove deprecated offload flags")
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
Using rte_mtr_color_in_protocol_set(), the user can enable a
combination of protocol headers, such as outer_vlan and outer_ip,
on a given meter object.
But rte_mtr_meter_vlan_table_update() and
rte_mtr_meter_dscp_table_update() carry no information about
which table, inner or outer, needs to be updated for the
corresponding protocol header.
Adding a protocol parameter allows the user to provide the required
protocol information so that the corresponding inner or outer table
is updated for that protocol header.
If the user wishes to configure both the inner and outer tables, the
API must be called twice with the correct protocol information.
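A minimal usage sketch, assuming the updated prototype takes an
enum rte_mtr_color_in_protocol value and that the OUTER_IP/INNER_IP
enum names follow rte_mtr_color_in_protocol_set():

#include <rte_mtr.h>

/* update only the table tied to the outer IP header; call again with
 * the inner IP protocol value to configure the inner table
 */
static int
update_outer_dscp_table(uint16_t port_id, uint32_t mtr_id,
                        enum rte_color dscp_table[64])
{
        struct rte_mtr_error error;

        return rte_mtr_meter_dscp_table_update(port_id, mtr_id,
                        RTE_MTR_COLOR_IN_PROTO_OUTER_IP,
                        dscp_table, &error);
}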
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Create a new Flow API action: METER_MARK.
It meters a packet stream and marks its packets with colors.
The marking is done in metadata, not in a packet field.
Unlike the METER action, it performs no policing at all.
The user has the flexibility to create any policy with the help of
the METER_COLOR item later; only the meter profile is mandatory here.
Add a testpmd command line for the METER_MARK action:
flow create ... actions meter_mark mtr_profile 20 / end
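A rough C equivalent of the rule above, for illustration only: the
action type is from this patch, but the conf struct field names
(profile, policy) are assumptions, and the profile handle is obtained
elsewhere:

#include <rte_flow.h>

static struct rte_flow *
create_meter_mark_rule(uint16_t port_id, struct rte_flow_meter_profile *prof)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_meter_mark mm = {
                .profile = prof,  /* mandatory meter profile */
                .policy = NULL,   /* marking only, no policing */
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_METER_MARK, .conf = &mm },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error err;

        return rte_flow_create(port_id, &attr, pattern, actions, &err);
}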
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Introduce new Meter API functions to retrieve the Meter profile and
policy objects using the profile/policy ID previously created with the
meter_profile_add() and meter_policy_create() functions.
This allows saving the pointer and avoiding any lookups in the
corresponding lists for quick access during flow rule creation.
It also eliminates the need for CIR, CBS and EBS calculations
and conversion to a PMD-specific format when the profile is used.
Pointers are destroyed and cannot be used after the corresponding
meter_profile_delete() or meter_policy_delete() are called.
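Usage would look roughly like this, assuming the getters return opaque
profile/policy handles that can later be passed to flow rules:

#include <rte_mtr.h>

static void
cache_meter_objects(uint16_t port_id, uint32_t profile_id, uint32_t policy_id,
                    struct rte_flow_meter_profile **profile,
                    struct rte_flow_meter_policy **policy)
{
        struct rte_mtr_error error;

        /* valid until the matching meter_profile/policy_delete() call */
        *profile = rte_mtr_meter_profile_get(port_id, profile_id, &error);
        *policy = rte_mtr_meter_policy_get(port_id, policy_id, &error);
}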
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Extend the modify_field Flow API with support for Meter Color Marker
modifications. It allows setting a packet's metadata to any
color marker: green, yellow or red. A user is able to specify
an initial packet color for the Meter API or create simple Metering
and Marking flow rules based on their own coloring algorithm.
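A sketch of setting the color via modify_field; apart from
RTE_FLOW_FIELD_METER_COLOR, the member names are assumed from the
existing modify_field API:

#include <rte_flow.h>

/* unconditionally mark packets yellow, e.g. as an initial color */
static const struct rte_flow_action_modify_field set_color = {
        .operation = RTE_FLOW_MODIFY_SET,
        .dst = { .field = RTE_FLOW_FIELD_METER_COLOR },
        .src = { .field = RTE_FLOW_FIELD_VALUE,
                 .value = { RTE_COLOR_YELLOW } },
        .width = 2,  /* enough bits to encode green/yellow/red */
};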
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Provide the ability to use a Color Marker set by a Meter
as a matching item in the Flow API. The Color Marker reflects
the metering result by setting the metadata for a
packet to a particular codepoint: green, yellow or red.
Add a testpmd command line to match on a meter color:
flow create 0 ingress group 0 pattern meter color is green / end
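The C counterpart of the rule above could look like this; the item
struct name and its color field are assumptions based on the item
description:

#include <rte_flow.h>

/* match only packets that the meter colored green */
static const struct rte_flow_item_meter_color green_spec = {
        .color = RTE_COLOR_GREEN,
};
static const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_METER_COLOR, .spec = &green_spec },
        { .type = RTE_FLOW_ITEM_TYPE_END },
};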
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Similar to RISC-V, the current version of LoongArch does not support
vector instructions. Re-use the vector processing stubs defined for PPC
in the ixgbe PMD for LoongArch. This enables ixgbe PMD usage in scalar
mode on LoongArch.
The ixgbe PMD was validated with an Intel X520-DA2 NIC using the
testpmd application and the l2fwd and l3fwd examples.
Signed-off-by: Min Zhou <zhoumin@loongson.cn>
Add all necessary elements for DPDK to compile and run EAL on
LoongArch64 SoCs.
This includes:
- EAL library implementation for LoongArch ISA.
- meson build structure for 'loongarch' architecture.
  The RTE_ARCH_LOONGARCH define is added for architecture identification.
- xmm_t structure operation stubs, as there is no vector support in
  the current version of LoongArch.
Compilation was tested on Debian and CentOS using a loongarch64
cross-compile toolchain on x86 build hosts. Functionality was tested
on Loongnix and Kylin, two Linux distributions supporting LoongArch
hosts, based on Linux 4.19 and maintained by Loongson Corporation.
We also tested DPDK on LoongArch with some external applications,
including: Pktgen-DPDK, OVS, VPP.
The platform is currently marked as Linux-only because no OS other
than Linux currently supports LoongArch hosts.
The i40e PMD driver is disabled on LoongArch because of the absence
of vector support in the current version.
Similar to RISC-V, the compilation of the following modules has been
disabled by this commit and will be re-enabled in later commits as
fixes are introduced:
net/ixgbe, net/memif, net/tap, examples/l3fwd.
Signed-off-by: Min Zhou <zhoumin@loongson.cn>
The build fails if RTE_LOG_DP_LEVEL is set to RTE_LOG_DEBUG.
Fix this by including the required header.
lib/rcu/rte_rcu_qsbr.h:678:40: error: expected ‘)’ before ‘PRIu64’
678 | "%s: status: least acked token = %" PRIu64,
| ^~~~~~
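The change is essentially to include the header providing PRIu64,
along the lines of:

#include <inttypes.h>  /* PRIu64 used by the debug log statements */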
Fixes: 30a1de105a ("lib: remove unneeded header includes")
Cc: stable@dpdk.org
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
<ctype.h> and <errno.h> need to be included for the build
since they were removed from <rte_common.h>.
examples/l2fwd-cat/cat.c: In function ‘parse_set’:
examples/l2fwd-cat/cat.c:66:16:
warning: implicit declaration of function ‘isblank’
66 | while (isblank(*str))
| ^~~~~~~
examples/l2fwd-cat/cat.c:18:1:
note: include ‘<ctype.h>’ or provide a declaration of ‘isblank’
17 | #include "cat.h"
+++ |+#include <ctype.h>
18 |
examples/l2fwd-cat/cat.c:70:15:
warning: implicit declaration of function ‘isdigit’
70 | if ((!isdigit(*str) && *str != '(') || *str == '\0')
| ^~~~~~~
examples/l2fwd-cat/cat.c:70:15:
note: include ‘<ctype.h>’ or provide a declaration of ‘isdigit’
examples/l2fwd-cat/cat.c:75:17:
error: ‘errno’ undeclared (first use in this function)
75 | errno = 0;
| ^~~~~
examples/l2fwd-cat/cat.c:18:1:
note: ‘errno’ is defined in header ‘<errno.h>’;
did you forget to ‘#include <errno.h>’?
17 | #include "cat.h"
+++ |+#include <errno.h>
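As the diagnostics suggest, the fix is just the two missing includes:

#include <ctype.h>  /* isblank(), isdigit() */
#include <errno.h>  /* errno */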
Fixes: 72b452c5f2 ("eal: remove unneeded includes from a public header")
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
GCC 12 warns when a pointer of union type points to an array of the
same defined size, as the union internally gets padded with pad bytes.
../examples/common/neon/port_group.h:42:21: error: array subscript
'union <anonymous>[0]' is partly outside array bounds of
'uint16_t[5]' {aka 'short unsigned int[5]'}
[-Werror=array-bounds]
42 | pnum->u64 = gptbl[v].pnum;
| ^~
../examples/common/neon/port_group.h:21:23: note: object 'pn' of
size [0, 10]
21 | port_groupx4(uint16_t pn[FWDSTEP + 1], uint16_t *lp, uint16x8_t dp1
| ~~~~~~~~~^~~~~~~~~~~~~~~
../examples/common/neon/port_group.h:43:21: error: array subscript
'union <anonymous>[0]' is partly outside array bounds of
'uint16_t[5]' {aka 'short unsigned int[5]'} [-Werror=array-bounds]
43 | pnum->u16[FWDSTEP] = 1;
| ^~
Fixes: 732115ce38 ("examples/l3fwd: move packet group function in common")
Cc: stable@dpdk.org
Signed-off-by: Amit Prakash Shukla <amitprakashs@marvell.com>
A pointer is passed to a macro and it seems to be mistakenly referenced.
This issue is seen only when compiling with GCC 12 for 32-bit:
drivers/net/qede/base/ecore_init_fw_funcs.c:1418:25:
error: array subscript 1 is outside array bounds of ‘u32[1]’
{aka ‘unsigned int[1]’} [-Werror=array-bounds]
1418 | ecore_wr(dev, ptt, ((addr) + (4 * i)), \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1419 | ((u32 *)&(arr))[i]); \
| ~~~~~~~~~~~~~~~~~~~
drivers/net/qede/base/ecore_init_fw_funcs.c:1465:17:
note: in expansion of macro ‘ARR_REG_WR’
1465 | ARR_REG_WR(p_hwfn, p_ptt, addr, pData, len_in_dwords);
| ^~~~~~~~~~
drivers/net/qede/base/ecore_init_fw_funcs.c:1439:35:
note: at offset 4 into object ‘pData’ of size 4
1439 | u32 *pData,
| ~~~~~^~~~~
Fixes: 3b307c55f2 ("net/qede/base: update FW to 8.40.25.0")
Cc: stable@dpdk.org
Signed-off-by: Amit Prakash Shukla <amitprakashs@marvell.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
When compiling with MinGW GCC 12,
the rte_flow_item array is seen as read out of bounds:
net/i40e/i40e_hash.c:389:47: error:
array subscript 50 is above array bounds of ‘const uint64_t[50]’
{aka ‘const long long unsigned int[50]’} [-Werror=array-bounds]
389 | item_hdr = pattern_item_header[last_item_type];
| ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~
It seems the assert check done above this line has no impact.
A real check is added to make the compiler happy.
Fixes: ef4c16fd91 ("net/i40e: refactor RSS flow")
Cc: stable@dpdk.org
Signed-off-by: Amit Prakash Shukla <amitprakashs@marvell.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
This patch fixes a compilation issue met with GCC 12 on
LoongArch64:
In function ‘mbuf_to_desc’,
inlined from ‘vhost_enqueue_async_packed’
inlined from ‘virtio_dev_rx_async_packed’
inlined from ‘virtio_dev_rx_async_submit_packed’
lib/vhost/virtio_net.c:1159:18: error:
‘buf_vec[0].buf_addr’ may be used uninitialized
1159 | buf_addr = buf_vec[vec_idx].buf_addr;
| ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/vhost/virtio_net.c: In function ‘virtio_dev_rx_async_submit_packed’:
lib/vhost/virtio_net.c:1834:27: note: ‘buf_vec’ declared here
1834 | struct buf_vector buf_vec[BUF_VECTOR_MAX];
| ^~~~~~~
It happens because the compiler assumes that the 'size'
variable in vhost_enqueue_async_packed could wrap to 0, since
both 'size' and pkt->pkt_len are uint32_t.
In practice, it would never happen since 'pkt->pkt_len' is
unlikely to be close to UINT32_MAX, but let's just change
'size' to uint64_t to make the compiler happy without
having to add runtime checks.
This patch also fixes similar patterns in three other
places, including one that also produces a similar build
issue on ARM64 in vhost_enqueue_single_packed().
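A sketch of the pattern being fixed; the helper name is made up and
the expression is simplified from the code quoted above:

#include <linux/virtio_net.h>
#include <rte_mbuf.h>

/* widen to 64 bits so the compiler cannot assume the sum wraps to 0 */
static inline uint64_t
async_buf_size(const struct rte_mbuf *pkt)
{
        return (uint64_t)pkt->pkt_len +
                        sizeof(struct virtio_net_hdr_mrg_rxbuf);
}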
Fixes: 873e8dad6f ("vhost: support packed ring in async datapath")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Tested-by: Amit Prakash Shukla <amitprakashs@marvell.com>
mvsam build was broken due to the recent session rework,
as it was not enabled in the default build.
This patch fixes the build.
Fixes: 3f3fc3308b ("security: remove private mempool usage")
Fixes: bdce2564db ("cryptodev: rework session framework")
Fixes: 66837861d3 ("drivers/crypto: support security session size query")
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
In a recent commit changing the return type from int to uint32_t,
I made a last-minute change to the functions rte_bsf32_safe and
rte_bsf64_safe, because I thought they had been forgotten.
Actually these functions return 0 or 1, so the return type should be int.
The return type is reverted to the original one for these 2 functions.
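For reference, the reverted prototypes look like this (bodies omitted,
parameter lists as assumed from the function names):

/* return 1 and set *pos if v is non-zero, return 0 otherwise */
static inline int rte_bsf32_safe(uint32_t v, uint32_t *pos);
static inline int rte_bsf64_safe(uint64_t v, uint32_t *pos);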
Fixes: 4b81c145ae ("eal: change return type of bsf/fls functions")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
RTE_TEST_[RT]X_DESC_DEFAULT and RTE_TEST_[RT]X_DESC_MAX macros have been
copied into a lot of app/ and examples/ code.
Those macros are local to each program.
Since they are not related to a DPDK public header/API, drop the RTE_TEST_
prefix.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
As reported by Coverity, this check is pointless since dev is already
dereferenced earlier. Besides, dev is passed by the rawdev layer and
can't be NULL.
Note: the issue was probably present before the incriminated commit.
It is unclear why Coverity would start complaining about this now.
Coverity issue: 380991
Fixes: 8f1d23ece0 ("eal: deprecate RTE_FUNC_PTR_* macros")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Refer to API functions with parenthesis, making doxygen create
hyperlinks.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Have the SW event device conform to the service core convention, where
-EAGAIN is returned in case no work was performed.
Prior to this patch, for an idle SW event device, a service lcore load
estimate based on RTE_SERVICE_ATTR_CYCLES would suggest 48% core
load.
At 7% of its maximum capacity, the SW event device needs about 15% of
the available CPU cycles* to perform its duties, but
RTE_SERVICE_ATTR_CYCLES would suggest the SW service used 48% of the
service core.
After this change, load deduced from RTE_SERVICE_ATTR_CYCLES will only
be a minor overestimation of the actual cycles used.
* The SW scheduler becomes more efficient at higher loads.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
As a part of its service function, a service usually polls some kind
of source (e.g., an RX queue, a ring, an eventdev port, or a timer
wheel) to retrieve one or more items of work.
In low-load situations, the service framework reports a significant
amount of cycles spent for all running services, despite the fact they
have performed little or no actual work.
The per-call cycle expenditure for an idle service (i.e., a service
currently without pending jobs) is typically very low. Polling an
empty ring or RX queue is inexpensive. However, since the service
function call frequency on an idle or lightly loaded lcore is going to
be very high indeed, the service function calls' cycles add up to a
significant amount. The only thing preventing the idle services'
cycles counters from making up 100% of the available CPU cycles is the
overhead of the service framework itself.
If the RTE_SERVICE_ATTR_CYCLES or RTE_SERVICE_LCORE_ATTR_CYCLES are
used to estimate service core load, the cores may look very busy when
the system is mostly doing nothing useful at all.
This patch allows for an idle service to indicate that no actual work
was performed during a particular service function call (by returning
-EAGAIN). In such cases the RTE_SERVICE_ATTR_CYCLES and
RTE_SERVICE_LCORE_ATTR_CYCLES values are not incremented.
The convention of returning -EAGAIN for idle services may in the
future also be used to have the lcore enter a short sleep, or reduce
its operating frequency, in case all services are currently idle.
This change is backward-compatible.
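A minimal sketch of a service callback following the convention; the
ring and the processing step are hypothetical:

#include <errno.h>
#include <stdint.h>
#include <rte_ring.h>

#define BURST 32

static int32_t
my_service_run(void *args)
{
        struct rte_ring *ring = args;
        void *objs[BURST];
        unsigned int n = rte_ring_dequeue_burst(ring, objs, BURST, NULL);

        if (n == 0)
                return -EAGAIN;  /* idle: not added to *_ATTR_CYCLES */

        /* ... process the n dequeued objects ... */
        return 0;
}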
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Optimize service loop so that the starting point is the lowest-indexed
service mapped to the lcore in question, and terminate the loop at the
highest-indexed service.
While the worst case latency remains the same, this patch
significantly reduces the service framework overhead for the average
case. In particular, scenarios where an lcore only runs a single
service, or multiple services whose id values are close (e.g., three
services with ids 17, 18 and 22), show significant improvements.
The worst case is where the lcore has two services mapped to it: one
with service id 0 and the other with id 63.
On a service lcore serving a single service, the service loop overhead
is reduced from ~190 core clock cycles to ~46, on an Intel Cascade
Lake generation Xeon. On weakly ordered CPUs, the gain is larger,
since the loop included load-acquire atomic operations.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Introduce a per-lcore counter for the total time spent on processing
services on that core.
This counter is useful when measuring individual lcore load.
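Reading the counter could look roughly like this:

#include <inttypes.h>
#include <stdio.h>
#include <rte_service.h>

static void
print_lcore_service_cycles(uint32_t lcore_id)
{
        uint64_t cycles;

        if (rte_service_lcore_attr_get(lcore_id,
                        RTE_SERVICE_LCORE_ATTR_CYCLES, &cycles) == 0)
                printf("lcore %u: %" PRIu64 " cycles spent in services\n",
                        lcore_id, cycles);
}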
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Move the statistics from the service data structure to the per-lcore
struct. This eliminates contention for the counter cache lines, which
decreases the producer-side statistics overhead for services deployed
across many lcores.
Prior to this patch, enabling statistics for a service with a
per-service function call latency of 1000 clock cycles deployed across
16 cores on an Intel Xeon 6230N @ 2.3 GHz would incur a cost of ~10000
core clock cycles per service call. After this patch, the statistics
overhead is reduced to 22 clock cycles per call.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This commit fixes a potential racy add that could occur if
multiple service lcores were executing the same MT-safe service
at the same time, with service statistics collection enabled.
Because multiple threads can run and execute the service, the
stats values can have multiple writer threads, resulting in the
requirement of using atomic addition for correctness.
Note that when an MT-unsafe service is executed, a spinlock is
held, so the stats increments are protected. This fact is used
to avoid executing atomic add instructions when not required.
Regular reads and increments are used, and only the store is
specified as atomic, reducing perf impact on e.g. x86 arch.
This patch causes a 1.25x increase in cycle cost for polling an
MT-safe service when statistics are enabled. No change was seen
for MT-unsafe services, or when statistics are disabled.
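In sketch form, with illustrative type and helper names rather than
the actual implementation:

#include <stdbool.h>
#include <stdint.h>

struct service_stats {  /* illustrative only */
        uint64_t calls;
};

static inline void
service_stats_inc(struct service_stats *st, bool mt_safe)
{
        if (mt_safe)
                /* multiple writer threads: a real atomic add is needed */
                __atomic_fetch_add(&st->calls, 1, __ATOMIC_RELAXED);
        else
                /* spinlock held: plain increment, atomic store only */
                __atomic_store_n(&st->calls, st->calls + 1,
                                __ATOMIC_RELAXED);
}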
Fixes: 21698354c8 ("service: introduce service cores concept")
Reported-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
This commit improves the performance reporting of the service
cores polling loop to show results both with and without statistics
collection enabled. Collecting cycle statistics is costly, due
to calls to rte_rdtsc() per service iteration.
Reported-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
There is a possibility of deadlock in this API,
as the same spinlock may be acquired twice in a nested manner.
If the lcore that is stopping the timer is different from the lcore
that owns the timer, the timer list lock is acquired in timer_del(),
even if local_is_locked is true. Because the same lock was already
acquired in rte_timer_stop_all(), the thread hangs.
This patch removes the nested lock acquisition.
Fixes: 821c51267b ("timer: add function to stop all timers in a list")
Cc: stable@dpdk.org
Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Acked-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
When more than two packets are merged in a flow and a third packet
arrives that matches the sequence of the second packet, prev_idx
will be 1 and not 2, resulting in packet re-ordering.
Signed-off-by: Kumara Parameshwaran <kumaraparamesh92@gmail.com>
Acked-by: Jiayu Hu <jiayu.hu@intel.com>
Currently, when vm_power_manager exits, we are using a LIST_FOREACH
macro to iterate over VM info structures while freeing them. This
leads to a use-after-free error. To address this, replace all usages of
LIST_* with TAILQ_* macros, and use the RTE_TAILQ_FOREACH_SAFE macro
to iterate over and delete the VM info structures.
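The resulting teardown loop looks roughly like this; the entry type,
list head and field names are hypothetical:

#include <sys/queue.h>
#include <rte_malloc.h>
#include <rte_tailq.h>

struct vm_info_entry {  /* illustrative */
        TAILQ_ENTRY(vm_info_entry) next;
        /* ... VM channel/monitor state ... */
};
TAILQ_HEAD(vm_info_head, vm_info_entry);

static void
free_all_vm_info(struct vm_info_head *head)
{
        struct vm_info_entry *entry, *tmp;

        /* safe variant: 'entry' may be freed inside the loop body */
        RTE_TAILQ_FOREACH_SAFE(entry, head, next, tmp) {
                TAILQ_REMOVE(head, entry, next);
                rte_free(entry);
        }
}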
Fixes: e8ae9b6625 ("examples/vm_power: channel manager and monitor in host")
Cc: stable@dpdk.org
Signed-off-by: Hamza Khan <hamza.khan@intel.com>
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
The function return type is changed to fixed-width uint32_t
to be consistent with what appears to be the original author's intent.
It doesn't make much sense to return signed integers from these functions.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Structure rte_security_session is moved to internal
headers which are not visible to applications.
The only field which should be used by the application is opaque_data.
This field can now be accessed via set/get APIs added in this
patch.
Subsequent changes in apps and libraries are made so that the code compiles.
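Application usage then becomes, in sketch form (the accessor names are
assumed from the description above):

#include <stdint.h>
#include <rte_security.h>

/* 'sess' is the opaque pointer returned by rte_security_session_create() */
static void
stash_app_cookie(void *sess, uint64_t cookie)
{
        rte_security_session_opaque_data_set(sess, cookie);
}

static uint64_t
fetch_app_cookie(void *sess)
{
        return rte_security_session_opaque_data_get(sess);
}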
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Gagandeep Singh <g.singh@nxp.com>
Tested-by: David Coyle <david.coyle@intel.com>
Tested-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
Added support for rte_security_op.session_get_size()
in all the PMDs which support rte_security sessions but
did not implement this op.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Tested-by: Gagandeep Singh <g.singh@nxp.com>
Tested-by: David Coyle <david.coyle@intel.com>
Tested-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
As per the current design, rte_security_session_create()
unnecessarily uses 2 mempool objects for a single session.
To address this, the API will now take only 1 mempool
object instead of 2. With this change, the library layer
gets the object from the mempool, and the session private data
is stored contiguously in the same mempool object.
The user needs to ensure that the mempool created in the application
is big enough for the session private data as well. This can be
ensured if the pool is created after getting the size of the session
private data using the API rte_security_session_get_size().
Since the set and get packet metadata operations for security sessions
are now made inline for inline crypto/protocol mode, a new member
fast_mdata is added to rte_security_session.
The opaque_data and fast_mdata fields are accessed via inline
APIs which do the pointer manipulation inside the library, based on
the session private data pointer coming from the application.
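A hedged sketch of the resulting setup flow; pool parameters are
arbitrary and error handling is omitted:

#include <rte_ethdev.h>
#include <rte_mempool.h>
#include <rte_security.h>

static void *
create_inline_session(uint16_t port_id,
                      struct rte_security_session_conf *conf)
{
        void *sec_ctx = rte_eth_dev_get_sec_ctx(port_id);

        /* element size covers the session plus PMD private data */
        unsigned int elt_sz = rte_security_session_get_size(sec_ctx);

        struct rte_mempool *mp = rte_mempool_create("sec_sess_mp", 1024,
                        elt_sz, 32, 0, NULL, NULL, NULL, NULL,
                        SOCKET_ID_ANY, 0);

        /* single mempool: session and private data come from one object */
        return rte_security_session_create(sec_ctx, conf, mp);
}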
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Gagandeep Singh <g.singh@nxp.com>
Tested-by: David Coyle <david.coyle@intel.com>
Tested-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
Structure rte_cryptodev_sym_session is moved to internal
headers which are not visible to applications.
The only field which should be used by the application is opaque_data.
This field can now be accessed via set/get APIs added in this
patch.
Subsequent changes in apps and libraries are made so that the code compiles.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Tested-by: Gagandeep Singh <g.singh@nxp.com>
Tested-by: David Coyle <david.coyle@intel.com>
Tested-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
This patch updates the scheduler PMD to use the unified session
data structure. Previously, thanks to the private session
array in the cryptodev sym session, no changes were
needed in the scheduler PMD other than the way ops
are enqueued/dequeued. The patch inherits the same design
as the original session data structure in the scheduler PMD,
so the cryptodev sym session can be used as a linear buffer for
both the session header and driver private data.
With the change there is an inevitable extra cost in both memory
(64 bytes per session per driver type) and cycle count (setting
the correct session for each crypto op based on the worker before
enqueue, and retrieving the original session after dequeue).
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Tested-by: Gagandeep Singh <g.singh@nxp.com>
Tested-by: David Coyle <david.coyle@intel.com>
Tested-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
As per the current design, rte_cryptodev_sym_session_create() and
rte_cryptodev_sym_session_init() use separate mempool objects
for a single session.
Moreover, the structure rte_cryptodev_sym_session is not directly used
by the application, so it may cause ABI breakage if the structure
is modified in the future.
To address these two issues, rte_cryptodev_sym_session_create()
now takes one mempool object, in which the session and session private
data are virtually/physically contiguous, and initializes both
fields. The API rte_cryptodev_sym_session_init() is removed.
rte_cryptodev_sym_session_create() now returns an opaque session
pointer which is used by the app and other APIs.
In the data path, the opaque session pointer is attached to rte_crypto_op
and the PMD can call an internal library API to get the session
private data pointer based on the driver id.
Note: previously, a single session could be used by different device
drivers, provided each of them initialized it. After the change, a
session created by one device driver cannot be used or
reinitialized by another driver.
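In sketch form, assuming the new create call takes the device id, the
xform chain and a single mempool; 'op' and the pool are set up
elsewhere:

#include <rte_cryptodev.h>

static int
setup_and_attach_session(uint8_t dev_id, struct rte_crypto_sym_xform *xform,
                         struct rte_mempool *sess_mp, struct rte_crypto_op *op)
{
        /* one call, one mempool object: session + PMD private data */
        void *sess = rte_cryptodev_sym_session_create(dev_id, xform, sess_mp);

        if (sess == NULL)
                return -1;

        /* the opaque pointer is what travels with the crypto op */
        return rte_crypto_op_attach_sym_session(op, sess);
}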
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Tested-by: Gagandeep Singh <g.singh@nxp.com>
Tested-by: David Coyle <david.coyle@intel.com>
Tested-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
During EAL init, all buses are probed and the devices found are
initialized. On eal_cleanup(), the inverse does not happen, meaning any
allocated memory and other configuration will not be cleaned up
appropriately on exit.
Currently, in order for device cleanup to take place, applications must
call the driver-relevant functions to ensure proper cleanup is done before
the application exits. Since initialization occurs for all devices on the
bus, not just the devices used by an application, it requires a)
application awareness of all bus devices that could have been probed on the
system, and b) code duplication across applications to ensure cleanup is
performed. An example of this is rte_eth_dev_close() which is commonly used
across the example applications.
This patch proposes adding bus cleanup to the eal_cleanup() to make EAL's
init/exit more symmetrical, ensuring all bus devices are cleaned up
appropriately without the application needing to be aware of all bus types
that may have been probed during initialization.
Contained in this patch are the changes required to perform cleanup for
devices on the PCI bus and VDEV bus during eal_cleanup(). There would be an
ask for bus maintainers to add the relevant cleanup for their buses since
they have the domain expertise.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Memzone allocation has been allowed on secondary processes for
10 years. Now it's time to update the documentation accordingly.
At the same time, fix mempool, mbuf and ring documentation which rely on
memzones internally.
Bugzilla ID: 1074
Fixes: 916e4f4f4e ("memory: fix for multi process support")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
The driver allocates "bp->rep_info" inside bnxt_init_rep_info(), which is
invoked from bnxt_rep_port_probe(). But the memory is freed inside
bnxt_uninit_resources(), which is wrong. As a result, after error
recovery bp->rep_info will be NULL. The memory should have been freed
inside bnxt_drv_uninit() to maintain symmetry of calls.
Fixes: 6dc83230b4 ("net/bnxt: support port representor data path")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
We are not invoking rte_eth_switch_domain_free currently owing to
an unnecessary check. This patch fixes that.
Fixes: e2895305a5 ("net/bnxt: fix resource cleanup")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>