As described in [1] and as announced in [2], the field ``dataunit_len``
of ``struct rte_crypto_cipher_xform`` is moved to the end of the
structure and extended to ``uint32_t``.
In this way, data-unit lengths bigger than 64K bytes can be supported.
[1] commit d014dddb2d ("cryptodev: support multiple cipher
data-units")
[2] commit 9a5c09211b ("doc: announce extension of crypto data-unit
length")
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
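As an illustration (not part of the patch itself), an XTS cipher transform
can now request data units larger than 64K bytes:

    struct rte_crypto_cipher_xform cipher_xform = {
        .algo = RTE_CRYPTO_CIPHER_AES_XTS,
        .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
    };

    /* 1 MiB data units would not fit in the previous 16-bit field. */
    cipher_xform.dataunit_len = 1U << 20;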
This patch fixes a possible buffer overrun problem in the crypto perf test.
Previously, when the user-configured AAD size was over 12 bytes, copying
the template AAD would cause a buffer overrun.
The problem is fixed by copying at most 12 bytes of the AAD template.
Fixes: 8a5b494a7f ("app/test-crypto-perf: add AEAD parameters")
Cc: stable@dpdk.org
Signed-off-by: Przemyslaw Zegan <przemyslawx.zegan@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
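The shape of the fix is a bounded copy along these lines (variable names
are illustrative, not the actual test code):

    /* Copy at most 12 bytes of the AAD template to avoid overrunning it
     * when the user-configured AAD size is larger. */
    uint16_t copy_len = RTE_MIN(aad_size, (uint16_t)12);

    memcpy(aad_dst, aad_template, copy_len);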
For inline protocol inbound SAs, plain IPv4 and IPv6 packets are
delivered to the application, unlike with inline crypto or lookaside.
Hence fix the application to not drop them when working in
single SA mode.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add a max queue pairs limit devargs option to the crypto cnxk driver. It
can be used to limit the number of queue pairs
supported by the device. The default value is 63.
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Reviewed-by: Anoob Joseph <anoobj@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
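For example (assuming the devarg name documented for the cnxk crypto
driver), the limit could be set on the EAL command line as
``-a 0002:20:00.1,max_qps_limit=4``.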
As reported by Dmitry, RTE_MEMPOOL_F_POOL_CREATED is a flag only
manipulated internally.
This flag is not supposed to be set by an application and would
probably result in incorrect behavior if an application did pass it.
At least one other internal flag has been added recently and more may be
introduced later.
Rework the check and export a mask of valid user flags for use in the
unit test.
Fixes: b240af8b10 ("mempool: enforce valid flags at creation")
Reported-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
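A minimal sketch of the reworked check, using an illustrative mask name
rather than the exported symbol:

    /* Sketch only: reject any flag outside the set an application may
     * legitimately pass, so internal flags such as
     * RTE_MEMPOOL_F_POOL_CREATED cannot be requested at creation. */
    if (flags & ~VALID_USER_FLAGS_MASK) {   /* mask name is illustrative */
        rte_errno = EINVAL;
        return NULL;
    }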
MEMPOOL_PG_NUM_DEFAULT and MEMPOOL_PG_SHIFT_MAX are not used.
Fixes: fd943c764a ("mempool: deprecate xmem functions")
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Add RTE_ prefix to macro used to register mempool driver.
The old one is still available but deprecated.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Add RTE_ prefix to the helper macro calculating mempool header size and
make it internal. The old macro is still available but deprecated.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Add RTE_ prefix to internal API defined in public header.
Use the prefix instead of a double underscore.
Use uppercase for macros in case of name conflicts.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Fix the mempool flags namespace by adding an RTE_ prefix to the name.
The old flags remain usable, to be deprecated in the future.
The MEMPOOL_F_NON_IO flag added in this release is simply renamed to
have the RTE_ prefix.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Move the documentation onto a separate line just before each define.
This prepares for slightly longer flag names due to the namespace prefix.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Instead of polling for the previous lock holder to unlock, use the
wait_until_equal API.
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
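A minimal sketch of the pattern, assuming a 32-bit lock word
(``lock->locked`` here is illustrative) where 0 means unlocked:

    /* Block (with a more power-friendly wait on supported CPUs) until the
     * previous holder stores 0 into the lock word. */
    rte_wait_until_equal_32(&lock->locked, 0, __ATOMIC_ACQUIRE);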
Instead of polling for mcfg->magic to be updated, use the
wait_until_equal API.
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Remove the unnecessary header file rte_atomic.h
included in the example module.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Convert rte_atomic32_test_and_set to compiler CAS atomic
operation for display_stats sync.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
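The conversion typically has the following shape, assuming display_stats
has become a plain uint32_t used as a 0/1 flag:

    uint32_t expected = 0;

    /* Previously: if (rte_atomic32_test_and_set(&display_stats)) { ... } */
    if (__atomic_compare_exchange_n(&display_stats, &expected, 1,
            0 /* strong */, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
        /* this thread won the right to display the statistics */
    }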
Convert rte_atomic32_cmpset to compiler atomic CAS
operation for channel status sync.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Convert rte_atomic usages to compiler atomic built-ins
for stats_read_pending sync in l2fwd_jobstats module.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Convert rte_atomic usages to compiler atomic built-ins
for thread sync.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Convert rte_atomic usages to compiler atomic built-ins
for kni_stop and kni_pause sync.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Convert rte_atomic32_test_and_set usage to compiler atomic
CAS operation for display_stats sync.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Convert rte_atomic usages to compiler atomic built-ins
for global_exit_flag sync.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
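An illustrative before/after shape, assuming global_exit_flag is now a
plain uint32_t:

    /* Signal handler / control thread sets the flag... */
    __atomic_store_n(&global_exit_flag, 1, __ATOMIC_RELAXED);

    /* ...and worker loops poll it with a relaxed load. */
    while (__atomic_load_n(&global_exit_flag, __ATOMIC_RELAXED) == 0) {
        /* process packets */
    }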
Convert rte_atomic usages to compiler atomic built-ins
for stats sync.
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Tested-by: David Christensen <drc@linux.vnet.ibm.com>
In the stack module, remove the header file rte_atomic.h
as it is not being used.
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
This patch adds the option --list (-l) to dpdk-telemetry.py which will
print all of the available file-prefixes for DPDK processes that have
telemetry enabled.
The prefixes will also be printed if the user passes an incorrect prefix
in the --file-prefix (-f) option.
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
The instance option help text was incorrect; this patch corrects it.
Fixes: 11435aae20 ("usertools/telemetry: connect to separate instances")
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
When the first port in a given protection domain (PD) starts,
install a mempool event callback for this PD and register all existing
memory regions (MR) for it. When the last port in a PD closes,
remove the callback and unregister all mempools for this PD.
This behavior can be switched off with a new devarg: mr_mempool_reg_en.
On the TX slow path, i.e. when an MR key for the address of the buffer
to send is not in the local cache, first try to retrieve it from
the database of registered mempools. Direct and indirect mbufs are
supported, as well as externally attached mbufs from the mlx5 MPRQ feature.
A lookup in the database of non-mempool memory is used as a last resort.
RX mempools are registered regardless of the devarg value.
On the RX data path, only the local cache and the mempool database are used.
If implicit mempool registration is disabled, these mempools
are unregistered at port stop, releasing the MRs.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
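For example, implicit mempool registration could be disabled by passing
``mr_mempool_reg_en=0`` in the mlx5 devargs (assuming boolean semantics
for this devarg).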
Add internal API to register mempools, that is, to create memory
regions (MR) for their memory and store them in a separate database.
Implementation deals with multi-process, so that class drivers don't
need to. Each protection domain has its own database. Memory regions
can be shared within a database if they represent a single hugepage
covering one or more mempools entirely.
Add internal API to look up an MR key for an address that belongs
to a known mempool. It is the responsibility of the class driver
to extract the mempool from an mbuf.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
A mempool is a generic allocator that is not necessarily used
for device I/O operations, and its memory is not necessarily used for DMA.
Add the MEMPOOL_F_NON_IO flag to mark such mempools automatically:
a) if their objects are not contiguous;
b) if IOVA is not available for any object.
Other components can inspect this flag
in order to optimize their memory management.
Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
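For instance, a component that maps mempool memory for DMA could skip
such pools; a sketch:

    /* Nothing to map for DMA if the pool's memory is never used for I/O. */
    if (mp->flags & MEMPOOL_F_NON_IO)
        return 0;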
Data path performance can benefit if the PMD knows which memory it will
need to handle in advance, before the first mbuf is sent to the PMD.
It is impractical, however, to consider all allocated memory for this
purpose. Most often mbuf memory comes from mempools that can come and
go. PMD can enumerate existing mempools on device start, but it also
needs to track creation and destruction of mempools after the forwarding
starts but before an mbuf from the new mempool is sent to the device.
Add an API to register callback for mempool life cycle events:
* rte_mempool_event_callback_register()
* rte_mempool_event_callback_unregister()
Currently tracked events are:
* RTE_MEMPOOL_EVENT_READY (after populating a mempool)
* RTE_MEMPOOL_EVENT_DESTROY (before freeing a mempool)
Provide a unit test for the new API.
The new API is internal, because it is primarily demanded by PMDs that
may need to deal with any mempools and do not control their creation,
while an application, on the other hand, knows which mempools it creates
and doesn't care about internal mempools PMDs might create.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
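A minimal sketch of how a PMD might consume the new API; the callback
signature follows the description above, while the register/unregister
helpers inside it are hypothetical driver functions:

    static void
    mp_event_cb(enum rte_mempool_event event, struct rte_mempool *mp,
                void *user_data)
    {
        struct pmd_priv *priv = user_data;    /* hypothetical driver state */

        if (event == RTE_MEMPOOL_EVENT_READY)
            pmd_register_mempool(priv, mp);   /* hypothetical */
        else if (event == RTE_MEMPOOL_EVENT_DESTROY)
            pmd_unregister_mempool(priv, mp); /* hypothetical */
    }

    /* On device start: */
    rte_mempool_event_callback_register(mp_event_cb, priv);
    /* On device close: */
    rte_mempool_event_callback_unregister(mp_event_cb, priv);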
Fix a spelling error which was causing other patches to be reported as failing.
Fixes: 69daa9e502 ("net/cnxk: support inline security setup for cn10k")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Remove the rawdev-based octeontx2-ep driver, as the common/octeontx2
code it depends on will soon be going away. Moreover, this driver is no
longer required since the net/octeontx_ep driver is sufficient.
Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
Remove the rawdev-based octeontx2-dma driver, as the common/octeontx2
code it depends on will soon be going away. A new DMA driver will
take its place once the rte_dmadev library is in.
Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
Add a test case to validate the functionality of drivers' burst capacity
API implementations.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
For DMA devices which support the fill operation, run unit tests to
verify fill behaviour is correct.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
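For reference, a fill enqueue in such a test looks roughly like this
(a sketch; dev_id, vchan and dst_iova are assumed to be prepared elsewhere):

    /* Fill 'len' bytes at dst_iova with a repeating 64-bit pattern and
     * submit the batch in the same call. A negative return means the
     * enqueue failed (e.g. no room in the descriptor ring). */
    int idx = rte_dma_fill(dev_id, vchan, 0xA5A5A5A5A5A5A5A5ULL,
                           dst_iova, len, RTE_DMA_OP_FLAG_SUBMIT);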
Add a series of tests to inject bad copy operations into a dmadev to
test the error handling and reporting capabilities. Various combinations
of errors in various positions in a burst are tested, as are errors in
bursts with fence flag set, and multiple errors in a single burst.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Due to HW or driver limitations, not all dmadevs may support full error
handling, e.g. safely managing and reporting an invalid address passed to
a copy operation. The skeleton dmadev, for example, being pure software,
will always seg-fault if passed an invalid address. To indicate the
availability of safe error handling by a device, we add a capability
flag for it.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
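A test or application can gate error-injection paths on the new
capability; a sketch, assuming the RTE_DMA_CAPA_HANDLES_ERRORS flag name:

    struct rte_dma_info info;

    if (rte_dma_info_get(dev_id, &info) == 0 &&
        (info.dev_capa & RTE_DMA_CAPA_HANDLES_ERRORS)) {
        /* safe to enqueue deliberately invalid addresses and verify
         * they are reported through completed_status() */
    }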
Add unit tests for various combinations of use for dmadev, copying
bursts of packets in various formats, e.g.
1. enqueuing two smaller bursts and completing them as one burst
2. enqueuing one burst and gathering completions in smaller bursts
3. using completed_status() function to gather completions rather than
just completed()
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
When running the dmadev_autotest, run the suite of copy tests on the
skeleton driver created for API testing too, rather than just destroying
the driver instances once the API tests are complete. This helps to
sanity-check that the tests themselves are reasonable.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
For each dmadev instance, perform some basic copy tests to validate that
functionality.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Run basic sanity tests for configuring, starting and stopping a dmadev
instance to help validate drivers. This also provides the framework for
future tests for data-path operation.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Add a function and wrapper macro to iterate over all DMA devices.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
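A sketch of the iteration, assuming the wrapper macro is
RTE_DMA_FOREACH_DEV:

    int16_t dev_id;

    RTE_DMA_FOREACH_DEV(dev_id) {
        run_tests_on_device(dev_id);   /* hypothetical per-device hook */
    }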
Add a burst capacity check API to the dmadev library. This API is useful to
applications which need to know how many descriptors can be enqueued in the
current batch. For example, it could be used to determine whether all
segments of a multi-segment packet can be enqueued in the same batch or not
(to avoid half-offload of the packet).
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
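An illustrative use when offloading a multi-segment packet (m is the
mbuf), assuming the API is rte_dma_burst_capacity():

    /* Only enqueue if every segment fits in the current batch, to avoid
     * half-offloading the packet. */
    if (rte_dma_burst_capacity(dev_id, vchan) >= m->nb_segs) {
        /* enqueue one rte_dma_copy() per segment */
    }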
Add a function to check if a device or vchan has completed all jobs
assigned to it, without gathering the results. This is primarily for
use in testing, to allow the hardware to be in a known-state prior to
gathering completions.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
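A sketch of how a test might wait for a vchan to drain before gathering
completions, assuming the function added is rte_dma_vchan_status():

    enum rte_dma_vchan_status st;

    do {
        if (rte_dma_vchan_status(dev_id, vchan, &st) < 0)
            break;   /* driver does not support the status query */
    } while (st == RTE_DMA_VCHAN_ACTIVE);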
This patch adds a dmadev API test based on the 'dma_skeleton' vdev. The
test cases can be executed using the 'dmadev_autotest' command in the test
framework.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
The skeleton dmadevice driver, along the lines of the rawdev skeleton,
showcases the dmadev library.
The skeleton is designed as a virtual device which is plugged into the
vdev bus on initialization.
Also, enable compilation of the dmadev skeleton drivers.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
This patch adds the data plane API for dmadev.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
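A minimal enqueue/complete sequence using the data plane calls (a sketch;
dev_id, vchan and the IOVAs are assumed to be set up already):

    uint16_t last_idx, nb_done;
    bool has_error = false;

    /* Enqueue one copy and submit the batch in the same call. */
    if (rte_dma_copy(dev_id, vchan, src_iova, dst_iova, length,
                     RTE_DMA_OP_FLAG_SUBMIT) < 0)
        return -1;   /* no space left in the descriptor ring */

    /* Later: gather up to one completion. */
    nb_done = rte_dma_completed(dev_id, vchan, 1, &last_idx, &has_error);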
This patch adds the control plane API for dmadev.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
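A minimal bring-up sequence with the control plane calls (a sketch; field
values are illustrative and error handling is trimmed):

    struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
    struct rte_dma_vchan_conf vchan_conf = {
        .direction = RTE_DMA_DIR_MEM_TO_MEM,
        .nb_desc = 1024,
    };

    if (rte_dma_configure(dev_id, &dev_conf) < 0 ||
        rte_dma_vchan_setup(dev_id, 0, &vchan_conf) < 0 ||
        rte_dma_start(dev_id) < 0)
        return -1;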
A 'dmadev' is a generic type of DMA device.
This patch introduces the 'dmadev' device allocation functions.
The infrastructure is prepared to welcome drivers in drivers/dma/.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
As stated in the API, dynamic fields and flags should be created with no
additional flag set (the flags argument exists in the API only for future
changes).
Fix the dynamic flag register helper, which was not enforcing this, and
add unit tests.
Fixes: 4958ca3a44 ("mbuf: support dynamic fields and flags")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
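For illustration, a correct registration passes zero in the flags field
(the flag name below is made up):

    static const struct rte_mbuf_dynflag my_dynflag_desc = {
        .name = "example_dynflag",
        .flags = 0,   /* must be 0; reserved for future extensions */
    };

    /* Returns the bit number in ol_flags on success, negative on error. */
    int bitnum = rte_mbuf_dynflag_register(&my_dynflag_desc);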