There is no need to export this API. Remaining users should use the
rte_eal_vdev_init() function instead.
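A minimal sketch of going through the EAL instead (the device name and devargs string are illustrative, not taken from this patch):

#include <rte_dev.h>

/* Sketch only: "net_null0" and its arguments are made up for the example.
 * The EAL looks up the matching vdev driver and runs its init path, so
 * callers no longer need the exported driver init API. */
static int
create_example_vdev(void)
{
	return rte_eal_vdev_init("net_null0", "size=64,copy=0");
}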
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
A virtual device should get initialized through the rte_eal_vdev_init()
function to properly initialize the driver.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
This driver can use the library function rte_eth_dma_zone_reserve()
instead of duplicating the code.
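For reference, a hedged sketch of what the helper replaces in a PMD's queue setup (the ring name, size and alignment are illustrative):

#include <rte_ethdev.h>

/* Sketch: reserve the descriptor ring memzone through the ethdev helper
 * instead of open-coding the memzone lookup/reserve logic in the PMD. */
static const struct rte_memzone *
example_reserve_ring(struct rte_eth_dev *dev, uint16_t queue_id,
		     size_t ring_size, int socket_id)
{
	return rte_eth_dma_zone_reserve(dev, "rx_ring", queue_id,
					ring_size, RTE_CACHE_LINE_SIZE,
					socket_id);
}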
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
To properly embed the generic rte_device into the rte_eth_dev, this reworks
the bonding API to call through rte_eal_vdev_init().
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
This is a preparation to embed the generic rte_device into the rte_eth_dev
also for virtual devices.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
The secondary process doesn't properly attach to the rte_eth_dev
initialized by the primary process.
Accessing the device from the secondary process (e.g. via rte_eth_rx_burst)
causes the process to crash, because rte_eth_dev_data is not properly set.
The issue was exposed by
commit 7f95f78a8aea ("ethdev: clear data when allocating device"),
which now clears the rte_eth_dev_data entry.
For PCI devices the struct is initialized via
rte_eth_dev_pci_probe()->eth_dev_attach_secondary().
However, for virtio-user, virtio_user_pmd_probe() is called instead of
rte_eth_dev_pci_probe().
The fix is to call rte_eth_dev_attach_secondary() from
virtio_user_pmd_probe() when running as a secondary process.
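A hedged sketch of the fix pattern (error handling and the rest of the probe path are omitted; the exact integration in virtio_user_pmd_probe may differ):

#include <rte_eal.h>
#include <rte_ethdev.h>

/* Sketch of the secondary-process branch added to the probe path.
 * "name" is the vdev name passed to the probe function. */
static struct rte_eth_dev *
example_secondary_attach(const char *name)
{
	if (rte_eal_process_type() != RTE_PROC_SECONDARY)
		return NULL;	/* primary keeps its existing allocation path */

	/* Attach to the rte_eth_dev_data initialized by the primary. */
	return rte_eth_dev_attach_secondary(name);
}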
Fixes: 7f95f78a8aea ("ethdev: clear data when allocating device")
Cc: stable@dpdk.org
Signed-off-by: Ami Sabo <amis@radware.com>
This patch changes the prototype of the callback function
(rte_intr_callback_fn) by removing the unnecessary parameter.
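A sketch of the resulting prototype, assuming the removed parameter is the interrupt handle (the remaining callers only need the user argument):

/* New prototype: the interrupt handle parameter is dropped, only the
 * user-supplied argument remains (previously:
 * void (*)(struct rte_intr_handle *intr_handle, void *cb_arg)). */
typedef void (*rte_intr_callback_fn)(void *cb_arg);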
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Destroying a tunnel filter rule fails if the action of
the tunnel filter is VF; the root cause is that the wrong VSI
is used.
Fixes: c50474f31efe ("net/i40e: support tunnel filter to VF")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
build error:
.../event/sw/sw_evdev_worker.c: In function ‘sw_event_release’:
.../event/sw/sw_evdev_worker.c:52:3: error: unknown field ‘op’ specified
in initializer
Fixed by updating the struct initialization.
Fixes: 656af9180014 ("event/sw: add worker core functions")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Disable for gcc < 4.7 and icc <= 14.0
The PMD uses some compiler builtins and new compiler options. Tested with
gcc 4.5.1, and the following were not supported:
option:
-Ofast
macros:
_Static_assert
__ORDER_LITTLE_ENDIAN__
__ORDER_BIG_ENDIAN__
__BYTE_ORDER__
__atomic_fetch_add
__ATOMIC_ACQUIRE
__atomic_load_n
__ATOMIC_RELAXED
__atomic_store_n
__ATOMIC_RELEASE
It is not easy to fix all of these in the PMD, so the PMD is disabled for older compilers.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch deprecates the following functions in 17.05,
which will be removed in 17.08.
- rte_crpytodev_scheduler_mode_get()
- rte_crpytodev_scheduler_mode_set()
These two new functions replace them, fixing the typo in their names (a usage sketch follows the list).
- rte_cryptodev_scheduler_mode_get()
- rte_cryptodev_scheduler_mode_set()
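A minimal usage sketch of the correctly spelled functions (the scheduler device id and mode value are illustrative):

#include <rte_cryptodev_scheduler.h>

/* Sketch: query and set the scheduling mode through the renamed API. */
static void
example_set_mode(uint8_t scheduler_id)
{
	enum rte_cryptodev_scheduler_mode mode;

	mode = rte_cryptodev_scheduler_mode_get(scheduler_id);
	if (mode != CDEV_SCHED_MODE_ROUNDROBIN)
		rte_cryptodev_scheduler_mode_set(scheduler_id,
				CDEV_SCHED_MODE_ROUNDROBIN);
}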
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch fixes the incorrect slave session free operation.
Fixes: 57523e682bb7 ("crypto/scheduler: register operation functions")
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Refactor capabilities data structures to facilitate
defining different capability sets for different devices
without duplication of data.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
This patch adds an API to get the number and list of run-time slaves
of a cryptodev scheduler PMD.
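A hedged usage sketch of the new call, assuming the RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES bound from the scheduler header:

#include <rte_cryptodev_scheduler.h>

/* Sketch: retrieve how many slaves are currently attached and their ids.
 * The call is expected to return the slave count and, when the array is
 * non-NULL, fill it with the slave device ids. */
static int
example_dump_slaves(uint8_t scheduler_id)
{
	uint8_t slaves[RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES];

	return rte_cryptodev_scheduler_slaves_get(scheduler_id, slaves);
}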
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch fixes a segmentation fault that may occur in case
of wrong parameters being provided to the cryptographic
session. Unused fields which would cause a null dereference
are removed.
Fixes: 1703e94ac5ce ("qat: add driver for QuickAssist devices")
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
DOCSIS BPI mode is handled in the QAT PMD by sending full blocks to the
hardware device for encryption and using OpenSSL libcrypto for pre- or
post-processing of any partial blocks.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Adds support in the OpenSSL PMD for an algorithm following the DOCSIS
specification, which combines DES-CBC for full DES blocks (8 bytes)
and DES-CFB for the last runt block (less than 8 bytes).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Tested-by: Yang Gang <gangx.yang@intel.com>
The underlying IPSec Multi-buffer library implements
the DOCSIS specification, so this commit adds support
for this new feature, which combines AES-CBC for full
AES blocks (16 bytes) and AES-CFB for the last runt block
(less than 16 bytes).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
IPSec Multi-buffer library v0.45 has been released,
which includes, among other features, support for the DOCSIS BPI
specification and AVX512 optimizations.
This new version added const qualifiers to some of the function
prototypes, so the PMD has been updated to include these changes.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Move the crypto processing from the enqueue burst to the dequeue burst,
to remove the requirement to continually call the
rte_cryptodev_enqueue_burst() function to guarantee that all operations get
flushed from the multi-buffer manager's buffers.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
There is a bug when more crypto ops are enqueued than dequeued.
The return value is not checked when trying to enqueue the
processed crypto op into the internal ring; if the ring is full,
this results in crypto ops and mbufs being leaked.
The issue is more obvious with different cores doing enqueue/dequeue.
This patch moves the crypto operation to the dequeue function which
fixes the above issue without having to check for the number of free
entries in the ring.
Fixes: eec136f3c54f ("aesni_gcm: add driver for AES-GCM crypto operations")
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Fail-over mode works with 2 slaves, a primary slave and a secondary slave.
In this mode, the scheduler enqueues the incoming crypto op burst
to the primary slave. When one or more crypto ops fail to be
enqueued, they are then enqueued to the secondary slave.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Packet-size based distribution mode is a scheduling mode that works with 2
slaves, a primary slave and a secondary slave, and distributes the enqueued
crypto ops to them based on their data lengths. A crypto op is
distributed to the primary slave if its data length is equal to or bigger
than the designated threshold; otherwise it is handled by the
secondary slave.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Since the device configuration API has been updated, the crypto scheduler
PMD can make use of it to configure its slaves automatically with the
same configuration it received. As the slaves originally had to be
manually configured one by one, this patch should help reduce the coding
complexity.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch changes the device configuration function prototype in
rte_cryptodev_ops, and updates all cryptodev PMDs for this change.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch optimizes crypto op ordering by switching the
ordering method from the rte_reorder library to rte_ring,
avoiding unnecessary crypto op store and recover costs.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch changes the enqueue and dequeue methods of the cryptodev
scheduler PMD. Originally a 2-layer function call was carried out
upon enqueuing or dequeuing a burst of crypto ops. This patch
removes one layer to improve performance.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The macro RTE_PMD_TAP_MAX_QUEUES was defined twice.
On machines with kernel < 3.8, IFF_MULTI_QUEUE didn't exist, and thus
both definitions used different values.
Fixes: cf5643661161 ("net/tap: move private elements to external header")
Signed-off-by: Pascal Mazon <pascal.mazon@6wind.com>
Since patch "mbuf: structure reorganization" the compiler complains
sometimes (in some conditions):
.../drivers/net/mlx5/mlx5_rxtx.c: In function ‘mlx5_rx_burst’:
.../drivers/net/mlx5/mlx5_rxtx.c:2082:17: error: ‘len’ may be used
uninitialized in this function [-Werror=maybe-uninitialized]
len is in fact initialised when handling the first segment of a received
packet, but this remains hard for the compiler to determine.
Fixes: 9964b965ad69 ("net/mlx5: re-add Rx scatter support")
Cc: stable@dpdk.org
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
build error:
.../drivers/event/octeontx/ssovf_worker.c(212):
error #592: variable "get_work0" is used before its value is set
RTE_SET_USED(get_work0);
^
.../drivers/event/octeontx/ssovf_worker.c(213):
error #592: variable "get_work1" is used before its value is set
RTE_SET_USED(get_work1);
^
For x86 these variables are set but not used; move the macros below
where the values are assigned.
Fixes: f61808eaa9ad ("event/octeontx: add start function")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
To avoid multiple stores on the fast path, Ethernet drivers
aggregate the writes to data_off, refcnt, nb_segs and port
into a uint64_t and write the data in one shot
with a uint64_t * at the &mbuf->rearm_data address.
Some non-IA platforms have a store operation overhead
if the store address is not naturally aligned. This patch
fixes the performance issue on those targets.
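A hedged sketch of the aggregated write described above; the pre-built initializer value and its exact field packing are driver- and target-specific:

#include <rte_mbuf.h>

/* Sketch: one 64-bit store covers data_off, refcnt, nb_segs and port.
 * "mbuf_initializer" stands for a value pre-computed at queue setup. */
static inline void
example_rearm(struct rte_mbuf *mb, uint64_t mbuf_initializer)
{
	*(uint64_t *)&mb->rearm_data = mbuf_initializer;
}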
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Now that the m->next pointer and m->nb_segs are expected to be set (to
NULL and 1 respectively) after a mempool_get(), we can avoid writing them
in the Rx functions of drivers.
Only some drivers are patched; this is not an exhaustive patch, but it
gives the idea to do the same in other drivers.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Set the value of m->refcnt to 1, m->nb_segs to 1 and m->next
to NULL when the mbuf is stored inside the mempool (unused).
This is done in rte_pktmbuf_prefree_seg(), before freeing or
recycling a mbuf.
Before this patch, the value of m->refcnt was expected to be 0
while in pool.
The objectives are:
- to avoid drivers having to set m->next to NULL in the early Rx path, since
this field is in the second 64B of the mbuf and its access could
trigger a cache miss
- rationalize the behavior of raw_alloc/raw_free: one is now the
symmetric of the other, and refcnt is never changed in these functions.
To optimize the freeing of the segments, we try to only update
m->refcnt, m->next, and m->nb_segs when it's required (idea from
Konstantin Ananyev <konstantin.ananyev@intel.com>).
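For context, a hedged sketch of how a driver Tx free path typically uses the helper after this change (bulk pool put paths are simplified to a single put):

#include <rte_mbuf.h>

/* Sketch: rte_pktmbuf_prefree_seg() restores next/nb_segs/refcnt to their
 * "in pool" values and returns the segment only when it can actually be
 * recycled, so the extra stores are paid only when required. */
static inline void
example_tx_free_seg(struct rte_mbuf *m)
{
	m = rte_pktmbuf_prefree_seg(m);
	if (m != NULL)
		rte_mempool_put(m->pool, m);
}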
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Rename __rte_mbuf_raw_free() as rte_mbuf_raw_free() and make
it public. The old function is kept for compatibility but is marked as
deprecated.
The next commit changes the behavior of rte_mbuf_raw_free() to
make it more consistent with rte_mbuf_raw_alloc().
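A minimal sketch of the now-public, symmetric pair (the mempool is assumed to be an mbuf pool created elsewhere):

#include <rte_mbuf.h>

/* Sketch: rte_mbuf_raw_free() is the public counterpart of
 * rte_mbuf_raw_alloc(); neither touches the reference counter. */
static void
example_raw_cycle(struct rte_mempool *mp)
{
	struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);

	if (m != NULL)
		rte_mbuf_raw_free(m);
}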
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Document the function and make it public, since it is used at several
places in the drivers. The old one is marked as deprecated.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
If the device is configured with the RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
option, then use a different fast path dequeue handler that waits for the
requested number of nanoseconds if an event is not available.
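A hedged sketch of how an application uses this; device/port ids and the nanosecond value are illustrative:

#include <rte_eventdev.h>

/* Sketch: convert a wait time in ns to device ticks and pass it per
 * dequeue; only honoured when the device was configured with
 * RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT. */
static uint16_t
example_dequeue(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)
{
	uint64_t ticks = 0;

	/* 100 us wait, purely as an example value */
	rte_event_dequeue_timeout_ticks(dev_id, 100 * 1000, &ticks);

	return rte_event_dequeue_burst(dev_id, port_id, ev, 1, ticks);
}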
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
The SSO co-processor runs at a different frequency than the core clock.
Request the PF to convert the ns value to the SSO get_work timeout period.
On dequeue, if the device is configured with the
RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT option, then
use a different fast path dequeue handler that waits for the requested
number of nanoseconds if an event is not available.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Gage Eads <gage.eads@intel.com>