The VNI of VXLAN is parsed wrongly. The root cause is that
the VNI array in the VXLAN item also uses network byte ordering.
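For illustration, a minimal sketch of the corrected copy, with hypothetical
field names (the rte_flow VXLAN item carries the VNI as a 3-byte array that
is already in network byte order):

    /* Copy the VNI bytes as-is; they are already in network byte order,
     * so no byte swap must be applied (destination field hypothetical). */
    rte_memcpy(rule->input.vni, vxlan_spec->vni, sizeof(vxlan_spec->vni));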
Fixes: 11777435c7 ("net/ixgbe: parse flow director filter")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Add more checks on the TCI mask in the VLAN and VXLAN parsers
of the fdir filter rule pattern parser. Without these checks,
errors may occur later in the fdir configuration check.
Fixes: 11777435c7 ("net/ixgbe: parse flow director filter")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
If a packet send is attempted with a packet larger than the NIC
is capable of processing (9208 bytes), it will be dropped with no
completion descriptor returned and no completion index update, which
will lead to an mbuf leak and an eventual hang.
Drop and count oversized Tx packets in the Tx burst function, freeing
the mbuf without sending it to the NIC.
Since the maximum Rx and Tx packet sizes are different on enic
and both are now being used, split the define ENIC_DEFAULT_MAX_PKT_SIZE
into two defines, one for Rx and one for Tx.
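A minimal sketch of the drop logic described above, with hypothetical
names for the Tx size limit and the drop counter:

    /* In the Tx burst loop: drop oversized packets instead of posting
     * them to the NIC, freeing the mbuf to avoid the leak. */
    if (unlikely(rte_pktmbuf_pkt_len(tx_pkt) > ENIC_TX_MAX_PKT_SIZE)) {
        rte_pktmbuf_free(tx_pkt);  /* no completion will ever come back */
        tx_oversized++;            /* hypothetical drop counter */
        continue;
    }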
Fixes: fefed3d1e6 ("enic: new driver")
Cc: stable@dpdk.org
Signed-off-by: John Daley <johndale@cisco.com>
The probe parses the user-defined iface name. Let's use that value.
Signed-off-by: Pascal Mazon <pascal.mazon@6wind.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
There's no point in having a different internal MAC address than the one
provided by the kernel when creating the netdevice.
Signed-off-by: Pascal Mazon <pascal.mazon@6wind.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
pmd->fds[0], pmd->rxq[0] and pmd->txq[0] are set a couple of lines after
the for loop that initializes them to -1.
Signed-off-by: Pascal Mazon <pascal.mazon@6wind.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
dev->data->name contains the device name, e.g. "net_tap0".
dev->data->dev_private->name contains the actual iface name,
e.g. "dtap0".
In any case, the name must be consistent with the tun_alloc() call in
eth_dev_tap_create().
Signed-off-by: Pascal Mazon <pascal.mazon@6wind.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
For some packet sizes, the number of bytes copied into the work queue
element could be greater than the available inline size. In such a
situation, one more work queue element could be consumed than it should.
Fixes: 0e8679fcdd ("net/mlx5: fix inline logic")
Cc: stable@dpdk.org
Reported-by: Elad Persiko <eladpe@mellanox.com>
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Fixes an issue which may occur with the inline feature activated and a
packet greater than the requested max_inline.
In such a situation, more work request elements can be consumed, and in the
worst case some elements still being handled by the NIC can be overwritten;
this can result in sending garbage on the network or putting the work queue
in error.
Fixes: 2a66cf3789 ("net/mlx5: support inline send")
Cc: stable@dpdk.org
Reported-by: Elad Persiko <eladpe@mellanox.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
The first two bytes of the Ethernet header were written twice at the same place.
Fixes: b8fe952ec5 ("net/mlx5: prepare Tx vectorization")
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
The patch fixes the sfc_set_mc_addr_list() behaviour to accept an empty
multicast address list, thus making it possible to remove previously
inserted multicast addresses.
Fixes: 0fa0070e43 ("net/sfc: support multicast addresses list controls")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Adding a flow when the port is stopped results in an inconsistent situation
where the queue can receive traffic when it should not.
Record new rules and apply them as soon as the port is started.
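A sketch of the idea, with hypothetical structure and helper names:

    /* Always record the rule; program the hardware only when the port is
     * started. dev_start() later walks the list and applies pending rules. */
    TAILQ_INSERT_TAIL(&priv->flows, flow, next);
    if (dev->data->dev_started)
        return priv_flow_enable(priv, flow);  /* hypothetical helper */
    return 0;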
Fixes: 2097d0d1e2 ("net/mlx5: support basic flow items and actions")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
A configuration structure for the MARK action must always be specified.
Fixes: ea3bc3b1df ("net/mlx5: support mark flow action")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Default masks were introduced in the API after its implementation in this
PMD.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Querying the link status can return an inconsistent state, e.g. the
port reporting a speed although it is down.
For this case another query is scheduled.
A race condition can occur between the scheduled query and the link
status interrupt handlers.
When the scheduled query bypasses the interrupt handlers, the link status
will be stuck in an inconsistent state.
This patch addresses the race condition by not blocking link status
queries when a delayed query is used.
Fixes: 198a3c339a ("mlx5: handle link status interrupts")
Cc: stable@dpdk.org
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
The chosen fake capability (10G) is consistent with the reported
link speed in virtio_dev_link_update():
link.link_speed = SPEED_10G;
The feature is not marked in doc/guides/nics/features/virtio.ini
because it is only a fake value.
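For illustration, the advertised capability in the device info callback
would be (a sketch using the standard rte_ethdev field):

    /* Fake a fixed 10G capability, consistent with the link speed
     * reported by virtio_dev_link_update(). */
    dev_info->speed_capa = ETH_LINK_SPEED_10G;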
Signed-off-by: Ido Barnea <ibarnea@cisco.com>
[Thomas: comments added]
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Rx and Tx queues share the common tap file descriptor but save this
value separately.
Setting up an Rx/Tx queue sets up both queues, while release_queue closes
the tap file but updates the file descriptor only for that queue.
This leaves the other queue's file descriptor invalid.
As a workaround, prevent the release_queue callback from being called by
default. This is done by separating the Rx/Tx setup functions so that each
sets up only its own queue, which prevents rte_eth_rx/tx_queue_setup()
from calling release_queue before setup_queue.
Fixes: 02f96a0a82 ("net/tap: add TUN/TAP device PMD")
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
The size of the mask is wrongly computed, making the validation process
verify only the first 4 bytes of the layer.
Fixes: 2097d0d1e2 ("net/mlx5: support basic flow items and actions")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
The TCI field is read from the wrong place due to an invalid cast. Moreover,
there is no need to limit matching to the VID, since the PCP and DEI bits
can be matched as well.
Fixes: 12475fb203 ("net/mlx5: support VLAN flow item")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Fixes: a541407fe4 ("net/i40e: set VF MAC anti-spoofing from PF")
Fixes: 4cbc41efcb ("net/i40e: set VF VLAN anti-spoofing from PF")
Fixes: c0ec14757c ("net/i40e: set VF unicast promiscuous mode from PF")
Fixes: ae57070ca8 ("net/i40e: set VF multicast promiscuous mode from PF")
Fixes: 83bb95e3fe ("net/i40e: set VF VLAN insertion from PF")
Fixes: 61fff9b4c6 ("net/i40e: set VF broadcast mode from PF")
Fixes: c33abbc144 ("net/i40e: set VF VLAN tag from PF")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Change the order of releasing the VSIs:
release the VMDq VSIs first, then release the main VSI.
Fixes: 3cb446b4ae ("i40e: free vmdq vsi when closing")
Cc: stable@dpdk.org
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
The mac_addr_add callback function was simply replacing the primary MAC
address instead of adding a new one, and the mac_addr_remove callback would
only remove the primary MAC from the adapter. Fix the functions to add or
remove new addresses, and allow up to 64 MAC addresses per port.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Nelson Escobar <neescoba@cisco.com>
The current ixgbevf driver reports max_rx_pktlen = 15872, but in fact the
PF supports 15872-byte jumbo frames while the VF only supports 9728-byte
jumbo frames. If a VF running the DPDK driver sets frame_size > 9728, a PF
running the kernel ixgbe driver will report an error and mark the VF as
failed.
This patch fixes the DPDK ixgbevf driver to report the correct jumbo frame
size of the VF.
More datasheet references from Wei Dai:
In the 82599 datasheet, there is an annotation in chapter 1.3 Features
Summary (page 29):
The 82599 supports full-size 15.5 KB (15872-byte) jumbo packets while
in a basic mode of operation. When DCB mode is enabled,
or security engines enabled or virtualization is enabled, the 82599
supports 9.5 KB (9728-byte) jumbo packets.
In the x540 datasheet, there is also an annotation in chapter 1.3
Features Summary (page 13):
The X540 and 82599 support full-size 15.5 KB jumbo packets while in a
basic mode of operation. When DCB mode is enabled,
or security engines enabled, or virtualization is enabled, or OS2BMC is
enabled, then the X540 supports 9.5 KB jumbo packets.
Packets to/from MC longer than 2KB are filtered out.
In the x550 datasheet, there is likewise an annotation in chapter 1.4
Feature Summary (page 23):
All the products support full-size 15.5 KB jumbo packets while in a
basic mode of operation. When DCB mode is enabled, or security
engines enabled, or virtualization is enabled, or OS2BMC is enabled,
then only 9.5 KB jumbo packets are supported. Packets to/
from the MC longer than 2 KB are filtered out.
Fixes: 2144f6630f ("ixgbe: add redirection table size in device info")
Cc: stable@dpdk.org
Signed-off-by: Yi Zhang <zhang.yi75@zte.com.cn>
Acked-by: Wei Dai <wei.dai@intel.com>
When no error is reported in the Rx descriptor, we should set the
CKSUM_GOOD flag before returning.
Fixes: 9966a00a06 ("net/i40e: enable bad checksum flags in vector Rx")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
The data value could have been garbage if the VPD access timed out because
the VPD read request could not be issued.
Found with clang static analysis:
drivers/net/cxgbe/base/t4_hw.c:1577:22:
warning: The left operand of '&' is a garbage value
} while ((stats_reg & 0x1) && --max_poll);
~~~~~~~~~ ^
Fixes: fe0bd9ee5d ("net/cxgbe: support EEPROM access")
Cc: stable@dpdk.org
Signed-off-by: Emmanuel Roullit <emmanuel.roullit@gmail.com>
While ENA can handle checksum calculation in almost all cases, it cannot
do so when the DF bit in the IPv4 header is not set (DF=0) and TSO is
requested. In that situation the pseudo header must be prepared manually.
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Add checks to the rte_pmd_ixgbe_macsec_* APIs to ensure that the
port is an ixgbe port.
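A sketch of the kind of guard added at the top of each API; is_ixgbe_port()
is a hypothetical helper for the driver check:

    /* Validate the port and make sure it is driven by ixgbe before
     * touching any ixgbe-specific registers. */
    struct rte_eth_dev *dev;

    RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
    dev = &rte_eth_devices[port];
    if (!is_ixgbe_port(dev))
        return -ENOTSUP;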
Fixes: b35d309710 ("net/ixgbe: add MACsec offload")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Ignore the error IXGBE_ERR_SFP_NOT_PRESENT when the SFP is not present.
If it is not ignored, testpmd will fail during the NIC initialization
process.
The ixgbe kernel driver ignores this error and works well, so DPDK
does the same.
Signed-off-by: Wei Dai <wei.dai@intel.com>
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Tested-by: Yuan Peng <yuan.peng@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Check if promiscuous mode was set when setting allmulti mode and vice versa;
BNX2X_RX_MODE_ALLMULTI_PROMISC was introduced for this purpose. Without this
check, the filter configuration gets overwritten.
Fixes: 540a211084 ("bnx2x: driver core")
Fixes: 5dbc53d7e5 ("net/bnx2x: restrict Rx mask flags sent to the PF")
Cc: stable@dpdk.org
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
filter_type is not set when removing all macvlan filters, which causes
an error when the AQ command is sent to the HW.
This patch fixes the issue.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Cc: stable@dpdk.org
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
When a VF sends a request to remove a MAC address, the PF host checks
that it is a non-zero unicast address; as a result, when the VF removes
a multicast address, an error is reported.
This patch fixes the issue.
Fixes: ec852c94af ("net/i40e: enhance sanity check of MAC")
Cc: stable@dpdk.org
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
In case of an error, the argument list is not freed.
Fixes: e72dd09b61 ("net/mlx5: add support for configuration through kvargs")
Cc: stable@dpdk.org
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
The number of Tx queues requested by the user must not be overridden;
instead, the limits imposed by TSO must be applied to the advertised
maximum.
Fixes: fec33d5bb3 ("net/sfc: support firmware-assisted TSO")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@solarflare.com>
The previous version relied on the DMA sync for device and the PIO write
barrier being done as a pair. Now each does its own job.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Remove the RTE_LIBRTE_SFC_EFX_TSO config option since it is not
required any more:
- the unreasonable limit on the number of Tx queues when TSO is not
actually required should be solved using a per-device parameter
- the performance difference with and without TSO compiled in is small
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Alarms are not supported on FreeBSD.
The application must poll the link status periodically itself, using
rte_eth_link_get_nowait(), to avoid management event queue overflow.
Fixes: 2de39f4e13 ("net/sfc: periodic management EVQ polling using alarm")
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@solarflare.com>
When the WQ wraps around, the condition is wrongly checked when resetting
the pointer: it should be compared against the end of the queue, not the
beginning. And this isn't even needed when the length of the data being
copied crosses the boundary.
Fixes: fdcb0f5305 ("net/mlx5: use work queue buffer as a raw buffer")
Cc: stable@dpdk.org
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The size of the Rx RSS indirection table was limited to 256, but this limit
is no longer required for Mellanox NICs. However, librte_ether still limits
the size to 512.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
On receiving a compressed session of Rx completions, prefetch all entries
to be invalidated. Also, invalidate consumed completions every 8
mini-completions instead of waiting until the last entry is consumed. This
helps reduce jitter in rx_burst.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
When no error is reported in the Rx descriptor, we should set the
CKSUM_GOOD flag before returning.
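A sketch of the intended behaviour, with a hypothetical error mask name:

    /* No error bits set in the descriptor: explicitly mark both
     * checksums as good instead of leaving the flags untouched. */
    if (likely((error_bits & I40E_RX_ERROR_MASK) == 0)) {
        mb->ol_flags |= PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD;
        return;
    }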
Fixes: b704f9071b ("net/i40e: implement new Rx checksum flag")
Cc: stable@dpdk.org
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
While handling the link status change (LSC) interrupt, all interrupts are
blocked until the delayed interrupt handler finishes.
The wait duration is at least one second, and this may cause timeouts in
the VF-to-PF mailbox.
Make sure only the LSC interrupt is blocked while waiting for the delayed
interrupt handler to finish.
Fixes: 0a45657a67 ("pci: rework interrupt handling")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
efx_phy_adv_cap_set() sets all advertised phy capabilities, including
pause capabilities, which are also configured using efx_mac_fcntl_set().
If we set speed and autonegotiation capabilities only, we should
preserve the already configured pause capabilities.
Fixes: d23f3a89ab ("net/sfc: support link speed and duplex settings")
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Fixes: 886f8d8a05 ("net/sfc: retrieve link info")
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
Fixes: 886f8d8a05 ("net/sfc: retrieve link info")
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
In fact, efx_port_poll() always initializes it, but this is not an
explicitly documented feature of the API. Moreover, the API
annotation suggests that the return code should be checked.
Fixes: 886f8d8a05 ("net/sfc: retrieve link info")
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andrew Lee <alee@solarflare.com>
When any layout is used, the header is stored in the headroom of the mbuf.
The mbuf is allocated and filled by the user, meaning there is no guarantee
the header is all zero in the non-TSO case. Therefore, we have to do the
reset ourselves:
memset(hdr, 0, head_size);
The memset has two impacts on performance:
- the memset may not be inlined, which is a bit costly.
- more importantly, it touches the mbuf, which can introduce the severe
cache issues described in the former patch.
Similarly, we can apply the same trick: reset only when necessary, skipping
the write when the corresponding field is already 0, which is likely the
case for a simple L2 forward workload. This can boost performance by up to
20+% in micro benchmarking.
Cc: stable@dpdk.org
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
TSO is now enabled, but it's not actually being used by default in a
simple L2 forward mode. In such a case, we have to zero the virtio net
headers to inform the vhost backend that no offload is being used:
hdr->csum_start = 0;
hdr->csum_offset = 0;
hdr->flags = 0;
hdr->gso_type = 0;
hdr->gso_size = 0;
hdr->hdr_len = 0;
Such writes can be very costly; they introduce severe cache issues:
the above operations incur a cache write for each packet, which
stalls the read operation from the vhost backend.
Given that the virtio net header is initialized to zero at PMD driver
init stage, these costly writes are unnecessary and can be avoided:
if (hdr->csum_start != 0)
hdr->csum_start = 0;
And that's what the macro ASSIGN_UNLESS_EQUAL does. With this, the
performance drop introduced by enabling TSO is recovered: it can
be up to 20% in micro benchmarking.
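A plausible form of the macro, generalizing the csum_start example above
(writing only when the value differs avoids dirtying the cache line in the
common all-zero case):

    #define ASSIGN_UNLESS_EQUAL(var, val) do {  \
        if ((var) != (val))                     \
            (var) = (val);                      \
    } while (0)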
Fixes: 58169a9c81 ("net/virtio: support Tx checksum offload")
Fixes: 696573046e ("net/virtio: support TSO")
Cc: stable@dpdk.org
Cc: Olivier Matz <olivier.matz@6wind.com>
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
The commit aed0b12930 ("net/vhost: fix socket file deleted on stop")
moved rte_vhost_driver_register and rte_vhost_driver_unregister from
dev_start() and dev_stop() into the driver's probe() and remove().
Apps like testpmd that use the vhost PMD in server mode usually call
dev_stop() and dev_close() when quitting, instead of the driver-specific
remove(), so those unix socket files never get removed.
Semantically, device-specific things should be put into device-specific
APIs. Fix this issue by moving rte_vhost_driver_unregister, plus other
structure frees, into dev_close().
Fixes: aed0b12930 ("net/vhost: fix socket file deleted on stop")
Cc: stable@dpdk.org
Reported-by: Lei Yao <lei.a.yao@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The value returned from malloc is not checked for errors before being used.
This patch fixes the following Coverity issue.
static struct vhost_memory_kernel *
prepare_vhost_memory_kernel(void)
{
...
vm = malloc(sizeof(struct vhost_memory_kernel) +
max_regions *
sizeof(struct vhost_memory_region));
...
>>> CID 140744: (NULL_RETURNS)
>>> Dereferencing a null pointer "vm".
mr = &vm->regions[k++];
Coverity issue: 140744
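The fix amounts to bailing out on allocation failure before the pointer is
used; a sketch:

    vm = malloc(sizeof(struct vhost_memory_kernel) +
                max_regions * sizeof(struct vhost_memory_region));
    if (vm == NULL)
        return NULL;  /* fail gracefully instead of dereferencing NULL */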
Fixes: e3b434818b ("net/virtio-user: support kernel vhost")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The vtpci_ops assignment needs 'hw->port_id' as an input parameter. That
is, we should set 'hw->port_id' first and then do the vtpci_ops assignment,
but the code does the reverse. This results in a crash when more than one
virtio device is used, because we keep assigning the proper vtpci_ops to
virtio_hw_internal[0]->vtpci_ops, leaving the pointers for the other ports
NULL.
Reversing the order fixes this issue.
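A sketch of the corrected ordering (the ops-table assignment form is
illustrative):

    /* Set the port id first; the per-port ops slot is indexed by it. */
    hw->port_id = eth_dev->data->port_id;
    virtio_hw_internal[hw->port_id].vtpci_ops = &legacy_ops;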
Fixes: 9470427c88 ("net/virtio: do not store PCI device pointer at shared memory")
Cc: stable@dpdk.org
Reported-by: Lei Yao <lei.a.yao@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Adds a Makefile for the scheduler cryptodev PMD and updates the existing
Makefiles. Unlike other cryptodev PMDs, the scheduler PMD
must be built as a shared library.
Adds the scheduler PMD enable and debug flags to config/common_base.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Implements all standard operations required for a cryptodev
and registers them in the cryptodev operation function pointer table.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds the crypto scheduler PMD's probe and remove functions and the device's
enqueue and dequeue burst functions. The cryptodev scheduler PMD is
then registered at the end.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Implements the round-robin scheduling mode and registers it in the
cryptodev scheduler ops structure. This mode enqueues a burst of
operations to one of its slaves and moves to the next slave for the
following burst; the same procedure is applied when dequeuing operations.
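A minimal sketch of the round-robin dispatch, using hypothetical context
fields around the real rte_cryptodev burst API:

    #include <rte_cryptodev.h>

    /* Hypothetical per-queue-pair context for a two-slave setup. */
    struct rr_qp_ctx {
        uint8_t slave_ids[2];
        unsigned int nb_slaves;
        unsigned int last;      /* slave that served the previous burst */
    };

    static uint16_t
    rr_enqueue_burst(struct rr_qp_ctx *ctx, uint16_t qp_id,
                     struct rte_crypto_op **ops, uint16_t nb_ops)
    {
        /* Hand the whole burst to one slave... */
        uint16_t n = rte_cryptodev_enqueue_burst(ctx->slave_ids[ctx->last],
                                                 qp_id, ops, nb_ops);
        /* ...then rotate so the next burst goes to the next slave. */
        ctx->last = (ctx->last + 1) % ctx->nb_slaves;
        return n;
    }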
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds the implementations of the APIs for scheduler cryptodev PMD.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds a number of internal structures for the cryptodev scheduler PMD. The
structures include the scheduler context, slave, queue pair context,
and session.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Adds APIs and function prototypes for the scheduler PMD to perform extra
operations other than standard cryptodev APIs.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This makes struct rte_cryptodev independent of struct rte_pci_device by
replacing it with a pointer to the generic struct rte_device.
This is in line with the recent changes in ethdev.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: John Griffin <john.griffin@intel.com>
Reviewed-by: Shreyansh Jain <shreyansh.jain@nxp.com>
IFF_MULTI_QUEUE does not exist in older kernels:
drivers/net/tap/rte_eth_tap.c:143:19: error: ‘IFF_MULTI_QUEUE’ undeclared
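A sketch of the compile-time guard used to keep the build working on such
kernels:

    /* Request multi-queue only when the kernel headers define it. */
    #ifdef IFF_MULTI_QUEUE
        ifr.ifr_flags |= IFF_MULTI_QUEUE;
    #endif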
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add two new feature flags:
* RTE_CRYPTODEV_FF_CPU_NEON
represents ARM NEON (TM) instructions
* RTE_CRYPTODEV_FF_CPU_ARM_CE
represents ARM crypto extensions
Add them to the cryptodev library, the documentation and the relevant
PMD driver for ARMv8.
Signed-off-by: Zbigniew Bodek <zbigniew.bodek@caviumnetworks.com>
This patch introduces a crypto poll mode driver using the ARMv8
cryptographic extensions. CPU compatibility with this driver is detected
at run time, and the virtual crypto device will not be created if the CPU
doesn't provide AES, SHA1, SHA2 and NEON.
This PMD is optimized to provide a performance boost for chained crypto
operations, such as encryption + HMAC generation or decryption + HMAC
validation. In particular, cipher-only or hash-only operations are not
provided.
The driver currently supports AES-128-CBC in combination with SHA256 HMAC
and SHA1 HMAC, and relies on the external armv8_crypto library:
https://github.com/caviumnetworks/armv8_crypto
The ARMv8 crypto PMD is built when compiling for ARM64 with the
CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO option enabled in the configuration
file. The ARMV8_CRYPTO_LIB_PATH environment variable must point to the
appropriate library directory.
Signed-off-by: Zbigniew Bodek <zbigniew.bodek@caviumnetworks.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The current cryptodev AES-NI GCM PMD is implemented using the Multi-Buffer
Crypto library. This patch reimplements the device using the ISA-L Crypto
library: https://github.com/01org/isa-l_crypto.
The migration entailed additional support for:
* GMAC algorithm.
* 256-bit cipher keys.
* Session-less mode.
* Out-of-place processing.
* Scatter-gather support for chained mbufs (out-of-place only; the
destination mbuf must be contiguous).
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch adds a user-defined name initialization parameter to the
cryptodev library.
Originally, for a software cryptodev PMD, the vdev name parameter is
treated as the driver identifier, and a unique name is created
automatically for each device, which is not necessarily the same as the
vdev parameter.
This patch allows the user to either choose a unique name for his software
cryptodev or, by default, let the system create one. This should help the
user manage the created cryptodevs easily.
Examples:
CLI command fragment 1: --vdev "crypto_aesni_gcm_pmd"
The above command results in creating an AESNI-GCM PMD named
"crypto_aesni_gcm_X", where the postfix X is a number assigned by the
system, starting from 0. This fragment can be placed in the same CLI
command multiple times, with the postfix incremented by one for each new
device.
CLI command fragment 2: --vdev "crypto_aesni_gcm_pmd,name=gcm1"
The above command results in creating an AESNI-GCM PMD named "gcm1". This
fragment can be placed in the same CLI command multiple times, as long as
each instance has a unique name value.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch introduces the RTE_CRYPTODEV_FF_MBUF_SCATTER_GATHER feature
flag, indicating that the selected crypto device supports segmented mbufs
natively, so they don't need to be coalesced before the crypto operation.
Since using segmented buffers with crypto devices that don't support them
natively may have unpredictable results, an additional check is made in
debug compilation for such PMDs.
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The EVP_CIPHER_CTX_set_padding() function always returns 1, so the check is
unneeded.
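The simplified call then reads as follows (sketch, assuming an initialized
cipher context ctx):

    /* Disable PKCS padding on the context; EVP_CIPHER_CTX_set_padding()
     * always returns 1, so no return-value check is needed. */
    EVP_CIPHER_CTX_set_padding(ctx, 0);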
Fixes: d61f70b4c9 ("crypto/libcrypto: add driver for OpenSSL library")
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Tested-by: Zhaoyan Chen <zhaoyan.chen@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch sets the IV size in the QAT PMD to 12 bytes to be
conformant with NIST SP 800-38D.
Fixes: 26c2e4ad5a ("cryptodev: add capabilities discovery")
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
This patch sets the IV size in the AESNI GCM PMD to 12 bytes to be
conformant with NIST SP 800-38D.
Fixes: eec136f3c5 ("aesni_gcm: add driver for AES-GCM crypto operations")
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
This commit fixes the pre-counter block (J0) padding by clearing the
four most significant bytes before setting the initial counter value.
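For reference, SP 800-38D defines the pre-counter block for a 96-bit IV as
J0 = IV || 0^31 || 1; a sketch of its construction:

    uint8_t j0[16];

    memcpy(j0, iv, 12);     /* 96-bit IV occupies the first 12 bytes */
    memset(j0 + 12, 0, 4);  /* clear the counter bytes... */
    j0[15] = 1;             /* ...then set the initial counter value to 1 */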
Fixes: b2bb359747 ("crypto/aesni_gcm: move pre-counter block to driver")
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Release v0.44 of the Intel(R) Multi-Buffer Crypto for IPsec library adds
support for AVX512 instructions. This patch enables the new AVX512
accelerated functions in the aesni_mb_pmd crypto poll mode driver.
This patch set requires the aesni_mb_pmd to be linked against version 0.44
or greater of the Multi-Buffer Crypto for IPsec library.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Update the driver to use the new AESNI Multi-Buffer IPsec library
single-operation functionality (cipher-only and authentication-only).
This patch also adds tests for this new feature.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
When using sessionless crypto operations, the crypto session
is obtained from a pool of sessions when processing the
operation. Once the operation is processed, the session
is put back in the pool. For the AESNI MB PMD, however, the
session was not being saved in the operation, and therefore
it never returned to the session pool.
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Extra bytes were being written at the end of the data when processing
standard OpenSSL cipher encryption. This behaviour is unexpected.
This patch disables the padding feature in the OpenSSL library, which
was causing the problem.
Fixes: d61f70b4c9 ("crypto/libcrypto: add driver for OpenSSL library")
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
In an out-of-place operation, data is DMAed from the source mbuf
to the destination mbuf. To avoid overwriting header data in the
destination mbuf, only the minimal data-set should be DMAed.
Fixes: 39e0bee48e ("crypto/qat: rework request builder for performance")
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
The cryptodev API had specified that if the digest address field was
left empty on an authentication operation, the PMD would assume
the digest was appended to the source or destination data.
This case was not handled at all by most PMDs and was incorrectly handled
by the QAT PMD.
As no bugs were raised, the behaviour is assumed to be unneeded, so this
patch removes it rather than adding handling for the case to all PMDs.
The digest can still be appended to the data, but its
address must now be provided in the op.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix portability
issues across different architectures.
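A sketch of the pattern applied throughout these patches, using the EAL
rte_read32()/rte_write32() accessors:

    #include <rte_io.h>

    /* Illustrative helper contrasting the old and new access patterns. */
    static inline uint32_t
    update_reg(volatile void *addr, uint32_t val)
    {
        /* Before: raw pointer dereference, no portable barrier semantics:
         *     *(volatile uint32_t *)addr = val;
         *     return *(volatile uint32_t *)addr;
         */
        rte_write32(val, addr);   /* write with the proper I/O barrier */
        return rte_read32(addr);  /* read with the proper I/O barrier */
    }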
CC: John Griffin <john.griffin@intel.com>
CC: Fiona Trahe <fiona.trahe@intel.com>
CC: Deepak Kumar Jain <deepak.k.jain@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
CC: Yong Wang <yongwang@vmware.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix portability
issues across different architectures.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
Suggested-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Jan Medala <jan@semihalf.com>
Replace the raw I/O device memory read/write access with eal
abstraction for I/O device memory read/write access to fix
portability issues across different architectures.
CC: Harish Patil <harish.patil@cavium.com>
CC: Rasesh Mody <rasesh.mody@cavium.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Replace the raw I/O device memory read/write access with eal abstraction
for I/O device memory read/write access to fix portability issues across
different architectures.
CC: Stephen Hurd <stephen.hurd@broadcom.com>
CC: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>