When a VF is passed through to a Qemu VM via vfio-pci, the interrupt
setting is not recovered correctly on the host after an FLR in the VM,
as shown below:
in VM guest:
Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
in Host:
Capabilities: [70] MSI-X: Enable- Count=5 Masked-
This is because pci_reset_function() first saves the PCI configuration
space, then performs the FLR, and finally writes the saved configuration
back to restore it. However, not all of those writes reach the host,
because the vfio-pci driver does not allow direct writes to the PCI
MSI-X capability.
To fix this issue, move the interrupt enablement from igb_uio probe time
to the opening of the device file, which also matches the behaviour of
the vfio-pci kernel module.
Fixes: b58eedfc7d ("igb_uio: issue FLR during open and release of device file")
Cc: stable@dpdk.org
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Tested-by: Shijith Thotton <shijith.thotton@caviumnetworks.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
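To illustrate the change above, a minimal sketch of enabling interrupts
from the UIO open callback instead of at probe time (helper names such
as igbuio_pci_enable_interrupts() are placeholders, not taken verbatim
from the patch):

    /* Hypothetical sketch: enable interrupts when the device file is
     * opened rather than at probe time; helper names are placeholders. */
    static int
    igbuio_pci_open(struct uio_info *info, struct inode *inode)
    {
            struct rte_uio_pci_dev *udev = info->priv;
            struct pci_dev *dev = udev->pdev;
            int err;

            /* previously done at probe time */
            pci_set_master(dev);

            /* enable interrupts (MSI-X/MSI/INTx selection elided) */
            err = igbuio_pci_enable_interrupts(udev);
            if (err) {
                    dev_err(&dev->dev, "enable interrupt fails\n");
                    return err;
            }
            return 0;
    }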
In igb_uio, the FLR is now issued when the device file is opened.
i40evf tries to initialize the admin queue at driver probe time, before
the host driver has completed the FLR, which makes the initialization
fail.
This patch adds a check that the VF reset is done before the admin queue
is initialized.
Fixes: b58eedfc7d ("igb_uio: issue FLR during open and release of device file")
Cc: stable@dpdk.org
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
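A rough sketch of such a check, polling the VF reset status register
before the admin queue is set up (the timeout and the exact
register/macro names follow i40e conventions but are assumptions here):

    /* Hypothetical sketch: wait until the host driver reports the VF
     * reset as completed before initializing the admin queue. */
    #define VF_RESET_WAIT_CNT 100

    static int
    i40evf_check_vf_reset_done(struct i40e_hw *hw)
    {
            int i;
            uint32_t reset;

            for (i = 0; i < VF_RESET_WAIT_CNT; i++) {
                    reset = I40E_READ_REG(hw, I40E_VFGEN_RSTAT) &
                            I40E_VFGEN_RSTAT_VFR_STATE_MASK;
                    if (reset == VIRTCHNL_VFR_VFACTIVE ||
                        reset == VIRTCHNL_VFR_COMPLETED)
                            return 0; /* FLR done, admin queue init can proceed */
                    rte_delay_ms(20);
            }
            return -1; /* reset never completed */
    }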
The size of the Rx completion queue should be doubled if compression is
enabled and the non-vectorized Rx path is used.
Fixes: 523f5a7421 ("net/mlx5: fix configuration of Rx CQE compression")
Cc: stable@dpdk.org
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
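In other words, the CQE count used when creating the Rx CQ becomes
roughly the following (a sketch; field and helper names approximate the
mlx5 PMD and may differ):

    /* Hypothetical sketch: with CQE compression on the non-vectorized Rx
     * path, a received packet may consume two CQEs, so double the size. */
    unsigned int cqe_n = desc - 1;

    if (priv->cqe_comp && rxq_check_vec_support(rxq) < 0)
            cqe_n = (desc * 2) - 1; /* double the completion queue size */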
In case of NUMA reallocation, the virtqueue struct is reallocated
on another socket, meaning that its address changes.
In translate_ring_addresses(), the addr pointer was not fetched again
after the reallocation, so it pointed to freed memory.
This patch simply fetches the addr pointer again after the reallocation.
Reported-by: Lei Yao <lei.a.yao@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Lei Yao <lei.a.yao@intel.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
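The essence of the fix, sketched (surrounding translate_ring_addresses()
code elided):

    /* Hypothetical sketch: the virtqueue may have been reallocated on
     * another NUMA node, so refresh every pointer derived from it. */
    dev = numa_realloc(dev, vq_index);
    vq = dev->virtqueue[vq_index];
    addr = &vq->ring_addrs; /* re-fetch: the old addr points to freed memory */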
In case of NUMA reallocation, the virtqueue's IOTLB list is broken:
its head changes, but the first IOTLB entry in the list still points
to the previous head.
Also, in case of reallocation, we want the IOTLB cache mempool to be
allocated on the new socket.
This patch performs a full re-initialization of the IOTLB cache when the
mempool already exists, and calls the IOTLB cache init function when the
virtqueue is reallocated on a new socket.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
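A simplified sketch of the re-initialization path described above
(function names follow the vhost library, but the exact cleanup calls
are an assumption):

    /* Hypothetical sketch: if the IOTLB mempool already exists, the
     * virtqueue was reallocated; flush the cache and recreate the pool
     * on the new socket. */
    int
    vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
    {
            struct vhost_virtqueue *vq = dev->virtqueue[vq_index];

            if (vq->iotlb_pool) {
                    /* Full re-init: drop cached and pending entries, then
                     * free the old pool before recreating it below. */
                    vhost_user_iotlb_cache_remove_all(vq);
                    vhost_user_iotlb_pending_remove_all(vq);
                    rte_mempool_free(vq->iotlb_pool);
            }

            /* ... (re)create vq->iotlb_pool on the virtqueue's current
             * socket and initialize the cache/pending lists and locks ... */
            return 0;
    }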
An optimization was done to take the IOTLB cache lock only once per
packet burst instead of once per IOVA translation.
With this, IOTLB miss requests are sent to Qemu with the lock
held, which can cause a deadlock if the socket buffer is full,
and if Qemu is waiting for an IOTLB update to be done.
Holding the lock is not necessary when sending an IOTLB miss
request, as it is not manipulating the IOTLB cache list, which
the lock protects. Let's just release it while sending the
IOTLB miss.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
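The resulting pattern in the translation path looks roughly like this
(a sketch; the surrounding __vhost_iova_to_vva() context is elided):

    /* Hypothetical sketch: drop the IOTLB read lock while the miss
     * request is sent, so an IOTLB update can be processed even if the
     * socket buffer is full. */
    vhost_user_iotlb_rd_unlock(vq);

    vhost_user_iotlb_pending_insert(vq, iova, perm);
    if (vhost_user_iotlb_miss(dev, iova, perm) < 0)
            RTE_LOG(ERR, VHOST_CONFIG, "IOTLB miss request failed\n");

    vhost_user_iotlb_rd_lock(vq);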
Compiler error:
rte_efd.o: In function `rte_efd_lookup':
rte_efd.c:(.text+0x6d6e): undefined reference to `efd_lookup_internal_avx2'
rte_efd.o: In function `rte_efd_lookup_bulk':
rte_efd.c:(.text+0x87d4): undefined reference to `efd_lookup_internal_avx2'
This can be observed with a compiler that doesn't support AVX2 combined
with a shared build.
Fixes: 86d8989688 ("efd: add AVX2 vector lookup function")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
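One way this class of problem is typically avoided is to reference the
AVX2 symbol only when the compiler can actually emit it, for example
(a sketch, not necessarily the actual fix; the scalar fallback name is a
placeholder):

    /* Hypothetical sketch: guard the AVX2 call with a compile-time flag
     * in addition to the run-time CPU check, so builds with an old
     * compiler never reference efd_lookup_internal_avx2(). */
    #ifdef CC_SUPPORT_AVX2
            if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
                    return efd_lookup_internal_avx2(group_hashes,
                                    group_lookup_table,
                                    hash_val_a, hash_val_b);
    #endif
            return efd_lookup_internal_scalar(group_hashes,
                            group_lookup_table,
                            hash_val_a, hash_val_b);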
Since the port id has changed from uint8_t to uint16_t in the DPDK code,
update the related documentation accordingly.
Fixes: f8244c6399 ("ethdev: increase port id range")
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
The variable "port" should be defined as uint16_t; fix it here.
Fixes: f8244c6399 ("ethdev: increase port id range")
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Some functions were still written against a uint8_t port_id, but the
port_id range has been increased to uint16_t. This patch fixes the
issue.
Fixes: f8244c6399 ("ethdev: increase port id range")
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
MSI masks contain a 1 if the interrupt is masked and a 0 if it is
unmasked. I got that wrong with the !!state calculation. For better
readability, the mask is now updated in the same way as in
igbuio_msi_mask_irq().
Fixes: a8ea1e5fb6 ("igb_uio: fix unknown MSI symbols")
Signed-off-by: Markus Theil <markus.theil@tu-ilmenau.de>
Tested-by: Markus Theil <markus.theil@tu-ilmenau.de>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
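For reference, the mask update now follows the same pattern as
igbuio_msi_mask_irq(), roughly (a sketch; the register write back is
elided):

    /* Hypothetical sketch: state != 0 means the interrupt should be
     * enabled, i.e. its mask bit cleared (1 = masked, 0 = unmasked). */
    u32 mask_bits = desc->masked;
    u32 mask = 1U << desc->msi_attrib.entry_nr;

    if (state != 0)
            mask_bits &= ~mask; /* unmask: let the interrupt through */
    else
            mask_bits |= mask;  /* mask: suppress the interrupt */

    /* ... write mask_bits back to the MSI mask register ... */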
This patch partially reverts the commit d196343a25 and adds some
functions from Markus' previous version of the patch [1].
igb_uio uses the pci_msi_unmask_irq() and pci_msi_mask_irq() kernel APIs
when the kernel version is >= 3.19, because these APIs were introduced
in that Linux kernel version.
However, these APIs are only exported starting from Linux kernel 4.5, so
on earlier kernels the igb_uio kernel module is not usable and gives the
following warnings:
"igb_uio: Unknown symbol pci_msi_unmask_irq"
"igb_uio: Unknown symbol pci_msi_mask_irq"
The required kernel version for these APIs is therefore raised to Linux
kernel >= 4.5.
Older kernels provide unmask_msi_irq() and mask_msi_irq() instead, but
those functions are not exported at all.
Instead of using them, switch back to the previous igb_uio
implementation for MSI-X, and use igbuio_msi_mask_irq() from [1] for
MSI.
[1]
http://dpdk.org/dev/patchwork/patch/28144/
Fixes: d196343a25 ("igb_uio: use kernel functions for masking MSI-X")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
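The resulting dispatch can be pictured as follows (a hedged sketch; the
igbuio_msi_mask_irq() signature and surrounding context are
approximations):

    /* Hypothetical sketch: use the exported kernel helpers only where
     * they exist (>= 4.5), otherwise fall back to igb_uio's own code. */
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 5, 0)
            if (state)
                    pci_msi_unmask_irq(irq_get_irq_data(pdev->irq));
            else
                    pci_msi_mask_irq(irq_get_irq_data(pdev->irq));
    #else
            igbuio_msi_mask_irq(pdev, irq_get_msi_desc(pdev->irq), state);
    #endif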
Remove variable declaration from within for loop.
Fixes: f14791a812 ("examples/vm_power_mgr: add policy to channels")
Signed-off-by: David Hunt <david.hunt@intel.com>
Replace the _Static_assert compiler keyword with RTE_BUILD_BUG_ON()
to fix a build issue with old GCC versions.
Fixes: 02fd6c7443 ("mempool/octeontx: support allocation")
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
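For illustration, the replacement pattern (the asserted condition is a
made-up placeholder, not the driver's actual check):

    /* Hypothetical sketch: RTE_BUILD_BUG_ON() yields a compile-time
     * error without requiring C11 _Static_assert support. */
    #include <rte_common.h>

    static inline void
    octeontx_pool_check_layout(void)
    {
            RTE_BUILD_BUG_ON(sizeof(uint64_t) != 8); /* placeholder condition */
    }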
This patch compiles the x86 EFD file only if the compiler supports
AVX2, since the AVX2 path is already selected at run-time.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
This patch changes the assignment of the alignment unit from build time
to run time, based on the CPU flags the machine supports.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
This patch dynamically selects the memcpy functions at run-time based
on the CPU flags the current machine supports. It uses function pointers
that are bound to the appropriate implementations at constructor time.
In addition, the AVX512 instruction set is only compiled in if it is
enabled in the configuration and supported by the compiler.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
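Conceptually the dispatch looks like this (a sketch; the per-ISA
implementation names rte_memcpy_avx512f/_avx2/_sse are illustrative
only):

    /* Hypothetical sketch: pick the best memcpy implementation once, in
     * a constructor, and route rte_memcpy() through a function pointer. */
    #include <stddef.h>
    #include <rte_cpuflags.h>

    static void *(*rte_memcpy_ptr)(void *dst, const void *src, size_t n);

    static void __attribute__((constructor))
    rte_memcpy_init(void)
    {
    #ifdef CC_SUPPORT_AVX512F
            if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F)) {
                    rte_memcpy_ptr = rte_memcpy_avx512f;
                    return;
            }
    #endif
            if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
                    rte_memcpy_ptr = rte_memcpy_avx2;
            else
                    rte_memcpy_ptr = rte_memcpy_sse;
    }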
First, try to use the CPUID Time Stamp Counter and Nominal Core Crystal
Clock Information Leaf to determine the tsc hz on platforms that support
it (this does not require a privileged user).
If the CPUID leaf is not available, then try to determine the tsc hz by
reading the MSR 0xCE (requires privileged user).
Default to the tsc hz estimation if both methods fail.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Tested-by: Bruce Richardson <bruce.richardson@intel.com>
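As a sketch, the CPUID-based path computes the frequency from leaf 0x15
(TSC to core crystal clock ratio plus the nominal crystal frequency);
the MSR 0xCE fallback is elided:

    /* Hypothetical sketch: TSC hz from CPUID leaf 0x15 when the leaf is
     * implemented and reports a non-zero crystal clock frequency. */
    #include <stdint.h>
    #include <cpuid.h>

    static uint64_t
    tsc_hz_from_cpuid(void)
    {
            uint32_t denom, numer, crystal_hz, edx;

            if (__get_cpuid_max(0, NULL) < 0x15)
                    return 0; /* leaf not available */
            __cpuid_count(0x15, 0, denom, numer, crystal_hz, edx);
            (void)edx;
            if (!denom || !numer || !crystal_hz)
                    return 0; /* information not enumerated */
            /* TSC hz = crystal clock * (numerator / denominator) */
            return (uint64_t)crystal_hz * numer / denom;
    }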
On ppc_64, rte_rdtsc() returns the timebase register value, which
increments at an independent timebase frequency and is therefore not
related to the lcore CPU frequency, so it cannot be used to derive the
TSC hz. Hence, we stick with the master lcore frequency.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Use the cntfrq_el0 system register to get the frequency of the system
counter (cntvct_el0).
If the system is configured with RTE_ARM_EAL_RDTSC_USE_PMU, return 0
(and let the common code calibrate the tsc frequency).
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
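The architecture hook then essentially reduces to a single register read
(sketch):

    /* Hypothetical sketch: on arm64 the generic timer frequency is
     * exposed directly through the cntfrq_el0 system register. */
    static inline uint64_t
    arm64_cntfrq(void)
    {
            uint64_t freq;

            asm volatile("mrs %0, cntfrq_el0" : "=r" (freq));
            return freq;
    }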
When calibrating the TSC frequency, first probe the architecture-specific
function. If it is not available, use the existing calibration scheme.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
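The calibration order can be summarized as follows (a sketch;
get_tsc_freq_arch() is the architecture hook introduced by this series,
the other helper names are approximations of the EAL code):

    /* Hypothetical sketch: architecture-specific query first, then the
     * OS/MSR based query, then the sleep-based estimation. */
    void
    set_tsc_freq(void)
    {
            uint64_t freq;

            freq = get_tsc_freq_arch(); /* returns 0 when not implemented */
            if (!freq)
                    freq = get_tsc_freq();
            if (!freq)
                    freq = estimate_tsc_freq();

            eal_tsc_resolution_hz = freq;
    }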
app/test-crypto-perf/main.c:596:6: error: ‘total_nb_qps’ may be
used uninitialized in this function [-Werror=maybe-uninitialized]
if (i == total_nb_qps)
^
Fixes: c4f916e332 ("app/crypto-perf: support multiple queue pairs")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch works around compilation issues so far only seen on RHEL 7.2
using GCC 4.8.5:
[...]/mlx4_rxq.c: In function `mlx4_rx_queue_setup':
[...]/mlx4_rxq.c:473:3: error: missing initializer for field `ipackets' of
`struct mlx4_rxq_stats' [-Werror=missing-field-initializers]
[...]/mlx4_txq.c: In function `mlx4_tx_queue_setup':
[...]/mlx4_txq.c:265:3: error: missing initializer for field `opackets' of
`struct mlx4_txq_stats' [-Werror=missing-field-initializers]
Fixes: 7977082649 ("net/mlx4: drop live queue reconfiguration support")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
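A typical workaround for this warning class is to drop the partial
designated initializer and zero the structure explicitly, e.g. (a
generic sketch, not necessarily the exact change made here):

    /* Hypothetical sketch: old GCC warns about partially initialized
     * aggregates, so zero the stats structure explicitly instead. */
    struct mlx4_rxq_stats stats;

    memset(&stats, 0, sizeof(stats));
    stats.idx = idx;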
Fix the xstats functions rte_eth_xstats_get_names_by_id() and
rte_eth_xstats_get_by_id(). In the current implementation the ethdev
layer reads all xstat values and filters out the ones requested by the
application. This behaviour does not make use of PMD ops and does not
provide the benefit the API was created for in the first place. The APIs
are also unnecessarily complicated, and the two APIs return different
results for the same parameters.
With this fix, instead of reading all the stats and searching for the
requested values, drivers can provide ops that return only the selected
xstats.
The API no longer crashes with certain parameters:
rte_eth_xstats_get_by_id() segfaulted with
"ids == NULL && values != NULL && n < max";
rte_eth_xstats_get_names_by_id() segfaulted with
"ids == NULL && values != NULL && n == 0";
both now return the maximum number of stats available, matching the
other API.
rte_eth_xstats_get_by_id() segfaulted with
"ids != NULL && values == NULL && n < max";
this now returns -EINVAL (-22).
Variable and parameter names are standardized between the two APIs, and
overall code complexity is reduced.
Fixes: 79c913a42f ("ethdev: retrieve xstats by ID")
Signed-off-by: Lee Daly <lee.daly@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
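At the ethdev level the change boils down to dispatching to the new PMD
op when it is implemented, along these lines (a simplified sketch;
parameter checks and the fallback path are elided):

    /* Hypothetical sketch: serve the requested IDs straight from the PMD
     * when it provides xstats_get_by_id, instead of fetching every xstat
     * and filtering at the ethdev layer. */
    int
    rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids,
                             uint64_t *values, unsigned int size)
    {
            struct rte_eth_dev *dev = &rte_eth_devices[port_id];

            if (ids != NULL && values == NULL)
                    return -EINVAL;

            if (ids != NULL && size > 0 &&
                dev->dev_ops->xstats_get_by_id != NULL)
                    return (*dev->dev_ops->xstats_get_by_id)(dev, ids,
                                    values, size);

            /* fallback: fetch all xstats and copy the requested ones */
            /* ... */
            return 0;
    }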
This patch adds loopback functionality used when the chip is a VF in order
to enable packet transmission between VFs and PF.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch adds hardware offloading support for IPv4, UDP and TCP checksum
verification, including inner/outer checksums on supported tunnel types.
It also restores packet type recognition support.
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch adds hardware offloading support for IPv4, UDP and TCP checksum
calculation, including inner/outer checksums on supported tunnel types.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch adds support for accessing the hardware directly when
handling Rx packets, eliminating the need to use Verbs in the Rx data
path.
Rx scatter support: calculate the number of scatters on the fly
according to the maximum expected packet size.
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Modify PMD to send single-buffer packets directly to the device
bypassing the Verbs Tx post and poll routines.
Tx gather support: add support for transmitting packets spanning
over multiple buffers.
Take into consideration the number of entries a packet occupies
in the TxQ when setting the report-completion flag of the chip.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Bring back support for automatic RSS with the default flow rules when not
in isolated mode. Balancing is done according to unspecified default
settings, as was the case before this entire rework.
Since the number of queues that can be part of an RSS context is limited
to power-of-two values, the number of configured queues is rounded down
to its previous power of two; extra queues are silently discarded. This
does not prevent
dedicated flow rules from targeting them.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
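Rounding down to the previous power of two can be done with a simple bit
trick, for instance (sketch):

    /* Hypothetical sketch: keep only the highest set bit, i.e. round the
     * configured Rx queue count down to the previous power of two. */
    unsigned int rss_queues = dev->data->nb_rx_queues;

    while (rss_queues & (rss_queues - 1))
            rss_queues &= rss_queues - 1; /* clear the lowest set bit */
    /* queues with index >= rss_queues are left out of the default RSS
     * context but can still be targeted by dedicated flow rules */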
When UDP is part of the RSS hash calculation, UDP packets are discarded
(not received on any queue), likely due to an issue with the kernel
implementation.
Temporarily disable UDP RSS support until this issue is resolved.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This patch dissociates single-queue indirection tables and hash QP objects
from Rx queue structures to relinquish their control to users through the
RSS flow rule action, while simultaneously allowing multiple queues to be
associated with RSS contexts.
Flow rules share identical RSS contexts (hashed fields, hash key, target
queues) to save on memory and other resources. The trade-off is some
added complexity due to reference counter management on RSS contexts.
The QUEUE action is re-implemented on top of an automatically-generated
single-queue RSS context.
The following hardware limitations apply to RSS contexts:
- The number of queues in a group must be a power of two.
- Queue indices must be consecutive, for instance the [0 1 2 3] set is
allowed, however [3 2 1 0], [0 2 1 3] and [0 0 1 1 2 3 3 3] are not.
- The first queue of a group must be aligned to a multiple of the context
size, e.g. if queues [0 1 2 3 4] are defined globally, allowed group
combinations are [0 1] and [2 3]; groups [1 2] and [3 4] are not
supported.
- RSS hash key, while configurable per context, must be exactly 40 bytes
long.
- The only supported hash algorithm is Toeplitz.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Device operation callbacks are not supposed to handle a missing private
data structure.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Work queues (WQs) are lower-level than standard queue pairs (QPs). They are
dedicated to one traffic direction and have to be used in conjunction with
indirection tables and special "hash" QPs to get the same level of
functionality.
These extra objects, however, are the building blocks for the RSS
support brought by subsequent commits, as a single "hash" QP can manage
several WQs through
an indirection table according to a hash algorithm and other parameters.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Since live Tx and Rx queues cannot be reused anymore without being
destroyed first, mbuf ring sizes are fixed and known from the start.
This allows a single allocation for queue data structures and mbuf ring
together, saving space and bringing them closer in memory.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
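The combined allocation can be sketched as follows (structure and
variable names are illustrative):

    /* Hypothetical sketch: allocate the Rx queue control structure and
     * its mbuf ring in a single zeroed allocation on the right socket. */
    rxq = rte_zmalloc_socket("RXQ", sizeof(*rxq) +
                             desc * sizeof(struct rte_mbuf *),
                             RTE_CACHE_LINE_SIZE, socket);
    if (rxq == NULL)
            return -ENOMEM;
    rxq->elts = (struct rte_mbuf **)(rxq + 1); /* ring right after the struct */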
DPDK ensures that setup functions are never called on configured queues,
or only if they have previously been released.
PMDs therefore do not need to deal with the unexpected reconfiguration of
live queues which may fail with no easy way to recover. Dropping support
for this scenario greatly simplifies the code as allocation and setup steps
and checks can be merged.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Tx queue elements allocation function sets rte_errno properly and returns
its negative version. Reassigning this value to rte_errno is thus both
invalid and unnecessary.
Fixes: 9d14b27308 ("net/mlx4: standardize on negative errno values")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Although their "removed" version acts as a safety against unexpected bursts
while queues are being modified by the control path, these callbacks are
set per device instead of per queue. It makes sense to update them during
start/stop/close cycles instead of queue setup.
As a side effect, this commit addresses a bug left over from a prior
commit: bringing the link down causes the "removed" Tx callback to be used,
however the normal callback is not restored when bringing it back up,
preventing the application from sending traffic at all.
Updating callbacks for a link change is not necessary as bringing the
netdevice down is normally enough to prevent traffic from flowing in.
Fixes: 3f75a02719 ("net/mlx4: drop scatter/gather support")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Implement promiscuous and all multicast through internal flow rules
automatically generated according to the configured mode.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Give users the ability to create flow rules that match all multicast
traffic. Like promiscuous flow rules, they come with restrictions such as
not allowing additional matching criteria.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This commit brings back VLAN filter configuration support without any
artificial limitation on the number of simultaneous VLANs that can be
configured (previously 127).
Also, since it no longer relies on fixed per-queue arrays for potential
Verbs flow handle storage, this version wastes a lot less
memory (previously 128 * 127 * pointer size, i.e. 130 kiB per Rx queue,
only one of which actually had any use for this room: the RSS parent
queue).
The number of internal flow rules generated still depends on the number of
configured MAC addresses times that of configured VLAN filters though.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This commit brings back support for configuring up to 128 MAC addresses on
a port through internal flow rules automatically generated on demand.
Unlike its previous incarnation, the necessary extra flow rule for
broadcast traffic does not consume an entry from the MAC array anymore.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>