This counter tracked the descriptor elements waiting to be completed and
was used to decide whether the completion function should be called.
The same check can be derived from the other element management
variables, which makes this counter redundant.
Remove the counter and base the completion check on those variables
instead.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Using a power-of-two number of descriptors makes ring management easier
and allows a mask operation to replace wraparound conditions.
Adjust the Tx descriptor count to a power of two and change the
calculations to use a mask accordingly.
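As a minimal illustration (the structure and field names below are
hypothetical, not the PMD's actual ones), a power-of-two ring size lets
an index advance with a mask instead of a conditional wraparound:

#include <stdint.h>

struct ring_example {
	uint32_t size;      /* number of descriptors, power of two */
	uint32_t size_mask; /* size - 1 */
};

static inline uint32_t
ring_next_idx(const struct ring_example *r, uint32_t idx)
{
	/* Instead of: if (++idx == r->size) idx = 0; */
	return (idx + 1) & r->size_mask;
}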
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The previous code read the send queue entry size for stamping from the
send queue entry pointed to by the completion queue entry; these two
reads were done per packet during the completion stage.
The number of packets per completion burst is a fixed size stored in the
Tx queue, so each valid completion entry is known to free that fixed
number of packets.
Since the descriptor ring holds the send queue entries, the size of all
entries in a completion burst can be inferred with a single calculation,
avoiding per-packet calculations.
Adjust the completion functions to free a full burst of packets at once,
avoiding per-packet work queue entry reads and calculations.
Save only the send queue entry pointer at the start of a completion
burst or Tx burst in the appropriate descriptor element.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The Tx queue send ring was managed through Tx block head, tail, count
and mask variables used to track the remaining send queue space and the
next empty or completed work queue entries.
This approach required recalculating actual addresses for every packet,
performed unnecessary Tx-block-based calculations and maintained an
expensive dual management of the Tx rings.
Base the send queue ring calculation on actual addresses while managing
it through descriptor ring indexes.
Add a new work queue entry pointer to the descriptor element to hold the
corresponding entry in the send queue.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
An mlx4 Tx block can hold up to 4 data segments, or a control segment
plus up to 3 data segments. The first data segment of every Tx block
except the first must check for Tx queue wraparound and must use an IO
memory barrier before writing the byte count.
The previous multi-segment code used a "for" loop to iterate over all
packet segments and handled the first Tx block as a special case with
"if" statements.
Using a switch statement and unconditional branches instead of the "for"
loop avoids the unnecessary checks on each data segment and hints the
compiler to generate an optimized jump table.
Optimize this case accordingly.
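A minimal sketch of the idea (types and layout are hypothetical, not the
PMD's actual code): a switch with deliberate fall-through fills up to
four data segments without a loop, letting the compiler emit a jump
table:

#include <stdint.h>

struct dseg_example {
	uint64_t addr;
	uint32_t byte_count;
};

static inline void
fill_tx_block_example(struct dseg_example *dseg, const uint64_t *addr,
		      const uint32_t *len, unsigned int nseg)
{
	switch (nseg) {
	case 4:
		dseg[3].addr = addr[3];
		dseg[3].byte_count = len[3];
		/* fall-through */
	case 3:
		dseg[2].addr = addr[2];
		dseg[2].byte_count = len[2];
		/* fall-through */
	case 2:
		dseg[1].addr = addr[1];
		dseg[1].byte_count = len[1];
		/* fall-through */
	case 1:
		dseg[0].addr = addr[0];
		dseg[0].byte_count = len[0];
	}
}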
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
In the error path, the first 4 bytes of each WQE Tx block have not been
written yet, so there is no need to stamp them; they are already stamped.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
There is no need to check for Tx queue wraparound for segments which are
not at the beginning of a Tx block. This is especially relevant in the
single-segment case.
Remove these unnecessary checks from the Tx path.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
When an invalid lkey is sent to the HW, the HW returns an error
notification through the completion queue.
The previous code would not crash, but it did not report completion
errors to the application, so the application could not know that a
packet was actually dropped because of an invalid lkey.
Bring the lkey validation back to the Tx path.
Fixes: 2eee458746 ("net/mlx4: remove error flows from Tx fast path")
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This workaround was needed to properly handle device removal with old
Mellanox OFED releases that are not supported by this PMD anymore.
Starting from rdma-core v16, this removal issue should not happen when
the MLX4_DEVICE_FATAL_CLEANUP environment variable is set to 1.
Set this variable to 1.
Reverts: 5f4677c6ad ("net/mlx4: workaround verbs error after plug-out")
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Inner VXLAN RSS was supported and performed by default prior to the entire
mlx4 refactoring that occurred in DPDK 17.11; however, the new Verbs RSS
API did not provide the means to enable it so far. This will be addressed
in Linux 4.15 and in RDMA core.
Thanks to RSS capabilities, the PMD can now probe for its support and
enable it again by default.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Until now, UDP RSS support could not be relied on due to a problem in the
Linux kernel implementation and mlx4 RSS capabilities were not reported at
all, hence the PMD had to make assumptions.
Since both issues will be addressed simultaneously in Linux 4.15 (related
patches already upstream) and likely backported afterward, UDP RSS support
can be enabled by probing RSS capabilities.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Supported RSS hash fields are listed in function mlx4_conv_rss_hf() and
duplicated in mlx4_flow_prepare(); the latter are used when RSS is
requested without specifying any parameters.
This commit standardizes on mlx4_conv_rss_hf().
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
A couple of structure fields are not Doxygen-friendly.
Fixes: 5db1d36408 ("net/mlx4: restore Tx checksum offloads")
Cc: stable@dpdk.org
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Add the RSS hash result from the CQE to the mbuf.
Also, set PKT_RX_RSS_HASH in ol_flags.
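A minimal sketch of the mbuf side (the CQE field access is left out and
the helper name is hypothetical; the mbuf usage follows the standard DPDK
convention for reporting an RSS hash):

#include <rte_mbuf.h>

static inline void
report_rss_hash_example(struct rte_mbuf *pkt, uint32_t cqe_rss_hash)
{
	pkt->hash.rss = cqe_rss_hash;      /* hash value taken from the CQE */
	pkt->ol_flags |= PKT_RX_RSS_HASH;  /* mark the hash field as valid */
}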
Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
After processing completed packets, the owner bit of each TXBB comprised
in their WQEs must be invalidated. The loop doing so stopped short of the
last WQE.
Fixes: c3c977bbec ("net/mlx4: add Tx bypassing Verbs")
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch improves the Rx packet type offload report when the device is
a virtual function.
On these devices, the L2 tunnel flag was observed to be set even for
non-tunneled packets, leading to a complete misinterpretation of the
received packet type.
This happens because tunnel_mode is not set to 0x7 by the driver for
virtual devices; the value of the L2 tunnel flag is therefore meaningless
and should be ignored.
Fixes: aee4a03fee ("net/mlx4: enhance Rx packet type offloads")
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch enhances the Rx packet type offload to also report the L4
protocol information in the HW ptype filled by the PMD for each received
packet.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Memory regions assigned to hardware and used during Tx/Rx are mapped to
mbuf pools. Each Rx queue creates its own MR based on the mempool
provided during queue setup, while each Tx queue looks up and registers
MRs for all existing mbuf pools instead.
Since most applications use few large mbuf pools (usually only a single
one per NUMA node) common to all Tx/Rx queues, the above approach wastes
hardware resources due to redundant MRs. This negatively affects
performance, particularly with large numbers of queues.
This patch therefore makes the entire MR registration common to all
queues using a reference count. A spinlock is added to protect against
asynchronous registration that may occur from the Tx side where new
mempools are discovered based on mbuf data.
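A minimal sketch of the approach (names and layout are hypothetical, not
the PMD's actual structures): one MR per mempool, shared through a
reference count and protected by a spinlock:

#include <stdint.h>
#include <sys/queue.h>
#include <rte_spinlock.h>

struct shared_mr_example {
	LIST_ENTRY(shared_mr_example) next;
	const void *mp;   /* owning mempool */
	uint32_t refcnt;  /* number of queues using this MR */
};

static LIST_HEAD(, shared_mr_example) mr_list = LIST_HEAD_INITIALIZER(mr_list);
static rte_spinlock_t mr_lock = RTE_SPINLOCK_INITIALIZER;

/* Look up an existing MR for mp and take a reference, or return NULL. */
static struct shared_mr_example *
mr_get_example(const void *mp)
{
	struct shared_mr_example *mr;

	rte_spinlock_lock(&mr_lock);
	LIST_FOREACH(mr, &mr_list, next) {
		if (mr->mp == mp) {
			++mr->refcnt;
			break;
		}
	}
	rte_spinlock_unlock(&mr_lock);
	return mr;
}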
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This is done for consistency with the rest of the code.
Fixes: 078b8b452e ("net/mlx4: add RSS flow rule action support")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Replace most of the memory barriers with IO memory barriers since they
only target DRAM; this improves code efficiency on systems that enforce
store ordering between different addresses.
Only the doorbell register store must be protected by a full memory
barrier since it targets the PCI memory domain.
Limit the IO memory barrier before the byte count store to systems whose
cache line size is smaller than 64B (the TXBB size).
This patch improves Tx performance by 0.2 Mpps for single-segment 64B
packets in a 1 queue, 1 core test.
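A minimal sketch of the barrier placement (helper and variable names are
hypothetical): descriptor stores to DRAM are ordered with an IO barrier,
while the PCI doorbell write keeps the full memory barrier:

#include <stdint.h>
#include <rte_atomic.h>

static inline void
post_and_ring_example(volatile uint32_t *byte_count, uint32_t bc,
		      volatile uint32_t *doorbell, uint32_t db)
{
	rte_io_wmb();       /* order prior descriptor stores (DRAM) */
	*byte_count = bc;   /* byte count written last within the TXBB */
	rte_wmb();          /* full barrier before writing to PCI memory */
	*doorbell = db;
}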
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Optimize the single-segment case by processing it in a separate block,
which avoids the checks, calculations and barriers relevant only to the
multi-segment case.
Call a dedicated function to handle the multi-segment case.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Remove the variable which counted packets for completion, as it did not
add any information beyond the packet counter.
Remove the check for no space in the element ring, which is already
covered by the regular Tx flow.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Associate a memory region with a mempool (in the data path) in a short
function. Handle the less common case of adding a new memory region to a
mempool in a separate function.
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Users are not prevented from creating flow rules targeting nonexistent
queues, which silently makes such rules drop-like.
While it can be thought as a feature, reporting an error instead is
actually far more useful in order to catch common mistakes.
Fixes: 078b8b452e ("net/mlx4: add RSS flow rule action support")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
When not in isolated mode, internal flow rules are automatically
maintained by the PMD to receive traffic according to global device
settings (MAC, VLAN, promiscuous mode and so on).
Since RSS support was added to the mix, it must also check whether Rx
queue configuration has changed when refreshing flow rules to prevent
the following from happening:
- With a smaller number of Rx queues, traffic is implicitly dropped
since the existing RSS context cannot be re-applied.
- With a larger number of Rx queues, traffic remains balanced within the
original (smaller) set of queues.
One workaround before this commit was to temporarily enter/leave
isolated mode to make it regenerate internal flow rules.
Fixes: 7d8675956f ("net/mlx4: add RSS support outside flow API")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This commit addresses the issue of Rx interrupts support with
the new Rx datapath introduced in DPDK version 17.11.
In order to generate an Rx interrupt, an event queue is armed with the
consumer index of the Rx completion queue. Since version 17.11 this
index is handled by the PMD, so it is now the PMD's responsibility to
write this value when enabling Rx interrupts.
Fixes: 6681b84503 ("net/mlx4: add Rx bypassing Verbs")
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This commit defines MLX4_CQ_DB_CI_MASK which is used when updating
the consumer index of the completion queue instead of the hardcoded
0xffffff used until now.
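For illustration (the doorbell record layout and byte order here are
simplified assumptions, not the PMD's actual code), the consumer index
update now uses the named constant:

#include <stdint.h>
#include <rte_byteorder.h>

#define MLX4_CQ_DB_CI_MASK 0xffffffu  /* replaces the hardcoded 0xffffff */

static inline void
update_cq_ci_example(volatile uint32_t *set_ci_db, uint32_t cons_index)
{
	*set_ci_db = rte_cpu_to_be_32(cons_index & MLX4_CQ_DB_CI_MASK);
}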
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
The PCI lib defines the types and methods allowing to use PCI elements.
The PCI bus implements a bus driver for PCI devices by constructing
rte_bus elements using the PCI lib.
Move the relevant code out of the EAL to its expected place.
Libraries, drivers, unit tests and applications are updated to use the
new rte_bus_pci.h header when necessary.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
This flag is not necessary at the ether layer anymore.
Buses are able to advertise their hotplug support. The ether layer can
rely upon this capability instead of a special flag.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The faulty code allowed an internal flow rule to be created without any
target queue when rule creation occurred before the queues were created.
For example, when a user calls rte_eth_dev_default_mac_addr_set() after
probe and before dev_configure(), mlx4 fails because the number of RSS
queues is 0.
The fix prevents the creation of internal rules before the queues are
created; they will be created later anyway, before traffic starts.
Fixes: 7d8675956f ("net/mlx4: add RSS support outside flow API")
Fixes: bdcad2f484 ("net/mlx4: refactor internal flow rules")
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The code as currently written requires TCP/UDP source and destination
ports to be always specified.
No such restriction is enforced by hardware; all TCP and UDP traffic
can be matched by providing an empty mask for these fields.
Fixes: 680d5280c2 ("net/mlx4: refactor flow item validation code")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Various hardware limitations apply to RSS indirection tables, one of
them being they must be an exact 1:1 mapping of the configured Rx queue
indices.
While this restriction is enforced when creating RSS flow rules, it is
not the case when Rx queues themselves are created; underlying WQ
numbers are assigned in turn, not according to queue index.
Applications such as l3fwd-power that create Rx queues from highest to
lowest index (or any other non-sequential order) thus fail to get a
working RSS context.
To address this issue, this commit postpones WQ initialization to
dev_start(), once all Rx queues are configured.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
In case of error occurring while setting up indirection table and
related RSS context resources, intermediate objects are not cleaned up.
Moreover although unlikely, an error other than EINVAL (e.g. ENOMEM)
may be returned.
A description of mlx4_rss_attach()'s return value is also missing.
Fixes: 078b8b452e ("net/mlx4: add RSS flow rule action support")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
According to the original commit, Rx queues cannot be created nor
destroyed while the device is started. Synchronizing flow rules during
such events is unnecessary as it occurs later when starting the device.
Fixes: 7977082649 ("net/mlx4: drop live queue reconfiguration support")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Dumb unconditional iteration on flow rules should be performed using the
dedicated macro.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
The list of libraries in LDLIBS was generated from the DEPDIRS-xyz
variables. This is valid when the subdirectory name matches the library
name, but that is not always the case, especially for PMDs.
This patch removes this feature and explicitly adds the proper
libraries in LDLIBS.
Some DEPDIRS-xyz variables become useless; remove them.
Reported-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
This patch works around compilation issues so far only seen on RHEL 7.2
using GCC 4.8.5:
[...]/mlx4_rxq.c: In function `mlx4_rx_queue_setup':
[...]/mlx4_rxq.c:473:3: error: missing initializer for field `ipackets' of
`struct mlx4_rxq_stats' [-Werror=missing-field-initializers]
[...]/mlx4_txq.c: In function `mlx4_tx_queue_setup':
[...]/mlx4_txq.c:265:3: error: missing initializer for field `opackets' of
`struct mlx4_txq_stats' [-Werror=missing-field-initializers]
Fixes: 7977082649 ("net/mlx4: drop live queue reconfiguration support")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This patch adds loopback functionality used when the chip is a VF in order
to enable packet transmission between VFs and PF.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch adds hardware offloading support for IPV4, UDP and TCP checksum
verification, including inner/outer checksums on supported tunnel types.
It also restores packet type recognition support.
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch adds hardware offloading support for IPv4, UDP and TCP checksum
calculation, including inner/outer checksums on supported tunnel types.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch adds support for accessing the hardware directly when
handling Rx packets, eliminating the need to use Verbs in the Rx data
path.
Rx scatter support: calculate the number of scatters on the fly
according to the maximum expected packet size.
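A minimal sketch of that calculation (the actual PMD may round or
constrain it further; the helper name is hypothetical): the number of
scatter entries follows from the maximum expected packet size and the
data room available in each mbuf:

#include <stdint.h>

static inline uint32_t
rx_sges_needed_example(uint32_t max_rx_pkt_len, uint32_t mbuf_data_room)
{
	/* ceil(max_rx_pkt_len / mbuf_data_room) */
	return (max_rx_pkt_len + mbuf_data_room - 1) / mbuf_data_room;
}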
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Modify PMD to send single-buffer packets directly to the device
bypassing the Verbs Tx post and poll routines.
Tx gather support: add support for transmitting packets spanning
over multiple buffers.
Take into consideration the amount of entries a packet occupies
in the TxQ when setting the report-completion flag of the chip.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Bring back support for automatic RSS with the default flow rules when not
in isolated mode. Balancing is done according to unspecified default
settings, as was the case before this entire rework.
Since the number of queues in an RSS context is limited to power-of-two
values, the number of configured queues is rounded down to its previous
power of two; extra queues are silently discarded. This does not prevent
dedicated flow rules from targeting them.
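For illustration, rounding the configured queue count down to its
previous power of two could look like this (a plain sketch, not the
PMD's actual helper):

#include <stdint.h>

static inline uint32_t
prev_pow2_example(uint32_t n)
{
	uint32_t p = 1;

	while ((p << 1) <= n && (p << 1) != 0)
		p <<= 1;
	return p;  /* e.g. prev_pow2_example(6) == 4 */
}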
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>