Commit Graph

232 Commits

Author SHA1 Message Date
Dekel Peled
1ccc479014 net/mlx5: fix Rx interrupt handling and cleanup
A recent patch added creation of the Rx CQ using the DevX API.
The reading of events from the DevX channel was not done correctly.
This patch fixes the event reading to use the correct data structure.
Cleanup after CQ creation in case of error is also updated.

Fixes: 08d1838f64 ("net/mlx5: implement CQ for Rx using DevX API")

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-07-30 00:41:23 +02:00
Ophir Munk
385c19397e net/mlx5: fix premature disabling of interrupt
RXQ interrupts under Linux are based on the epoll mechanism. An expected
order of operations is as follows:
1. Call rte_eth_dev_rx_intr_enable(), to arm the CQ for receiving events
   on data input.
2. Block on rte_epoll_wait() with an array of file descriptors
   representing the CQ events. Upon data arrival the kernel will signal
   an input event on the corresponding CQ fd.
3. Call rte_eth_dev_rx_intr_disable() after the event was received and
   continue in polling mode. The mlx5 implementation of
   rte_eth_dev_rx_intr_disable() is to get the CQ event and ack it.

In practice, applications may wake up from rte_epoll_wait() due to a
timeout, with no event to ack, but still call
rte_eth_dev_rx_intr_disable() unconditionally. In such cases the call
should return EAGAIN (since the file descriptors are non-blocking), as
opposed to EINVAL, which indicates a real failure. In the EAGAIN case
the PMD should not warn with "Unable to disable interrupt on Rx queue".
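
A minimal sketch of the call order described above, assuming the queue
interrupt fd was registered into the per-thread epoll set with
rte_eth_dev_rx_intr_ctl_q(); the port/queue ids, timeout, and error
handling are illustrative assumptions, not the PMD code:

 #include <errno.h>
 #include <stdio.h>
 #include <rte_ethdev.h>
 #include <rte_interrupts.h>

 static void
 rx_intr_wait(uint16_t port_id, uint16_t queue_id)
 {
 	struct rte_epoll_event event;
 	int ret;

 	/* 1. Arm the CQ so the kernel signals the queue fd on data input. */
 	rte_eth_dev_rx_intr_enable(port_id, queue_id);

 	/* 2. Block until an event arrives or the timeout (ms) expires. */
 	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, 100);

 	/* 3. Disarm and return to polling mode. On a timeout wake-up there
 	 * is no event to ack; that is not a real failure. */
 	ret = rte_eth_dev_rx_intr_disable(port_id, queue_id);
 	if (ret != 0 && ret != -EAGAIN)
 		printf("Rx interrupt disable failed: %d\n", ret);
 }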

This commit fixes an earlier commit in which a return value of 0 from
the devx_get_event() function was considered an error.

Fixes: 08d1838f64 ("net/mlx5: implement CQ for Rx using DevX API")

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Raslan Darawsheh <rasland@mellanox.com>
2020-07-30 00:41:22 +02:00
Viacheslav Ovsiienko
161d103b23 net/mlx5: add queue start and stop
The mlx5 PMD did not support the queue_start and queue_stop eth_dev API
routines, so a queue could not be suspended and resumed during device
operation.

There is a use case where this feature is crucial for applications:

- a secondary process handles the queue
- the secondary process crashes or aborts
- some mbufs were allocated or used by the secondary application
- some mbufs were allocated by Rx queues to receive packets
- some mbufs were placed in the send queue
- the queue is left in an undefined state

In this case there was no reliable way for the restarted secondary
process to recover queue handling other than resetting the queue to its
initial state: freeing all involved resources, including buffers
involved in queue operations, resetting the mbuf pools, and then
reinitializing the queue to a working state (a minimal usage sketch
follows the limitation list below):

- reset the mbuf pool, allocating all mbufs to bring the pool into a
  safe state after the crash and allow safe mbuf free calls
- stop the queue, freeing all potentially involved mbufs
- reset the mbuf pool again
- start the queue, reallocating the mbufs needed

This patch introduces the queue start/stop feature with some
limitations:

- hairpin queues are not supported
- it is the application's responsibility to synchronize start/stop
  with the datapath routines; rx/tx_burst must be suspended during
  the queue_start/queue_stop calls
- it is the application's responsibility to track queue usage and
  provide coordinated queue_start/queue_stop calls from the
  secondary and primary processes
- Rx queues with a vectorized Rx routine and CQE compression
  engaged are currently not supported by this patch
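
A minimal usage sketch of the new capability, assuming a
primary-process context in which the datapath on the queue is already
suspended; the recovery helper and ids are hypothetical:

 #include <rte_ethdev.h>

 static int
 recover_rx_queue(uint16_t port_id, uint16_t queue_id)
 {
 	int ret;

 	/* rx_burst on this queue must be suspended by the application. */
 	ret = rte_eth_dev_rx_queue_stop(port_id, queue_id);
 	if (ret != 0)
 		return ret;

 	/* ... free/account involved mbufs and reset the mbuf pools here ... */

 	return rte_eth_dev_rx_queue_start(port_id, queue_id);
 }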

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-07-21 15:46:30 +02:00
Dekel Peled
08d1838f64 net/mlx5: implement CQ for Rx using DevX API
This patch continues the work to use the DevX API for the creation and
management of different objects.
On the Rx control path, the RQ, RQT, and TIR objects can already be
created using the DevX API.
This patch adds support for creating the CQ for the RxQ using the DevX
API. The corresponding event channel is also created and utilized using
the DevX API.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-07-21 15:46:30 +02:00
Ophir Munk
9d60f54569 common/mlx5: remove inclusion of Verbs header files
Several source files include Verbs header files as in (1). These source
files will not compile under non-Linux operating systems. This commit
removes this inclusion in two cases:

Case 1: There is no usage of ibv_* or mlx5dv_* symbols in the source
file so the inclusion in (1) can be safely removed.

Case 2: Verbs symbols are used. Note that the inclusion in (1) already
appears in file linux/mlx5_glue.h (which represents the interface
to the rdma-core library). Therefore, replace (1) in the source file
with (2). Under non-Linux operating systems, file mlx5_glue.h will not
include (1).

(1)
 #include <infiniband/verbs.h>
 #include <infiniband/mlx5dv.h>

(2)
 #include <mlx5_glue.h>

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-07-21 15:46:30 +02:00
Ophir Munk
2aba9fc725 net/mlx5: replace Linux specific calls
The following Linux calls are replaced by their matching rte APIs.

mmap() ==> rte_mem_map()
munmap() ==> rte_mem_unmap()
sysconf(_SC_PAGESIZE) ==> rte_mem_page_size()
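
An illustrative sketch of the portable EAL paging API (rte_eal_paging.h)
standing in for the Linux calls above; the anonymous page-sized mapping
is an example, not the driver's actual usage:

 #include <rte_eal_paging.h>

 static void *
 map_one_page(void)
 {
 	/* replaces sysconf(_SC_PAGESIZE) */
 	size_t sz = rte_mem_page_size();

 	/* replaces mmap(); release later with rte_mem_unmap(addr, sz) */
 	return rte_mem_map(NULL, sz, RTE_PROT_READ | RTE_PROT_WRITE,
 			   RTE_MAP_PRIVATE | RTE_MAP_ANONYMOUS, -1, 0);
 }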

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-07-21 15:46:30 +02:00
Suanming Mou
ac3fc732c4 net/mlx5: convert queue objects to unified malloc
This commit allocates the Rx/Tx queue objects from the unified malloc
function.
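
A hedged sketch of allocating an object through the unified allocator
(mlx5_malloc()/mlx5_free() from the mlx5 common library); the flags,
alignment, and object type are illustrative assumptions:

 #include <mlx5_malloc.h>

 struct example_obj {
 	int id;
 };

 static struct example_obj *
 example_obj_new(int socket)
 {
 	/* zero-initialized, rte_malloc-backed, NUMA-aware allocation */
 	return mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
 			   sizeof(struct example_obj), 0, socket);
 }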

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-07-21 15:46:30 +02:00
Suanming Mou
83c2047c5f net/mlx5: convert control path memory to unified malloc
This commit allocates the control path memory from the unified malloc
function.

The changed objects are:

1. hlist;
2. rss key;
3. vlan vmwa;
4. indexed pool;
5. fdir objects;
6. meter profile;
7. flow counter pool;
8. hrxq and indirect table;
9. flow object cache resources;
10. temporary resources in flow create;

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-07-21 15:44:36 +02:00
Viacheslav Ovsiienko
a2854c4de1 net/mlx5: convert Rx timestamps in real-time format
The ConnectX-6 Dx supports timestamps in various formats; a new
real-time format is introduced, in which the upper 32-bit word of the
timestamp contains the UTC seconds and the lower 32-bit word contains
the nanoseconds. This patch detects which format is configured in the
NIC and performs the conversion accordingly.
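
A small sketch of converting the described real-time format into
nanoseconds; the helper name is hypothetical and not part of the patch:

 #include <stdint.h>

 static inline uint64_t
 rt_timestamp_to_ns(uint64_t ts)
 {
 	uint64_t sec = ts >> 32;           /* upper word: UTC seconds */
 	uint64_t nsec = ts & UINT32_MAX;   /* lower word: nanoseconds */

 	return sec * 1000000000ULL + nsec;
 }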

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-07-21 15:44:36 +02:00
Viacheslav Ovsiienko
24feb04596 net/mlx5: fix UAR lock sharing for multiport devices
The master and representor devices might be created over multiport
InfiniBand devices, and the UAR resources allocated for sibling ports
might belong to the same underlying InfiniBand device. The hardware
requires that write access to the UAR be performed as an atomic 64-bit
write; on 32-bit systems this becomes two sequential writes protected
by a lock. Because the same UAR may be shared between sibling devices,
the locks must be moved to the shared context.
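
A conceptual sketch (not the driver code) of why the lock must live in
the shared context: on 32-bit systems the 64-bit doorbell write is
split into two 32-bit stores, and every sibling sharing the UAR must
serialize through the same lock:

 #include <stdint.h>
 #include <rte_spinlock.h>

 static inline void
 uar_write64_locked(uint64_t val, volatile void *addr, rte_spinlock_t *sl)
 {
 	volatile uint32_t *p = addr;

 	rte_spinlock_lock(sl);          /* lock shared by all sibling ports */
 	p[0] = (uint32_t)val;
 	p[1] = (uint32_t)(val >> 32);
 	rte_spinlock_unlock(sl);
 }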

Fixes: f048f3d479 ("net/mlx5: switch to the shared IB device context")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-07-21 15:44:36 +02:00
Michael Baum
0f006468c5 net/mlx5: fix iterator type in Rx queue management
The mlx5_check_vec_rx_support function in the mlx5_rxtx_vec.c file
iterates over the Rx queues array in a loop. Similarly, the
mlx5_mprq_enabled function in the mlx5_rxq.c file iterates over the Rx
queues array in a loop.

In both cases, the loop iterator is called i and the variable
representing the array size is called rxqs_n.
The i variable is of uint16_t type while the rxqs_n variable is of
unsigned int type. The range of rxqs_n is much larger than the number
of iterations allowed by the type of i; theoretically, rxqs_n may hold
a value greater than can be represented in 16 bits, and the loop would
never end.

Change the type of i to uint32_t.
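
A schematic illustration of the hazard (not the driver code): with a
16-bit iterator the loop below never terminates once n exceeds
UINT16_MAX, because i wraps back to zero before reaching n:

 #include <stdint.h>

 static unsigned int
 visit_all(unsigned int n)
 {
 	unsigned int visited = 0;
 	uint32_t i; /* was uint16_t before the fix */

 	for (i = 0; i < n; ++i)
 		++visited;
 	return visited;
 }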

Fixes: 7d6bf6b866 ("net/mlx5: add Multi-Packet Rx support")
Fixes: 6cb559d67b ("net/mlx5: add vectorized Rx/Tx burst for x86")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-06-30 14:52:30 +02:00
Ori Kam
262c7ad0dd common/mlx5: move doorbell record from net driver
The creation of DBRs (doorbell records) can be used by a number of
different Mellanox PMDs, for example RegEx, net, and vDPA.

This commit moves the DBR creation and release functions to the common
folder.

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-06-30 14:52:30 +02:00
Ophir Munk
391b8bcc81 common/mlx5: move some getter functions from net driver
Getter functions such as 'mlx5_os_get_ctx_device_name',
'mlx5_os_get_ctx_device_path', 'mlx5_os_get_dev_device_name', and
'mlx5_os_get_umem_id' are implemented under the net directory. To enable
additional devices (e.g. regex, vdpa) to access these getter functions,
they are moved under the common directory.

As part of this commit, the string sizes DEV_SYSFS_NAME_MAX and
DEV_SYSFS_PATH_MAX are increased by 1 to make sure that the destination
string size in the strncpy() calls is bigger than the source string
size. This update avoids the GCC 8 error -Werror=stringop-truncation.
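
A sketch of the sizing rule described above; the constants and struct
are illustrative, the point being that the destination is one byte
larger than the longest source so strncpy() always leaves room for the
terminating NUL:

 #include <string.h>

 #define SRC_NAME_MAX 64
 #define DEV_SYSFS_NAME_MAX (SRC_NAME_MAX + 1) /* +1 avoids truncation warning */

 struct dev_name {
 	char name[DEV_SYSFS_NAME_MAX];
 };

 static void
 copy_dev_name(struct dev_name *dst, const char *src)
 {
 	strncpy(dst->name, src, SRC_NAME_MAX);
 	dst->name[SRC_NAME_MAX] = '\0';
 }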

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-06-30 14:52:30 +02:00
Alexander Kozyrev
e891b54a9e net/mlx5: fix descriptors number adjustment
The number of descriptors to configure in an Rx/Tx queue is passed to
the mlx5_tx/rx_queue_pre_setup() function by value. That means any
adjustment of this variable is local and cannot affect the actual
value used to allocate mbufs in the mlx5_txq/rxq_new() functions.
Pass the number by reference to actually update it.
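
A schematic of the fix (illustrative names): taking the descriptor
count by reference makes the adjustment visible to the caller that
later allocates the mbufs:

 #include <stdint.h>

 static void
 queue_pre_setup(uint16_t *desc)
 {
 	/* example adjustment only: round up to a multiple of 4 */
 	if (*desc % 4 != 0)
 		*desc = (*desc / 4 + 1) * 4;
 }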

Fixes: 6218063b39 ("net/mlx5: refactor Rx data path")
Fixes: 1d88ba1719 ("net/mlx5: refactor Tx data path")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-06-16 19:21:07 +02:00
Ophir Munk
c7f6ba0e53 net/mlx5: remove umem field dependency on Direct Verbs
The umem field is used in several structs. Its type 'struct
mlx5dv_devx_umem *' is changed to 'void *'. This change allows
compilation under non-Linux operating systems.

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-06-16 19:21:07 +02:00
Ophir Munk
e85f623e13 net/mlx5: remove attributes dependency on Verbs
Define 'struct mlx5_dev_attr', which is ibv and dv independent. It
contains attributes that were originally contained in 'struct
ibv_device_attr_ex' and 'struct mlx5dv_context dv_attr'. Add a new API,
mlx5_os_get_dev_attr(), which fills in the newly defined struct.

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-06-16 19:21:07 +02:00
Michael Baum
ebed623f62 net/mlx5: fix hairpin Rx queue creation error path
The mlx5_rxq_obj_hairpin_new function defines a pointer named tmpl and
allocates memory for it using the rte_zmalloc_socket function.
Later, this function allocates memory for a member inside tmpl using
the mlx5_devx_cmd_create_rq function.

In both cases, if the allocation fails, the code jumps to the error
label and frees the allocated resources. However, at the first jump
there are still no resources to free, so jumping just to return NULL is
unnecessary. Even worse, jumping to the error label with an invalid
tmpl actually dereferences a null pointer.
In contrast, the second jump needs to free the tmpl variable, but
instead of freeing it, the function tries to free the variable it just
failed to allocate.
In addition, on another error the function returns NULL without
freeing the tmpl variable first, causing a memory leak.

Delete the error label and replace each jump with a local return NULL,
freeing the tmpl variable when needed.
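
A generic sketch of the corrected error-handling shape (illustrative,
not the driver code): free only what has actually been allocated and
return locally instead of jumping to a shared label:

 #include <stdlib.h>

 struct obj {
 	void *inner;
 };

 static struct obj *
 obj_new(void)
 {
 	struct obj *tmpl = calloc(1, sizeof(*tmpl));

 	if (tmpl == NULL)
 		return NULL;        /* nothing allocated yet, nothing to free */
 	tmpl->inner = malloc(64);
 	if (tmpl->inner == NULL) {
 		free(tmpl);         /* free the object, not the failed member */
 		return NULL;
 	}
 	return tmpl;
 }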

Fixes: e79c9be915 ("net/mlx5: support Rx hairpin queues")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-06-02 16:06:23 +02:00
Alexander Kozyrev
a24431dffb net/mlx5: improve logging of MPRQ selection
MPRQ is silently turned off when not enough Rx queues are configured.
Improve the logging to show a warning in this case, to notify the user
about the Rx burst function selected.

Fixes: 7d6bf6b866 ("net/mlx5: add Multi-Packet Rx support")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-04-21 22:28:06 +02:00
Suanming Mou
772dc0eb83 net/mlx5: convert hrxq to indexed
This commit converts the hrxq objects to indexed allocation.

Using a uint32_t index instead of a pointer saves 4 bytes of memory per
flow handle (a 4-byte index versus an 8-byte pointer on 64-bit systems).
For millions of flows, this saves several megabytes of memory.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-04-21 13:57:09 +02:00
Alexander Kozyrev
bd0d5930bf net/mlx5: enable MPRQ multi-stride operations
The MPRQ feature is updated to allow a packet to be received into
multiple strides, in order to support MTUs exceeding 8KB.
Special care is needed to prevent headroom corruption in the
multi-stride mode, since the headroom space is borrowed by the PMD
from the tail of the preceding stride. In this case, copy the whole
packet into a separate mbuf, or just the overlapping data if Rx
scattering is supported by the application.

Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-04-21 13:57:08 +02:00
Alexander Kozyrev
ecb160456a net/mlx5: add device parameter for MPRQ stride size
Define a device parameter to configure the log2 of the stride size for
MPRQ: mprq_log_stride_size. The user is able to specify a stride size in
the range allowed by the underlying hardware. The default stride size is
2048 bytes, to encompass the most commonly used packet sizes on the
Internet (MTU of 1518 bytes and less); it is used when the maximum
configured packet size cannot fit into the largest possible stride.
Otherwise the stride size is set to a value large enough to encompass a
whole packet.
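
For illustration (the PCI address and the companion mprq_en parameter
are assumptions, not part of this patch), a 2048-byte stride could be
requested explicitly through EAL device arguments such as:

 -w 0000:03:00.0,mprq_en=1,mprq_log_stride_size=11

since 2^11 = 2048.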

Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-04-21 13:57:08 +02:00
Bing Zhao
1ad9a3d09f net/mlx5: introduce buffer size parameter for hairpin
When creating a hairpin queue, the total data size and the maximal
number of packets are interrelated; the ratio between them is the
stride size.
A larger buffer size means that big packets such as jumbo frames can be
supported, but it also introduces more cache misses and has a side
effect on performance.
A new device parameter, "hp_buf_log_sz", is introduced to let
applications set the total data buffer size (as a logarithm value).
The maximal number of packets is then calculated automatically from
this value.
Applications can also increase this value in order to support larger
packets in the hairpin case, while a smaller value is beneficial for
memory consumption.
If it is not set, the default value is used.
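
As an illustration (the numbers are assumptions, not driver defaults):
hp_buf_log_sz=17 requests a 2^17 = 128 KB data buffer per hairpin
queue; with a 64-byte stride this yields 131072 / 64 = 2048 packet
slots.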

Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-04-21 13:57:05 +02:00
Dekel Peled
a4e6ea97a5 common/mlx5: fix RSS key copy to TIR context
In the mlx5_devx_cmd_create_tir() function, the 40 bytes of the RSS key
are copied in 10 iterations, 4 bytes at a time, using the MLX5_SET
macro. As a result, the RSS key is copied into the TIR context in
swapped byte order.
This patch fixes the issue by using memcpy() to copy the RSS key as-is.
The struct member mlx5_devx_tir_attr.rx_hash_toeplitz_key is updated
to a byte array type.
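
A schematic of the fix (the context struct is illustrative): the
40-byte Toeplitz key is copied as an opaque byte array, so no per-word
byte swapping is applied:

 #include <stdint.h>
 #include <string.h>

 #define RSS_HASH_KEY_LEN 40

 struct tir_ctx {
 	uint8_t rx_hash_toeplitz_key[RSS_HASH_KEY_LEN];
 };

 static void
 tir_set_rss_key(struct tir_ctx *ctx, const uint8_t *key)
 {
 	memcpy(ctx->rx_hash_toeplitz_key, key, RSS_HASH_KEY_LEN);
 }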

Fixes: c3aea272ee ("net/mlx5: create advanced Rx object via DevX")
Cc: stable@dpdk.org

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-04-21 13:57:05 +02:00
Bing Zhao
c288d7b5eb net/mlx5: fix hairpin queue capacity
The hairpin Tx/Rx queue depth and packet size were fixed in the past.
When the firmware has a fix or improvement, the PMD cannot make full
use of it. Also, 32 packets for a single queue will not guarantee good
performance for hairpin flows: it makes the stride size larger, and for
small packets it is a waste of memory.
The recommended stride size is now 64B.

The parameters of the hairpin queue setup need to be adjusted.
1. A proper buffer size should support a standard 9KB jumbo frame, and
also more than one jumbo frame packet for performance.
2. The number of packets of a single queue should be the maximum
supported value (total buffer size / stride size).

There is no need to support the maximum total buffer size, because
memory consumption should also be taken into consideration.
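
For example (an illustration only, not the configured values): a 32 KB
total buffer with 64 B strides yields 32768 / 64 = 512 packet slots and
still holds three 9 KB jumbo frames.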

Fixes: e79c9be915 ("net/mlx5: support Rx hairpin queues")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
2020-02-19 18:09:28 +01:00
Alexander Kozyrev
8e46d4e18f common/mlx5: improve assert control
Use the MLX5_ASSERT macros instead of the standard assert() clause.
The behavior depends on the RTE_LIBRTE_MLX5_DEBUG configuration option.
If RTE_LIBRTE_MLX5_DEBUG is enabled, MLX5_ASSERT is equal to RTE_VERIFY,
bypassing the global CONFIG_RTE_ENABLE_ASSERT option.
If RTE_LIBRTE_MLX5_DEBUG is disabled, the global CONFIG_RTE_ENABLE_ASSERT
can still make the assert active, since RTE_ASSERT calls RTE_VERIFY.
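
A minimal sketch of the described behaviour (an assumed shape, not the
exact driver macro):

 #include <rte_debug.h>

 #ifdef RTE_LIBRTE_MLX5_DEBUG
 /* always-on check, regardless of CONFIG_RTE_ENABLE_ASSERT */
 #define MLX5_ASSERT(exp) RTE_VERIFY(exp)
 #else
 /* active only when CONFIG_RTE_ENABLE_ASSERT is set */
 #define MLX5_ASSERT(exp) RTE_ASSERT(exp)
 #endif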

Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-02-05 09:51:21 +01:00
Matan Azrad
7b4f1e6bd3 common/mlx5: introduce common library
A new Mellanox vdpa PMD will be added to support vdpa operations by
Mellanox adapters.

This vdpa PMD design includes mlx5_glue and mlx5_devx operations and
large parts of them are shared with the net/mlx5 PMD.

Create a new common library in drivers/common for mlx5 PMDs.
Move mlx5_glue, mlx5_devx_cmds and their dependencies to the new mlx5
common library in drivers/common.

The files mlx5_devx_cmds.c, mlx5_devx_cmds.h, mlx5_glue.c,
mlx5_glue.h and mlx5_prm.h are moved as is from drivers/net/mlx5 to
drivers/common/mlx5.

Share the log mechanism macros.
Also separate the log mechanism to allow different log level control
for the common library.

Build files and version files are adjusted accordingly.
Include lines are adjusted accordingly.

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-02-05 09:51:20 +01:00
Matan Azrad
543e218fa5 net/mlx5: separate DevX commands interface
The DevX commands interface is included in the mlx5.h file with a lot
of other PMD interfaces.

To make the DevX commands shareable between different PMDs, this patch
moves the DevX interface to a new file called mlx5_devx_cmds.h.

Also remove the shared device structure dependency on DevX commands.

Switch the DevX commands log mechanism from the mlx5 driver log
mechanism to the EAL log mechanism.

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-02-05 09:51:20 +01:00
Dekel Peled
70ccb60568 net/mlx5: optimize Rx hash fields conversion
A previous fix added translation of the Rx hash fields to the PRM
format.

This patch optimizes the fix to perform the value translation only if
the value is not zero; a zero value needs no translation.

Fixes: c3e33304a7 ("net/mlx5: fix setting of Rx hash fields")
Cc: stable@dpdk.org

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2020-01-20 18:02:17 +01:00
Viacheslav Ovsiienko
bdb8e5b1ea net/mlx5: allow allocated mbuf with external buffer
In the Rx datapath the flags in newly allocated mbufs are all
explicitly cleared, but the EXT_ATTACHED_MBUF flag must be preserved.
This allows the use of mbuf pools with pre-attached external data
buffers.

The vectorized rx_burst routines are updated to inherit
EXT_ATTACHED_MBUF from the mbuf pool's private
RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF flag.
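
A simplified sketch of the flag handling (an assumption, not the
vectorized code itself): clear all offload flags on the fresh mbuf
except EXT_ATTACHED_MBUF, so pinned external buffers stay attached:

 #include <rte_mbuf.h>

 static inline void
 reset_rx_mbuf_flags(struct rte_mbuf *m)
 {
 	/* keep only EXT_ATTACHED_MBUF, drop every other flag */
 	m->ol_flags &= EXT_ATTACHED_MBUF;
 }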

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-01-20 23:39:11 +01:00
Dekel Peled
c3e33304a7 net/mlx5: fix setting of Rx hash fields
The Rx hash fields were copied from the input parameter into the TIR
attributes directly, with no translation. As a result, the copied value
was wrong.

This patch adds translation of the value from the input bitmap to the
appropriate format.

Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Cc: stable@dpdk.org

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-01-17 19:45:23 +01:00
Dekel Peled
3d491dd6f2 net/mlx5: add define of LRO segment chunk size
The maximal size of a coalesced LRO segment is set in the TIR
attributes as a number of 256-byte chunks.
The current implementation uses the hardcoded value 256 in several
places.

This patch adds a definition for this value, and uses this definition
in all relevant places.
A debug message is added to clearly report the actual configured size.
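
For example, a coalesced segment of 64 KB would be expressed as
65536 / 256 = 256 chunks (an illustrative value, not the configured
default).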

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-01-17 19:45:23 +01:00
Matan Azrad
e7f4fbb301 net/mlx5: fix Rx queue release assertions
In debug mode, there is an assertion to validate the CQ object before
the release.

Wrongly, the assertion is done for any type of Rx queue, even one that
does not use a CQ at all, for example a hairpin Rx queue.

Ignore the CQ assertion when a hairpin queue is released.

Fixes: e79c9be915 ("net/mlx5: support Rx hairpin queues")

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
2019-11-20 17:36:06 +01:00
Dekel Peled
6b7af102d0 net/mlx5: fix getting Rx queue type
The mlx5_rxq_get_type() function uses the input queue index, without
checking it, as an index into the Rx queues array.
If this value is too high, it results in a pointer to memory out of
the Rx queues array bounds.

This patch adds a check of the input queue index, to verify it is valid.

Fixes: d85c7b5ea5 ("net/mlx5: split hairpin flows")

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2019-11-20 17:36:05 +01:00
Dekel Peled
1c7e57f9bd net/mlx5: set maximum LRO packet size
This patch implements use of the API for the LRO aggregated packet
maximum size.
Rx queue creation is updated to use the relevant configuration.
Documentation is updated accordingly.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2019-11-12 01:43:47 +01:00
Pavan Nikhilesh
8b945a7f7d drivers/net: update Rx RSS hash offload capabilities
Add DEV_RX_OFFLOAD_RSS_HASH flag for all PMDs that support RSS hash
delivery.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2019-11-11 16:15:37 +01:00
Ori Kam
d85c7b5ea5 net/mlx5: split hairpin flows
Since the encap action is not supported in RX, we need to split the
hairpin flow into RX and TX.

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-11-08 23:15:04 +01:00
Ori Kam
63bd16292c net/mlx5: support RSS on hairpin
Add support for RSS on hairpin queues.

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-11-08 23:15:04 +01:00
Ori Kam
e79c9be915 net/mlx5: support Rx hairpin queues
This commit adds support for creating Rx hairpin queues.
A hairpin queue is a queue that is created using DevX and used only by
the HW. As a result, the data part of the RQ is not used.

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-11-08 23:15:04 +01:00
Xiaoyu Min
8e2f25cf3c net/mlx5: fix crash on hash Rx queue handling for drop
When creating the hrxq for the drop queue, QP creation could fail and
jump to the error handler, which releases the created ind_table by
calling the drop release function; that function takes the rte_ethdev
as its only parameter and uses priv->drop_queue.hrxq as the object to
release.

Unfortunately, at this point the hrxq is not yet allocated and
priv->drop_queue.hrxq is still NULL, which leads to a segfault.

This patch fixes the above by allocating the hrxq in the first place,
so that when the error happens the hrxq is released last.

This patch also releases the other allocated resources in the correct
order, which was missing previously.

Fixes: 78be885295 ("net/mlx5: handle drop queues as regular queues")
Cc: stable@dpdk.org

Reported-by: Zengmo Gao <gaozengmo@jd.com>
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-10-23 16:43:10 +02:00
Ori Kam
e72bd9603e net/mlx5: fix allocation size of RQT attribute
The receive queues list size is based on the size of uint32_t, so when
allocating the memory the correct value should be used. Otherwise there
is a risk of corrupting the memory, depending on the number of queues,
because there is some pad area for alignment. If the queue number is
not large enough, the issue cannot be observed.

Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Cc: stable@dpdk.org

Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-10-23 16:43:08 +02:00
Matan Azrad
17ed314c6c net/mlx5: allow LRO per Rx queue
Enabling the LRO offload per queue makes sense because the user will
probably want to allocate a different mempool for LRO queues; the LRO
mempool mbuf size may be bigger than that of a non-LRO mempool.

Change the LRO offload to be per queue instead of per port.

If one of the queues has LRO enabled, all the queues will be configured
via DevX.

If RSS flows direct TCP packets to queues with different LRO settings,
these flows will not be offloaded with LRO.

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-07-29 16:54:27 +02:00
Matan Azrad
5158260917 net/mlx5: allow implicit LRO flow
When a user configures LRO in the port offloads, they probably want
each TCP packet to have a chance to open an LRO session.

The PMD did not configure LRO in the flow TIR if the flow did not
explicitly contain a TCP item, even though the flow included TCP
traffic.

For example, the following flows were not LRO offloaded:
pattern eth / end, pattern eth / ip / end, pattern eth / ipv6 / end.

Enable the LRO configuration for all the TIRs if LRO is configured on
the port.

There is no performance impact for non-LRO traffic in these TIRs.

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-07-29 16:54:27 +02:00
Matan Azrad
2579543f60 net/mlx5: handle LRO packets in regular Rx queue
When the LRO offload is configured on an Rx queue, the HW may coalesce
TCP packets from the same TCP connection into a single packet.

In this case the SW should fix the relevant packet headers, because the
HW does not update them according to the newly created packet
characteristics, but provides the updated values in the CQE.

Add header-update code to the regular Rx burst function to support the
LRO feature.

Make sure the first mbuf has enough space to include each TCP header;
otherwise the header update may cross mbufs, which complicates the
operation too much.

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-07-29 16:54:27 +02:00
Matan Azrad
18a68e046b net/mlx5: fix DevX Rx queue memory alignment
The alignment requested by the FW for WQ buffer allocation is 512 bytes.

Change it from cache-line alignment to 512 bytes.

Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-07-29 16:54:27 +02:00
Matan Azrad
bd41389e35 net/mlx5: allow LRO in regular Rx queue
LRO support was only available with MPRQ, hence the MPRQ Rx burst
function was selected when LRO was configured on the port.

The current MPRQ support suffers from bad memory utilization, since an
external mempool is allocated by the PMD for the packet data in
addition to the user mempool; besides that, the user may get packet
data addresses they did not configure.

Even though MPRQ has the best receive performance in most cases,
because of the above facts it is better to remove the automatic MPRQ
selection when LRO is configured.

MPRQ is now selected only when the user forces it through the PMD
arguments, including in the LRO case.

Allow the LRO offload using the regular RQ with the regular Rx burst
function.

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-07-29 16:54:27 +02:00
Matan Azrad
1a241e5579 net/mlx5: fix DevX Rx queue type
When the Rx queue is not in striding RQ mode, it should be configured
as a cyclic RQ.

In this case the type remains 0, which means the linked-list type.

Set the RQ type to cyclic when the queue is not in striding RQ mode.

Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-07-29 16:54:27 +02:00
Matan Azrad
b7d1e5d4d1 net/mlx5: fix DevX scattered Rx queue size
The WQ size configuration via DevX did not take into account the
maximum number of segments per packet, which wrongly configured a
bigger WQE size than the size expected by the PMD in other places.

In scatter mode, the stride size should be the segment size multiplied
by the maximum number of segments per packet.
The number of WQEs per WQ should be the number of descriptors divided
by the maximum number of segments per packet.

Fix the size calculations according to the above rule.
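
As an illustration (the numbers are assumptions, not defaults): with
2048-byte segments and at most 4 segments per packet, the stride size
becomes 2048 * 4 = 8192 bytes, and 1024 descriptors translate into
1024 / 4 = 256 WQEs.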

Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-07-29 16:54:27 +02:00
Matan Azrad
be39124e5b net/mlx5: support mbuf headroom for LRO packet
Patch [1] zeroes the mbuf headroom when the port is configured with
LRO, because when working with more than one stride per packet the HW
cannot guarantee a headroom at the start stride of each packet.

Change the solution to support the mbuf headroom by adding an empty
buffer as the first packet segment; scatter mode must be enabled to
support it.

[1] http://patches.dpdk.org/patch/56912/

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-07-29 16:54:27 +02:00
Matan Azrad
50c00baff7 net/mlx5: limit LRO size to maximum Rx packet
The max_rx_pkt_len field in the Rx configuration indicates the maximum
size of an Rx packet to be received.

There was no field to indicate the maximum size of an LRO packet to be
received by the application.

Assuming the user configures max_rx_pkt_len as the maximum LRO packet
length when LRO is configured on the port, the PMD limits the maximum
LRO packet size received from the HW to max_rx_pkt_len.

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-07-29 16:54:27 +02:00
Matan Azrad
721c953018 net/mlx5: fix Rx scatter mode validation
If the mbuf size of the Rx mempool supplied by the user in the Rx setup
is unable to contain the maximum Rx packet length in addition to the
mbuf headroom, the Rx scatter offload must be configured. Otherwise,
there is not enough space in a single mbuf to contain a packet of the
maximum Rx packet length.

The PMD did not return an error in the above-mentioned case.

Return an error in that case.
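
For example (illustrative numbers, assuming the default 128-byte
RTE_PKTMBUF_HEADROOM): a mempool with a 2176-byte data room can hold
packets of up to 2176 - 128 = 2048 bytes in a single mbuf, so a maximum
Rx packet length of 9000 bytes requires the scatter offload.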

Fixes: 7d6bf6b866 ("net/mlx5: add Multi-Packet Rx support")
Fixes: edad38fcd0 ("net/mlx: enhance Rx scatter mode detection")
Cc: stable@dpdk.org

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
2019-07-29 16:54:27 +02:00