Get burst mode information for Rx/Tx queues in mlx5.
Provide callback functions to show this information in
the "show rxq info" and "show txq info" output.
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Use the RTE_LIBRTE_MLX5_DEBUG configuration flag to remove the dependency
on the NDEBUG definition. This is a preparation step for switching
from standard assert clauses to DPDK RTE_ASSERT ones in the mlx5 driver.
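As an illustration only, the intended kind of wrapper could look like the
sketch below; the driver's actual macro may differ:

    #include <rte_debug.h>

    /* Sketch: assertions controlled by RTE_LIBRTE_MLX5_DEBUG rather than
     * by the NDEBUG definition.
     */
    #ifdef RTE_LIBRTE_MLX5_DEBUG
    #define MLX5_ASSERT(exp) RTE_VERIFY(exp)
    #else
    #define MLX5_ASSERT(exp) RTE_ASSERT(exp)
    #endif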
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The inline feature is designed to save PCI bandwidth by copying some
of the packet data into the WQE. If enabled, this feature works for all
packets. In some cases, when using external memory, the PCI bandwidth
is not relevant since the memory can be accessed by other means.
This commit introduces the ability to control inlining with mbuf
granularity.
In order to use this feature, the application should register the field
name and restart the port.
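A rough illustration of how an application could register such a per-mbuf
hint as a dynamic mbuf flag and set it on selected packets; the flag name
below is only a placeholder, the real name is defined by the PMD:

    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    /* Placeholder name: the real flag name is exported by the mlx5 PMD. */
    static const struct rte_mbuf_dynflag noinline_desc = {
        .name = "example_no_inline_hint",
    };
    static uint64_t noinline_flag;

    static int
    register_noinline_hint(void)
    {
        int bit = rte_mbuf_dynflag_register(&noinline_desc);

        if (bit < 0)
            return -1;
        noinline_flag = 1ULL << bit;
        return 0;
    }

    /* Mark one mbuf so the PMD may skip inlining its data into the WQE. */
    static void
    mark_no_inline(struct rte_mbuf *m)
    {
        m->ol_flags |= noinline_flag;
    }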
Signed-off-by: Ori Kam <orika@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The CQE has an owner bit to indicate whether it is under SW or HW control.
Share a single CQE check across all the mlx5 drivers.
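The shared helper is essentially an ownership test on the CQE op_own
field; a simplified sketch of the idea (the shared function in the
common code may differ):

    #include <stdint.h>

    /* The owner bit in op_own toggles on every wrap of the CQ, so a CQE
     * belongs to SW only when its owner bit matches the phase derived
     * from the consumer index (cq_size is a power of two).
     */
    static inline int
    cqe_is_sw_owned(uint8_t op_own, uint16_t ci, uint16_t cq_size)
    {
        const uint8_t owner = op_own & 0x1;
        const uint8_t sw_phase = !!(ci & cq_size);

        return owner == sw_phase;
    }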
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
A new Mellanox vdpa PMD will be added to support vdpa operations by
Mellanox adapters.
This vdpa PMD design includes mlx5_glue and mlx5_devx operations and
large parts of them are shared with the net/mlx5 PMD.
Create a new common library in drivers/common for mlx5 PMDs.
Move mlx5_glue, mlx5_devx_cmds and their dependencies to the new mlx5
common library in drivers/common.
The files mlx5_devx_cmds.c, mlx5_devx_cmds.h, mlx5_glue.c,
mlx5_glue.h and mlx5_prm.h are moved as is from drivers/net/mlx5 to
drivers/common/mlx5.
Share the log mechanism macros.
Also separate the log mechanism to allow different log level control for
the common library.
Build files and version files are adjusted accordingly.
Include lines are adjusted accordingly.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This patch adds support to the MLX5 PMD for matching on the GTP item,
fields msg_type and teid, according to the RFC [1].
GTP item validation and translation functions are added and called.
The GTP tunnel type is added to the supported tunnels.
[1] http://mails.dpdk.org/archives/dev/2019-December/152799.html
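For illustration, a rule matching on GTP msg_type and TEID can be
expressed with the generic rte_flow API roughly as follows (the values
are placeholders):

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Match GTP traffic with msg_type 255 (G-PDU) and a specific TEID. */
    static const struct rte_flow_item_gtp gtp_spec = {
        .msg_type = 255,
        .teid = RTE_BE32(0x1234),
    };
    static const struct rte_flow_item_gtp gtp_mask = {
        .msg_type = 0xff,
        .teid = RTE_BE32(0xffffffff),
    };
    static const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        {
            .type = RTE_FLOW_ITEM_TYPE_GTP,
            .spec = &gtp_spec,
            .mask = &gtp_mask,
        },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };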
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
The free-on-completion queue keeps indices into the elts array;
all mbufs stored below such an index should be freed on arrival
of a normal send completion. In the debug version it also contains
the index of the completed transmit descriptor (WQE) to check
queue synchronization.
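Conceptually, each entry of this queue can be pictured as below (an
illustrative sketch only, not the driver's actual layout):

    #include <stdint.h>

    /* Illustrative free-on-completion queue entry. */
    struct tx_fcq_entry {
        uint16_t elts_tail; /* free all elts[] mbufs below this index */
    #ifndef NDEBUG
        uint16_t wqe_ci;    /* completed WQE index, used to verify that
                             * the SQ and CQ stay synchronized
                             */
    #endif
    };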
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
A new software-managed entity is introduced in the Tx datapath
- the free-on-completion queue. This queue keeps the information
on how many buffers stored in the elts array must be freed on send
completion. Each element of the queue contains the transmit
descriptor index, to be in sync with completion entries (in
debug build only), and the index in the elts array up to which
buffers are freed.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
This is a preparation step: we are going to store the index
of elts to free on completion in the dedicated free-on-completion
queue. This patch updates the elts freeing routine
and the Tx error handling routine to be in sync with
the upcoming new queue.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
In the Rx datapath the flags in the newly allocated mbufs
are all explicitly cleared, but the EXT_ATTACHED_MBUF flag must be
preserved. This allows the use of mbuf pools with pre-attached
external data buffers.
The vectorized rx_burst routines are updated to
inherit EXT_ATTACHED_MBUF from the mbuf pool private
RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF flag.
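The essence of the change, shown in scalar form (a sketch; the
vectorized code applies the same idea with SIMD masks):

    #include <rte_mbuf.h>

    /* Reset Rx mbuf flags but keep EXT_ATTACHED_MBUF if the mbuf carries
     * a pinned external buffer attached by the pool.
     */
    static inline void
    rx_mbuf_reset_flags(struct rte_mbuf *m, uint64_t new_flags)
    {
        m->ol_flags = (m->ol_flags & EXT_ATTACHED_MBUF) | new_flags;
    }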
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
As a result of testing, it was found that some hosts suffer
a performance penalty from the write memory barrier required
after doorbell writing. Before the 19.08 release there was a
heuristic to decide whether the write memory barrier should be
performed: for bursts of the recommended size (or a multiple of it),
it was assumed there were some extra ongoing packets in the
next burst and the write memory barrier could be skipped (it would
be performed in the next burst, at least after descriptor
writing).
This patch restores that behaviour; the devarg tx_db_nc=2
must be specified to engage this performance tuning feature.
Fixes: 8409a28573 ("net/mlx5: control transmit doorbell register mapping")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
The rdma-core library can map the doorbell register in two ways,
depending on the environment variable "MLX5_SHUT_UP_BF":
- as regular cached memory, if the variable is either missing or
set to zero. This type of mapping may cause significant
doorbell register writing latency and requires an explicit
memory write barrier to mitigate this issue and prevent
write combining.
- as non-cached memory, if the variable is present and set to a
non-zero value. This type of mapping may have a performance
impact under heavy load conditions, but the explicit write
memory barrier is not required, which may improve core
performance.
A new devarg "tx_db_nc" is introduced. If this parameter is
set to zero, the doorbell register is forced to be mapped as
cached memory and requires an explicit memory barrier after
writing to it. If "tx_db_nc" is set to a non-zero value, the doorbell
will be mapped as non-cached memory, not requiring the memory
barrier. If "tx_db_nc" is missing, the behaviour is defined
by the presence of "MLX5_SHUT_UP_BF" in the environment. If the
variable is missing as well, the default value zero is used for
ARM64 hosts and one for others.
At run time the code checks the mapping type and issues the
memory barrier after writing to the Tx doorbell register if it is
needed. The mapping type is extracted directly from the
uar_mmap_offset field in the queue properties.
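Schematically, the run-time decision looks like the sketch below,
assuming the queue keeps a db_nc flag derived from uar_mmap_offset (the
driver's actual helper may differ):

    #include <rte_atomic.h>
    #include <rte_io.h>

    /* Ring the Tx doorbell; issue the write barrier only for the cached
     * (write-combining) UAR mapping.
     */
    static inline void
    tx_ring_doorbell(volatile void *db_reg, uint64_t db_val, int db_nc)
    {
        rte_write64(db_val, db_reg);
        if (!db_nc)
            rte_wmb(); /* flush write-combining buffers */
    }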
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Since the encap action is not supported in RX, we need to split the
hairpin flow into RX and TX.
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit adds support for creating Tx hairpin queues.
A hairpin queue is a queue that is created using DevX and used only
by the HW.
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently all Tx queues are created using Verbs.
This commit modifies the naming so it will not include Verbs,
since in the next commit a new type (hairpin) will be introduced.
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit adds support for creating Rx hairpin queues.
A hairpin queue is a queue that is created using DevX and used only
by the HW. As a result, the data part of the RQ is not used
at all.
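From the application side, such a queue is requested through the generic
ethdev hairpin API, e.g. (a minimal sketch; the peer port/queue values
are placeholders):

    #include <rte_ethdev.h>

    /* Bind Rx queue 'rxq' of 'port_id' to Tx queue 'peer_txq' of the
     * same port.
     */
    static int
    setup_rx_hairpin_queue(uint16_t port_id, uint16_t rxq,
                           uint16_t peer_txq, uint16_t nb_desc)
    {
        struct rte_eth_hairpin_conf conf = {
            .peer_count = 1,
            .peers[0] = { .port = port_id, .queue = peer_txq },
        };

        return rte_eth_rx_hairpin_queue_setup(port_id, rxq, nb_desc,
                                              &conf);
    }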
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
In the process of recovery from an error CQE, when using vectorized Rx
burst, the initialization of the data length in mbufs was not done.
As a result, a wrong length was left in the mbuf, causing
memory overwrite or wrong error reporting.
This patch fixes the initialization of the mbuf data length during
recovery from an error CQE, when using vectorized Rx burst.
Fixes: 88c0733535 ("net/mlx5: extend Rx completion with error handling")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The HW ptype information is missing for the IP-in-IP tunnel.
It should be the RTE_PTYPE_TUNNEL_IP ptype.
Fixes: 5e33bebdd8 ("net/mlx5: support IP-in-IP tunnel")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
There is a limit on completion descriptor fetching to improve
latency. If the burst size is large, there might not be enough
resources freed in completion processing. This fix reiterates
the sending loop and allows multiple completion descriptor fetches
and processing.
Fixes: 18a1c20044 ("net/mlx5: implement Tx burst template")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
The completion loop speed optimizations for error-free
operation are done - no CQE field is fetched on each loop
iteration. Also, the code size is optimized - the flush
buffers routine is invoked once.
Fixes: 318ea4cfa1 ("net/mlx5: fix Tx completion descriptors fetching loop")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Enabling LRO offload per queue makes sense because the user will
probably want to allocate a different mempool for LRO queues - the LRO
mempool mbuf size may be bigger than that of a non-LRO mempool.
Change the LRO offload to be per queue instead of per port.
If one of the queues has LRO enabled, all the queues will be
configured via DevX.
If RSS flows direct TCP packets to queues with different LRO settings,
these flows will not be offloaded with LRO.
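With per-queue LRO, an application can enable the offload only on
selected queues and feed them from a dedicated mempool, e.g. (a sketch
with placeholder parameters):

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Enable LRO only on this Rx queue, using a mempool with larger
     * mbufs.
     */
    static int
    setup_lro_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_desc,
                    struct rte_mempool *lro_pool)
    {
        struct rte_eth_rxconf rxconf = {
            .offloads = DEV_RX_OFFLOAD_TCP_LRO,
        };

        return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, lro_pool);
    }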
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When a user configures LRO in the port offloads, they probably want each
TCP packet to have a chance to open an LRO session.
The PMD did not configure LRO in the flow TIR if the flow did not
explicitly include a TCP item, even though the flow carried TCP traffic.
For example, the following flows were not LRO offloaded:
pattern eth / end, pattern eth / ip / end, pattern eth / ipv6 / end.
Enable LRO configuration for all the TIRs if LRO is configured on the
port.
There is no performance impact for non-LRO traffic in these TIRs.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
LRO support was only for MPRQ, hence the MPRQ Rx burst was selected when
LRO was configured on the port.
The current MPRQ support suffers from bad memory utilization
since an external mempool is allocated by the PMD for the packet data
in addition to the user mempool; besides that, the user may get packet
data addresses which were not configured by them.
Even though MPRQ has the best receive performance in
most cases, because of the above facts it is better to remove the
automatic MPRQ selection when LRO is configured.
Make MPRQ selected only when the user forces it via the PMD
arguments, including the LRO case.
Allow LRO offload using the regular RQ with the regular Rx burst
function.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_rxq_release() calls mlx5_release_dbr() to release the
doorbell allocated for this Rx queue.
This call is relevant only for Rx queue objects created using the
DevX API.
This patch adds the required check, to call mlx5_release_dbr()
only when relevant.
It also updates mlx5_release_dbr() to use the input offset correctly.
Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
NVGRE has a GRE header with c_rsvd0_ver value 0x2000 and protocol
value 0x6558.
These values should be matched when item_nvgre is provided.
This patch adds a validation function for the NVGRE item.
It also updates the translation function of the NVGRE item to add the
required values if they were not specified.
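For reference, these are the GRE header values implied by an empty NVGRE
spec, shown with the rte_flow NVGRE item layout (illustration only):

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Defaults applied when the NVGRE item carries no explicit spec:
     * c_rsvd0_ver = 0x2000 (key present bit) and protocol = 0x6558
     * (Transparent Ethernet Bridging).
     */
    static const struct rte_flow_item_nvgre nvgre_defaults = {
        .c_k_s_rsvd0_ver = RTE_BE16(0x2000),
        .protocol = RTE_BE16(0x6558),
    };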
Original work by Xiaoyu Min <jackmin@mellanox.com>
Fixes: fc2c498ccb ("net/mlx5: add Direct Verbs translate items")
Cc: stable@dpdk.org
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Xiaoyu Min <jackmin@mellanox.com>
An LRO packet may consume all the stride memory, hence the PMD cannot
guarantee head-room for the LRO mbuf.
The issue is the lack of HW support for writing the packet at an offset
from the stride start.
A new striding RQ feature may be added in CX6 DX to allow head-room and
tail-room for the LRO strides.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
In preparation for LRO support, where a packet can consume all the
stride memory, the external mbuf shared information can no longer be
at the end of the stride, because the HW may write the packet data to
the entire stride memory.
Move the shared information memory from the stride to the control
memory of the external mbuf.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Implement LRO support using a single RQ object per DPDK RxQ.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Function mlx5_rxq_obj_new(), previously called mlx5_rxq_ibv_new(),
supports creating Rx queue objects using Verbs.
This patch expands the relevant functions to support creating
Verbs or DevX Rx queue objects:
Function mlx5_rxq_obj_new() updated to create RQ object using DevX.
Function mlx5_ind_table_obj_new() updated to create RQT object using DevX.
Function mlx5_hrxq_new() updated to create TIR object using DevX.
New utility functions added to perform specific operations:
mlx5_devx_rq_new(), mlx5_devx_wq_attr_fill(),
mlx5_devx_create_rq_attr_fill().
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Prepare for introducing the use of the DevX TIR object.
The hash Rx queue is currently created using a Verbs QP only.
The next patches will add the option to create it with a TIR object
using DevX.
This patch renames hrxq_ibv to hrxq wherever relevant, and adds
the DevX items to relevant structs.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Prepare for the introduction of the DevX RQT object.
The Rx indirection table object is currently created using Verbs only.
The next patches will add the option to create an RQT object using
DevX.
This patch renames ind_table_ibv to ind_table_obj wherever relevant,
and adds the DevX items to relevant structs.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Prepare for the introduction of the DevX RxQ object.
The RxQ object is currently created using Verbs only.
The next patches will add the option to create the RxQ object using DevX.
This patch renames rxq_ibv to rxq_obj wherever relevant, and adds the
DevX items to relevant structs.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
When using the DevX API, memory for door-bell records should be allocated
by the PMD and registered using the DevX API.
This patch implements the utility functions to support it (a conceptual
sketch follows the list below):
- Add struct mlx5_devx_dbr_page, containing door-bell page data.
- Add a list of struct mlx5_devx_dbr_page door-bell pages to the device
private data.
- Implement function mlx5_alloc_dbr_page() to allocate a page for
door-bell records and register it using the DevX API.
- Implement function mlx5_get_dbr() to acquire a door-bell record
from the door-bell pages, allocating a new page if needed.
- Implement function mlx5_release_dbr() to release a door-bell
record that is no longer needed, freeing the containing page if
it becomes empty.
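A conceptual sketch of the door-bell page bookkeeping described above
(names and sizes are illustrative, not the driver's definitions):

    #include <stdint.h>

    #define DBR_PAGE_SIZE 4096
    #define DBR_SIZE 8
    #define DBRS_PER_PAGE (DBR_PAGE_SIZE / DBR_SIZE)

    /* One umem-registered page holding many 8-byte door-bell records,
     * tracked with a usage bitmap; the DevX umem handle would also be
     * stored here.
     */
    struct dbr_page {
        uint8_t dbrs[DBR_PAGE_SIZE];
        uint32_t dbr_count;                      /* records in use */
        uint64_t dbr_bitmap[DBRS_PER_PAGE / 64]; /* 1 = record in use */
    };

    /* Acquire a free record: returns its byte offset within the page,
     * or -1 if the page is full and a new page must be allocated and
     * registered.
     */
    static int64_t
    dbr_page_get(struct dbr_page *page)
    {
        unsigned int i;

        for (i = 0; i < DBRS_PER_PAGE; i++) {
            if (!(page->dbr_bitmap[i / 64] & (1ULL << (i % 64)))) {
                page->dbr_bitmap[i / 64] |= 1ULL << (i % 64);
                page->dbr_count++;
                return (int64_t)i * DBR_SIZE;
            }
        }
        return -1;
    }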
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Add a command-line argument to set the LRO session timeout.
Add an LRO settings struct in the PMD configuration struct.
Add support for the LRO offload in the port configuration.
Add macros and a function to check if LRO is supported and enabled.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This patch adds the implementation of the tx_burst routine template.
The template supports all Tx offloads, and multiple optimized
tx_burst routines can be generated by the compiler from this one.
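The idea, schematically: one always-inline template parameterized by a
constant offload flags word, from which the compiler emits specialized
burst routines with dead branches removed (an illustrative sketch, not
the driver's actual template):

    #include <stdint.h>
    #include <rte_common.h>
    #include <rte_mbuf.h>

    /* Illustrative offload flag bits for the template. */
    #define TXOFF_VLAN (1u << 0)
    #define TXOFF_CSUM (1u << 1)

    /* 'olx' is a compile-time constant in each wrapper, so the compiler
     * removes the branches that do not apply.
     */
    static __rte_always_inline uint16_t
    tx_burst_tmpl(void *txq, struct rte_mbuf **pkts, uint16_t pkts_n,
                  const unsigned int olx)
    {
        uint16_t i;

        for (i = 0; i < pkts_n; i++) {
            if (olx & TXOFF_VLAN) {
                /* build the VLAN insertion part of the descriptor */
            }
            if (olx & TXOFF_CSUM) {
                /* request checksum offload in the descriptor */
            }
            /* post the WQE for pkts[i] ... */
        }
        RTE_SET_USED(txq);
        return pkts_n;
    }

    /* Specialized routines generated from the single template. */
    uint16_t
    tx_burst_none(void *txq, struct rte_mbuf **pkts, uint16_t n)
    {
        return tx_burst_tmpl(txq, pkts, n, 0);
    }

    uint16_t
    tx_burst_vlan_csum(void *txq, struct rte_mbuf **pkts, uint16_t n)
    {
        return tx_burst_tmpl(txq, pkts, n, TXOFF_VLAN | TXOFF_CSUM);
    }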
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
This patch updates the Tx datapath control and configuration
structures and code for managing Tx datapath settings.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
This patch removes the existing Tx datapath code
as a preparation step before introducing the new
implementation. The following entities are being
removed:
- deprecated devargs support
- tx_burst() routines
- related PRM definitions
- SQ configuration code
- Tx routine selection code
- incompatible Tx completion code
The following devargs are deprecated and ignored:
- "txq_inline" is going to be converted to "txq_inline_max"
for compatibility reasons
- "tx_vec_en"
- "txqs_max_vec"
- "txq_mpw_hdr_dseg_en"
- "txq_max_inline_len" is going to be converted
to "txq_inline_mpw" for compatibility reasons
The deprecated devarg keys are recognized by the PMD
and ignored/converted to the new ones in order not
to block device probing.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
The RQ error recovery mechanism in the PMD invokes Verbs functions to
modify the RQ states in order to reset the RQ and to reactivate it.
These Verbs functions are not allowed to be invoked from a secondary
process, hence the PMD skips the recovery when the error is captured by
secondary process queues.
Using the DPDK IPC mechanism, the secondary process can request Verbs
queue state modifications to be done synchronously by the primary
process.
Add support for secondary process Rx error recovery.
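Schematically, the secondary-to-primary request can be issued with the
generic DPDK multi-process API as below; the message name and payload
layout here are illustrative only, not the mlx5 IPC protocol:

    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <rte_eal.h>
    #include <rte_string_fns.h>

    /* Ask the primary process to reset a given Rx queue (illustrative). */
    static int
    request_rxq_reset(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mp_msg req;
        struct rte_mp_reply reply;
        struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
        uint16_t payload[2] = { port_id, queue_id };
        int ret;

        memset(&req, 0, sizeof(req));
        rte_strlcpy(req.name, "example_mlx5_rxq_reset", sizeof(req.name));
        memcpy(req.param, payload, sizeof(payload));
        req.len_param = sizeof(payload);
        ret = rte_mp_request_sync(&req, &reply, &ts);
        if (ret == 0)
            free(reply.msgs); /* the reply array is owned by the caller */
        return ret;
    }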
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
When WQEs are posted to the HW to send packets, the PMD may get a
completion report with an error from the HW, aka an error CQE, which is
associated with a bad WQE.
The error reason may be a bad address, wrong lkey, bad sizes, etc.
that can be wrongly configured by the PMD or by the user.
Checking for all the possible mistakes to prevent error CQEs doesn't make
sense due to performance impacts and huge complexity.
The error CQEs move the SQ to the error state, which causes all the
subsequently posted WQEs to be completed with a CQE flush error forever.
Currently, the PMD doesn't handle Tx error CQEs and may even crash
when one of them appears.
Extend the Tx datapath to detect these error CQEs, to report them via
the statistics error counters, and to recover the SQ by moving its state
back to ready and adjusting the management variables appropriately.
Sometimes the root cause of an error CQE is very hard to debug and may
even be related to corner cases which are not easily reproducible, hence
a dump file with debug information will be created for the first few
error CQEs; their number can be configured by the PMD probe
parameters.
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
When WQEs are posted to the HW to receive packets, the PMD may receive
a completion report with an error from the HW, aka an error CQE, which is
associated with a bad WQE.
The error reason may be a bad address, wrong lkey, small buffer size,
etc. that can be wrongly configured by the PMD or by the user.
Checking for all the possible mistakes to prevent error CQEs doesn't make
sense due to performance impacts; moreover, some error CQEs can be
triggered by packets coming from the wire, over which the DPDK
application has no control.
Most of the error CQE types move the RQ to the error state, which
causes all the subsequently received packets to be dropped by the HW and
to be completed with a CQE flush error forever.
The current solution detects these error CQEs and even reports the
errors to the user via the statistics error counters, but without
recovery, so if the RQ enters the error state it never moves to the
ready state again and all subsequent packets will be dropped.
Extend the error CQE handling for recovery by moving the state back to
ready, and rearranging all the RQ WQEs and the management
variables appropriately.
Sometimes the root cause of an error CQE is very hard to debug and may
even be related to corner cases which are not easily reproducible,
hence a dump file with debug information will be created for the first
few error CQEs; their number can be configured by the PMD probe
parameters.
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Add a global function in the PMD which dumps debug information to
a specific file.
The data can be printed in hexadecimal format or as a regular string.
The number of debug files per PMD entity is limited by a new PMD
probe parameter called max_dump_files_num.
The files will be created in the /var/log directory or in the current
directory.
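The helper boils down to appending a titled hex dump (or a plain string)
to a file; an illustrative standalone sketch of that behaviour, not the
driver's exact function:

    #include <stdio.h>
    #include <rte_hexdump.h>

    /* Append a titled hex dump, or a plain string when hex_len is 0,
     * to the given file.
     */
    static void
    dump_debug_info(const char *fname, const char *title,
                    const void *buf, unsigned int hex_len)
    {
        FILE *f = fopen(fname, "a");

        if (f == NULL)
            return;
        if (hex_len)
            rte_hexdump(f, title, buf, hex_len);
        else
            fprintf(f, "%s: %s\n", title, (const char *)buf);
        fclose(f);
    }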
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
There is a full correlation between the CQE indexes and the WQE indexes
in the vectorized Rx queue management.
When the RQ is moved to the reset state, the correlation may break
because the HW restarts the RQ polling from index 0 while the CQ polling
continues regularly.
In preparation for CQE error handling, where the RQ can be reset,
the correlation dependency should be removed from all the Rx queue
index management.
Remove the aforementioned dependency from the vectorized Rx burst
functions.
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Multiple functions were declared in the header file mlx5_rxtx.h,
implemented in mlx5_rxq.c, and called only in mlx5_rxq.c.
This patch moves all these function declarations into mlx5_rxq.c,
as static functions.
Some function implementations were moved higher in the file to
precede the function calls.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Functions implemented but never called:
mlx5_rxq_ibv_releasable()
mlx5_rxq_cleanup()
mlx5_txq_ibv_releasable()
Function declared but not implemented:
rxq_alloc_mprq_buf()
This patch removes these functions from the code and the header file.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
Add 'rte_' prefix to structures:
- rename struct ether_addr as struct rte_ether_addr.
- rename struct ether_hdr as struct rte_ether_hdr.
- rename struct vlan_hdr as struct rte_vlan_hdr.
- rename struct vxlan_hdr as struct rte_vxlan_hdr.
- rename struct vxlan_gpe_hdr as struct rte_vxlan_gpe_hdr.
Do not update the command line library to avoid adding a dependency to
librte_net.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Currently the allocation of the jump-to-QP action is done in flow apply;
this results in a memory leak.
This patch fixes the issue by moving the allocation and release of the
jump-to-QP action to the responsibility of the hrxq.
Fixes: cbb66daa3c ("net/mlx5: prepare Direct Verbs for Direct Rule")
Cc: stable@dpdk.org
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
The UAR (User Access Region) register does not need to be remapped for
the primary process; it should be remapped only for secondary processes.
The UAR register table is in the process-private structure in
rte_eth_devices[],
(struct mlx5_proc_priv *)rte_eth_devices[port_id].process_private
The actual UAR table follows the data structure, and the table is used
for both Tx and Rx.
For Tx, the BlueFlame register in the UAR is used to ring the doorbell.
MLX5_TX_BFREG(txq) is defined to get the register for a txq. Each process
accesses its own private data to acquire the register from the UAR table.
For Rx, the doorbell in the UAR is required for arming the CQ event.
However, it is a known issue that the register isn't remapped for
secondary processes.
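The layout described above amounts to roughly the following sketch
(member names are assumptions and may differ from the driver):

    #include <rte_ethdev.h>

    /* Per-process private data: the UAR register table follows the
     * structure, one entry per Tx queue.
     */
    struct mlx5_proc_priv {
        size_t uar_table_sz; /* number of table entries */
        void *uar_table[];   /* BlueFlame/doorbell registers, per txq */
    };

    #define MLX5_PROC_PRIV(port_id) \
        ((struct mlx5_proc_priv *)rte_eth_devices[port_id].process_private)

    #define MLX5_TX_BFREG(txq) \
        MLX5_PROC_PRIV((txq)->port_id)->uar_table[(txq)->idx]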
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
The queue index is redundantly stored in both Rx and Tx structures,
e.g. txq_ctrl->idx and txq->stats.idx. Both are consolidated into a
single storage - rxq->idx and txq->idx.
Also, rxq and txq are moved to the beginning of their control structures
(rxq_ctrl and txq_ctrl) for cacheline alignment.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
The implementation reuses the external memory registration work done by
commit [1].
Note about representors:
The current representor design will not work
with these map and unmap functions. The reason is that for representors
we have multiple IB devices sharing the same PCI function, so mapping will
happen only on one of the representors and not all of them.
While it is possible to implement such support, the IB representor
design is going to be changed during DPDK 19.05. The new design will have
a single IB device for all representors, hence sharing a single
memory region between all representors will be possible.
[1]
commit 7e43a32ee0
("net/mlx5: support externally allocated static memory")
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>