numam-dpdk/drivers/net/mlx5/mlx5_mr.h
Dmitry Kozlyuk fec28ca0e3 net/mlx5: support mempool registration
When the first port in a given protection domain (PD) starts,
install a mempool event callback for this PD and register all existing
memory regions (MR) for it. When the last port in a PD closes,
remove the callback and unregister all mempools for this PD.
This behavior can be switched off with a new devarg: mr_mempool_reg_en.
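
A minimal sketch of this flow, using the rte_mempool event API
(rte_mempool_event_callback_register/unregister and rte_mempool_walk);
the pd_ctx structure and the pd_mempool_register()/pd_mempool_unregister()
helpers are hypothetical stand-ins for the driver's internals:

#include <rte_mempool.h>

struct pd_ctx; /* hypothetical per-PD context */

void pd_mempool_register(struct pd_ctx *pd, struct rte_mempool *mp);   /* hypothetical */
void pd_mempool_unregister(struct pd_ctx *pd, struct rte_mempool *mp); /* hypothetical */

/* Invoked for every mempool populated or destroyed after registration. */
static void
pd_mempool_event_cb(enum rte_mempool_event event, struct rte_mempool *mp,
		    void *user_data)
{
	struct pd_ctx *pd = user_data;

	if (event == RTE_MEMPOOL_EVENT_READY)
		pd_mempool_register(pd, mp);
	else if (event == RTE_MEMPOOL_EVENT_DESTROY)
		pd_mempool_unregister(pd, mp);
}

/* Adapter for rte_mempool_walk(), which passes the mempool first. */
static void
pd_mempool_walk_cb(struct rte_mempool *mp, void *arg)
{
	pd_mempool_register(arg, mp);
}

/* First port start in the PD: register existing mempools,
 * then subscribe to future mempool events. */
static void
pd_first_port_start(struct pd_ctx *pd)
{
	rte_mempool_walk(pd_mempool_walk_cb, pd);
	rte_mempool_event_callback_register(pd_mempool_event_cb, pd);
}

/* Last port close in the PD: remove the callback; unregistering
 * the mempools themselves is left to the caller. */
static void
pd_last_port_close(struct pd_ctx *pd)
{
	rte_mempool_event_callback_unregister(pd_mempool_event_cb, pd);
}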

On the TX slow path, i.e. when an MR key for the address of the buffer
to send is not in the local cache, first try to retrieve it from
the database of registered mempools. Direct and indirect mbufs are
supported, as well as externally attached buffers from the MLX5 MPRQ
feature. Lookup in the database of non-mempool memory is used as the
last resort.
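
A simplified sketch of that lookup order; mempool_db_lookup() and
ext_mem_lookup() are hypothetical stand-ins, with UINT32_MAX assumed as
the miss sentinel, and the MPRQ external-buffer case (which resolves
through the underlying mempool) is omitted:

#include <stdint.h>
#include <rte_mbuf.h>

uint32_t mempool_db_lookup(struct rte_mempool *mp, uintptr_t addr); /* hypothetical */
uint32_t ext_mem_lookup(uintptr_t addr);                            /* hypothetical */

/* Slow-path lkey resolution for a buffer missing from the local cache. */
static uint32_t
tx_slowpath_lkey(struct rte_mbuf *mb)
{
	uintptr_t addr = (uintptr_t)mb->buf_addr;
	/* Indirect mbufs carry the pool of the mbuf they are attached to. */
	struct rte_mempool *mp = RTE_MBUF_CLONED(mb) ?
		rte_mbuf_from_indirect(mb)->pool : mb->pool;
	uint32_t lkey;

	/* First consult the database of registered mempools. */
	lkey = mempool_db_lookup(mp, addr);
	if (lkey != UINT32_MAX)
		return lkey;
	/* Last resort: the database of non-mempool (external) memory. */
	return ext_mem_lookup(addr);
}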

RX mempools are registered regardless of the devarg value.
On the RX data path only the local cache and the mempool database are
used. If implicit mempool registration is disabled, these mempools
are unregistered at port stop, releasing the MRs.
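
As an illustration, implicit registration could be switched off per
device with the new devarg; the PCI address and testpmd options below
are placeholders:

dpdk-testpmd -a 0000:3b:00.0,mr_mempool_reg_en=0 -- --rxq=2 --txq=2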

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-19 16:35:16 +02:00

/* SPDX-License-Identifier: BSD-3-Clause
* Copyright 2018 6WIND S.A.
* Copyright 2018 Mellanox Technologies, Ltd
 */

#ifndef RTE_PMD_MLX5_MR_H_
#define RTE_PMD_MLX5_MR_H_

#include <stddef.h>
#include <stdint.h>
#include <sys/queue.h>

#include <rte_ethdev.h>
#include <rte_rwlock.h>
#include <rte_bitmap.h>
#include <rte_memory.h>

#include <mlx5_common_mr.h>

/* First entry must be NULL for comparison. */
#define mlx5_mr_btree_len(bt) ((bt)->len - 1)
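
/*
 * Callback registered with the EAL memory-event mechanism; it is
 * expected to drop cached MRs covering memory that is being freed.
 */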
void mlx5_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr,
			  size_t len, void *arg);

#endif /* RTE_PMD_MLX5_MR_H_ */