net/mlx5: fix Rx packet padding config via DevX
Received packets can be aligned to the cache line size on PCI transactions. This may improve performance by avoiding partial cache line writes, at the cost of increased PCI bandwidth. This feature is supposed to be controlled by the rxq_pkt_pad_en devarg, and that is the case for an RxQ created via the Verbs API. In the DevX API case, however, it is erroneously controlled by the rxq_cqe_pad_en devarg, which is in charge of CQE padding and should not affect RxQ creation.

Fix DevX RxQ creation by using the proper configuration flag for Rx packet padding, which is set by the rxq_pkt_pad_en devarg.

Fixes: dc9ceff73c99 ("net/mlx5: create advanced RxQ via DevX")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
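For context, the devarg named in the message is passed on the device's allowlist string. A hedged usage sketch (the PCI address is a placeholder, not taken from this commit):

```shell
# Enable cache-line-aligned Rx packet padding on an mlx5 port.
# rxq_pkt_pad_en controls packet padding; rxq_cqe_pad_en only CQE padding.
dpdk-testpmd -a 0000:03:00.0,rxq_pkt_pad_en=1 -- -i
```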
commit ff2deada2e
parent a0e4728c43
@@ -294,7 +294,7 @@ static void
 mlx5_devx_wq_attr_fill(struct mlx5_priv *priv, struct mlx5_rxq_ctrl *rxq_ctrl,
 		       struct mlx5_devx_wq_attr *wq_attr)
 {
-	wq_attr->end_padding_mode = priv->config.cqe_pad ?
+	wq_attr->end_padding_mode = priv->config.hw_padding ?
 				    MLX5_WQ_END_PAD_MODE_ALIGN :
 				    MLX5_WQ_END_PAD_MODE_NONE;
 	wq_attr->pd = priv->sh->pdn;