net/ixgbe: remove memory barrier from NEON Rx

The memory barrier was intended to ensure descriptor data integrity
(see the comments in [1]). Since the NEON path now loads each whole
descriptor entry with a single atomic 128-bit load, there is no
partial-load ordering left to enforce. Remove the barrier accordingly.

Also corrected a couple of code comments.

In terms of performance, slightly higher average throughput was
observed in tests with an 82599ES NIC.

[1] http://patches.dpdk.org/patch/18153/

Fixes: 989a84050542 ("net/ixgbe: fix received packets number for ARM NEON")
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
commit 18b7d4eb3d
parent f1f0f39806
Author:    Ruifeng Wang <ruifeng.wang@arm.com>
Date:      2019-08-28 16:24:53 +08:00
Committer: Ferruh Yigit

@@ -214,13 +214,13 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 uint32_t var = 0;
 uint32_t stat;
-/* B.1 load 1 mbuf point */
+/* B.1 load 2 mbuf point */
 mbp1 = vld1q_u64((uint64_t *)&sw_ring[pos]);
 /* B.2 copy 2 mbuf point into rx_pkts */
 vst1q_u64((uint64_t *)&rx_pkts[pos], mbp1);
-/* B.1 load 1 mbuf point */
+/* B.1 load 2 mbuf point */
 mbp2 = vld1q_u64((uint64_t *)&sw_ring[pos + 2]);
 /* A. load 4 pkts descs */
@ -228,7 +228,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
descs[1] = vld1q_u64((uint64_t *)(rxdp + 1));
descs[2] = vld1q_u64((uint64_t *)(rxdp + 2));
descs[3] = vld1q_u64((uint64_t *)(rxdp + 3));
rte_smp_rmb();
/* B.2 copy 2 mbuf point into rx_pkts */
vst1q_u64((uint64_t *)&rx_pkts[pos + 2], mbp2);
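
For context, one more hedged sketch (not part of this patch; the lane
and bit position are assumptions based on the ixgbe advanced receive
descriptor write-back format): once the whole entry sits in a NEON
register, the DD (descriptor done) bit can be tested directly from the
loaded value, with no further ordering primitive between the status
check and the rest of the descriptor data.

static inline int desc_done(uint64x2_t desc)
{
	/* status/error dword assumed at the start of the upper quadword;
	 * DD assumed to be bit 0 (IXGBE_RXDADV_STAT_DD). */
	return (int)(vgetq_lane_u64(desc, 1) & 0x1);
}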