i40e: improve performance of vector PMD

An analysis of the i40e code using Intel® VTune™ Amplifier 2016 showed
that the code was unexpectedly causing stalls due to "Loads blocked by
Store Forwards". This can occur when a load from memory has to wait
due to the prior store being to the same address, but being of a smaller
size i.e. the stored value cannot be directly returned to the loader.
[See ref: https://software.intel.com/en-us/node/544454]
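
As a minimal sketch of the pattern (illustrative code only, not taken from
the driver; the function name is made up): a 16-bit store into a 128-bit
object, followed by a full-width reload of that object, cannot be serviced
from the store buffer, so the load is blocked.

#include <emmintrin.h>
#include <stdint.h>

static inline __m128i
narrow_store_then_wide_load(__m128i desc, uint16_t pkt_len)
{
	/* 16-bit scalar store into bytes 15-14 of the descriptor copy */
	*((uint16_t *)&desc + 7) = pkt_len;
	/*
	 * Full 128-bit reload of the same memory: the prior store is
	 * narrower than the load, so the stored bytes cannot be forwarded
	 * and the load stalls ("Loads blocked by Store Forwards").
	 */
	return _mm_loadu_si128(&desc);
}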

These stalls are due to the way in which the data_len values are handled
in the driver. The lengths are extracted using vector operations, but those
16-bit lengths are then assigned using scalar operations, i.e. 16-bit
stores.

These regular 16-bit stores actually have two effects in the code:
* they cause the "Loads blocked by Store Forwards" issues reported
* they also cause the preceding descriptor loads in the RX function to become
a load followed by a store to an address on the stack, because the 16-bit
assignment cannot be made directly to an xmm register.

By converting the 16-bit store operations into a sequence of SSE blend
operations, we can ensure that each descriptor is loaded only once,
avoiding both the extra stores to and reloads from the stack and the
stalls caused by the blocked loads.
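
For reference, a rough sketch of the blend technique (simplified from the
patch below; set_desc_pktlen is an illustrative helper, not a driver
function): _mm_blend_epi16 with immediate 0x80 takes word 7 (bytes 15-14)
from its second operand and words 0-6 from the first, so the length can be
written into the descriptor without ever leaving the xmm registers.

#include <smmintrin.h>	/* _mm_blend_epi16 is SSE4.1 */
#include <stdint.h>

static inline __m128i
set_desc_pktlen(__m128i desc, uint16_t pkt_len)
{
	/* place pkt_len in every 16-bit lane, including word 7 */
	__m128i len = _mm_set1_epi16((short)pkt_len);

	/* keep words 0-6 of desc, take word 7 (bytes 15-14) from len */
	return _mm_blend_epi16(desc, len, 0x80);
}

In the patch itself, pktlen0 already holds the four packed lengths after
_mm_packs_epi32, so rather than building a separate value per descriptor
the code shifts pktlen0 left by 16 bits between blends to move the next
length into word 7.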

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Bruce Richardson 2016-04-14 17:02:36 +01:00
parent ad981b6c7a
commit 0b6493fbb0

@@ -192,11 +192,7 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 static inline void
 desc_pktlen_align(__m128i descs[4])
 {
-	__m128i pktlen0, pktlen1, zero;
-	union {
-		uint16_t e[4];
-		uint64_t dword;
-	} vol;
+	__m128i pktlen0, pktlen1;
 
 	/* mask everything except pktlen field*/
 	const __m128i pktlen_msk = _mm_set_epi32(PKTLEN_MASK, PKTLEN_MASK,
@@ -206,18 +202,18 @@ desc_pktlen_align(__m128i descs[4])
 
 	pktlen1 = _mm_unpackhi_epi32(descs[1], descs[3]);
 	pktlen0 = _mm_unpackhi_epi32(pktlen0, pktlen1);
-	zero = _mm_xor_si128(pktlen0, pktlen0);
 
 	pktlen0 = _mm_srli_epi32(pktlen0, PKTLEN_SHIFT);
 	pktlen0 = _mm_and_si128(pktlen0, pktlen_msk);
-	pktlen0 = _mm_packs_epi32(pktlen0, zero);
-	vol.dword = _mm_cvtsi128_si64(pktlen0);
+	pktlen0 = _mm_packs_epi32(pktlen0, pktlen0);
 
-	/* let the descriptor byte 15-14 store the pkt len */
-	*((uint16_t *)&descs[0]+7) = vol.e[0];
-	*((uint16_t *)&descs[1]+7) = vol.e[1];
-	*((uint16_t *)&descs[2]+7) = vol.e[2];
-	*((uint16_t *)&descs[3]+7) = vol.e[3];
+	descs[3] = _mm_blend_epi16(descs[3], pktlen0, 0x80);
+	pktlen0 = _mm_slli_epi64(pktlen0, 16);
+	descs[2] = _mm_blend_epi16(descs[2], pktlen0, 0x80);
+	pktlen0 = _mm_slli_epi64(pktlen0, 16);
+	descs[1] = _mm_blend_epi16(descs[1], pktlen0, 0x80);
+	pktlen0 = _mm_slli_epi64(pktlen0, 16);
+	descs[0] = _mm_blend_epi16(descs[0], pktlen0, 0x80);
 }
 
 /*