eal/ppc: fix global memory barrier
From the previous patch description: "to improve performance on PPC64, use light weight sync instruction instead of sync instruction."

Excerpt from IBM doc [1], section "Memory barrier instructions": "The second form of the sync instruction is light-weight sync, or lwsync. This form is used to control ordering for storage accesses to system memory only. It does not create a memory barrier for accesses to device memory."

This patch removes the use of lwsync, so calls to rte_wmb() and rte_rmb() provide a correct memory barrier that ensures the order of accesses to both system memory and device memory.

[1] https://www.ibm.com/developerworks/systems/articles/powerpc.html

Fixes: d23a6bd04d72 ("eal/ppc: fix memory barrier for IBM POWER")
Cc: stable@dpdk.org

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Commit 8015c5593a (parent a1c6b70786)
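To illustrate the problem described in the commit message, here is a minimal sketch of a common driver pattern where rte_wmb() must order a store to a descriptor in system memory before a store to a doorbell register in device memory. The struct and field names are hypothetical and not part of the patch; with lwsync only the system-memory store would be ordered, while a full sync also fences the MMIO access.

#include <stdint.h>
#include <rte_atomic.h>   /* rte_wmb() */

/* Hypothetical queue: a descriptor in system memory and a doorbell
 * register in device memory (mapped MMIO). */
struct txq {
	volatile uint64_t *desc;     /* system memory */
	volatile uint32_t *doorbell; /* device memory */
};

static inline void
tx_kick(struct txq *q, uint64_t buf_addr)
{
	*q->desc = buf_addr; /* publish the descriptor (system memory) */
	rte_wmb();           /* must also order the MMIO store below;
	                      * lwsync only covers system memory */
	*q->doorbell = 1;    /* notify the device (device memory) */
}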
@@ -63,11 +63,7 @@ extern "C" {
  * Guarantees that the STORE operations generated before the barrier
  * occur before the STORE operations generated after.
  */
-#ifdef RTE_ARCH_64
-#define rte_wmb() asm volatile("lwsync" : : : "memory")
-#else
 #define rte_wmb() asm volatile("sync" : : : "memory")
-#endif
 
 /**
  * Read memory barrier.
@@ -75,11 +71,7 @@ extern "C" {
  * Guarantees that the LOAD operations generated before the barrier
  * occur before the LOAD operations generated after.
  */
-#ifdef RTE_ARCH_64
-#define rte_rmb() asm volatile("lwsync" : : : "memory")
-#else
 #define rte_rmb() asm volatile("sync" : : : "memory")
-#endif
 
 #define rte_smp_mb() rte_mb()
 
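On the read side, a similar hypothetical sketch (again not from the patch) shows why rte_rmb() needs the same treatment: a load from a device status register must be ordered before a dependent load from a completion ring in system memory, which lwsync alone does not guarantee.

#include <stdint.h>
#include <rte_atomic.h>   /* rte_rmb() */

/* Hypothetical queue: a status register in device memory and a
 * completion ring entry in system memory. */
struct rxq {
	volatile uint32_t *status; /* device memory */
	volatile uint64_t *cqe;    /* system memory */
};

static inline uint64_t
rx_poll(struct rxq *q)
{
	if (*q->status == 0)  /* load from device memory */
		return 0;
	rte_rmb();            /* order the MMIO load before the ring load;
	                       * lwsync does not fence device memory */
	return *q->cqe;       /* load from system memory */
}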