All APIs should check that they support the flag values
passed to them. If an application passes an invalid flag, it could
cause problems with a later ABI.
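As an illustration, a minimal sketch of such a check (flag names here
are hypothetical, not part of any DPDK API):

#include <stdint.h>
#include <errno.h>

/* Hypothetical flags, for illustration only. */
#define EXAMPLE_FLAG_A (1u << 0)
#define EXAMPLE_FLAG_B (1u << 1)
#define EXAMPLE_SUPPORTED_FLAGS (EXAMPLE_FLAG_A | EXAMPLE_FLAG_B)

static int
example_check_flags(uint32_t flags)
{
	/* Reject any flag bit this version of the API does not know about. */
	if (flags & ~EXAMPLE_SUPPORTED_FLAGS)
		return -EINVAL;
	return 0;
}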
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
gcc 10.1.1 reports a warning for the ext_bkt_id variable:
../lib/librte_hash/rte_cuckoo_hash.c:
In function ‘__rte_hash_add_key_with_hash’:
../lib/librte_hash/rte_cuckoo_hash.c:1104:29:
warning: ‘ext_bkt_id’ may be used uninitialized in this function
[-Wmaybe-uninitialized]
1104 | (h->buckets_ext[ext_bkt_id - 1]).sig_current[0] = short_sig;
| ~~~~~~~~~~~^~~
The return value of rte_ring_sc_dequeue_elem() is already checked,
but also initialize ext_bkt_id to zero (an invalid value) and check
that it has been overwritten.
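A sketch of the resulting pattern (simplified; structure and field
names are assumed from the warning above):

uint32_t ext_bkt_id = 0;	/* 0 is not a valid extendable bucket index */

if (rte_ring_sc_dequeue_elem(h->free_ext_bkts, &ext_bkt_id,
			     sizeof(uint32_t)) != 0 ||
    ext_bkt_id == 0) {
	/* no free extendable bucket available */
	return -ENOSPC;	/* error value illustrative */
}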
Fixes: fbfe568103 ("hash: use 32-bit elements rings to save memory")
Cc: stable@dpdk.org
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Implement rte_hash_lookup_with_hash_bulk_data() and
rte_hash_lookup_with_hash_bulk() - bulk lookup
functions with precomputed hash signatures.
Add these two functions to the performance tests.
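A usage sketch, assuming the parameters mirror rte_hash_lookup_bulk()
with an added array of precomputed signatures (h and keys[] are
assumed to be set up by the application):

hash_sig_t sigs[8];
int32_t positions[8];
unsigned int i;

for (i = 0; i < 8; i++)
	sigs[i] = rte_hash_hash(h, keys[i]);	/* precompute once, reuse */

/* Bulk lookup reusing the precomputed signatures. */
rte_hash_lookup_with_hash_bulk(h, keys, sigs, 8, positions);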
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
There is a common macro __rte_aligned for alignment,
which is now used where appropriate for consistency.
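For example, a hypothetical structure using the common macro instead
of a raw attribute:

#include <stdint.h>
#include <rte_common.h>	/* provides __rte_aligned */

/* Hypothetical structure, aligned to a 64-byte cache line. */
struct example_entry {
	uint64_t key;
	uint64_t value;
} __rte_aligned(64);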
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
Remove setting ALLOW_EXPERIMENTAL_API individually for each Makefile and
meson.build. Instead, enable ALLOW_EXPERIMENTAL_API flag across app, lib
and drivers.
This change reduces the clutter across the project while still
maintaining the functionality of ALLOW_EXPERIMENTAL_API i.e. warning
external applications about experimental API usage.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Those headers are internal and should not be distributed.
Fixes: 5b9656b157 ("lib: build with meson")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Add a new API function to query the maximum key ID
that could possibly be returned by rte_hash_add_key and
rte_hash_add_key_with_hash. When RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD
is set, the maximum key ID is larger than the entry count specified
by the user.
Signed-off-by: Kumar Amber <kumar.amber@intel.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
The freelist and external bucket indices are 32b. Using rings
that use 32b element sizes will save memory.
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Merge all versions in linker version script files to DPDK_20.0.
This commit was generated by running the following command:
:~/DPDK$ buildtools/update-abi.sh 20.0
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Since the library versioning for both stable and experimental ABIs is
now managed globally, the LIBABIVER and version variables no longer
serve any useful purpose and can be removed.
The replacement in Makefiles was done using the following regex:
^(#.*\n)?LIBABIVER\s*:=\s*\d+\n(\s*\n)?
(LIBABIVER := numbers, optionally preceded by a comment and optionally
succeeded by an empty line)
The replacement for meson files was done using the following regex:
^(#.*\n)?version\s*=\s*\d+\n(\s*\n)?
(version = numbers, optionally preceded by a comment and optionally
succeeded by an empty line)
[David]: those variables are manually removed for the files:
- drivers/common/qat/Makefile
- lib/librte_eal/meson.build
[David]: the LIBABIVER is restored for the external ethtool example
library.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Within rte_hash_reset, looping to dequeue entries one by
one from the ring, while never using them, wastes cycles.
The patch instead flushes the ring by resetting the indices, which
saves CPU cycles.
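A sketch of the change, assuming rte_ring_reset() is the flush helper
(free_slots being one of the hash table's internal rings):

void *ptr;

/* Before: dequeue and discard every entry, one at a time. */
while (rte_ring_dequeue(h->free_slots, &ptr) == 0)
	;	/* dequeued values are never used */

/* After: flush the ring by resetting its indices in one call. */
rte_ring_reset(h->free_slots);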
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
When a hash entry is added, there are 2 sets of stores.
1) The application writes its data to memory (whose address
is provided in rte_hash_add_key_with_hash_data API (or NULL))
2) The rte_hash library writes to its own internal data structures;
key store entry and the hash table.
The only ordering requirement between these two is that the store
to the application data must complete before the store to key_index.
There are no ordering requirements between the stores to
key/signature and store to application data. The synchronization
point for application data can be any point between the 'store to
application data' and 'store to the key_index'. So, 'pdata' should not
be a guard variable for the data in the hash table. It should be a guard
variable only for the application data written to the memory location
pointed to by 'pdata'. Hence, in the lookup functions, 'pdata' can be
loaded after full key comparison succeeds.
The synchronization point for the application data (store-release
to 'pdata' in key store) is changed to be consistent with the order
of loads in lookup function. However, this change is cosmetic and
does not affect the functionality.
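To illustrate the ordering described above, a self-contained toy
sketch (not the library's actual data structures):

#include <stdint.h>
#include <string.h>

/* Toy model of the ordering only; the real structures differ. */
struct toy_slot {
	char key[16];
	void *pdata;		/* points to the application data */
};

static struct toy_slot slots[8];
static uint32_t key_idx[8];	/* 0 = empty, i + 1 = slots[i] in use */

/* Writer: the application data behind app_data is already written;
 * release-store pdata, then release-store key_idx to expose the entry. */
static void
toy_add(uint32_t i, const char key[16], void *app_data)
{
	memcpy(slots[i].key, key, 16);
	__atomic_store_n(&slots[i].pdata, app_data, __ATOMIC_RELEASE);
	__atomic_store_n(&key_idx[i], i + 1, __ATOMIC_RELEASE);
}

/* Reader: pdata is acquire-loaded only after the full key compare. */
static void *
toy_lookup(uint32_t i, const char key[16])
{
	if (__atomic_load_n(&key_idx[i], __ATOMIC_ACQUIRE) == 0)
		return NULL;
	if (memcmp(slots[i].key, key, 16) != 0)
		return NULL;
	return __atomic_load_n(&slots[i].pdata, __ATOMIC_ACQUIRE);
}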
Fixes: e605a1d36 ("hash: add lock-free r/w concurrency")
Cc: stable@dpdk.org
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Tested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Relaxed signature comparison is done first. Further ordered loads
are done only if the signature matches. Any false positives are
caught by the full key comparison. This provides performance
benefits as load-acquire is executed only when required.
Fixes: e605a1d36 ("hash: add lock-free r/w concurrency")
Cc: stable@dpdk.org
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Tested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Now that everything that has ever accessed the shared memory
config is doing so through the public APIs, we can make it
internal. Since we're removing quite a few headers from
rte_eal_memconfig.h, we need to add them back in places
where this header is used.
This bumps the ABI, so also change all build files and
update the documentation.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
Currently, locking/unlocking the TAILQ list requires direct
access to the shared memory config. Add an API to do the same,
and search-and-replace all usages.
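Assuming the new helpers are the rte_mcfg_tailq_*() wrappers, usage
becomes:

#include <rte_eal_memconfig.h>

/* Walk a tailq registered with EAL under the shared read lock. */
rte_mcfg_tailq_read_lock();
/* ... iterate over the tailq entries ... */
rte_mcfg_tailq_read_unlock();

/* Take the write lock when inserting or removing entries. */
rte_mcfg_tailq_write_lock();
/* ... modify the tailq ... */
rte_mcfg_tailq_write_unlock();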
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
Putting a '__attribute__((deprecated))' in the middle of a function
prototype does not produce the expected result with gcc (while clang
is fine with this syntax).
$ cat deprecated.c
void * __attribute__((deprecated)) incorrect() { return 0; }
__attribute__((deprecated)) void *correct(void) { return 0; }
int main(int argc, char *argv[]) { incorrect(); correct(); return 0; }
$ gcc -o deprecated.o -c deprecated.c
deprecated.c: In function ‘main’:
deprecated.c:3:1: warning: ‘correct’ is deprecated (declared at
deprecated.c:2) [-Wdeprecated-declarations]
int main(int argc, char *argv[]) { incorrect(); correct(); return 0; }
^
Move the tag to a separate line and make it the first thing in function
prototypes.
This is not perfect, but we will trust reviewers to catch the other,
harder-to-detect patterns.
sed -i \
-e '/^\([^#].*\)\?__rte_experimental */{' \
-e 's//\1/; s/ *$//; i\' \
-e __rte_experimental \
-e '/^$/d}' \
$(git grep -l __rte_experimental -- '*.h')
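The resulting placement looks like this (hypothetical prototype, shown
only to illustrate the layout):

/* Before: tag buried inside the return type (silently ignored by gcc). */
void * __rte_experimental
rte_example_create(const char *name);

/* After: tag on its own line, first thing in the prototype. */
__rte_experimental
void *
rte_example_create(const char *name);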
Special mention for rte_mbuf_data_addr_default():
There is either a bug or a (not yet understood) issue with gcc.
gcc won't drop this inline when unused and rte_mbuf_data_addr_default()
calls rte_mbuf_buf_addr() which itself is experimental.
This results in a build warning when not accepting experimental APIs
from sources that just include rte_mbuf.h.
For this specific case, we hide the call to rte_mbuf_buf_addr() under
the ALLOW_EXPERIMENTAL_API flag.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
We had some inconsistencies between function prototypes and actual
definitions.
Let's avoid this by only adding the experimental tag to the prototypes.
Tests with gcc and clang show this is enough.
git grep -l __rte_experimental |grep \.c$ |while read file; do
sed -i -e '/^__rte_experimental$/d' $file;
sed -i -e 's/ *__rte_experimental//' $file;
sed -i -e 's/__rte_experimental *//' $file;
done
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Replaced multiple NEON instructions with a single equivalent instruction.
This makes the code simpler with a bit higher performance.
Hash bulk lookup had a 0.1% ~ 3% performance gain in tests on ARM A72
platforms.
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
In rte_hash, with the current implementation, it is possible that keys
are stored at indexes greater than the total number of entries.
Currently, in rte_hash_free_key_with_position(), due to an incorrect
computation of total_entries, the application cannot free keys with
indexes greater than the total number of entries.
This patch fixes the incorrect computation of total_entries.
Bugzilla ID: 261
Fixes: 9d033dac7d ("hash: support no free on delete")
Cc: stable@dpdk.org
Reported-by: Linfan <zhongdahulinfan@163.com>
Suggested-by: Linfan <zhongdahulinfan@163.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Currently, in rte_hash_free_key_with_position(), the position returned
to the ring of free_slots leads to an unexpected conflict with a key
already in use.
This patch fixes the incorrect position returned to the ring of free_slots.
Bugzilla ID: 261
Fixes: 9d033dac7d ("hash: support no free on delete")
Cc: stable@dpdk.org
Reported-by: Linfan <zhongdahulinfan@163.com>
Suggested-by: Linfan <zhongdahulinfan@163.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
rte_hash_hash is multi-thread safe but not multi-process safe
because of the use of function pointers. The previous documentation
and comments said the opposite. This commit fixes
the issue.
Fixes: fc1f2750a3 ("doc: programmers guide")
Fixes: 48a3991196 ("hash: replace with cuckoo hash implementation")
Cc: stable@dpdk.org
Reported-by: Andrey Nikolaev <gentoorion@gmail.com>
Suggested-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
For files that already have rte_string_fns.h included in them, we can
do a straight replacement of snprintf(..."%s",...) with strlcpy. The
changes in this patch were auto-generated via command:
spatch --sp-file devtools/cocci/strlcpy-with-header.cocci --dir . --in-place
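The transformation is of the form (buffer and source names are
illustrative):

/* Before */
snprintf(buf, sizeof(buf), "%s", src);

/* After */
strlcpy(buf, src, sizeof(buf));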
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Implemented the signature compare function based on NEON intrinsics.
Hash bulk lookup had a 3% - 6% performance gain after this optimization.
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The __rte_hash_lookup_bulk() function returns void, and therefore
should not return with an expression. This commit fixes the following
compiler warning when attempting to compile with "-pedantic -std=c11".
warning: ISO C forbids ‘return’ with expression, in function
returning void [-Wpedantic]
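The fix is of the form (helper name is hypothetical):

static void do_bulk_lookup(void);	/* hypothetical helper */

/* Before: 'return' with an expression in a function returning void. */
static void
lookup_before(void)
{
	return do_bulk_lookup();	/* -Wpedantic warns here */
}

/* After: call the helper, then return without an expression. */
static void
lookup_after(void)
{
	do_bulk_lookup();
	return;
}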
Fixes: 9eca8bd7a6 ("hash: separate lock-free and r/w lock lookup")
Cc: stable@dpdk.org
Signed-off-by: Jeff Shaw <jeffrey.b.shaw@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
gcc 7 and 8 with -O3 will generate vzeroupper from rte_memcpy
inside the TSX region, which may abort the TSX transaction.
This fix changes rte_memcpy to memcpy, which does not insert
an extra vzeroupper into the library.
Fixes: f2e3001b53 ("hash: support read/write concurrency")
Cc: stable@dpdk.org
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The lock-free algorithm has caused a significant lookup
performance regression for certain use cases. The
regression is attributed to the use of non-relaxed
memory orderings. Two versions of the lookup functions
are created: one that uses the RW lock and one that
is lock-free. This resolves the performance regression
for use cases that used the RW lock version of the
lookup function.
Fixes: e605a1d36 ("hash: add lock-free r/w concurrency")
Suggested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
There is an rte_pause in the hash table reset function.
Since the loop is not a polling loop on a shared
data structure, the rte_pause is not needed.
Fixes: b26473ff8f ("hash: add reset function")
Cc: stable@dpdk.org
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Add lock-free read-write concurrency. This is achieved by the
following changes.
1) Add memory ordering to avoid race conditions. The only race
condition that can occur is using the key store element
before the key write has completed. Hence, the release memory order
is used while inserting the element. Any other race condition is caught
by the key comparison. Memory orderings are added only where needed.
For example, reads in the writer's context do not need memory ordering
as there is a single writer.
key_idx in the bucket entry and pdata in the key store element are
used for synchronisation. key_idx is used to release an inserted
entry in the bucket to the reader. Use of pdata for synchronisation
is required because an existing entry may be updated wherein only
the pdata is updated without updating key_idx.
2) The reader-writer concurrency issue, caused by moving the keys
to their alternative locations during key insert, is solved
by introducing a global counter (tbl_chng_cnt) indicating a
change in the table.
3) Add the flag to enable reader-writer concurrency at
run time.
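The reader side of point 2 roughly follows this retry pattern (a
sketch, not the exact library code):

uint32_t cnt_before, cnt_after;

do {
	/* Snapshot the change counter before searching. */
	cnt_before = __atomic_load_n(&h->tbl_chng_cnt, __ATOMIC_ACQUIRE);

	/* ... search the primary and secondary buckets for the key ... */

	/* If a writer moved keys meanwhile, the counter changed: retry. */
	cnt_after = __atomic_load_n(&h->tbl_chng_cnt, __ATOMIC_ACQUIRE);
} while (cnt_before != cnt_after);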
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Fix the key store array element alignment such that every array
element is aligned on KEY_ALIGNMENT boundary. This is required to
make 'pdata' in 'struct rte_hash_key' align on its natural boundary
for atomic load/store.
Fixes: 473d1bebce ("hash: allow to store data in hash table")
Cc: stable@dpdk.org
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
rte_hash_lookup_xxx APIs return the index of the slot in
the key store. The application (reader) can use that index to reference
other data structures in its scope. Because of this, the
index should not be freed until the application has finished
using it.
RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL is introduced to support this.
When this flag is enabled, rte_hash_del_xxx APIs do not free the
key-store index/internal memory associated with the deleted
entry. The new API rte_hash_free_key_with_position should be called
to free the key-store index/internal memory after calling
rte_hash_del_xxx APIs.
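A usage sketch (table creation parameters and synchronization with
readers omitted):

/* At creation time: do not free key slots on delete. */
params.extra_flag |= RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL;

/* Delete the entry; the returned position stays valid for readers. */
int32_t pos = rte_hash_del_key(h, &key);
if (pos >= 0) {
	/* ... wait until no reader references 'pos' any more ... */
	rte_hash_free_key_with_position(h, pos);
}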
Suggested-by: Yipeng Wang <yipeng1.wang@intel.com>
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
RW concurrency is required for the single-writer, multiple-reader
use case as well. Hence, multi-writer support should not be enabled by
default when RW concurrency is enabled.
Fixes: f2e3001b53 ("hash: support read/write concurrency")
Cc: stable@dpdk.org
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This commit changes the hashing mechanism to "partial-key
hashing" to calculate the bucket index and the signature of the key.
This is proposed in Bin Fan et al.'s paper
"MemC3: Compact and Concurrent MemCache with Dumber Caching
and Smarter Hashing". Basically, the idea is to use "xor" to
derive the alternative bucket from the current bucket index and
the signature.
With "partial-key hashing", the bucket memory
requirement is reduced from two cache lines to one cache line, which
improves the memory efficiency and thus the lookup speed.
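The derivation is essentially the following (mask and variable names
are illustrative):

/* Partial-key hashing: the alternative bucket is derived from the
 * current bucket index and the short signature with an xor, so the
 * primary index can be recovered the same way from the secondary. */
uint32_t alt_bkt_idx = (cur_bkt_idx ^ sig) & bucket_bitmask;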
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
In use cases where hash table capacity needs to be guaranteed,
the extendable bucket feature can be used to store extra
keys in linked lists when a conflict happens. This is a similar
concept to the extendable bucket hash table in the packet
framework.
This commit adds the extendable bucket feature. The user can turn
it on or off through the extra flag field at table
creation time.
An extendable bucket table is composed of buckets that can be
linked, as a list, to the main table. When extendable buckets
are enabled, the hash table load can always reach 100%.
In other words, the table can always accommodate the same
number of keys as the specified table size. This provides
a 100% table capacity guarantee.
Although keys ending up in the ext buckets may have longer
lookup times, they should be rare due to the cuckoo
algorithm.
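Enabling the feature at creation time is a one-line addition to the
creation parameters (values are illustrative; requires rte_hash.h and
rte_lcore.h):

struct rte_hash_parameters params = {
	.name = "ext_table",
	.entries = 1024,
	.key_len = sizeof(uint32_t),
	.socket_id = rte_socket_id(),
	/* guarantee the table can hold 'entries' keys */
	.extra_flag = RTE_HASH_EXTRA_FLAGS_EXT_TABLE,
};
struct rte_hash *h = rte_hash_create(&params);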
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
In rte_hash_iterate, the reader lock did not protect the
while loop which checks for an empty entry. This created a race
condition: the entry may become empty by the time the lock is
taken, and a wrong key data value would then be read out.
This commit reads the position in the while condition,
which makes sure that the position cannot be changed
to empty before entering the lock.
Fixes: f2e3001b53 ("hash: support read/write concurrency")
Cc: stable@dpdk.org
Reported-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Since the depth-first search of the cuckoo path is removed, we no
longer need the macro which specifies the depth of the cuckoo
search.
Fixes: f2e3001b53 ("hash: support read/write concurrency")
Cc: stable@dpdk.org
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Describe the thread-safety support more accurately in the
API documentation.
Fixes: f2e3001b53 ("hash: support read/write concurrency")
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
rte_hash_lookup_data() and rte_hash_lookup_with_hash_data()
functions return the index of the table where the key is stored
when it is found, and not 0 as the Doxygen documentation currently states.
Also, these functions and rte_hash_get_key_with_position()
return negative values when keys are not found (-EINVAL and -ENOENT);
the minus sign was missing.
Bugzilla ID: 78
Fixes: 473d1bebce ("hash: allow to store data in hash table")
Fixes: 6dc34e0afe ("hash: retrieve a key given its position")
Cc: stable@dpdk.org
Reported-by: Petr Houska <t-pehous@microsoft.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Add a new function, rte_hash_count, to return the number of keys that
are currently stored in the hash table. Corresponding test functions are
added to the hash_test and hash_multiwriter tests.
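Usage is straightforward (sketch; requires stdio.h and rte_hash.h):

int32_t used = rte_hash_count(h);

if (used >= 0)
	printf("hash table holds %d keys\n", used);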
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The existing implementation of librte_hash does not support read-write
concurrency. This commit implements read-write safety using rte_rwlock
and the rte_rwlock TM version if hardware transactional memory is available.
Both multi-writer and read-write concurrency are now protected by rte_rwlock.
The x86-specific header file is removed since the x86-specific RTM
functions are no longer called directly by rte_hash.
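Applications opt in through the extra flag at creation time, assuming
RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY is the flag introduced for this
purpose:

/* Request read/write concurrency protection from the library. */
params.extra_flag |= RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY;
h = rte_hash_create(&params);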
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This commit refactors the hash table lookup/add/del code
to remove some code duplication. Processing on the primary bucket can
also apply to the secondary bucket with the same code.
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This commit calculates the needed key slot size more
accurately. The previous local cache fix requires
the free slot ring to be larger than actually needed,
but the calculation of that value was inaccurate.
Fixes: 5915699153 ("hash: fix scaling by reducing contention")
Cc: stable@dpdk.org
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The current multi-writer implementation uses Intel TSX to
protect the cuckoo path move but not the cuckoo
path search. After searching, we need to verify again, at the
beginning of the TSX region, that the same empty slot still exists;
otherwise another writer could occupy the empty slot
before the TSX region. The current code does not do this verification.
Fixes: be856325cb ("hash: add scalable multi-writer insertion with Intel TSX")
Cc: stable@dpdk.org
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>