Having to force an alignment does not impact the user,
so it should not be a warning.
The log level is reduced from warning to debug.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, the hash list saves the hash key in the hash entry, and the
key is mostly used only to compute the bucket index. Saving the entire
64-bit key in the entry is not a good option when the key serves only
that purpose: the 64 bits cost extra memory per entry, while the
signature data in the key mostly uses only 32 bits, and in the
unregister function the key stored in the entry forces an extra bucket
index calculation.
This commit saves the bucket index in the entry instead of the hash key.
For hash lists such as table, tag and mreg_copy, which keep signature
data in the key, the signature data is moved into the resource data
structure itself.
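A minimal sketch of the layout change (struct and field names are
illustrative, not the exact driver definitions):

    #include <stdint.h>
    #include <sys/queue.h>

    /* Before: the entry stores the full 64-bit key, and unregister
     * must hash it again to find the bucket.
     */
    struct entry_old {
        LIST_ENTRY(entry_old) next;
        uint64_t key;
    };

    /* After: the entry stores only the 32-bit bucket index, computed
     * once at registration; unregister indexes the bucket directly.
     */
    struct entry_new {
        LIST_ENTRY(entry_new) next;
        uint32_t idx;
    };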
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Since every hash table operation touches only one dedicated bucket,
the hash table lock and gen_cnt can be allocated per bucket.
Currently, the hash table uses one global lock to protect all the
buckets; that global lock prevents different buckets from being
operated on at the same time and hurts hash table performance.
Moreover, updating gen_cnt at the level of the entire hash table
causes incorrect, redundant list re-searches.
This commit binds the lock and gen_cnt to the bucket, so that entries
in different buckets can be operated on concurrently and more
efficiently.
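A sketch of the per-bucket layout this implies (field names are
illustrative):

    #include <stdint.h>
    #include <sys/queue.h>
    #include <rte_rwlock.h>

    struct entry;

    /* Each bucket carries its own lock and generation counter:
     * operations on different buckets no longer serialize on one
     * table-wide lock, and a changed gen_cnt now means only this
     * bucket, not the whole table, has to be re-searched.
     */
    struct hlist_bucket {
        LIST_HEAD(, entry) head;
        rte_rwlock_t lock;
        uint32_t gen_cnt;
    };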
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The mlx5 internal hash list tool prints a log message when an entry
allocation fails: "Can't allocate hash list entry."
Some initialization checks trigger a hash list registration in order
to check some capabilities. Here, a failure in the registration does
not lead to a failure of the initialization flow, which is why the log
level can be lower.
Move the entry allocation failure log to debug level.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Asaf Penso <asafp@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The assert on the entry variable in the mlx5_hlist_register() function
is not correct. Remove the invalid entry variable assert.
Fixes: e69a59227d ("net/mlx5: support concurrent access for hash list")
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In previous commits the hash list objects have been converted
to the new thread-safe hash list. The legacy hash list code can be
removed now.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
New API of linked list for cache:
- Optimized for cache lists with a small number of entries.
- Optimized for read-mostly lists.
- Thread safe.
- Since the number of entries is limited, entries are allocated by the
  API.
- For a dynamic entry size, pass 0 as the entry size; the creation
  callback then allocates the entry (see the sketch after this list).
- Since the number of entries is limited, there is no need to use an
  indexed pool for the memory: the API removes the entry and frees it
  with mlx5_free.
- The search API is not supposed to be used in a multi-thread context.
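A hedged usage sketch (the callback shapes are assumptions based on
the description above, not necessarily the exact API):

    struct cache_list;
    struct cache_entry;

    /* Caller-provided callbacks, per the list above. */
    typedef struct cache_entry *(*cache_create_cb)(struct cache_list *list,
                                                   struct cache_entry *entry,
                                                   void *ctx);
    typedef int (*cache_match_cb)(struct cache_list *list,
                                  struct cache_entry *entry, void *ctx);
    typedef void (*cache_remove_cb)(struct cache_list *list,
                                    struct cache_entry *entry);

    /* Registration model: a register() call looks the entry up first
     * (the read-mostly fast path) and calls cache_create_cb on a miss.
     * With entry size 0 the create callback allocates the entry
     * itself; otherwise the API allocates it and frees it with
     * mlx5_free on removal.
     */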
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In order to support concurrent access to the hash list, the following
changes are made:
1. Add a list-level read/write lock.
2. Add an entry reference counter.
3. Add entry create/match/remove callbacks.
4. Remove the insert/lookup/remove functions, which are not thread
   safe.
5. Add register/unregister functions to support entry reuse.
For better performance, the lookup function takes the read lock to
allow concurrent lookups from different threads, while all other hash
list modification functions take the write lock, which blocks
concurrent modifications and lookups from other threads; a simplified
sketch of the register flow follows.
The conversion of the actual objects will be applied in the next
patches.
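The sketch below illustrates the locking scheme (simplified; structure
and function names are illustrative, and the real list consults the
match callback with a context rather than this bare key compare):

    #include <stdint.h>
    #include <sys/queue.h>
    #include <rte_rwlock.h>

    struct hlist_entry {
        LIST_ENTRY(hlist_entry) next;
        uint64_t key;
        uint32_t ref_cnt;
    };
    LIST_HEAD(hlist_head, hlist_entry);

    struct hlist {
        rte_rwlock_t lock;
        uint32_t mask;      /* Number of heads - 1 (power of two). */
        struct hlist_entry *(*cb_create)(struct hlist *, uint64_t,
                                         void *);
        struct hlist_head heads[];
    };

    static struct hlist_entry *
    hlist_lookup(struct hlist *h, uint64_t key)
    {
        struct hlist_entry *e;

        LIST_FOREACH(e, &h->heads[key & h->mask], next)
            if (e->key == key) {
                e->ref_cnt++;   /* Reference taken on every hit. */
                return e;
            }
        return NULL;
    }

    struct hlist_entry *
    hlist_register(struct hlist *h, uint64_t key, void *ctx)
    {
        struct hlist_entry *e;

        rte_rwlock_read_lock(&h->lock);    /* Concurrent lookups OK. */
        e = hlist_lookup(h, key);
        rte_rwlock_read_unlock(&h->lock);
        if (e != NULL)
            return e;
        rte_rwlock_write_lock(&h->lock);   /* Exclusive modification. */
        e = hlist_lookup(h, key);          /* Re-check: another thread
                                            * may have won the race. */
        if (e == NULL) {
            e = h->cb_create(h, key, ctx);
            if (e != NULL) {
                e->key = key;
                e->ref_cnt = 1;
                LIST_INSERT_HEAD(&h->heads[key & h->mask], e, next);
            }
        }
        rte_rwlock_write_unlock(&h->lock);
        return e;
    }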
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The ID generation API used an integer pool to save released IDs; to
support multi-thread flow operations, it has to be enhanced to be
thread safe.
The indexed pool can be used to generate unique IDs by setting the
size of the pool entry to zero. Since a bitmap is used, an extra
benefit is memory saving: the cost is about one bit per entry.
Furthermore, the indexed pool can be made thread safe by enabling its
lock.
This patch leverages the indexed pool to generate IDs and removes the
now unused ID generation API.
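An illustrative sketch (the configuration fields and mlx5_ipool_*()
calls follow this series' indexed pool API; treat the exact details as
assumptions):

    #include <stdint.h>
    #include "mlx5_utils.h"   /* Indexed pool API from this series. */

    static struct mlx5_indexed_pool *
    flow_id_pool_create(void)
    {
        struct mlx5_indexed_pool_config cfg = {
            .size = 0,          /* No payload: pure ID generator. */
            .trunk_size = 64,
            .need_lock = 1,     /* Thread-safe allocation/release. */
            .type = "flow_id_pool",
        };

        return mlx5_ipool_create(&cfg);
    }

    static uint32_t
    flow_id_get(struct mlx5_indexed_pool *pool)
    {
        uint32_t id = 0;

        /* With a zero entry size the pool only tracks its bitmap;
         * the returned index is the unique ID. mlx5_ipool_free(pool,
         * id) releases the ID for reuse.
         */
        mlx5_ipool_malloc(pool, &id);
        return id;
    }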
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
To allow the indexed pool to be used as an ID generator, this patch
allows the entry size to be zero.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This commit adds thread safety support to the three-level table, using
a spinlock and a reference counter for each table entry.
A new mlx5_l3t_prepare_entry() function is added in order to support
multi-thread operation.
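A hedged sketch of the prepare pattern (names other than
mlx5_l3t_prepare_entry() are hypothetical): lookup and creation happen
race-free under the table spinlock, and a reference is taken on every
hit so concurrent users keep the entry alive until the last release.

    #include <stdint.h>
    #include <rte_spinlock.h>

    struct l3t;                       /* Three-level table. */
    struct l3t_entry { uint32_t ref_cnt; uint64_t data; };

    /* Assumed helper that walks the three levels for an index. */
    struct l3t_entry *l3t_get(struct l3t *tbl, uint32_t idx);

    static struct l3t_entry *
    l3t_prepare(struct l3t *tbl, rte_spinlock_t *lock, uint32_t idx,
                struct l3t_entry *(*cb_create)(struct l3t *, uint32_t))
    {
        struct l3t_entry *e;

        rte_spinlock_lock(lock);
        e = l3t_get(tbl, idx);
        if (e == NULL)
            e = cb_create(tbl, idx);  /* Allocates missing sub-tables. */
        if (e != NULL)
            e->ref_cnt++;             /* Keep alive for this user. */
        rte_spinlock_unlock(lock);
        return e;
    }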
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The mlx5 PMD hash list was designed to contain only items with unique
keys. Now there is a need to store objects with possible key
collisions. Many collisions are not expected (most likely only a few),
but the keys are no longer unique.
This commit adds the hash list extended functions in order to support
insertion and lookup for lists with non-unique keys.
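A sketch of the extended lookup (simplified, names illustrative): the
bucket walk is the same as before, but a caller-supplied match
callback, not the key alone, decides the final match.

    #include <stdint.h>
    #include <sys/queue.h>

    struct hlist_entry {
        LIST_ENTRY(hlist_entry) next;
        uint64_t key;
    };
    LIST_HEAD(hlist_head, hlist_entry);

    /* Returns 0 when 'entry' is the object the caller wants; keys
     * alone may now collide.
     */
    typedef int (*hlist_match_cb)(struct hlist_entry *entry, void *ctx);

    static struct hlist_entry *
    hlist_lookup_ex(struct hlist_head *head, uint64_t key,
                    hlist_match_cb cb, void *ctx)
    {
        struct hlist_entry *e;

        LIST_FOREACH(e, head, next)
            if (e->key == key && cb(e, ctx) == 0)
                return e;
        return NULL;
    }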
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
For data linked to a sequentially increasing index, an array table is
more efficient than a hash table when one data entry must be looked up
among a large number of entries. A traditional hash table has a fixed
size, so saving a huge number of entries into it also brings lots of
hash conflicts.
A simple array table has a fixed size too, and allocating all the
needed memory at once wastes lots of memory; when the exact number of
entries is unknown, sizing such an array is impossible. A multi-level
table balances these two disadvantages: a high-level global table
containing the sub-table entries is allocated first, and each
sub-table is allocated only once the corresponding index entry needs
to be saved. E.g. for up to 32-bit indexes, with a three-level table
using a 10-10-12 split and a sequentially increasing index, the memory
grows in steps of 4K entries.
The current implementation introduces a three-level table with a
10-10-12 split of the 32-bit index, to help the cases which have
millions of entries to save. Index entries are addressed directly by
the index; no search is needed.
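The 10-10-12 split described above, as illustrative macros (the exact
driver macros may differ):

    #include <stdint.h>

    /* Top 10 bits select the global-table slot, the middle 10 bits
     * the mid-level slot, and the low 12 bits the entry inside a
     * 4096-entry leaf table, so with a sequentially increasing index
     * a new leaf is allocated only once every 4096 entries.
     */
    #define L3T_GBL_IDX(idx)  (((idx) >> 22) & 0x3FF) /* Bits 31..22. */
    #define L3T_MID_IDX(idx)  (((idx) >> 12) & 0x3FF) /* Bits 21..12. */
    #define L3T_ENT_IDX(idx)  ((idx) & 0xFFF)         /* Bits 11..0. */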
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, the indexed memory pool bitmap start address is not
explicitly aligned to the cacheline size, while the bitmap
initialization requires a cacheline-aligned address. Hence, the
initialization may fail if the address is not cacheline aligned.
Add RTE_CACHE_LINE_ROUNDUP() to the trunk size calculation to make
sure the bitmap offset address starts cacheline aligned.
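A sketch of the fix (parameter names illustrative): the
header-plus-entry-data area is rounded up to a cache line so that the
rte_bitmap placed after it starts cacheline aligned, as
rte_bitmap_init() requires.

    #include <stddef.h>
    #include <stdint.h>
    #include <rte_common.h>    /* RTE_CACHE_LINE_ROUNDUP(). */
    #include <rte_bitmap.h>

    static size_t
    trunk_alloc_size(size_t hdr_size, size_t entry_size,
                     uint32_t entries)
    {
        /* Data area rounded up: the bitmap starts on a cacheline. */
        size_t sz = RTE_CACHE_LINE_ROUNDUP(hdr_size +
                                           entry_size * entries);

        return sz + rte_bitmap_get_memory_footprint(entries);
    }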
Fixes: a3cf59f56c ("net/mlx5: add indexed memory pool")
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Tested-by: Lijian Zhang <lijian.zhang@arm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
When all entries in a trunk are freed, the trunk itself becomes free,
and the user may prefer that the free trunk memory be reclaimed.
Add a trunk memory release option to the indexed pool for this case.
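An illustrative configuration (the flag name follows this series' pool
config; treat it as an assumption):

    #include "mlx5_utils.h"   /* Indexed pool API from this series. */

    /* With release_mem_en set, a trunk whose entries are all freed is
     * released back to the system instead of being cached for reuse.
     */
    struct mlx5_indexed_pool_config cfg = {
        .size = 64,             /* Entry size in bytes. */
        .trunk_size = 4096,
        .release_mem_en = 1,    /* Reclaim fully-freed trunks. */
    };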
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit adds dynamic trunk grow for the indexed pool.
Pools which are not sure about the number of entries needed can be
configured in a progressive-grow mode: the trunk size is increased
dynamically, one trunk after another, until it reaches a stable value.
This saves memory by avoiding the allocation of a very big trunk at
the beginning.
The user should set both grow_shift and grow_trunk to make the trunk
growing work; keeping one or both of grow_shift and grow_trunk at 0
makes the trunks work with a fixed size.
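A worked sketch of the progression (the exact driver formula may
differ; this follows the description above):

    #include <stdint.h>

    static uint32_t
    trunk_size_get(uint32_t trunk_size, uint32_t grow_shift,
                   uint32_t grow_trunk, uint32_t trunk_idx)
    {
        /* Each new trunk grows by grow_shift bits, at most grow_trunk
         * times, then the size stays stable.
         */
        uint32_t steps = trunk_idx > grow_trunk ? grow_trunk : trunk_idx;

        return trunk_size << (steps * grow_shift);
    }

    /* trunk_size=64, grow_shift=1, grow_trunk=3 yields trunks of
     * 64, 128, 256, 512, 512, ... entries.
     */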
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Currently, memory allocated by rte_malloc() carries more than 64 bytes
of overhead, which means that when allocating 64 bytes of memory, the
real memory cost may be double. The libc malloc() overhead is 16
bytes: if users allocate millions of small memory blocks, the
accumulated overhead may be huge, and saving the 64-bit memory
pointers is also quite expensive.
The indexed memory pool is introduced to save memory when allocating
huge amounts of small memory blocks. The indexed memory uses trunks
and bitmaps to manage the memory entries. While the pool is empty, a
trunk slot containing the memory entry array is allocated first, and
the bitmap in the trunk records the entry allocation. The offset of
the trunk slot in the pool and the offset of the memory entry in the
trunk slot compose the index of the memory entry, so by the index it
is very easy to address the memory of the entry. The user saves the
32-bit index of the memory resource instead of the 64-bit pointer.
Users should create different pools for allocating small memory blocks
of different sizes; one pool provides allocation for one fixed size of
small memory block.
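A sketch of the index composition for a fixed trunk size (the pool
also supports growing trunks; names illustrative):

    #include <stdint.h>

    /* The 32-bit index encodes the trunk slot and the entry offset
     * inside the trunk, so the entry address can be recomputed from
     * the index alone.
     */
    static uint32_t
    ipool_index(uint32_t trunk_idx, uint32_t entry_off,
                uint32_t trunk_size)
    {
        return trunk_idx * trunk_size + entry_off;
    }

    static void *
    ipool_addr(uint8_t **trunks, uint32_t idx, uint32_t trunk_size,
               uint32_t entry_size)
    {
        return trunks[idx / trunk_size] +
               (size_t)(idx % trunk_size) * entry_size;
    }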
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Use the MLX5_ASSERT macros instead of the standard assert clause. The
definition depends on the RTE_LIBRTE_MLX5_DEBUG configuration option:
if RTE_LIBRTE_MLX5_DEBUG is enabled, MLX5_ASSERT is equal to
RTE_VERIFY, bypassing the global CONFIG_RTE_ENABLE_ASSERT option. If
RTE_LIBRTE_MLX5_DEBUG is disabled, the global CONFIG_RTE_ENABLE_ASSERT
can still make this assert active, since RTE_ASSERT then calls
RTE_VERIFY.
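In macro form, the scheme described above amounts to (a sketch
consistent with the description; RTE_VERIFY and RTE_ASSERT are
existing DPDK macros):

    #include <rte_debug.h>   /* RTE_VERIFY() / RTE_ASSERT(). */

    #ifdef RTE_LIBRTE_MLX5_DEBUG
    /* Always active, regardless of CONFIG_RTE_ENABLE_ASSERT. */
    #define MLX5_ASSERT(exp) RTE_VERIFY(exp)
    #else
    /* Active only when CONFIG_RTE_ENABLE_ASSERT is set. */
    #define MLX5_ASSERT(exp) RTE_ASSERT(exp)
    #endif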
Signed-off-by: Alexander Kozyrev <akozyrev@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Introduce a simple hash list to the mlx5 utilities. Users can define
their own data structure containing the mlx5_hlist_entry and create
the hash list table via the creation interface; the entries are then
inserted into the table and linked to the corresponding list head. The
user should guarantee there are no key collisions, and should provide
a callback function to handle all the remaining entries in the table
when destroying the hash list. The user should also define a proper
number of list heads in the table in order to get better performance.
The LSBs of the 'key' are used to calculate the index of the head in
the list heads array.
This implementation is not multi-thread safe right now.
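A sketch of the head selection and insertion (simplified, names
illustrative): the LSBs of the key pick the list head, so the table
size must be a power of two for the mask to work.

    #include <stdint.h>
    #include <sys/queue.h>

    struct mlx5_hlist_entry {
        LIST_ENTRY(mlx5_hlist_entry) next;
        uint64_t key;
    };
    LIST_HEAD(mlx5_hlist_head, mlx5_hlist_entry);

    static void
    hlist_insert(struct mlx5_hlist_head *heads, uint32_t mask,
                 struct mlx5_hlist_entry *entry)
    {
        /* Key LSBs index the heads array; no collision handling, as
         * the user guarantees unique keys.
         */
        LIST_INSERT_HEAD(&heads[entry->key & mask], entry, next);
    }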
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>