numam-dpdk/lib/mempool
Morten Brørup a2833ecc5e mempool: fix get objects from mempool with cache
A flush threshold for the mempool cache was introduced in DPDK version
1.3, but rte_mempool_do_generic_get() was not completely updated back
then, and some inefficiencies were introduced.

Fix the following in rte_mempool_do_generic_get():

1. The code that initially screens the cache request was not updated
with the change in DPDK version 1.3.
The initial screening compared the request length to the cache size,
which was correct before, but became irrelevant with the introduction of
the flush threshold. E.g. the cache can hold up to flushthresh objects,
which is more than its size, so some requests were not served from the
cache, even though they could be.
The initial screening has now been corrected to match the initial
screening in rte_mempool_do_generic_put(), which verifies that a cache
is present, and that the length of the request does not overflow the
memory allocated for the cache.

This bug caused a major performance degradation in scenarios where the
application burst length is the same as the cache size. In such cases,
the objects were never fetched from the mempool cache, even when they
could have been.
This scenario occurs e.g. if an application has configured a mempool
with a size matching the application's burst size.
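
For illustration, a simplified sketch of the corrected screening,
modeled on the check in rte_mempool_do_generic_put();
RTE_MEMPOOL_CACHE_MAX_SIZE is used here as the bound on the memory
allocated for the cache, and the exact expression in the patched code
may differ:

    /* Before (simplified): requests of cache->size or more bypassed
     * the cache entirely, even when the cache held enough objects. */
    if (cache == NULL || n >= cache->size)
        goto driver_dequeue;

    /* After (sketch): only require that a cache is present and that
     * the request cannot overflow the memory allocated for the cache. */
    if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
        goto driver_dequeue;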

2. The function is a helper for rte_mempool_generic_get(), so it must
behave according to the description of that function.
Specifically, objects must first be returned from the cache,
subsequently from the backend.
After the change in DPDK version 1.3, this was not the behavior when
the request was partially satisfied from the cache; instead, the objects
from the backend were returned ahead of the objects from the cache.
This bug degraded application performance on CPUs with a small L1 cache,
which benefit from having the hot objects first in the returned array.
(This is probably also the reason why the function returns the objects
in reverse order, which it still does.)
Now, all code paths first return objects from the cache, subsequently
from the backend.

The function was thus not behaving as described by the function using
it, nor as expected by applications using it. This in itself is also a
bug.
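
For illustration, a simplified sketch of the required ordering on a
partial cache hit; names such as 'req' and 'from_cache' are
illustrative, declarations from the surrounding function are assumed,
and the cache refill described under item 3 below is omitted here:

    /* Serve as much of the request as possible from the cache first,
     * copying the hottest objects out in reverse order. */
    unsigned int from_cache = RTE_MIN(req, cache->len);
    unsigned int i;

    for (i = 0; i < from_cache; i++)
        obj_table[i] = cache->objs[cache->len - 1 - i];
    cache->len -= from_cache;

    /* Only the remainder comes from the backend, and it is placed
     * after the cache objects in the returned array. */
    if (req > from_cache)
        ret = rte_mempool_ops_dequeue_bulk(mp, obj_table + from_cache,
                                           req - from_cache);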

3. If the cache could not be backfilled, the function would attempt
to get all the requested objects from the backend (instead of only the
number of requested objects minus the objects available in the cache),
and it would fail if the backend could not deliver them all.
Now, the first part of the request is always satisfied from the cache,
and if the subsequent backfilling of the cache from the backend fails,
only the remaining requested objects are retrieved from the backend.

The function could thus fail even though there were enough objects
available in the cache plus the common pool.
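
For illustration, a simplified sketch of the fallback; 'remaining' is
the part of the request not already served from the cache, 'obj_table'
points just past the objects already copied out, declarations from the
surrounding function are assumed, and the actual patch may differ in
detail:

    /* Backfill the cache from the backend so that the remainder of
     * the request can be served from it and the cache ends up full. */
    ret = rte_mempool_ops_dequeue_bulk(mp, cache->objs,
                                       cache->size + remaining);
    if (unlikely(ret < 0)) {
        /* Backfilling failed: do not fail the whole request, fetch
         * only the remaining objects directly from the backend. */
        ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, remaining);
        if (ret < 0)
            return ret; /* the backend really is short of objects */
    } else {
        /* Serve the remainder from the refilled cache and leave the
         * cache filled to its nominal size. */
        for (i = 0; i < remaining; i++)
            obj_table[i] = cache->objs[cache->size + remaining - 1 - i];
        cache->len = cache->size;
    }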

4. The code flow for satisfying the request from the cache was slightly
inefficient:
The likely code path where the objects are simply served from the cache
was treated as unlikely. Now it is treated as likely.
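
For illustration, the branch hint change amounts to something like the
following sketch, where the slow path stands for the backfill and
backend handling described in the items above:

    /* Before (simplified): no hint, and the surrounding flow treated
     * serving the request from the cache as the cold case. */
    if (n > cache->len)
        goto slow_path;

    /* After (sketch): the same test, but the compiler is told that
     * serving the whole request from the cache is the expected case. */
    if (unlikely(n > cache->len))
        goto slow_path;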

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
2022-10-08 22:52:51 +02:00
mempool_trace_points.c     lib: remove librte_ prefix from directory names   2021-04-21 14:04:09 +02:00
meson.build                mempool: add telemetry endpoint                   2021-10-22 22:40:59 +02:00
rte_mempool_ops_default.c  lib: remove librte_ prefix from directory names   2021-04-21 14:04:09 +02:00
rte_mempool_ops.c          dev: hide driver object                           2022-09-23 16:14:34 +02:00
rte_mempool_trace_fp.h     lib: remove librte_ prefix from directory names   2021-04-21 14:04:09 +02:00
rte_mempool_trace.h        lib: remove librte_ prefix from directory names   2021-04-21 14:04:09 +02:00
rte_mempool.c              lib: remove unneeded header includes              2022-02-22 13:10:39 +01:00
rte_mempool.h              mempool: fix get objects from mempool with cache  2022-10-08 22:52:51 +02:00
version.map                version: 22.11-rc0                                2022-07-21 12:13:48 +02:00