Commit Graph

221 Commits

Author SHA1 Message Date
Bruce Richardson
63b3907833 build: remove library name from version map file name
Since each version map file is contained in the subdirectory of the library
it refers to, there is no need to include the library name in the filename.
This makes things simpler in case of library renaming.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Rosen Xu <rosen.xu@intel.com>
2020-10-19 22:13:59 +02:00
Hemant Agrawal
b24b892895 mempool: dump handler index and name
Enhance the dump function to also print the ops index
and the associated mempool ops name.
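
An illustrative call to the dump API (not part of the commit), assuming
an existing mempool pointer "mp":

	/* print mempool state, now including the ops index and ops name */
	rte_mempool_dump(stdout, mp);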

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-06 23:44:15 +02:00
Olivier Matz
e4233e1d0a mempool: promote some experimental functions as stable
Move symbols introduced in version <= 19.11 to the stable ABI.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
2020-10-06 12:19:21 +02:00
Olivier Matz
9b427d5229 mempool: remove v20 ABI compatibility
Remove the deprecated v20 ABI of rte_mempool_populate_iova() and
rte_mempool_populate_virt().

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
2020-10-06 12:19:21 +02:00
Sachin Saxena
0621033073 mempool: dump socket attribute
Enhance the dump function to also print the socket_id attribute
passed at creation time.

Signed-off-by: Sachin Saxena <sachin.saxena@oss.nxp.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-06 00:57:16 +02:00
Thomas Monjalon
28e3c8b286 mempool: remove physical address aliases
Remove the deprecated unioned fields physaddr and phys_addr
from the structures rte_mempool_objhdr and rte_mempool_memhdr.
They are replaced by the iova fields, which are at the same offsets.

Remove the deprecated macro MEMPOOL_F_NO_PHYS_CONTIG,
which is an alias of the more recent MEMPOOL_F_NO_IOVA_CONTIG.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
2020-09-19 00:25:37 +02:00
Ciara Power
ec260aa3ad config: remove default configs used with make
Make is no longer supported for compiling DPDK, so the config files
are no longer needed.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-09-08 00:11:30 +02:00
Ciara Power
3cc6ecfdfe build: remove makefiles
A decision was made [1] to no longer support Make in DPDK; this patch
removes all Makefiles that do not make use of pkg-config, along with
the mk directory previously used by make.

[1] https://mails.dpdk.org/archives/dev/2020-April/162839.html

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-09-08 00:09:50 +02:00
Thomas Monjalon
4f86c0ba19 version: 20.11-rc0
Start a new release cycle with empty release notes.

The ABI version becomes 21.0.
The ABI major is back to normal, having only one number (21 vs 20.0).
The map files are updated to the new ABI major number (21).
The ABI exceptions are dropped.
Travis ABI check is disabled because compatibility is not preserved.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
2020-08-12 11:32:16 +02:00
Zhike Wang
9dbe628a7b mempool: fix allocation in memzone during retry
If allocation succeeds on the first attempt, there is typically no
problem: everything required has been allocated and the loop terminates
(provided the memory chunk is really sufficient to populate the
required number of mempool elements).

If the first attempt fails and allocating half of mem_size succeeds,
there will be one more iteration of the for-loop to allocate memory for
the remaining elements, and that iteration should not start with a
quarter of mem_size.

It is wrong to divide max_alloc_size by 2 after a successful allocation
as well: invalid memory can be allocated, leading to a population
failure, and then an errno other than ENOMEM may be returned.
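
A rough sketch of the corrected retry loop (names approximate the real
code; this is not the exact patch):

	/* allocate a memzone, retrying with a smaller area only on ENOMEM */
	do {
		mz = rte_memzone_reserve_aligned(mz_name,
				RTE_MIN((size_t)mem_size, max_alloc_size),
				mp->socket_id, mz_flags, align);
		if (mz != NULL || rte_errno != ENOMEM)
			break;	/* success, or a non-retryable error */
		/* halve only after a failed attempt */
		max_alloc_size = RTE_MIN(max_alloc_size, (size_t)mem_size) / 2;
	} while (max_alloc_size >= min_chunk_size);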

Fixes: 3a3d0c75b4 ("mempool: fix slow allocation of large pools")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Signed-off-by: Zhike Wang <wangzhike@jd.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
2020-07-22 01:27:10 +02:00
David Marchand
5c307ba2a5 eal: register non-EAL threads as lcores
DPDK allows calling some parts of its API from a non-EAL thread, but this
has some limitations.
OVS (and other applications) have their own thread management but still
want to avoid such limitations by hacking RTE_PER_LCORE(_lcore_id) and
faking EAL threads that are potentially unknown to some DPDK components.

Introduce a new API to register non-EAL threads and associate them with a
free lcore using a new NON_EAL role.
This role denotes lcores that do not run the DPDK main loop and, as such,
cannot be used with rte_eal_wait_lcore() and the like.

Multiprocess is not supported as the need for cohabitation with this new
feature is unclear at the moment.
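
A minimal usage sketch of the new API pair (error handling kept short;
illustrative, not from the patch):

	/* in a thread created and managed by the application itself */
	if (rte_thread_register() != 0) {
		/* no free lcore, or unsupported setup; rte_errno is set */
		return;
	}
	/* the thread now owns an lcore id with the NON_EAL role */
	rte_thread_unregister();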

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-07-08 14:41:05 +02:00
Fady Bader
972848da5f mempool: use generic memory syscall wrappers
Use generic memory management calls instead of Unix memory management
calls in the mempool library.

Signed-off-by: Fady Bader <fady@mellanox.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
2020-07-07 01:24:55 +02:00
David Marchand
0fc601af3a trace: simplify trace point registration
RTE_TRACE_POINT_DEFINE and RTE_TRACE_POINT_REGISTER must come in pairs.
Merge them and let RTE_TRACE_POINT_REGISTER handle the constructor part.
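
A hedged example of the resulting registration pattern (trace point and
header names are assumptions):

	#include <rte_trace_point_register.h>

	#include "app_trace.h"	/* header declaring the trace point */

	/* defines the trace point and registers it from a constructor */
	RTE_TRACE_POINT_REGISTER(app_trace_foo, app.trace.foo)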

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-07-05 21:34:21 +02:00
Dmitry Kozlyuk
262c4ee791 trace: add size_t field emitter
It is not guaranteed that sizeof(long) == sizeof(size_t). On Windows,
sizeof(long) == 4 and sizeof(size_t) == 8 for 64-bit programs.
Tracepoints using the "long" field emitter are therefore invalid there.
Add a dedicated field emitter for size_t and use it to store size_t
values in all existing tracepoints.
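
For illustration, a trace point definition using the new emitter (the
trace point itself is an assumed example):

	RTE_TRACE_POINT(
		app_trace_buf_alloc,
		RTE_TRACE_POINT_ARGS(void *ptr, size_t len),
		rte_trace_point_emit_ptr(ptr);
		rte_trace_point_emit_size_t(len);
	)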

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
2020-06-15 19:27:00 +02:00
David Marchand
ebaee64097 trace: simplify trace point headers
Invert the current trace point headers logic by making
rte_trace_point_register.h include rte_trace_point.h.

There is no longer any need for a special RTE_TRACE_POINT_REGISTER_SELECT
macro, since including rte_trace_point_register.h itself means we want to
register trace points.

The unexplained "provider" notion is removed from the documentation and
rte_trace_point_provider.h is merged into rte_trace_point.h.

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-05-06 13:50:32 +02:00
Olivier Matz
c0280d5d8a mempool: return 0 if area is too small on populate
Change rte_mempool_populate_iova() and rte_mempool_populate_virt() to
return 0 instead of -EINVAL when there is not enough room to store one
object, as it can be helpful for applications to distinguish this
specific case.

As this is an ABI change, use symbol versioning to preserve old
behavior for binary applications.
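
A rough sketch of the versioning macros involved (symbol suffixes and
version numbers here are assumptions based on the release numbering, not
the exact patch):

	/* keep the old return convention for binaries built against ABI 20.0 */
	VERSION_SYMBOL(rte_mempool_populate_iova, _v20, 20.0);
	/* make the new behaviour (return 0 on a too-small area) the default */
	BIND_DEFAULT_SYMBOL(rte_mempool_populate_iova, _v21, 21);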

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
2020-05-05 00:27:05 +02:00
Fady Bader
036d82365e mempool: remove inline functions from export list
The code didn't compile when using exported mempool functions
under Windows.

compilation error logs:
rte_mempool_exports.def : error LNK2001:
unresolved external symbol rte_mempool_cache_flush
rte_mempool_exports.def : error LNK2001:
unresolved external symbol rte_mempool_default_cache
rte_mempool_exports.def : error LNK2001:
unresolved external symbol rte_mempool_generic_get
rte_mempool_exports.def : error LNK2001:
unresolved external symbol rte_mempool_generic_put
lib\librte_mempool.dll.a : fatal error LNK1120: 4 unresolved externals
clang: error: linker command failed with exit code 1120 (use -v to see invocation)

The cause was that some inline functions were included
in the export list.
To solve this, the functions, which are implemented in the header
and should not be exported, were removed from the
rte_mempool_version.map export list.
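
For illustration only, the rough shape of the export list after the fix,
with the header-implemented helpers simply absent (symbols abbreviated):

	DPDK_20.0 {
		global:

		rte_mempool_create;
		rte_mempool_populate_default;
		/* inline functions such as rte_mempool_cache_flush are
		 * implemented in rte_mempool.h and are not listed here */

		local: *;
	};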

Fixes: 4b5062755a ("mempool: allow user-owned cache")
Fixes: 656f2d3ede ("mempool: deprecate specific get and put functions")
Cc: stable@dpdk.org

Signed-off-by: Fady Bader <fady@mellanox.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-04-24 18:11:49 +02:00
Sunil Kumar Kori
40b75c73d1 mempool: add tracepoints
Add tracepoints at important and mandatory APIs for tracing support.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-04-23 15:40:11 +02:00
Pavan Nikhilesh
acec04c4b2 build: disable experimental API check internally
Remove setting ALLOW_EXPERIMENTAL_API individually in each Makefile and
meson.build. Instead, enable the ALLOW_EXPERIMENTAL_API flag across app,
lib and drivers.
This change reduces the clutter across the project while still
maintaining the functionality of ALLOW_EXPERIMENTAL_API, i.e. warning
external applications about experimental API usage.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
2020-04-14 16:22:34 +02:00
Ciara Power
f42c9ac5b6 lib: fix unnecessary double negation
An equality expression already returns either 0 or 1.
There is no need to use double negation for these cases.
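
The kind of change involved, as a tiny illustrative snippet (variable
names are made up):

	/* before: the comparison already yields 0 or 1 */
	ret = !!(var == expected);
	/* after */
	ret = (var == expected);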

Fixes: ea672a8b16 ("mbuf: remove the rte_pktmbuf structure")
Fixes: a0fd91cefc ("mempool: rename functions with confusing names")
Cc: stable@dpdk.org

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-02-14 17:39:46 +01:00
Jerin Jacob
3f2d6766e3 mempool: remove memory wastage on non-x86
The existing optimize_object_size() function addresses the memory object
alignment constraint on x86 for better performance.

Different (micro)architectures may have different memory alignment
constraints for better performance, and they are not the same as the
existing optimize_object_size().

Some use an XOR (CRC-like) scheme to enable DRAM channel distribution
based on the address, and some may have a different formula.

Introduce an arch_mem_object_align() function to abstract
the differences between (micro)architectures, to avoid
wasting memory on mempool object alignment for architectures
that do not require it.

Details on the amount of memory saving:

Currently, arm64 based architectures use the default (nchan=4,
nrank=1). The worst case is for an object whose size (including mempool
header) is 2 cache lines, where it is optimized to 3 cache lines (+50%).

Examples for cache lines size = 64:
  orig     optimized
  64    -> 64           +0%
  128   -> 192          +50%
  192   -> 192          +0%
  256   -> 320          +25%
  320   -> 320          +0%
  384   -> 448          +16%
  ...
  2304  -> 2368         +2.7%  (~mbuf size)
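
For reference, a hedged reconstruction of the x86 heuristic that produces
the numbers above (not the new arch_mem_object_align(); nchan=4, nrank=1
and 64-byte cache lines as in the example):

	static unsigned int gcd(unsigned int a, unsigned int b)
	{
		return b == 0 ? a : gcd(b, a % b);
	}

	static unsigned int pad_obj_size(unsigned int obj_size)
	{
		unsigned int nchan = 4, nrank = 1;
		unsigned int lines = (obj_size + 63) / 64;

		/* pad until the size in cache lines shares no divisor with
		 * nchan * nrank, spreading objects across DRAM channels/ranks */
		while (gcd(lines, nchan * nrank) != 1)
			lines++;
		return lines * 64;	/* e.g. 128 -> 192 (+50%), 2304 -> 2368 */
	}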

Additional details:
https://www.mail-archive.com/dev@dpdk.org/msg149157.html

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-01-20 16:37:27 +01:00
Olivier Matz
43503c59ad mempool: fix populate with small virtual chunks
To populate a mempool with a virtual area, the mempool code calls
rte_mempool_populate_iova() for each iova-contiguous area. It happens
(rarely) that this area is too small to store one object. In this case,
rte_mempool_populate_iova() returns an error, which is forwarded by
rte_mempool_populate_virt().

This case should not throw an error in rte_mempool_populate_virt().
Instead, the area that is too small should just be ignored.

To fix this issue, change the return value of
rte_mempool_populate_iova() to 0 when no object can be populated,
so it can be ignored by the caller. As this would be an API/ABI change,
only do this modification internally for now.

Fixes: 354788b60c ("mempool: allow populating with unaligned virtual area")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Alvin Zhang <alvinx.zhang@intel.com>
2020-01-20 16:34:54 +01:00
Olivier Matz
3a3d0c75b4 mempool: fix slow allocation of large pools
When allocating a mempool which is larger than the largest
available area, it can take a lot of time:

a- the mempool calculates the required memory size and tries
   to allocate it; it fails
b- then it tries to allocate the largest available area (this
   does not request new huge pages)
c- this zone is added to the mempool, which triggers the allocation
   of a mem hdr, which requests a new huge page
d- back to a- until the mempool is populated or until there is no
   more memory

This can take a lot of time to finally fail (several minutes): in step
a- it takes all available hugepages on the system, then releases them
after it fails.

The problem appeared with commit eba11e3646 ("mempool: reduce wasted
space on populate"), because smaller chunks are now allowed. Previously,
a chunk had to be at least one page in size, which is not the case in step b-.

To fix this, implement our own way to allocate the largest available
area instead of using the feature from memzone: if an allocation fails,
try to divide the size by 2 and retry. When the requested size falls
below min_chunk_size, stop and return an error.

Fixes: eba11e3646 ("mempool: reduce wasted space on populate")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tested-by: Ali Alnubani <alialnu@mellanox.com>
2020-01-20 12:12:20 +01:00
Olivier Matz
f159c61c35 mempool: fix anonymous populate
The documentation says that a negative errno is returned on error, but
in most places that's not the case.

Fix the documentation and the exceptions in code. The second one
(return from populate_virt) also fixes a memory leak.

Note that testpmd was using the function correctly.

Fixes: aa10457eb4 ("mempool: make mempool populate and free api public")
Fixes: 6780f72fb8 ("mempool: populate with anonymous memory")
Fixes: 66e7ba0bad ("mempool: ensure mempool is initialized before populating")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2020-01-20 12:12:20 +01:00
Pawel Modrak
85ff364f3b build: align symbols with global ABI version
Merge all versions in linker version script files to DPDK_20.0.

This commit was generated by running the following command:

:~/DPDK$ buildtools/update-abi.sh 20.0

Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2019-11-20 23:05:39 +01:00
Anatoly Burakov
fbaf943887 build: remove individual library versions
Since the library versioning for both stable and experimental ABIs is
now managed globally, the LIBABIVER and version variables no longer
serve any useful purpose and can be removed.

The replacement in Makefiles was done using the following regex:

	^(#.*\n)?LIBABIVER\s*:=\s*\d+\n(\s*\n)?

(LIBABIVER := numbers, optionally preceded by a comment and optionally
succeeded by an empty line)

The replacement for meson files was done using the following regex:

	^(#.*\n)?version\s*=\s*\d+\n(\s*\n)?

(version = numbers, optionally preceded by a comment and optionally
succeeded by an empty line)

[David]: those variables are manually removed for the files:
- drivers/common/qat/Makefile
- lib/librte_eal/meson.build
[David]: the LIBABIVER is restored for the external ethtool example
library.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2019-11-20 23:05:39 +01:00
Anatoly Burakov
84191ddeb5 mempool: remove check for bad IOVA when populating
Currently, mempool will check whether the IOVA is bad for a segment, and
reject the IOVA if hugepages are also enabled. This check is wrong
because, now that we have external memory segments, they are allowed to
have invalid IOVAs. The check also does not make much sense in the first
place, because the following code can handle bad IOVAs perfectly well
(and in fact, the check does not trigger a failure when the --no-huge
option is enabled).

Fixes: 950e8fb4e1 ("mem: allow registering external memory areas")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Bo Chen <box.c.chen@intel.com>
2019-11-19 21:41:43 +01:00
Anatoly Burakov
2a7fd3ef38 mempool: use actual IOVA addresses when populating
Currently, when a mempool is being populated, we get the IOVA address
of every segment using rte_mem_virt2iova(). This works for internal
memory, but does not really work for external memory, and does not
work on platforms which return RTE_BAD_IOVA as a result of this
call (such as FreeBSD). Moreover, even when it works, the function
in question will do unnecessary pagewalks in IOVA-as-PA mode, as
it falls back to rte_mem_virt2phy() instead of just doing a lookup in
the internal memseg table.

To fix it, first attempt to look the address up in the internal memseg
table (this takes care of both internal and external memory), and fall
back to rte_mem_virt2iova() when the VA->IOVA translation cannot be done
via the memseg table.
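
A hedged sketch of the lookup order described above (the helper name is
illustrative):

	static rte_iova_t
	get_iova(void *addr)
	{
		struct rte_memseg *ms;

		/* first, look the address up in the internal memseg table;
		 * this covers both internal and external DPDK memory */
		ms = rte_mem_virt2memseg(addr, NULL);
		if (ms == NULL || ms->iova == RTE_BAD_IOVA)
			/* unregistered memory: fall back to the old translation */
			return rte_mem_virt2iova(addr);

		return ms->iova + RTE_PTR_DIFF(addr, ms->addr);
	}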

Fixes: 66cc45e293 ("mem: replace memseg with memseg lists")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Bo Chen <box.c.chen@intel.com>
2019-11-19 21:41:43 +01:00
Olivier Matz
b32037f7ef mempool: use specific macro for object alignment
For consistency, RTE_MEMPOOL_ALIGN should be used in place of
RTE_CACHE_LINE_SIZE. They have the same value, because the only arch
that was defining a specific value for it has been removed from DPDK.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2019-11-06 11:34:19 +01:00
Olivier Matz
84626a0d61 mempool: prevent objects from being across pages
When populating a mempool, ensure that objects are not located across
several pages, except if user did not request IOVA-contiguous objects.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-11-06 11:34:19 +01:00
Olivier Matz
23bdcedcd8 mempool: introduce helpers for populate and required size
Introduce new functions that can be used by mempool drivers to
calculate the required memory size and to populate the mempool.

For now, these helpers just replace the *_default() functions
without changing behavior. They will be enhanced in the next commit.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-11-06 11:11:13 +01:00
Olivier Matz
b291e69423 mempool: introduce function to get mempool page size
In rte_mempool_populate_default(), we determine the page size,
which is needed for calc_size and for the allocation of memory.

Move this into a function and export it; it will be used in a
subsequent commit.
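
A minimal usage sketch of the exported helper (error handling kept short;
illustrative only):

	size_t pg_sz;
	int ret;

	/* page size that the size calculation and populate steps should use */
	ret = rte_mempool_get_page_size(mp, &pg_sz);
	if (ret < 0)
		return ret;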

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2019-11-06 11:11:12 +01:00
Olivier Matz
035ee5bea5 mempool: remove optimistic IOVA-contiguous allocation
The previous commit reduced the amount of required memory when
populating the mempool with non-IOVA-contiguous memory.

Since there is no big advantage to having a fully IOVA-contiguous mempool
when it is not explicitly requested, remove this code; it simplifies the
populate function.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2019-11-06 11:11:11 +01:00
Olivier Matz
eba11e3646 mempool: reduce wasted space on populate
The size returned by rte_mempool_op_calc_mem_size_default() is aligned
to the specified page size. Therefore, with big pages, the returned size
can be much more than what is really needed to populate the mempool.

For instance, populating a mempool that requires 1.1GB of memory with
1GB hugepages can result in allocating 2GB of memory.

This problem is hidden most of the time due to the allocation method of
rte_mempool_populate_default(): when try_iova_contig_mempool=true, it
first tries to allocate an iova-contiguous area, without the alignment
constraint. If that fails, it falls back to an aligned allocation that
does not need to be iova-contiguous. This can also fall back to several
smaller aligned allocations.

This commit changes rte_mempool_op_calc_mem_size_default() to relax the
alignment constraint to a cache line and to return a smaller size.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2019-11-06 11:11:10 +01:00
Olivier Matz
354788b60c mempool: allow populating with unaligned virtual area
rte_mempool_populate_virt() currently requires that both addr
and length are page-aligned.

Remove this unneeded constraint, which can be annoying with big
hugepages (e.g. 1GB).

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2019-11-06 11:11:09 +01:00
Olivier Matz
a2b5a8722f mempool: clarify default populate function
No functional change.
Clarify the populate function to make future changes easier to
understand.

Rename the variables:
- to avoid negation in the name
- to have more understandable names

Remove a useless variable (no_pageshift is equivalent to pg_sz == 0).

Remove the duplicate assignment of the "external" variable.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-10-16 10:41:21 +02:00
Anatoly Burakov
028669bc9f eal: hide shared memory config
Now that everything that has ever accessed the shared memory
config does so through the public APIs, we can make it
internal. Since we're removing quite a few headers from
rte_eal_memconfig.h, we need to add them back in the places
where this header is used.

This bumps the ABI, so also change all build files and
update the documentation.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
2019-07-06 10:32:34 +02:00
Anatoly Burakov
119cee86cd eal: add API to lock/unlock mempool list
Currently, in order to lock access to the mempool list, a direct
access to the shared memory structure is needed. Add an API to do
the same, and search-and-replace all usages.
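
A minimal sketch of the new accessors in use (illustrative only):

	/* lock the mempool list without touching the shared config directly */
	rte_mcfg_mempool_read_lock();
	/* ... walk or look up entries in the mempool list ... */
	rte_mcfg_mempool_read_unlock();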

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
2019-07-05 22:31:39 +02:00
Anatoly Burakov
a36f5ce06e eal: add API to lock/unlock tailq list
Currently, locking/unlocking the TAILQ list requires direct
access to the shared memory config. Add an API to do the same,
and search-and-replace all usages.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
2019-07-05 22:13:23 +02:00
John McNamara
8bd5f07c7a doc: fix spelling reported by aspell in comments
Fix spelling errors in the doxygen docs.

Signed-off-by: John McNamara <john.mcnamara@intel.com>
2019-05-03 00:38:14 +02:00
Bruce Richardson
6723c0fc72 replace snprintf with strlcpy
Do a global replace of snprintf(..."%s",...) with strlcpy, adding in the
rte_string_fns.h header if needed.  The function changes in this patch were
auto-generated via command:

  spatch --sp-file devtools/cocci/strlcpy.cocci --dir . --in-place

and then the files edited using awk to add in the missing header:

  gawk -i inplace '/include <rte_/ && ! seen { \
  	print "#include <rte_string_fns.h>"; seen=1} {print}'

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2019-04-04 22:46:05 +02:00
Bruce Richardson
f9acaf84e9 replace snprintf with strlcpy without adding extra include
For files that already have rte_string_fns.h included in them, we can
do a straight replacement of snprintf(..."%s",...) with strlcpy. The
changes in this patch were auto-generated via command:

spatch --sp-file devtools/cocci/strlcpy-with-header.cocci --dir . --in-place

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2019-04-04 22:45:54 +02:00
Jerin Jacob
55878866eb use appropriate EAL macro for constructors
Use eal's RTE_INIT abstraction for defining constructors.
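
The change amounts to replacing open-coded constructor attributes with
the EAL macro; a hedged before/after sketch (function name is made up):

	/* before: open-coded constructor */
	static void __attribute__((constructor, used)) my_init(void)
	{
		/* registration work */
	}

	/* after: EAL abstraction */
	RTE_INIT(my_init)
	{
		/* registration work */
	}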

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
2019-03-27 23:10:57 +01:00
Anatoly Burakov
65ff37b105 malloc: add function to check if socket is external
An API is needed to check whether a particular socket ID belongs
to an internal or an external heap. The prime user of this would be
the mempool allocator, because the normal assumption of IOVA
contiguousness in IOVA-as-VA mode does not hold for
externally allocated memory.
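
The check is exposed as rte_malloc_heap_socket_is_external(); a minimal
sketch of how a caller might use it (return convention as understood,
reaction is illustrative):

	int ret = rte_malloc_heap_socket_is_external(socket_id);

	if (ret < 0) {
		/* invalid socket id; rte_errno is set */
	} else if (ret == 1) {
		/* external heap: IOVA-contiguity cannot be assumed in
		 * IOVA-as-VA mode */
	}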

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
2018-10-11 11:11:25 +02:00
Anatoly Burakov
5282bb1c36 mem: allow memseg lists to be marked as external
When we allocate and use DPDK memory, we need to be able to
differentiate between DPDK hugepage segments and segments that
were made part of DPDK but are externally allocated. Add such
a property to memseg lists.

This breaks the ABI, so document the change in release notes.
This also breaks a few internal assumptions about memory
contiguousness, so adjust malloc code in a few places.

All current calls for memseg walk functions were adjusted to
ignore external segments where it made sense.

Mempools are a special case, because we may be asked to allocate
a mempool on a specific socket, and we need to ignore all page
sizes on other heaps or other sockets. Previously, this
assumption of knowing all page sizes was not a problem, but it
will be now, so we have to match the socket ID with the page size when
calculating the minimum page size for a mempool.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
2018-10-11 10:24:29 +02:00
Andrew Rybchenko
91ad034919 mempool: fold memory size calculation helper
rte_mempool_calc_mem_size_helper() was introduced to avoid
code duplication and was used in the deprecated rte_mempool_xmem_size()
and in rte_mempool_op_calc_mem_size_default(). Now that the first one is
removed, it is better to fold the helper into the second one to make it
more readable.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-08-05 23:41:57 +02:00
Andrew Rybchenko
e2594a36e9 mempool: remove deprecated functions
Functions rte_mempool_populate_phys(), rte_mempool_virt2phy() and
rte_mempool_populate_phys_tab() are just wrappers for corresponding
IOVA functions and were deprecated in v17.11.

Functions rte_mempool_xmem_create(), rte_mempool_xmem_size(),
rte_mempool_xmem_usage() and rte_mempool_populate_iova_tab() were
deprecated in v18.05 and removal was announced earlier in v18.02.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-08-05 23:41:57 +02:00
Pablo de Lara
b2a4654f08 mempool: check for zero size creation
Currently, a mempool can be created even if the number of
objects is zero. In this scenario, rte_mempool_create
should return NULL, as a mempool with no objects is useless.
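
With the check in place, such a request fails cleanly; a hedged sketch:

	struct rte_mempool *mp;

	mp = rte_mempool_create("test_zero", 0 /* n */, 64, 0, 0,
				NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		printf("rejected: %s\n", rte_strerror(rte_errno));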

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-08-05 15:35:02 +02:00
Anatoly Burakov
460354cd4e mempool: fix virtual address population
Currently, populate_virt will check if the mempool is already populated.
This makes it impossible to reserve multi-chunk mempools when
contiguous memory is not a hard requirement, because if allocating
all-contiguous memory fails, the mempool will retry with virtual addresses
and will call populate_virt. It seems that the original code never
anticipated more than one non-physically contiguous area.

Fix it by removing the check in populate_virt. The populate_anon()
function also calls populate_virt(), and it can reasonably be inferred
that it expects the virtual area not to be populated already. Even though
a similar check is already in place there, also add the check that was
part of populate_virt(), just in case.

Fixes: aab4f62d6c ("mempool: support no hugepage mode")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-05-15 16:30:14 +02:00
Ferruh Yigit
04db1d0da7 lib: clear experimental version tag in linker scripts
Remove the version tag from the experimental block in linker version
scripts (.map files).

That label is not used by the linker and is informational only. It is
useful for version blocks, but for the experimental block it is not
useful and only confusing. Remove those labels.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
2018-05-14 03:37:28 +02:00