Commit Graph

143 Commits

Fady Bader
972848da5f mempool: use generic memory syscall wrappers
Use generic memory management calls instead of Unix memory management
calls in mempool.

Signed-off-by: Fady Bader <fady@mellanox.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
2020-07-07 01:24:55 +02:00
Olivier Matz
c0280d5d8a mempool: return 0 if area is too small on populate
Change rte_mempool_populate_iova() and rte_mempool_populate_virt() to
return 0 instead of -EINVAL when there is not enough room to store one
object, as it can be helpful for applications to distinguish this
specific case.

As this is an ABI change, use symbol versioning to preserve old
behavior for binary applications.
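
As a rough illustration of the new convention (a sketch, not part of
the patch; the add_area() helper and its branch handling are
assumptions):

  #include <rte_mempool.h>

  /* Sketch: distinguish "area too small" (now 0) from real errors. */
  static int add_area(struct rte_mempool *mp, char *vaddr,
          rte_iova_t iova, size_t len)
  {
      int n = rte_mempool_populate_iova(mp, vaddr, iova, len,
              NULL, NULL);

      if (n < 0)
          return n;  /* genuine failure: propagate the negative errno */
      if (n == 0)
          return 0;  /* too small for one object: skip this area */
      return n;      /* n objects were added to the pool */
  }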

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Lukasz Wojciechowski <l.wojciechow@partner.samsung.com>
2020-05-05 00:27:05 +02:00
Sunil Kumar Kori
40b75c73d1 mempool: add tracepoints
Add tracepoints to important and mandatory APIs for tracing support.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-04-23 15:40:11 +02:00
Jerin Jacob
3f2d6766e3 mempool: remove memory wastage on non-x86
The existing optimize_object_size() function addresses the memory object
alignment constraint on x86 for better performance.

Different (micro)architectures may have different memory alignment
constraints for better performance, and they are not the same as the
existing optimize_object_size().

Some use an XOR scheme (a kind of CRC) to enable DRAM channel distribution
based on the address, and some may have a different formula.

Introduce an arch_mem_object_align() function to abstract the
differences between (micro)architectures, to avoid wasting memory on
mempool object alignment for architectures that do not require it.

Details on the amount of memory saving:

Currently, arm64-based architectures use the default (nchan=4,
nrank=1). The worst case is an object whose size (including the mempool
header) is 2 cache lines, which is padded to 3 cache lines (+50%).

Examples for cache line size = 64:
  orig     optimized
  64    -> 64           +0%
  128   -> 192          +50%
  192   -> 192          +0%
  256   -> 320          +25%
  320   -> 320          +0%
  384   -> 448          +16%
  ...
  2304  -> 2368         +2.7%  (~mbuf size)

Additional details:
https://www.mail-archive.com/dev@dpdk.org/msg149157.html
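
For illustration, a minimal sketch of the x86-style padding that this
commit moves behind arch_mem_object_align(); the helper names and the
ALIGN constant are illustrative, not the actual mempool code:

  #define ALIGN 64 /* cache line size, as in the examples above */

  static unsigned int gcd(unsigned int a, unsigned int b)
  {
      while (b != 0) {
          unsigned int t = b;
          b = a % b;
          a = t;
      }
      return a;
  }

  /* Pad the object size (in cache lines) until it is coprime with
   * nchan * nrank, so consecutive objects spread over all channels. */
  static unsigned int pad_for_channels(unsigned int obj_size,
          unsigned int nchan, unsigned int nrank)
  {
      unsigned int lines = (obj_size + ALIGN - 1) / ALIGN;

      while (gcd(lines, nchan * nrank) != 1)
          lines++;
      return lines * ALIGN;
  }

With nchan=4 and nrank=1, a 128-byte object (2 lines) is padded to
192 bytes (3 lines), matching the +50% worst case in the table above.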

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-01-20 16:37:27 +01:00
Olivier Matz
43503c59ad mempool: fix populate with small virtual chunks
To populate a mempool with a virtual area, the mempool code calls
rte_mempool_populate_iova() for each iova-contiguous area. It happens
(rarely) that this area is too small to store one object. In this case,
rte_mempool_populate_iova() returns an error, which is forwarded by
rte_mempool_populate_virt().

This case should not make rte_mempool_populate_virt() fail.
Instead, an area that is too small should simply be ignored.

To fix this issue, change the return value of
rte_mempool_populate_iova() to 0 when no object can be populated,
so it can be ignored by the caller. As this would be an API/ABI change,
only do this modification internally for now.

Fixes: 354788b60c ("mempool: allow populating with unaligned virtual area")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Alvin Zhang <alvinx.zhang@intel.com>
2020-01-20 16:34:54 +01:00
Olivier Matz
3a3d0c75b4 mempool: fix slow allocation of large pools
When allocating a mempool which is larger than the largest
available area, it can take a lot of time:

a- the mempool calculates the required memory size and tries
   to allocate it; this fails
b- it then tries to allocate the largest available area (this
   does not request new huge pages)
c- this zone is added to the mempool, which triggers the allocation
   of a mem hdr and requests a new huge page
d- back to a- until the mempool is populated or there is no
   more memory

It can take a long time (several minutes) to finally fail: in step
a- all available hugepages on the system are taken, then released
after the failure.

The problem appeared with commit eba11e3646 ("mempool: reduce wasted
space on populate"), because smaller chunks are now allowed. Previously,
a chunk had to be at least one page in size, which is not the case in
step b-.

To fix this, implement our own way to allocate the largest available
area instead of using the feature from memzone: if an allocation fails,
try to divide the size by 2 and retry. When the requested size falls
below min_chunk_size, stop and return an error.
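
A minimal sketch of this halving strategy (names are illustrative;
try_alloc() stands in for a memzone reservation that can fail):

  #include <stddef.h>

  extern void *try_alloc(size_t len); /* hypothetical allocator */

  static void *alloc_largest(size_t want, size_t min_chunk_size,
          size_t *out_len)
  {
      size_t len;

      for (len = want; len >= min_chunk_size; len /= 2) {
          void *p = try_alloc(len);
          if (p != NULL) {
              *out_len = len; /* caller populates, then retries for the rest */
              return p;
          }
      }
      return NULL; /* below min_chunk_size: fail instead of looping */
  }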

Fixes: eba11e3646 ("mempool: reduce wasted space on populate")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tested-by: Ali Alnubani <alialnu@mellanox.com>
2020-01-20 12:12:20 +01:00
Olivier Matz
f159c61c35 mempool: fix anonymous populate
The documentation says that a negative errno is returned on error, but
in most places that's not the case.

Fix the documentation and the exceptions in the code. The second fix
(the return from populate_virt) also fixes a memory leak.

Note that testpmd was using the function correctly.

Fixes: aa10457eb4 ("mempool: make mempool populate and free api public")
Fixes: 6780f72fb8 ("mempool: populate with anonymous memory")
Fixes: 66e7ba0bad ("mempool: ensure mempool is initialized before populating")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2020-01-20 12:12:20 +01:00
Anatoly Burakov
84191ddeb5 mempool: remove check for bad IOVA when populating
Currently, mempool checks whether a segment's IOVA is bad, and rejects
the IOVA if hugepages are also enabled. This check is wrong because, now
that we have external memory segments, they are allowed to have invalid
IOVAs. The check also doesn't make much sense in the first place,
because the following code handles bad IOVAs perfectly well (and in
fact the check does not trigger a failure when the --no-huge option is
enabled), so there is little point in keeping it.

Fixes: 950e8fb4e1 ("mem: allow registering external memory areas")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Bo Chen <box.c.chen@intel.com>
2019-11-19 21:41:43 +01:00
Anatoly Burakov
2a7fd3ef38 mempool: use actual IOVA addresses when populating
Currently, when a mempool is being populated, we get the IOVA address
of every segment using rte_mem_virt2iova(). This works for internal
memory, but does not really work for external memory, and does not
work on platforms which return RTE_BAD_IOVA as a result of this
call (such as FreeBSD). Moreover, even when it works, the function
in question will do unnecessary pagewalks in IOVA-as-PA mode, as
it falls back to rte_mem_virt2phy() instead of just doing a lookup in
the internal memseg table.

To fix this, first attempt to look through the internal memseg table
(this takes care of internal and external memory), and fall back to
rte_mem_virt2iova() when the VA->IOVA translation cannot be performed
via the memseg table.
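
A sketch of the lookup order described above, close in spirit to the
helper added by this commit (details may differ from the actual patch):

  #include <rte_memory.h>
  #include <rte_common.h>

  static rte_iova_t lookup_iova(void *addr)
  {
      const struct rte_memseg *ms;

      /* memseg table first: covers internal and external memory */
      ms = rte_mem_virt2memseg(addr, NULL);
      if (ms != NULL && ms->iova != RTE_BAD_IOVA)
          return ms->iova + RTE_PTR_DIFF(addr, ms->addr);

      /* fall back to the generic translation */
      return rte_mem_virt2iova(addr);
  }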

Fixes: 66cc45e293 ("mem: replace memseg with memseg lists")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Bo Chen <box.c.chen@intel.com>
2019-11-19 21:41:43 +01:00
Olivier Matz
b32037f7ef mempool: use specific macro for object alignment
For consistency, RTE_MEMPOOL_ALIGN should be used in place of
RTE_CACHE_LINE_SIZE. They have the same value, because the only arch
that was defining a specific value for it has been removed from DPDK.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2019-11-06 11:34:19 +01:00
Olivier Matz
84626a0d61 mempool: prevent objects from being across pages
When populating a mempool, ensure that objects do not span several
pages, unless the user did not request IOVA-contiguous objects.
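
The per-object rule can be sketched as follows (illustrative check,
assuming pg_sz is the page size in force, or 0 when no constraint
applies):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stddef.h>

  /* true if an object placed at va would straddle a page boundary */
  static bool crosses_page(uintptr_t va, size_t obj_size, size_t pg_sz)
  {
      if (pg_sz == 0)
          return false; /* no page-boundary constraint */
      return (va / pg_sz) != ((va + obj_size - 1) / pg_sz);
  }

When this returns true, the populate loop skips ahead to the next page
boundary before placing the object.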

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-11-06 11:34:19 +01:00
Olivier Matz
b291e69423 mempool: introduce function to get mempool page size
In rte_mempool_populate_default(), we determine the page size,
which is needed for calc_size and allocation of memory.

Move this into a function and export it; it will be used in a later
commit.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2019-11-06 11:11:12 +01:00
Olivier Matz
035ee5bea5 mempool: remove optimistic IOVA-contiguous allocation
The previous commit reduced the amount of memory required when
populating the mempool with non-IOVA-contiguous memory.

Since there is no big advantage to having a fully IOVA-contiguous
mempool when it is not explicitly requested, remove this code; it
simplifies the populate function.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2019-11-06 11:11:11 +01:00
Olivier Matz
eba11e3646 mempool: reduce wasted space on populate
The size returned by rte_mempool_op_calc_mem_size_default() is aligned
to the specified page size. Therefore, with big pages, the returned size
can be much more than what we really need to populate the mempool.

For instance, populating a mempool that requires 1.1GB of memory with
1GB hugepages can result in allocating 2GB of memory.

This problem is hidden most of the time due to the allocation method of
rte_mempool_populate_default(): when try_iova_contig_mempool=true, it
first tries to allocate an IOVA-contiguous area, without the alignment
constraint. If that fails, it falls back to an aligned allocation that
is not required to be IOVA-contiguous, which can itself fall back to
several smaller aligned allocations.

This commit changes rte_mempool_op_calc_mem_size_default() to relax the
alignment constraint to a cache line and to return a smaller size.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2019-11-06 11:11:10 +01:00
Olivier Matz
354788b60c mempool: allow populating with unaligned virtual area
rte_mempool_populate_virt() currently requires that both addr
and length are page-aligned.

Remove this unneeded constraint, which can be annoying with big
hugepages (e.g. 1GB).

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2019-11-06 11:11:09 +01:00
Olivier Matz
a2b5a8722f mempool: clarify default populate function
No functional change.
Clarify the populate function to make future changes easier to
understand.

Rename the variables:
- to avoid negation in the names
- to use more understandable names

Remove a useless variable (no_pageshift was equivalent to pg_sz == 0).

Remove the duplicate assignment of the "external" variable.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
2019-10-16 10:41:21 +02:00
Anatoly Burakov
028669bc9f eal: hide shared memory config
Now that everything that has ever accessed the shared memory
config is doing so through the public APIs, we can make it
internal. Since we're removing quite a few headers from
rte_eal_memconfig.h, we need to add them back in places
where this header is used.

This bumps the ABI, so also change all build files and
update the documentation.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
2019-07-06 10:32:34 +02:00
Anatoly Burakov
119cee86cd eal: add API to lock/unlock mempool list
Currently, in order to lock access to the mempool list, a direct
access to the shared memory structure is needed. Add an API to do
the same, and search-and-replace all usages.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
2019-07-05 22:31:39 +02:00
Anatoly Burakov
a36f5ce06e eal: add API to lock/unlock tailq list
Currently, locking/unlocking the TAILQ list requires direct
access to the shared memory config. Add an API to do the same,
and search-and-replace all usages.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: David Marchand <david.marchand@redhat.com>
2019-07-05 22:13:23 +02:00
Bruce Richardson
f9acaf84e9 replace snprintf with strlcpy without adding extra include
For files that already have rte_string_fns.h included in them, we can
do a straight replacement of snprintf(..."%s",...) with strlcpy. The
changes in this patch were auto-generated via command:

spatch --sp-file devtools/cocci/strlcpy-with-header.cocci --dir . --in-place
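
For illustration, the transformation looks like this (hedged example;
the variable names are hypothetical, not taken from the patch):

  /* before */
  snprintf(name, sizeof(name), "%s", src);
  /* after */
  strlcpy(name, src, sizeof(name));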

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2019-04-04 22:45:54 +02:00
Anatoly Burakov
65ff37b105 malloc: add function to check if socket is external
An API is needed to check whether a particular socket ID belongs
to an internal or external heap. The prime user of this would be the
mempool allocator, because the normal assumptions of IOVA
contiguousness in IOVA-as-VA mode do not hold in the case of
externally allocated memory.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
2018-10-11 11:11:25 +02:00
Anatoly Burakov
5282bb1c36 mem: allow memseg lists to be marked as external
When we allocate and use DPDK memory, we need to be able to
differentiate between DPDK hugepage segments and segments that
were made part of DPDK but are externally allocated. Add such
a property to memseg lists.

This breaks the ABI, so document the change in release notes.
This also breaks a few internal assumptions about memory
contiguousness, so adjust malloc code in a few places.

All current calls for memseg walk functions were adjusted to
ignore external segments where it made sense.

Mempools are a special case, because we may be asked to allocate
a mempool on a specific socket, and we need to ignore all page
sizes on other heaps or other sockets. Previously, this
assumption of knowing all page sizes was not a problem, but it
will be now, so we have to match the socket ID with the page size
when calculating the minimum page size for a mempool.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
2018-10-11 10:24:29 +02:00
Andrew Rybchenko
91ad034919 mempool: fold memory size calculation helper
rte_mempool_calc_mem_size_helper() was introduced to avoid
code duplication and was used in the deprecated rte_mempool_xmem_size()
and in rte_mempool_op_calc_mem_size_default(). Now the former has been
removed, and it is better to fold the helper into the latter to make it
more readable.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-08-05 23:41:57 +02:00
Andrew Rybchenko
e2594a36e9 mempool: remove deprecated functions
Functions rte_mempool_populate_phys(), rte_mempool_virt2phy() and
rte_mempool_populate_phys_tab() are just wrappers for corresponding
IOVA functions and were deprecated in v17.11.

Functions rte_mempool_xmem_create(), rte_mempool_xmem_size(),
rte_mempool_xmem_usage() and rte_mempool_populate_iova_tab() were
deprecated in v18.05 and removal was announced earlier in v18.02.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-08-05 23:41:57 +02:00
Pablo de Lara
b2a4654f08 mempool: check for zero size creation
Currently, a mempool can be created even if the number of
objects is zero. In this scenario, rte_mempool_create()
should return NULL, as the mempool created would
otherwise be useless.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-08-05 15:35:02 +02:00
Anatoly Burakov
460354cd4e mempool: fix virtual address population
Currently, populate_virt will check whether the mempool is already
populated. This makes it impossible to reserve multi-chunk mempools when
contiguous memory is not a hard requirement, because if allocating
all-contiguous memory fails, the mempool retries with virtual addresses
and calls populate_virt. It seems that the original code never
anticipated more than one non-physically-contiguous area.

Fix it by removing the check in populate_virt. The populate_anon()
function also calls populate_virt(), and it can reasonably be inferred
that it expects the virtual area not to be populated already. Even
though a similar check is already in place there, also add the check
that was part of populate_virt(), just in case.

Fixes: aab4f62d6c ("mempool: support no hugepage mode")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-05-15 16:30:14 +02:00
Olivier Matz
5751ff40fe mempool: fix alignment of memzone length when populating
When populating a mempool with the default function, if there is not
enough virtually contiguous memory for the whole mempool, it will be
populated with several chunks. A chunk of the maximum available length
is requested with:

  mz = rte_memzone_reserve_aligned(..., len=0, ..., align=x)

If align is smaller than the page size, the address and the length of
the memzone may not be multiples of the page size. This makes
rte_mempool_populate_virt() fail, because it requires them to be
page-aligned. This patch fixes that.

The problem can be reproduced easily by allocating more than available
memory:
  ./build/app/testpmd -l 0,1 -- --total-num-mbufs=65536
  ...
  Cause: Creation of mbuf pool for socket 0 failed: Invalid argument

After the patch, the error code is correct:
  ./build/app/testpmd -l 0,1 -- --total-num-mbufs=65536
  ...
  Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory

Fixes: ba0009560c ("mempool: support new allocation methods")

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-05-08 15:58:20 +02:00
Artem V. Andreev
8a80fa4723 mempool: support block dequeue operation
If the mempool manager supports object blocks (physically and virtually
contiguous sets of objects), it is sufficient to get only the first
object, and the function avoids having to fill in information about
each block member.

Signed-off-by: Artem V. Andreev <artem.andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-26 23:34:07 +02:00
Andrew Rybchenko
05912855bc mempool: remove callback to register memory area
The callback is not required any more, since there is a new callback
to populate objects using a provided memory area, and it supplies
the same information.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 02:17:43 +02:00
Andrew Rybchenko
fd943c764a mempool: deprecate xmem functions
Move the rte_mempool_xmem_size() code to an internal helper function,
since it is required in two places: the deprecated
rte_mempool_xmem_size() and the non-deprecated
rte_mempool_op_calc_mem_size_default().

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 02:17:41 +02:00
Andrew Rybchenko
ce1f2c61ed mempool: remove callback to get capabilities
The callback was introduced to let generic code know the octeontx
mempool driver's requirements: use a single physically contiguous
memory chunk to store all objects, and align each object address to
the total object size. Now these requirements are met using new
callbacks to calculate the required memory chunk size and to populate
objects using the provided memory chunk.

These capability flags are not used anywhere else.

Restricting capabilities to flags is not generic and likely to
be insufficient to describe mempool driver features. If required
in the future, an API which returns structured information may be
added.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 02:16:12 +02:00
Andrew Rybchenko
e1174f2d53 mempool: add op to populate objects using provided memory
The callback allows customizing how objects are stored in the
memory chunk. A default implementation of the callback, which simply
places objects one after another, is available.
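
A simplified sketch of that default behavior (the enqueue of each
object into the pool is omitted, and the signature only approximates
the op added by this series):

  #include <rte_mempool.h>

  static int populate_default(struct rte_mempool *mp,
          unsigned int max_objs, void *vaddr, rte_iova_t iova,
          size_t len, rte_mempool_populate_obj_cb_t *obj_cb,
          void *obj_cb_arg)
  {
      size_t total = mp->header_size + mp->elt_size + mp->trailer_size;
      size_t off;
      unsigned int i;

      /* lay objects out back to back, reporting each one to obj_cb */
      for (off = 0, i = 0; off + total <= len && i < max_objs;
              off += total, i++)
          obj_cb(mp, obj_cb_arg,
                 (char *)vaddr + off + mp->header_size,
                 (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA :
                 iova + off + mp->header_size);
      return i; /* number of objects placed */
  }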

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 02:03:32 +02:00
Andrew Rybchenko
0a48646893 mempool: add op to calculate memory size to be allocated
The size of the memory chunk required to populate mempool objects
depends on how the objects are stored in memory. Different mempool
drivers may have different requirements, and a new operation allows
calculating the memory size in accordance with driver requirements,
and advertising requirements on minimum memory chunk size and
alignment, in a generic way.
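
The op's shape, approximately as introduced by this series (hedged;
check rte_mempool.h for the authoritative definition):

  typedef ssize_t (*rte_mempool_calc_mem_size_t)(
          const struct rte_mempool *mp, uint32_t obj_num,
          uint32_t pg_shift, size_t *min_chunk_size, size_t *align);

A driver returns the total memory size required for obj_num objects,
and reports its minimum chunk size and alignment through the out
parameters.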

Bump ABI version since the patch breaks it.

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
2018-04-24 02:02:58 +02:00
Artem V. Andreev
66e7ba0bad mempool: ensure mempool is initialized before populating
The callback that calculates the required memory area size may require
the mempool driver data to be already allocated and initialized.

Signed-off-by: Artem V. Andreev <artem.andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 01:41:01 +02:00
Andrew Rybchenko
4143b12200 mempool: rename flag to control IOVA-contiguous objects
Flag MEMPOOL_F_NO_PHYS_CONTIG is renamed as MEMPOOL_F_NO_IOVA_CONTIG
to follow IO memory contiguous terminology.
MEMPOOL_F_NO_PHYS_CONTIG is kept for backward compatibility and
deprecated.

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 01:39:20 +02:00
Andrew Rybchenko
25e6755056 mempool: fix leak when no objects are populated
Fixes: 84121f1971 ("mempool: store memory chunks in a list")
Cc: stable@dpdk.org

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 01:28:53 +02:00
Anatoly Burakov
66cc45e293 mem: replace memseg with memseg lists
Before, we were aggregating multiple pages into one memseg, so the
number of memsegs was small. Now, each page gets its own memseg,
so the list of memsegs is huge. To accommodate the new memseg list
size and to keep the under-the-hood workings sane, the memseg list
is now not just a single list, but multiple lists. To be precise,
each hugepage size available on the system gets one or more memseg
lists, per socket.

In order to support dynamic memory allocation, we reserve all
memory in advance (unless we're in 32-bit legacy mode, in which
case we do not preallocate memory). As in, we do an anonymous
mmap() of the entire maximum size of memory per hugepage size, per
socket (which is limited to either RTE_MAX_MEMSEG_PER_TYPE pages or
RTE_MAX_MEM_MB_PER_TYPE megabytes worth of memory, whichever is the
smaller one), split over multiple lists (which are limited to
either RTE_MAX_MEMSEG_PER_LIST memsegs or RTE_MAX_MEM_MB_PER_LIST
megabytes per list, whichever is the smaller one). There is also
a global limit of CONFIG_RTE_MAX_MEM_MB megabytes, which is mainly
used for 32-bit targets to limit the amount of preallocated memory,
but can also be used to place an upper limit on the total amount of VA
memory that can be allocated by a DPDK application.

So, for each hugepage size, we get (by default) up to 128G worth
of memory, per socket, split into chunks of up to 32G in size.
The address space is claimed at the start, in eal_common_memory.c.
The actual page allocation code is in eal_memalloc.c (Linux-only),
and largely consists of copied EAL memory init code.

Pages in the list are also indexed by address. That is, in order
to figure out where the page belongs, one can simply look at base
address for a memseg list. Similarly, figuring out IOVA address
of a memzone is a matter of finding the right memseg list, getting
offset and dividing by page size to get the appropriate memseg.
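
A sketch of that by-address lookup (field names follow struct
rte_memseg_list but are used here illustratively):

  #include <rte_memory.h>
  #include <rte_fbarray.h>
  #include <rte_common.h>

  static const struct rte_memseg *
  find_seg(const struct rte_memseg_list *msl, const void *addr)
  {
      /* each memseg in a list covers exactly one page */
      size_t off = RTE_PTR_DIFF(addr, msl->base_va);

      return rte_fbarray_get(&msl->memseg_arr, off / msl->page_sz);
  }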

This commit also removes rte_eal_dump_physmem_layout() call,
according to deprecation notice [1], and removes that deprecation
notice as well.

On 32-bit targets, due to the limited VA space, DPDK will no longer
spread memory across different sockets as before. Instead, it will
(by default) allocate all of the memory on the socket where the master
lcore is. To override this behavior, --socket-mem must be used.

The rest of the changes are really ripple effects from the memseg
change - heap changes, compile fixes, and rewrites to support
fbarray-backed memseg lists. Due to earlier switch to _walk()
functions, most of the changes are simple fixes, however some
of the _walk() calls were switched to memseg list walk, where
it made sense to do so.

Additionally, we are also switching locks from flock() to fcntl().
Down the line, we will be introducing single-file segments option,
and we cannot use flock() locks to lock parts of the file. Therefore,
we will use fcntl() locks for legacy mem as well, in case someone is
unfortunate enough to accidentally start a legacy mem primary process
alongside an already running non-legacy mem-based primary process.

[1] http://dpdk.org/dev/patchwork/patch/34002/

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:55:39 +02:00
Anatoly Burakov
8f7335c1be mempool: use memseg walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:49:16 +02:00
Anatoly Burakov
ba0009560c mempool: support new allocation methods
If a user has specified that the zone should have contiguous memory,
add a memzone flag to request contiguous memory. Otherwise, account
for the fact that unless we're in IOVA_AS_VA mode, we cannot
guarantee that the pages would be physically contiguous, so we
calculate the memzone size and alignments as if we were getting
the smallest page size available.

However, for the non-IOVA-contiguous case, the existing mempool size
calculation function doesn't give us the expected results, because it
returns memzone sizes aligned to the page size (e.g. a 1MB mempool
may use an entire 1GB page). Therefore, in cases where we weren't
specifically asked to reserve non-contiguous memory, first try
reserving a memzone as IOVA-contiguous, and if that fails, then
try reserving with page-aligned size/alignment.
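
The fallback can be sketched like this (hedged; mz_name, the sizes and
the flag usage are illustrative of the memzone API rather than the
exact patch):

  const struct rte_memzone *mz;

  /* first try a single IOVA-contiguous reservation */
  mz = rte_memzone_reserve_aligned(mz_name, mem_size, socket_id,
          RTE_MEMZONE_IOVA_CONTIG, align);
  if (mz == NULL)
      /* retry without contiguity, page-aligned to be safe */
      mz = rte_memzone_reserve_aligned(mz_name, pg_aligned_size,
              socket_id, 0, pg_sz);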

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:48 +02:00
Andrew Rybchenko
ce42ae42bc mempool: fix physical contiguous check
There is no specified dependency between rte_mempool_populate_default()
and rte_mempool_populate_iova(), so the latter should not rely on the
fact that the former adds capability flags to the mempool flags.

Fixes: 65cf769f5e ("mempool: detect physical contiguous objects")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2018-02-06 00:51:19 +01:00
Olivier Matz
7e92cef514 mempool: use SPDX tags
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2018-02-01 02:27:22 +01:00
Pavan Nikhilesh
be2d94e5eb mempool: fix first memory area notification
Mempool creation needs to be completed before notifying the pool
manager to register the mempool area.

Fixes: 12b8cc1a7e ("mempool: notify memory area to pool")
Cc: stable@dpdk.org

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2018-01-12 18:30:20 +01:00
Thomas Monjalon
c3ee68b879 mempool: rename populate functions to IOVA
The functions rte_mempool_populate_phys() and
rte_mempool_populate_phys_tab() are renamed to
rte_mempool_populate_iova() and rte_mempool_populate_iova_tab().
The deprecated functions are kept as aliases to avoid breaking the API.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-11-06 22:26:34 +01:00
Thomas Monjalon
efd785f994 mempool: rename addresses from physical to IOVA
The struct fields phys_addr_t rte_mempool_objhdr.physaddr and
rte_mempool_memhdr.phys_addr are renamed to rte_iova_t iova.
The deprecated names are kept in an anonymous union to avoid breaking
the API.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-11-06 22:25:55 +01:00
Thomas Monjalon
f17ca7870f memzone: rename address from physical to IOVA
The struct rte_memzone field .phys_addr is renamed to .iova.
The deprecated name is kept in an anonymous union to avoid breaking
the API.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2017-11-06 22:25:44 +01:00
Thomas Monjalon
62196f4e09 mem: rename address mapping function to IOVA
The function rte_mem_virt2phy() is kept and used in functions that
work only with physical addresses.
For all other calls this function is replaced by rte_mem_virt2iova()
which does a direct mapping (no conversion) in the VA case.

Note: the new rte_mem_virt2iova() function matches the
behaviour implemented in rte_mem_virt2phy() by the commit
680f6c1260 ("mem: honor IOVA mode in virt2phy")

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2017-11-06 22:24:19 +01:00
Hemant Agrawal
e9508b64ca mempool: remove get capability debug log
This does not need to be printed for every mempool call.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Reviewed-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2017-10-12 03:30:26 +02:00
Jianfeng Tan
a7cb2e20d2 mem: remove API to get physical address in dom0
Previously, to get the MFN address in dom0, this API was a wrapper to
obtain the "physical address".

As xen dom0 support is being removed, this API is no longer necessary.

Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2017-10-09 01:52:37 +02:00
Jianfeng Tan
1950bd7694 xen: remove dependency in libraries
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2017-10-09 01:52:08 +02:00
Santosh Shukla
12b8cc1a7e mempool: notify memory area to pool
A HW pool manager, e.g. on the Octeontx SoC, requires software to
program the start and end addresses of the pool. Currently, there is no
such API in the external mempool. Introduce the
rte_mempool_ops_register_memory_area API, which lets the HW pool
manager know when the common layer selects a hugepage:
for each hugepage, notify its start/end address to the HW pool manager.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:58:39 +02:00