124 Commits

Bruce Richardson
f9acaf84e9 replace snprintf with strlcpy without adding extra include
For files that already have rte_string_fns.h included in them, we can
do a straight replacement of snprintf(..."%s",...) with strlcpy. The
changes in this patch were auto-generated via command:

spatch --sp-file devtools/cocci/strlcpy-with-header.cocci --dir . --in-place

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2019-04-04 22:45:54 +02:00
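
As an illustration, a minimal before/after of the substitution the
cocci script performs (buffer and source names are hypothetical):

  /* before: bounded copy expressed through snprintf */
  snprintf(ifname, sizeof(ifname), "%s", src_name);

  /* after: same bounded-copy semantics via rte_string_fns.h */
  strlcpy(ifname, src_name, sizeof(ifname));
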
Anatoly Burakov
65ff37b105 malloc: add function to check if socket is external
An API is needed to check whether a particular socket ID belongs
to an internal or external heap. The prime user of this would be the
mempool allocator, because the normal assumption of IOVA
contiguousness in IOVA-as-VA mode does not hold for externally
allocated memory.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
2018-10-11 11:11:25 +02:00
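
A usage sketch, assuming the new call is
rte_malloc_heap_socket_is_external(), returning 1 for external heaps,
0 for internal ones, and a negative value for unknown sockets:

  #include <rte_malloc.h>

  int ext = rte_malloc_heap_socket_is_external(socket_id);
  if (ext == 1) {
          /* externally allocated heap: IOVA-contiguity cannot be
           * assumed even in IOVA-as-VA mode */
  } else if (ext == 0) {
          /* internal DPDK hugepage heap */
  } /* negative: socket_id does not match any known heap */
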
Anatoly Burakov
5282bb1c36 mem: allow memseg lists to be marked as external
When we allocate and use DPDK memory, we need to be able to
differentiate between DPDK hugepage segments and segments that
were made part of DPDK but are externally allocated. Add such
a property to memseg lists.

This breaks the ABI, so document the change in release notes.
This also breaks a few internal assumptions about memory
contiguousness, so adjust malloc code in a few places.

All current callers of memseg walk functions were adjusted to
ignore external segments where it made sense.

Mempools are a special case: we may be asked to allocate a
mempool on a specific socket, so we need to ignore all page
sizes on other heaps or other sockets. Previously, assuming
knowledge of all page sizes was not a problem, but it will be
now, so we have to match socket ID with page size when
calculating the minimum page size for a mempool.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
2018-10-11 10:24:29 +02:00
Andrew Rybchenko
91ad034919 mempool: fold memory size calculation helper
rte_mempool_calc_mem_size_helper() was introduced to avoid
code duplication and was used in the deprecated rte_mempool_xmem_size()
and in rte_mempool_op_calc_mem_size_default(). Now that the first one
is removed, it is better to fold the helper into the second one to
make it more readable.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-08-05 23:41:57 +02:00
Andrew Rybchenko
e2594a36e9 mempool: remove deprecated functions
Functions rte_mempool_populate_phys(), rte_mempool_virt2phy() and
rte_mempool_populate_phys_tab() are just wrappers for corresponding
IOVA functions and were deprecated in v17.11.

Functions rte_mempool_xmem_create(), rte_mempool_xmem_size(),
rte_mempool_xmem_usage() and rte_mempool_populate_iova_tab() were
deprecated in v18.05 and removal was announced earlier in v18.02.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-08-05 23:41:57 +02:00
Pablo de Lara
b2a4654f08 mempool: check for zero size creation
Currently, a mempool can be created even if the number of
objects is zero. In this scenario, rte_mempool_create()
should return NULL, as a mempool with no objects is useless.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-08-05 15:35:02 +02:00
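
A sketch of the behaviour after this fix (pool name and element size
are illustrative):

  struct rte_mempool *mp;

  /* n == 0: creation is now rejected instead of yielding an
   * unusable pool */
  mp = rte_mempool_create("test_pool", 0, 64, 0, 0,
                          NULL, NULL, NULL, NULL,
                          SOCKET_ID_ANY, 0);
  if (mp == NULL)
          return -1; /* rte_errno indicates the failure reason */
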
Anatoly Burakov
460354cd4e mempool: fix virtual address population
Currently, populate_virt() checks whether the mempool is already
populated. This makes it impossible to reserve multi-chunk mempools
when contiguous memory is not a hard requirement, because if
allocating all-contiguous memory fails, the mempool retries with
virtual addresses and calls populate_virt() again. It seems that the
original code never anticipated more than one non-physically
contiguous area.

Fix it by removing the check from populate_virt(). The
populate_anon() function also calls populate_virt(), and it can be
reasonably inferred that it expects the virtual area not to be
already populated. Even though a similar check is already in place
there, also add the check that was part of populate_virt(), just in
case.

Fixes: aab4f62d6c1c ("mempool: support no hugepage mode")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-05-15 16:30:14 +02:00
Olivier Matz
5751ff40fe mempool: fix alignment of memzone length when populating
When populating a mempool with the default function, if there is not
enough virtually contiguous memory for the whole mempool, it will be
populated with several chunks. A chunk of the maximum available length
is requested with:

  mz = rte_memzone_reserve_aligned(..., len=0, ..., align=x)

If align is smaller than the page size, the address and the length of
the memzone may not be a multiple of the page size. This makes
rte_mempool_populate_virt() fail because it requires them to be
page-aligned. This patch fixes that.

The problem can be reproduced easily by allocating more than available
memory:
  ./build/app/testpmd -l 0,1 -- --total-num-mbufs=65536
  ...
  Cause: Creation of mbuf pool for socket 0 failed: Invalid argument

After the patch, the error code is correct:
  ./build/app/testpmd -l 0,1 -- --total-num-mbufs=65536
  ...
  Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory

Fixes: ba0009560c30 ("mempool: support new allocation methods")

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
2018-05-08 15:58:20 +02:00
Artem V. Andreev
8a80fa4723 mempool: support block dequeue operation
If the mempool manager supports object blocks (physically and
virtually contiguous sets of objects), it is sufficient to get only
the first object of each block, and the function avoids filling in
information about each block member.

Signed-off-by: Artem V. Andreev <artem.andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-26 23:34:07 +02:00
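
A hedged usage sketch, assuming the accompanying API is
rte_mempool_get_contig_blocks() and that the pool's driver implements
the block dequeue operation:

  #define N_BLOCKS 4
  void *first_obj[N_BLOCKS];

  /* on success, each entry points to the first object of a
   * physically and virtually contiguous block */
  if (rte_mempool_get_contig_blocks(mp, first_obj, N_BLOCKS) == 0) {
          /* ... use the blocks ... */
  }
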
Andrew Rybchenko
05912855bc mempool: remove callback to register memory area
The callback is not required any more, since there is a new callback
to populate objects using a provided memory area, which supplies
the same information.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 02:17:43 +02:00
Andrew Rybchenko
fd943c764a mempool: deprecate xmem functions
Move the rte_mempool_xmem_size() code to an internal helper function,
since it is required in two places: the deprecated
rte_mempool_xmem_size() and the non-deprecated
rte_mempool_op_calc_mem_size_default().

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 02:17:41 +02:00
Andrew Rybchenko
ce1f2c61ed mempool: remove callback to get capabilities
The callback was introduced to let generic code know the octeontx
mempool driver's requirements: use a single physically contiguous
memory chunk to store all objects, and align each object address to
the total object size. Now these requirements are met using new
callbacks to calculate the required memory chunk size and to populate
objects using the provided memory chunk.

These capability flags are not used anywhere else.

Restricting capabilities to flags is not generic and is likely to
be insufficient to describe mempool driver features. If required
in the future, an API that returns structured information may be
added.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 02:16:12 +02:00
Andrew Rybchenko
e1174f2d53 mempool: add op to populate objects using provided memory
The callback allows customizing how objects are stored in the
memory chunk. A default implementation of the callback, which simply
places objects one by one, is available.

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 02:03:32 +02:00
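
A sketch of what a driver-side override might look like; the
signature follows the populate op described in this series (hedged),
and the fallback assumes the default implementation mentioned above
is named rte_mempool_op_populate_default():

  static int
  my_populate(struct rte_mempool *mp, unsigned int max_objs,
              void *vaddr, rte_iova_t iova, size_t len,
              rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
  {
          /* a driver could lay objects out differently here;
           * delegate to the default one-by-one placement */
          return rte_mempool_op_populate_default(mp, max_objs, vaddr,
                          iova, len, obj_cb, obj_cb_arg);
  }
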
Andrew Rybchenko
0a48646893 mempool: add op to calculate memory size to be allocated
The size of the memory chunk required to populate mempool objects
depends on how objects are stored in the memory. Different mempool
drivers may have different requirements, and the new operation allows
memory size to be calculated in accordance with driver requirements,
and requirements on minimum memory chunk size and alignment to be
advertised, in a generic way.

Bump ABI version since the patch breaks it.

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
2018-04-24 02:02:58 +02:00
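
A hedged sketch of a driver advertising its chunk requirements
through this op; the parameters follow the description above, and the
default helper rte_mempool_op_calc_mem_size_default() is the one
named elsewhere in this log:

  static ssize_t
  my_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
                   uint32_t pg_shift, size_t *min_chunk_size,
                   size_t *align)
  {
          /* start from the default computation, then tighten the
           * advertised minimum chunk size / alignment as needed */
          ssize_t mem_size = rte_mempool_op_calc_mem_size_default(mp,
                          obj_num, pg_shift, min_chunk_size, align);
          if (mem_size < 0)
                  return mem_size;
          *align = RTE_MAX(*align, (size_t)RTE_CACHE_LINE_SIZE);
          return mem_size;
  }
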
Artem V. Andreev
66e7ba0bad mempool: ensure mempool is initialized before populating
The callback to calculate the required memory area size may require
mempool driver data to be already allocated and initialized.

Signed-off-by: Artem V. Andreev <artem.andreev@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 01:41:01 +02:00
Andrew Rybchenko
4143b12200 mempool: rename flag to control IOVA-contiguous objects
Flag MEMPOOL_F_NO_PHYS_CONTIG is renamed to MEMPOOL_F_NO_IOVA_CONTIG
to follow IO memory contiguity terminology.
MEMPOOL_F_NO_PHYS_CONTIG is kept for backward compatibility but
deprecated.

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 01:39:20 +02:00
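
A sketch of the compatibility-alias pattern described above (the flag
value shown is illustrative):

  #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /* no IOVA-contiguous objs */
  /* deprecated alias kept so existing code keeps compiling */
  #define MEMPOOL_F_NO_PHYS_CONTIG MEMPOOL_F_NO_IOVA_CONTIG
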
Andrew Rybchenko
25e6755056 mempool: fix leak when no objects are populated
Fixes: 84121f197187 ("mempool: store memory chunks in a list")
Cc: stable@dpdk.org

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 01:28:53 +02:00
Anatoly Burakov
66cc45e293 mem: replace memseg with memseg lists
Before, we were aggregating multiple pages into one memseg, so the
number of memsegs was small. Now, each page gets its own memseg,
so the list of memsegs is huge. To accommodate the new memseg list
size and to keep the under-the-hood workings sane, the memseg list
is now not just a single list, but multiple lists. To be precise,
each hugepage size available on the system gets one or more memseg
lists, per socket.

In order to support dynamic memory allocation, we reserve all
memory in advance (unless we're in 32-bit legacy mode, in which
case we do not preallocate memory). That is, we do an anonymous
mmap() of the entire maximum size of memory per hugepage size, per
socket (which is limited to either RTE_MAX_MEMSEG_PER_TYPE pages or
RTE_MAX_MEM_MB_PER_TYPE megabytes worth of memory, whichever is
smaller), split over multiple lists (which are limited to either
RTE_MAX_MEMSEG_PER_LIST memsegs or RTE_MAX_MEM_MB_PER_LIST
megabytes per list, whichever is smaller). There is also a global
limit of CONFIG_RTE_MAX_MEM_MB megabytes, which is mainly used for
32-bit targets to limit the amount of preallocated memory, but can
also be used to place an upper limit on the total amount of VA
memory that can be allocated by a DPDK application.

So, for each hugepage size, we get (by default) up to 128G worth
of memory, per socket, split into chunks of up to 32G in size.
The address space is claimed at the start, in eal_common_memory.c.
The actual page allocation code is in eal_memalloc.c (Linux-only),
and largely consists of copied EAL memory init code.

Pages in the list are also indexed by address. That is, in order
to figure out where a page belongs, one can simply look at the base
address of its memseg list. Similarly, figuring out the IOVA address
of a memzone is a matter of finding the right memseg list, getting
the offset, and dividing by the page size to get the appropriate
memseg.

This commit also removes rte_eal_dump_physmem_layout() call,
according to deprecation notice [1], and removes that deprecation
notice as well.

On 32-bit targets, due to limited VA space, DPDK will no longer
spread memory across different sockets as before. Instead, it will
(by default) allocate all of the memory on the socket where the
master lcore is. To override this behavior, --socket-mem must be
used.

The rest of the changes are really ripple effects from the memseg
change - heap changes, compile fixes, and rewrites to support
fbarray-backed memseg lists. Due to the earlier switch to _walk()
functions, most of the changes are simple fixes; however, some
of the _walk() calls were switched to memseg list walks, where
it made sense to do so.

Additionally, we are switching locks from flock() to fcntl().
Down the line, we will be introducing a single-file segments option,
and we cannot use flock() locks to lock parts of a file. Therefore,
we will use fcntl() locks for legacy mem as well, in case someone is
unfortunate enough to accidentally start a legacy mem primary process
alongside an already running non-legacy mem-based primary process.

[1] http://dpdk.org/dev/patchwork/patch/34002/

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:55:39 +02:00
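
A sketch of the address-indexed lookup described above, with the
field names (base_va, page_sz) used illustratively:

  /* given a memseg list 'msl' whose VA range covers 'addr' */
  size_t off = (uintptr_t)addr - (uintptr_t)msl->base_va;
  size_t ms_idx = off / msl->page_sz; /* index of the page's memseg */
  /* the IOVA of 'addr' is then that memseg's IOVA plus the
   * remaining offset within the page */
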
Anatoly Burakov
8f7335c1be mempool: use memseg walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:49:16 +02:00
Anatoly Burakov
ba0009560c mempool: support new allocation methods
If a user has specified that the zone should have contiguous memory,
add a memzone flag to request contiguous memory. Otherwise, account
for the fact that unless we're in IOVA_AS_VA mode, we cannot
guarantee that the pages would be physically contiguous, so we
calculate the memzone size and alignments as if we were getting
the smallest page size available.

However, for the non-IOVA-contiguous case, the existing mempool size
calculation function doesn't give us the expected results, because it
returns memzone sizes aligned to page size (e.g. a 1MB mempool
may use an entire 1GB page). Therefore, in cases where we weren't
specifically asked to reserve non-contiguous memory, first try
reserving the memzone as IOVA-contiguous, and if that fails, then
retry with page-aligned size and alignment.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:48 +02:00
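
A hedged sketch of the two-step reservation described above, assuming
the contiguity request is expressed with the RTE_MEMZONE_IOVA_CONTIG
memzone flag (variable names illustrative):

  /* first attempt: exact size, IOVA-contiguous */
  mz = rte_memzone_reserve_aligned(mz_name, size, socket_id,
                  mz_flags | RTE_MEMZONE_IOVA_CONTIG, align);
  if (mz == NULL)
          /* fallback: page-aligned size/alignment, no
           * contiguity guarantee */
          mz = rte_memzone_reserve_aligned(mz_name, pg_aligned_size,
                          socket_id, mz_flags, pg_sz);
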
Andrew Rybchenko
ce42ae42bc mempool: fix physical contiguous check
There is no specified dependency between rte_mempool_populate_default()
and rte_mempool_populate_iova(), so the second should not rely on the
fact that the first adds capability flags to the mempool flags.

Fixes: 65cf769f5e6a ("mempool: detect physical contiguous objects")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2018-02-06 00:51:19 +01:00
Olivier Matz
7e92cef514 mempool: use SPDX tags
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2018-02-01 02:27:22 +01:00
Pavan Nikhilesh
be2d94e5eb mempool: fix first memory area notification
Mempool creation needs to be completed before the mempool is
notified to register the memory area.

Fixes: 12b8cc1a7e86 ("mempool: notify memory area to pool")
Cc: stable@dpdk.org

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2018-01-12 18:30:20 +01:00
Thomas Monjalon
c3ee68b879 mempool: rename populate functions to IOVA
The functions rte_mempool_populate_phys() and
rte_mempool_populate_phys_tab() are renamed to
rte_mempool_populate_iova() and rte_mempool_populate_iova_tab().
The deprecated functions are kept as aliases to avoid breaking the API.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-11-06 22:26:34 +01:00
Thomas Monjalon
efd785f994 mempool: rename addresses from physical to IOVA
The struct fields phys_addr_t rte_mempool_objhdr.physaddr and
rte_mempool_memhdr.phys_addr are renamed to rte_iova_t iova.
The deprecated names are kept in an anonymous union to avoid breaking
the API.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-11-06 22:25:55 +01:00
Thomas Monjalon
f17ca7870f memzone: rename address from physical to IOVA
The struct rte_memzone field .phys_addr is renamed to .iova.
The deprecated name is kept in an anonymous union to avoid breaking
the API.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2017-11-06 22:25:44 +01:00
Thomas Monjalon
62196f4e09 mem: rename address mapping function to IOVA
The function rte_mem_virt2phy() is kept and used in functions that
work only with physical addresses.
For all other calls this function is replaced by rte_mem_virt2iova(),
which does a direct mapping (no conversion) in the VA case.

Note: the new function rte_mem_virt2iova() matches the
behaviour implemented in rte_mem_virt2phy() by the commit
680f6c12600f ("mem: honor IOVA mode in virt2phy")

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2017-11-06 22:24:19 +01:00
Hemant Agrawal
e9508b64ca mempool: remove get capability debug log
This does not need to be printed for every mempool call.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Reviewed-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2017-10-12 03:30:26 +02:00
Jianfeng Tan
a7cb2e20d2 mem: remove API to get physical address in dom0
Previously, this API was a wrapper used to obtain the "physical
address", i.e. the MFN address, in dom0.

As xen dom0 support is being removed, this API is no longer necessary.

Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2017-10-09 01:52:37 +02:00
Jianfeng Tan
1950bd7694 xen: remove dependency in libraries
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2017-10-09 01:52:08 +02:00
Santosh Shukla
12b8cc1a7e mempool: notify memory area to pool
A HW pool manager, e.g. the Octeontx SoC, demands that software
program the start and end addresses of the pool. Currently, there is
no such API in the external mempool framework. Introduce the
rte_mempool_ops_register_memory_area API, which lets the HW pool
manager know when the common layer selects a hugepage:
for each hugepage, notify its start/end addresses to the HW pool
manager.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:58:39 +02:00
Santosh Shukla
56d5c1079e mempool: introduce block size alignment flag
Some mempool hardware, like the octeontx/fpa block, demands that the
object start address be aligned to the block size (total_elt_sz).

Introduce the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag.
If this flag is set:
- Align the object start address (vaddr) to a multiple of total_elt_sz.
- Allocate one additional object. The additional object is needed to
  make sure that the requested 'n' objects get correctly populated.

Example (a sketch of the offset computation follows this entry):
- Let's say that we get a memory chunk of size 'x' from the memzone.
- And the application has requested 'n' objects from the mempool.
- Ideally, we would start placing objects at start address 0, up to
  (x - block_sz), for the n objects.
- But the first object address, i.e. 0, is not necessarily aligned to
  block_sz.
- So we derive an offset value 'off' for block_sz alignment purposes.
- That 'off' makes sure that the start address of each object is
  blk_sz-aligned.
- Calculating 'off' may end up sacrificing the first block_sz area of
  the memzone area x. So the total number of objects that fit in the
  pool area would be n-1, which is incorrect behavior.

Therefore we request one additional object (i.e. one block_sz area)
from the memzone when the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag is
set.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:58:39 +02:00
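
A sketch of the offset computation from the example above (names
illustrative):

  /* distance from the chunk start to the first total_elt_sz
   * boundary; may consume up to one block, hence the extra object */
  uintptr_t vaddr = (uintptr_t)chunk_start;
  size_t off = (total_elt_sz - (vaddr % total_elt_sz)) % total_elt_sz;
  /* objects are then placed at vaddr + off,
   * vaddr + off + total_elt_sz, ... */
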
Santosh Shukla
65cf769f5e mempool: detect physical contiguous objects
The memory area containing all the objects must be physically
contiguous.
Introduce the MEMPOOL_F_CAPA_PHYS_CONTIG flag for this use case.

The flag is useful to detect whether the pool area has sufficient
space to fit all objects; if not, return -ENOSPC.
This way, we make sure that all objects within a pool are contiguous.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:58:39 +02:00
Santosh Shukla
3bbc406a87 mempool: get capabilities
Allow a mempool driver to advertise its pool capabilities.
For that purpose, an API (rte_mempool_ops_get_capabilities)
and a ->get_capabilities() handler have been introduced.
Upon a ->get_capabilities() call, the mempool driver advertises
its capabilities in the mempool flags parameter.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:58:39 +02:00
Santosh Shukla
6eac187bff mempool: add flags arg in xmem size and usage
xmem_size and xmem_usage need to know the status of the mempool
flags, so add a 'flags' argument to the _xmem_size()/_xmem_usage()
API.

The following patch will make use of it.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:43:33 +02:00
Santosh Shukla
0cc0f8aaa3 mempool: change flags from int to unsigned int
mp->flags is an int, but the mempool API writes unsigned int
values into 'flags', so fix the 'flags' data type.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:43:26 +02:00
Olivier Matz
ad10c17821 mem: do not advertise physical address when no hugepages
When populating a mempool with a virtual memory area, the mempool
library expects to be able to get the physical address of each page.

When started with --no-huge, the physical addresses may not be available
because the pages are not locked in memory. It sometimes returns
RTE_BAD_PHYS_ADDR, which makes the mempool_populate() function fail.

This was working before the commit cdc242f260e7 ("eal/linux: support
running as unprivileged user"), because rte_mem_virt2phy() was returning
0 instead of RTE_BAD_PHYS_ADDR, which was seen as a valid physical
address.

Since --no-huge is a debug feature that breaks support for physical
drivers, always set physical addresses to RTE_BAD_PHYS_ADDR in
memzones or in rte_mem_virt2phy(), and ensure that the mempool won't
complain in that case.

Fixes: cdc242f260e7 ("eal/linux: support running as unprivileged user")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Jan Blunck <jblunck@infradead.org>
2017-07-04 17:51:22 +02:00
Stephen Hemminger
c5ba278876 lib: remove unnecessary void cast
Remove unnecessary casts of void * pointers to a specific type.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2017-04-11 18:05:10 +02:00
Shreyansh Jain
44cebef721 mempool: fix crash when handler not found
If the stack or ring mempool handler is compiled as a shared
library and not linked into the test binary, a segfault is reported.
This is because the return value of rte_mempool_set_ops_byname() is
not checked in rte_mempool_ops_alloc().

This patch handles the error returned from rte_mempool_set_ops_byname()
when a mempool handler is not found.

Fixes: 449c49b93a6b ("mempool: support handler operations")

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-04-03 18:53:10 +02:00
Olivier Matz
f3bc028909 mempool: remove deprecated count functions
As announced in the deprecation notice, remove these functions.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2017-02-21 12:05:46 +01:00
Wei Zhao
e8453b9f3c mempool: remove redundant socket id assignment
There is a redundant mempool socket_id assignment in the file
rte_mempool.c, in the function rte_mempool_create_empty(): the
statement "mp->socket_id = socket_id;" appears twice, on lines 821
and 824. One of them is redundant, so delete it.

Fixes: 85226f9c526b ("mempool: introduce a function to create an empty pool")

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2016-12-06 15:18:51 +01:00
Nipun Gupta
f5e9ed5c4e mempool: fix leak if populate fails
This patch fixes the issue of the memzone not being freed in case
rte_mempool_populate_phys() fails in rte_mempool_populate_default().

This issue was identified when testing with OVS ~2.6:
- configure the system with low memory (e.g. < 500 MB)
- add bridge and dpdk interfaces
- delete the bridge
- keep repeating the above sequence.

Fixes: d1d914ebbc25 ("mempool: allocate in several memory chunks by default")

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2016-11-12 22:27:09 +01:00
Wei Dai
1e975578bc mempool: fix search of maximum contiguous pages
paddr[i] + pg_sz always points to the start physical address of the
second page after paddr[i], so only up to 2 pages could be combined
for use. With this revision, more than 2 pages can be used.

Fixes: 84121f197187 ("mempool: store memory chunks in a list")

Signed-off-by: Wei Dai <wei.dai@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2016-10-25 23:18:47 +02:00
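
A sketch of the corrected scan (hedged): successive pages are merged
as long as each one starts exactly where the previous one ends:

  unsigned int n = 1;

  while (i + n < pg_num &&
         paddr[i + n - 1] + pg_sz == paddr[i + n])
          n++;
  /* pages i .. i+n-1 form one physically contiguous run */
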
Ferruh Yigit
8e0437473d mempool: fix comments of create functions
Fixes: 85226f9c526b ("mempool: introduce a function to create an empty pool")
Fixes: d1d914ebbc25 ("mempool: allocate in several memory chunks by default")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2016-10-04 11:15:19 +02:00
Weiliang Luo
e15922d75a mempool: fix corruption due to invalid handler
When using rte_mempool_create(), the mempool handler is selected
depending on the flags given by the user:
  - multi-consumer / multi-producer
  - multi-consumer / single-producer
  - single-consumer / multi-producer
  - single-consumer / single-producer

The flags were not properly tested, resulting in the selection of the
sc/sp handler when sc/mp or mc/sp was requested. This can lead to
corruption or crashes because the get/put operations are not atomic.

Fixes: 449c49b93a6b ("mempool: support handler operations")

Signed-off-by: Weiliang Luo <droidluo@gmail.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2016-09-16 16:16:37 +02:00
Amine Kherbouche
f03723017a remove unused ring includes
This patch removes all unused <rte_ring.h> includes.

Signed-off-by: Amine Kherbouche <amine.kherbouche@6wind.com>
2016-09-16 10:16:02 +02:00
Thomas Monjalon
cae54ac47c mempool: fix unsafe removal from list by callback
If a mempool is removed from the list by a callback function
during rte_mempool_walk(), the TAILQ_FOREACH loop will fail unexpectedly.
It is fixed by using the safe version of the loop macro.

Reported-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2016-07-25 22:20:51 +02:00
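
A sketch of the safe-iteration fix (hedged): the _SAFE variant keeps
a lookahead pointer, so the callback may remove the current entry:

  struct rte_mempool *mp, *tmp;

  /* 'tmp' is fetched before the callback runs, so removing 'mp'
   * from the list inside func() is safe */
  TAILQ_FOREACH_SAFE(mp, mempool_list, next, tmp)
          (*func)(mp, arg);
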
Adrien Mazarguil
1cc275ef61 mempool: fix empty structure definition
This commit addresses the following warning reported by clang, which
happens by default, as long as CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is disabled:

 warning: empty struct has size 0 in C, size 1 in C++

C and C++ must use the same size for objects to avoid corruption at
run time.

Fixes: 97e7e685bfcd ("mempool: add structure for object trailers")

Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2016-07-15 17:26:52 +02:00
Olivier Matz
4d0c51a85d mempool: fix creation with Xen dom0
Restore the use of 2M hugepages with Xen Dom0, which was dropped
during the mempool rework.

Fixes: c042ba20674a ("mempool: rework support of Xen dom0")

Reported-by: Huilong Xu <huilongx.xu@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2016-07-11 19:10:09 +02:00
Bruce Richardson
a0fd91cefc mempool: rename functions with confusing names
The mempool_count and mempool_free_count functions behaved contrary
to what their names suggested. The free_count function actually
returned the number of elements allocated from the pool, not the
number unallocated, as its name implied.

Fix this by introducing two new functions to replace the old ones,
* rte_mempool_avail_count to replace rte_mempool_count
* rte_mempool_in_use_count to replace rte_mempool_free_count

In this patch, the new functions are added, and the old ones are marked
as deprecated. All apps and examples that use the old functions are
updated to use the new functions.

Fixes: af75078fece3 ("first public release")

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2016-07-01 12:35:57 +02:00
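
A usage sketch of the two replacement functions; together they
partition the pool's objects:

  unsigned int avail = rte_mempool_avail_count(mp);
  unsigned int in_use = rte_mempool_in_use_count(mp);

  /* invariant: avail + in_use == mp->size */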