Commit Graph

160 Commits

Author SHA1 Message Date
Andrew Rybchenko
25e6755056 mempool: fix leak when no objects are populated
Fixes: 84121f1971 ("mempool: store memory chunks in a list")
Cc: stable@dpdk.org

Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2018-04-24 01:28:53 +02:00
Anatoly Burakov
66cc45e293 mem: replace memseg with memseg lists
Before, we were aggregating multiple pages into one memseg, so the
number of memsegs was small. Now, each page gets its own memseg,
so the list of memsegs is huge. To accommodate the new memseg list
size and to keep the under-the-hood workings sane, the memseg list
is now not just a single list, but multiple lists. To be precise,
each hugepage size available on the system gets one or more memseg
lists, per socket.

In order to support dynamic memory allocation, we reserve all
memory in advance (unless we're in 32-bit legacy mode, in which
case we do not preallocate memory). As in, we do an anonymous
mmap() of the entire maximum size of memory per hugepage size, per
socket (which is limited to either RTE_MAX_MEMSEG_PER_TYPE pages or
RTE_MAX_MEM_MB_PER_TYPE megabytes worth of memory, whichever is the
smaller one), split over multiple lists (which are limited to
either RTE_MAX_MEMSEG_PER_LIST memsegs or RTE_MAX_MEM_MB_PER_LIST
megabytes per list, whichever is the smaller one). There is also
a global limit of CONFIG_RTE_MAX_MEM_MB megabytes, which is mainly
used for 32-bit targets to limit amounts of preallocated memory,
but can be used to place an upper limit on total amount of VA
memory that can be allocated by DPDK application.

So, for each hugepage size, we get (by default) up to 128G worth
of memory, per socket, split into chunks of up to 32G in size.
The address space is claimed at the start, in eal_common_memory.c.
The actual page allocation code is in eal_memalloc.c (Linux-only),
and largely consists of copied EAL memory init code.

Pages in the list are also indexed by address. That is, in order
to figure out where a page belongs, one can simply look at the base
address of its memseg list. Similarly, figuring out the IOVA address
of a memzone is a matter of finding the right memseg list, computing
the offset and dividing it by the page size to get the appropriate memseg.
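
The address-indexed lookup can be illustrated with a small C sketch; the
structure and field names below are illustrative only (the real EAL memseg
list holds an fbarray of memsegs), so treat them as assumptions:

 #include <stdint.h>
 #include <stddef.h>

 /* Illustrative only: not the real EAL structures. */
 struct msl_sketch {
         void *base_va;    /* VA reserved up front for the whole list */
         uint64_t page_sz; /* hugepage size backing this list */
         size_t len;       /* total VA length reserved for the list */
 };

 /* Return the memseg index of 'addr' within this list, or -1. */
 static int
 msl_addr_to_seg_idx(const struct msl_sketch *msl, const void *addr)
 {
         uintptr_t base = (uintptr_t)msl->base_va;
         uintptr_t va = (uintptr_t)addr;

         if (va < base || va >= base + msl->len)
                 return -1; /* address is not in this memseg list */
         return (int)((va - base) / msl->page_sz);
 }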

This commit also removes rte_eal_dump_physmem_layout() call,
according to deprecation notice [1], and removes that deprecation
notice as well.

On 32-bit targets, due to limited VA space, DPDK will no longer
spread memory across different sockets like before. Instead, it will
(by default) allocate all of the memory on the socket where the master
lcore is. To override this behavior, --socket-mem must be used.

The rest of the changes are really ripple effects from the memseg
change - heap changes, compile fixes, and rewrites to support
fbarray-backed memseg lists. Due to earlier switch to _walk()
functions, most of the changes are simple fixes, however some
of the _walk() calls were switched to memseg list walk, where
it made sense to do so.

Additionally, we are also switching locks from flock() to fcntl().
Down the line, we will be introducing single-file segments option,
and we cannot use flock() locks to lock parts of the file. Therefore,
we will use fcntl() locks for legacy mem as well, in case someone is
unfortunate enough to accidentally start legacy mem primary process
alongside an already working non-legacy mem-based primary process.

[1] http://dpdk.org/dev/patchwork/patch/34002/

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:55:39 +02:00
Anatoly Burakov
8f7335c1be mempool: use memseg walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:49:16 +02:00
Anatoly Burakov
ba0009560c mempool: support new allocation methods
If a user has specified that the zone should have contiguous memory,
add a memzone flag to request contiguous memory. Otherwise, account
for the fact that unless we're in IOVA_AS_VA mode, we cannot
guarantee that the pages would be physically contiguous, so we
calculate the memzone size and alignments as if we were getting
the smallest page size available.

However, for the non-IOVA-contiguous case, the existing mempool size
calculation function doesn't give the expected results, because it
returns memzone sizes aligned to page size (e.g. a 1MB mempool
may use an entire 1GB page). Therefore, in cases where we weren't
specifically asked to reserve non-contiguous memory, first try
reserving the memzone as IOVA-contiguous, and if that fails, then
try reserving with page-aligned size/alignment.
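
A minimal sketch of that fallback, assuming the memzone reserve-aligned call
and the RTE_MEMZONE_IOVA_CONTIG flag; it is not the exact mempool code:

 #include <rte_memzone.h>
 #include <rte_common.h>

 /* Try an IOVA-contiguous memzone first; on failure, retry with a
  * page-aligned size/alignment and no contiguity requirement. */
 static const struct rte_memzone *
 reserve_for_mempool(const char *name, size_t len, int socket_id,
                     size_t pg_sz)
 {
         const struct rte_memzone *mz;

         mz = rte_memzone_reserve_aligned(name, len, socket_id,
                                          RTE_MEMZONE_IOVA_CONTIG,
                                          (unsigned int)pg_sz);
         if (mz == NULL) {
                 len = RTE_ALIGN_CEIL(len, pg_sz);
                 mz = rte_memzone_reserve_aligned(name, len, socket_id,
                                                  0, (unsigned int)pg_sz);
         }
         return mz;
 }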

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:48 +02:00
Andrew Rybchenko
4119163cd2 mempool: fix library version in meson build
Fixes: 5b9656b157 ("lib: build with meson")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2018-03-28 00:07:35 +02:00
Andrew Rybchenko
ce42ae42bc mempool: fix physical contiguous check
There is no specified dependency between rte_mempool_populate_default()
and rte_mempool_populate_iova(). So, the second should not rely on the
fact that the first adds capability flags to the mempool flags.

Fixes: 65cf769f5e ("mempool: detect physical contiguous objects")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2018-02-06 00:51:19 +01:00
Olivier Matz
7e92cef514 mempool: use SPDX tags
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2018-02-01 02:27:22 +01:00
Bruce Richardson
6c9457c279 build: replace license text with SPDX tag
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Luca Boccassi <bluca@debian.org>
2018-01-30 21:58:59 +01:00
Bruce Richardson
5b9656b157 lib: build with meson
Add non-EAL libraries to the DPDK build. The compat lib is a special case,
along with the previously-added EAL, but all other libs can be built using
the same set of commands, where the individual meson.build files only need
to specify their dependencies, source files, header files and ABI versions.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
Acked-by: Luca Boccassi <luca.boccassi@gmail.com>
2018-01-30 17:49:16 +01:00
Adrien Mazarguil
0d440d081c lib: fix missing includes in exported headers
Many exported headers rely on definitions found in rte_config.h without
including it, as shown by the following command:

 grep -L '^#include <rte_config.h>' -- \
  $(grep -Rl \
    $(sed -n '/^#define \([^ ]\+\).*$/{s//\1/;H;};${x;s/\n//;s/\n/\\|/g;p;}' \
      build/include/rte_config.h) \
    -- build/include/)

We cannot assume external applications will include rte_config.h on their
own, either directly or through a -include parameter like DPDK does
internally.

This not only causes obvious compilation failures that can be reproduced
with check-includes.sh such as:

 [...]/rte_memory.h:88:43: error: ‘RTE_CACHE_LINE_SIZE’ was not declared in
     this scope
  #define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
                                            ^

It also results in less visible issues, for instance rte_hash_crc.h relying
on RTE_ARCH_X86_64's presence to provide dedicated inline functions.

This patch partially reverts the commit below and adds missing include
lines to the remaining files.

Fixes: f1a7a5c5f4 ("remove include of generated config header")
Cc: stable@dpdk.org

Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2018-01-17 00:31:05 +01:00
Pavan Nikhilesh
be2d94e5eb mempool: fix first memory area notification
Mempool creation needs to be completed before notifying the mempool
handler to register the mempool area.

Fixes: 12b8cc1a7e ("mempool: notify memory area to pool")
Cc: stable@dpdk.org

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2018-01-12 18:30:20 +01:00
Bruce Richardson
369991d997 lib: use SPDX tag for Intel copyright files
Replace the BSD license header with the SPDX tag for files
with only an Intel copyright on them.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2018-01-04 22:41:39 +01:00
Thomas Monjalon
505a6925d8 mempool: increase ABI version
The API and ABI of the mempool library have been changed in 17.11.

Fixes: 02604520b2 ("mempool: remove unused flags argument")
Fixes: 0cc0f8aaa3 ("mempool: change flags from int to unsigned int")
Fixes: 6eac187bff ("mempool: add flags arg in xmem size and usage")

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2017-11-06 23:59:19 +01:00
Thomas Monjalon
c3ee68b879 mempool: rename populate functions to IOVA
The functions rte_mempool_populate_phys() and
rte_mempool_populate_phys_tab() are renamed to
rte_mempool_populate_iova() and rte_mempool_populate_iova_tab().
The deprecated functions are kept as aliases to avoid breaking the API.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-11-06 22:26:34 +01:00
Thomas Monjalon
b0eca11631 mempool: rename address mapping function to IOVA
The function rte_mempool_virt2phy() is renamed to rte_mempool_virt2iova().
The new function has one less parameter because it is unused.
The deprecated function is kept as an alias to avoid breaking the API.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-11-06 22:26:13 +01:00
Thomas Monjalon
efd785f994 mempool: rename addresses from physical to IOVA
The struct fields phys_addr_t rte_mempool_objhdr.physaddr and
rte_mempool_memhdr.phys_addr are renamed to rte_iova_t iova.
The deprecated names are kept in an anonymous union to avoid breaking
the API.
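
The aliasing trick can be sketched as below; the struct and type names are
illustrative, not the exact mempool definitions:

 #include <stdint.h>

 typedef uint64_t rte_iova_t_sketch;

 /* Keep the deprecated field name as an alias of the new one via an
  * anonymous union, so old code referencing .phys_addr still compiles
  * and reads the same storage as .iova. */
 struct memhdr_sketch {
         union {
                 rte_iova_t_sketch iova;      /* new, preferred name */
                 rte_iova_t_sketch phys_addr; /* deprecated alias */
         };
 };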

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-11-06 22:25:55 +01:00
Thomas Monjalon
f17ca7870f memzone: rename address from physical to IOVA
The struct rte_memzone field .phys_addr is renamed to .iova.
The deprecated name is kept in an anonymous union to avoid breaking
the API.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2017-11-06 22:25:44 +01:00
Thomas Monjalon
62196f4e09 mem: rename address mapping function to IOVA
The function rte_mem_virt2phy() is kept and used in functions which
work only with physical addresses.
For all other calls this function is replaced by rte_mem_virt2iova(),
which does a direct mapping (no conversion) in the VA case.

Note: the new rte_mem_virt2iova() function matches the
behaviour implemented in rte_mem_virt2phy() by the commit
680f6c1260 ("mem: honor IOVA mode in virt2phy")
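
A hedged sketch of that VA-mode behaviour, assuming rte_eal_iova_mode() from
the same IOVA rework; it is not the exact EAL implementation:

 #include <stdint.h>
 #include <rte_eal.h>
 #include <rte_memory.h>

 static rte_iova_t
 virt2iova_sketch(const void *virt)
 {
         /* in VA mode the IOVA is the virtual address itself */
         if (rte_eal_iova_mode() == RTE_IOVA_VA)
                 return (uintptr_t)virt;
         /* otherwise fall back to the physical-address lookup */
         return rte_mem_virt2phy(virt);
 }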

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2017-11-06 22:24:19 +01:00
Olivier Matz
cbc12b0a96 mk: do not generate LDLIBS from directory dependencies
The list of libraries in LDLIBS was generated from the DEPDIRS-xyz
variable. This is valid when the subdirectory name matches the library
name, but it's not always the case, especially for PMDs.

This patch removes the feature and explicitly adds the proper
libraries in LDLIBS.

Some DEPDIRS-xyz variables become useless; remove them.

Reported-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2017-10-24 02:14:57 +02:00
Hemant Agrawal
e9508b64ca mempool: remove get capability debug log
This does not need to be printed for every mempool call.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Reviewed-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2017-10-12 03:30:26 +02:00
Jianfeng Tan
a7cb2e20d2 mem: remove API to get physical address in dom0
Previously, to get the MFN address in dom0, this API was a wrapper to
obtain the "physical address".

As xen dom0 support has been removed, this API is no longer necessary.

Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2017-10-09 01:52:37 +02:00
Jianfeng Tan
1950bd7694 xen: remove dependency in libraries
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2017-10-09 01:52:08 +02:00
Santosh Shukla
12b8cc1a7e mempool: notify memory area to pool
HW pool managers, e.g. the Octeontx SoC, demand that software program the
start and end addresses of the pool. Currently, there is no such API in the
external mempool. Introduce the rte_mempool_ops_register_memory_area API,
which lets the HW pool manager know when the common layer selects a hugepage:
for each hugepage, notify its start/end address to the HW pool manager.
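
A driver-side handler for this notification might look like the sketch
below; the handler signature is an assumption based on the description
above, and mydrv_hw_set_range() is a hypothetical device call:

 #include <rte_mempool.h>
 #include <rte_common.h>

 /* Hypothetical device call: program pool start/end into the HW. */
 int mydrv_hw_set_range(phys_addr_t start, phys_addr_t end);

 static int
 mydrv_register_memory_area(const struct rte_mempool *mp __rte_unused,
                            char *vaddr __rte_unused,
                            phys_addr_t paddr, size_t len)
 {
         /* for each hugepage, notify its start/end address to the HW */
         return mydrv_hw_set_range(paddr, paddr + len);
 }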

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:58:39 +02:00
Santosh Shukla
56d5c1079e mempool: introduce block size alignment flag
Some mempool hardware, like the octeontx/fpa block, demands that the
object start address be aligned to the block size (/total_elem_sz).

Introduce a MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag.
If this flag is set:
- Align the object start address (vaddr) to a multiple of total_elt_sz.
- Allocate one additional object. The additional object is needed to make
  sure that the requested 'n' objects get correctly populated.

Example:
- Let's say we get a memory chunk of size 'x' from the memzone.
- And the application has requested 'n' objects from the mempool.
- Ideally, we would start placing objects at start address 0 up to
  (x - block_sz) for the n objects.
- But the first object address, i.e. 0, is not necessarily aligned to
  block_sz.
- So we derive an offset value 'off' for block_sz alignment purposes.
- That 'off' makes sure that the start address of each object is blk_sz
  aligned.
- Calculating 'off' may end up sacrificing the first block_sz area of the
  memzone area x, so the total number of objects that can fit in the
  pool area is n-1, which is incorrect behavior.

Therefore we request one additional object (/block_sz area) from the memzone
when the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag is set.
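
The offset arithmetic in the example can be sketched as follows
(illustrative only, not the actual mempool population code):

 #include <stdint.h>
 #include <stddef.h>

 /* Round the chunk start up to the next multiple of total_elt_sz; the
  * skipped bytes are the "sacrificed" area that costs one object and
  * motivates requesting one extra object. */
 static uintptr_t
 first_aligned_obj(uintptr_t chunk_start, size_t total_elt_sz)
 {
         uintptr_t off = (total_elt_sz - (chunk_start % total_elt_sz))
                         % total_elt_sz;
         return chunk_start + off;
 }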

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:58:39 +02:00
Santosh Shukla
65cf769f5e mempool: detect physical contiguous objects
The memory area containing all the objects must be physically
contiguous.
Introduce the MEMPOOL_F_CAPA_PHYS_CONTIG flag for such use-cases.

The flag is useful to detect whether the pool area has sufficient space
to fit all objects. If it does not, return -ENOSPC.
This way, we make sure that all objects within a pool are contiguous.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:58:39 +02:00
Santosh Shukla
3bbc406a87 mempool: get capabilities
Allow the mempool driver to advertise its pool capabilities.
For that purpose, an API (rte_mempool_ops_get_capabilities)
and a ->get_capabilities() handler have been introduced.
- Upon a ->get_capabilities() call, the mempool driver advertises
its capabilities in the mempool flags param.
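
A driver-side ->get_capabilities() handler could then look like the sketch
below; the exact handler signature is an assumption here:

 #include <rte_mempool.h>
 #include <rte_common.h>

 static int
 mydrv_get_capabilities(const struct rte_mempool *mp __rte_unused,
                        unsigned int *flags)
 {
         /* advertise this driver's capabilities into the flags param */
         *flags |= MEMPOOL_F_CAPA_PHYS_CONTIG |
                   MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS;
         return 0;
 }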

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:58:39 +02:00
Santosh Shukla
6eac187bff mempool: add flags arg in xmem size and usage
xmem_size and xmem_usage need to know the status of the mempool flags,
so add a 'flags' arg to the _xmem_size/usage() API.

A following patch will make use of it.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:43:33 +02:00
Santosh Shukla
0cc0f8aaa3 mempool: change flags from int to unsigned int
mp->flags is an int while the mempool API writes unsigned int
values into 'flags', so fix the 'flags' data type.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:43:26 +02:00
Santosh Shukla
02604520b2 mempool: remove unused flags argument
* Remove redundant 'flags' API description from
  - __mempool_generic_put
  - __mempool_generic_get
  - rte_mempool_generic_put
  - rte_mempool_generic_get

* Remove unused 'flags' argument from
  - rte_mempool_generic_put
  - rte_mempool_generic_get

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-10-06 21:42:33 +02:00
Olivier Matz
ad10c17821 mem: do not advertise physical address when no hugepages
When populating a mempool with a virtual memory area, the mempool
library expects to be able to get the physical address of each page.

When started with --no-huge, the physical addresses may not be available
because the pages are not locked in memory. rte_mem_virt2phy() sometimes
returns RTE_BAD_PHYS_ADDR, which makes the mempool_populate() function fail.

This was working before the commit cdc242f260 ("eal/linux: support
running as unprivileged user"), because rte_mem_virt2phy() was returning
0 instead of RTE_BAD_PHYS_ADDR, which was seen as a valid physical
address.

Since --no-huge is a debug function that breaks the support of physical
drivers, always set physical addresses to RTE_BAD_PHYS_ADDR in memzones
or in rte_mem_virt2phy(), and ensure that mempool won't complain in that
case.

Fixes: cdc242f260 ("eal/linux: support running as unprivileged user")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Jan Blunck <jblunck@infradead.org>
2017-07-04 17:51:22 +02:00
Jerin Jacob
c0583d98a9 eal: introduce macro for always inline
Different drivers use internal macros like force_inline for the compiler's
always-inline feature.
Standardize this through the __rte_always_inline macro.

Verified the change by comparing the output binary files.
No difference was found in the output binary with this change.
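
For reference, the standardized macro boils down to the GCC always_inline
attribute, along these lines (the canonical definition lives in
rte_common.h):

 #include <stdint.h>

 #define __rte_always_inline inline __attribute__((always_inline))

 /* usage: force the compiler to inline small hot-path helpers */
 static __rte_always_inline uint32_t
 add_u32(uint32_t a, uint32_t b)
 {
         return a + b;
 }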

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2017-06-06 17:21:55 +02:00
Stephen Hemminger
c5ba278876 lib: remove unnecessary void cast
Remove unnecessary casts of void * pointers to a specific type.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2017-04-11 18:05:10 +02:00
Shreyansh Jain
1263b426ff mempool: move stack handler as a driver
Moved from lib/librte_mempool, the stack mempool handler is now an
independent driver.
Shared builds now need to link in librte_mempool_stack for the
"stack" mempool handler.

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-04-03 19:45:45 +02:00
Shreyansh Jain
9a8e9b57f5 mempool: move ring handler as a driver
Moved from lib/librte_mempool, the ring mempool handler is now an
independent driver.
Shared builds now need to add librte_mempool_ring for:
* ring_mp_mc
* ring_sp_sc
* ring_sp_mc
* ring_mp_sc

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-04-03 19:45:45 +02:00
Shreyansh Jain
44cebef721 mempool: fix crash when handler not found
If the stack or ring mempool handler is compiled as a shared
library and not linked in with the test binary, a segfault is reported.
This is because the return value of rte_mempool_set_ops_byname is not
checked in rte_mempool_ops_alloc.

This patch handles the error returned from rte_mempool_set_ops_byname
when a mempool handler is not found.
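
A minimal sketch of the defensive pattern, using the public setter (the
error code mentioned is indicative):

 #include <rte_mempool.h>

 /* Select a mempool handler by name and fail cleanly if the handler is
  * not registered (e.g. its shared library was not loaded). */
 static int
 select_pool_ops(struct rte_mempool *mp, const char *ops_name)
 {
         int ret = rte_mempool_set_ops_byname(mp, ops_name, NULL);

         if (ret != 0)
                 return ret; /* typically -EINVAL when not found */
         return 0;
 }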

Fixes: 449c49b93a ("mempool: support handler operations")

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-04-03 18:53:10 +02:00
Andriy Berestovskyy
240e992f74 mempool: fix typos
Signed-off-by: Andriy Berestovskyy <andriy.berestovskyy@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-04-03 18:53:10 +02:00
Gage Eads
8477f2f50d mempool: update non-EAL thread note
Commit 30e6399892 ("mempool: support non-EAL thread") added the
capability for non-EAL threads to use the mempool library. This commit
removes the note indicating that the mempool library cannot be used safely
by non-EAL threads, and replaces it with a more up-to-date note.

Signed-off-by: Gage Eads <gage.eads@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-04-03 18:53:10 +02:00
Bruce Richardson
ecaed092b6 ring: return remaining entry count when dequeuing
Add an extra parameter to the ring dequeue burst/bulk functions so that
those functions can optionally return the number of objects remaining in the
ring. This information can be used by applications in a number of ways;
for instance, with single-consumer queues, it provides a max
dequeue size which is guaranteed to work.
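
A hedged usage sketch of the new output parameter; process_obj() is a
hypothetical application callback:

 #include <rte_ring.h>
 #include <rte_common.h>

 void process_obj(void *obj); /* hypothetical application callback */

 /* Drain a ring in bursts; 'avail' reports how many entries remain
  * after each dequeue, so a single consumer can keep going without an
  * extra rte_ring_count() call. */
 static void
 drain_ring(struct rte_ring *r)
 {
         void *objs[32];
         unsigned int i, n, avail;

         do {
                 n = rte_ring_dequeue_burst(r, objs, RTE_DIM(objs), &avail);
                 for (i = 0; i < n; i++)
                         process_obj(objs[i]);
         } while (avail > 0);
 }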

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-03-29 22:32:20 +02:00
Bruce Richardson
14fbffb0aa ring: return free space when enqueuing
Add an extra parameter to the ring enqueue burst/bulk functions so that
those functions can optionally return the amount of free space in the
ring. This information can be used by applications in a number of ways,
for instance, with single-producer queues, it provides a max
enqueue size which is guaranteed to work. It can also be used to
implement watermark functionality in apps, replacing the older
functionality with a more flexible version, which enables apps to
implement multiple watermark thresholds, rather than just one.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-03-29 22:32:04 +02:00
Bruce Richardson
cfa7c9e6fc ring: make bulk and burst return values consistent
The bulk functions for rings return 0 when all elements are enqueued and a
negative value when there is no space. Change that to make them consistent
with the burst functions by returning the number of elements
enqueued/dequeued, i.e. 0 or N.
This change also allows the return value from enq/deq to be used directly
without a branch for error checking.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-03-29 22:25:37 +02:00
Olivier Matz
feb9f680cd mk: optimize directory dependencies
Before this patch, the management of dependencies between directories
had several issues:

- the generation of .depdirs, done at configuration time, is slow: it can take
  more than one minute on some slow targets (usually ~10s on a standard
  PC without -j).

- for instance, it is possible to express a dependency like:
  - app/foo depends on lib/librte_foo
  - and lib/librte_foo depends on app/bar
  But this won't work because the directories are traversed with a
  depth-first algorithm, so we have to choose between doing 'app' before
  or after 'lib'.

- the script depdirs-rule.sh is too complex.

- we cannot use "make -d" for debug, because the output of make is used for
  the generation of .depdirs.

This patch moves the DEPDIRS-* variables in the upper Makefile, making
the dependencies much easier to calculate. A DEPDIRS variable is still
used to process library dependencies in LDLIBS.

After this commit, "make config" is almost immediate.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Robin Jarry <robin.jarry@6wind.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
2017-03-27 23:28:43 +02:00
Olivier Matz
93092a5610 mempool: remove deprecated get and put functions
As announced in the deprecation notice, remove the functions for
single/multi producer/consumer enqueue/dequeue.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2017-02-21 12:05:46 +01:00
Olivier Matz
f3bc028909 mempool: remove deprecated count functions
As announced in the deprecation notice, remove these functions.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2017-02-21 12:05:46 +01:00
Olivier Matz
e09ff22d53 mempool: fix stack handler dequeue
The return value of the stack handler is wrong: it should be 0 on
success, not the number of objects dequeued.

This could lead to memory leaks depending on how the caller checks the
return value (ret < 0 or ret != 0). This was also breaking autotests
with debug enabled, because the debug cookies are only updated when the
function returns 0, so the cookies were not updated, leading to
an abort().
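
The convention the fix restores can be sketched as follows; this is a
simplified stand-alone stack, not the actual handler code, and the error
code is illustrative:

 #include <errno.h>

 /* Dequeue 'n' objects from a simple LIFO: return 0 on success (all n
  * objects delivered) or a negative errno, never the object count. */
 static int
 stack_dequeue_sketch(void **obj_table, void **stack_objs,
                      unsigned int *stack_len, unsigned int n)
 {
         unsigned int i, len = *stack_len;

         if (len < n)
                 return -ENOENT; /* fail without taking any objects */
         for (i = 0; i < n; i++)
                 obj_table[i] = stack_objs[--len];
         *stack_len = len;
         return 0; /* success is 0, not n */
 }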

Fixes: 295a530b0844 ("mempool: add stack mempool handler")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2017-01-29 23:38:33 +01:00
Wenfeng Liu
454a0a7009 mempool: use cache in single producer or consumer mode
Currently we check the mempool flags when we put/get objects to/from the
mempool. However, this makes the cache useless in the SC|SP,
SC|MP, and MC|SP cases.
This patch makes the cache available in the above cases and improves
performance.

Signed-off-by: Wenfeng Liu <liuwf@arraynetworks.com.cn>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2017-01-13 16:38:09 +01:00
Yong Wang
247bde5231 doc: fix typos in code comments
Signed-off-by: Yong Wang <wang.yong19@zte.com.cn>
Acked-by: John McNamara <john.mcnamara@intel.com>
2016-12-06 15:25:01 +01:00
Olivier Matz
8ee94d8aee mempool: fix API documentation
A previous commit changed the local_cache table into a
pointer, reducing the size of the rte_mempool structure.

Fix the API comment of rte_mempool_create() related to
this modification.

Fixes: 213af31e09 ("mempool: reduce structure size if no cache needed")

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
2016-12-06 15:18:51 +01:00
Wei Zhao
580db2a5ce mempool: remove a redundant word in comment
There is a redundant word "for" in a comment line of the
file rte_mempool.h, after the definition of RTE_MEMPOOL_OPS_NAMESIZE.
The word "for" appears twice, in lines 359 and 360. One of them is
redundant, so delete it.

Fixes: 449c49b93a ("mempool: support handler operations")

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2016-12-06 15:18:51 +01:00
Wei Zhao
e8453b9f3c mempool: remove redundant socket id assignment
There is a redundant mempool socket_id assignment in the
file rte_mempool.c, in the function rte_mempool_create_empty. The
statement "mp->socket_id = socket_id;" appears twice, in lines 821
and 824. One of them is redundant, so delete it.

Fixes: 85226f9c52 ("mempool: introduce a function to create an empty pool")

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2016-12-06 15:18:51 +01:00
Nipun Gupta
f5e9ed5c4e mempool: fix leak if populate fails
This patch fixes the issue of the memzone not being freed in case
rte_mempool_populate_phys fails in rte_mempool_populate_default.

This issue was identified when testing with OVS ~2.6:
- configure the system with low memory (e.g. < 500 MB)
- add bridge and dpdk interfaces
- delete the bridge
- keep repeating the above sequence.
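
A simplified sketch of the fix (the memzone free callback normally passed by
the library is omitted here, and the wrapper function name is illustrative):

 #include <rte_mempool.h>
 #include <rte_memzone.h>

 static int
 populate_from_mz(struct rte_mempool *mp, const struct rte_memzone *mz)
 {
         int ret = rte_mempool_populate_phys(mp, (char *)mz->addr,
                                             mz->phys_addr, mz->len,
                                             NULL, NULL);

         if (ret < 0)
                 rte_memzone_free(mz); /* the fix: don't leak the memzone */
         return ret;
 }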

Fixes: d1d914ebbc ("mempool: allocate in several memory chunks by default")

Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2016-11-12 22:27:09 +01:00