Commit Graph

12699 Commits

Anatoly Burakov
11d2f002e0 crypto/dpaa_sec: use iova2virt instead of memseg iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:54:42 +02:00
Anatoly Burakov
00bc40e265 bus/fslmc: use iova2virt instead of memseg iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:54:40 +02:00
Anatoly Burakov
982aa8af52 bus/dpaa: use iova2virt instead of memseg iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:54:37 +02:00
Anatoly Burakov
eca28edd98 mem: add iova2virt function
This is a reverse lookup of PA to VA. Using this will make
other code less dependent on the internals of the mem API.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:54:00 +02:00
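
A minimal sketch of how a driver might use the reverse lookup added by the
commit above, assuming the function is exposed as rte_mem_iova2virt() in
<rte_memory.h> as in the released API:

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_memory.h>

    /* Reverse lookup: find the VA backing a given IOVA, if any. */
    static void
    dump_iova_mapping(rte_iova_t iova)
    {
        void *va = rte_mem_iova2virt(iova);

        if (va == NULL)
            printf("IOVA 0x%" PRIx64 " is not backed by DPDK memory\n",
                   (uint64_t)iova);
        else
            printf("IOVA 0x%" PRIx64 " maps to VA %p\n",
                   (uint64_t)iova, va);
    }
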
Anatoly Burakov
746c346d76 net/virtio: use memseg contig walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:53:57 +02:00
Anatoly Burakov
552afc420a mem: add contig walk function
This function is meant to walk over the first segment of each
VA-contiguous group of memsegs.

For future users of this function, this is done so that
there is less dependency on internals of the mem API and less
noise in later change sets.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:53:38 +02:00
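
For illustration, a sketch of walking VA-contiguous groups with this API.
The callback arguments used here (memseg list pointer, first memseg, length
of the whole contiguous group) follow the API as it settled in the released
headers and are an assumption for this exact point in the series:

    #include <stdio.h>
    #include <rte_memory.h>

    static int
    print_contig_group(const struct rte_memseg_list *msl,
                       const struct rte_memseg *ms, size_t len, void *arg)
    {
        unsigned int *count = arg;

        (*count)++;
        printf("contig group at %p, len %zu (page size %zu)\n",
               ms->addr, len, (size_t)msl->page_sz);
        return 0; /* a non-zero return would stop the walk */
    }

    static unsigned int
    count_contig_groups(void)
    {
        unsigned int count = 0;

        rte_memseg_contig_walk(print_contig_group, &count);
        return count;
    }
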
Anatoly Burakov
20681b17ba vfio/spapr: use memseg walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:53:36 +02:00
Anatoly Burakov
12167c0cc2 vfio/type1: use memseg walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:53:08 +02:00
Anatoly Burakov
ced746fc31 test: use memseg walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:53:02 +02:00
Anatoly Burakov
8f7335c1be mempool: use memseg walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:49:16 +02:00
Anatoly Burakov
221b67bca0 eal: use memseg walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:48:15 +02:00
Anatoly Burakov
8594a2026b net/mlx5: use memseg walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:48:12 +02:00
Anatoly Burakov
7411d03249 bus/pci: use memseg walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:48:10 +02:00
Anatoly Burakov
49a28baed2 bus/fslmc: use memseg walk instead of iteration
Reduce dependency on internal details of EAL memory subsystem, and
simplify code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:48:06 +02:00
Anatoly Burakov
2b9f98d8a5 mem: add function to walk all memsegs
For code that might need to iterate over the list of allocated
segments, using this API will make it more resilient to
internal API changes and will prevent the same iteration code
from being copied over and over again.

Additionally, down the line there will be locking implemented,
so users of this API will not need to care about locking
either.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:47:25 +02:00
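
A sketch of the walk API in use, here summing the length of all allocated
segments; as with the contig walk shown earlier, the exact callback
arguments are taken from the released headers and are an assumption for
this point in the series:

    #include <stdint.h>
    #include <rte_memory.h>

    static int
    sum_memseg_len(const struct rte_memseg_list *msl,
                   const struct rte_memseg *ms, void *arg)
    {
        uint64_t *total = arg;

        (void)msl;
        *total += ms->len;
        return 0; /* keep walking */
    }

    static uint64_t
    total_memseg_len(void)
    {
        uint64_t total = 0;

        rte_memseg_walk(sum_memseg_len, &total);
        return total;
    }
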
Anatoly Burakov
ba0009560c mempool: support new allocation methods
If a user has specified that the zone should have contiguous memory,
add a memzone flag to request contiguous memory. Otherwise, account
for the fact that, unless we're in IOVA_AS_VA mode, we cannot
guarantee that the pages will be physically contiguous, so we
calculate the memzone size and alignment as if we were getting
the smallest page size available.

However, for the non-IOVA-contiguous case, the existing mempool size
calculation function doesn't give us the expected results, because it
returns memzone sizes aligned to page size (e.g. a 1MB mempool
may use an entire 1GB page). Therefore, in cases where we weren't
specifically asked to reserve non-contiguous memory, first try
reserving a memzone as IOVA-contiguous, and if that fails,
fall back to reserving with page-aligned size/alignment.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:48 +02:00
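
A sketch of the reservation strategy this message describes: prefer an
IOVA-contiguous memzone and fall back to a page-aligned, possibly
non-contiguous one. The flag name RTE_MEMZONE_IOVA_CONTIG is the one
introduced by the memzone commit further down this series; sizes and
alignments here are illustrative:

    #include <rte_common.h>
    #include <rte_memzone.h>

    static const struct rte_memzone *
    reserve_pool_zone(const char *name, size_t len, int socket_id,
                      size_t pg_sz)
    {
        const struct rte_memzone *mz;

        /* Preferred: a single IOVA-contiguous block. */
        mz = rte_memzone_reserve_aligned(name, len, socket_id,
                                         RTE_MEMZONE_IOVA_CONTIG,
                                         RTE_CACHE_LINE_SIZE);
        if (mz != NULL)
            return mz;

        /* Fallback: no contiguity guarantee; round size and alignment
         * to the page size so the pool can be populated page by page. */
        len = RTE_ALIGN_CEIL(len, pg_sz);
        return rte_memzone_reserve_aligned(name, len, socket_id, 0,
                                           (unsigned int)pg_sz);
    }
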
Anatoly Burakov
96b4a66487 net/vmxnet3: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:45 +02:00
Anatoly Burakov
25553d44c4 net/virtio: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Venkatesh Srinivas <venkateshs@google.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:43 +02:00
Anatoly Burakov
17a4cd24a5 net/qede: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Harish Patil <harish.patil@cavium.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:42 +02:00
Anatoly Burakov
7100076182 net/i40e: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:40 +02:00
Anatoly Burakov
46e4fb122f net/enic: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: John Daley <johndale@cisco.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:39 +02:00
Anatoly Burakov
a58c59716c net/ena: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:38 +02:00
Anatoly Burakov
72c280fac2 net/cxgbe: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:36 +02:00
Anatoly Burakov
5da1f2d8eb net/bnxt: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:35 +02:00
Anatoly Burakov
66291ed828 net/bnx2x: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:33 +02:00
Anatoly Burakov
12899df9a4 net/avf: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:30 +02:00
Anatoly Burakov
2d84772bf8 crypto/qat: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

Also, remove the weird page alignment code.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:45:25 +02:00
Anatoly Burakov
74dbbcd6f8 ethdev: use contiguous allocation for DMA memory
All hardware drivers should allocate IOVA-contiguous
memzones for their hardware resources.

This fixes the following drivers in one go:

grep -Rl rte_eth_dma_zone_reserve drivers/

drivers/net/avf/avf_rxtx.c
drivers/net/thunderx/nicvf_ethdev.c
drivers/net/e1000/igb_rxtx.c
drivers/net/e1000/em_rxtx.c
drivers/net/fm10k/fm10k_ethdev.c
drivers/net/vmxnet3/vmxnet3_rxtx.c
drivers/net/liquidio/lio_rxtx.c
drivers/net/i40e/i40e_rxtx.c
drivers/net/sfc/sfc.c
drivers/net/ixgbe/ixgbe_rxtx.c
drivers/net/nfp/nfp_net.c

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:44:53 +02:00
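
As a concrete example of the shared helper being fixed here, a PMD obtains
descriptor-ring memory roughly like this; the rte_eth_dma_zone_reserve()
signature and header are reproduced from the driver-facing ethdev API of
this era and should be treated as an assumption:

    #include <rte_ethdev_driver.h>

    static const struct rte_memzone *
    alloc_rx_ring(struct rte_eth_dev *dev, uint16_t queue_id,
                  uint16_t nb_desc, size_t desc_sz, int socket_id)
    {
        /* The helper now reserves an IOVA-contiguous memzone internally,
         * so every driver in the list above is covered at once. */
        return rte_eth_dma_zone_reserve(dev, "rx_ring", queue_id,
                                        (size_t)nb_desc * desc_sz,
                                        RTE_CACHE_LINE_SIZE, socket_id);
    }
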
Anatoly Burakov
23fa86e529 memzone: enable IOVA-contiguous reserving
This adds a new flag to request a reserved memzone to be
IOVA-contiguous. This is useful for allocating hardware resources
like NIC rings/queues etc. For now, hugepage memory is always
contiguous, but we need to prepare the drivers for the switch.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:44:05 +02:00
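
The simplest use of the new flag, hedged on the name RTE_MEMZONE_IOVA_CONTIG
used by the released API: reserve a small IOVA-contiguous block for a
hardware ring, getting NULL back if no such block can be found:

    #include <rte_memory.h>
    #include <rte_memzone.h>

    /* Reserve 4 KB of IOVA-contiguous memory for a hardware ring;
     * returns NULL if no contiguous block is available. */
    static const struct rte_memzone *
    reserve_hw_ring(void)
    {
        return rte_memzone_reserve("hw_ring", 4096, SOCKET_ID_ANY,
                                   RTE_MEMZONE_IOVA_CONTIG);
    }
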
Anatoly Burakov
5ea85289a9 malloc: support contiguous allocation
No major changes, just add some checks in a few key places, and
a new parameter to pass around.

Also, add a function to check a malloc element for physical
contiguousness. For now, assume hugepage memory is always
contiguous, while non-hugepage memory will be checked.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:43:55 +02:00
Anatoly Burakov
d1162b77c9 malloc: replace panics with error messages
We shouldn't ever panic in libraries, let alone in EAL, so
replace all panic messages with error messages.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:43:50 +02:00
Anatoly Burakov
883179b493 malloc: make free return resulting element
This will be needed because we need to know how big the new
empty space is, to check whether we can free some pages as
a result.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:43:41 +02:00
Anatoly Burakov
0a59238f80 malloc: make free list removal function public
We will need to be able to remove entries from heaps' free lists
during certain events, such as rollbacks, or when freeing
memory to the system (where a previous element disappears and
thus can no longer be in the free list).

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:41:39 +02:00
Anatoly Burakov
f21aa4ec9d malloc: make join elements function public
Down the line, we will need to join free segments to determine
whether the resulting contiguous free space is bigger than a
page size, allowing some memory to be freed back to the system.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:38:08 +02:00
Anatoly Burakov
180698a82c test: add command to dump malloc heap contents
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:37:57 +02:00
Anatoly Burakov
30bc6bf0d5 malloc: add function to dump heap contents
The malloc heap is now a doubly linked list, so it is now possible
to iterate over each malloc element regardless of its state.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:37:53 +02:00
Anatoly Burakov
bb372060da malloc: make heap a doubly-linked list
As we are preparing for dynamic memory allocation, we need to be
able to handle holes in our malloc heap, hence we're switching to
a doubly linked list and preparing the infrastructure to support it.

Since our heap is now aware of where its first and last elements
are, there is no longer any need to have a dummy element at the end
of each heap, so get rid of that as well. Instead, let the insert,
remove, join and split operations handle end-of-list conditions
automatically.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:37:46 +02:00
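
A generic illustration (not DPDK's actual structures) of the point above:
once elements carry prev/next links and the heap tracks its first and last
elements, the insert path handles the end-of-list case itself and no dummy
tail element is needed:

    #include <stddef.h>

    struct elem {
        struct elem *prev;
        struct elem *next;
        size_t size;
    };

    struct heap {
        struct elem *first;
        struct elem *last;
    };

    static void
    heap_insert_after(struct heap *h, struct elem *prev, struct elem *e)
    {
        e->prev = prev;

        if (prev == NULL) {        /* insert at head */
            e->next = h->first;
            h->first = e;
        } else {
            e->next = prev->next;
            prev->next = e;
        }

        if (e->next != NULL)
            e->next->prev = e;
        else
            h->last = e;           /* new end of list, no dummy element */
    }
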
Anatoly Burakov
b5dd92226f malloc: move all locking to heap
Down the line, we will need to do everything from the heap, as any
alloc or free may trigger allocating or freeing OS memory, which
would involve growing or shrinking the heap.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:37:39 +02:00
Anatoly Burakov
b7cc54187e mem: move virtual area function in common directory
Move get_virtual_area out of linuxapp EAL memory and make it
common to EAL, so that other code can reserve virtual areas
as well.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
2018-04-11 19:33:06 +02:00
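
An illustrative sketch (not the EAL code itself) of what such a
virtual-area helper boils down to on Linux: map anonymous, inaccessible
pages purely to hold an address range that can later be backed by real
memory:

    #include <stddef.h>
    #include <sys/mman.h>

    static void *
    reserve_virtual_area(void *requested_addr, size_t size)
    {
        void *addr = mmap(requested_addr, size, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        return addr == MAP_FAILED ? NULL : addr;
    }
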
Anatoly Burakov
bef5a2d629 vfio: do not needlessly check for IOVA mode
We already set IOVA addresses of memsegs and memzones to VA
addresses during initialization, so we don't need to check
whether we're in RTE_IOVA_VA mode anywhere else.

Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2018-04-11 02:18:19 +02:00
Anatoly Burakov
634d549c9c bus/fslmc: do not needlessly check for IOVA mode
We already set IOVA addresses of memsegs and memzones to VA
addresses during initialization, so we don't need to check
whether we're in RTE_IOVA_VA mode anywhere else.

Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2018-04-11 02:18:03 +02:00
Anatoly Burakov
048303b6f3 mem: do not use physical addresses in IOVA as VA mode
We already use VA addresses for IOVA purposes everywhere if we're in
RTE_IOVA_VA mode:
 1) rte_malloc_virt2phy()/rte_malloc_virt2iova() always return VA addresses
 2) Because of 1), memzone's IOVA is set to VA address on reserve
 3) Because of 2), mempool's IOVA addresses are set to VA addresses

The only place where actual physical addresses are stored is in memsegs at
init time, but we're not using them anywhere, and there is no external API
to get those addresses (aside from manually iterating through memsegs), nor
should anyone care about them in RTE_IOVA_VA mode.

So, fix EAL initialization to allocate VA-contiguous segments at the start
without regard for physical addresses (as if they weren't available), and
use VA to set final IOVA addresses for all pages.

Fixes: 62196f4e09 ("mem: rename address mapping function to IOVA")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2018-04-11 02:15:24 +02:00
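
A small check of the property the commit above relies on, using
rte_malloc_virt2iova() (named in the message) together with
rte_eal_iova_mode(): in RTE_IOVA_VA mode, the IOVA reported for malloc'd
memory is simply its virtual address:

    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_malloc.h>

    static int
    iova_is_va(void)
    {
        void *p = rte_malloc(NULL, 64, 0);
        int ok = 0;

        if (p != NULL && rte_eal_iova_mode() == RTE_IOVA_VA)
            ok = (rte_malloc_virt2iova(p) == (rte_iova_t)(uintptr_t)p);

        rte_free(p);
        return ok;
    }
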
Anatoly Burakov
72b75134b8 app/crypto-perf: fix IOVA translation
IOVA addresses should be found by calling rte_virt2iova() as
opposed to rte_virt2phy(), as the physical address may not be
equal to the IOVA address.

Fixes: 2eb6a1a3e5 ("app/crypto-perf: fix crypto op init")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
2018-04-11 02:12:16 +02:00
Hemant Agrawal
9c75ffc4e6 usertools: change to SPDX license identifier
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
2018-04-11 01:48:12 +02:00
Hemant Agrawal
0a56e151f8 usertools: add missing SPDX identifier
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
2018-04-11 01:47:49 +02:00
Shahaf Shuler
5feecc57d9 align SPDX Mellanox copyrights
Aligning Mellanox SPDX copyrights to a single format.
In addition, switch to SPDX licence tags in files which were missed.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
2018-04-11 01:47:47 +02:00
Jan Viktorin
07cbe8f27d eal/arm: use SPDX tag for Cavium and RehiveTech copyright file
Replace the BSD license header with the SPDX tag for files
with a RehiveTech and Cavium copyright on them.

Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
2018-04-11 01:47:46 +02:00
Jan Viktorin
27d8b82635 use SPDX tag for RehiveTech copyright files
Replace the BSD license header with the SPDX tag for files
with only a RehiveTech copyright on them.

Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
2018-04-11 01:47:43 +02:00
Pablo de Lara
8fb3b25760 maintainers: call out subtree committers
The MAINTAINERS file contains information about the maintainers
of the different components of DPDK.
However, it does not give any information on who maintains the
different subtrees which accept new commits for these components.

This commit adds a list of the subtree committers.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
2018-04-11 00:52:51 +02:00
Ferruh Yigit
adf837fe76 maintainers: maintain avf PMD in next-net-intel
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
2018-04-10 23:58:00 +02:00