Upon a pool free request from the application, the Octeontx FPA free path
does the following:
- Uses the mbox to reset the fpapf pool setup.
- Frees the fpavf resources.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Upon a pool allocation request by the application, Octeontx FPA alloc
does the following:
- Gets a free pool from the PCI fpavf array.
- Uses the mbox to communicate to the fpapf driver:
* gpool-id
* pool block_sz
* alignment
- Programs the fpavf pool boundary.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
A mempool device is a set of PCIe VFs.
On Octeontx HW, each mempool device is enumerated as a
separate SR-IOV VF PCIe device.
In order to expose it as a mempool device:
on PCIe probe, the driver stores the information associated with the
PCIe device and later, upon an application pool request
(e.g. rte_mempool_create_empty), the infrastructure creates a pool device
from the earlier probed PCIe VF devices.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
IGB_UIO compilation recently got enabled for ARM64 by default.
Compiling igb_uio against an ARM64-based stock 4.x (e.g. 4.13)
kernel gives the following build errors:
igb_uio.c: In function ‘igbuio_pci_irqcontrol’:
igb_uio.c:115:25: error: implicit declaration of function
‘irq_get_irq_data’ [-Werror=implicit-function-declaration]
struct irq_data *irq = irq_get_irq_data(udev->info.irq);
                       ^
igb_uio.c:115:25: error: initialization makes pointer from integer without
a cast [-Werror=int-conversion]
Fixes: d196343a25 ("igb_uio: use kernel functions for masking MSI-X")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Use rte_bsf32 and a fast bit-unset operation to optimize the
softrss computation.
The following measurements show the improvement over the default
softrss computation function; a sketch of the optimized loop follows the table.
tuple len    old (cycles)    new (cycles)
        3            1225             337
        9            3743             992
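Below is a hedged sketch of that optimized loop (the shape of the idea, not
claimed to be the exact library code): rte_bsf32() finds the lowest set bit
of each tuple word and "map &= map - 1" clears it, so zero bits are never
visited.

#include <stdint.h>
#include <rte_common.h>      /* rte_bsf32() */
#include <rte_byteorder.h>   /* rte_cpu_to_be_32() */

/* Sketch of the optimized softrss loop: visit only the set bits of each
 * tuple word instead of testing all 32 bit positions per word. */
static inline uint32_t
softrss_sketch(const uint32_t *tuple, uint32_t len, const uint8_t *rss_key)
{
	const uint32_t *key32 = (const uint32_t *)(const void *)rss_key;
	uint32_t i, j, map, ret = 0;

	for (j = 0; j < len; j++) {
		for (map = tuple[j]; map != 0; map &= (map - 1)) {
			i = rte_bsf32(map);	/* lowest set bit */
			ret ^= (rte_cpu_to_be_32(key32[j]) << (31 - i)) |
			       (uint32_t)((uint64_t)rte_cpu_to_be_32(key32[j + 1]) >>
					  (i + 1));
		}
	}
	return ret;
}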
Signed-off-by: Yangchao Zhou <zhouyates@gmail.com>
Reviewed-by: Vladimir Medvedkin <medvedkinv@gmail.com>
When adding a new entry in a hash table, there is
a maximum number of evictions that can be
performed. When the counter of these evictions reaches
this maximum, the entry cannot be added, as it is considered
that the algorithm has encountered an infinite loop.
The problem with the current implementation is that this
counter was declared as a static variable.
If there are multiple threads adding entries in the same table
or in different tables, each should use its own counter,
one per core and per table.
Therefore, the variable has been made non-static.
Fixes: 243e93a504 ("hash: fix unlimited cuckoo path")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This is not a bugfix, but it makes it easier for developers
to review and maintain the igb_uio code.
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
This patch adds MSI IRQ mode in a way that should
also work on older kernel versions. The base for my patch
was an attempt to do this in cf705bc36c, which was later
reverted in d8ee82745a. Compilation was tested on Linux 3.2,
4.10 and 4.12.
Signed-off-by: Markus Theil <markus.theil@tu-ilmenau.de>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
For better readability throughout the module, the destruction
order is changed to the exact inverse of the construction order.
Signed-off-by: Markus Theil <markus.theil@tu-ilmenau.de>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The patch which introduced the use of pci_alloc_irq_vectors
came after the patch which switched to a non-threaded ISR (f0d1896fa1),
but did not request a non-threaded ISR when pci_alloc_irq_vectors
is used.
Fixes: 99bb58f3ad ("igb_uio: switch to new irq function for MSI-X")
Cc: stable@dpdk.org
Signed-off-by: Markus Theil <markus.theil@tu-ilmenau.de>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
igb_uio already allocates IRQs using pci_alloc_irq_vectors on
recent kernels (>= 4.8). Before this fix, the interrupt disable code
was not using the corresponding pci_free_irq_vectors, but the
likewise deprecated pci_disable_msix.
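A minimal sketch of the pairing described above (illustrative, not the
actual igb_uio code), assuming the same kernel-version split used for
allocation:

#include <linux/pci.h>
#include <linux/version.h>

/* Sketch: tear interrupts down with the counterpart of the allocation
 * call; pci_free_irq_vectors() pairs with pci_alloc_irq_vectors() on
 * kernels >= 4.8, the deprecated pci_disable_msix() only on older ones. */
static void
sketch_release_msix(struct pci_dev *pdev)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 8, 0)
	pci_free_irq_vectors(pdev);
#else
	pci_disable_msix(pdev);
#endif
}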
Fixes: 99bb58f3ad ("igb_uio: switch to new irq function for MSI-X")
Cc: stable@dpdk.org
Signed-off-by: Markus Theil <markus.theil@tu-ilmenau.de>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Interrupt setup code in igb_uio has to deal with multiple
types of interrupts and kernel versions. This patch moves
the setup and teardown code into their own functions, to make
it more readable.
Signed-off-by: Markus Theil <markus.theil@tu-ilmenau.de>
Tested-by: Markus Theil <markus.theil@tu-ilmenau.de>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
When using EXTRA_CFLAGS="-g -O3" in the build, the -O3 causes
compiler warnings with the Ubuntu 17.04 gcc compiler.
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Detect the SLE version in reverse chronological order, since ">=" comparisons are used.
Fixes: 2972254ce1 ("kni: fix build on Suse 12 SP3")
Cc: stable@dpdk.org
Signed-off-by: Nirmoy Das <ndas@suse.de>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The kernel patch to support PCI resource mapping was merged:
https://patchwork.kernel.org/patch/9677441/
So enable igb_uio in the default arm64 configuration.
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
A HW pool manager, e.g. the Octeontx SoC, requires s/w to program the start and end
address of the pool. Currently, there is no such API in the external mempool.
Introduce the rte_mempool_ops_register_memory_area API, which lets the HW (pool
manager) know when the common layer selects a hugepage:
for each hugepage, notify its start/end address to the HW pool manager.
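A hedged sketch of a driver-side handler for the new hook; the function name
and body are illustrative (not the actual octeontx code), assuming the handler
receives the chunk's virtual address, physical address and length:

#include <stdint.h>
#include <rte_common.h>
#include <rte_mempool.h>

/* Sketch: translate each reported hugepage chunk into pool start/end
 * boundaries for the HW pool manager. */
static int
hw_pool_register_memory_area(const struct rte_mempool *mp,
			     char *vaddr, phys_addr_t paddr, size_t len)
{
	uintptr_t start = (uintptr_t)vaddr;
	uintptr_t end = start + len;

	RTE_SET_USED(mp);
	RTE_SET_USED(paddr);
	RTE_SET_USED(end);

	/* program 'start' and 'end' into the HW pool manager here */
	return 0;
}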
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Some mempool HW, like the octeontx/fpa block, demands a block-size
(total_elt_sz) aligned object start address.
Introduce the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag.
If this flag is set:
- Align the object start address (vaddr) to a multiple of total_elt_sz.
- Allocate one additional object. The additional object is needed to make
sure that the requested 'n' objects get correctly populated.
Example:
- Say we get a memory chunk of size 'x' from the memzone.
- The application has requested 'n' objects from the mempool.
- Ideally, we would place objects from start address 0 up to (x - block_sz)
for the n objects.
- However, the first object address (0 here) is not necessarily aligned to block_sz.
- So we derive an offset value 'off' for block_sz alignment.
- That 'off' makes sure the start address of each object is blk_sz aligned.
- Applying 'off' may sacrifice the first block_sz area of the memzone
chunk x, so only n-1 objects would fit in the pool area, which is
incorrect behavior.
Therefore we request one additional object (block_sz area) from the memzone
when the MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag is set; a numeric sketch of
this calculation follows.
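A minimal numeric sketch of that calculation (illustrative, not the library
code itself):

#include <stddef.h>
#include <stdint.h>

/* Sketch: 'off' shifts the first object up to the next multiple of
 * total_elt_sz; the bytes lost at the front are why one extra block is
 * reserved, so that all 'n' requested objects still fit. */
static size_t
blk_aligned_obj_count(uintptr_t vaddr, size_t chunk_len, size_t total_elt_sz)
{
	size_t rem = vaddr % total_elt_sz;
	size_t off = rem ? total_elt_sz - rem : 0;

	return (chunk_len - off) / total_elt_sz;
}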
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The memory area containing all the objects must be physically
contiguous.
Introduce the MEMPOOL_F_CAPA_PHYS_CONTIG flag for this use case.
The flag is used to detect whether the pool area has sufficient space
to fit all objects; if not, -ENOSPC is returned.
This way, we make sure that all objects within a pool are contiguous.
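A minimal sketch of the space check (illustrative):

#include <errno.h>
#include <stddef.h>

/* Sketch: with MEMPOOL_F_CAPA_PHYS_CONTIG set, the whole pool must fit in
 * the single contiguous area, otherwise populate fails with -ENOSPC. */
static int
check_contig_area(size_t area_len, size_t total_elt_sz, unsigned int n_objs)
{
	if (area_len < (size_t)n_objs * total_elt_sz)
		return -ENOSPC;
	return 0;
}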
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Allow the mempool driver to advertise its pool capabilities.
For that purpose, an API (rte_mempool_ops_get_capabilities)
and a ->get_capabilities() handler have been introduced.
- Upon a ->get_capabilities() call, the mempool driver advertises
its capabilities through the mempool flags parameter.
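A hedged sketch of a driver-side ->get_capabilities() handler (the body is
illustrative), using the capability flags introduced in the patches above:

#include <rte_common.h>
#include <rte_mempool.h>

/* Sketch: advertise HW pool capabilities through the flags parameter. */
static int
hw_pool_get_capabilities(const struct rte_mempool *mp, unsigned int *flags)
{
	RTE_SET_USED(mp);
	*flags |= MEMPOOL_F_CAPA_PHYS_CONTIG |
		  MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS;
	return 0;
}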
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
xmem_size and xmem_usage need to know the mempool flags,
so add a 'flags' argument to the _xmem_size/usage() APIs.
A following patch will make use of it.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
mp->flags is an int while the mempool API writes an unsigned int
value into 'flags', so fix the 'flags' data type.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Now that DPDK supports more than one mempool driver and
each mempool driver works best with a specific PMD, for example:
- SW ring based mempool for Intel PMDs.
- dpaa2 HW mempool manager for the dpaa2 PMD.
- fpa HW mempool manager for the Octeontx PMD.
the application would like to know the best mempool handle
for any port.
Introduce the rte_eth_dev_pool_ops_supported() API,
which allows a PMD to advertise its supported pool
capability to the application (a usage sketch follows
the priority list).
Supported pools are categorized by the priorities below:
- Best mempool handle for this port (highest priority, '0').
- Port supports this mempool handle (priority '1').
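A hedged usage sketch, assuming the return convention described above
(0 = best, 1 = supported, negative = not supported):

#include <rte_ethdev.h>

/* Sketch: pick a pool ops name for a port, preferring the PMD's best match. */
static const char *
pick_pool_ops(uint16_t port_id, const char *candidate, const char *fallback)
{
	int ret = rte_eth_dev_pool_ops_supported(port_id, candidate);

	if (ret == 0 || ret == 1)	/* best match or supported */
		return candidate;
	return fallback;		/* not supported for this port */
}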
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
DPDK supports both SW and HW mempools, but currently the user
is limited to the ring_mp_mc pool.
If the user wants to use another pool handle, they need to
update the RTE_MEMPOOL_OPS_DEFAULT config option, then
rebuild and run with the desired pool handle.
Introduce an EAL option to override the default pool handle.
Now the user can override RTE_MEMPOOL_OPS_DEFAULT by passing a
pool handle to EAL via `--mbuf-pool-ops-name=""`.
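A hedged usage sketch; the pool handle name passed here is only an example:

#include <rte_common.h>
#include <rte_eal.h>

/* Sketch: override the default mbuf pool ops from the EAL command line
 * instead of rebuilding with a different RTE_MEMPOOL_OPS_DEFAULT. */
static int
init_with_pool_ops(void)
{
	static char arg0[] = "app";
	static char arg1[] = "--mbuf-pool-ops-name=ring_sp_sc"; /* example handle */
	char *argv[] = { arg0, arg1 };

	return rte_eal_init((int)RTE_DIM(argv), argv);
}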
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
iova autodetection depends on the rte_bus_scan result. The bus scan
produces an updated device_list, where each device has its '.kdrv' state
updated. That kdrv state is used to detect the iova mapping mode for the device.
_device_parse() depends on rte_bus_scan, so the
calls below are moved up in the EAL initialization order:
- eal_option_device_parse
- rte_bus_scan
Based on the result of rte_bus_get_iommu_class(), the iova
mapping mode is selected.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Introduce the rte_eal_iova_mode() helper API. This API is
used by non-EAL libraries to detect the iova mode.
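A minimal usage sketch:

#include <stdbool.h>
#include <rte_bus.h>
#include <rte_eal.h>

/* Sketch: a library checking whether EAL runs with virtual iova addressing. */
static bool
iova_is_va(void)
{
	return rte_eal_iova_mode() == RTE_IOVA_VA;
}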
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The rte_bus_get_iommu_class API helps to automatically detect and select
the appropriate iova mapping scheme for iommu-capable devices on a bus.
Algorithm for the iova scheme selection for a bus:
0. Iterate through the bus_list.
1. Collect each bus's iova mode value and OR it into the 'mode' variable.
2. The mode selection scheme is:
if mode == 0 then the iova mode is _pa,
if mode == 1 then the iova mode is _pa,
if mode == 2 then the iova mode is _va,
if mode == 3 then the iova mode is _pa.
So any mode != 2 results in the default iova mode (_pa); a sketch of this
decision follows.
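A hedged sketch of step 2 only ('mode' is the value collected in step 1;
RTE_IOVA_DC = 0, RTE_IOVA_PA = 1, RTE_IOVA_VA = 2):

#include <rte_bus.h>

/* Sketch: only a unanimous _va result selects virtual addressing,
 * everything else keeps the default _pa. */
static enum rte_iova_mode
select_bus_iova_mode(unsigned int mode)
{
	return (mode == RTE_IOVA_VA) ? RTE_IOVA_VA : RTE_IOVA_PA;
}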
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Get the iommu class of the PCI devices on the bus and return the preferred
iova mapping mode for that bus.
The patch also introduces the RTE_PCI_DRV_IOVA_AS_VA driver flag.
The flag is used when a driver needs to operate in iova=va mode.
Algorithm for the iova scheme selection for the PCI bus:
0. If no device is bound, return the RTE_IOVA_DC mapping mode,
else go to 1).
1. Look for devices attached to the vfio kdrv with .drv_flags set
to RTE_PCI_DRV_IOVA_AS_VA.
2. Look for any device attached to a UIO class of driver.
3. Check whether vfio-noiommu mode is enabled.
If 2) and 3) are false and 1) is true, then select RTE_IOVA_VA
as the mapping scheme. Otherwise use the default
mapping scheme (RTE_IOVA_PA). A sketch of this decision follows.
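A hedged sketch of the decision with observations 1)-3) reduced to booleans
(illustrative, not the bus code itself):

#include <stdbool.h>
#include <rte_bus.h>

/* Sketch: VA is selected only when a device is bound to vfio with
 * RTE_PCI_DRV_IOVA_AS_VA, no device is bound to a UIO driver, and
 * vfio-noiommu mode is not enabled. */
static enum rte_iova_mode
pci_select_iova_mode(bool any_device_bound, bool vfio_va_device,
		     bool any_uio_device, bool vfio_noiommu)
{
	if (!any_device_bound)
		return RTE_IOVA_DC;
	if (vfio_va_device && !any_uio_device && !vfio_noiommu)
		return RTE_IOVA_VA;
	return RTE_IOVA_PA;
}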
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Introduce the rte_pci_get_iommu_class API, which gets the iommu class
of the PCI devices on the bus and returns the preferred iova mapping mode for
the PCI bus.
The patch also adds rte_pci_get_iommu_class definitions for:
- bsdapp: the API returns the default iova mode.
- linuxapp: a stub implementation; a follow-up patch has the complete
implementation.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Export the rte_pci_match() function, as it is needed in a follow-up patch.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Tested-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The testpmd flow command has used uint16_t since the beginning by
chance; switch it to portid_t for consistency.
Fixes: 14ab03825b1d ("ethdev: increase port id range")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
In order to support more than 256 virtual ports, the field "port"
in rte_mbuf has been increased to 16 bits. The initialization/reset
value of the field "port" should be changed from 0xff to 0xffff
accordingly.
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Extend the port_id definition from uint8_t to uint16_t in lib and driver
data structures, specifically rte_eth_dev_data. Modify the APIs,
drivers and applications using port_id at the same time.
Fix some checkpatch issues from the original code and remove some
unnecessary cast operations.
The release_17_11 and deprecation docs have been updated in this patch.
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
There are two bonding APIs using ABI versioning, and both have
port_id as a parameter. Since we are already breaking the ABI, there is
no need to keep the older versions of these APIs.
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Before this patch, the devices discovered from the VFIO layer were
added to the device list in the order received from the directory
scan. This causes an issue if devices are reordered.
This patch inserts all devices into the device list in
sorted order according to their names.
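A generic, hedged sketch of such a sorted insertion (structure and field
names are illustrative, not the fslmc bus code):

#include <string.h>
#include <sys/queue.h>

struct scanned_dev {
	TAILQ_ENTRY(scanned_dev) next;
	char name[64];
};
TAILQ_HEAD(scanned_dev_list, scanned_dev);

/* Sketch: insert each newly scanned device before the first entry whose
 * name compares greater, falling back to the tail. */
static void
insert_device_sorted(struct scanned_dev_list *list, struct scanned_dev *newdev)
{
	struct scanned_dev *d;

	TAILQ_FOREACH(d, list, next) {
		if (strcmp(newdev->name, d->name) < 0) {
			TAILQ_INSERT_BEFORE(d, newdev, next);
			return;
		}
	}
	TAILQ_INSERT_TAIL(list, newdev, next);
}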
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Enable the printing of objects during debugging.
Use RTE_LOG to avoid the function name being printed along with the object name.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Instead of enabling the RX checksum by default, enable it
only via the user's ethernet configuration.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The current code sends 8 packets in each internal loop.
Under some conditions, an mbuf is allocated or freed.
In case of an error, the code returns without taking care of
such buffers. It is better to send the already prepared buffers
and report an error for the current failure only.
Fixes: 9e5f3e6d36 ("net/dpaa2: handle non-hardware backed buffer pool")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>