Since IV parameters (offset and length) should not
change for operations in the same session, these parameters
are moved to the crypto transform structure, so they will
be stored in the sessions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Since the IV is now copied after the crypto operation, into
its private data area, the IV can be passed only with offset
and length.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
In order to facilitate the access to the private data,
after the crypto operation, two new macros have been
implemented:
- rte_crypto_op_ctod_offset(c,t,o), which returns a pointer
to "o" bytes after the start of the crypto operation
(rte_crypto_op)
- rte_crypto_op_ctophys_offset(c, o), which returns
the physical address of the data "o" bytes after the
start of the crypto operation (rte_crypto_op)
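For illustration, a minimal usage sketch; the IV_OFFSET value below is an
assumption about where the application chooses to store the IV in the
operation's private data area:
    /* Hypothetical layout: IV stored right after the symmetric op. */
    #define IV_OFFSET (sizeof(struct rte_crypto_op) + \
                       sizeof(struct rte_crypto_sym_op))

    uint8_t *iv = rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET);
    phys_addr_t iv_phys = rte_crypto_op_ctophys_offset(op, IV_OFFSET);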
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
The rte_crypto_op and rte_crypto_sym_op structures
were marked as cache-aligned.
However, since these structures are always created
in a mempool, this alignment is unnecessary, as the mempool
already forces the alignment of its objects.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Instead of storing a pointer to operation-specific parameters,
such as symmetric crypto parameters, use a zero-length array,
to mark that these parameters are stored right after the
generic crypto operation structure, which was already assumed
in the code. This reduces the memory footprint of the crypto operation.
Besides, rte_crypto_op and rte_crypto_sym_op (currently the only
operation-specific parameter structure) are always expected
to be laid out together, as they are initialized
as a single object in the crypto operation pool.
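A simplified sketch of the resulting layout (not the exact DPDK definition):
    struct rte_crypto_op {
            /* ... generic fields: type, status, mempool, phys_addr ... */
            __extension__
            union {
                    struct rte_crypto_sym_op sym[0];
                    /* operation-specific parameters start here, right
                     * after the generic structure, in the same mempool
                     * object */
            };
    };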
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Storing a pointer to the user data is unnecessary,
since the user can store additional data after the crypto operation.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Instead of storing some crypto operation flags,
such as the operation status, as enumerations,
store them as uint8_t, for memory efficiency.
Also, reserve 5 extra bytes in the crypto operation
for future additions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Session type (operation with or without session) is not
something specific to symmetric operations.
Therefore, the variable is moved to the generic crypto operation
structure.
Since this is an ABI change, the cryptodev library version
gets bumped.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
syscall always returns '-1' on failure and there is no point
in printing that value. 'errno' is much more informative.
Fixes: 586e390013 ("vhost: export numa node")
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Check the return value of pthread_mutex_init(). Also destroy
the mutex in case of other errors before returning.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Make sure we catch and log failed calls to pthread
functions.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add a check for the strdup() return value and fail gracefully
if it returns NULL.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The MTU feature support check has to be done against the MTU
feature bit mask, not the bit position.
Fixes: 72e8543093 ("vhost: add API to get MTU value")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
To compare enabled features in the current device, we must use the bit
mask instead of the bit position.
Fixes: c843af3aa1 ("vhost: access header only if offloading is supported")
Cc: stable@dpdk.org
Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
There is no way to bypass IP checksum verification in the Linux
kernel, no matter whether skb->ip_summed is assigned CHECKSUM_UNNECESSARY
or CHECKSUM_PARTIAL.
So any packet with a bad IP checksum will be dropped at the VM IP layer.
To fix this, check the PKT_TX_IP_CKSUM flag and calculate the IP checksum
when it is set.
Fixes: 859b480d5a ("vhost: add guest offload setting")
Cc: stable@dpdk.org
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
As the PKT_TX_TCP_SEG flag in mbuf->ol_flags implies PKT_TX_TCP_CKSUM,
applications, e.g. testpmd, don't set PKT_TX_TCP_CKSUM when TSO
is set.
This leads to packets being dropped in the VM TCP stack because
of a bad TCP checksum.
To fix this, make sure the TCP NEEDS_CSUM info is set in the virtio net
header when PKT_TX_TCP_SEG is set, so that the VM TCP stack will not
check the TCP checksum.
Fixes: 859b480d5a ("vhost: add guest offload setting")
Cc: stable@dpdk.org
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
vsocket->conn_mutex was allocated with pthread_mutex_init() but never
freed with pthread_mutex_destroy(). This is a potential memory leak,
depending on how pthread_mutex_t is implemented.
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Some PMDs need to know the tunnel type in order to handle advanced Tx
features. This patch adds a new Tx offload flag for MPLS-in-UDP packets.
Signed-off-by: Harish Patil <harish.patil@cavium.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
In some environments, the PCI domain can be larger than 16 bits.
For example, a PCI device passed through in Azure gets a synthetic domain
id which is internally generated based on a GUID. The PCI standard does
not restrict the domain to 16 bits.
This change breaks the ABI for APIs that expose the PCI address structure.
The printf format for PCI remains unchanged, so that on most
systems (with only a 16-bit domain) the output format is unchanged
and is 4 characters wide. For example: 0000:00:01.0
Only on systems with higher domain bits will the domain take up more
space; example: 12000:00:01.0
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
The function strtoul returns unsigned long and can be directly
assigned to a smaller type. Removing the casts allows easier
expansion of PCI domain.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
The device name resides in two different locations: in rte_device->name and
in the ethernet device private data.
For now, the copy in the ethernet device private data is required for
multi-process support; the name is how a secondary process finds out about
the primary process's device.
But in the ethdev library some eth_dev->data->name usage can be
converted to rte_device->name.
This patch updates ethdev to use rte_device->name when possible.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
rte_device->name is copied into eth_dev->name; right now the size is the
same for both, but the requirement is not clear.
This patch highlights the relation without changing the actual sizes.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add the new meta pattern item RTE_FLOW_ITEM_TYPE_FUZZY to the flow API.
This is for devices that support a fuzzy match option.
Usually a fuzzy match is fast, but the cost is accuracy;
e.g. Signature Match only matches the pattern's hash value, so it is
possible that two different patterns have the same hash value.
The matching accuracy level can be configured via the threshold subfield.
The driver can divide the range of the threshold and map it to the
different accuracy levels that the device supports.
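A hedged usage sketch, assuming the item carries a single threshold field
named thresh:
    struct rte_flow_item_fuzzy fuzzy = {
            .thresh = 1, /* accuracy threshold, mapped by the driver */
    };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_FUZZY, .spec = &fuzzy },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };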
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
When the primary process is booted with the --file-prefix option, the API
rte_eal_primary_proc_alive() uses a wrong config file path to
check if the primary process is alive.
Fix it by calling the helper function to get the config file path.
Fixes: dd3e00138d ("eal: check if primary process is alive")
Cc: stable@dpdk.org
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Suppose we have two virtio devices for a VM, with only the first one,
virtio0, bound to igb_uio. Start a primary DPDK process, driving
only virtio0. Then start a secondary DPDK process; it encounters a
segfault at eth_virtio_dev_init() because hw is NULL when trying
to initialize the second virtio device:
1539 if (!hw->virtio_user_dev) {
We could add a precheck to return an error when hw is NULL. But the
root cause is that virtio devices which are not driven by the primary
process are not excluded by the secondary eal probe function.
To support legacy virtio devices not bound to any kernel driver, we
removed RTE_PCI_DRV_NEED_MAPPING in
commit 962cf902e6 ("pci: export device mapping functions").
At the boot of the primary process, the ether dev is allocated in the
rte_eth_devices array and rte_eth_dev_data is also allocated in the
rte_eth_dev_data array; then the probe function fails and the ether dev
is released. However, the entry in the rte_eth_dev_data array is not
cleared. Then we start the secondary process and try to attach the
virtio device not used in the primary process; the field dev_private
(or hw) in rte_eth_dev_data is NULL.
To fail the dev attach, we need to clear the field name when we
release any ether device in the primary process, so that the loop below
in rte_eth_dev_attach_secondary() will not find any matched names:
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
        if (strcmp(rte_eth_dev_data[i].name, name) == 0)
                break;
}
Fixes: 6d890f8ab5 ("net/virtio: fix multiple process support")
Cc: stable@dpdk.org
Reported-by: Reshma Pattan <reshma.pattan@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
When populating a mempool with a virtual memory area, the mempool
library expects to be able to get the physical address of each page.
When started with --no-huge, the physical addresses may not be available
because the pages are not locked in memory. It sometimes returns
RTE_BAD_PHYS_ADDR, which makes the mempool_populate() function fail.
This was working before commit cdc242f260 ("eal/linux: support
running as unprivileged user"), because rte_mem_virt2phy() was returning
0 instead of RTE_BAD_PHYS_ADDR, which was seen as a valid physical
address.
Since --no-huge is a debug option that breaks the support of physical
drivers, always set physical addresses to RTE_BAD_PHYS_ADDR in memzones
or in rte_mem_virt2phy(), and ensure that mempool won't complain in that
case.
Fixes: cdc242f260 ("eal/linux: support running as unprivileged user")
Cc: stable@dpdk.org
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Jan Blunck <jblunck@infradead.org>
Added CRC compute APIs for arm64 utilizing the pmull
capability.
Added a new file net_crc_neon.h to hold the arm64 pmull
CRC implementation.
Added wrappers in rte_vect.h for those NEON intrinsics
which are not supported in GCC versions < 7.
Verified the changes with the crc_autotest unit test case.
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
Moved the definition of GCC_VERSION from lib/librte_table/rte_lru.h
to lib/librte_eal/common/include/rte_common.h.
Tested compilation on:
* arm64 with gcc
* x86 with gcc and clang
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
Reviewed-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
Since SSE4 is now part of the minimum requirements for DPDK, we don't need
the scalar version on x86.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Since SSE4 is now part of the minimum requirements for DPDK, we don't need
to check for its presence any more.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Since SSE4 is now part of the minimum requirements for DPDK, we don't need
to check for its presence any more.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Since SSE4 is now part of the minimum requirements for DPDK, we don't need
to check for its presence any more.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Since SSE4 is now part of the minimum requirements for DPDK, we don't need
a fallback case to handle selection of algorithm when SSE4 is unavailable.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Since SSE4 is now part of the minimum requirements for DPDK, we no longer
need this check.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Since SSE4 is now part of the minimum requirements for DPDK, we no longer
need this check.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Our x86 baseline is to have support for SSE4.2, so there is no
point in conditions around the inclusion of the SSE1 - SSE4 headers.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Don't zero the pages during each mmap. Instead, only zero the pages
when they are not already mmapped. Otherwise, the multi-process
support will be broken, as the pages will be zeroed when secondary
processes map the memory. Besides, track the open and mmap operations
on the cdev, and prevent the module from being unloaded when it is
still in use.
Fixes: 82f9318055 ("contigmem: zero all pages during mmap")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Using the new hotplug API allows attach to be backwards compatible while
decoupling it from the concrete bus implementations.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
This is changing the API of rte_eal_dev_detach().
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
This allows the buses to plug and probe specific devices.
This is meant to be a building block for hotplug support.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
This new method allows buses to expose their devices in a controlled
manner. A comparison function is provided by the user to discriminate
between devices, using arbitrary data as identifier.
It is possible to start an iteration from a specific point, in order to
continue a search.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
This helper allows iterating over all registered buses and finding one
matching the data used as a parameter.
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Remove the rte_pause() definition from rte_common.h and
switch over to the architecture-specific rte_pause.h.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The patch does not provide any functional change for ppc64
with respect to existing rte_pause() definition.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
The patch does not provide any functional change for x86
with respect to existing rte_pause() definition.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The patch does not provide any functional change for ARM32
with respect to existing rte_pause() definition.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Jan Viktorin <viktorin@rehivetech.com>
Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
Each architecture may have different instructions for an optimized
and power-consumption-aware rte_pause() implementation.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Fixed warning -Wasm-operand-widths seen with armv8a
clang compilation.
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Fixed warning -Wunknown-warning-option seen with
armv8a clang compilation.
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Compile the armv8a CRC32 support only if the machine
has the CRC extensions, i.e. if RTE_MACHINE_CPUFLAG_CRC32
is defined.
Removed the .arch assembly directives as these are no
longer necessary.
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Instead of simply busy-waiting for the slave in rte_eal_wait_lcore(),
call rte_pause(). This will give power savings.
This also fixes the -Wempty-body warning seen with armv8a clang
compilation.
Suggested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
* Added new file rte_lru_arm64.h for holding arm64 specific
definitions
* Verified the changes with table_autotest unit test case
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
* Moved all x86 related lru defines to rte_lru_x86.h while
retaining all common defines in rte_lru.h
* Verified the changes with table_autotest unit test case
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
* Added file lib/librte_efd/rte_efd_arm64.h to hold arm64
specific definitions
* Verified the changes with efd_autotest unit test case
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
* Removed setting CONFIG_RTE_SCHED_VECTOR=n from armv8a config
so that the setting from common_base is taken as the default
setting for armv8a
* Verified the changes with sched_autotest unit test case
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
Verified the changes with thash_autotest unit test case
Signed-off-by: Ashwin Sekhar T K <ashwin.sekhar@caviumnetworks.com>
Acked-by: Jianbo Liu <jianbo.liu@linaro.org>
In some places, the log2() function is used even though it
works on floats. This introduces a dependency on the math lib, but
most of the time it is not required because we want an integer log2.
Add a new helper to do this job and fix the nfp driver.
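A minimal sketch of what such an integer helper can look like (the actual
helper added to rte_common.h may differ in name and implementation):
    /* Return ceil(log2(v)) without using libm; v == 0 returns 0. */
    static inline uint32_t
    log2_u32(uint32_t v)
    {
            uint32_t r = 0;

            if (v == 0)
                    return 0;
            v--;            /* round v up to the next power of two */
            v |= v >> 1;
            v |= v >> 2;
            v |= v >> 4;
            v |= v >> 8;
            v |= v >> 16;
            v++;
            while (v >>= 1) /* index of the remaining set bit */
                    r++;
            return r;
    }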
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Alejandro Lucero <alejandro.lucero@netronome.com>
This patch fixes a typo in the eth device API doc: device
configuration not stored between calls to rte_eth_dev_start()/stop()
should be restored before a call to rte_eth_dev_start(),
not after a call to rte_eth_dev_start().
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
This patch fixes a trivial typo in rte_ethdev.h; it should be
"RX multicast OFF" and not "RX multicast OF".
Signed-off-by: Rami Rosen <rami.rosen@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Change the rte_eth_dev_callback_process function to return int,
and add a void *ret_param parameter.
The new parameter is used by ixgbe and i40e instead of abusing
the user data of the callback.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
rte_memzone_reserve() provides cache line alignment, but
struct rte_ring may require more than cache line alignment: on x86-64,
it needs 128-byte alignment due to PROD_ALIGN and CONS_ALIGN, which are
128 bytes, but cache line size is 64 bytes.
Fixes runtime warnings with UBSan enabled.
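A sketch of the kind of change implied, assuming the memzone is reserved in
rte_ring_create() with variables along these lines (names are illustrative):
    /* Reserve with the alignment the ring structure actually requires,
     * instead of relying on the default cache-line alignment. */
    mz = rte_memzone_reserve_aligned(mz_name, ring_size, socket_id,
                                     mz_flags,
                                     __alignof__(struct rte_ring));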
Fixes: d9f0d3a1ff ("ring: remove split cacheline build setting")
Cc: stable@dpdk.org
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
In kni_allocate_mbufs(), we always attempt to add max_burst (32) mbufs
into alloc_q, which leads to excessive rte_pktmbuf_free() calls when
alloc_q is contended at high packet rates (e.g. 10G data).
In a situation where the alloc_q fifo can only accommodate very few (or
zero) mbufs, create only what is needed and add it to the fifo.
With this patch, we could stop random network stalls in KNI at higher
packet rates (e.g. 1G or 10G data between vEth0 and the PMD) that
sufficiently exhaust alloc_q under the above condition. I tested the
i40e PMD for this purpose on ppc64le.
Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
rte_pktmbuf_headroom() and rte_pktmbuf_tailroom() should be usable
with any segment, not only with headered ones, so is_header should be 0
when they call the sanity check internally.
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
There is no need to initialize the complete
packet buffer with zero, as the packet data area will be
overwritten by the NIC Rx HW anyway.
testpmd configures the packet mempool
with around 180k buffers of
2176B size. In the existing scheme, the init routine
needs to memset around ~370MB, whereas the proposed scheme
requires only around ~22MB on a 128B cache-aligned system.
Useful when running DPDK in HW simulators/emulators,
where millions of cycles have an impact on boot time.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Currently EAL allocates hugepages one by one, not paying attention
to which NUMA node the allocation was done from.
Such behaviour leads to allocation failures if the number of available
hugepages for the application is limited by cgroups or hugetlbfs and
memory is requested not only from the first socket.
Example:
# 90 x 1GB hugepages available in a system
cgcreate -g hugetlb:/test
# Limit to 32GB of hugepages
cgset -r hugetlb.1GB.limit_in_bytes=34359738368 test
# Request 4GB from each of 2 sockets
cgexec -g hugetlb:test testpmd --socket-mem=4096,4096 ...
EAL: SIGBUS: Cannot mmap more hugepages of size 1024 MB
EAL: 32 not 90 hugepages of size 1024 MB allocated
EAL: Not enough memory available on socket 1!
Requested: 4096MB, available: 0MB
PANIC in rte_eal_init():
Cannot init memory
This happens because all allocated pages are
on socket 0.
Fix this issue by setting the mempolicy MPOL_PREFERRED for each hugepage
to one of the requested nodes using the following scheme:
1) Allocate essential hugepages:
1.1) Allocate as many hugepages from NUMA node N as
needed to fit the memory requested for this node.
1.2) Repeat 1.1 for all NUMA nodes.
2) Try to map all remaining free hugepages in a round-robin
fashion.
3) Sort pages and choose the most suitable.
In this case all essential memory will be allocated and all remaining
pages will be fairly distributed between all requested nodes.
A new config option RTE_EAL_NUMA_AWARE_HUGEPAGES is introduced and
enabled by default for linuxapp, except armv7 and dpaa2.
Enabling this option adds libnuma as a dependency for EAL.
Fixes: 77988fc08d ("mem: fix allocating all free hugepages")
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Flag dev_started should be cleared after dev_stop() function call
because the flag is checked inside the dev_stop() function.
Fixes: d11b0f30df ("cryptodev: introduce API and framework for crypto devices")
Cc: stable@dpdk.org
Signed-off-by: Kirill Rybalchenko <kirill.rybalchenko@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Add PCI probe/remove/init/uninit functions in a separate
file, rte_cryptodev_pci.h, which do not use the cryptodev driver,
so that they can be removed in the next commits.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Call rte_cryptodev_pmd_release_device() if probing a
PCI crypto device, instead of accessing the variables
directly. This will be useful when rte_cryptodev_pci_probe()
gets moved to a separate file.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Move all functions handling virtual devices to a separate
header file "rte_cryptodev_vdev.h", in order to leave only
generic functions for any device in the rest of the files.
It also creates the file "rte_cryptodev_pmd.c", with the
implementations of these functions.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Do not set PCI information in the device information structure
for every crypto device, just for the ones that are PCI devices, so
this is set internally in the PCI crypto PMDs (only QAT for now).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The rte_cryptodev_devices_get() function returns an array of devices
sharing the same driver.
Instead of having two different paths depending on whether the device is
virtual or physical, retrieve the driver name from the rte_device structure.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The rte_cryptodev_devices_get() function was taking a crypto
device name as an argument, but the function actually
returns the device identifiers of devices that share the
same crypto driver, so the argument should be the driver name instead.
Fixes: 38227c0e3a ("cryptodev: retrieve device info")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
When retrieving device information for a crypto driver,
the driver name was only set when it was a PCI driver.
Getting the driver name from the rte_device structure
allows the rte_cryptodev_get_info() function to return it
regardless of whether the device is virtual or physical.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Only non-virtual devices were storing the pointer to the
rte_device structure in rte_cryptodev, which will be needed
to retrieve the driver name for any device.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Currently when a malloc_elem is split after resizing, any padding
present in the elem is ignored. This causes the resized elem to be too
small when padding is present, and user data can overwrite the beginning
of the following malloc_elem.
Solve this by including the size of the padding when computing where to
split the malloc_elem.
Fixes: af75078fec ("first public release")
Signed-off-by: Jamie Lavigne <lavignen@amazon.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
When debug level logging is enabled (--log-level=8), each driver that
failed to probe the device is printed, like:
EAL: Driver (net_ark) doesn't match the device
EAL: Driver (net_avp) doesn't match the device
EAL: Driver (net_bnxt) doesn't match the device
EAL: Driver (net_cxgbe) doesn't match the device
EAL: Driver (net_e1000_igb) doesn't match the device
EAL: Driver (net_e1000_igb_vf) doesn't match the device
EAL: Driver (net_e1000_em) doesn't match the device
EAL: Driver (net_ena) doesn't match the device
EAL: Driver (net_enic) doesn't match the device
EAL: Driver (net_fm10k) doesn't match the device
EAL: Driver (net_i40e) doesn't match the device
EAL: Driver (net_i40e_vf) doesn't match the device
....
Overall hundreds of similar lines are printed, because all drivers are
printed for all devices. This is too much noise, and there is already a
log message printed when a device is matched.
Remove the debug log completely.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
The function rte_eth_tx_done_cleanup() was missing in the map file
so it cannot be used by applications linking to shared libraries.
pktgen uses it since version 3.2.0.
Fixes: 44a718c457 ("ethdev: add API to free consumed buffers in Tx ring")
Cc: stable@dpdk.org
Signed-off-by: Luca Boccassi <luca.boccassi@gmail.com>
The NUMA node information for PCI devices provided through
sysfs is invalid for AMD Opteron(TM) Processor 62xx and 63xx
on Red Hat Enterprise Linux 6, and for VMs on some hypervisors.
It is good to add more checking for valid values.
Typical wrong numa node in some VMs:
$ cat /sys/devices/pci0000:00/0000:00:18.6/numa_node
-1
Signed-off-by: Tonghao Zhang <nic@opencloud.tech>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
The error return code for the rte_ring_sc_dequeue_bulk() and
rte_ring_mc_dequeue_bulk() functions should be -ENOENT rather
than -ENOBUFS, as described in the function description.
Fixes: cfa7c9e6fc ("ring: make bulk and burst return values consistent")
Cc: stable@dpdk.org
Signed-off-by: Anand B Jyoti <anand.b.jyoti@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Defining the value 0 as the default value for the dequeue timeout
will help the application reduce the configuration setup
if the application is interested only in the default
timeout value.
Removed the "min_dequeue_limit" negative test case, as the
min_dequeue_limit value could be zero (which is the
default timeout now) if the driver has
dev_info->min_dequeue_timeout_ns = 1.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Introduce the burst mode capability flag to express that the event device
is capable of operating in burst mode for enqueue (forward, release) and
dequeue operations. If the device is not capable, then the application
still uses rte_event_dequeue_burst() and rte_event_enqueue_burst(),
but the PMD accepts only one event at a time, which is anyway transparent
with the current rte_event_*_burst API semantics.
It solves two purposes:
1) Fix the performance regression on PMDs which support only nonburst
mode; this issue is two-fold.
Typically the burst_worker main loop consists of the following pseudo code:
while (1) {
        uint16_t i;
        uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id, ev, ...);

        for (i = 0; i < nb_rx; i++) {
                process(ev[i]);
                if (is_release_required(ev[i]))
                        release_the_event(ev[i]);
        }

        uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
                        ev, nb_rx);
        while (nb_tx < nb_rx)
                nb_tx += rte_event_enqueue_burst(dev_id, port_id,
                                ev + nb_tx, nb_rx - nb_tx);
}
Typically the non_burst_worker main loop consists of the following pseudo code:
while (1) {
        uint16_t nb_rx = rte_event_dequeue_burst(dev, port, &ev, 1, ...);

        if (!nb_rx)
                continue;
        process(ev);
        while (rte_event_enqueue_burst(dev, port, &ev, 1) != 1)
                ;
}
The following overhead has been seen on nonburst-capable PMDs with the
burst mode version:
- An extra explicit release (the PMD does the release implicitly on the
next dequeue); avoiding it saves the additional driver function call
overhead.
- An extra "for" loop for event processing, which the compiler cannot
remove, since it cannot know that only one event is returned at a time.
2) Simplify the application configuration by removing the need for the
application to find the correct enqueue and dequeue depth across
different PMDs.
If burst mode is not supported, the PMD can ignore the depth field.
This will enable writing portable applications and makes the
RFC eventdev_pipeline application work on the OCTEONTX PMD
http://dpdk.org/dev/patchwork/patch/23799/
If an application wishes to get the maximum performance on a nonburst
capable PMD, then the application can write the code in such a way that
the packet processing functions are kept as inline functions and the
workers are launched based on the capability.
The generic burst-based worker still works on those PMDs without
any code change, but this scheme is needed only when the application
wants to get the maximum performance out of nonburst-capable PMDs.
This patch is based on real world test cases
http://dpdk.org/dev/patchwork/patch/24832/, where without this scheme a
20.9% performance drop was observed per core.
See the worker_wrapper(), perf_queue_worker() and perf_queue_worker_burst()
functions for how to use this scheme in a portable way without losing
performance on both sets of PMDs while achieving portability.
http://dpdk.org/dev/patchwork/patch/24832/
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Made the libeventdev library independent of the VDEV bus by moving the
vdev PMD specific functions to the rte_eventdev_pmd_vdev.h header file.
An eventdev VDEV PMD can include that header for the generic eventdev
VDEV init and uninit functions.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Made the libeventdev library independent of the PCI bus by moving the
PCI PMD specific functions to the rte_eventdev_pmd_pci.h header file.
An eventdev PCI PMD can include that header for the generic eventdev
PCI probe and remove functions.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Remove rte_event_dev_close() from the rte_event_pmd_release() function so
that rte_event_pmd_release() can be used in a stateless way. This will
enable the rte_event_pmd_vdev_uninit() function to avoid using
the eventdev_globals global variable and remove the need for exposing a
global variable to PMDs.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Remove the PCI dependency from the generic data structures
and move the PCI specific code to rte_event_pmd_pci*.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
If the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
is not set, it indicates the device is centralized and thus needs
a dedicated scheduling thread that repeatedly calls
rte_event_schedule().
Update the worker thread code snippet to match
the description.
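A hedged sketch of such a dedicated scheduling thread (dev_id and the stop
flag are application-defined):
    static volatile int done; /* hypothetical application stop flag */

    static int
    scheduler_lcore(void *arg)
    {
            uint8_t dev_id = *(uint8_t *)arg;

            while (!done)
                    rte_event_schedule(dev_id);
            return 0;
    }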
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The nb_atomic_flows and nb_atomic_order_sequences fields are only inspected
if the queue is configured for atomic or ordered scheduling, respectively.
This commit updates the documentation to reflect that.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
The rte_ipv4_fragment_packet API expects that the link/interface MTU value
passed in be divisible by 8 bytes. Given the name of the parameter is
"mtu" rather than "frag_size" it is not necessarily the case that it will
be divisible by 8. An MTU of 1500 happens to produce a max fragment size
of 1480 (1500 - sizeof(ipv4_hdr)) which is divisible by 8 but other MTU
values such as 1600 or 9000 do not produce values that are divisible by 8.
Unfortunately, the API checks that the frag_size value produced is
divisible by 8 with a call to RTE_ASSERT, which is only enabled when the
RTE_LOG_LEVEL >= RTE_LOG_DEBUG. In cases where the log level is set
normally, the code silently continues and produces IP fragments that have
invalid fragment offset values.
An application may not have control over what MTU a user selects; rather
than have each application adjust the MTU to pass a suitable value to the
fragmentation API, this change modifies the fragmentation API to handle
cases where the "mtu" argument is not divisible by 8 and automatically
adjust the internal "frag_size".
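A sketch of the internal adjustment, assuming the per-fragment payload is
derived from the mtu_size argument (the IPv4 fragment offset is expressed
in 8-byte units):
    uint32_t frag_size = mtu_size - sizeof(struct ipv4_hdr);

    /* e.g. MTU 1600 gives 1580 bytes of payload, rounded down to 1576 */
    frag_size = RTE_ALIGN_FLOOR(frag_size, 8);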
Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The rte_ip_frag_table_destroy procedure simply releases the memory for the
table without freeing the packet buffers that may be referenced in the hash
table for in-flight or incomplete packet reassembly operations. To prevent
leaked mbufs, go through the list of fragments and free each one
individually.
Fixes: 416707812c ("ip_frag: refactor reassembly code into a proper library")
Cc: stable@dpdk.org
Reported-by: Matt Peters <matt.peters@windriver.com>
Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The socket id parsed from the user was checked
to see if it was in the range of available sockets.
This check is unnecessary, as the socket specified
might not have memory anyway, so it will fail
at memory allocation.
Therefore, the best solution is to remove this check.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
From v20 to v1604, the number of tbl8s can be up to 1<<24, so
a (uint8_t) or (uint16_t) cast may truncate the tbl8
index in v1604 and cause a wrong number.
Fixes: dc81ebbaca ("lpm: extend IPv4 next hop field")
Cc: stable@dpdk.org
Signed-off-by: Wei Dai <wei.dai@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Since a vhost_user_set_features failure is not handled in any way, a
single error log has been added to at least let the user know that
something has gone wrong.
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The queue allocation was changed from allocating one queue-pair at a
time to one queue at a time. Most of the changes were done, but
one was missed: the size used when copying the old queue is still
based on the queue-pair at numa_realloc(), which leads to an overwrite
issue. As a result, a crash may happen.
Fix it by specifying the right copy size. Also, the net queue macros
are not used any more; remove them.
Fixes: ab4d7b9f1a ("vhost: turn queue pair to vring")
Cc: stable@dpdk.org
Reported-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Jens Freimann <jfreiman@redhat.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
Accessing fields of a packed struct through unaligned pointers is
undefined behavior. Instead of passing pointers to particular fields,
a pointer to the root struct should be used. This patch does exactly
that.
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch fixes a memory leak.
virtio_net::guest_pages is allocated in vhost_setup_mem_table(),
reallocated in add_one_guest_page(), but never freed.
Fixes: e246896178 ("vhost: get guest/host physical address mappings")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-by: Jens Freimann <jfreiman@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch implements the rx_queue_count op for the vhost PMD by adding
a helper function, rte_vhost_rx_queue_count, in the vhost lib.
The rx_queue_count op gets the vhost Rx queue avail count and helps to
understand the queue fill level.
Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
Acked-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
When we try to allocate guest pages we need to check the return value of
malloc(). Print an error message and return when it fails.
Signed-off-by: Jens Freimann <jfreiman@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The function rte_mem_lock_page() was added for Linux only.
The file eal_common_memory.c is a better place to make it
available in FreeBSD also.
The issue is seen when trying to compile bnxt on FreeBSD:
bnxt_hwrm.c: undefined reference to `rte_mem_lock_page'
Fixes: 3097de6e6b ("mem: get physical address of any pointer")
Reported-by: Fangfang Wei <fangfangx.wei@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The flow API defines several structures whose fields must be specified in
network order. This commit documents them using explicit type names and
related endianness conversion macros.
No ABI change.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
These macros resolve to constant expressions that allow developers to
perform endianness conversion on static/const objects, even outside of
function scope, as they do not translate to function calls.
This is most useful for static initializers and constant values (whenever
conversion has to be performed at compilation time). Run-time endianness
conversion of variable values should keep using the rte_*_to_*() calls for
best performance.
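A hedged sketch, assuming the constant-expression macros follow the
RTE_BE16()/RTE_BE32() naming:
    /* Allowed outside function scope because it is a constant expression. */
    static const uint16_t http_port_be = RTE_BE16(80);

    /* Run-time conversion of a variable keeps using the function form. */
    uint16_t port_be = rte_cpu_to_be_16(port);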
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This commit introduces new rte_{le,be}{16,32,64}_t types and updates
rte_{le,be,cpu}_to_{le,be,cpu}_*() accordingly.
These types are added for documentation purposes, mainly to clarify the
byte ordering to use for storage when not CPU order. Doing so eliminates
uncertainty and conversion mistakes.
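A short sketch of how the new types document stored byte order (struct and
field names are illustrative):
    struct ipv4_key {
            rte_be32_t src_addr; /* kept in network (big endian) order */
            rte_be32_t dst_addr;
    };

    /* The type makes the required conversion explicit when reading: */
    uint32_t src = rte_be_to_cpu_32(key.src_addr);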
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Fixing typos across dpdk source code using codespell utility.
Skipped the ethdev driver's base code fixes to keep the base
code intact.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Isolated mode can be requested by applications on individual ports to avoid
ingress traffic outside of the flow rules they define.
Besides making ingress more deterministic, it allows PMDs to safely reuse
resources otherwise assigned to handle the remaining traffic, such as
global RSS configuration settings, VLAN filters, MAC address entries,
legacy filter API rules and so on in order to expand the set of possible
flow rule types.
To minimize code complexity, PMDs implementing this mode may provide
partial (or even no) support for flow rules when not enabled (e.g. no
priorities, no RSS action). Applications written to use the flow API are
therefore encouraged to enable it.
Once effective, leaving isolated mode may not be possible depending on PMD
implementation.
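A hedged usage sketch (the exact prototype may differ slightly; the error
handling is illustrative):
    struct rte_flow_error error;

    /* Request isolated mode before configuring the port. */
    if (rte_flow_isolate(port_id, 1, &error) != 0)
            printf("isolated mode not supported: %s\n",
                   error.message ? error.message : "(no reason given)");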
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
build error with icc version 17.0.4 (gcc version 7.0.0 compatibility):
In file included from .../dpdk/lib/librte_hash/rte_fbk_hash.h(59),
from .../dpdk/lib/librte_hash/rte_fbk_hash.c(54):
.../dpdk/x86_64-native-linuxapp-icc/include/rte_hash_crc.h(480):
error #1292: unknown attribute "fallthrough"
__attribute__ ((fallthrough));
^
In file included from .../dpdk/lib/librte_hash/rte_fbk_hash.h(59),
from .../dpdk/lib/librte_hash/rte_fbk_hash.c(54):
.../dpdk/x86_64-native-linuxapp-icc/include/rte_hash_crc.h(486):
error #1292: unknown attribute "fallthrough"
__attribute__ ((fallthrough));
^
This code path is hit when gcc > 7 is installed and ICC doesn't recognize
the fallthrough attribute.
Fixed by disabling the code when compiling with ICC.
Fixes: 3dfb9facb0 ("lib: add switch fall-through comments")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
build error:
.../dpdk/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:
In function ‘igb_kni_probe’:
.../dpdk/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2483:30:
error: ‘%d’ directive output may be truncated writing between 1 and 5
bytes into a region of size between 0 and 11
[-Werror=format-truncation=]
"%d.%d, 0x%08x, %d.%d.%d",
^~
.../dpdk/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2483:8:
note: directive argument in the range [0, 65535]
"%d.%d, 0x%08x, %d.%d.%d",
^~~~~~~~~~~~~~~~~~~~~~~~~
.../dpdk/build/build/lib/librte_eal/linuxapp/kni/igb_main.c:2481:4:
note: ‘snprintf’ output between 23 and 43 bytes into a destination of
size 32
snprintf(adapter->fw_version,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
sizeof(adapter->fw_version),
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"%d.%d, 0x%08x, %d.%d.%d",
~~~~~~~~~~~~~~~~~~~~~~~~~~
fw.eep_major, fw.eep_minor, fw.etrack_id,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
fw.or_major, fw.or_build, fw.or_patch);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fixed by increasing buffer size to 43 as suggested in compiler log.
Fixes: b9ee370557 ("kni: update kernel driver ethtool baseline")
Cc: stable@dpdk.org
Reported-by: Nirmoy Das <ndas@suse.de>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Markos Chandras <mchandras@suse.de>
Some ethdev devices, like the nicvf thunderx PMD, need special treatment
for Secondary queue set (SQS) PCIe VF devices, where they expect the
memory not to be unmapped or freed without registering the ethdev
subsystem.
Introduce a new RTE_PCI_DRV_KEEP_MAPPED_RES
PCI driver flag to request the PCI subsystem not to unmap the mapped PCI
resources (PCI BAR addresses) if an unsupported device is detected.
Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
rte_driver->name holds the driver name, and all physical and virtual
devices have access to it.
Previously it was not possible for virtual ethernet devices to access
the rte_driver->name field (because eth_dev used to keep only pci_dev),
and it was required to save the driver name in the device private struct.
After the re-works on bus and vdev, it is possible for all bus types to
access rte_driver.
It is now possible to remove the driver name from the ethdev device
private data and use eth_dev->device->driver->name instead.
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Jan Blunck <jblunck@infradead.org>
Move all bypass functions to ixgbe pmd and remove function
pointers from the eth_dev_ops struct.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Instead of many PMD define their own macro, define a generic one in
ethdev and use that in PMDs.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Allain Legacy <allain.legacy@windriver.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Currently, 'rte_eth_dev_get_port_by_name' changes the passed
'port_id' unconditionally. This is undocumented and misleading
behaviour, as the user may expect the value to be unchanged in case of
error. Otherwise, there is no sense in having both a return value and
a pointer in the function.
Fixes: 9c5b8d8b9f ("ethdev: clean port id retrieval when attaching")
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Different drivers use internal macros like force_inline for the compiler's
always-inline feature.
Standardize them through the __rte_always_inline macro.
Verified the change by comparing the output binary file.
No difference was found in the output binary file with this change.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Checking against VFIO_MAX_GROUPS goes beyond the maximum array
index which should be (VFIO_MAX_GROUPS - 1).
Coverity issue: 144555, 144556, 144557
Fixes: 94c0776b1b ("support hotplug")
Cc: stable@dpdk.org
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
If the socket_id is invalid (e.g. -2, -3),
memzone_reserve_aligned_thread_unsafe should return
EINVAL and not ENOMEM. To achieve this, check the
socket_id before calling malloc_heap_alloc.
Signed-off-by: Tonghao Zhang <nic@opencloud.tech>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Fixes memory access errors detected by Coverity.
All cases are the maximum permissible value causing an
off-by-one overrun.
Coverity issue: 143433, 143434, 143460, 143464
Fixes: 349950ddb9 ("metrics: add information metrics library")
Signed-off-by: Remy Horton <remy.horton@intel.com>
When rte_lpm.h is used on x86, the -O0 option (no optimization at all)
given to gcc causes a compile error like this:
error: the last argument must be an 8-bit immediate
i24 = _mm_srli_si128(i24, sizeof(uint64_t));
-O0 option is useful for debugging and code coverage measurement, but
this error prevents DPDK programs from building. This patch replaces
"sizeof(uint64_t)" with a constant literal "8" to work around the issue.
The issue occurs on gcc/g++ versions from 4.8 to 5.
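The change amounts to replacing the sizeof() expression with the
equivalent literal:
    /* Before: rejected by affected gcc versions at -O0 */
    i24 = _mm_srli_si128(i24, sizeof(uint64_t));
    /* After: the literal is accepted as an 8-bit immediate */
    i24 = _mm_srli_si128(i24, 8);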
Signed-off-by: Sangjin Han <sangjin@eecs.berkeley.edu>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The debug assertions when allocating a raw mbuf are not correct since
commit 8f094a9ac5 ("mbuf: set mbuf fields while in pool"),
which triggers a panic when using this function in debug mode.
Change the expected number of segments to 1 instead of 0, and
factorize these sanity checks.
Fixes: 8f094a9ac5 ("mbuf: set mbuf fields while in pool")
Signed-off-by: Gregory Etelson <gregory@weka.io>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Coverity reported that an argument for sizeof was used improperly.
We should allocate memory for the size of the value that the pointer
points to, instead of the pointer size itself.
Coverity issue: 144522
Fixes: 79c913a42f ("ethdev: retrieve xstats by ID")
Signed-off-by: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
This fixes compiler warnings with GCC 7 for arm64 build.
Fixes: da8dcc27f6 ("hash: use armv8-a CRC32 instructions")
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Some drivers (such as virtio) may need to read more than 4 bytes
data from PCI configuration space via rte_eal_pci_read_config().
But it will return with an error on FreeBSD when the expected
data length is bigger than the size of pi.pi_data whose type is
u_int32_t. This patch removes this limitation.
Fixes: 632b2d1dee ("eal: provide functions to access PCI config")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The first param of out*() on FreeBSD is port, and the second one
is data. But they are reversed in DPDK. This patch fixes it.
Fixes: 756ce64b1e ("eal: introduce PCI ioport API")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
When physical NICs are bound to igb_uio/uio-pci-generic, they cannot
be used in a DPDK app in Xen dom0.
This is due to (1) a restriction, added by commit cdc242f260
("eal/linux: support running as unprivileged user"), that phys addresses
should be available,
(2) and the previous implementation of the test to check whether phys
addresses are available (using a variable on the stack) only working in
a non-Xen environment. Actually, for Xen dom0, the physical addresses
are always available if the memory is initialized successfully.
To fix it, we add a precheck to bypass the physical address availability
test.
Fixes: cdc242f260 ("eal/linux: support running as unprivileged user")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
The previous fix for properly handling devices from the same VFIO group
introduced another bug, where the file descriptor of a kernel vfio
group is used as the index for tracking the number of devices of a vfio
group struct handled by the DPDK vfio code. Instead of the file
descriptor itself, the vfio group object that the file descriptor is
registered with has to be used.
This patch introduces specific functions for incrementing or
decrementing the device counter for a specific vfio group using the
vfio file descriptor as a parameter. Note the code is not optimized
as the vfio group is found sequentially going through the vfio group
array but this should not be a problem as this is not related to
packet handling at all.
Fixes: a9c349e3a1 ("vfio: fix device unplug when several devices per group")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
The #ifdef only had the break in the else leg rather than in the first leg,
leading to the value set there being overridden on fall-through.
Fixes: 986ff526fb ("net: add CRC computation API")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
With GCC 7 we need to explicitly document when we are falling through from
one switch case to another.
Fixes: af75078fec ("first public release")
Fixes: 8bae1da2af ("hash: fallback to software CRC32 implementation")
Fixes: 9ec201f5d6 ("mbuf: provide bulk allocation")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Some customers find that adding a MAC address to a VF can sometimes fail,
but it is still stored in dev->data->mac_addrs[ ]. This
can lead to errors in code that assumes a non-zero entry in
dev->data->mac_addrs[ ] is valid.
The following acknowledgements are from the specific NIC PMD
maintainers for the parts they manage.
This patch changes the ethdev internal API; it should not be
backported to a stable/LTS release so far.
Fixes: af75078fec ("first public release")
Signed-off-by: Wei Dai <wei.dai@intel.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Apply the new flag PKT_RX_VLAN_STRIPPED to the software emulation case
(currently only for virtio and af_packet).
Fixes: b37b528d95 ("mbuf: add new Rx flags for stripped VLAN")
Cc: stable@dpdk.org
Signed-off-by: Michał Mirosław <michal.miroslaw@atendesoftware.pl>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
To clean alloc_q, which holds the physical addresses of the mbufs, the KNI
lib freed the pkt_mempool, but this leads to a crash in the KNI unit test.
The KNI library shouldn't free the pkt_mempool.
The implementation is updated to find the mbufs in the alloc_q and return
them back to the mempool.
Fixes: 8eba5ebd18 ("kni: fix possible memory leak")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>