In general, DPDK libraries do not print error messages to stdout,
because stdout is often redirected to /dev/null for daemons.
This patch changes the cfgfile library to use RTE_LOG with its own
log type.
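A minimal sketch of the pattern, assuming the type is registered as
"lib.cfgfile" and wrapped in a local macro (names are illustrative,
not necessarily the patch's exact ones):

    #include <rte_common.h>
    #include <rte_log.h>

    static int cfgfile_logtype;               /* assumed variable name */

    #define CFG_LOG(level, fmt, args...)                        \
            rte_log(RTE_LOG_ ## level, cfgfile_logtype,         \
                    "CFGFILE: " fmt "\n", ## args)

    RTE_INIT(cfgfile_init_log)
    {
            cfgfile_logtype = rte_log_register("lib.cfgfile");
            if (cfgfile_logtype >= 0)
                    rte_log_set_level(cfgfile_logtype, RTE_LOG_INFO);
    }

Errors are then reported as CFG_LOG(ERR, ...), which respects the
application's log stream and level configuration.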
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
There is no need to initialize a variable if it is immediately
overwritten. With modern tools it is better style not to do
unnecessary initialization, since it lets the compiler and other
static checkers detect uninitialized data.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
If the timer subsystem is not initialized before rte_timer_manage()
(for example) is invoked, a pointer to a shared hugepage memory
region will still be NULL and gets dereferenced when it is checked
for validity; handle this case.
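A usage sketch of the required ordering; with this patch the call can
detect the uninitialized state instead of crashing:

    /* the timer subsystem must be set up before rte_timer_manage()
     * or any other API touching the shared timer state */
    rte_eal_init(argc, argv);
    rte_timer_subsystem_init();

    /* ... arm timers ... */

    rte_timer_manage();        /* safe only after subsystem init */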
Fixes: c0749f7096 ("timer: allow management in shared memory")
Cc: stable@dpdk.org
Signed-off-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
The number of workers should never exceed RTE_MAX_LCORE.
RTE_DIST_ALG_SINGLE also requires a check on the number of workers.
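A sketch of the added guard; its exact placement inside
rte_distributor_create() is an assumption:

    /* reject an out-of-range worker count for both the burst and
     * the RTE_DIST_ALG_SINGLE code paths */
    if (num_workers >= RTE_MAX_LCORE) {
            rte_errno = EINVAL;
            return NULL;
    }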
Fixes: 775003ad2f ("distributor: add new burst-capable library")
Cc: stable@dpdk.org
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: David Hunt <david.hunt@intel.com>
The old comment on top of the function rte_eal_has_hugepages() is
outdated and not generic enough.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Within rte_hash_reset(), dequeuing entries one by one from the ring
in a while loop, while not using them at all, wastes cycles.
This patch just flushes the ring by resetting its indices, which
saves CPU cycles.
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Currently, the flush is done by dequeuing the ring in a while loop. It is
much simpler to flush the queue by resetting the head and tail indices.
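A sketch of the O(1) flush; the head/tail fields below are those of
struct rte_ring:

    #include <rte_ring.h>

    /* not safe while other threads are using the ring */
    static void
    ring_flush_sketch(struct rte_ring *r)
    {
            r->prod.head = r->cons.head = 0;
            r->prod.tail = r->cons.tail = 0;
    }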
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Currently PKT_TX_IP_CKSUM is set in mbuf->ol_flags implicitly by the
library during the fragmentation operation. Because of this, the
application is forced to use checksum offload whether or not it is
supported by the platform.
Also, the documentation does not state any expected value of ol_flags
in the returned fragmented mbufs, so the application has no way to
know which offloads are enabled. Transmission may therefore fail on
platforms that do not support checksum offload.
So remove the mentioned flag from the library.
The mentioned change is part of http://patches.dpdk.org/patch/53475.
The change for the reassembly operation has already been accepted;
this patch implements a similar change for the fragmentation
operation.
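With the flag no longer set by the library, the choice is left to the
application. A hedged sketch of what a caller might now do per
returned fragment ('frag' and 'tx_offload_capa' are assumed
variables):

    struct rte_ipv4_hdr *ip =
            rte_pktmbuf_mtod(frag, struct rte_ipv4_hdr *);

    ip->hdr_checksum = 0;
    if (tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
            /* hardware fills in the checksum on transmit */
            frag->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
    else
            /* no offload available: compute it in software */
            ip->hdr_checksum = rte_ipv4_cksum(ip);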
Fixes: e29fc44370 ("ip_frag: remove IP checkum offload flag")
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
In ppc64le, expanding DMA areas always fails because we cannot remove
a DMA window. As a result, we cannot allocate more than one memseg in
ppc64le. This is because vfio_spapr_dma_mem_map() doesn't unmap all
the mapped DMA before removing the window. This patch fixes this
incorrect behavior.
The order of the unregister and unmap ioctls is also fixed: the
unregister ioctl sometimes reports device-busy errors due to the
existence of mapped areas.
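A sketch of the corrected teardown order; the ioctl names come from
the VFIO UAPI, while the surrounding variables are assumptions:

    /* 1) unmap the DMA area first ... */
    struct vfio_iommu_type1_dma_unmap dma_unmap = {
            .argsz = sizeof(dma_unmap),
            .iova = iova,
            .size = len,
    };
    ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap);

    /* 2) ... then unregister the memory, which no longer hits EBUSY */
    struct vfio_iommu_spapr_register_memory reg = {
            .argsz = sizeof(reg),
            .vaddr = (uintptr_t)vaddr,
            .size = len,
    };
    ioctl(container_fd, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, &reg);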
Signed-off-by: Takeshi Yoshimura <tyos@jp.ibm.com>
Acked-by: David Christensen <drc@linux.vnet.ibm.com>
Once library usage is over, it must be deinitialized, which frees
the shared memory reserved during initialization.
An issue was observed while running 'metrics_autotest' continuously
without quitting: the first run of 'metrics_autotest' passes all test
cases, but from the second run onwards the first test case fails
because the metrics library was already initialized during the first
run.
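A usage sketch, assuming the new call is named rte_metrics_deinit():

    #include <rte_metrics.h>

    rte_metrics_init(rte_socket_id());   /* reserves the shared memory */

    /* ... register and update metrics ... */

    rte_metrics_deinit();                /* frees the shared memory so a
                                          * later init starts clean */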
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
va2pa depends on the offset between the physical and virtual
addresses of the current mbuf. It may therefore compute the wrong
physical address for the next mbuf when that mbuf was allocated in
another hugepage segment.
In rte_mempool_populate_default(), the attempt to allocate the whole
block of contiguous memory can fail; memory is then reserved in
several memzones that have different physical/virtual address
offsets. rte_mempool_populate_default() is used by
rte_pktmbuf_pool_create().
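For reference, a sketch of the offset-based translation that goes
wrong across segments (field names from struct rte_mbuf; close to,
but not necessarily identical to, the KNI helper):

    /* pa(m) is derived from m's own va->pa offset; applying that same
     * offset to m->next is only valid when both mbufs sit in the same
     * physically contiguous memzone */
    static void *
    va2pa(struct rte_mbuf *m)
    {
            return (void *)((unsigned long)m -
                            ((unsigned long)m->buf_addr -
                             (unsigned long)m->buf_iova));
    }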
Fixes: 8451269e6d ("kni: remove continuous memory restriction")
Cc: stable@dpdk.org
Signed-off-by: Yangchao Zhou <zhouyates@gmail.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
rte_kni does not follow standard style rules.
Some unnecessary \ line continuations and similar issues were
noticed.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The config create function did not store the mem config address in
the shared memconfig structure, so the secondary processes couldn't
map it at the required address.
Fixes: b149a70642 ("eal/freebsd: add config reattach in secondary process")
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The commit db90b4969e ("vfio: retry creating sPAPR DMA window")
introduced a build breakage on old Linux. Linux < 4.2 does not define
ddw in struct vfio_iommu_spapr_tce_info. Without ddw, we cannot
change the window size and so should give up on creating the window.
The retry code is simply excluded when ddw is not supported.
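A sketch of the guard, assuming the UAPI macro
VFIO_IOMMU_SPAPR_INFO_DDW is used to detect ddw support at build
time:

    #ifdef VFIO_IOMMU_SPAPR_INFO_DDW
            /* ddw available: shrink the requested window size and
             * retry the VFIO_IOMMU_SPAPR_TCE_CREATE ioctl */
    #else
            /* Linux < 4.2: no ddw information, give up on creation */
    #endif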
Fixes: db90b4969e ("vfio: retry creating sPAPR DMA window")
Signed-off-by: Takeshi Yoshimura <tyos@jp.ibm.com>
Tested-by: Anatoly Burakov <anatoly.burakov@intel.com>
This patch fixes the out-of-bounds Coverity issue by removing the
offending line of code at line 107 of rte_flow_classify_parse.c,
which is never executed.
Coverity issue: 343454
Fixes: be41ac2a33 ("flow_classify: introduce flow classify library")
Cc: stable@dpdk.org
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Currently, when an fbarray is destroyed, the fbarray structure is
not zeroed out, which leaves stale data behind and confuses secondary
process init in legacy mem mode. Fix it by always memsetting the
fbarray to zero when destroying it.
Fixes: 5b61c62cfd ("fbarray: add internal tailq for mapped areas")
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Populating the eventfd in rte_intr_enable in each request to vfio
triggers a reconfiguration of the interrupt handler on the kernel side.
The problem is that rte_intr_enable is often used to re-enable masked
interrupts from drivers' interrupt handlers.
This reconfiguration leaves a window during which a device could send
an interrupt and then the kernel logs this (unsolicited from the kernel
point of view) interrupt:
[158764.159833] do_IRQ: 9.34 No irq handler for vector
The VFIO API makes it possible to set the fd at setup time.
Make use of this; then we only need to ask for masking/unmasking of
legacy interrupts, and there is nothing to do for MSI/MSI-X.
"rxtx" interrupts are left untouched but are most likely subject to the
same issue.
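A sketch of the two steps, using the VFIO UAPI names (the
vfio_irq_set buffer handling is simplified):

    /* at setup time: attach the eventfd once via ACTION_TRIGGER */
    irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
                     VFIO_IRQ_SET_ACTION_TRIGGER;
    *(int *)&irq_set->data = efd;
    ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);

    /* later, re-enabling a masked legacy interrupt is just an unmask,
     * leaving the eventfd (and the kernel IRQ handler) untouched */
    irq_set->flags = VFIO_IRQ_SET_DATA_NONE |
                     VFIO_IRQ_SET_ACTION_UNMASK;
    ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);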
Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1654824
Fixes: 5c782b3928 ("vfio: interrupts")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Tested-by: Shahed Shaikh <shshaikh@marvell.com>
Now that there is a version of ether_aton in rte_ether, it can
be used by the cmdline ethernet address parser.
Note: ether_aton_r cannot be used in cmdline because the old code
accepted either bytes (XX:XX:XX:XX:XX:XX) or words (XXXX:XXXX:XXXX),
and we need to keep compatibility.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Using bit operations such as OR and XOR is faster than a loop on all
architectures; this is really just explicit unrolling.
A similar cast to unaligned uint16 is already done in other functions
here.
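Roughly the resulting comparison (a sketch; the actual helper and
types may differ slightly):

    static inline int
    same_ether_addr(const struct rte_ether_addr *ea1,
                    const struct rte_ether_addr *ea2)
    {
            const unaligned_uint16_t *w1 = (const unaligned_uint16_t *)ea1;
            const unaligned_uint16_t *w2 = (const unaligned_uint16_t *)ea2;

            /* three 16-bit XORs OR'ed together instead of a byte loop */
            return ((w1[0] ^ w2[0]) | (w1[1] ^ w2[1]) |
                    (w1[2] ^ w2[2])) == 0;
    }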
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Use rte_eth_unformat_addr, so that ethdev can be built and work
without the cmdline library. The dependency on cmdline was
an arrangement of convenience anyway.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Make a function that can be used in place of ether_aton_r to convert
a string to an rte_ether_addr. This function allows both byte
(xx:xx:xx:xx:xx:xx) and word (XXXX:XXXX:XXXX) formats and has the
same lack of error handling as the original.
This also allows ethdev to no longer have a hard dependency
on the cmdline library.
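A usage sketch, assuming the new helper is named
rte_ether_unformat_addr():

    struct rte_ether_addr mac;

    /* byte format */
    rte_ether_unformat_addr("12:34:56:78:9A:BC", &mac);
    /* word format, kept for compatibility */
    rte_ether_unformat_addr("1234:5678:9ABC", &mac);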
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Formatting an Ethernet address and getting a random value are not in
the critical path, so they should not be inlined.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Rami Rosen <ramirose@gmail.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Add new rte_flow_item_gre_key in order to match the optional key field.
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Having this info logged by default has proved useful when analysing
bug reports.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
When a hash entry is added, there are 2 sets of stores.
1) The application writes its data to memory (whose address
is provided in rte_hash_add_key_with_hash_data API (or NULL))
2) The rte_hash library writes to its own internal data structures;
key store entry and the hash table.
The only ordering requirement between these 2 is that the store
to the application data must complete before the store to key_index.
There are no ordering requirements between the stores to
key/signature and store to application data. The synchronization
point for application data can be any point between the 'store to
application data' and 'store to the key_index'. So, 'pdata' should not
be a guard variable for the data in hash table. It should be a guard
variable only for the application data written to the memory location
pointed by 'pdata'. Hence, in the lookup functions, 'pdata' can be
loaded after full key comparison succeeds.
The synchronization point for the application data (store-release
to 'pdata' in key store) is changed to be consistent with the order
of loads in lookup function. However, this change is cosmetic and
does not affect the functionality.
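A condensed sketch of the resulting order; the names follow the
rte_cuckoo_hash internals and are simplified:

    /* writer: release the application data via 'pdata' before the
     * key_idx store that publishes the entry */
    __atomic_store_n(&k->pdata, data, __ATOMIC_RELEASE);
    __atomic_store_n(&bkt->key_idx[i], new_idx, __ATOMIC_RELEASE);

    /* reader: 'pdata' guards only the application data, so it can be
     * loaded after the full key comparison succeeds */
    if (rte_hash_cmp_eq(key, k->key, h) == 0)
            *data_out = __atomic_load_n(&k->pdata, __ATOMIC_ACQUIRE);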
Fixes: e605a1d36 ("hash: add lock-free r/w concurrency")
Cc: stable@dpdk.org
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Tested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Relaxed signature comparison is done first. Further ordered loads
are done only if the signature matches. Any false positives are
caught by the full key comparison. This provides performance
benefits as load-acquire is executed only when required.
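A sketch of the lookup fast path after the change (bucket field names
from rte_cuckoo_hash):

    /* plain (relaxed) signature compare first */
    if (bkt->sig_current[i] == sig) {
            /* only a signature match pays for the load-acquire */
            uint32_t key_idx = __atomic_load_n(&bkt->key_idx[i],
                                               __ATOMIC_ACQUIRE);
            /* ... full key compare and data load follow; false
             * positives are rejected there ... */
    }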
Fixes: e605a1d36 ("hash: add lock-free r/w concurrency")
Cc: stable@dpdk.org
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Tested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
The functions rte_service_may_be_active(), rte_service_lcore_attr_get(),
and rte_service_attr_reset_all() were introduced nearly a year ago in DPDK
18.08. They can be considered non-experimental for the 19.08 release.
rte_service_may_be_active() is used by the sw PMD, and this commit allows
it to not need any experimental API.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Currently PKT_TX_IP_CKSUM is set in mbuf->ol_flags implicitly during
the fragmentation and reassembly operations. Because of this, the
application is forced to use checksum offload whether or not it is
supported by the platform.
Also, the documentation does not state any expected value of ol_flags
in the returned mbuf (reassembled or fragmented), so the application
has no way to know which offloads are enabled. Transmission may
therefore fail on platforms that do not support checksum offload.
Also, IPv6 has no checksum field in its header, so setting
PKT_TX_IP_CKSUM in mbuf->ol_flags is invalid in itself.
So remove the mentioned flag from the library.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
If there are multiple threads contending, they all attempt to take the
spinlock lock at the same time once it is released. This results in a
huge amount of processor bus traffic, which is a huge performance
killer. Thus, if we somehow order the lock-takers so that they know who
is next in line for the resource we can vastly reduce the amount of bus
traffic.
This patch adds an MCS lock library. It provides scalability by
spinning on a CPU/thread-local variable, which avoids expensive cache
bouncing.
It provides fairness by maintaining a list of acquirers and passing
the lock to each CPU/thread in the order in which they acquired it.
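A usage sketch with the new API; each thread supplies its own queue
node:

    #include <rte_mcslock.h>

    static rte_mcslock_t *p_ml;        /* shared lock pointer, NULL = free */

    static void
    do_work(void)
    {
            rte_mcslock_t node;        /* local node this thread spins on */

            rte_mcslock_lock(&p_ml, &node);
            /* ... critical section ... */
            rte_mcslock_unlock(&p_ml, &node);  /* hands the lock to the
                                                * next waiter in order */
    }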
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
sPAPR only allows the page_shift values reported by the
VFIO_IOMMU_SPAPR_TCE_GET_INFO ioctl. However, Linux 4.17 and earlier
return an incorrect page_shift for Power9. Code is added to retry
creation of the sPAPR DMA window.
Signed-off-by: Takeshi Yoshimura <tyos@jp.ibm.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Add support for RFC 4301 (5.1.2): update the Type of Service field
(IPv4) and Traffic Class field (IPv6) bits for outbound and inbound
cases, which deals with the update of the DSCP/ECN bits inside each
of those fields.
Signed-off-by: Marko Kovacevic <marko.kovacevic@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Extension to BBDEV operations to support 5G
on top of existing 4G operations.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Rename the enums and structures which were LTE-specific to allow for
extension and support of 5GNR operations.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Acked-by: Amr Mokhtar <amr.mokhtar@intel.com>
Some PMDs can only support the digest being encrypted separately in
auth-cipher operations. Thus a feature flag is required in the PMD to
reflect whether it supports digest-appended operations both ways:
digest generation with encryption and decryption with digest
verification.
This patch also adds information about the new feature flag to the
release notes.
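A usage sketch, assuming the new flag is named
RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED:

    struct rte_cryptodev_info info;

    rte_cryptodev_info_get(dev_id, &info);
    if (!(info.feature_flags & RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED)) {
            /* device cannot handle an encrypted (appended) digest:
             * keep the digest outside the ciphered region or pick
             * another device */
    }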
Signed-off-by: Damian Nowak <damianx.nowak@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
This patch explains the conditions for using an appended digest in
auth-cipher operations and how to use it.
Signed-off-by: Damian Nowak <damianx.nowak@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
When a cryptodev is created in a primary process,
rte_cryptodev_data_alloc reserves a memzone.
However, this memzone was not released when the cryptodev was
uninitialized. After that, a new cryptodev could not be created due
to a memzone name conflict.
This commit frees the memzone when a cryptodev is
uninitialized, fixing this bug. This approach is chosen
instead of keeping and reusing the old memzone, because
the new cryptodev could belong to a different NUMA socket.
Also, rte_cryptodev_data pointer is now properly recorded
in cryptodev_globals.data array.
Bugzilla ID: 105
Signed-off-by: Junxiao Shi <git@mail1.yoursunny.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
When ESN is used, the high-order 32 bits are included in the ICV
calculation but are not transmitted. Update the packet length to be
consistent with the auth data offset and length before the crypto
operation. The high-order 32 bits of the ESN will be removed from the
packet length in crypto post-processing.
Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Add support for packets that consist of multiple segments.
Take into account that the trailer bytes (padding, ESP tail, ICV) can
span multiple segments.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Reconstructing the IPv6 header after encryption or decryption
requires updating the 'next header' value in the preceding protocol
header, which is determined by parsing the IPv6 header and
iteratively looking for the next IPv6 header extension.
It is required that 'l3_len' in the mbuf metadata contains the total
length of the IPv6 header, including header extensions, up to the ESP
header.
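A sketch of the metadata contract ('ext_len' is an assumed variable
holding the total extension-header length):

    /* l3_len must span the base IPv6 header plus every extension
     * header preceding ESP */
    m->l2_len = sizeof(struct rte_ether_hdr);
    m->l3_len = sizeof(struct rte_ipv6_hdr) + ext_len;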
Fixes: 4d7ea3e145 ("ipsec: implement SA data-path API")
Cc: stable@dpdk.org
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Introduce new function for IPv6 header extension parsing able to
determine extension length and next protocol number.
This function is helpful when implementing IPv6 header traversing.
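A usage sketch, assuming the new function is rte_ipv6_get_next_ext();
it walks the extension chain until ESP is reached:

    size_t ext_len = 0;
    const uint8_t *p = (const uint8_t *)(ip6_hdr + 1);
    int proto = ip6_hdr->proto;

    while (proto != IPPROTO_ESP) {
            proto = rte_ipv6_get_next_ext(p, proto, &ext_len);
            if (proto < 0)
                    break;           /* not an extension header */
            p += ext_len;
    }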
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Tested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Adding a new field, ff_disable, to allow applications to control the
features enabled on the crypto device. This would allow for efficient
usage of HW/SW offloads.
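A usage sketch; the flags chosen for ff_disable are just examples:

    struct rte_cryptodev_config conf = {
            .socket_id = rte_socket_id(),
            .nb_queue_pairs = 1,
            /* ask the PMD to disable features this app will not use */
            .ff_disable = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
                          RTE_CRYPTODEV_FF_SECURITY,
    };

    rte_cryptodev_configure(dev_id, &conf);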
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
This patch adds an option to support both IV (of all supported sizes)
and J0 when using Galois Counter Mode of crypto operation.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Define IPv4 minimum IHL and VHL according to RFC 791 (see [1]):
"The Version field indicates the format of the
internet header."
"Internet Header Length (ihl) is the length of the
internet header in 32 bit words, and thus points
to the beginning of the data. Note that
the minimum value for a correct header is 5."
[1] https://tools.ietf.org/html/rfc791
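A sketch of the resulting values (macro names are illustrative; the
encoding follows RFC 791):

    #define IPV4_MIN_IHL   0x5                        /* 5 x 32-bit words */
    #define IPV4_VHL_DEF   ((4 << 4) | IPV4_MIN_IHL)  /* version 4, IHL 5
                                                       * -> 0x45 */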
Signed-off-by: Saleh Alsouqi <salehals@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Enable missing support for runtime configuration (setting/getting)
of QinQ strip rx offload for a given ethdev.
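A runtime usage sketch; the QinQ mask bit name is assumed to follow
the existing VLAN offload bits:

    int offload = rte_eth_dev_get_vlan_offload(port_id);

    offload |= ETH_QINQ_STRIP_OFFLOAD;          /* enable QinQ strip */
    rte_eth_dev_set_vlan_offload(port_id, offload);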
Signed-off-by: Vivek Sharma <viveksharma@marvell.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
In current implementation, an action which requires parameters
must accept them enclosed in a structure.
Some actions require a single, trivial type parameter, but it still
must be enclosed in a structure.
This obligation results in multiple, action-specific structures, each
containing a single trivial type parameter.
This patch introduces a new approach, allowing an action configuration
object of any type, trivial or a structure.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Add actions:
- INC_TCP_SEQ - Increase sequence number in the outermost TCP header.
- DEC_TCP_SEQ - Decrease sequence number in the outermost TCP header.
- INC_TCP_ACK - Increase acknowledgment number in the outermost TCP
header.
- DEC_TCP_ACK - Decrease acknowledgment number in the outermost TCP
header.
Original work by Xiaoyu Min.
This patch uses the new approach introduced by [1], using a simple
integer instead of using an action-specific structure for each of
the new actions.
[1] http://patches.dpdk.org/patch/55882/
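A usage sketch: the action configuration is a plain big-endian 32-bit
value rather than an action-specific structure:

    rte_be32_t one = RTE_BE32(1);

    struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_INC_TCP_ACK, .conf = &one },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };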
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
If the PCI Ethernet device driver removes the ethdev port on close
(RTE_ETH_DEV_CLOSE_REMOVE) and the PCI device itself is later
unplugged, the unplug should not fail just because the Ethernet
device has already been removed.
Fixes: 23ea57a2a0 ("ethdev: complete closing of port")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reported-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>