The hardware flib code is updated as per the latest set of APIs and macros.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
The full digest size of GCM/GMAC algorithms is 16 bytes.
However, it is sometimes truncated to a smaller size (such as in IPSec).
This commit allows a user to generate a digest of any size
up to the full size.
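As a hedged illustration of the user-facing side (not part of this patch; the key data and IV offset below are placeholders), a truncated digest is requested through the standard AEAD transform by setting digest_length to the desired value:

    /* Request a 12-byte (truncated) AES-GCM digest; any value up to the
     * full 16 bytes can be used. Key and IV offset are placeholders. */
    static uint8_t key_data[16];
    struct rte_crypto_sym_xform xform = {
            .type = RTE_CRYPTO_SYM_XFORM_AEAD,
            .aead = {
                    .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
                    .algo = RTE_CRYPTO_AEAD_AES_GCM,
                    .key = { .data = key_data, .length = sizeof(key_data) },
                    .iv = { .offset = 16, .length = 12 }, /* app-specific offset */
                    .digest_length = 12,
                    .aad_length = 0,
            },
    };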
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
When IV size is 12, padding to 16 bytes is required
and the LSB must be set to 1, according to the spec.
However, the Multi-buffer library is already doing this,
so it is not necessary to do it in the PMD.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
Add support for SHAx-HMAC key sizes larger than the block size.
For these sizes, the input key is digested with the non-HMAC
version of the algorithm and used as the key.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
The full digest size of CMAC algorithm is 16 bytes.
However, it is sometimes truncated to a smaller size (such as in IPSec).
This commit allows a user to generate a digest of any size
up to the full size.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
The truncated digest size for AES-CMAC is 12 and not 16,
as the Multi-buffer library can output both 12 and 16 bytes.
Fixes: 6491dbbece ("crypto/aesni_mb: support AES CMAC")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
When creating a crypto session, check if
the requested digest size is supported for
AES-XCBC-MAC and AES-CCM.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
HMAC algorithms (MD5 and SHAx) have different full digest sizes.
However, they are often truncated to a smaller size (such as in IPSec).
This commit allows a user to generate a digest of any size
up to the full size.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Marko Kovacevic <marko.kovacevic@intel.com>
In order to process crypto operations in the AESNI MB PMD,
they need to be sent to the buffer manager of the Multi-buffer library,
through the "job" structure.
Currently, the PMD checks if there are outstanding operations to process
in the ring before getting a new job. However, if there are no available
jobs in the manager, a flush operation needs to take place, freeing some
of the jobs, so one can be used for the outstanding operation.
In order to avoid leaving the dequeued operation unprocessed,
the maximum number of operations that can be flushed is the remaining
operations to return, which is the maximum number of operations that can
be returned minus the number of operations ready to be returned
(nb_ops - processed_jobs), minus 1 (for the new operation).
The problem comes when (nb_ops - processed_jobs) is 1 (last operation to
dequeue). In that case, flush_mb_mgr is called with a maximum number of
operations equal to 0, which is wrong, causing a potential overrun in the
"ops" array.
Besides, the operation dequeued from the ring will be leaked, as no more
operations can be returned.
The solution is to first check if there are jobs available in the manager.
If there are none, the flush operation is called, and if enough operations
are returned from the manager, then no more outstanding operations get
dequeued from the ring, avoiding both the memory leak and the array
overrun.
If there are enough jobs, the PMD tries to dequeue an operation from the ring.
If there are no operations in the ring, the new job pointer is not used,
and it will be used in the next get_next_job call, so no memory leak
happens.
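A minimal sketch of the fixed flow described above (function and structure names approximate the PMD and Multi-buffer library code, not the exact implementation):

    /* Check job availability first; only touch the ring when a job exists. */
    while (processed_jobs < nb_ops) {
            job = get_next_job(mb_mgr);                 /* illustrative name */
            if (job == NULL) {
                    /* No free job: flush and return what completes. */
                    processed_jobs += flush_mb_mgr(qp, &ops[processed_jobs],
                                                   nb_ops - processed_jobs);
                    if (processed_jobs == nb_ops)
                            break;                      /* no room left */
                    job = get_next_job(mb_mgr);
            }
            if (rte_ring_dequeue(qp->ingress_queue, (void **)&op) != 0)
                    break;  /* unused job is reused on the next call */
            /* fill the job from 'op', submit it, collect completed ops ... */
    }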
Fixes: 0f548b50a1 ("crypto/aesni_mb: process crypto op on dequeue")
Cc: stable@dpdk.org
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When a bonding port is stopped, also stop and deactivate all slaves.
Otherwise, slaves will still be listed as active.
Fixes: 2efb58cbab ("bond: new link bonding library")
Cc: stable@dpdk.org
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The virtio features VIRTIO_NET_F_CSUM, VIRTIO_NET_F_HOST_TSO4
and VIRTIO_NET_F_HOST_TSO6 are supported by the virtio PMD.
However, they are missing from the supported feature set. Since
commit 4174a7b59d ("net/virtio: improve Tx offload features negotiation"),
the virtio PMD announces the Tx offloading capabilities based
on the features read from the device, and virtio-user won't
report the features which are not in the virtio PMD's supported
feature set. So since that commit, virtio-user won't announce
the DEV_TX_OFFLOAD_UDP_CKSUM, DEV_TX_OFFLOAD_TCP_CKSUM and
DEV_TX_OFFLOAD_TCP_TSO offloading capabilities even if the
vhost backend supports them.
This patch adds these missing features, and virtio-user will
report them if the backend supports them.
Fixes: 142678d429 ("net/virtio-user: fix wrongly get/set features")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The multiple queue support in vhost-kernel is broken because
dev->vhostfd is only available for vhost-user. We should
always try to enable queue pairs when not in server mode.
Fixes: 201a416517 ("net/virtio-user: fix multiple queues fail in server mode")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The patch introduces scatter/gather support on the transmit path.
A separate Tx callback is added and set if the application
requests the multi-segment Tx offload. Multiple descriptors are
sent per packet.
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Yelena Krivosheev <yelena@marvell.com>
This patch introduces necessary changes required by MUSDK 18.09 library.
* As of MUSDK 18.09, pp2_cookie_t is no longer available. Now the
RX descriptor cookie is defined as a plain u64, so the existing cast
is no longer valid.
* MUSDK 18.09 increased the number of available bpools (hardware buffer
pools) by introducing DMA region support. Update the mvpp2 driver accordingly.
* replace MV_NET_IP4_F_TOS with MV_NET_IP4_F_DSCP
Before this patch, the API allowed configuring a classification rule
according to IPv4 TOS, which was not supported by the classifier. This patch
fixes that by using the proper field.
* use 48-bit address mask
Pointers cannot exceed 48 bits, thus using a 48-bit
mask for extracting the higher IOVA address bits is enough.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Signed-off-by: Yuval Caduri <cyuval@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Reviewed-by: Shlomi Gridish <sgridish@marvell.com>
Reviewed-by: Alan Winkowski <walan@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
This commit updates MTU and MRU related calculations.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Yelena Krivosheev <yelena@marvell.com>
Reviewed-by: Dmitri Epshtein <dima@marvell.com>
Functional change:
Open receive cls/qos related features only if the
config file contains an rx_related configuration entry.
This allows configuring tx_related entries without unintentionally
opening rx cls/qos.
Code:
'use_global_defaults' is set to '1' by default.
Only if an rx_related entry was configured is it updated to '0'.
rx cls/qos is performed only if 'use_global_defaults' is '0'.
Default TC configuration is now only mandatory when
'use_global_defaults' is '0'.
Signed-off-by: Yuval Caduri <cyuval@marvell.com>
Reviewed-by: Natalie Samsonov <nsamsono@marvell.com>
Tested-by: Natalie Samsonov <nsamsono@marvell.com>
Add init and deinit functionality to the flow implementation.
Init puts the structures used by flow into a sane state.
Deinit deallocates all resources used by flow.
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Reviewed-by: Shlomi Gridish <sgridish@marvell.com>
Change QoS configuration file syntax for port's default policer
setup.
Since the default policer configuration is performed before
any other policer configuration, we can pick a default id.
This simplifies the default policer configuration since the user
no longer has to choose ids from the range [0, PP2_CLS_PLCR_NUM].
Explicitly document values for rate_limit_enable field.
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Clean up sources by moving common code to the PMD
header file.
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
This changes stop/start/configure behavior due to an issue in the MUSDK
library itself. From now on, the ppio can be reconfigured only after the
interface is closed.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Yuval Caduri <cyuval@marvell.com>
Descriptor limits are used by applications to take
optimal decisions on queue sizing.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Chas Williams <chas3@att.com>
The default Rx/Tx configuration has become a helpful
resource for applications relying on the optimal
values to build their rte_eth_rxconf and rte_eth_txconf
structures. These structures can then be tweaked.
The default configuration is also used internally by
the rte_eth_rx_queue_setup and rte_eth_tx_queue_setup
API calls when a NULL pointer is passed by callers
as the argument for custom queue configuration.
The use cases of the bonding driver may also benefit
from exercising default settings in the same way.
Restructure the code to collect the various settings
from slave ports, make it possible to combine the
default Rx/Tx configuration of these devices, and
report it to the callers of rte_eth_dev_info_get.
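For reference, a hedged sketch of how an application consumes these defaults via the public ethdev API (the port id, descriptor count and mempool are placeholders):

    struct rte_eth_dev_info dev_info;
    struct rte_eth_rxconf rxconf;

    rte_eth_dev_info_get(bond_port_id, &dev_info);
    rxconf = dev_info.default_rxconf;      /* tweak fields if needed */
    rte_eth_rx_queue_setup(bond_port_id, 0, 512, rte_socket_id(),
                           &rxconf, mb_pool);
    /* Or rely on the defaults entirely by passing NULL: */
    rte_eth_rx_queue_setup(bond_port_id, 1, 512, rte_socket_id(),
                           NULL, mb_pool);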
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Chas Williams <chas3@att.com>
Scatter Rx is already supported by CXGBE PMD. So, add the missing
DEV_RX_OFFLOAD_SCATTER flag to the list of supported Rx offload
features.
Also, move the macros for supported list of offload features to
header file.
Fixes: 436125e641 ("net/cxgbe: update to Rx/Tx offload API")
Cc: stable@dpdk.org
Reported-by: Martin Weiser <martin.weiser@allegro-packets.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
In efx_mcdi_phy_module_get_info() probe the
transceiver identification byte rather than assume
the module matches the fixed port type. This
supports scenarios such as an SFP mounted in a QSFP
port via a QSA module.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add a function which makes an MCDI GET_LINK request and
packages up the results. Currently, the get-link function
is triggered from several entry points which then pass
on or store selected parts of the data. When the driver
needs to obtain the current link state, it is more
efficient to do this in a single call.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Previously only some of the code was guarded by EFSYS_OPT_RX_SCALE, which
caused a build error when EFSYS_OPT_RX_SCALE is 0 (e.g. in manftest).
Signed-off-by: Tom Millington <tmillington@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Limit the port mode bandwidth calculations by the maximum
reported link speed. This distinguishes 25G from 10G cards,
and 100G port modes from 40G.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Change the interface to ef10_nic_get_port_mode_bandwidth()
so more NIC information can be used to infer bandwidth
requirements. The Huntington calculations are separated out
completely.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add cases for the new port modes supported by X2 NICs.
Lane bandwidth is calculated for pre-X2 cards so is an
underestimate for X2 in 25G/100G modes.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
From Medford onwards, the newer constants enumerating
port modes should be used.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Adjust data types in interface to permit the complete
module information buffer to be obtained in a single
call.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Rearrange so the valid addresses are visible to the caller.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Adjust bounds so the interface supports reading
the last available byte of data.
Fixes: 19b64c6ac3 ("net/sfc/base: import libefx base")
Cc: stable@dpdk.org
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
When the system runs out of buffers temporarily, the logs
further slow down the system. There is no need for these
continuous logs.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
A few fields in compat are giving redefinition errors
with new drivers such as caam_jr.
Checks have been added.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
This is to avoid the checks in the datapath and help performance.
LS1046 has different data stash settings.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
This patch adds support in the bus driver for qbman to support
and configure portal-based FDs, which can be used for interrupt-based
processing.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The link speed shall be determined on the basis of the MAC type.
Fixes: 799db4568c ("net/dpaa: support device info and speed capability")
Cc: shreyansh.jain@nxp.com
Cc: stable@dpdk.org
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Set the missing dev data MTU to the correct size.
Set the maximum size supported in hardware if the user asks for more.
Fixes: 9658ac3a4e ("net/dpaa: set the correct frame size in device MTU")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The current code has a hardcoded sequence for FQ allocation.
It requires multiple changes when some of the interfaces
are assigned to the kernel stack. Changing it to be based on the
MAC id provides the flexibility to assign any interface
to the kernel.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Some PMDs, especially ones with vector receives, require a minimum number
of receive buffers in order to receive any packets. If the first slave
read leaves less than this number available, a read from the next slave
may return 0 implying that the slave doesn't have any packets which
results in skipping over that slave as the next active slave.
To fix this, implement round robin for the slaves during receive that
is only advanced to the next slave at the end of each receive burst.
This also provides some additional fairness in processing in the
other bonding RX burst routines.
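A hedged sketch of the round robin idea (names are illustrative, not the exact bonding code):

    /* Start each burst at the remembered slave; a zero-packet read from one
     * slave no longer causes it to be skipped on subsequent bursts. */
    uint16_t idx = internals->active_slave;
    uint16_t nb_rx = 0;
    for (i = 0; i < num_slaves && nb_rx < nb_pkts; i++) {
            nb_rx += rte_eth_rx_burst(slaves[idx], queue_id,
                                      bufs + nb_rx, nb_pkts - nb_rx);
            idx = (idx + 1) % num_slaves;
    }
    /* Advance the starting slave only once, at the end of the burst. */
    internals->active_slave = (internals->active_slave + 1) % num_slaves;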
Fixes: 2efb58cbab ("bond: new link bonding library")
Cc: stable@dpdk.org
Signed-off-by: Chas Williams <chas3@att.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Matan Azrad <matan@mellanox.com>
The AVF driver was missing $(WERROR_FLAGS) in its cflags, which means
that a number of compilation errors were getting missed. This patch adds
in the flag and fixes most of the errors, just disabling the
strict-aliasing ones.
Fixes: 22b123a36d ("net/avf: initialize PMD")
Fixes: 69dd4c3d08 ("net/avf: enable queue and device")
Fixes: a2b29a7733 ("net/avf: enable basic Rx Tx")
Fixes: 319c421f38 ("net/avf: enable SSE Rx Tx")
CC: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Compiling with all warnings turned on causes errors about unused variables
and an unused label. Remove these to allow building without having to
disable those warnings.
Fixes: 69dd4c3d08 ("net/avf: enable queue and device")
Fixes: 3fd7a3719c ("net/avf: enable ops for MTU setting")
Fixes: d6bde6b5ea ("net/avf: enable Rx interrupt")
Fixes: 22b123a36d ("net/avf: initialize PMD")
Fixes: 319c421f38 ("net/avf: enable SSE Rx Tx")
Fixes: a2b29a7733 ("net/avf: enable basic Rx Tx")
Fixes: 1060591ead ("net/avf: enable bulk allocate Rx")
CC: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Add support of imissed and q_errors statistics, reported by PCIE_QPRDC
register (see datasheet, section 11.27.2.60), which exposes the number
of receive packets dropped for a queue.
Signed-off-by: Julien Meunier <julien.meunier@nokia.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
In the former API, ETH_TXQ_FLAGS_NOMULTSEGS was merely a hint indicating
that the application will never send multi-segment packets, allowing
the PMD to choose different Tx methods accordingly.
In the new API, DEV_TX_OFFLOAD_MULTI_SEGS became an offload capability
that is advertised by PMDs; some of them do not advertise it and
expect to never receive fragmented packets (octeontx, axgbe).
So an ethdev that supports multi-segment packets should properly
advertise it.
The problem was spotted and tested on e1000, and should also be present
in the ixgbe_vf representor.
Fixes: cf80ba6e20 ("net/ixgbe: add support for representor ports")
Cc: stable@dpdk.org
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
In the former API, ETH_TXQ_FLAGS_NOMULTSEGS was merely a hint indicating
that the application will never send multi-segment packets, allowing
the PMD to choose different Tx methods accordingly.
In the new API, DEV_TX_OFFLOAD_MULTI_SEGS became an offload capability
that is advertised by PMDs; some of them do not advertise it and
expect to never receive fragmented packets (octeontx, axgbe).
So an ethdev that supports multi-segment packets should properly
advertise it.
The problem was spotted and tested on e1000, and should also be present
in the i40e_vf representor.
Fixes: e0cb96204b ("net/i40e: add support for representor ports")
Cc: stable@dpdk.org
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
In the former API, ETH_TXQ_FLAGS_NOMULTSEGS was merely a hint indicating
that the application will never send multi-segment packets, allowing
the PMD to choose different Tx methods accordingly.
In the new API, DEV_TX_OFFLOAD_MULTI_SEGS became an offload capability
that is advertised by PMDs; some of them do not advertise it and
expect to never receive fragmented packets (octeontx, axgbe).
So an ethdev that supports multi-segment packets should properly
advertise it.
The problem was spotted and tested on e1000, and should also be present
in fm10k.
Fixes: 30f3ce999e ("net/fm10k: convert to new Tx offloads API")
Cc: stable@dpdk.org
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
In the former API, ETH_TXQ_FLAGS_NOMULTSEGS was merely a hint indicating
that the application will never send multi-segment packets, allowing
the PMD to choose different Tx methods accordingly.
In the new API, DEV_TX_OFFLOAD_MULTI_SEGS became an offload capability
that is advertised by PMDs; some of them do not advertise it and
expect to never receive fragmented packets (octeontx, axgbe).
So an ethdev that supports multi-segment packets should properly
advertise it.
Fixes: e5c05e6590 ("net/e1000: convert to new Tx offloads API")
Cc: stable@dpdk.org
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
There are two functions that call sfc_tx_qfini():
sfc_tx_fini_queues() and sfc_tx_queue_release(). But only
sfc_tx_queue_release() sets the tx_queues pointer of the device data to NULL.
This may lead to a scenario in which a queue is destroyed by
sfc_tx_fini_queues() and afterwards the queue is attempted to be destroyed
again by sfc_tx_queue_release().
Move NULL assignment to sfc_tx_qfini().
Fixes: b1b7ad933b ("net/sfc: set up and release Tx queues")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
There are two functions that call sfc_rx_qfini():
sfc_rx_fini_queues() and sfc_rx_queue_release(). But only
sfc_rx_queue_release() sets the rx_queues pointer of the device data to NULL.
This may lead to a scenario in which a queue is destroyed by
sfc_rx_fini_queues() and afterwards the queue is attempted to be destroyed
again by sfc_rx_queue_release().
Move NULL assignment to sfc_rx_qfini().
Fixes: ce35b05c63 ("net/sfc: implement Rx queue setup release operations")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
This statistic should include 64byte and smaller frames.
Fix EF10 calculation to match Siena code.
Fixes: 8c7c723dfe ("net/sfc/base: import MAC statistics")
Cc: stable@dpdk.org
Signed-off-by: Andy Moreton <amoreton@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The capability bits to request FEC modes are implicitly valid
when the corresponding FEC mode is a supported capability.
Drivers expect that it is only valid to advertise those
capabilities explicitly marked as supported. The capabilities
reported by firmware are modified with the implicit capabilities
to present the explicit model to drivers.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Client drivers may use either legacy flags, for example,
EFX_RX_HASH_TCPIPV4, or generalised flags, for example,
EFX_RX_HASH(IPV4_TCP, 4TUPLE), to configure RSS hash.
Libefx is able to recognise which scheme is used.
Legacy flags may be consumed directly by a chip-specific handler to
configure the NIC, that is, on EF10, these flags can be used to fill
in legacy RSS mode field in MCDI request. Generalised flags can also
be directly used in EF10-specific handler as they are fully compatible
with additional fields of the same MCDI request.
Legacy flags undergo conversion to generalised flags before they
are consumed by a chip-specific handler. This conversion is used to
make sure that chip-specific handlers expect only generalised flags
in the input for the sake of clarity of the code.
Depending on firmware capabilities, a chip-specific handler either
supplies the input to the NIC directly, for example,
EFX_RX_HASH(IPV4_TCP, 4TUPLE) flag will enable 4 bits in
RSS_CONTEXT_SET_FLAGS_IN_TCP_IPV4_RSS_MODE field on EF10, or takes
the opportunity to translate the input to enable bits which don't map
to the generic flag, like setting
RSS_CONTEXT_SET_FLAGS_IN_TOEPLITZ_TCPV4_EN on EF10 when the firmware
claims no support for additional modes.
However, this approach has introduced a severe problem which can be
reproduced with ultra-low-latency firmware variant. In order to enable
IP hash, EF10-specific handler requires the user to request 2-tuple
hash for IP-other, TCP and UDP traffic classes, unconditionally.
For example, the IPv4 hash can be enabled using the following input:
EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
EFX_RX_HASH(IPV4, 2TUPLE).
At the same time, on ultra-low-latency firmware, the common code will
never report support for any UDP tuple to the client driver. That is,
in the same example, the driver will use EFX_RX_HASH(IPV4_TCP, 2TUPLE) |
EFX_RX_HASH(IPV4, 2TUPLE). This input will not be recognised by
EF10-specific handler, and RSS_CONTEXT_SET_FLAGS_IN_TOEPLITZ_IPV4_EN
bit will not be set in the MCDI request.
In order to solve the problem, the patch removes conversion code
from chip-specific handlers and adds appropriate code to convert
EFX_RX_HASH() flags to their legacy counterparts to the common scale
mode set function. If the firmware does not support additional modes,
the function will convert generalised flags to legacy flags correctly
without any demand for UDP flags and pass the result to a chip-specific
handler.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
RSS mode bits can be accessed a lot easier in the hash
type value provided that the variable type is uint32_t.
The macro helper can be removed to enhance readability.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The efx_rx_scale_hash_flags_get interface is unsafe, as it does not
have an argument for the size of the output buffer used to return
the flags. While the only caller currently supplies a sufficiently
large buffer, this should be checked at runtime to avoid writing
past the end of the buffer.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The API which is used to list supported hash flags verifies
hash algorithm choice before writing the output. This check
is based on a switch() statement which has only two options
and no distinctive actions to be conducted for each of them.
Use simpler code instead of switch() to improve readability.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The function used to retrieve supported RSS flags has an
argument which should be named properly to indicate
that it's a pointer.
Fixes: 613cbe75ae ("net/sfc/base: add a new means to control RSS hash")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
NIC config is initialized during NIC probe.
Fixes: 19b64c6ac3 ("net/sfc/base: import libefx base")
Cc: stable@dpdk.org
Signed-off-by: Mark Spender <mspender@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The efx_nic_hw_unavailable() checks ensure that if the NIC hardware
has failed or has been physically removed then libefx will stop
further attempts to access the hardware.
Add an interface for libefx clients to force unavailability, so the
hardware is treated as dead or removed even if still physically present.
Signed-off-by: Andy Moreton <amoreton@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add efx_nic_hw_unavailable() routine to check for hardware presence
before continuing with NIC operations.
Signed-off-by: Andy Moreton <amoreton@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
In SF bug 61297 it's been confirmed that the hardware does not always
calculate the TCP checksum correctly with TSO sends.
The value of the Total Length field (IPv4) or Payload Length field
(IPv6) is the critical factor. We're sufficiently confident that if
these fields are zero then the checksum will be calculated correctly.
The information may be used by the drivers to check if the workaround is
required when FATSOv2 is implemented.
Signed-off-by: Mark Spender <mspender@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The SFN driver's PartitionControl WMI object requires an API to parse
and filter partition data in TLV format, particularly for the Dynamic
Config partition. The ef10_nvram_buffer functions provide this
functionality but are tied to use with license partition only.
Modify functions so they are applicable to all TLV partitions and add
functions to support in-place tag modification.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Extend efx_mcdi_get_port_modes() to optionally pass on the default
port mode field. This provides a more direct way of handling the case
where the dynamic config does not specify the port mode than the
alternative of a lookup table indexed by MCFW subtype.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add functions to process the DHCP option list format used by the expansion
ROM config buffers, to support extracting and updating individual
options.
The initial use case is the driver presenting the global and per-PF
options as separate items, with the driver implementing the
synchronization of global options across the configuration buffers
for all PFs.
Signed-off-by: Richard Houldsworth <rhouldsworth@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Size of provided memory should be consistent with specified size.
Fixes: dfb3b1ce15 ("net/sfc/base: import monitors access via MCDI")
Cc: stable@dpdk.org
Signed-off-by: Martin Harvey <mharvey@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Description of sensors is generated from firmware sources.
Signed-off-by: Martin Harvey <mharvey@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
No need for probe messages when a TxQ is too full for a post to be done.
Existing drivers check if there is room in the queue before posting
descriptors, even though efx_tx_qdesc_post() does the check itself.
The new SFN Windows driver doesn't perform the check before calling
efx_tx_qdesc_post(), but that means these probes can get frequently
printed out. It's normal driver behaviour so there's no need to print
an error.
Signed-off-by: Mark Spender <mspender@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Remove obsolete monitor types since Falcon SFN4000 series adapters
are no longer supported by libefx.
Rename MCDI monitors to be consistent with YML.
The code may be simplified and generalized since only MCDI monitors
remain.
Signed-off-by: Martin Harvey <mharvey@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Fixes: 17551f6dff ("net/sfc/base: add API to control UDP tunnel ports")
Cc: stable@dpdk.org
Signed-off-by: Vijay Srivastava <vijays@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Move empty definitions for platform-specific annotations from efsys.h
to EFX headers.
Signed-off-by: Martin Harvey <mharvey@solarflare.com>
Signed-off-by: Andrew Lee <alee@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add definitions of dynamic config and expansion ROM backup
partitions.
Signed-off-by: Paul Fox <pfox@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Functions declared in mcdi_mon.h are implemented in mcdi_mon.c.
The build fails if compiler options require declaration before definition.
Fixes: dfb3b1ce15 ("net/sfc/base: import monitors access via MCDI")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Required by GLD cstyle.
Fixes: d4f4b8f9d2 ("net/sfc/base: make RxQ type data an union")
Cc: stable@dpdk.org
Signed-off-by: Andy Moreton <amoreton@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
EF10 signed image layout header is generated from firmware sources.
Signed-off-by: Andrew Jackson <ajackson@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Found by PreFAST warnings.
Fixes: 3f2f0189dd ("net/sfc/base: add signed image layout support")
Cc: stable@dpdk.org
Signed-off-by: Martin Harvey <mharvey@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Found by PreFAST.
Fixes: 3f2f0189dd ("net/sfc/base: add signed image layout support")
Cc: stable@dpdk.org
Signed-off-by: Martin Harvey <mharvey@solarflare.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The 'stats_reset()' callback was missing because the device backend doesn't
support it.
This commit adds a workaround and implements the callback by
taking a snapshot of the stats (SNAPSHOT) each time 'stats_reset()'
is called. When getting stats with 'stats_get()', the SNAPSHOT stats are
subtracted from the hardware stats, which always increase.
That is how we get the "real" stats since the last 'stats_reset()'.
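A minimal sketch of the snapshot approach (the helper and the per-device snapshot storage are illustrative, not the actual PMD code):

    /* stats_reset(): remember the current, ever-increasing HW counters. */
    static void snapshot_stats_reset(struct rte_eth_dev *dev)
    {
            read_hw_stats(dev, &snapshot);          /* illustrative helper */
    }

    /* stats_get(): report HW counters minus the last snapshot. */
    static int snapshot_stats_get(struct rte_eth_dev *dev,
                                  struct rte_eth_stats *stats)
    {
            struct rte_eth_stats cur;

            read_hw_stats(dev, &cur);
            stats->ipackets = cur.ipackets - snapshot.ipackets;
            stats->opackets = cur.opackets - snapshot.opackets;
            /* ... likewise for bytes, errors and per-queue counters ... */
            return 0;
    }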
Signed-off-by: Yogev Chaimovich <yogev@cgstowernetworks.com>
Acked-by: Yong Wang <yongwang@vmware.com>
The MAC addresses are generated in a similar manner as in the TAP PMD,
where the address is based on the number of PCAP ports created.
This is useful for the purposes of debugging DPDK applications using
PCAP devices instead of real devices where multiple devices should still
have unique MAC addresses. This method was chosen over randomly
assigning MAC addresses to make the creation of pcaps, specifically
matching the destination ethernet address field to an interface, easier.
Signed-off-by: Cian Ferriter <cian.ferriter@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Releasing a queue that has already been released by a slave may cause a
segmentation fault. For example, after a successful device
configuration a queue is set up. Afterwards the device is reconfigured
with an invalid argument, forcing slaves to release the queues
(e.g. rte_eth_dev.data.tx_queues). Finally the failsafe's queues
are released. The queue release functions also try to release slaves'
queues using ETH(sdev)->data->tx_queues, which is NULL at that time.
Add checks for NULL slaves' Tx and Rx queues before releasing them.
Fixes: a46f8d584e ("net/failsafe: add fail-safe PMD")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Gaetan Rivet <gaetan.rivet@6wind.com>
IFCVF can help to log dirty pages in the live migration stage;
each queue's index can be read and configured to support
VHOST_USER_GET_VRING_BASE and VHOST_USER_SET_VRING_BASE.
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Tested-by: Xiaolong Ye <xiaolong.ye@intel.com>
Check the firmware status at init time. If the firmware is in
recovery mode, alert the user to check it.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Use ixgbe_identify_sfp_module_X550em to update SFP identification
flow. ixgbe_identify_sfp_module_X550em includes specific checks for
X550 about supported SFP modules.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Replace "=" operation with "|=" operation to only set the intended
register bits.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add FM NVM recovery mode check. Allow the software to detect this.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Clean up the UNREFERENCED_1PARAMETER() macro because "hw" is used.
Also remove the Light Spring code because the device was never
productised, and clean up the unused bypass code.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Update the GPL and BSD license headers to use the SPDX License
Identifier instead.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
It's possible to have many more hugepage-backed memory regions
than what vhost-kernel supports due to memory hotplug, which
may cause problems. A better solution is to have virtio-user
pass all the memory ranges reserved by DPDK to vhost-kernel.
Fixes: 12ecb2f63b ("net/virtio-user: support memory hotplug")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Recently some memory APIs were introduced to allow users to
get the file descriptor and offset for each memory segment.
We can leverage those APIs to get rid of the /proc magic on
memory table preparation in vhost-user backend.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Deadlock can occur when allocating memory if a vhost-kernel
based virtio-user device is in use. To fix the deadlock,
we will take memory hotplug lock explicitly in virtio-user
when necessary, and always call the _thread_unsafe memory
functions.
Bugzilla ID: 81
Fixes: 12ecb2f63b ("net/virtio-user: support memory hotplug")
Cc: stable@dpdk.org
Reported-by: Seán Harte <seanbh@gmail.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Tested-by: Seán Harte <seanbh@gmail.com>
Reviewed-by: Seán Harte <seanbh@gmail.com>
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Management firmware does not properly clean the IGU block in the PF FLR
flow, which may result in undelivered attentions for link events from the
default status block.
Add a workaround in the PMD to execute an extra IGU cleanup right after
PF FLR is done.
Fixes: 9e2f08a4ad ("net/qede/base: add request for PF FLR before load request")
Cc: stable@dpdk.org
Signed-off-by: Shahed Shaikh <shahed.shaikh@cavium.com>
This patch implements the eth_dev_ops->rx_descriptor_status
callback.
Walk through the receive completion ring to calculate the receive
descriptors used by the firmware and then provide the status of the
offset accordingly.
Signed-off-by: Shahed Shaikh <shahed.shaikh@cavium.com>
- HW does not include the CRC in received frames when passed to the host,
so there is no need to consider the CRC length while calculating the Rx
buffer size.
- In scattered Rx mode, the driver may allocate an Rx buffer larger than
the size of the mbuf because it tries to adjust the buffer size to the
cache line size by rounding it up. Fix this by rounding the size down
instead.
- Consider the rule imposed by HW regarding the minimum size of an Rx buffer
in scattered Rx mode:
(MTU + Maximum L2 Header Size + 2) / ETH_RX_MAX_BUFF_PER_PKT
Fixes: f6033f2497 ("net/qede: fix minimum buffer size and scatter Rx check")
CC: stable@dpdk.org
Signed-off-by: Shahed Shaikh <shahed.shaikh@cavium.com>
- Add support for rte_flow_validate(), rte_flow_create() and
rte_flow_destroy() APIs
- This patch adds limited support for the flow items
because of the limited filter profiles supported by HW.
- Only 4 tuples - src and dst IP (v4 or v6) addresses and
src and dst port IDs of TCP or UDP.
- Also, only the redirect-to-queue action is supported.
Signed-off-by: Shahed Shaikh <shahed.shaikh@cavium.com>
- In order to prepare the base for RTE FLOW support,
convert common code used for flow director support
into common aRFS code.
Signed-off-by: Shahed Shaikh <shahed.shaikh@cavium.com>
- The PMD does not fill the vtc_flow field of the IPv6 header while
constructing a packet for an IPv6 filter, hence the filter was
not getting applied properly.
- IPv6 addresses got swapped while copying src and dst addresses.
- The same issue applies to UDP and TCP port ids.
Fixes: 622075356e ("net/qede: support ntuple and flow director filter")
Cc: stable@dpdk.org
Signed-off-by: Shahed Shaikh <shahed.shaikh@cavium.com>
When trust mode is set to ON, a VF can change its MAC address
even if the PF has set a forced MAC for that VF from the HV.
Earlier, similar functionality was provided by the module parameter
"allow_vf_mac_change_mode" of qed.
This patch makes a few changes in the behavior of the VF shadow config:
- Let the driver track the VF MAC in the shadow config as long as trust
mode is OFF.
- Once trust mode is ON, we should not care about MACs in the shadow
config (because we never intend to fall back, given the lack of a restore
implementation).
- Delete the existing shadow MAC (this helps when trust mode is turned OFF
and the VF tries to add a new MAC - it won't fail that time since we have
a clean slate).
- Skip addition and deletion of MACs in shadow configs.
Signed-off-by: Shahed Shaikh <shahed.shaikh@cavium.com>
Fix logic for sfp get rx_los, tx_fault, tx_disable, and sfp set tx_disable.
Fixes: bdc40630a8 ("net/qede/base: add APIs for xcvr")
Cc: stable@dpdk.org
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
Limit the number of non-ethernet queues to 64, allowing a maximum queue to
status block ratio of 2:1 in the case of a storage target. Theoretically a
non-target storage PF can have 128 queues and SBs.
This change is to support 64 entries for a target iSCSI/FCoE PF and 128
for a non-target.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
Fix to program the HW registers with the proper ether type.
Fixes: 36f45bce25 ("net/qede/base: fix to support OVLAN mode")
Cc: stable@dpdk.org
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
Add support for the following OneView APIs:
- ecore_mcp_ov_update_mtu() - Send MTU value to the management FW.
- ecore_mcp_ov_update_mac() - Send MAC address to the management FW.
- ecore_mcp_ov_update_eswitch() - Send eswitch_mode to management FW
after the firmware load.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
This fix adds an ecore_mcp_update_stag() handler to handle the STAG update
events from the management FW and program the STAG value.
It also clears the STAG config on the PF when the management FW invalidates
the STAG value.
Fixes: ec94dbc573 ("qede: add base driver")
Cc: stable@dpdk.org
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
The driver setting of "allow_experimental_apis" was not being used when
building the base code. To allow this we can manually put in a check
in the base code files for the setting and set the appropriate cflag
if it's needed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Tested-by: Ilya Maximets <i.maximets@samsung.com>
If the device is not cleanly reset by the previous driver and holds
some invalid ring addresses, and the relay thread kicks it before the HW is
properly re-configured, a bad DMA request may happen.
Besides, the notify_addr which is used by the relay thread is set in
the vdpa_ifcvf_start function; if a kick relay happens before
vdpa_ifcvf_start finishes, a null address is accessed.
Fixes: a3f8150eac ("net/ifcvf: add ifcvf vDPA driver")
Cc: stable@dpdk.org
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Remove the driver log when no interrupt event is indicated
in the alarm handler for both PF and VF; otherwise there
will be lots of prints, which makes the console unusable.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
For IA, the AVX2 vector path is only recommended on later
platforms (identified by AVX512 support, like SKL etc.). This is because
performance benchmarks show a downgrade when running the AVX2 vector path
on early platforms (BDW/HSW) in some cases, but we still observe a perf
gain with some real workloads.
So this patch introduces the new devarg use-latest-supported-vec to
force the driver to always select the latest supported vec path. Then
apps are able to take the AVX2 path on early platforms. This logic can
be re-used if an AVX512 vec path is added in the future.
This patch only affects IA platforms. The selected vec path would be
as follows:
Without devarg / devarg = 0:
    Machine    vPMD
    AVX512F    AVX2
    AVX2       SSE4.2
    SSE4.2     SSE4.2
    <SSE4.2    Not Supported
With devarg = 1:
    Machine    vPMD
    AVX512F    AVX2
    AVX2       AVX2
    SSE4.2     SSE4.2
    <SSE4.2    Not Supported
Other platforms can also apply the same logic if necessary in the future.
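As a usage illustration (the PCI address below is hypothetical), the devarg can be passed on the EAL command line as a device argument, e.g.
    -w 0000:88:00.0,use-latest-supported-vec=1
to force the latest supported vector path on that port.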
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Compile the Mellanox driver when its external dependencies are met. A
glue version of the driver can still be requested by using the
-Denable_driver_mlx_glue=true option.
Meson will try to find the required external libraries. When they are
not installed system-wide, they can be provided through the CFLAGS, LDFLAGS
and LD_LIBRARY_PATH environment variables, for example (considering
RDMA-Core is installed in /tmp/rdma-core):
# CFLAGS=-I/tmp/rdma-core/build/include \
LDFLAGS=-L/tmp/rdma-core/build/lib \
LD_LIBRARY_PATH=/tmp/rdma-core/build/lib \
meson output
# LD_LIBRARY_PATH=/tmp/rdma-core/build/lib \
ninja -C output install
Note: LD_LIBRARY_PATH before ninja is necessary when the meson
configuration has changed (e.g. meson configure has been called); in
such a situation LD_LIBRARY_PATH is needed to invoke the
autoconfiguration script.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Compile the Mellanox driver when its external dependencies are met. A
glue version of the driver can still be requested by using the
-Denable_driver_mlx_glue=true option.
Meson will try to find the required external libraries. When they are
not installed system-wide, they can be provided through the CFLAGS, LDFLAGS
and LD_LIBRARY_PATH environment variables, for example (considering
RDMA-Core is installed in /tmp/rdma-core):
# CFLAGS=-I/tmp/rdma-core/build/include \
LDFLAGS=-L/tmp/rdma-core/build/lib \
LD_LIBRARY_PATH=/tmp/rdma-core/build/lib \
meson output
# LD_LIBRARY_PATH=/tmp/rdma-core/build/lib \
ninja -C output install
Note: LD_LIBRARY_PATH before ninja is necessary when the meson
configuration has changed (e.g. meson configure has been called); in
such a situation LD_LIBRARY_PATH is needed to invoke the
autoconfiguration script.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
RSS configuration works for all e1000 NICs except 82576.
This patch fixes this issue by correcting queue number
in RSS configuration.
Fixes: 424ae915ba ("net/e1000: move RSS to flow API")
Fixes: ac8d22de23 ("ethdev: flatten RSS configuration in flow API")
Cc: stable@dpdk.org
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Wei Zhao <wei.zhao1@intel.com>
Remove the double copy of driver information. Commit 04664e5c83 has shifted
that from the driver's probe to the bus's probe.
Fixes: 04664e5c83 ("drivers/bus: fill driver reference after NXP probing")
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Remove the double copy of driver information. Commit 04664e5c83 has shifted
that from the driver's probe to the bus's probe.
Fixes: 04664e5c83 ("drivers/bus: fill driver reference after NXP probing")
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The probing functions of the NXP buses were not setting
the driver used for successfully probing a device.
The NXP driver and the generic rte_driver are now set
in the device structures.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
The rte_afu_driver is assigned to rte_afu_device.driver during probing.
There is no need to access the rte_afu_driver via rte_device.driver
and type cast to its container.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Note that the library built by meson will not have the _uio suffix:
librte_pmd_vmxnet3.so - as it follows the directory name, while the
legacy makefile renames it to librte_pmd_vmxnet3_uio.so.
Signed-off-by: Luca Boccassi <bluca@debian.org>
Versions of meson prior to 0.47 flattened the parameters to the
"set_variable" function, which meant that the function could not take
array variables as a parameter. Therefore, we need to disable driver
tracking for those older versions, in order to maintain compatibility
with the minimum supported 0.41 version, and also v0.45 shipped in
Ubuntu 18.04 release.
Fixes: 806c45dd48 ("build: add configuration summary at end of config")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Timothy Redaelli <tredaelli@redhat.com>
After running meson to configure a DPDK build, it can be useful to know
what was automatically enabled or disabled. Therefore, print out by way of
summary a categorised list of libraries and drivers to be built.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This PMD is built with -Wno-format, which means GCC errors out if
-Wformat-security is used.
Fixes: e940646b20 ("drivers/net: build Intel NIC PMDs with meson")
Cc: stable@dpdk.org
Signed-off-by: Luca Boccassi <bluca@debian.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This PMD is built with -Wno-format, which means GCC errors out if
-Wformat-security is used.
Fixes: 56bb54ea1b ("raw/ifpga/base: add Intel FPGA OPAE share code")
Cc: stable@dpdk.org
Signed-off-by: Luca Boccassi <bluca@debian.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The QAT compression driver was named "qat".
Rename to compress_qat for consistency with other compressdev drivers
and with crypto_qat.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
The comment says "no csum error report support" but there is no check
related to csum offloads. Remove the comment.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Integrate accelerated networking support into the netvsc PMD.
This allows netvsc to manage the VF without using failsafe or vdev_netvsc.
For the exception vswitch path, some tests like transmit
get a 22% increase in packets/sec.
For the VF path, the code is slightly shorter but has no
real change in performance.
Pro:
* using netvsc is more like other DPDK NIC's
* the exception packet uses less CPU
* much smaller code size
* no locking required on VF transmit/receive path
* no legacy Linux network device to get mangled by userspace
* much simpler (1K vs 9K) LOC
* unified extended statistics
Con:
* using netvsc has more complex startup model
* no bifurcated driver support
* no flow support (since host does not have flow API).
* no tunnel offload support
* no receive interrupt support
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Implement callback functionality on link state changes.
This is not really driven off of an interrupt file descriptor like in most
other PMDs. Instead, it happens when a link state change message arrives
in the common ring buffer.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
If the application sends faster than the vswitch can keep up, then the
transmit descriptor pool will be exhausted. This is not a failure,
so change the name of the statistic and don't include it in oerrors.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Removed the DEV_RX_OFFLOAD_CRC_STRIP offload flag.
Without any specific Rx offload flag, the default behavior of PMDs is to
strip the CRC.
PMDs that support keeping the CRC should advertise the
DEV_RX_OFFLOAD_KEEP_CRC Rx offload capability.
Applications that require keeping the CRC should check the PMD capability
first and, if it is supported, enable this feature by setting
DEV_RX_OFFLOAD_KEEP_CRC in the Rx offload flags in rte_eth_dev_configure().
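A hedged sketch of the application-side check described above (the port id and queue counts are placeholders):

    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf = { 0 };

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_KEEP_CRC)
            port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);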
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tomasz Duszynski <tdu@semihalf.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Jan Remes <remes@netcope.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
The bonding driver ignores the value of the RSS key (that is set in the
port RSS configuration) in bond_ethdev_configure(). So the only way to set
a non-default RSS key is by using rss_hash_update(). This is not the
expected behaviour.
Make bond_ethdev_configure() set the default RSS key only if the
requested key is set to NULL.
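For illustration (hedged; the key contents and queue counts are placeholders), an application provides its own key through the port configuration, which bond_ethdev_configure() should now honour instead of overwriting:

    static uint8_t app_rss_key[40];        /* filled with an app-specific key */
    struct rte_eth_conf conf = { 0 };

    conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
    conf.rx_adv_conf.rss_conf.rss_key = app_rss_key;
    conf.rx_adv_conf.rss_conf.rss_key_len = sizeof(app_rss_key);
    rte_eth_dev_configure(bond_port_id, nb_rxq, nb_txq, &conf);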
Fixes: 734ce47f71 ("bonding: support RSS dynamic configuration")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Chas Williams <chas3@att.com>
The default Redirection Table that is set in the bonding driver is
distributed evenly over all Rx queues only within every RETA group (the
first RETA entries in every group always start with zero). But in most
drivers, the default RETA is distributed over all Rx queues without sequence
resets at the beginning of a new group, which implies a more balanced
per-core load.
Change the default RETA to be evenly distributed over all Rx queues
considering the whole table.
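A hedged sketch of the new default population (illustrative, using the standard 64-entry RETA groups):

    /* Spread all Rx queues continuously over the whole table instead of
     * restarting the sequence at the beginning of every group. */
    for (i = 0; i < reta_size; i++) {
            uint16_t group = i / RTE_RETA_GROUP_SIZE;
            uint16_t pos = i % RTE_RETA_GROUP_SIZE;

            reta_conf[group].reta[pos] = i % nb_rx_queues;
    }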
Fixes: 734ce47f71 ("bonding: support RSS dynamic configuration")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Chas Williams <chas3@att.com>
Add flow operations to match packets based on destination MAC address.
Allocate and program hardware MPS table with the destination MAC
address to be matched against. The returned MPS index is then used while
offloading flows to LETCAM (maskfull) and HASH (maskless) filter regions.
Also update existing mac_addr_set() to use the new MPS table API.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add API to program and manage hardware Multi Port Switch table. MPS
holds destination MAC addresses to be matched against incoming packets
for further rule processing. Packets not matching any entry in MPS table
will be dropped by default, unless the underlying port is in promiscuous
mode.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add flow API operations to offload vlan push, pop, and rewrite actions.
For vlan push or rewrite actions, allocate and program an entry from
L2T table. Use the L2T index to program vlan actions for LETCAM
(maskfull) and HASH (maskless) filters.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Add API to program and manage hardware Layer 2 Table. L2T holds
information necessary to rewrite specific fields in packet, such
as destination MAC address and vlan id.
Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
From the Intel Ethernet Controller X710/XXV710/XL710 Specification
Update:
Starting from NVM 5.02, if the Set Local LLDP MIB command is
received while the DCBx specific agent is stopped, the command
returns an EPERM error. If the command is received while the
LLDP agent is stopped, it sets the local MIB without exchanging
LLDP with peer, and returns SUCCESS.
This results in the harmless, but annoying, diagnostic:
default dcb config fails. err = -53, aq_err = 1.
So, if possible (older firmwares cannot safely stop LLDP), stop the
LLDP daemon when we are in software mode before we attempt to call
i40e_set_dcb_config.
Signed-off-by: Chas Williams <chas3@att.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch adds an alarm handler; the i40e
PF will then use the alarm handler instead of the interrupt
handler when the device is started and Rx interrupt
mode is disabled. This saves CPU cycles
while receiving packets.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch checks the negotiated features to see if offloading is
necessary before setting the tap device offload capabilities. It also
checks whether the kernel supports the TUNSETOFFLOAD operation.
Fixes: 5e97e42025 ("net/virtio-user: enable offloading")
Cc: stable@dpdk.org
Signed-off-by: Eric Zhang <eric.zhang@windriver.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
The Rxq cq_ci was 16 bits while the hardware expects it to wrap
around at 24 bits; this caused interrupt failure after a burst of packets.
Fixes: 43e9d9794c ("net/mlx5: support upstream rdma-core")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
In case of a temporary failure the ixgbe driver can return the internal
error IXGBE_ERR_RESET_FAILED to the application. Instead, return
-EAGAIN as per the public API specification.
Fixes: cddaf87a1e ("lib: fix unused values")
Cc: stable@dpdk.org
Signed-off-by: Luca Boccassi <bluca@debian.org>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
When bond slave devices cannot transmit all packets in the bufs array,
the tx_burst callback shall merge the un-transmitted packets back into the
bufs array. Recent merge logic introduced a bug which causes
invalid mbuf addresses to be written to the bufs array.
When the caller frees the un-transmitted packets, due to the invalid
addresses, the application will crash.
The fix is to avoid shifting mbufs, and directly write un-transmitted
packets back to the bufs array.
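A hedged sketch of the write-back approach (names are illustrative): packets a slave fails to send are copied straight back to the tail of the caller's bufs array, matching the rte_eth_tx_burst() contract that unsent packets sit at indexes greater than or equal to the return value:

    uint16_t nb_failed = 0;

    for (i = 0; i < num_slaves; i++) {
            uint16_t sent = rte_eth_tx_burst(slaves[i], queue_id,
                                             slave_bufs[i], slave_nb_pkts[i]);
            /* Write unsent mbufs back into bufs[] from the end, no shifting. */
            for (j = sent; j < slave_nb_pkts[i]; j++)
                    bufs[nb_pkts - ++nb_failed] = slave_bufs[i][j];
    }
    return nb_pkts - nb_failed;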
Fixes: 09150784a7 ("net/bonding: burst mode hash calculation")
Cc: stable@dpdk.org
Signed-off-by: Jia Yu <jyu@vmware.com>
Acked-by: Chas Williams <chas3@att.com>
Some NFP firmwares support live changes to the MAC address, but
this is not always true and the firmware advertises it accordingly.
This patch checks if firmware does not support live changes and
sets RTE_ETH_DEV_NOLIVE_MAC_ADDR in that case.
Cc: stable@dpdk.org
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Mark internal variables static to avoid potential redefinition
errors later on.
Signed-off-by: Natalie Samsonov <nsamsono@marvell.com>
Reviewed-by: Yelena Krivosheev <yelena@marvell.com>
Fix used_bpools array initialization by using range initializer.
This way all necessary variables are properly initialized regardless
of PP2_NUM_PKT_PROC value.
Fixes: 0ddc9b815b ("net/mrvl: add net PMD skeleton")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Add MVEP (Marvell Embedded Processors) to drivers/common which
will keep code reused by current and future MRVL PMDs.
Right now we have only common DMA memory initialization routines there.
Signed-off-by: Liron Himi <lironh@marvell.com>
Signed-off-by: Tomasz Duszynski <tdu@semihalf.com>
Reviewed-by: Natalie Samsonov <nsamsono@marvell.com>
In the code after the below commits, the criterion to select the IPv4 or
IPv6 hash functions was the existence of some ETH_RSS_IPV4 RSS types in
the flow rule.
The check is wrong. For example, ETH_RSS_NONFRAG_IPV4_TCP will not select
the IPv4 hash, which will cause the packets to be spread in a bad way.
Fix it by adding the corresponding types needed for each hash selection.
Fixes: 592f05b29a ("net/mlx5: add RSS flow action")
Fixes: fd0b70316b ("net/mlx5: support inner RSS computation")
Cc: stable@dpdk.org
Reported-by: Yaroslav Brustinov <ybrustin@cisco.com>
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
On ConnectX-4 Lx the Multi Packet Send (MPW) feature is considered
insecure, as in some cases where the application provides incorrect mbufs
in the Tx burst, the host or NIC can get stuck.
Hence, the feature is disabled by default for this specific NIC.
Users can still enable this feature and enjoy the performance gain
(mostly for a low number of cores) by using the txq_mpw_en devarg.
This patch will impact the out-of-the-box performance of some applications
using ConnectX-4 Lx for the sake of security and robustness.
Since we need different defaults based on the underlying device, the mpw
field in the configuration struct was extended to also contain the
MLX5_ARG_UNSET option.
Cc: stable@dpdk.org
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>