We keep a common vq structure containing only vq-related fields,
and then split the others into RX, TX and control queue structures respectively.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
[Jianfeng Tan: found and fixed 2 bugs]
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Commit dd856dfcb9 introduced an optimization that prepends the virtio
header to the mbuf data. It can be used when the Tx mbuf is writable, so we
need to check that the mbuf is direct (i.e. it embeds its own data).
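As a rough illustration, a check along these lines could gate the optimization;
the helper name and exact condition are an assumption, only the standard rte_mbuf
macros and functions are used:
#include <rte_mbuf.h>
/* Sketch only: decide whether the virtio-net header can be prepended
 * into this mbuf's headroom. */
static inline int
can_prepend_hdr(struct rte_mbuf *m, uint16_t hdr_size)
{
        return RTE_MBUF_DIRECT(m) &&            /* mbuf embeds its own data */
               rte_mbuf_refcnt_read(m) == 1 &&  /* not shared with others */
               rte_pktmbuf_headroom(m) >= hdr_size;
}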
Fixes: dd856dfcb9 ("virtio: use any layout on Tx")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The class id is not filled, which makes probing fail.
Updated the code to use RTE_PCI_DEVICE which fills
the class id with a wildcard value.
Fixes: 701c8d80c8 ("pci: support class id probing")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso library supports the SSE4.1 instruction set,
so the feature flags of the crypto device must be updated
to reflect this.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso_snow3g library now supports bit-level
operations, so the PMD has been updated to allow them.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
In order to avoid using magic numbers, macros for
the IV and digest lengths for Snow3G have been added.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso library that the SNOW3G PMD uses has been updated,
so it is now called libsso_snow3g. Also, the path to the library
has been renamed to reflect this change (now called LIBSSO_SNOW3G_PATH).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Added a new SW PMD which makes use of the libsso_kasumi SW library,
which provides wireless algorithms KASUMI F8 and F9
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
- RTE_CRYPTO_SYM_AUTH_KASUMI_F9
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The variables AESNI_MULTI_BUFFER_LIB_PATH and LIBSSO_PATH
are not required for "make clean".
It is the same fix as in commit e277b2397.
Fixes: eec136f3c5 ("aesni_gcm: add driver for AES-GCM crypto operations")
Fixes: 3aafc423cf ("snow3g: add driver for SNOW 3G library")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the test-pmd
and proc_info applications to use the new xstats API, and removes
deprecated code associated with the old API.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the virtio driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the i40e driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the fm10k driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the e1000 driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the ixgbe driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
Although ppc supports both endiannesses, qemu assumes that the cpu is
big endian and enforces this for the virtio-net stuff.
Fix PCI accesses in legacy mode. Only ppc64le is supported at the moment.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The behavior of PKT_RX_VLAN_PKT was not very well defined, resulting in
PMDs not advertising the same flags in similar conditions.
Following discussion in [1], introduce 2 new flags PKT_RX_VLAN_STRIPPED
and PKT_RX_QINQ_STRIPPED that are better defined:
PKT_RX_VLAN_STRIPPED: a vlan has been stripped by the hardware and its
tci is saved in mbuf->vlan_tci. This can only happen if vlan stripping
is enabled in the RX configuration of the PMD.
For now, the old flag PKT_RX_VLAN_PKT is kept but marked as deprecated.
It should be removed from applications and PMDs in a future revision.
This patch also updates the drivers. For PKT_RX_VLAN_PKT:
- e1000, enic, i40e, mlx5, nfp, vmxnet3: done, PKT_RX_VLAN_PKT already
had the same meaning as PKT_RX_VLAN_STRIPPED, only a minor update is
required.
- fm10k: done, PKT_RX_VLAN_PKT already had the same meaning as
PKT_RX_VLAN_STRIPPED, and vlan stripping is always enabled on fm10k.
- ixgbe: modification done (vector and normal), the old flag was set
when a vlan was recognized, even if vlan stripping was disabled.
- the other drivers do not support vlan stripping.
For PKT_RX_QINQ_PKT, it was only supported on i40e, and the behavior was
already correct, so we can reuse the same bit value for
PKT_RX_QINQ_STRIPPED.
[1] http://dpdk.org/ml/archives/dev/2016-April/037837.html
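From an application's point of view, the new flag is consumed roughly as in this
sketch (only the mbuf fields described above are assumed):
#include <rte_mbuf.h>
/* Sketch only: read the stripped VLAN TCI when the PMD reports it. */
static inline uint16_t
stripped_vlan_tci(const struct rte_mbuf *m)
{
        if (m->ol_flags & PKT_RX_VLAN_STRIPPED)
                return m->vlan_tci;   /* tci saved by the PMD */
        return 0;                     /* no vlan stripped by the hardware */
}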
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
SYSFS_PCI_DEVICES is a constant that makes PCI testing
difficult as it points to an absolute path. We stop using this
constant and introduce a function pci_get_sysfs_path that gives
the same value. However, the user can pass a SYSFS_PCI_DEVICES env
variable to override the path. It is now possible to create a fake
sysfs hierarchy for testing.
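A minimal sketch of such a helper, assuming the default sysfs location and the
environment variable name mentioned above; the real implementation may differ in
detail:
#include <stdlib.h>
/* Sketch only: return the sysfs PCI devices path, honouring an
 * environment override so tests can point it at a fake hierarchy. */
const char *
pci_get_sysfs_path(void)
{
        const char *path = getenv("SYSFS_PCI_DEVICES");
        return (path != NULL) ? path : "/sys/bus/pci/devices";
}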
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
This patch adds missing DEPDIRS to avoid any library referring to
symbols they are not linked against.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Add the missing external dependency to pthread to avoid referring to
symbols the library is not linked against.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Up to now dependencies between DPDK internal libraries have been
untracked at shared library level, requiring applications to know
about library internal dependencies and often consequently overlinking.
Since the dependencies are already recorded for build ordering in the
makefiles with DEPDIRS-y we can use that information to generate LDLIBS
entries for internal libraries automatically.
Also revert commit 8180554d82 ("vhost: fix linkage of driver with
library") which is made redundant by this change.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
This patch provides counter mode support to AES-NI multi-buffer library.
The following cipher algorithm is enabled:
- RTE_CRYPTO_CIPHER_AES_CTR
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added possibility for AES to work in counter mode
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
To avoid the GCC warning "dereferencing type-punned pointer will break
strict-aliasing rules", the aad_len pointer is dereferenced instead of
directly dereferencing a uint32_t* cast into the middle of an array.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Fix an error in the computation of the physical address of the
content descriptor in the symmetric operations session.
Fixes: 1703e94ac5 ("qat: add driver for QuickAssist devices")
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Fix null pointer dereferencing by reporting the error and
exiting the function if the pointer is null.
Coverity issue: 126584
Fixes: c0f87eb525 ("cryptodev: change burst API to be crypto op oriented")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Fix wrong indentation for return value
Coverity issue: 126585
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Removed the comparison against $CC in Makefiles, as
in cross-compiling mode CC can be a string
other than "gcc".
Suggested-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The qede driver depends on libz but the LDLIBS entry in makefile
was missing. Also, because of the external dependency, it is
disabled in the default config as per the common DPDK policy on external deps.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some 64-bit variables are printed for debug.
The %PRIx64 format specifier must be used because %lx is not long enough
on 32-bit systems.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Harish Patil <harish.patil@qlogic.com>
The RTE_ETH_VALID_PORTID_OR_ERR_RET macro is used in some places
to check if a port id is valid or not. This commit makes use of it in
some new parts of the code.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some architectures (e.g. Power8) have a cache line size of 128 bytes,
so drivers should not assume that prefetching the second part of
the mbuf with rte_prefetch0(&m->cacheline1) is valid.
This commit adds helpers that can be used by drivers to prefetch the
Rx or Tx part of the mbuf, whatever the cache line size.
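A usage sketch from a driver's receive loop; the helper names
rte_mbuf_prefetch_part1/part2 are assumed here and may differ from the names
actually introduced:
#include <rte_mbuf.h>
/* Sketch only: prefetch both logical parts of the next mbuf without
 * hard-coding a 64-byte cache line layout. */
static inline void
prefetch_next_mbuf(struct rte_mbuf *m)
{
        rte_mbuf_prefetch_part1(m);  /* first part (Rx fields) */
        rte_mbuf_prefetch_part2(m);  /* second part, if on another cache line */
}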
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Introduce rte_mempool_populate_default() which allocates
mempool objects in several memzones.
The mempool header is now always allocated in a specific memzone
(not with its objects). Thanks to this modification, we can remove
much of the specific behavior that was required when hugepages are not
enabled, in which case rte_mempool_xmem_create() is used.
This change requires updating how the kni and mellanox drivers look up
mbuf memory. For now, this will only work if there is only one memory
chunk (like today), but we could make use of rte_mempool_mem_iter() to
support more memory chunks.
We can also remove RTE_MEMPOOL_OBJ_NAME, which is not required anymore for
the lookup, as memory chunks are referenced by the mempool.
Note that rte_mempool_create() is still broken (as was the case before)
when there is no hugepage support (rte_mempool_xmem_create() has to be
used). This is fixed in the next commit.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Do not use the paddr table to store the mempool memory chunks.
This allows having several chunks with different virtual addresses.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Now that the mempool objects are chained into a list, we can use it to
browse them. This implies a rework of rte_mempool_obj_iter() API, that
does not need to take as many arguments as before. The previous function
is kept as a private function, and renamed in this commit. It will be
removed in a later commit of the patch series.
The only internal users of this function are the mellanox drivers. The
code is updated accordingly.
Introducing API compatibility for this function has been considered,
but it is not easy to do without keeping the old code, as the previous
function could also be used to browse elements that were not added to a
mempool. Moreover, the API is already broken by other patches in this
version.
The library version was already updated in
commit 213af31e09 ("mempool: reduce structure size if no cache needed")
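A minimal usage sketch of the reworked iterator; the callback prototype
(mempool, opaque, object, index) is assumed from the description above:
#include <rte_mempool.h>
/* Sketch only: count the objects of a mempool via the iterator. */
static void
count_obj_cb(struct rte_mempool *mp, void *opaque, void *obj, unsigned obj_idx)
{
        (void)mp; (void)obj; (void)obj_idx;
        (*(unsigned *)opaque)++;
}
static unsigned
count_objects(struct rte_mempool *mp)
{
        unsigned n = 0;
        rte_mempool_obj_iter(mp, count_obj_cb, &n);
        return n;
}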
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
This commit removes the const qualifier for the mempool in
rte_mempool_walk() callback prototype.
Indeed, most functions that can be done on a mempool require a non-const
mempool pointer, except the dump and the audit. Therefore, the
mempool_walk() is more useful if the mempool pointer is not const.
This is required by the next commit, where the mellanox drivers use
rte_mempool_walk() to iterate the mempools, then rte_mempool_obj_iter()
to iterate the objects in each mempool.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
This commit renames mempool_obj_ctor_t as mempool_obj_cb_t.
In the next commits, we will add the ability to populate the
mempool and iterate through objects using the same function.
We will use the same callback type for that. As the callback is
not a constructor anymore, rename it into rte_mempool_obj_cb_t.
The rte_mempool_obj_iter_t that was used to iterate over objects
will be removed in later commits.
No functional change.
In this commit, the API is preserved through a compat typedef.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Many drivers provide their own implementation of rte_mbuf_raw_alloc(),
duplicating the code. Introduce a new public function in rte_mbuf to
allocate a raw mbuf (uninitialized).
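A usage sketch from a driver's Rx refill path; since the mbuf is returned
uninitialized, the caller sets up whatever fields it needs (a sketch, not any
specific driver's code):
#include <rte_mbuf.h>
/* Sketch only: allocate a raw (uninitialized) mbuf for an Rx descriptor. */
static inline struct rte_mbuf *
rx_alloc_raw(struct rte_mempool *mp)
{
        struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);
        if (m == NULL)
                return NULL;
        /* Fields such as data_off and the packet lengths are not reset
         * here; the driver initializes what it needs. */
        return m;
}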
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
With gcc >= 6.0, qede base driver fails to build with:
drivers/net/qede/base/ecore_cxt.c: In function 'ecore_cdu_init_common':
cc1: error: left shift of negative value [-Werror=shift-negative-value]
Since the base drivers are untouchable, work around by disabling
the warning.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
When virtio was proposed in DPDK, there was no API to free memzones.
But this has changed since rte_memzone_free() has been implemented by
commit ff909fe21f ("mem: introduce memzone freeing").
This patch is to make sure memzones in struct virtqueue, like mz and
virtio_net_hdr_mz, are freed when the queue is released or setup fails.
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The "drv_flags" is set with device as the input, which means different
device (say, modern vs legacy) could end up with a different value. And
the fact that "drv_flags" is shared by all devices means that every time
we add a new device, it simply overwrites the value configured from the
last device.
Therefore, when two virtio devices have different flags, it may lead to
wrong result, such as virtio would set irq config when it's not supported.
Making the flag per device (using "dev->data->dev_flags") could let us
have a different value for each device, which would avoid the above issue.
Fixes: da978dfdc4 ("virtio: use port IO to get PCI resource")
Reported-by: David Marchand <david.marchand@6wind.com>
Suggested-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Avail ring is updated by the frontend and consumed by the backend.
There are frequent core to core cache transfers for the avail ring.
This optimization avoids updating the avail ring entry index if the entry
already holds the same value.
As the DPDK virtio PMD implements a FIFO free descriptor list (also for
cache performance reasons), in which descriptors are allocated
from the head and freed to the tail, with this patch in most cases the
avail ring will remain the same, so it stays valid in the caches
of both frontend and backend.
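A self-contained sketch of the idea, with reduced illustrative structures rather
than the driver's real virtqueue definitions:
#include <stdint.h>
/* Illustrative reduced layout, not the driver's real definitions. */
struct avail_sketch { uint16_t ring[256]; };
struct vq_sketch {
        struct avail_sketch *avail;
        uint16_t avail_idx;
        uint16_t nentries;   /* power of two */
};
/* Sketch only: skip the store when the avail ring slot already holds the
 * descriptor index, so the cache line shared with the backend stays clean. */
static inline void
update_avail_ring_sketch(struct vq_sketch *vq, uint16_t desc_idx)
{
        uint16_t slot = vq->avail_idx & (vq->nentries - 1);
        if (vq->avail->ring[slot] != desc_idx)
                vq->avail->ring[slot] = desc_idx;
        vq->avail_idx++;
}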
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Currently, vhost PMD doesn't have linkage for librte_vhost, even though
it depends on librte_vhost APIs. This causes a linkage error if the
conditions below are fulfilled:
- DPDK libraries are compiled as shared libraries.
- DPDK application doesn't link librte_vhost.
- Above application tries to link vhost PMD using '-d' DPDK option.
The patch adds linkage for librte_vhost to the vhost PMD so that the
above error does not occur.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Check the merge-able header, as the merge-able feature is now supported.
Previously we did not support the merge-able feature, so the
non-merge-able header was checked.
Fixes: 13ce5e7eb9 ("virtio: mergeable buffers")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
After the do-while loop, idx could be VQ_RING_DESC_CHAIN_END (32768)
when it's the last vring desc buf we can get. Therefore, following
expression could lead to a segfault error, as it tries to access
beyond the desc memory boundary:
start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
This bug could be reproduced easily with "set fwd txonly" in the
guest PMD, where the dequeue on the host is slower than the guest Tx,
so that running out of free desc buffers is pretty easy.
The fix is straightforward and easy, just remove it, as we have
already set desc flags properly inside the do-while loop.
Fixes: dd856dfcb9 ("virtio: use any layout on Tx")
[Yuanhan Liu: commit log reword]
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Issue: the output of applications and the debug info of DPDK may be mixed
up in the same line when enabling the virtio debug options below:
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_TX
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_DRIVER
This patch adds "\n" in the tail of definitions like PMD_RX_LOG,
PMD_TX_LOG, and PMD_DRV_LOG, and removes some "\n" when using these
macros.
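A sketch of what such a macro looks like after the change; the exact format
string used by the driver may differ:
#include <rte_log.h>
/* Sketch only: the trailing "\n" is now part of the macro, so callers
 * no longer add their own newline. */
#define PMD_DRV_LOG(level, fmt, args...) \
        RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)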
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Previously, for tunnel packets such as VXLAN/NVGRE, the vlan
tags of the inner header were stripped without putting the vlan
info into the descriptor, which is not the expected behaviour.
This patch fixes it by changing the hardware configuration to leave
the inner packet alone.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Add three new functions to the vf ops structure:
* rx_queue_count
* rxq_info_get
* txq_info_get
In all cases, the corresponding PF APIs can be reused.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
The code to provide mbufs for RX used m->data_off instead of
RTE_PKTMBUF_HEADROOM as the position inside the mbuf for the data to be
written. As the mbuf is uninitialised, this could potentially cause Rx
data to be placed at the wrong address in the mbuf - or even outside it.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
RTE_PCI_DRV_DETACHABLE flag is required to indicate that a device
can be detached during execution.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Mbufs were not properly released post-Tx when they were chained.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Apps calling some functions from different threads at the
same time could cause reconfig problems. The reconfig mechanism is
based on a hardware queue where incrementing a counter signals the
firmware to do the reconfig. If there is a second increment before the
first one has been processed, the firmware will stop and a device
reset is necessary.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
librte_malloc was long since merged into librte_eal; mop up the
leftover references to it from driver Makefiles.
Fixes: 2f9d47013e ("mem: move librte_malloc to eal/common")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Applied with qede Makefile update too.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
On hosts running a non-DPDK PF driver, the VF has no means of changing
the HW CRC strip setting for a RX queue. It's implicitly enabled.
This patch checks if the host is running a non-DPDK PF kernel driver,
and returns an error, if HW CRC stripping was not requested in the port
configuration.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
In i40evf_config_vlan_pvid the check for NULL for the dev value is
unnecessary, since this value is passed in from the ethdev API which
will ensure that a valid rte_eth_dev structure is provided.
Furthermore, all code paths leading to this function already use the
dev value.
Issue identified by Coverity.
Coverity ID 13302:
There may be a null pointer dereference, or else the comparison against
null is unnecessary.
In i40evf_config_vlan_pvid: All paths that lead to this null pointer
comparison already dereference the pointer earlier
Fixes: 2b12431b53 ("i40e: add vlan stripping and insertion to VF")
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
In Table 8-16 of the "Intel® Ethernet Controller XL710 Datasheet" it is
stated that when the whole packet is written to a single buffer, the
header length field in the descriptor will be 0. This means that when
extracting the packet/data_len field from the descriptor in the driver
we do not need to mask out the extra header-length bits.
Inside the vector driver, this reduces the need to pull all four pktlen
fields into a single register to work on. Instead of a shift and mask,
we now need to only do a shift. Therefore, we can work on each descriptor
independently, processing each using one shift intrinsic and a blend.
This change makes the code shorter and easier to read, so we can pull it
into the main descriptor processing loop instead of needing its own
function. This in turn makes the descriptor processing in the loop as a
whole slightly easier to read as it's more linear.
In terms of performance, in testing this change shows little effect, with
single-core perf tests showing a very slight improvement.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
An analysis of the i40e code using Intel® VTune™ Amplifier 2016 showed
that the code was unexpectedly causing stalls due to "Loads blocked by
Store Forwards". This can occur when a load from memory has to wait
due to the prior store being to the same address, but being of a smaller
size i.e. the stored value cannot be directly returned to the loader.
[See ref: https://software.intel.com/en-us/node/544454]
These stalls are due to the way in which the data_len values are handled
in the driver. The lengths are extracted using vector operations, but those
16-bit lengths are then assigned using scalar operations i.e. 16-bit
stores.
These regular 16-bit stores actually have two effects in the code:
* they cause the "Loads blocked by Store Forwards" issues reported
* they also cause the previous loads in the RX function to actually be a
load followed by a store to an address on the stack, because the 16-bit
assignment can't be done to an xmm register.
By converting the 16-bit store operations into a sequence of SSE blend
operations, we can ensure that the descriptor loads only occur once, and
avoid both the additional stores and loads from the stack, as well as the
stalls due to the blocked loads.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Later commits to improve the driver will make use of the SSE4.1
_mm_blend_epi16 intrinsic, so:
* set the compilation level to always have SSE4.1 support,
* and add in a runtime check for SSE4.1 as part of the condition checks
for vector driver selection.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
This patch removes several redundant forward declarations
in i40e_fdir.c.
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Signed-off-by: Rami Rosen <rami.rosen@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds LLDP and DCBX capabilities to the qede PMD.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The physical link is handled by the management Firmware.
This patch lays the infrastructure for interrupt/attention handling in
the driver, as link change notifications arrive via async interrupts,
as well as the handling of such notifications. It adds async event
notification handler interfaces to the PMD.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
This patch adds the features to support configuration of various Layer 2
elements, such as channels and filtering options.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The Qlogic Everest Driver for Ethernet(QEDE) Poll Mode Driver(PMD) is
the DPDK specific module for QLogic FastLinQ QL4xxxx 25G/40G CNA family
of adapters as well as their virtual functions (VF) in SR-IOV context.
This patch adds QEDE PMD, which interacts with base driver and
initialises the HW.
This patch content also includes:
- eth_dev_ops callbacks
- Rx/Tx support for the driver
- link default configuration
- change link property
- link up/down/update notifications
- vlan offload and filtering capability
- device/function/port statistics
- qede nic guide and updated overview.rst
Note that the follow-on commits contain the code for the features mentioned
in the documents but not implemented in this patch.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The base driver is the backend module for the QLogic FastLinQ QL4xxxx
25G/40G CNA family of adapters as well as their virtual functions (VF)
in SR-IOV context.
The purpose of the base module is to:
- provide all the common code that will be shared between the various
drivers that would be used with said line of products. Flows such as
chip initialization and de-initialization fall under this category.
- abstract the protocol-specific HW & FW components, allowing the
protocol drivers to have clean APIs, which are detached in their
slowpath configuration from the actual Hardware Software Interface (HSI).
This patch adds a base module without any protocol-specific bits.
I.e., this adds a basic implementation that almost entirely falls under
the first category.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
Fix issue reported by Coverity.
Coverity ID 13193: Bad bit shift operation (BAD_SHIFT)
large_shift: In expression 1 << pool, left shifting by more than 31 bits
has undefined behavior. The shift amount, pool, is at least 32.
This patch is a rework of the register address selection logic and mask
computation to make it more readable and avoid bit overflow when a 32-bit
value is shifted beyond its size for pool > 31.
Fixes: fe3a45fd41 ("ixgbe: add VMDq support")
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The statistics queried by calling rte_eth_stats_get are zero when
the API is first called on the port. The root cause is that the
offset_loaded flag is not set correctly after device start.
This patch fixes this issue by resetting statistics at initialization
time. The resetting process will set offset_loaded flag.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
When building a chain of mbufs for a multi-segment packet, the
packet_type field resides at the end of the chain. It should be
copied forward to the head of the list.
Also, use RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE to guard the packet-type
computation. The mbuf fields are not copied when this define is not set.
Fixes: fe65e1e1ce ("fm10k: add vector scatter Rx")
Signed-off-by: Michael Frasca <michael.frasca@oracle.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The masking for the RX/TX enable bit was incorrect in the rx and tx
queue stop functions. Instead of using "& MASK" it used "| MASK" which
would always return true. This error was found by Coverity scan.
CID 13215 : Wrong operator used (CONSTANT_EXPRESSION_RESULT)
operator_confusion: txdctl | 33554432 is always 1/true regardless of the
values of its operand. This occurs as the logical second operand of
'&&'.
CID 13216 : Wrong operator used (CONSTANT_EXPRESSION_RESULT)
operator_confusion: rxdctl | 33554432 is always 1/true regardless of the
values of its operand. This occurs as the logical second operand of
'&&'.
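A minimal illustration of the operator fix, using the enable bit value
(0x02000000, i.e. 33554432) from the Coverity message; simplified from the
actual queue-stop code:
#include <stdint.h>
#define TXDCTL_ENABLE_BIT 0x02000000  /* queue enable bit (33554432) */
/* Sketch only: test whether the enable bit is still set.
 * Wrong:  (txdctl | TXDCTL_ENABLE_BIT) is always non-zero.
 * Right:  mask the bit with '&' and test it. */
static inline int
queue_still_enabled(uint32_t txdctl)
{
        return (txdctl & TXDCTL_ENABLE_BIT) != 0;
}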
Coverity issue: 13215
Coverity issue: 13216
Fixes: 029fd06d40 ("ixgbe: queue start and stop")
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The position of register values within i40e register dumps is
supposed to reflect the register addresses. These were not being
correctly calculated.
Fixes: d9efd0136a ("i40e: add EEPROM and registers dumping")
Signed-off-by: Remy Horton <remy.horton@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Run the ixgbe driver through checkpatch and fix the issues highlighted.
Fix line spacing, some bad indentation, and in a couple
of cases use a short-circuit (already there) return to lessen indentation.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Applied with four additional fixes for issues highlighted by checkpatch
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The macro RTE_VERIFY always checks a condition.
It is optimized with the "unlikely" hint.
While this macro is well suited for test applications, it is preferred
in libraries and examples to enable such checks only in debug mode.
That's why the macro RTE_ASSERT is introduced, to call RTE_VERIFY only
if built with debug logs enabled.
A lot of assert macros were duplicated and enabled with a specific flag.
Removing these #ifdefs allows these code branches to be tested more easily
and avoids dead code pitfalls.
The ENA_ASSERT is kept (in debug mode only) because it has more
parameters to log.
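A self-contained sketch of the relationship between the two macros, with
illustrative names and an assumed debug-build guard:
#include <stdio.h>
#include <stdlib.h>
/* Sketch only: stand-ins for RTE_VERIFY/RTE_ASSERT, not the real ones. */
#define VERIFY_SKETCH(exp) do { \
        if (!(exp)) { \
                fprintf(stderr, "assert \"%s\" failed\n", #exp); \
                abort(); \
        } \
} while (0)
#ifdef DEBUG_BUILD  /* guard name is an assumption */
#define ASSERT_SKETCH(exp) VERIFY_SKETCH(exp)
#else
#define ASSERT_SKETCH(exp) do {} while (0)
#endif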
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some statistics were deprecated since release 2.1 (49f386542a).
The last deprecated counter to be used was imcasts.
The VF loopback statistics are also removed as they are used only
in igb and duplicated in extended statistics.
The new counters should be added to extended statistics.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Compilation fails on 32 bits for the Xen driver, due to wrong casting:
drivers/net/xenvirt/virtqueue.h: In function ‘virtqueue_enqueue_xmit’:
drivers/net/xenvirt/virtqueue.h:234:24: error: cast from pointer to integer
of different size [-Werror=pointer-to-int-cast]
start_dp[idx].addr = rte_pktmbuf_mtod(cookie, uint64_t);
^
Fixes: d6b324c00f ("mbuf: get DMA address")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
VxLAN & NVGRE are supported by x550. As we know HW can parse
the packet and tell SW the type info. For VxLAN & NVGRE packets
there's some change. HW will not tell SW the info of the outer
header but the inner header instead. But we always take the
info as if it were for the outer header. So the packet type info is
not right when x550 receives VxLAN & NVGRE packets.
As x550 only supports IPv4 VxLAN & NVGRE packets, we can tell
the outer header of VxLAN is IPv4 + UDP, and the outer header
of NVGRE is IPv4 only. What we don't know is whether there's an
optional field in the outer IPv4 header.
This patch implements the support of packet type for VxLAN &
NVGRE. And it fixes the wrong packet type issue as well.
BTW:
It doesn't fix any existing commit as although it resolve an
issue it's more like a new feature but not a fix.
Reported-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
If the vhost PMD were configured with more queues than the guest, the old
code would segfault in rte_vhost_enable_guest_notification due to a NULL
virtqueue pointer.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
renamed rte_cryptodev_sym_session.type -> dev_type
(as it's not a session type, but a device type)
renamed rte_crypto_sym_op.type -> sess_type
(as it's not an op type, but a session type)
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
AES-GCM PMD only supports 128-bit keys, but not 192 and 256,
which the capabilities structure was reporting.
Fixes: 26c2e4ad5a ("cryptodev: add capabilities discovery")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
After some testing, it was found that retrieving numa information
about a vhost device via a call to get_mempolicy is more
accurate when performed during the new_device callback versus
the vring_state_changed callback, in particular upon initial boot
of the VM. Performing this check during new_device is also
potentially more efficient as this callback is only triggered once
during device initialisation, compared with vring_state_changed
which may be called multiple times depending on the number of
queues assigned to the device.
Reorganise the code to perform this check and assign the correct
socket_id to the device during the new_device callback.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
With ICC 16.0.2 20160204, getting the following warnings:
.../drivers/net/ena/base/ena_com.c(492): error #3656: variable
"flags" may be used before its value is set
ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
.../drivers/net/ena/base/ena_com.c(1971): error #3656: variable
"mem_handle" may be used before its value is set
ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, len,
For both warnings the variable value is ignored, so there is no defect.
To silence the compiler warning, an initial value is provided to the variables.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Issue:
when using the following CLI in testpmd to enable ipv6 TSO feature
(set --txqflags=0 in the testpmd command)
set verbose 1
csum set ip hw 0
csum set udp hw 0
csum set tcp hw 0
csum set sctp hw 0
csum set outer-ip hw 0
csum parse_tunnel on 0
tso set 800 0
set fwd csum
start
We will not get what we want: the ipv6 packets sent out from IXIA can be
received by i40e, but cannot be forwarded to another port.
The root cause is that when HW does TSO offload for packets, it does not
only depend on the context descriptor to define the MSS and TSO payload size,
it also needs to know whether the packet is ipv4 or ipv6; we use
i40e_txd_enable_checksum to generate the related fields for the data
descriptor. But the PMD will not call i40e_txd_enable_checksum if only the
TSO offload flag is set. The reason why ipv4 works fine for TSO in testpmd
csum mode is that the csum engine will set the ip csum flag when the packet
is ipv4 and TSO is enabled, but will not set the flag for ipv6, and it is
this flag that causes i40e_txd_enable_checksum to be invoked. In both cases
the TSO flag will be set, so we need to use the TSO flag to trigger
i40e_txd_enable_checksum.
The right logic here is to enable csum offload for both ipv4 and ipv6 when
the TSO flag is set.
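A minimal sketch of the changed trigger condition, using standard mbuf Tx
offload flags; the surrounding descriptor setup is omitted and the driver's
exact code differs:
#include <rte_mbuf.h>
/* Sketch only: run the checksum/descriptor field setup not just for
 * explicit checksum requests, but also whenever TSO is requested. */
static inline int
need_txd_csum_setup(uint64_t ol_flags)
{
        return (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK |
                            PKT_TX_OUTER_IP_CKSUM | PKT_TX_TCP_SEG)) != 0;
}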
Fixes: e3f0151f ("i40e: enable Tx checksum only for offloaded packets")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The issue is that the VF's link speed was kept at 10G and its status was
always up; it did not change even when the physical link's status changed.
This patch fixes the issue by making the VF's link info consistent with
the physical link.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
A problem is found on the i350 VF: TX happens only once
per 4 packets. If only 1~3 packets are received, they will
not be forwarded. But the real problem is on the RX side. The
reason is that the default RX write-back threshold is changed to
4, so the first 3 packets may be held there.
This patch checks the RX wthresh when setting up the RX
queue, and forces it to 1, so every packet can be handled
immediately.
Fixes: 4a41c17dba ("igb: set default thresholds based on MAC type")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
For simple TX the virtio-net header must be zeroed, but it was using memory
that had been initialized with indirect descriptor tables. This resulted in
"unsupported gso type" errors from librte_vhost.
We can use the same memory for every descriptor to save cachelines in the
vswitch.
Fixes: 6dc5de3a ("virtio: use indirect ring elements")
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
The link speed configuration is now done with bitmaps so 100G speed
requires only a new bit flag.
The actual link speed is a number so its size must be increased from
16-bit to 32-bit.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Tested-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Tested-by: Matej Vido <vido@cesnet.cz>
This patch redesigns the API to set the link speed/s configuration
of an ethernet port. Specifically:
- it allows defining a set of advertised speeds for
auto-negotiation.
- it allows disabling link auto-negotiation (single fixed speed).
- default: auto-negotiate all supported speeds.
A flag autoneg in struct rte_eth_link indicates whether the link speed was
a result of auto-negotiation or was fixed by configuration.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Tested-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Tested-by: Beilei Xing <beilei.xing@intel.com>
Tested-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The speed capabilities of a device can be retrieved with
rte_eth_dev_info_get().
The new field speed_capa is initialized in the drivers without
taking care of device characteristics in this patch.
When the capabilities of a driver are accurate, the table in
overview.rst must be filled.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
The speed numbers ETH_LINK_SPEED_ are renamed ETH_SPEED_NUM_.
The prefix ETH_LINK_SPEED_ is kept for AUTONEG and will be used
for bit flags in next patch.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Some duplex values are changed from 0 to half-duplex when the link is down.
Some drivers are still using their own constants for duplex modes.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Define and use ETH_LINK_UP and ETH_LINK_DOWN where appropriate.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The loop that calculates the total number of tx descriptors in slave tx
queues should iterate up to nb_tx_queues, not nb_rx_queues.
Fixes: 3ef7955700 ("bonding: fix LACP mempool size")
Signed-off-by: Vladyslav Buslov <vladyslav.buslov@harmonicinc.com>
Stopping then re-starting a bond interface containing slaves that
used polling for link detection caused the bond to think all slave
links were down and inactive.
Move the start of the polling for link from slave_add() to
bond_ethdev_start() and in bond_ethdev_stop() make sure we clear
the last_link_status of the slaves.
Fixes: a45b288ef2 ("bond: support link status polling")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Signed-off-by: John Daley <johndale@cisco.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch adds out-of-place operations to qat symmetric crypto PMD,
i.e. the result of the operation can be written to the destination buffer
instead of overwriting the source buffer as done in "in-place" operation.
Both buffers can be of different sizes.
Previously the qat PMD assumed that m_src and m_dst in rte_crypto_sym_op
were identical.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
On the SUSE11-SP3 i686 platform, with gcc 4.5.1, there are compile issues, e.g.:
null_crypto_pmd_ops.c:44:3: error:
unknown field 'sym' specified in initializer
cc1: warnings being treated as errors
The member in an anonymous union initialization should be inside '{}',
otherwise the compiler will report an error.
Fixes: 26c2e4ad5a ("cryptodev: add capabilities discovery")
Signed-off-by: Michael Qiu <michael.qiu@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
These asserts are only for debugging and never fired during
any testing, but they confuse coverity's null tracking.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
Silence a compiler warning that this variable may be used uninitialized.
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tell the compiler to use an unsigned constant for the config shifts.
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tell the compiler to use an unsigned constant for the config shifts.
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The device lsc interrupt check has a misleading whitespace around it which
can be improved by adding appropriate braces to the check. Since the ret
variable was checked after previous assignment, this introduces no functional
change.
Fixes: 921a72008f ("e1000: add Rx interrupt handler")
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
The register read/write mphy functions have misleading whitespace around
the `locked` check. This cleanup merely preserves the existing functionality
and suppresses future gcc versions' "misleading indentation" warning.
Suggested-by: Panu Matilainen <pmatilai@redhat.com>
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
If the provided configuration does not call for RSS, then RSS is
explicitly disabled. Without this change, the device continues to
operate under the previous RSS configuration.
Fixes: 57033cdf8f ("fm10k: add PF RSS")
Signed-off-by: Michael Frasca <michael.frasca@oracle.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Now that vmxnet3 supports TCP/UDP checksum offload, let's update
the default txq flags to allow such offloads. Also fixed the tx
queue setup check to allow TCP/UDP checksum and only error out
if SCTP checksum is requested.
Fixes: f598fd063b ("vmxnet3: add Tx L4 checksum offload")
Reported-by: Heng Ding <hengx.ding@intel.com>
Signed-off-by: Yong Wang <yongwang@vmware.com>
The NFP driver (unlike other PCI devices) was not copying the pci info
from the pci_dev to the eth_dev. This would make the driver_name be
null (and leave other fields unset) when an application uses dev_info_get.
This was found by code review; do not have the hardware.
Fixes: d4a27a3b09 ("nfp: add basic features")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Alejandro Lucero <alejandro.lucero@netronome.com>
When the number of RX queues is not a power of two,
the RETA table is configured to its maximum size for
better balancing.
Testing showed that limiting its size to 256 improves performance
noticeably with little to no impact on balancing results.
Fixes: ebb30ec64a ("mlx5: increase RETA table size")
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Once freed, completed mbufs pointers are not set to NULL in the TX queue.
Clean up function must take this into account.
Fixes: 2e22920b85 ("mlx5: support non-scattered Tx and Rx")
Fixes: 7fae69eeff ("mlx4: new poll mode driver")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Calling rte_eth_dev_get_dcb_info to get dcb info from i40e
driver if VMDQ is disabled, results in a segmentation fault.
This patch fixes it by treating VMDQ and No-VMDQ respectively
when querying dcb information.
Fixes: 5135f3ca49 ("i40e: enable DCB in VMDQ VSIs")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
The lower 16 bits of the EICR register are used for queue interrupts;
the dpdk framework takes over the first bit for other interrupts like
LSC, so there are only 15 bits left for queue interrupt mapping.
This patch adds a check for the number of interrupt queues at
dev_start.
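A hedged sketch of such a check at dev_start time; rte_intr_dp_is_en() is an
existing EAL helper, while the constant and function name here are illustrative:
#include <errno.h>
#include <rte_ethdev.h>
#include <rte_interrupts.h>
#define MAX_INTR_QUEUE_NUM 15  /* 15 EICR bits left for Rx queues */
/* Sketch only: refuse to start with more interrupt-mapped Rx queues
 * than can be mapped onto the available EICR bits. */
static int
check_intr_queue_num(struct rte_eth_dev *dev,
                     struct rte_intr_handle *intr_handle)
{
        if (rte_intr_dp_is_en(intr_handle) &&
            dev->data->nb_rx_queues > MAX_INTR_QUEUE_NUM)
                return -ENOTSUP;
        return 0;
}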
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
When starting testpmd with multiple queues on an ixgbe VF
port, it fails with this message: "nb_rxq(4) is greater
than max_rx_queues(1)".
The root cause is that the VF doesn't get the right max rx queue
number from the PF and uses the default value 1.
The VF max rx queue number is set by the PF through mailbox messages.
The message for this setting only supports version 1.1. As the
message version is updated to 1.2, the VF cannot parse the rx queue
number setting message correctly.
This patch raises a specific base code update for this issue.
Fixes: 72dec9e37a ("ixgbe: support multicast promiscuous mode on VF")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Update the 'imissed' counter with the number of packets dropped
by the NIC.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Nelson Escobar <neescoba@cisco.com>
When the enic was disabled, link notification was correctly disabled
in the NIC but the software indicator that it was disabled was not
updated (vdev->notify_pa not set to 0). When the link came back up,
enic did not re-enable notification in the NIC.
This affected bonding when a enic slave device link bounced.
The fix is to unconditionally enable notification when the enic is
enabled.
Fixes: 9913fbb91d ("enic/base: common code")
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Nelson Escobar <neescoba@cisco.com>
If the nb_pkts parameter to rte_eth_tx_burst() was greater than
the TX descriptor count, a completion was not being requested
from the NIC, so descriptors would not be released back to the
host causing a lock-up.
Introduce a limit of how many TX descriptors can be used in a single
call to the enic PMD burst TX function before requesting a completion.
Fixes: d739ba4c6a ("enic: improve Tx packet rate")
Signed-off-by: John Daley <johndale@cisco.com>
FreeBSD was not defined in ena_plat.h.
ETIME is not defined in FreeBSD.
In file included from DPDK/drivers/net/ena/base/ena_com.h:37:0,
from DPDK/drivers/net/ena/ena_ethdev.h:39,
from DPDK/drivers/net/ena/ena_ethdev.c:41:
DPDK/drivers/net/ena/base/ena_plat.h:48:2: error: #error "Invalid platform"
Fixes: 99ecfbf845 ("ena: import communication layer")
Fixes: 9ba7981ec9 ("ena: add communication layer for DPDK")
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Fix for multiple compilation errors for ICC:
error #188: enumerated type mixed with another type
error #592: variable "flags" is used before its value is set
Fixes: 99ecfbf845 ("ena: import communication layer")
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Build log:
/root/dpdk/app/test-pmd/cmdline.c:6687:45: error: no member named
's6_addr32' in 'struct in6_addr'
rte_be_to_cpu_32(res->ip_value.addr.ipv6.s6_addr32[i]);
This is caused by the macro "s6_addr32" not being defined on FreeBSD,
while testpmd swaps the big endian parameter to host endian. Moving the
swap action to the i40e ethdev fixes this issue.
Fixes: 7b1312891b ("ethdev: add IP in GRE tunnel")
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Bruce Richardson <bruce.richardson@intel.com>
The code uses "entry" in the next loop of LIST_FOREACH after calling free()
on it in i40e_res_pool_destroy().
Change to a safe way of freeing entry, similar to LIST_FOREACH_SAFE
in FreeBSD.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jiangu Zhao <zhaojg@arraynetworks.com.cn>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This solves issues when an active device is added to a bond.
If a device to be enslaved already has transmit and/or receive queues
allocated, use those and then create any additional queues that are
necessary.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Eric Kinzie <ehkinzie@gmail.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
On the 82575 chipset, there is a pool of global TX contexts instead of 2
per queues on 82576. See Table A-1 "Changes in Programming Interface
Relative to 82575" of Intel® 82576EB GbE Controller datasheet (*).
In the driver, the contexts are attributed to a TX queue: 0-1 for txq0,
2-3 for txq1, and so on.
In igbe_set_xmit_ctx(), the variable ctx_curr contains the index of the
per-queue context (0 or 1), and ctx_idx contains the index to be given
to the hardware (0 to 7). The size of txq->ctx_cache[] is 2, and must
be indexed with ctx_curr to avoid an out-of-bound access.
Also, the index returned by what_advctx_update() is the per-queue
index (0 or 1), so we need to add txq->ctx_start before sending it
to the hardware.
(*) The datasheets says 16 global contexts, however the IDX fields in TX
descriptors are 3 bits, which gives a total of 8 contexts. The
driver assumes there are 8 contexts on 82575: 2 per queues, 4 txqs.
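A reduced sketch of the indexing described above, with illustrative types
rather than the driver's real structures:
#include <stdint.h>
/* Illustrative reduced types, not the driver's definitions. */
struct ctx_entry { uint64_t flags; };
struct txq_sketch {
        struct ctx_entry ctx_cache[2];  /* per-queue cache: index 0 or 1 */
        uint32_t ctx_start;             /* first HW context owned by this txq */
};
/* Sketch only: ctx_curr (0/1) indexes the per-queue cache, while the
 * value programmed into the hardware is offset by ctx_start (0..7). */
static uint32_t
pick_hw_ctx(struct txq_sketch *txq, uint32_t ctx_curr, uint64_t flags)
{
        txq->ctx_cache[ctx_curr].flags = flags;  /* bounded by the array size */
        return ctx_curr + txq->ctx_start;        /* index given to the HW */
}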
Fixes: 4c8db5f09a ("igb: enable TSO support")
Fixes: af75078fec ("first public release")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When using RSS, the number of rxqs has to be a power of two.
This is a problem because there is no API in DPDK that makes
the application aware of that.
A good compromise is to allow the application to request a
number of rxqs that is not a power of 2, but having inactive
queues that will never receive packets. In this configuration,
a warning will be issued to users to let them know that
this is not an optimal configuration.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
l2 tunnel and e-tag are not supported on the new x550em_a NICs, due
to missing checks for that mac type in the code.
This patch adds in the necessary conditional checks to enable the features
for x550em_a.
Fixes: 22e77d4501 ("ixgbe: support L2 tunnel operations")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
An issue is found on x550em NICs: ieee1588 is not working, the time is
always reported as 0.
The root cause is that the timer is only supported by the driver for x550,
switch statement entries are missing for x550em_x and x550em_a. This patch
adds those missing entries.
Fixes: a7740dc130 ("ixgbe: support new devices and MAC types")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The current_primary_port is initialised to an invalid value
during bonded device creation.
It must be set to a valid value later.
This fix sets it to a valid value when the first slave port
is added to the bonding device.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
icc (icc (ICC) 16.0.1 20151021) generates the following compile error:
CC ixgbe_rxtx.o
.../drivers/net/ixgbe/ixgbe_rxtx.c(153): error #3656: variable
"free" may be used before its value is set
(nb_free > 0 && m->pool != free[0]->pool)) {
^
Indeed this is a false positive and the code is correct.
The "nb_free" check prevents the free[] access before its value is set.
Disabling this icc warning (#3656) for file ixgbe_rxtx.c.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Ixgbe HW supports 128 TX queues. However, the full 128 queues are only
available in VT and DCB mode. In normal default "none" mode (VT/DCB off)
the maximum number of available queues is only 64.
The driver doesn't check the mode when reporting the available
number of queues, allowing more than 64 queues to be used in all cases.
If a queue number >= 64 is used in default mode, the TX packets will be
dropped silently.
This change adds a check to forbid using a queue number larger than 64
during device configuration (in default mode), so that the problem is
reported as early as possible.
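A hedged sketch of such a configure-time check; ETH_MQ_TX_NONE is the real
mq_mode value for the default mode, while the 64-queue constant name is
illustrative:
#include <errno.h>
#include <rte_ethdev.h>
#define NONE_MODE_TX_NB_QUEUES 64  /* illustrative name for the 64-queue limit */
/* Sketch only: reject more than 64 Tx queues when VT and DCB are off. */
static int
check_tx_queue_num(struct rte_eth_dev *dev, uint16_t nb_tx_q)
{
        const struct rte_eth_conf *conf = &dev->data->dev_conf;
        if (conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
            nb_tx_q > NONE_MODE_TX_NB_QUEUES)
                return -EINVAL;
        return 0;
}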
Fixes: 27b609cbd1 ("ethdev: move the multi-queue mode check to specific drivers")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Internal variable containing the number of TX queues for a device,
was being incorrectly assigned the number of RX queues, instead of TX.
Fixes: 27b609cbd1 ("ethdev: move the multi-queue mode check to specific drivers")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
In the function set_rx_mode, the pointer to the device data points
to the wrong address, as found in the ixgbe code and fixed in commit
"ixgbe: fix PF promiscuous mode after VF closed".
Fixes: be2d648a2d ("igb: add PF support")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
There's an issue reported. In the scenario DPDK PF + DPDK VF,
if the VF port is closed, the PF port cannot receive packets.
I found that at that time the promiscuous mode was disabled on the PF
port. But it should be enabled.
When the VF port is closed, it sends a message to its PF port to
reset it. During this, the PF port will also reset its own
promiscuous mode. Which promiscuous mode should be set depends on
the parameter stored in the device data. In the function
set_rx_mode, the pointer to the device data points to the wrong
address. So, the promiscuous mode is wrong.
Fixes: 00e30184da ("ixgbe: add PF support")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Reported-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Current vector RX can't always set the packet_type properly.
To be more specific:
a) it never sets RTE_PTYPE_L2_ETHER
b) it doesn't handle tunnel ipv4/ipv6 case correctly.
c) it doesn't check whether IXGBE_RXDADV_PKTTYPE_ETQF is set or not.
While a) is pretty easy to fix, b) and c) are not that straightforward
in terms of SIMD ops (especially b).
So far I wasn't able to make vRX support packet_type properly without
noticeable performance loss.
So for now, just remove that functionality from vector RX and
update dev_supported_ptypes_get().
Fixes: 3962541758 ("mbuf: redefine packet type")
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
Notify user otherwise. A similar check has already been added to mlx5 in
commit "mlx5: check port is configured as ethernet device".
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Currently, the maximum value of rx/tx queues is kept by EAL. But
the value is used as below, with different meanings, in the vhost PMD:
- The maximum value of currently enabled queues.
- The maximum value of currently supported queues.
This wrong double meaning will cause an issue, as in the steps below.
* Invoke application with below option.
--vdev 'eth_vhost0,iface=<socket path>,queues=4'
* Configure queues like below.
rte_eth_dev_configure(portid, 2, 2, ...);
* Configure queues again like below.
rte_eth_dev_configure(portid, 4, 4, ...);
The second rte_eth_dev_configure() will fail because both
the maximum value of current enabled queues and supported queues
will be '2' after calling first rte_eth_dev_configure().
To fix the issue, the patch adds another variable to keep the maximum
number of supported queues in vhost PMD.
Fixes: 23981fb0d78b ("vhost: Add vhost PMD")
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ciara Loftus <ciara.loftus@intel.com>
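An illustrative sketch of the split described above (field names are
assumptions, not the exact vhost PMD layout):

    #include <stdint.h>

    struct pmd_internal_sketch {
        uint16_t max_queues;    /* from the "queues=N" devargs, never shrinks */
        uint16_t nb_rx_queues;  /* what the last rte_eth_dev_configure() asked for */
        uint16_t nb_tx_queues;
    };

    /* Report the devargs limit as the supported maximum, so a second
     * configure() asking for more queues (up to that limit) still succeeds. */
    static void
    dev_info_get_sketch(const struct pmd_internal_sketch *p,
                        uint16_t *max_rx, uint16_t *max_tx)
    {
        *max_rx = p->max_queues;
        *max_tx = p->max_queues;
    }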
Issue:
When CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=n is set in the config file,
there is a build error:
'i40e_recv_pkts_bulk_alloc' undeclared
The i40e PMD uses the preprocessor to decide whether to define
the bulk receive functions, but the selection of the RX function only
depends on a C variable. This inconsistency leads to the build error,
because the bulk receive function is referenced but not defined.
Fixes: 8e109464c0 ("i40e: allow vector Rx and Tx usage")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
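A small self-contained sketch of the pattern being fixed, assuming a
shortened compile-time symbol for illustration: both the definition and
every reference to the optional RX path must sit under the same
preprocessor switch, otherwise building with the option disabled fails.

    #include <stdio.h>

    #ifdef RX_ALLOW_BULK_ALLOC
    static void recv_pkts_bulk_alloc(void) { puts("bulk-alloc RX"); }
    #endif
    static void recv_pkts(void) { puts("scalar RX"); }

    static void (*select_rx(int want_bulk))(void)
    {
    #ifdef RX_ALLOW_BULK_ALLOC
        if (want_bulk)
            return recv_pkts_bulk_alloc; /* only referenced when defined */
    #else
        (void)want_bulk;
    #endif
        return recv_pkts;
    }

    int main(void)
    {
        select_rx(1)();
        return 0;
    }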
This patch extends flow director to select vlan id as part of
filter's input set and program the filter rule with vlan id.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds missing VLAN bitmask for inner frame in case of
tunneling and fixes VLAN tags bitmasks for single or outer frame
in case of tunneling.
Fixes: 98f0557076 ("i40e: configure input fields for RSS or flow director")
Signed-off-by: Andrey Chilikin <andrey.chilikin@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch extends flow director to select more IP Header fields
as filter input set.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds a new function to set the fdir input set to its default
at initialization time.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
In this patch, flex payload is removed from valid fdir input set
values. This is because all flex payload configuration can be set
in struct rte_fdir_conf during device configure phase, which is
a more flexible way of setting this up.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
For the input set selection, Hash filter and Flow director shared
the same function, i.e. i40e_filter_inset_select.
For code readability, this patch replaces i40e_filter_inset_select
with two new functions: i40e_hash_filter_inset_select and
i40e_fdir_filter_inset_select for Hash filter and Flow director
respectively.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Virtio has an mbuf descriptor ring containing mbufs to be used for
receiving traffic. When the host queues traffic to be sent to the guest, it
consumes these descriptors. If none exist, it discards the packet.
The virtio pmd allocates mbufs to the descriptor ring every time it
successfully receives a packet. However, it never does it if it does not
receive a valid packet. If the descriptor ring is exhausted, and the mbuf
mempool does not have any mbufs free (which can happen for various reasons,
such as queueing along the processing pipeline), then the receive call will
not allocate any mbufs to the descriptor ring, and when it finishes, the
descriptor ring will be empty. The ring being empty means that we will
never receive a packet again, which means we will never allocate mbufs to
the ring: we are stuck.
Ultimately, the problem arises because there is a dependency between
receiving packets and making the descriptor ring not be empty, and a
dependency between the descriptor ring not being empty, and receiving
packets.
To fix the problem, this patch makes virtio always try to allocate mbufs
to the descriptor ring, if necessary, when polling for packets. Do this by
removing the early exit if no packets were received. Since the packet loop
later will do nothing if there are no packets, this is fine.
I reproduced the problem by pushing packets through a pipelined system
(such as the client_server sample application) after artificially
decreasing the size of the mbuf pool and introducing a delay in a secondary
stage.
Without the fix, the process stops receiving packets fairly quickly. With
the fix, it continues to receive packets.
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Kyle Larose <klarose@sandvine.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
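A control-flow sketch of the fix, with stubbed helpers (none of these
names are the real virtio PMD functions); the point is only that the
refill step now runs even when zero packets were dequeued:

    #include <stdint.h>
    #include <stdio.h>

    struct rxq { int used; int free_slots; };

    static uint16_t dequeue_used(struct rxq *q, uint16_t n)
    { (void)n; return (uint16_t)q->used; }
    static void refill_ring(struct rxq *q) { q->free_slots = 0; }

    static uint16_t
    recv_burst_sketch(struct rxq *q, uint16_t n)
    {
        uint16_t nb_rx = dequeue_used(q, n);

        /* The old code returned here when nb_rx == 0, leaving the ring empty. */

        for (uint16_t i = 0; i < nb_rx; i++)
            ;                     /* per-packet processing, no-op if nb_rx == 0 */

        refill_ring(q);           /* always try to allocate mbufs back */
        return nb_rx;
    }

    int main(void)
    {
        struct rxq q = { .used = 0, .free_slots = 8 };
        printf("rx=%u free_slots_after=%d\n",
               (unsigned)recv_burst_sketch(&q, 32), q.free_slots);
        return 0;
    }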
This structure has immutable function pointers.
Also fix indentation.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
On initialization, the rq descriptor count was set to the limit
of the vic. When the requested number of rx descriptors was
less than this count, enic_alloc_rq() was incorrectly setting
the count to the lower value. This results in later calls to
enic_alloc_rq() incorrectly using the lower value as the adapter
limit.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Update function can be called with no key to enable or disable a RSS
protocol, or with a key to be applied to the desired protocols.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
RSS configuration provided by the application should not be used as storage
by the PMD.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
For x550 device, the reta table has 512 entries, but in function
ixgbe_dev_rss_reta_query and ixgbe_dev_rss_reta_update we use an
"uint8_t i" to traverse the entries, this will lead the function
to an endless loop.
This patch changes the data type from uint8_t to uint16_t to fix
the issue.
Fixes: 4bee94a6c2 ("ixgbe: support 512 RSS entries on x550")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
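A minimal sketch of the loop-counter issue: with 512 RETA entries a
uint8_t counter wraps at 255 and never reaches 512, so the loop never
terminates; a 16-bit counter does (the loop body is illustrative only).

    #include <stdint.h>
    #include <stdio.h>

    #define RETA_SIZE 512   /* x550 table size quoted above */

    int main(void)
    {
        uint16_t reta[RETA_SIZE];

        for (uint16_t i = 0; i < RETA_SIZE; i++)  /* a uint8_t i would loop forever */
            reta[i] = i % 16;

        printf("filled %d entries\n", RETA_SIZE);
        return 0;
    }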
If the packet_error bit in the completion descriptor is set, the
remainder of the descriptor and data are invalid. PKT_RX_MAC_ERR
was set in the mbuf->ol_flags if packet_error was set and used
later to indicate an error packet. But since PKT_RX_MAC_ERR is
defined as 0, mbuf flags and packet types and length were being
misinterpreted.
Make the function enic_cq_rx_to_pkt_err_flags() return true for error
packets and use the return value instead of mbuf->ol_flags to indicate
error packets. Also remove warning for error packets and rely on
rx_error stats.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
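An illustrative sketch of the interface change (struct and function names
are assumptions, not the real enic code): return a boolean for error
packets instead of relying on an ol_flags bit that is defined as 0 and
can therefore never be tested.

    #include <stdbool.h>
    #include <stdint.h>

    struct cq_desc_sketch { uint8_t packet_error; };

    /* When packet_error is set, the rest of the descriptor is invalid, so
     * the caller should drop the mbuf and bump an rx_error counter. */
    static bool
    cq_rx_to_pkt_err_flags_sketch(const struct cq_desc_sketch *cqd)
    {
        return cqd->packet_error != 0;
    }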
In the receive path, the function to set mbuf ol_flags used the
mbuf packet_type before it was set.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
Add checks to make sure we don't try to allocate more tx or rx queues
than we support.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Add the missing '\n' character to the end of a few print statements.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Acked-by: John Daley <johndale@cisco.com>
VLAN insertion can be done in hardware when supported in Verbs. A software
fallback is provided otherwise. The software implementation is also used
when multi-packet send is enabled on a queue, as both features are mutually
exclusive.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Environment variable MLX5_PMD_ENABLE_PADDING enables HW packet padding
in PCI bus transactions.
When packet size is cache aligned and CRC stripping is enabled, 4 fewer
bytes are written to the PCI bus. Enabling padding makes such packets
aligned again.
In cases where PCI bandwidth is the bottleneck, padding can improve
performance by 10%.
This is disabled by default since this can also decrease performance for
unaligned packet sizes.
Signed-off-by: Olga Shern <olgas@mellanox.com>
fix packet padding macro check
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
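One possible way such an environment switch could be read at startup (a
sketch only; the real mlx5 option handling may differ):

    #include <stdlib.h>

    /* Returns non-zero when the user exported MLX5_PMD_ENABLE_PADDING. */
    static int
    padding_enabled_from_env(void)
    {
        return getenv("MLX5_PMD_ENABLE_PADDING") != NULL;
    }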
Secondary processes are expected to use queues and other resources
allocated by the primary, however Verbs resources can only be shared
between processes when inherited through fork().
This limitation can be worked around for TX by configuring separate queues
from secondary processes.
Signed-off-by: Or Ami <ora@mellanox.com>
Add driver functions to set link state up or down.
Burst functions are updated to make sure applications cannot attempt to
send/receive after link is brought down.
Signed-off-by: Or Ami <ora@mellanox.com>
When a Linux PF and a DPDK VF are used with the i40e PMD and a PF reset
occurs, an interrupt goes via an adminq event to inform the VF of the reset.
A callback mechanism is introduced for the VF to allow it to invoke a
registered callback when PF reset happens.
Users can register a callback for this interrupt event using:
rte_eth_dev_callback_register(portid,
RTE_ETH_EVENT_INTR_RESET,
reset_event_callback,
arg);
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
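A sketch of a callback an application might register for this event; the
callback prototype is assumed from the registration call shown above
(port id width and the exact rte_eth_dev_cb_fn signature can differ
between DPDK releases), and the body only illustrates a possible reaction.

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_ethdev.h>

    static void
    reset_event_callback(uint8_t port_id, enum rte_eth_event_type type, void *arg)
    {
        (void)arg;
        if (type == RTE_ETH_EVENT_INTR_RESET)
            printf("PF reset signalled on port %u, re-initialize the VF\n",
                   port_id);
    }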
Currently, i40evf PMD uses a global static buffer to send virtchnl
commands to host driver. It is shared by multiple VFs.
This patch changes this to allocate a virtchnl cmd buffer for each VF.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The patch introduces a new PMD. This PMD is implemented as a thin wrapper
around librte_vhost, which means librte_vhost is also needed to compile the PMD.
The vhost messages will be handled only when a port is started, so start
the port first, then invoke QEMU.
The PMD has 2 parameters.
- iface: The parameter is used to specify a path to connect to a
virtio-net device.
- queues: The parameter is used to specify the number of the queues
virtio-net device has.
(Default: 1)
Here is an example.
$ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i
To connect to the above testpmd, here is a QEMU command example.
$ qemu-system-x86_64 \
<snip>
-chardev socket,id=chr0,path=/tmp/sock0 \
-netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=1 \
-device virtio-net-pci,netdev=net0,mq=on
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Update for queue state event name:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This is a PMD for the Amazon ethernet ENA (Elastic Network Adapters)
family.
The driver operates a variety of ENA adapters through feature negotiation
with the adapter and an upgradable command set.
The ENA driver handles PCI physical and virtual ENA functions.
Signed-off-by: Evgeny Schemeilin <evgenys@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Release Note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Implementation of platform specific code for ENA communication layer.
Signed-off-by: Evgeny Schemeilin <evgenys@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Low level common abstraction for ENA device communication.
Signed-off-by: Netanel Belgazal <netanel@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Add a new API rte_eth_dev_get_supported_ptypes to query what packet types
can be filled by a given device. The device should be already started or
its PMD RX burst function already decided, since the packet types supported
may vary depending on RX function.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
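A possible use of the new API (a sketch; it assumes the port is already
started so its RX burst function, and hence the ptype list, is fixed, and
uses the port id width of this DPDK generation):

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    print_l4_ptypes(uint8_t port_id)
    {
        uint32_t ptypes[16];
        int n = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L4_MASK,
                                                 ptypes, 16);

        /* n may exceed the buffer size; only the first 16 were written. */
        for (int i = 0; i < n && i < 16; i++)
            printf("port %u supports L4 ptype 0x%x\n", port_id, ptypes[i]);
    }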
Commit e86a699cf6 missed two further libm dependencies: ceil() used
by librte_meter is typically inlined so the missing dependency does not
actually cause failures, and librte_pmd_nfp is not built by default
so it's easy to miss.
This causes duplicates in LDLIBS in many configurations, so it's vital
they are removed before being passed to the linker.
Fixes: e86a699cf6 ("mk: fix shared library dependencies on libm and librt")
Reported-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Comment for "ierrors" counter says that it counts erroneous received
packets. But for some reason "imissed" counter is added to "ierrors"
counter in most drivers.
It is a mistake, because missed packets are obviously not received.
This patch fixes it.
Fixes: 70bdb18657 ("ethdev: add Rx error counters for missed, badcrc and badlen packets")
Fixes: 6bfe648406 ("i40e: add Rx error statistics")
Fixes: 856505d303 ("cxgbe: add port statistics")
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
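A small sketch of the accounting rule described above; the field names
mirror struct rte_eth_stats, the rest is illustrative:

    #include <stdint.h>

    struct stats_sketch { uint64_t ierrors; uint64_t imissed; };

    static void
    fill_stats_sketch(struct stats_sketch *s, uint64_t rx_errors, uint64_t rx_missed)
    {
        s->ierrors = rx_errors;   /* erroneous packets actually received */
        s->imissed = rx_missed;   /* dropped by HW, never received */
        /* old (wrong): s->ierrors = rx_errors + rx_missed; */
    }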
If a bonded device is created when there are no slave devices
there is a loop in bond_ethdev_promiscuous_enable() which results
in a segmentation fault.
The solution is to initialise the current_primary_port to an
invalid port value when the bonded port is created.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The current code for detecting link during slave addition can cause a
slave interface to be activated twice -- once during slave_configure()
and again at the end of __eth_bond_slave_add_lock_free(). This will
either cause the active slave count to be incorrect or will cause the
802.3ad activation function to panic. Ensure that the interface is not
activated more than once.
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
If the link state of a slave is "up" when added, it is added to the list
of active slaves but, even if it is the only slave, is not selected as
the primary interface. Generally, handling of link state interrupts
selects an interface to be primary, but only if the active count is zero.
This change avoids the situation where there are active slaves but
no primary.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The bonding PMD in mode 4 puts all enslaved interfaces into promiscuous
mode in order to receive LACPDUs and must filter unwanted packets
after the traffic has been "collected". Allow broadcast and multicast
through so that ARP and IPv6 neighbor discovery continue to work.
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Copy all needed fields from the mode8023ad_private structure in
bond_mode_8023ad_conf_get(). This helps ensure that a subsequent call
to rte_eth_bond_8023ad_setup() is not passed uninitialized data that
would result in either incorrect behavior or a failed sanity check.
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Ensure that a bonded slave device is not detached,
until it is removed from the bonded device.
Fixes: 2efb58cbab ("bond: new link bonding library")
Fixes: a45b288ef2 ("bond: support link status polling")
Fixes: 494adb7f63 ("ethdev: add device fields from PCI layer")
Fixes: b1fb53a39d ("ethdev: remove some PCI specific handling")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Check that the bonded device has no slaves before detaching it.
Fixes: 8d30fe7fa7 ("bonding: support port hotplug")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
When a device is created with "CREATE" as action, new rings are
allocated for it, then it is a good practice to free them when the
rte_ethdev_dettach method is invoked by the application.
Rings are not freeded when "ATTACH" is used or when the device is
created by means of the rte_eth_from_rings function.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Rename the nb_rx/tx_queues fields in the internals struct to max_rx/tx_queues.
The renamed fields are needed to keep the maximum configured queue numbers;
for the current queue numbers, the data->nb_rx/tx_queues fields are used.
Also some checkpatch corrections and code cleanup.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
1- Remove duplicate nb_rx/tx_queues fields from internals
2- Move duplicate code into a common function
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Nicolás Pernas Maradei <nicolas.pernas.maradei@emutex.com>
The actual captured length is header.caplen, whereas header.len is
the original length on the wire.
Fixes: 4c173302c3 ("pcap: add new driver")
Signed-off-by: Dror Birkman <dror.birkman@lightcyber.com>
Acked-by: Nicolás Pernas Maradei <nicolas.pernas.maradei@emutex.com>
Allow dynamic deallocation of af_packet device through proper
API functions. To achieve this:
* set device flag to RTE_ETH_DEV_DETACHABLE
* implement rte_pmd_af_packet_devuninit() and expose it
through rte_driver.uninit()
* copy device name to ethdev->data to make discoverable with
rte_eth_dev_allocated()
Moreover, make af_packet init function static, as there is no
reason to keep it public.
Signed-off-by: Wojciech Zmuda <woz@semihalf.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Allow overriding the base mac address of the device.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Remy Horton <remy.horton@intel.com>
During an MTU change, the adapter is restarted. If hardware VLAN offload
is in use, the existing filter table would also be cleared. Instead,
set up the shadow table once during device initialization and just update
it during restart.
vmxnet3_dev_vlan_offload_set(dev, mask) was incorrectly treating the
mask parameter as the bitmask for vlan_strip and vlan_filter, whereas
the mask indicates only what has changed - the values for
vlan_stripping and vlan_filter need to be taken from dev_conf.rxmode.
Fixes: f003fc3834 ("vmxnet3: enable vlan filtering")
Signed-off-by: Charles (Chas) Williams <ciwillia@brocade.com>
Signed-off-by: Nachiketa Prachanda <nprachan@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Remy Horton <remy.horton@intel.com>
Add support for linking multi-segment buffers together to
handle Jumbo packets. The vmxnet3 API supports having header
and body buffer types. What this patch does is fill the primary
ring completely with header buffers and the secondary ring
with body buffers. This allows for non-jumbo frames to only
use one mbuf (from primary ring); and jumbo frames will have
first mbuf from primary ring and following mbufs from other
ring.
This could be optimized in the future if DPDK had an API
to supply different-sized mbufs (two pools) to the driver.
Signed-off-by: Stephen Hemminger <shemming@brocade.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Release note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This commit adds vmxnet3 TSO support.
Verified with test-pmd (set fwd csum) that both tso and
non-tso pkts can be successfully transmitted and all
segments for a tso pkt are correct on the receiver side.
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Tx data ring support was removed in a previous change that
added multi-seg transmit. This change adds it back.
According to the original commit (2e849373), 64B pkt
rate with l2fwd improved by ~20% on an Ivy Bridge
server at which point we start to hit some bottleneck
on the rx side.
I also re-did the same test on a different setup (Haswell
processor, ~2.3GHz clock rate) on top of the master
and still observed ~17% performance gains.
Fixes: 7ba5de417e ("vmxnet3: support multi-segment transmit")
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
All the error checks in virtqueue_enqueue_xmit are already done
by the caller. Therefore they can be removed to improve performance.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Virtio supports a feature that allows the sender to prepend the transmit
header to the data. It requires that the mbuf be writeable and correctly
aligned, and that the feature has been negotiated. If all this works out,
it is the optimal way to transmit a single-segment packet.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
The virtio ring in QEMU/KVM is usually limited to 256 entries
and the normal way that virtio driver was queuing mbufs required
nsegs + 1 ring elements. By using the indirect ring element feature
if available, each packet will take only one ring slot even for
multi-segment packets.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Applied with coding standards fixes:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The virtio_net_hdr descriptors all pointed to the same buffer. This does not
cause an issue because in the simple TX mode we don't use the header. This
patch makes each header descriptor point to a different buffer.
Fixes: b4ae9c505f ("virtio: optimize ring layout")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The initialisation of nb_rx_queues and nb_tx_queues has been removed
from eth_virtio_dev_init.
The nb_rx_queues and nb_tx_queues were being initialised in
eth_virtio_dev_init before the tx_queues and rx_queues arrays were
allocated.
The arrays are allocated when the ethdev port is configured and the
nb_tx_queues and nb_rx_queues are initialised.
If any of the following functions were called before the ethdev
port was configured there was a segmentation fault because
rx_queues and tx_queues were NULL:
rte_eth_stats_get
rte_eth_stats_reset
rte_eth_xstats_get
rte_eth_xstats_reset
Fixes: 823ad64795 ("virtio: support multiple queues")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Fix the issue that virtio device cannot be started after stopped.
The field, hw->started, should be changed by virtio_dev_start/stop instead
of virtio_dev_close.
Fixes: a85786dc81 ("virtio: fix states handling during initialization")
Reported-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Pavel Fedin <p.fedin@samsung.com>
Mmap PCI resource file and add inline functions for reading from and
writing to PCI resource address space.
Add description of IBUF and OBUF address space.
Add configuration option for setting which firmware type will be used.
Right address space values for IBUFs and OBUFs offsets are used
according to configuration option CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS.
Setting link up/down and getting info about link status is done through
mmapped PCI resource address space.
Signed-off-by: Matej Vido <vido@cesnet.cz>
The PMD was of type PMD_VDEV, which means that the PCI device is not
recognised automatically during EAL initialization but has to be created
with the EAL option --vdev.
Now the PMD is of type PMD_PDEV, which means that the PCI device is probed
and recognised automatically during EAL initialization.
The path to the szedata2 device file is matched with the device, and the
number of available RX and TX DMA channels is determined during device
initialization.
Initialization, starting and stopping of queues is changed to better
correspond with Ethernet device API model. Function callbacks
(rx|tx)_queue_(start|stop) are added. Unnecessary items are removed
from ethernet device private data structure.
Signed-off-by: Matej Vido <vido@cesnet.cz>
When using start-stop functionality the per queue fields need to
be properly reset.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Even with tx checksum offload available, do not set the flag by default.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
The mbuf ol_flags field was changed to uint64_t with DPDK version 1.8.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
The file sys/io.h was included but it can be unavailable in some
non-x86 toolchains.
As with other system includes in the file nfp_net.c, it seems unused,
so the easy fix is to remove it.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Change rxq_cq_to_ol_flags() to set checksum flags according to packet type,
so for non L3/L4 packets the mbuf chksum_bad flags will not be set.
Fixes: 67fa62bc67 ("mlx5: support checksum offload")
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
RSS configuration should not be freed when priv is NULL.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Or Ami <ora@mellanox.com>
The first and last memory pool elements are usually cache-aligned but not
page-aligned, particularly when using huge pages.
Hardware performance can be improved significantly by registering memory
regions starting and ending on page boundaries.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
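A sketch of the alignment idea (helper name and structure are
illustrative, not the mlx5 code): extend the region to be registered so
it starts and ends on page boundaries.

    #include <stdint.h>
    #include <unistd.h>

    static void
    page_align_region(uintptr_t start, uintptr_t end,
                      uintptr_t *reg_start, uintptr_t *reg_end)
    {
        /* Assumes the page size is a power of two. */
        const uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);

        *reg_start = start & ~(page - 1);           /* round down */
        *reg_end = (end + page - 1) & ~(page - 1);  /* round up */
    }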
Avoid dereferencing pointers twice to get to fast Verbs functions by
storing them directly in RX/TX queue structures.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Allows HW to strip the 802.1Q header from incoming frames and report it
through the mbuf structure.
This feature requires MLNX_OFED >= 3.2.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Upcoming flow director support will reuse this function to generate filter
rules.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Until now, broadcast frames were handled like unicast. Moving the related
flow to the special flows table frees up the related unicast MAC entry.
The same method is used to handle IPv6 multicast frames.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Merge redundant code by adding a static initialization table to manage
promiscuous and allmulticast (special) flows.
New function priv_rehash_flows() implements the logic to enable/disable
relevant flows in one place from any context.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
In the documentation it is specified that the hardware only supports a
number of RX queues that is a power of 2.
Since ibv_exp_create_qp may not return an error when the number of
queues is unsupported by hardware, sanitize the value in dev_configure.
Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
When compiling with clang 3.6, the mlx4 driver gives the following error
message about an unneeded function.
CC mlx4.o
.../drivers/net/mlx4/mlx4.c:136:20: fatal error: function
'wr_id_t_check' is not needed and will not be emitted
[-Wunneeded-internal-declaration]
static inline void wr_id_t_check(void)
^
1 error generated.
The function is to compile-time check the size of wr_id_t, so use
the standard DPDK BUILD_BUG_ON macro to do so in the init function
instead.
Fixes: 7fae69eeff ("mlx4: new poll mode driver")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch enables reading sglort (global resource tag) info into the
mbuf for RX and inserting an FTAG (Fabric Tag) at the beginning of the
packet for TX. The vlan_tci_outer field selected from rte_mbuf structure
for sglort is not used in fm10k now.
In FTAG based forwarding mode, the switch will forward packets according
to glort info in FTAG rather than mac and vlan table.
To activate this feature, user needs to pass a devargs parameter to eal
for fm10k device like "-w 0000:84:00.0,enable_ftag=1". Currently this
feature is supported only on PF, because FM10K_PFVTCTL register is
read-only for VF.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Remove the unused element request_lport_map in struct fm10k_mac_ops.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Some cleanups to better reflect the code that was actually pushed out to
the upstream Linux community.
Among the above cleanups, a few macros such as FM10K_RXINT_TIMER_SHIFT are
removed, but they are needed in dpdk/fm10k, so we have to put all these
necessary macros into fm10k_osdep.h.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Per comments from an upstream kernel patch, and looking at how TLV
LE_STRUCT code works, we actually want these structures to be 4-byte
aligned, not 1-byte aligned.
In practice, 1-byte alignment has worked so far because all our
structures end up being a multiple of 4. But if a future TLV
structure were added that had a u8 or similar sticking on the end, things
would break. Fix this by using 4-byte alignment, which will prevent the
TLV LE_STRUCT code from breaking. Update the comment explaining that we
need 4-byte alignment of our structures.
Fixes: 925c862cbc ("fm10k/base: pack TLV overlay structures")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The comment for fm10k_iov_msg_lport_state_pf was changed during
review of kernel driver, and the new wording is slightly clearer.
Re-write the comment in base code based on this new wording.
Fix a number of mailbox comment issues with function header comments,
lower-case acronyms (i.e. FIFO, TLV), incorrect function names in
DEBUGFUNC(), duplicate comments and a stubbed-out header comment for
fm10k_sm_mbx_init.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The vid variable name is shorthand for VLAN ID, so we should use this in
comments explaining what is happening.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The Linux Kernel provides the OS a call "pcie_get_minimum_link" which
can crawl the PCIe tree and determine the actual minimum link speed of a
device which is a more general check than provided by
is_slot_appropriate. Thus, the kernel driver does not use or want the
is_slot_appropriate function call. Add a NO_IS_SLOT_APPROPRIATE_CHECK
definition which can be defined to remove the code.
If left undefined (the default) then the code will all be active and no
driver changes should be necessary.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Use memcpy instead of copying MAC address byte-by-byte.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Using the BIT macro can simplify the bit-shifting operation and make the
code look clean. Similar to how this is handled in the i40e base code,
define a macro for it in DPDK, so it can be used here too.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
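A minimal sketch of the kind of helper described above (the exact macro
name and operand width in fm10k_osdep.h may differ):

    #include <stdio.h>

    #define BIT(n)  (1U << (n))

    int main(void)
    {
        printf("BIT(3) = 0x%x\n", BIT(3));  /* prints 0x8 */
        return 0;
    }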
"else" is not generally useful after a break or return.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The recommended maximum line length is 80 characters.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Add comments which properly explain the undocumented use of bits in
TDLEN register prior to VF initializing it to the correct value. Note
that the mechanism is entirely software-defined and explain its purpose
to help reduce confusion in the future.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
VF drivers must detect how many queues are available. Previously, the
driver assumed that each VF has at minimum 1 queue. This assumption is
incorrect, since it is possible that the PF has not yet assigned the
queues to the VF by the time the VF checks.
To resolve this, we added a check first to ensure that the first queue
is, in fact, owned by the VF at init_hw_vf time.
However, the code flow did not reset hw->mac.max_queues to 0.
In some cases, such as during reinit flows, we call init_hw_vf
without clearing the previous value of hw->mac.max_queues. Due to this,
when init_hw_vf errors out, if its error code is not properly handled
the VF driver may still believe it has queues which no longer belong to
it. Fix this by clearing the hw->mac.max_queues on exit due to errors.
Fixes: 8b8264bdb9 ("fm10k/base: check VF has a queue")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Use bitshift instead of a divisor, because this is faster, and
eliminates any need for a '0' check. In our case, this even works
out because default Gen3 will be 0.
Because of this, we are also able to remove the check for non-zero value
in the VF code path since that will already be the default Gen3 case.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Make functions that are only referenced locally static.
Wrap fm10k_msg_data fm10k_iov_msg_data_pf[] in the new ifndef
NO_DEFAULT_SRIOV_MSG_HANDLERS so that drivers with custom SR-IOV
message handlers can strip it.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Since the resultant data type of the mac_update.mac_upper field is u16,
it does not make sense to typecast u8 variables to u32 first.
Fixes: 7223d200c2 ("fm10k: add base driver")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The new shared code makes the fm10k_msg_update_pvid_pf function static, so
we can no longer refer to it in fm10k_ethdev.c. The registered PF handler is
almost the same as the default PF handler, so removing it has no impact on
the mailbox.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Use SSE instructions to parse the error flags in the HW Rx descriptor,
then set the corresponding bits in the mbuf.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
When the TX function tries to free a bunch of mbufs, it frees
them one by one. This change scans the free list and merges the
requests when they belong to the same pool, then frees them at once,
which reduces the cycles spent on freeing mbufs.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
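A sketch of the batching idea (not the exact fm10k code, and simplified
by assuming single-segment mbufs with a reference count of 1): collect
consecutive mbufs that come from the same mempool and return them with
one bulk call instead of freeing them one by one.

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static void
    tx_free_bulk_sketch(struct rte_mbuf **txep, unsigned int n)
    {
        void *free_buf[64];
        unsigned int nb_free = 0;
        struct rte_mempool *mp = NULL;

        for (unsigned int i = 0; i < n; i++) {
            struct rte_mbuf *m = txep[i];

            /* Flush the pending batch when the pool changes or it is full. */
            if (nb_free > 0 && (m->pool != mp || nb_free == 64)) {
                rte_mempool_put_bulk(mp, free_buf, nb_free);
                nb_free = 0;
            }
            mp = m->pool;
            free_buf[nb_free++] = m;
        }
        if (nb_free > 0)
            rte_mempool_put_bulk(mp, free_buf, nb_free);
    }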
The fm10k switch core uses source MAC + VID + SGLORT to do
a lookup in the MAC table. If there is no match, an exception interrupt
is sent to the switch manager. Too many of these
exception interrupts cause high CPU usage on the switch manager
side.
To reproduce this issue, run one DPDK testpmd on a server
with one fm10k NIC and MAC-forward test traffic from one of the
fm10k ports to another port. The CPU usage of the switch
manager goes up to about 20% at a test traffic rate of
10 Gbps, compared to near 0% with no test traffic.
This patch fixes this issue. A default SGLORT is assigned
to each TX queue. This default value works for non-VMDq mode
and current VMDq example. For advanced VMDq usage, e.g.
different source MAC address for different TX queue, FTAG
forwarding function could be used to change this default
SGLORT value.
Fixes: 9ae6068c86 ("fm10k: add dev start/stop")
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
In FM10K, a single PCIe port can derive several logical ports,
such as SRIOV PF/VF devices and VMDQ objects. To better manage them, the
FM10K silicon assigns a unique GLORT ID to each logical port.
When a logical port sends a broadcast packet, the silicon will flood
it to all logical ports, including the one that sent the broadcast packet.
To prevent this, the silicon has an rxq register to store the GLORT ID of
the logical port that the queue binds to.
FM10K has a switch core inside, which has a loopback suppression
mechanism in the switch level. Switch level loopback suppression mostly
works for the ether port traffic.
This patch assigns a SGLORT for each RX queue, and enables PCIe port
level loopback suppression.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
When the PF establishes a connection with Switch Manager(SM), it receives
a logical port range from SM, and registers certain logical ports from
that range. Then a default VID will be sent back from the SM.
This whole transaction - finishing with the default VID being set -
needs to be completed before dev_init returns. If not, the interrupt
setting will subsequently be changed in dev_start according to the RX
queue number, and that can cause this transaction to fail.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
Interrupt mode framework has per-queue enable/disable functions.
Implement these two functions for fm10k driver.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
The previous dev_stop function only stops the rx/tx queues. This patch adds
logic to disable the rx queue interrupt and clean up the datapath event and
queue/vector map.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
In interrupt mode, each rx queue can have one interrupt to notify the
application when packets are available in that queue. Some queues
also can share one interrupt.
Currently, fm10k needs one separate interrupt for mailbox. So, only those
drivers which support multiple interrupt vectors e.g. vfio-pci can work
in fm10k interrupt mode.
This patch uses the RXINT/INT_MAP registers to map interrupt causes
(rx queue and other events) to vectors, and enable these interrupts
through kernel drivers like vfio-pci.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
rx_descriptor_done is used by the interrupt mode example application
(l3fwd-power) to check the rxd DD bit and determine the RX trend;
l3fwd-power then adjusts the CPU frequency according to
the result.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
In fm10k, a PF, VF, VMDQ, or queues bound to a flow director rule can
be considered a logical port. The original implementation only creates
a single port for all cases. This change creates 128 logical ports:
the first 64 for PF and VMDQ, the second 64 for flow director.
The DGLORTDEC/DGLORTMAP registers define rules for how to classify packets
into different queues. Currently only the PF and VMDQ cases are considered.
This change adds rules for flow director.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
In the fm10k_recv_scattered_pkts function, a packet is stored in a linked
list; offload flags such as PKT_RX_VLAN_PKT should be set in the first segment.
Fixes: 6b59a3bc82 ("fm10k: fix VLAN in Rx mbuf")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
This patch implements the ops for adding and removing MAC
addresses in the i40evf driver. The functions are assigned as follows:
.mac_addr_add = i40evf_add_mac_addr,
.mac_addr_remove = i40evf_del_mac_addr,
To support setting multiple MAC addresses, this patch also
extends MAC address addition and deletion at device
start and stop. Each VF can have a maximum of 64 MAC
addresses.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
The VEB switching feature for i40e enables switching between the
VSIs connected to the virtual bridge. The old implementation set the
virtual bridge mode to VEPA, which is port aggregation. Enable switching
by setting the loopback mode for the specific VSIs which connect
to the PF or VFs.
VEB/VSI/VEPA are concepts not specific to the i40e HW; they come from
the 802.1Qbg spec.
IEEE EVB tutorial:
http://www.ieee802.org/802_tutorials/2009-11/evb-tutorial-draft-20091116_v09.pdf
VEB: a virtual switch that can forward packets based on a specific match
field.
VSI: a virtual interface connecting the VEB/VEPA and a virtual machine.
VEPA: a virtual Ethernet port aggregator that upstreams packets from a
VSI to the LAN port.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
This patch fixes a typo in a comment in the definition of
the i40e_pf struct.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Rami Rosen <rami.rosen@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Previously, DCB (Data Center Bridging) was only enabled on the PF;
queue mapping and BW configuration were only done on the PF.
This patch enables DCB for VMDQ VSIs(Virtual Station Interfaces)
by following steps:
1. Take BW and ETS(Enhanced Transmission Selection)
configuration on VEB(Virtual Ethernet Bridge).
2. Take BW and ETS configuration on VMDQ VSIs.
3. Update TC(Traffic Class) and queues mapping on VMDQ VSIs.
To enable DCB on VMDQ, the number of TCs should not be larger than
the number of queues in VMDQ pools, and the number of queues per
VMDQ pool is specified by CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
in config/common_* file.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
It removes the i40evf_set_mac_type() defined in PMD, and reuses
i40e_set_mac_type() defined in base driver.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
It adds base driver release information such as release date,
for better tracking in the future.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Several structures and macros are added or updated, such
as 'struct i40e_aqc_get_link_status',
'struct i40e_aqc_run_phy_activity' and
'struct i40e_aqc_lldp_set_local_mib_resp'.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
It adds the new AQ command and struct for managing a
thermal sensor.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
X722 supports Expanded version of TCP, UDP PCTYPES for RSS.
Add a Virtchnl offload to support this.
Without this patch VF drivers will not be able to support
the correct PCTYPES for X722 and UDP flows will not fan out.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch adds 7 new register definitions for programming the
parser, flow director and RSS blocks in the HW.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
RX control register read/write functions are added, as direct
reads/writes may fail under stress with small traffic. After the
adminq is ready, all RX control registers should be read/written
by the dedicated functions.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
When updating a VSI, save off the number of allocated and
unallocated VSIs as we do when adding a VSI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Fix the driver load failure when linking with some
PHY types, as the amount of time it takes for the
GLGEN_RSTAT_DEVSTATE to be set increases greatly on those PHY
types, which can lead to a timeout.
Fixes: 9aeefed055 ("i40e/base: support ESS")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
In Multi-Function Mode (MFP), particularly when the PF VSI is set
in limited promiscuous mode, the HW switch was still mirroring the
outgoing packets from other VSIs (VF/VMDq) onto the PF VSI.
This patch sets a new bit to avoid that mirroring when the PF VSI is
in limited promiscuous mode in MFP, similar to the default port
VSI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch adds functions to blink led on devices using a new
PHY since MAC registers used in other designs do not work in
this device configuration.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Add the support code for calling the AdminQ API call
aq_set_switch_config.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
With the latest firmware, statistics gathering can now be enabled and
disabled in the HW switch, so we need to add a parameter to allow the
driver to set it as desired. At the same time, the L2 cloud filtering
parameter has been removed as it was never used.
Older drivers working with the newer firmware and newer drivers working
with older firmware will not run into problems with these bits as the
defaults are reasonable and there is no overlap in the bit definitions.
Also, newer drivers will be forced to update because of the change in
function call parameters, a reminder that the functionality exists.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch implements necessary functions related to port
mirroring features such as add/delete mirror rule, function
to set promiscuous VLAN mode for VSI if mirror rule_type is
"VLAN Mirroring".
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Add the use of the new shared MAC filter bit for multicast
and broadcast filters in order to make better use of the
filters available from the device.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch fixes a problem where the NVMUpdate Tool, when
using the PHY NVM feature, gets bad data from the PHY because
of contention on the MDIO interface from get phy capability
calls from the driver during regular operations. The problem
is fixed by checking whether media is available before calling the
get phy capability function, because that bit is not set when the
device is in PHY interaction mode.
Fixes: 842ea19963 ("i40e/base: save link module type")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
The device capabilities were defined in two places, and neither had
all the definitions. It really belongs with the AQ API definition,
so this patch removes the other set of definitions and fills out the
missing item.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
The recently added proxy opcodes should be available only with
X722_SUPPORT, so move them into the #ifdef, and reorder these
to be in numerical order with the rest of the opcodes. Several
structs that were added are unnecessary, so they are removed
here.
Fixes: 788fc17b2d ("i40e/base: support proxy config for X722")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
The recently added Wakeup On Line (WOL) opcodes should be
available only with X722_SUPPORT, so move them into the #ifdef,
and reorder these to be in numerical order with the rest of the
opcodes. Several structs that were added are unnecessary, so
they are removed here.
Fixes: 3c89193a36 ("i40e/base: support WOL config for X722")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Add new device IDs for backplane and QSFP+ adapters, and delete the
deprecated one for backplane.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
In one obscure corner case, it was possible to clear the NVM update
wait flag when no update_done message was actually received. This
patch cleans the event descriptor before use, and moves the opcode
check to where it won't get done if there was no event to clean.
Fixes: 8db9e2a1b2 ("i40e: base driver")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
The standard way to check if the AQ is enabled is to look at
the count field. So it should only set this field after it has
successfully allocated memory.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
It's possible that while waiting for the spinlock, another
entity (that owns the spinlock) has shut down the admin queue.
If it then attempts to use the queue, it will panic.
It adds a check for this condition on the receive side. This
matches an existing check on the send queue side.
Fixes: 8db9e2a1b2 ("i40e: base driver")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
XL710/X710 devices require FW version checks to properly handle
DCB configurations from the FW, while other devices (e.g. X722)
do not, so limit these checks to XL710/X710 only.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
In X722, NVM reads can't be done through SRCTL registers;
they require AQ calls, which require grabbing the NVM lock.
Unfortunately some paths need to acquire the lock once,
do a whole bunch of work, and then release it.
This patch creates an unsafe version of the read calls, so
that it can be called from the paths that need the bulk access.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Instead of doing the MAC check, use a flag that gets set per
MAC. This way there are fewer chances of user error and it
can enable multiple MACs with the capability in a single place
rather than cluttering the code with MAC checks.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
SW needs to acquire NVM ownership before issuing an AQ read
to the X722 NVM, otherwise it will get EBUSY from the firmware.
The ownership should also be released when done.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Fix compilation warnings in base code on some platforms.
Fixes: bd6651c2d2 ("i40e/base: use bit shift macros")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Use the ether API 'is_valid_assigned_ether_addr' to validate
the MAC address.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Generate a MAC address for each VF during PF host
initialization.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
VLAN filtering was always performed, even if hw_vlan_filter was
disabled. During device initialization, default filter
RTE_MACVLAN_PERFECT_MATCH was applied. In this situation, all incoming
VLAN frames were dropped by the card (increase of the register RUPP - Rx
Unsupported Protocol).
In order to restore default behavior, if HW VLAN filtering is activated,
set a filter to match MAC and VLAN. If not, set a filter to only match
MAC.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: 912b595146 ("i40e: mac vlan filter")
Signed-off-by: Julien Meunier <julien.meunier@6wind.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The no-refcount path was being taken without the application opting
in to it.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Reported-by: Mike Stolarchuk <mike.stolarchuk@bigswitch.com>
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The multi queue mode ETH_MQ_RX_VMDQ_DCB_RSS is not supported in
ixgbe driver.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>