The Tx functions were in enic_ethdev.c and enic_main.c - files in which
they did not logically belong. To make things consistent with most
other drivers, we therefore extract them and place them with the equivalent
Rx functions into a file called enic_rxtx.c.
Signed-off-by: John Daley <johndale@cisco.com>
Truncated packets occur on enic if an mbuf is not big enough to
receive a packet, or if there aren't enough mbufs when Rx scatter is
in use. They show up as error packets, but unlike other error packets
(such as packets with a bad FCS) no NIC drop counters are incremented
for them. Truncated packets are therefore calculated by subtracting
hardware errors from software errors. Note: this causes transient
inaccuracies in the ipackets count. Also, the lengths of truncated
packets are counted in ibytes even though truncated packets are
dropped, which can make ibytes slightly higher than it should be.
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Signed-off-by: John Daley <johndale@cisco.com>
Following the discussions from:
http://dpdk.org/ml/archives/dev/2015-July/021721.html
http://dpdk.org/ml/archives/dev/2016-April/038143.html
Remove the unused flag from the enic driver. Also, the enic driver is
now modified to drop bad packets instead of using a non-existent
flag to try to identify them as bad.
Fixes: 947d860c82 ("enic: improve Rx performance")
Fixes: 5776c30293 ("enic: fix error packets handling")
Fixes: 50765c820e ("enic: remove packet error conditional")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: John Daley <johndale@cisco.com>
rx_no_bufs is a hardware counter of packets dropped on the
interface due to no host buffers and should be used to update
r_stats->imissed counter instead of rx_nombuf.
Include rx_drop in ierrors. rx_drop is incremented if packets
arrive when the receive queue is disabled.
Add a structure and functions for initializing and clearing
software counters. Add count of Rx mbuf allocation failures
(rx_nombuf) as the first counter.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
GCC_VERSION is empty in case of clang:
/bin/sh: line 0: test: -ge: unary operator expected
It is the same issue as http://dpdk.org/dev/patchwork/patch/5994/
Fixes: 366113dbfb ("e1000: suppress misleading indentation warning")
Signed-off-by: Hiroyuki Mikita <h.mikita89@gmail.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: John W. Linville <linville@tuxdriver.com>
Suspicious implicit sign extension: pf->fdir.match_counter_index
with type unsigned short (16 bits, unsigned) is promoted in
"pf->fdir.match_counter_index << 20" to type int (32 bits, signed),
then sign-extended to type unsigned long (64 bits, unsigned).
If "pf->fdir.match_counter_index << 20" is greater than 0x7FFFFFFF,
the upper bits of the result will all be 1.
To fix the issue, explicitly cast pf->fdir.match_counter_index to uint32_t.
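As an illustrative sketch of the promotion pitfall and the cast-based fix (idx stands in for pf->fdir.match_counter_index; this is not the driver's actual code):
#include <stdint.h>
static uint64_t build_fdir_field(uint16_t idx)
{
	/* Without the cast, idx << 20 is computed as a signed int; if bit 31
	 * of that intermediate result is set, converting it to a 64-bit
	 * unsigned type sign-extends and fills the upper 32 bits with ones.
	 * Casting to uint32_t first keeps the computation unsigned. */
	return (uint64_t)((uint32_t)idx << 20);
}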
Coverity issue: 13315
Fixes: 05999aab4c ("i40e: add or delete flow director")
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch enables configuring MTU for i40e.
Since changing the MTU requires reconfiguring the queues, the port must
be stopped before configuring the MTU.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
When setting up the flexible payload selection rules, the value
NONUSE_FLX_PIT_DEST_OFF (== 63) is meant to disable the rule.
However, since the MK_FLX_PIT macro always added an additional
offset of I40E_FLX_OFFSET_IN_FIELD_VECTOR (== 50) to the value passed,
the functionality to disable the rule was broken.
This patch fixes this by checking for the disable value and not adding
the offset in that case.
Fixes: d8b90c4eab ("i40e: take flow director flexible payload configuration")
Reported-by: Michael Habibi <mikehabibi@gmail.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Previously, there was a known issue: "On Intel® 40G Ethernet
Controller stopping the port does not really down the port link."
There were two reasons why the port was always kept up.
1. Old firmware versions had issues when "Set PHY config command"
was used on 40G NICs.
2. The kernel i40e driver didn't call "Set PHY config command" when
ifconfig up/down was used; it assumed the link was always up. But
in DPDK, ports are forced down when an application quits. So if
the port is then switched to being controlled by the kernel driver,
the port cannot be brought up through "ifconfig <ethx> up".
This patch fixes the issue by adding "Set PHY config command"
to our driver. This is now possible because with newer firmware
there is no longer a problem using this command.
With this fix, after the DPDK application quits, if the port is switched
to being used by the kernel driver, "ethtool -s <ethx> autoneg on" can
be used to turn auto-negotiation back on, and then the port can be
brought up through "ifconfig <ethx> up".
NOTE: requires kernel i40e driver version >= 1.4.X
Fixes: 2f1e228174 ("i40e: skip link control as firmware workaround")
Fixes: 16c979f9ad ("i40e: disable setting of PHY configuration")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Change the Tx routine to ring the doorbell once per burst
and not on every Tx packet. This driver-level optimization
is necessary to achieve line rates for larger frame
sizes (1k or more).
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
- Process Tx completions based on the configured Tx free threshold and
determine how many Tx BDs are required before invoking bnx2x_tx_encap()
- Change bnx2x_tx_encap() to a void function as it can now never fail
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Under certain scenarios, management firmware (MFW) periodically polls
the driver for LAN statistics. This patch implements the osal hook to
fill in the stats.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Program the PCIe completion timeout to 4 sec to give enough time
to allow completions to be received successfully in some older systems.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
To be consistent with the naming for ARM NEON implementation,
ixgbe_rxtx_vec.c is renamed to ixgbe_rxtx_vec_sse.c.
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Previously, if an adminq message is sent successfully, but no response is
received, function "i40evf_execute_vf_cmd" will return without error.
The root cause is that the value of "err" is overwritten. This patch
fixes this by ensuring the value of err is set appropriately for each cmd.
Fixes: ae19955e7c ("i40evf: support reporting PF reset")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Use ARM NEON intrinsic to implement ixgbe vPMD
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
[style fixes as highlighted by checkpatch.pl]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Move scalar code which does not use x86 intrinsic functions to a new
file, "ixgbe_rxtx_vec_common.h", while keeping x86 code in
ixgbe_rxtx_vec.c. This allows the scalar code to be shared among vector drivers for
different platforms.
Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The VLAN tag information should be stored in the first mbuf of a chain
of buffers, not in the last one.
Fixes: 9fd5e98b62 ("vmxnet3: support RSS and refactor Rx offload")
Signed-off-by: John Guzik <john@shieldxnetworks.com>
Acked-by: Yong Wang <yongwang@vmware.com>
When linking drivers as shared libraries, the dependencies need
to be marked as DT_NEEDED entries.
The crypto dependencies (libsso and libIPSec) are static libraries.
To make them linked into the shared PMDs, the code must be relocatable:
- libIPSec_MB.a must be built with -fPIC
- libsso_kasumi.a must be built with KASUMI_CFLAGS=-DKASUMI_C
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Fixes: eec136f3c5 ("aesni_gcm: add driver for AES-GCM crypto operations")
Fixes: 3aafc423cf ("snow3g: add driver for SNOW 3G library")
Fixes: 2773c86d06 ("crypto/kasumi: add driver for KASUMI library")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some libraries were missing their dependency on eal, mbuf, mempool,
ring and kvargs.
It is revealed by the linker option "-z defs".
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
There is no need to have this parsing inlined in the header.
It brings a kvargs dependency to every crypto driver.
The functions are moved into rte_cryptodev.c.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The compilation for 32-bit fails when CONFIG_RTE_VIRTIO_USER is enabled:
drivers/net/virtio/virtio_user_ethdev.c:84:47:
error: format ‘%llu’ expects argument of type ‘long long unsigned int’,
but argument 5 has type ‘size_t {aka unsigned int}’
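A hedged sketch of one portable form (not necessarily the exact fix applied): the z length modifier matches size_t on both 32-bit and 64-bit builds.
#include <stdio.h>
#include <stddef.h>
static void print_region_len(size_t len)
{
	/* size_t is 32-bit on i686 and 64-bit on x86_64; %zu fits both,
	 * unlike a hard-coded %llu. */
	printf("region length: %zu\n", len);
}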
Fixes: e9efa4d938 ("net/virtio-user: add new virtual PCI driver")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
NSH packet can be recognized by Intel X710/XL710 series.
This patch enables the new packet type.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yulong Pei <yulong.pei@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
In the following loop:
while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
...
}
There is no external function call or any explicit memory barrier
in the loop, so the re-read of used->idx might be optimized away and
the value retrieved only once.
Use of volatile should normally be prohibited, and ACCESS_ONCE
is the Linux kernel's style for handling this issue; once we have that
macro in DPDK, we could change to that style.
virtio_recv_mergeable_pkts might also have the same issue, so fix
it as well.
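A minimal sketch of the ACCESS_ONCE-style idiom described above, with generic stand-in types rather than the PMD's vring definitions:
#include <stdint.h>
struct used_ring { uint16_t idx; };
#define ACCESS_ONCE_U16(x) (*(volatile uint16_t *)&(x))
static void drain_used_ring(struct used_ring *used, uint16_t *cons_idx)
{
	/* The volatile access forces a fresh load of used->idx on every
	 * iteration instead of letting the compiler hoist it out. */
	while (*cons_idx != ACCESS_ONCE_U16(used->idx)) {
		/* ... process one used element ... */
		(*cons_idx)++;
	}
}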
Fixes: 823ad64795 ("virtio: support multiple queues")
Fixes: 13ce5e7eb9 ("virtio: mergeable buffers")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Trying to access xstats_names after "if (xstats_names == NULL)" is
obviously wrong, which would result in a crash while running "show
port xstats 0" in testpmd with virtio PMD.
The fix is straightforward; just reverse the check.
Fixes: baf91c395b ("net/virtio: fetch extended statistics with integer ids")
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
In the virtio-user driver, when the ctrl queue is notified, invoke the
API of the virtio-user device emulation to handle the ctrl-queue command.
Besides, multi-queue requires a ctrl queue, so the ctrl queue is
enabled automatically when multi-queue is specified.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The main purpose of this patch is to enable multi-queue. But
multi-queue requires a ctrl queue so that the driver can communicate,
through ctrl-queue messages, how many queues will be enabled.
So we partially implement the ctrl queue to handle control commands
with the class VIRTIO_NET_CTRL_MQ and the command
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET for MQ support. This patch
provides a function, virtio_user_handle_cq(), for the driver to handle
ctrl-queue messages.
Besides, multi-queue requires that VIRTIO_NET_F_MQ and
VIRTIO_NET_F_CTRL_VQ be enabled during feature negotiation.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch mainly adds a method in the vhost-user adapter to exchange
enable/disable queue messages with the vhost-user backend, aka
VHOST_USER_SET_VRING_ENABLE.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add a new virtual device named virtio-user, which can be used just like
eth_ring, eth_null, etc. To reuse the code of the original virtio, we
make some adjustments in virtio_ethdev.c, such as removing the static
keyword from eth_virtio_dev_init() so that it can be reused by the
virtual device; and we add some checks to make sure it will not crash.
Configured parameters include:
- queues (optional, 1 by default), number of queue pairs, multi-queue
not supported for now.
- cq (optional, 0 by default), not supported for now.
- mac (optional), random value will be given if not specified.
- queue_size (optional, 256 by default), size of virtqueues.
- path (mandatory), path of the vhost-user socket.
When CONFIG_RTE_VIRTIO_USER is enabled (it is by default), the compiled
library can be used in both VM and container environments.
Examples:
path_vhost=<path_to_vhost_user> # use vhost-user as a backend
sudo ./examples/l2fwd/build/l2fwd -c 0x100000 -n 4 \
--socket-mem 0,1024 --no-pci --file-prefix=l2fwd \
--vdev=virtio-user0,mac=00:01:02:03:04:05,path=$path_vhost -- -p 0x1
Known issues:
- Control queue and multi-queue are not supported yet.
- Cannot work with --huge-unlink.
- Cannot work with no-huge.
- Cannot work when there are more than VHOST_MEMORY_MAX_NREGIONS(8)
hugepages.
- Root privilege is a must (mainly because of sorting hugepages according
to physical address).
- Applications should not use file name like HUGEFILE_FMT ("%smap_%d").
- Cannot work with vhost-net backend.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch is related to how to calculate relative address for vhost
backend.
The principle is that: based on one or multiple shared memory regions,
vhost maintains a reference system with the frontend start address,
backend start address, and length for each segment, so that each
frontend address (GPA, Guest Physical Address) can be translated into
vhost-recognizable backend address. To make the address translation
efficient, we need to maintain as few regions as possible. In the case
of a VM, GPA is always locally contiguous. But in some other cases, like
virtio-user, GPA contiguity is not guaranteed; therefore, we use virtual
addresses here.
It basically means:
a. when set_base_addr, VA address is used;
b. when preparing RX's descriptors, VA address is used;
c. when transmitting packets, VA is filled in TX's descriptors;
d. in TX and CQ's header, VA is used.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch moves phys addr check from virtio_dev_queue_setup
to the pci ops. To make that happen, make sure virtio_ops.setup_queue
returns whether the check passed.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
We skip kernel-managed virtio devices if they aren't whitelisted.
Before checking if the virtio device is whitelisted, check if devargs
is specified.
Fixes: ac5e1d838d ("virtio: skip error when probing kernel managed device")
Reported-by: Vincent Li <vincent.mc.li@gmail.com>
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add a new parameter (flags) to rte_vhost_driver_register(). DPDK
vhost-user acts in client mode when the RTE_VHOST_USER_CLIENT flag
is set. The flags would also allow future extensions without
breaking the API (again).
The rest is then straightforward: allocate a unix socket, and
bind/listen for the server, connect for the client.
This extension is for vhost-user only, therefore we simply quit
and report error when any flags are given for vhost-cuse.
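A usage sketch of the extended API, assuming the vhost-user header and flag names introduced by this series:
#include <stdint.h>
#include <rte_virtio_net.h>
static int register_vhost_socket(const char *path, int as_client)
{
	/* 0 keeps the historical server mode; RTE_VHOST_USER_CLIENT makes
	 * DPDK connect to an existing vhost-user server socket instead. */
	uint64_t flags = as_client ? RTE_VHOST_USER_CLIENT : 0;
	return rte_vhost_driver_register(path, flags);
}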
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
With all the previous prepare works, we are just one step away from
the final ABI refactoring. That is, to change current API to let them
stick to vid instead of the old virtio_net dev.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
This change could let us avoid the dependency of "virtio_net"
struct, to prepare for the ABI refactoring.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_ifname() to export the ifname to
application.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_queue_num() to export the number of
queues.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_numa_node() to get the numa node
from which the virtio_net struct is allocated.
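A sketch of how an application's new_device() callback could use the vid-based getters added in this series (prototypes assumed per the 16.07 vhost API; error handling omitted):
#include <stdio.h>
#include <rte_virtio_net.h>
static int my_new_device(int vid)
{
	char ifname[64];
	int node = rte_vhost_get_numa_node(vid);
	uint32_t queues = rte_vhost_get_queue_num(vid);
	rte_vhost_get_ifname(vid, ifname, sizeof(ifname));
	printf("vhost device %s: %u queue pairs, NUMA node %d\n",
	       ifname, (unsigned)queues, node);
	return 0;
}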
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
It does not make sense to ask the application to set/unset the flag
VIRTIO_DEV_RUNNING (which is used internally only) at the new_device()/
destroy_device() callbacks.
Instead, it should be set after new_device() succeeds and reset before
destroy_device() is invoked inside vhost lib. This patch fixes it.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
We keep a common vq structure, containing only vq related fields,
and then split others into RX, TX and control queue respectively.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
[Jianfeng Tan: found and fixed 2 bugs]
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The commit dd856dfcb9 introduced an optimization that prepends virtio
header to mbuf data. It can be used when the tx mbuf is writeable, so we
need to check that the mbuf is direct (i.e. it embeds its own data).
Fixes: dd856dfcb9 ("virtio: use any layout on Tx")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The class id is not filled, which makes probing fail.
Updated the code to use RTE_PCI_DEVICE which fills
the class id with a wildcard value.
Fixes: 701c8d80c8 ("pci: support class id probing")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso library supports the SSE4.1 instruction set,
so the feature flags of the crypto device must be updated
to reflect this.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso_snow3g library now supports bit-level
operations, so the PMD has been updated to allow them.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
In order to avoid using magic numbers, macros for
the IV and digest lengths for Snow3G have been added.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso library that SNOW3G PMD uses has been updated,
so now it is called libsso_snow3g. Also, the path to the library
has been renamed to reflect these changes (now called LIBSSO_SNOW3G_PATH).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Added a new SW PMD which makes use of the libsso_kasumi SW library,
which provides wireless algorithms KASUMI F8 and F9
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
- RTE_CRYPTO_SYM_AUTH_KASUMI_F9
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The variables AESNI_MULTI_BUFFER_LIB_PATH and LIBSSO_PATH
are not required for "make clean".
It is the same fix as in the commit e277b2397.
Fixes: eec136f3c5 ("aesni_gcm: add driver for AES-GCM crypto operations")
Fixes: 3aafc423cf ("snow3g: add driver for SNOW 3G library")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The current extended ethernet statistics fetching involves several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the test-pmd
and proc_info applications to use the new xstats API, and removes
deprecated code associated with the old API.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the virtio
driver to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the i40e
driver to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the fm10k
driver to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the e1000
driver to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the ixgbe
driver to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
Although ppc supports both endiannesses, qemu assumes that the cpu is
big endian and enforces this for the virtio-net stuff.
Fix PCI accesses in legacy mode. Only ppc64le is supported at the moment.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The behavior of PKT_RX_VLAN_PKT was not very well defined, resulting in
PMDs not advertising the same flags in similar conditions.
Following discussion in [1], introduce 2 new flags PKT_RX_VLAN_STRIPPED
and PKT_RX_QINQ_STRIPPED that are better defined:
PKT_RX_VLAN_STRIPPED: a vlan has been stripped by the hardware and its
tci is saved in mbuf->vlan_tci. This can only happen if vlan stripping
is enabled in the RX configuration of the PMD.
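As an illustration (a sketch, not any particular driver's Rx code), a PMD that performs stripping is expected to report it like this:
#include <rte_mbuf.h>
static inline void mark_vlan_stripped(struct rte_mbuf *m, uint16_t tci)
{
	m->vlan_tci = tci;
	/* The deprecated PKT_RX_VLAN_PKT is kept for compatibility; the new
	 * flag carries the precise "tag was stripped by hardware" meaning. */
	m->ol_flags |= PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED;
}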
For now, the old flag PKT_RX_VLAN_PKT is kept but marked as deprecated.
It should be removed from applications and PMDs in a future revision.
This patch also updates the drivers. For PKT_RX_VLAN_PKT:
- e1000, enic, i40e, mlx5, nfp, vmxnet3: done, PKT_RX_VLAN_PKT already
had the same meaning as PKT_RX_VLAN_STRIPPED, only a minor update is
required.
- fm10k: done, PKT_RX_VLAN_PKT already had the same meaning as
PKT_RX_VLAN_STRIPPED, and vlan stripping is always enabled on fm10k.
- ixgbe: modification done (vector and normal), the old flag was set
when a vlan was recognized, even if vlan stripping was disabled.
- the other drivers do not support vlan stripping.
For PKT_RX_QINQ_PKT, it was only supported on i40e, and the behavior was
already correct, so we can reuse the same bit value for
PKT_RX_QINQ_STRIPPED.
[1] http://dpdk.org/ml/archives/dev/2016-April/037837.html
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The SYSFS_PCI_DEVICES constant makes PCI testing difficult as it points
to an absolute path. We stop using this constant and introduce a
function, pci_get_sysfs_path(), that gives the same value. However, the
user can set a SYSFS_PCI_DEVICES environment variable to override the
path. It is now possible to create a fake sysfs hierarchy for testing.
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
This patch adds missing DEPDIRS to avoid any library referring to
symbols they are not linked against.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Add the missing external dependency to pthread to avoid referring to
symbols the library is not linked against.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Up to now dependencies between DPDK internal libraries have been
untracked at shared library level, requiring applications to know
about library internal dependencies and often consequently overlinking.
Since the dependencies are already recorded for build ordering in the
makefiles with DEPDIRS-y we can use that information to generate LDLIBS
entries for internal libraries automatically.
Also revert commit 8180554d82 ("vhost: fix linkage of driver with
library") which is made redundant by this change.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
This patch provides counter mode support to AES-NI multi-buffer library.
The following cipher algorithm is enabled:
- RTE_CRYPTO_CIPHER_AES_CTR
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added possibility for AES to work in counter mode
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
To avoid the GCC warning about "dereferencing type-punned pointer will
break strict-aliasing rules", an aad_len pointer is dereferenced instead
of directly dereferencing a uint32_t* cast into the middle of an array.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Fix an error with computation of physical address of
content descriptor in the symmetric operations session
Fixes: 1703e94ac5 ("qat: add driver for QuickAssist devices")
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Fix null pointer dereferencing by reporting the error and exiting the
function when the pointer is null.
Coverity issue: 126584
Fixes: c0f87eb525 ("cryptodev: change burst API to be crypto op oriented")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Fix wrong indentation for return value
Coverity issue: 126585
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Removed the comparison against $CC in Makefiles, as in cross-compiling
mode CC can be a different string rather than "gcc".
Suggested-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The qede driver depends on libz but the LDLIBS entry in makefile
was missing. Also, because of the external dependency, disable it in
the default config as per the common DPDK policy on external deps.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some 64-bit variables are printed for debug.
The PRIx64 format specifier must be used because %lx is not wide enough
on 32-bit systems.
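For example, a sketch of the portable form:
#include <stdio.h>
#include <inttypes.h>
static void dump_u64(const char *name, uint64_t val)
{
	/* PRIx64 expands to the correct length modifier on both 32-bit and
	 * 64-bit targets, unlike a hard-coded %lx. */
	printf("%s = 0x%" PRIx64 "\n", name, val);
}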
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Harish Patil <harish.patil@qlogic.com>
The RTE_ETH_VALID_PORTID_OR_ERR_RET macro is used in some places
to check if a port id is valid or not. This commit makes use of it in
some new parts of the code.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some architectures (ex: Power8) have a cache line size of 128 bytes,
so the drivers should not expect that prefetching the second part of
the mbuf with rte_prefetch0(&m->cacheline1) is valid.
This commit adds helpers that can be used by drivers to prefetch the
rx or tx part of the mbuf, whatever the cache line size.
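A usage sketch, assuming the helpers introduced here are rte_mbuf_prefetch_part1() and rte_mbuf_prefetch_part2():
#include <rte_mbuf.h>
static inline void prefetch_mbuf_for_rx(struct rte_mbuf *m)
{
	rte_mbuf_prefetch_part1(m); /* first part of the mbuf (Rx/rearm fields) */
	rte_mbuf_prefetch_part2(m); /* no-op when the mbuf fits in one cache line */
}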
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Introduce rte_mempool_populate_default() which allocates
mempool objects in several memzones.
The mempool header is now always allocated in a specific memzone
(not with its objects). Thanks to this modification, we can remove
much of the specific behavior that was required when hugepages are not
enabled and rte_mempool_xmem_create() is used.
This change requires updating how the kni and mellanox drivers look up
mbuf memory. For now, this will only work if there is only one memory
chunk (like today), but we could make use of rte_mempool_mem_iter() to
support more memory chunks.
We can also remove RTE_MEMPOOL_OBJ_NAME that is not required anymore for
the lookup, as memory chunks are referenced by the mempool.
Note that rte_mempool_create() is still broken (it was the case before)
when there is no hugepage support (rte_mempool_xmem_create() has to be
used). This is fixed in the next commit.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Do not use a paddr table to store the mempool memory chunks.
This allows having several chunks with different virtual addresses.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Now that the mempool objects are chained into a list, we can use it to
browse them. This implies a rework of rte_mempool_obj_iter() API, that
does not need to take as many arguments as before. The previous function
is kept as a private function, and renamed in this commit. It will be
removed in a next commit of the patch series.
The only internal users of this function are the mellanox drivers. The
code is updated accordingly.
Introducing API compatibility for this function has been considered,
but it is not easy to do without keeping the old code, as the previous
function could also be used to browse elements that were not added to a
mempool. Moreover, the API is already broken by other patches in this
version.
The library version was already updated in
commit 213af31e09 ("mempool: reduce structure size if no cache needed")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
This commit removes the const qualifier for the mempool in
rte_mempool_walk() callback prototype.
Indeed, most functions that can be done on a mempool require a non-const
mempool pointer, except the dump and the audit. Therefore, the
mempool_walk() is more useful if the mempool pointer is not const.
This is required by the next commit, where the mellanox drivers use
rte_mempool_walk() to iterate the mempools, then rte_mempool_obj_iter()
to iterate the objects in each mempool.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
This commit renames mempool_obj_ctor_t as mempool_obj_cb_t.
In next commits, we will add the ability to populate the
mempool and iterate through objects using the same function.
We will use the same callback type for that. As the callback is
not a constructor anymore, rename it into rte_mempool_obj_cb_t.
The rte_mempool_obj_iter_t that was used to iterate over objects
will be removed in next commits.
No functional change.
In this commit, the API is preserved through a compat typedef.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Many drivers provide their own implementation of rte_mbuf_raw_alloc(),
duplicating the code. Introduce a new public function in rte_mbuf to
allocate a raw mbuf (uninitialized).
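A usage sketch of the new helper in a driver Rx refill path; the field initialization shown is the caller's responsibility, as it was in the per-driver copies:
#include <rte_mbuf.h>
static struct rte_mbuf *alloc_rx_mbuf(struct rte_mempool *mp)
{
	/* rte_mbuf_raw_alloc() only dequeues from the pool; it does not
	 * reset any mbuf fields. */
	struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);
	if (m != NULL)
		m->data_off = RTE_PKTMBUF_HEADROOM;
	return m;
}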
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
With gcc >= 6.0, qede base driver fails to build with:
drivers/net/qede/base/ecore_cxt.c: In function 'ecore_cdu_init_common':
cc1: error: left shift of negative value [-Werror=shift-negative-value]
Since the base drivers are untouchable, work around by disabling
the warning.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
When virtio was proposed in DPDK, there is no API to free memzones.
But this has changed since rte_memzone_free() has been implemented by
commit ff909fe21f ("mem: introduce memzone freeing").
This patch makes sure that memzones in struct virtqueue, like mz and
virtio_net_hdr_mz, are freed when the queue is released or setup fails.
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The "drv_flags" is set with device as the input, which means different
device (say, modern vs legacy) could end up with a different value. And
the fact that "drv_flags" is shared by all devices means that every time
we add a new device, it simply overwrites the value configured from the
last device.
Therefore, when two virtio devices have different flags, it may lead to
wrong result, such as virtio would set irq config when it's not supported.
Making the flag per device (using "dev->data->dev_flags") lets us
have a different value for each device, which avoids the above issue.
Fixes: da978dfdc4 ("virtio: use port IO to get PCI resource")
Reported-by: David Marchand <david.marchand@6wind.com>
Suggested-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The avail ring is updated by the frontend and consumed by the backend.
There are frequent core-to-core cache transfers for the avail ring.
This optimization avoids updating an avail ring entry if the entry
already holds the same value.
As the DPDK virtio PMD implements a FIFO free descriptor list (also for
cache performance reasons), in which descriptors are allocated
from the head and freed to the tail, with this patch in most cases the
avail ring will remain the same and thus stay valid in the caches of
both the frontend and the backend.
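A minimal sketch of the idea, with generic names rather than the PMD's exact fields:
#include <stdint.h>
static inline void
publish_avail_entry(uint16_t *avail_ring, uint16_t ring_size,
		    uint16_t avail_idx, uint16_t desc_idx)
{
	uint16_t slot = avail_idx & (ring_size - 1);
	/* Only write (and dirty the cache line) when the slot does not
	 * already hold the descriptor index being published. */
	if (avail_ring[slot] != desc_idx)
		avail_ring[slot] = desc_idx;
}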
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Currently, vhost PMD doesn't have linkage for librte_vhost, even though
it depends on librte_vhost APIs. This causes a linkage error if below
conditions are fulfilled.
- DPDK libraries are compiled as shared libraries.
- DPDK application doesn't link librte_vhost.
- Above application tries to link vhost PMD using '-d' DPDK option.
The patch adds linkage for librte_vhost to vhost PMD not to cause an
above error.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Check the mergeable header as it is now supported.
Previously we didn't support the mergeable feature, so the
non-mergeable header was checked.
Fixes: 13ce5e7eb9 ("virtio: mergeable buffers")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
After the do-while loop, idx could be VQ_RING_DESC_CHAIN_END (32768)
when it's the last vring desc buf we can get. Therefore, the following
expression could lead to a segfault, as it tries to access
beyond the desc memory boundary.
start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
This bug can be reproduced easily with "set fwd txonly" in the
guest PMD, where the dequeue on the host is slower than the guest Tx,
so that running out of free desc bufs is pretty easy.
The fix is straightforward: just remove that line, as we have
already set the desc flags properly inside the do-while loop.
Fixes: dd856dfcb9 ("virtio: use any layout on Tx")
[Yuanhan Liu: commit log reword]
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Issue: the output of applications and the debug info of DPDK may be
mixed up on the same line when enabling the below debug options of
virtio:
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_TX
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_DRIVER
This patch adds "\n" in the tail of definitions like PMD_RX_LOG,
PMD_TX_LOG, and PMD_DRV_LOG, and removes some "\n" when using these
macros.
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Previously, for tunnel packets such as VXLAN/NVGRE, the vlan
tags of the inner header would be stripped without putting the vlan
info into the descriptor, which is not the expected behaviour.
This patch fixes it by changing hardware configuration to leave
the inner packet alone.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Add three new functions to the vf ops structure:
* rx_queue_count
* rxq_info_get
* txq_info_get
In all cases, the corresponding PF APIs can be reused.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
The code to provide mbufs for RX used m->data_off instead of
RTE_PKTMBUF_HEADROOM as the position inside the mbuf for the data to be
written. As the mbuf is uninitialised, this could potentially cause Rx
data to be placed at the wrong address in the mbuf - or even outside it.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
RTE_PCI_DRV_DETACHABLE flag is required to indicate that a device
can be detached during execution.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
mbufs were not properly released post-Tx when they were chained.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Some apps calling some functions from different threads at the
same time could lead to reconfig problems. The reconfig mechanism is
based on a hardware queue where incrementing a counter signals the
firmware to do the reconfig. If there is a second increment before the
first one has been processed, the firmware will stop and a device
reset is necessary.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
librte_malloc was long since merged into librte_eal, mop up the
leftover references to it from driver Makefiles.
Fixes: 2f9d47013e ("mem: move librte_malloc to eal/common")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Applied with qede Makefile update too.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
On hosts running a non-DPDK PF driver, the VF has no means of changing
the HW CRC strip setting for a RX queue. It's implicitly enabled.
This patch checks if the host is running a non-DPDK PF kernel driver,
and returns an error, if HW CRC stripping was not requested in the port
configuration.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
In i40evf_config_vlan_pvid the check for NULL for the dev value is
unnecessary, since this value is passed in from the ethdev API which
will ensure that a valid rte_eth_dev structure is provided.
Furthermore, all code paths leading to this function already use the
dev value.
Issue identified by Coverity.
Coverity ID 13302:
There may be a null pointer dereference, or else the comparison against
null is unnecessary.
In i40evf_config_vlan_pvid: All paths that lead to this null pointer
comparison already dereference the pointer earlier
Fixes: 2b12431b53 ("i40e: add vlan stripping and insertion to VF")
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
In Table 8-16 of the "Intel® Ethernet Controller XL710 Datasheet" it is
stated that when the whole packet is written to a single buffer, the
header length field in the descriptor will be 0. This means that when
extracting the packet/data_len field from the descriptor in the driver
we do not need to mask out the extra header-length bits.
Inside the vector driver, this reduces the need to pull all four pktlen
fields into a single register to work on. Instead of a shift and mask,
we now need to only do a shift. Therefore, we can work on each descriptor
independently, processing each using one shift intrinsic and a blend.
This change makes the code shorter and easier to read, so we can pull it
into the main descriptor processing loop instead of needing its own
function. This in turn makes the descriptor processing in the loop as a
whole slightly easier to read as it's more linear.
In terms of performance, in testing this change shows little effect, with
single-core perf tests showing a very slight improvement.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
An analysis of the i40e code using Intel® VTune™ Amplifier 2016 showed
that the code was unexpectedly causing stalls due to "Loads blocked by
Store Forwards". This can occur when a load from memory has to wait
due to the prior store being to the same address, but being of a smaller
size i.e. the stored value cannot be directly returned to the loader.
[See ref: https://software.intel.com/en-us/node/544454]
These stalls are due to the way in which the data_len values are handled
in the driver. The lengths are extracted using vector operations, but those
16-bit lengths are then assigned using scalar operations i.e. 16-bit
stores.
These regular 16-bit stores actually have two effects in the code:
* they cause the "Loads blocked by Store Forwards" issues reported
* they also cause the previous loads in the RX function to actually be a
load followed by a store to an address on the stack, because the 16-bit
assignment can't be done to an xmm register.
By converting the 16-bit store operations into a sequence of SSE blend
operations, we can ensure that the descriptor loads only occur once, and
avoid both the additional stores and loads from the stack, as well as the
stalls due to the blocked loads.
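An illustrative sketch, with made-up lane positions, of replacing two scalar 16-bit stores with a single SSE4.1 blend (this is not the driver's actual code):
#include <smmintrin.h> /* SSE4.1 */
static inline __m128i
merge_pkt_lengths(__m128i mbuf_fields, __m128i lengths)
{
	/* Take the 16-bit lanes holding pkt_len/data_len (lanes 2 and 3 in
	 * this sketch) from 'lengths' and keep the rest of 'mbuf_fields'. */
	return _mm_blend_epi16(mbuf_fields, lengths, 0x0C);
}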
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Later commits to improve the driver will make use of the SSE4.1
_mm_blend_epi16 intrinsic, so:
* set the compilation level to always have SSE4.1 support,
* and add in a runtime check for SSE4.1 as part of the condition checks
for vector driver selection.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
This patch removes several redundant forward declarations
in i40e_fdir.c.
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Signed-off-by: Rami Rosen <rami.rosen@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds LLDP and DCBX capabilities to the qede PMD.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The physical link is handled by the management Firmware.
This patch lays the infrastructure for interrupt/attention handling in
the driver, as link change notifications arrive via async interrupts,
as well as the handling of such notifications. It adds async event
notification handler interfaces to the PMD.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
This patch adds the features to support configuration of various Layer 2
elements, such as channels and filtering options.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The Qlogic Everest Driver for Ethernet(QEDE) Poll Mode Driver(PMD) is
the DPDK specific module for QLogic FastLinQ QL4xxxx 25G/40G CNA family
of adapters as well as their virtual functions (VF) in SR-IOV context.
This patch adds QEDE PMD, which interacts with base driver and
initialises the HW.
This patch content also includes:
- eth_dev_ops callbacks
- Rx/Tx support for the driver
- link default configuration
- change link property
- link up/down/update notifications
- vlan offload and filtering capability
- device/function/port statistics
- qede nic guide and updated overview.rst
Note that the follow-on commits contain the code for the features
mentioned in the documents but not implemented in this patch.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The base driver is the backend module for the QLogic FastLinQ QL4xxxx
25G/40G CNA family of adapters as well as their virtual functions (VF)
in SR-IOV context.
The purpose of the base module is to:
- provide all the common code that will be shared between the various
drivers that would be used with said line of products. Flows such as
chip initialization and de-initialization fall under this category.
- abstract the protocol-specific HW & FW components, allowing the
protocol drivers to have clean APIs, which are detached in its
slowpath configuration from the actual Hardware Software Interface (HSI).
This patch adds a base module without any protocol-specific bits.
I.e., this adds a basic implementation that almost entirely falls under
the first category.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
Fix issue reported by Coverity.
Coverity ID 13193: Bad bit shift operation (BAD_SHIFT)
large_shift: In expression 1 << pool, left shifting by more than 31 bits
has undefined behavior. The shift amount, pool, is at least 32.
This patch is a rework of the register address selection logic and mask
computation to make them more readable and to avoid bit overflow when a
32-bit value is shifted past its size for pool > 31.
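The general fix pattern looks like the following sketch (a generic register pair, not the ixgbe code):
#include <stdint.h>
static void set_pool_enable_bit(uint32_t reg[2], unsigned int pool)
{
	/* Shift only by the low 5 bits so a 32-bit value is never shifted
	 * by 32 or more; pools 0..31 land in reg[0], pools 32..63 in reg[1]. */
	reg[pool >> 5] |= UINT32_C(1) << (pool & 0x1F);
}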
Fixes: fe3a45fd41 ("ixgbe: add VMDq support")
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The statistics queried by calling rte_eth_stats_get are zero when
the API is first called on the port. The root cause is that the
offset_loaded flag is not set correctly after device start.
This patch fixes this issue by resetting statistics at initialization
time. The resetting process will set offset_loaded flag.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
When building a chain of mbufs for a multi-segment packet, the
packet_type field resides at the end of the chain. It should be
copied forward to the head of the list.
Also, use RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE to guard the packet-type
computation. The mbuf fields are not copied when this define is not set.
Fixes: fe65e1e1ce ("fm10k: add vector scatter Rx")
Signed-off-by: Michael Frasca <michael.frasca@oracle.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The masking for the RX/TX enable bit was incorrect in the rx and tx
queue stop functions. Instead of using "& MASK" they used "| MASK",
which would always return true. This error was found by Coverity scan.
CID 13215 : Wrong operator used (CONSTANT_EXPRESSION_RESULT)
operator_confusion: txdctl | 33554432 is always 1/true regardless of the
values of its operand. This occurs as the logical second operand of
'&&'.
CID 13216 : Wrong operator used (CONSTANT_EXPRESSION_RESULT)
operator_confusion: rxdctl | 33554432 is always 1/true regardless of the
values of its operand. This occurs as the logical second operand of
'&&'.
Coverity issue: 13215
Coverity issue: 13216
Fixes: 029fd06d40 ("ixgbe: queue start and stop")
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The position of register values within i40e register dumps is
supposed to reflect the register addresses. These were not being
correctly calculated.
Fixes: d9efd0136a ("i40e: add EEPROM and registers dumping")
Signed-off-by: Remy Horton <remy.horton@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Run the ixgbe driver through checkpatch and fix the issues highlighted.
Fix line spacing, some bad indentation, and in a couple
of cases use a short-circuit (already there) return to lessen indentation.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Applied with four additional fixes for issues highlighted by checkpatch
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The macro RTE_VERIFY always checks a condition.
It is optimized with "unlikely" hint.
While this macro is well suited for test applications, it is preferred
in libraries and examples to enable such checks only in debug mode.
That's why the macro RTE_ASSERT is introduced to call RTE_VERIFY only
if built with debug logs enabled.
A lot of assert macros were duplicated and enabled with a specific flag.
Removing these #ifdefs allows testing these code branches more easily
and avoids dead code pitfalls.
The ENA_ASSERT is kept (in debug mode only) because it has more
parameters to log.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some statistics were deprecated since release 2.1 (49f386542a).
The last deprecated counter to be used was imcasts.
The VF loopback statistics are also removed as they are used only
in igb and duplicated in extended statistics.
The new counters should be added to extended statistics.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Compilation fails on 32 bits on Xen driver, due to wrong casting:
drivers/net/xenvirt/virtqueue.h: In function ‘virtqueue_enqueue_xmit’:
drivers/net/xenvirt/virtqueue.h:234:24: error: cast from pointer to integer
of different size [-Werror=pointer-to-int-cast]
start_dp[idx].addr = rte_pktmbuf_mtod(cookie, uint64_t);
^
Fixes: d6b324c00f ("mbuf: get DMA address")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
VxLAN & NVGRE are supported by x550. As we know, HW can parse
the packet and tell SW the type info. For VxLAN & NVGRE packets
the behaviour differs: HW will not tell SW the info of the outer
header but that of the inner header instead. But SW always takes the
info as if it were for the outer header. So the packet type info is
not right when x550 receives VxLAN & NVGRE packets.
As x550 only supports IPv4 VxLAN & NVGRE packets, we can tell that
the outer header of VxLAN is IPv4 + UDP, and the outer header
of NVGRE is IPv4 only. What we don't know is whether there are
optional fields in the outer IPv4 header.
This patch implements support of packet type for VxLAN &
NVGRE, and it fixes the wrong packet type issue as well.
BTW:
It doesn't fix any existing commit because, although it resolves an
issue, it's more like a new feature than a fix.
Reported-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
If the vhost PMD were configured with more queues than the guest, the old
code would segfault in rte_vhost_enable_guest_notification due to a NULL
virtqueue pointer.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
renamed rte_cryptodev_sym_session.type -> dev_type
(as it's not a session type, but a device type)
renamed rte_crypto_sym_op.type -> sess_type
(as it's not an op type, but a session type)
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
AES-GCM PMD only supports 128-bit keys, but not 192 and 256,
which the capabilities structure was reporting.
Fixes: 26c2e4ad5a ("cryptodev: add capabilities discovery")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
After some testing, it was found that retrieving numa information
about a vhost device via a call to get_mempolicy is more
accurate when performed during the new_device callback versus
the vring_state_changed callback, in particular upon initial boot
of the VM. Performing this check during new_device is also
potentially more efficient as this callback is only triggered once
during device initialisation, compared with vring_state_changed
which may be called multiple times depending on the number of
queues assigned to the device.
Reorganise the code to perform this check and assign the correct
socket_id to the device during the new_device callback.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
With (ICC) 16.0.2 20160204, getting following warnings:
.../drivers/net/ena/base/ena_com.c(492): error #3656: variable
"flags" may be used before its value is set
ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
.../drivers/net/ena/base/ena_com.c(1971): error #3656: variable
"mem_handle" may be used before its value is set
ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, len,
For both warnings the variable value is ignored, so there is no defect.
To silence the compiler warning, an initial value is provided to the variables.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Issue:
when using the following CLI in testpmd to enable ipv6 TSO feature
(set --txqflags=0 in the testpmd command)
set verbose 1
csum set ip hw 0
csum set udp hw 0
csum set tcp hw 0
csum set sctp hw 0
csum set outer-ip hw 0
csum parse_tunnel on 0
tso set 800 0
set fwd csum
start
We will not get what we want: the ipv6 packets sent out from IXIA can be
received by i40e, but cannot be forwarded to another port.
The root cause is that when HW does the TSO offload for packets, it does not
only depend on the context descriptor to define the MSS and TSO payload size,
it also needs to know whether the packet is ipv4 or ipv6; we use
i40e_txd_enable_checksum to generate the related fields for the data descriptor.
But the PMD will not call i40e_txd_enable_checksum if only the TSO offload flag
is set. The reason why ipv4 works fine for TSO in testpmd csum mode is that the
csum engine sets the ip csum flag when the packet is ipv4 and TSO is enabled,
but does not set the flag for ipv6, and this flag is what causes
i40e_txd_enable_checksum to be invoked. In both cases the TSO flag will be
set, so we need to use the TSO flag to trigger i40e_txd_enable_checksum.
The right logic here is to enable csum offload for both ipv4 and ipv6 when the
TSO flag is set.
Fixes: e3f0151f ("i40e: enable Tx checksum only for offloaded packets")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The issue is that the VF's link speed was kept at 10G and the status was
always up. It did not change even when the physical link's status changed.
This patch fixes this issue to make VF's link info consistent with
physical link.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
A problem was found on the i350 VF: TX would happen only once
per 4 packets. If only 1~3 packets were received, they would
not be forwarded. But the real problem is on the RX side. The
reason is that the default RX write-back threshold was changed to
4, so the first 1 to 3 packets may be hung there.
This patch checks the RX wthresh when setting up the RX
queue, and forces it to be 1, so every packet can be handled
immediately.
Fixes: 4a41c17dba ("igb: set default thresholds based on MAC type")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
For simple TX the virtio-net header must be zeroed, but it was using memory
that had been initialized with indirect descriptor tables. This resulted in
"unsupported gso type" errors from librte_vhost.
We can use the same memory for every descriptor to save cachelines in the
vswitch.
Fixes: 6dc5de3a ("virtio: use indirect ring elements")
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
The link speed configuration is now done with bitmaps so 100G speed
requires only a new bit flag.
The actual link speed is a number so its size must be increased from
16-bit to 32-bit.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Tested-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Tested-by: Matej Vido <vido@cesnet.cz>
This patch redesigns the API to set the link speed(s) configuration
of an ethernet port. Specifically:
- it allows defining a set of advertised speeds for
auto-negotiation.
- it allows disabling link auto-negotiation (single fixed speed).
- default: auto-negotiate all supported speeds.
A flag autoneg in struct rte_eth_link indicates if the link speed was a
result of auto-negotiation or was fixed by configuration.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Tested-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Tested-by: Beilei Xing <beilei.xing@intel.com>
Tested-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
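For illustration, a minimal configuration sketch using the bitmap-based API introduced here (assuming the ETH_LINK_SPEED_ flags and the link_speeds field from this series; the helper function is hypothetical):

#include <rte_ethdev.h>

/* Hypothetical helper: advertise only 1G and 10G for auto-negotiation on
 * a port. OR-ing ETH_LINK_SPEED_FIXED with a single speed flag would
 * instead disable auto-negotiation and force that speed. */
static int
configure_link_speeds(uint8_t port_id)
{
        struct rte_eth_conf conf = {
                /* Leaving this 0 (ETH_LINK_SPEED_AUTONEG) would
                 * auto-negotiate all supported speeds. */
                .link_speeds = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G,
        };

        return rte_eth_dev_configure(port_id, 1, 1, &conf);
}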
The speed capabilities of a device can be retrieved with
rte_eth_dev_info_get().
The new field speed_capa is initialized in the drivers without taking
device characteristics into account in this patch.
Once the capabilities reported by a driver are accurate, the table in
overview.rst must be filled in.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
The speed numbers ETH_LINK_SPEED_ are renamed ETH_SPEED_NUM_.
The prefix ETH_LINK_SPEED_ is kept for AUTONEG and will be used
for bit flags in next patch.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Some duplex values are changed from 0 to half-duplex when the link is down.
Some drivers are still using their own constants for duplex modes.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Define and use ETH_LINK_UP and ETH_LINK_DOWN where appropriate.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Loop that calculates total number of tx descriptors in slave tx queues
should iterate up to nb_tx_queues, not nb_rx_queues.
Fixes: 3ef7955700 ("bonding: fix LACP mempool size")
Signed-off-by: Vladyslav Buslov <vladyslav.buslov@harmonicinc.com>
Stopping then re-starting a bond interface containing slaves that
used polling for link detection caused the bond to think all slave
links were down and inactive.
Move the start of the polling for link from slave_add() to
bond_ethdev_start() and in bond_ethdev_stop() make sure we clear
the last_link_status of the slaves.
Fixes: a45b288ef2 ("bond: support link status polling")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Signed-off-by: John Daley <johndale@cisco.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch adds out-of-place operations to qat symmetric crypto PMD,
i.e. the result of the operation can be written to the destination buffer
instead of overwriting the source buffer as done in "in-place" operation.
Both buffers can be of different sizes.
Previously the qat PMD assumed that m_src and m_dst in rte_crypto_sym_op
were identical.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
On the SUSE11-SP3 i686 platform, with gcc 4.5.1, there are compile issues, e.g.:
null_crypto_pmd_ops.c:44:3: error:
unknown field 'sym' specified in initializer
cc1: warnings being treated as errors
Members of an anonymous union must be initialized inside '{}', otherwise
this compiler reports an error.
Fixes: 26c2e4ad5a ("cryptodev: add capabilities discovery")
Signed-off-by: Michael Qiu <michael.qiu@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
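A reduced illustration of the portability issue (the struct below is made up for the example, not the real cryptodev definition): older gcc releases reject a designated initializer that names a member of an anonymous union directly, so the initializer has to be wrapped in an extra pair of braces.

struct capability_example {
        int op;
        union {                         /* anonymous union */
                struct { int algo; } sym;
                struct { int bits; } asym;
        };
};

/* Rejected by gcc 4.5.1: "unknown field 'sym' specified in initializer"
 * struct capability_example c = { .op = 1, .sym = { .algo = 2 } };
 */

/* Accepted: the anonymous union member is initialized inside '{}'. */
struct capability_example c = { .op = 1, { .sym = { .algo = 2 } } };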
These asserts are only for debugging and never fired during
any testing, but they confuse coverity's null tracking.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
Silence a compiler warning that this variable may be used uninitialized.
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tell the compiler to use an unsigned constant for the config shifts.
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tell the compiler to use an unsigned constant for the config shifts.
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
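An illustrative sketch of the kind of change (the macro name is hypothetical): shifting a signed 1 into the high bit of an int is undefined, so the constant is made explicitly unsigned.

#include <stdint.h>

/* Before: (1 << 31) shifts into the sign bit of a signed int. */
/* #define EXAMPLE_CFG_ENABLE   (1 << 31) */

/* After: an unsigned constant keeps the shift well defined. */
#define EXAMPLE_CFG_ENABLE      (UINT32_C(1) << 31)

static const uint32_t example_cfg = EXAMPLE_CFG_ENABLE | (UINT32_C(1) << 20);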
The device lsc interrupt check has misleading whitespace around it, which
can be improved by adding appropriate braces to the check. Since the ret
variable was checked after the previous assignment, this introduces no
functional change.
Fixes: 921a72008f ("e1000: add Rx interrupt handler")
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
The register read/write mphy functions have misleading whitespace around
the `locked` check. This cleanup merely preserves the existing functionality
and suppresses future gcc versions' "misleading indentation" warning.
Suggested-by: Panu Matilainen <pmatilai@redhat.com>
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
If the provided configuration does not call for RSS, then RSS is
explicitly disabled. Without this change, the device continues to
operate under the previous RSS configuration.
Fixes: 57033cdf8f ("fm10k: add PF RSS")
Signed-off-by: Michael Frasca <michael.frasca@oracle.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Now that vmxnet3 supports TCP/UDP checksum offload, let's update
the default txq flags to allow such offloads. Also fixed the tx
queue setup check to allow TCP/UDP checksum and only error out
if SCTP checksum is requested.
Fixes: f598fd063b ("vmxnet3: add Tx L4 checksum offload")
Reported-by: Heng Ding <hengx.ding@intel.com>
Signed-off-by: Yong Wang <yongwang@vmware.com>
The NFP driver (unlike other PCI devices) was not copying the pci info
from the pci_dev to the eth_dev. This would leave the driver_name null
(along with other unset fields) when an application uses dev_info_get.
This was found by code review; I do not have the hardware.
Fixes: d4a27a3b09 ("nfp: add basic features")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Alejandro Lucero <alejandro.lucero@netronome.com>
When the number of RX queues is not a power of two,
the RETA table is configured to its maximum size for
better balancing.
Testing showed that limiting its size to 256 improves performance
noticeably with little to no impact on balancing results.
Fixes: ebb30ec64a ("mlx5: increase RETA table size")
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Once freed, completed mbuf pointers are not set to NULL in the TX queue.
The clean-up function must take this into account.
Fixes: 2e22920b85 ("mlx5: support non-scattered Tx and Rx")
Fixes: 7fae69eeff ("mlx4: new poll mode driver")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Calling rte_eth_dev_get_dcb_info to get DCB info from the i40e
driver when VMDQ is disabled results in a segmentation fault.
This patch fixes it by handling the VMDQ and non-VMDQ cases separately
when querying DCB information.
Fixes: 5135f3ca49 ("i40e: enable DCB in VMDQ VSIs")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
The lower 16 bits of the EICR register are used for queue interrupts; the
DPDK framework takes over the first bit for other interrupts like LSC,
so only 15 bits are left for queue interrupt mapping.
This patch adds a check on the number of interrupt queues at
dev_start.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
When starting testpmd with multiple queues on an ixgbe VF
port, it failed with the message "nb_rxq(4) is greater
than max_rx_queues(1)".
The root cause is that the VF does not get the right max RX queue
number from the PF and uses the default value of 1.
The VF max RX queue number is set by the PF through mailbox messages.
The message for this setting is only supported in version 1.1. As the
message version was updated to 1.2, the VF could not parse the RX queue
number setting message correctly.
This patch brings in a base code update specific to this issue.
Fixes: 72dec9e37a ("ixgbe: support multicast promiscuous mode on VF")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Update the 'imissed' counter with the number of packets dropped
by the NIC.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Nelson Escobar <neescoba@cisco.com>
When the enic was disabled, link notification was correctly disabled
in the NIC but the software indicator that it was disabled was not
updated (vdev->notify_pa not set to 0). When the link came back up,
enic did not re-enable notification in the NIC.
This affected bonding when an enic slave device link bounced.
The fix is to unconditionally enable notification when the enic is
enabled.
Fixes: 9913fbb91d ("enic/base: common code")
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Nelson Escobar <neescoba@cisco.com>
If the nb_pkts parameter to rte_eth_tx_burst() was greater than
the TX descriptor count, a completion was not being requested
from the NIC, so descriptors would not be released back to the
host causing a lock-up.
Introduce a limit of how many TX descriptors can be used in a single
call to the enic PMD burst TX function before requesting a completion.
Fixes: d739ba4c6a ("enic: improve Tx packet rate")
Signed-off-by: John Daley <johndale@cisco.com>
FreeBSD was not defined in ena_plat.h
ETIME is not defined in FreeBSD.
In file included from DPDK/drivers/net/ena/base/ena_com.h:37:0,
from DPDK/drivers/net/ena/ena_ethdev.h:39,
from DPDK/drivers/net/ena/ena_ethdev.c:41:
DPDK/drivers/net/ena/base/ena_plat.h:48:2: error: #error "Invalid platform"
Fixes: 99ecfbf845 ("ena: import communication layer")
Fixes: 9ba7981ec9 ("ena: add communication layer for DPDK")
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Fix for multiple compilation errors for ICC:
error #188: enumerated type mixed with another type
error #592: variable "flags" is used before its value is set
Fixes: 99ecfbf845 ("ena: import communication layer")
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Build log:
/root/dpdk/app/test-pmd/cmdline.c:6687:45: error: no member named
's6_addr32' in 'struct in6_addr'
rte_be_to_cpu_32(res->ip_value.addr.ipv6.s6_addr32[i]);
This is caused by the macro "s6_addr32" not being defined on FreeBSD, while
testpmd swaps the big-endian parameter to host endian. Moving the swap into
the i40e ethdev fixes this issue.
Fixes: 7b1312891b ("ethdev: add IP in GRE tunnel")
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Bruce Richardson <bruce.richardson@intel.com>
The code uses "entry" in the next iteration of LIST_FOREACH after calling
free() on it in i40e_res_pool_destroy().
Change to a safe way of freeing the entry, similar to LIST_FOREACH_SAFE
in FreeBSD.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jiangu Zhao <zhaojg@arraynetworks.com.cn>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
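A sketch of the safe-removal pattern, equivalent to FreeBSD's LIST_FOREACH_SAFE (list and entry names here are generic, not the exact i40e ones): the successor is saved before the current entry is freed.

#include <stdlib.h>
#include <sys/queue.h>

struct pool_entry_example {
        LIST_ENTRY(pool_entry_example) next;
};
LIST_HEAD(pool_head_example, pool_entry_example);

static void
pool_destroy_example(struct pool_head_example *head)
{
        struct pool_entry_example *entry = LIST_FIRST(head);

        while (entry != NULL) {
                /* Grab the successor before freeing the current entry so
                 * the iteration never touches freed memory. */
                struct pool_entry_example *next = LIST_NEXT(entry, next);

                LIST_REMOVE(entry, next);
                free(entry);
                entry = next;
        }
}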
This solves issues when an active device is added to a bond.
If a device to be enslaved already has transmit and/or receive queues
allocated, use those and then create any additional queues that are
necessary.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Eric Kinzie <ehkinzie@gmail.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
On the 82575 chipset, there is a pool of global TX contexts instead of 2
per queue as on the 82576. See Table A-1 "Changes in Programming Interface
Relative to 82575" of the Intel® 82576EB GbE Controller datasheet (*).
In the driver, the contexts are attributed to a TX queue: 0-1 for txq0,
2-3 for txq1, and so on.
In igbe_set_xmit_ctx(), the variable ctx_curr contains the index of the
per-queue context (0 or 1), and ctx_idx contains the index to be given
to the hardware (0 to 7). The size of txq->ctx_cache[] is 2, and it must
be indexed with ctx_curr to avoid an out-of-bound access.
Also, the index returned by what_advctx_update() is the per-queue
index (0 or 1), so we need to add txq->ctx_start before sending it
to the hardware.
(*) The datasheet says 16 global contexts, however the IDX fields in TX
descriptors are 3 bits, which gives a total of 8 contexts. The
driver assumes there are 8 contexts on 82575: 2 per queue, 4 txqs.
Fixes: 4c8db5f09a ("igb: enable TSO support")
Fixes: af75078fec ("first public release")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
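A condensed sketch of the indexing scheme described above (the structure is illustrative, not the literal igb_rxtx.c definitions): the small per-queue cache is indexed with ctx_curr, while the value given to the hardware is offset by ctx_start.

#include <stdint.h>

#define CTX_PER_TXQ_EXAMPLE 2

struct txq_ctx_example {
        uint32_t ctx_curr;   /* 0 or 1: index into ctx_cache[] */
        uint32_t ctx_start;  /* first global HW context owned by this txq */
        struct { uint64_t flags; } ctx_cache[CTX_PER_TXQ_EXAMPLE];
};

static uint32_t
select_hw_ctx_example(struct txq_ctx_example *txq)
{
        /* The cache must be indexed with the per-queue index... */
        txq->ctx_cache[txq->ctx_curr].flags = 0;

        /* ...while the descriptor IDX field takes the global context number. */
        return txq->ctx_curr + txq->ctx_start;
}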
When using RSS, the number of rxqs has to be a power of two.
This is a problem because there is no API in DPDK that makes
the application aware of that.
A good compromise is to allow the application to request a
number of rxqs that is not a power of 2, but having inactive
queues that will never receive packets. In this configuration,
a warning will be issued to users to let them know that
this is not an optimal configuration.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
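A minimal sketch of the compromise, using DPDK's rte_is_power_of_2() helper (the warning text is illustrative):

#include <rte_common.h>
#include <rte_log.h>

/* Accept any Rx queue count but warn that, with RSS, some of the
 * configured queues will never receive packets. */
static void
warn_inactive_rxqs(unsigned int rxqs_n)
{
        if (!rte_is_power_of_2(rxqs_n))
                RTE_LOG(WARNING, PMD,
                        "%u Rx queues requested: not a power of two,"
                        " some queues will stay inactive with RSS\n",
                        rxqs_n);
}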
l2 tunnel and e-tag are not supported on the new x550em_a NICs, due
to missing checks for that mac type in the code.
This patch adds in the necessary conditional checks to enable the features
for x550em_a.
Fixes: 22e77d4501 ("ixgbe: support L2 tunnel operations")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
An issue is found on x550em NICs: ieee1588 is not working, the time is
always reported as 0.
The root cause is that the timer is only supported by the driver for x550,
switch statement entries are missing for x550em_x and x550em_a. This patch
adds those missing entries.
Fixes: a7740dc130 ("ixgbe: support new devices and MAC types")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The current_primary_port is initialised to an invalid value
during bonded device creation.
It must be set to a valid value later.
This fix sets it to a valid value when the first slave port
is added to the bonding device.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
icc (icc (ICC) 16.0.1 20151021) is generating following compile error:
CC ixgbe_rxtx.o
.../drivers/net/ixgbe/ixgbe_rxtx.c(153): error #3656: variable
"free" may be used before its value is set
(nb_free > 0 && m->pool != free[0]->pool)) {
^
Indeed this is a false positive and code is correct.
"nb_free" check prevents the free[] access before its value set.
Disabling this icc warning (#3656) for file ixgbe_rxtx.c.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Ixgbe HW supports 128 TX queues. However, the full 128 queues are only
available in VT and DCB mode. In normal default "none" mode (VT/DCB off)
the maximum number of available queues is only 64.
The driver doesn't check the mode when reporting the available
number of queues, allowing more than 64 queues to be used in all cases.
If a queue number >= 64 is used in default mode, the TX packets are dropped
silently.
This change adds a check to forbid using a queue number larger than 64
during device configuration (in default mode), so that the problem is
reported as early as possible.
Fixes: 27b609cbd1 ("ethdev: move the multi-queue mode check to specific drivers")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Internal variable containing the number of TX queues for a device,
was being incorrectly assigned the number of RX queues, instead of TX.
Fixes: 27b609cbd1 ("ethdev: move the multi-queue mode check to specific drivers")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
In the function set_rx_mode, the device data pointer points to the wrong
address, the same issue as found in the ixgbe code and fixed in commit:
"ixgbe: fix PF promiscuous mode after VF closed"
Fixes: be2d648a2d ("igb: add PF support")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
An issue was reported: in the DPDK PF + DPDK VF scenario, if the VF port
is closed, the PF port cannot receive packets. At that point the
promiscuous mode is disabled on the PF port, but it should be enabled.
When the VF port is closed, it sends a message to its PF port to
reset it. During this, the PF port also resets its own
promiscuous mode. Which promiscuous mode should be set depends on
the parameter stored in the device data. In the function
set_rx_mode, the device data pointer points to the wrong
address, so the promiscuous mode is wrong.
Fixes: 00e30184da ("ixgbe: add PF support")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Reported-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Current vector RX can't always set the packet_type properly.
To be more specific:
a) it never sets RTE_PTYPE_L2_ETHER
b) it doesn't handle tunnel ipv4/ipv6 case correctly.
c) it doesn't check whether IXGBE_RXDADV_PKTTYPE_ETQF is set or not.
While a) is pretty easy to fix, b) and c) are not that straightforward
in terms of SIMD ops (especially b).
So far I wasn't able to make vRX support packet_type properly without
noticeable performance loss.
So for now, just remove that functionality from vector RX and
update dev_supported_ptypes_get().
Fixes: 3962541758 ("mbuf: redefine packet type")
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
Notify user otherwise. A similar check has already been added to mlx5 in
commit "mlx5: check port is configured as ethernet device".
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Currently, the maximum number of rx/tx queues is kept by the EAL, but
the value is used with the following different meanings in the vhost PMD:
- The maximum number of currently enabled queues.
- The maximum number of currently supported queues.
This double meaning causes an issue in the following steps.
* Invoke application with below option.
--vdev 'eth_vhost0,iface=<socket path>,queues=4'
* Configure queues like below.
rte_eth_dev_configure(portid, 2, 2, ...);
* Configure queues again like below.
rte_eth_dev_configure(portid, 4, 4, ...);
The second rte_eth_dev_configure() will fail because both
the maximum value of current enabled queues and supported queues
will be '2' after calling first rte_eth_dev_configure().
To fix the issue, the patch adds another variable to keep the maximum
number of supported queues in vhost PMD.
Fixes: 23981fb0d78b ("vhost: Add vhost PMD")
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ciara Loftus <ciara.loftus@intel.com>
Issue:
When CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=n is set in the config file,
there is a build error:
'i40e_recv_pkts_bulk_alloc' undeclared
The DPDK i40e PMD uses the preprocessor to choose whether or not to define
the bulk recv functions, but the selection of the RX function only depends
on a C variable. This inconsistency leads to the build error because the
bulk recv function is not defined.
Fixes: 8e109464c0 ("i40e: allow vector Rx and Tx usage")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
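A simplified sketch of keeping the compile-time guard and the run-time selection consistent (declarations shortened; the selection helper is illustrative, not the actual i40e code):

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

uint16_t i40e_recv_pkts(void *rxq, struct rte_mbuf **pkts, uint16_t n);
#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
uint16_t i40e_recv_pkts_bulk_alloc(void *rxq, struct rte_mbuf **pkts,
                                   uint16_t n);
#endif

static eth_rx_burst_t
select_rx_func_example(int bulk_alloc_allowed)
{
#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
        /* Reference the bulk function only when it is actually built. */
        if (bulk_alloc_allowed)
                return i40e_recv_pkts_bulk_alloc;
#else
        RTE_SET_USED(bulk_alloc_allowed);
#endif
        return i40e_recv_pkts;
}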
This patch extends flow director to select vlan id as part of
filter's input set and program the filter rule with vlan id.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds missing VLAN bitmask for inner frame in case of
tunneling and fixes VLAN tags bitmasks for single or outer frame
in case of tunneling.
Fixes: 98f0557076 ("i40e: configure input fields for RSS or flow director")
Signed-off-by: Andrey Chilikin <andrey.chilikin@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch extends flow director to select more IP Header fields
as filter input set.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds a new function to set the fdir input set to its default
at initialization time.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
In this patch, flex payload is removed from valid fdir input set
values. This is because all flex payload configuration can be set
in struct rte_fdir_conf during device configure phase, which is
a more flexible way of setting this up.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
For the input set selection, Hash filter and Flow director shared
the same function, i.e. i40e_filter_inset_select.
For code readability, this patch replaces i40e_filter_inset_select
with two new functions: i40e_hash_filter_inset_select and
i40e_fdir_filter_inset_select for Hash filter and Flow director
respectively.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Virtio has an mbuf descriptor ring containing mbufs to be used for
receiving traffic. When the host queues traffic to be sent to the guest, it
consumes these descriptors. If none exist, it discards the packet.
The virtio pmd allocates mbufs to the descriptor ring every time it
successfully receives a packet. However, it never does it if it does not
receive a valid packet. If the descriptor ring is exhausted, and the mbuf
mempool does not have any mbufs free (which can happen for various reasons,
such as queueing along the processing pipeline), then the receive call will
not allocate any mbufs to the descriptor ring, and when it finishes, the
descriptor ring will be empty. The ring being empty means that we will
never receive a packet again, which means we will never allocate mbufs to
the ring: we are stuck.
Ultimately, the problem arises because there is a dependency between
receiving packets and making the descriptor ring not be empty, and a
dependency between the descriptor ring not being empty, and receiving
packets.
To fix the problem, this patch makes virtio always try to allocate mbufs
to the descriptor ring, if necessary, when polling for packets. Do this by
removing the early exit if no packets were received. Since the packet loop
later will do nothing if there are no packets, this is fine.
I reproduced the problem by pushing packets through a pipelined system
(such as the client_server sample application) after artificially
decreasing the size of the mbuf pool and introducing a delay in a secondary
stage.
Without the fix, the process stops receiving packets fairly quickly. With
the fix, it continues to receive packets.
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Kyle Larose <klarose@sandvine.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
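A self-contained sketch of the idea behind the fix (the ring model below is simplified and illustrative; the real driver tracks head/tail indices rather than scanning): the refill step is attempted on every poll, even when no packets were received, so an exhausted ring can recover once the mempool has free mbufs again.

#include <rte_mbuf.h>
#include <rte_mempool.h>

#define SKETCH_RING_SIZE 256

struct rx_ring_sketch {
        struct rte_mbuf *slot[SKETCH_RING_SIZE]; /* NULL == empty slot */
        struct rte_mempool *mp;
};

/* Called unconditionally from the Rx burst path, regardless of whether
 * the current poll returned any packets. */
static void
rx_ring_sketch_refill(struct rx_ring_sketch *ring)
{
        unsigned int i;

        for (i = 0; i < SKETCH_RING_SIZE; i++) {
                if (ring->slot[i] != NULL)
                        continue;
                ring->slot[i] = rte_pktmbuf_alloc(ring->mp);
                if (ring->slot[i] == NULL)
                        break; /* mempool exhausted: retry on the next poll */
        }
}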
This structure has immutable function pointers.
Also fix indentation.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
On initialization, the rq descriptor count was set to the limit
of the vic. When the requested number of rx descriptors was
less than this count, enic_alloc_rq() was incorrectly setting
the count to the lower value. This results in later calls to
enic_alloc_rq() incorrectly using the lower value as the adapter
limit.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The update function can be called without a key to enable or disable an RSS
protocol, or with a key to be applied to the desired protocols.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
RSS configuration provided by the application should not be used as storage
by the PMD.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
For the x550 device, the RETA table has 512 entries, but the functions
ixgbe_dev_rss_reta_query and ixgbe_dev_rss_reta_update use a
"uint8_t i" to traverse the entries, which leads the functions
into an endless loop.
This patch changes the data type from uint8_t to uint16_t to fix
the issue.
Fixes: 4bee94a6c2 ("ixgbe: support 512 RSS entries on x550")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
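A reduced illustration of the counter issue (array shape is schematic): with 512 entries, a uint8_t loop counter wraps from 255 back to 0 and never reaches the bound.

#include <stdint.h>

#define RETA_ENTRIES_X550 512

static void
fill_reta_example(uint8_t reta[RETA_ENTRIES_X550], uint16_t nb_rx_queues)
{
        /* A 'uint8_t i' would overflow at 256 and loop forever because
         * it can never reach 512; uint16_t is required here. */
        uint16_t i;

        for (i = 0; i < RETA_ENTRIES_X550; i++)
                reta[i] = i % nb_rx_queues;
}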
If the packet_error bit in the completion descriptor is set, the
remainder of the descriptor and data are invalid. PKT_RX_MAC_ERR
was set in the mbuf->ol_flags if packet_error was set and used
later to indicate an error packet. But since PKT_RX_MAC_ERR is
defined as 0, mbuf flags and packet types and length were being
misinterpreted.
Make the function enic_cq_rx_to_pkt_err_flags() return true for error
packets and use the return value instead of mbuf->ol_flags to indicate
error packets. Also remove warning for error packets and rely on
rx_error stats.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
In the receive path, the function to set mbuf ol_flags used the
mbuf packet_type before it was set.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
Add checks to make sure we don't try to allocate more tx or rx queues
than we support.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Add the missing '\n' character to the end of a few print statements.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Acked-by: John Daley <johndale@cisco.com>
VLAN insertion can be done in hardware when supported in Verbs. A software
fallback is provided otherwise. The software implementation is also used
when multi-packet send is enabled on a queue, as both features are mutually
exclusive.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Environment variable MLX5_PMD_ENABLE_PADDING enables HW packet padding
in PCI bus transactions.
When packet size is cache aligned and CRC stripping is enabled, 4 fewer
bytes are written to the PCI bus. Enabling padding makes such packets
aligned again.
In cases where PCI bandwidth is the bottleneck, padding can improve
performance by 10%.
This is disabled by default since this can also decrease performance for
unaligned packet sizes.
Signed-off-by: Olga Shern <olgas@mellanox.com>
fix packet padding macro check
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Secondary processes are expected to use queues and other resources
allocated by the primary, however Verbs resources can only be shared
between processes when inherited through fork().
This limitation can be worked around for TX by configuring separate queues
from secondary processes.
Signed-off-by: Or Ami <ora@mellanox.com>
Add driver functions to set link state up or down.
Burst functions are updated to make sure applications cannot attempt to
send/receive after link is brought down.
Signed-off-by: Or Ami <ora@mellanox.com>
When a Linux PF and a DPDK VF are used with the i40e PMD and a PF reset
occurs, an interrupt goes via an adminq event to inform the VF of the reset.
A callback mechanism is introduced to allow the VF to invoke a
registered callback when a PF reset happens.
Users can register a callback for this interrupt event using:
rte_eth_dev_callback_register(portid,
RTE_ETH_EVENT_INTR_RESET,
reset_event_callback,
arg);
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
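A possible application-side sketch of using this event (the handler body is illustrative):

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_log.h>

/* Illustrative handler: a real application would quiesce traffic and
 * re-initialize the port when the PF reset is signalled. */
static void
reset_event_callback(uint8_t port_id, enum rte_eth_event_type type, void *arg)
{
        RTE_SET_USED(arg);
        if (type == RTE_ETH_EVENT_INTR_RESET)
                RTE_LOG(INFO, USER1, "port %u: PF reset reported to VF\n",
                        port_id);
}

static void
register_reset_callback(uint8_t port_id)
{
        rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RESET,
                                      reset_event_callback, NULL);
}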
Currently, the i40evf PMD uses a global static buffer to send virtchnl
commands to the host driver. It is shared by multiple VFs.
This patch changes it to allocate a virtchnl command buffer for each VF.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The patch introduces a new PMD. This PMD is implemented as a thin wrapper
of librte_vhost, so librte_vhost is also needed to compile the PMD.
The vhost messages will be handled only when a port is started, so start
a port first, then invoke QEMU.
The PMD has 2 parameters.
- iface: The parameter is used to specify a path to connect to a
virtio-net device.
- queues: The parameter is used to specify the number of queues the
virtio-net device has.
(Default: 1)
Here is an example.
$ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i
To connect to the above testpmd instance, here is a QEMU command example.
$ qemu-system-x86_64 \
<snip>
-chardev socket,id=chr0,path=/tmp/sock0 \
-netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=1 \
-device virtio-net-pci,netdev=net0,mq=on
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Update for queue state event name:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This is a PMD for the Amazon ethernet ENA (Elastic Network Adapters)
family.
The driver operates a variety of ENA adapters through feature negotiation
with the adapter and an upgradable command set.
The ENA driver handles PCI physical and virtual ENA functions.
Signed-off-by: Evgeny Schemeilin <evgenys@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Release Note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Implementation of platform specific code for ENA communication layer.
Signed-off-by: Evgeny Schemeilin <evgenys@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Low level common abstraction for ENA device communication.
Signed-off-by: Netanel Belgazal <netanel@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Add a new API rte_eth_dev_get_supported_ptypes to query which packet types
can be filled by a given device. The device should already be started or
its PMD RX burst function already decided, since the packet types supported
may vary depending on the RX function.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
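A possible usage sketch of the new API (the check itself is illustrative): query the started port and inspect which L4 packet types its current Rx path reports.

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
port_reports_l4_ptypes(uint8_t port_id)
{
        uint32_t ptypes[16];
        int n, i;

        n = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L4_MASK,
                                             ptypes, RTE_DIM(ptypes));
        if (n > (int)RTE_DIM(ptypes))
                n = RTE_DIM(ptypes); /* only this many entries were filled */

        for (i = 0; i < n; i++)
                if (ptypes[i] == RTE_PTYPE_L4_TCP ||
                    ptypes[i] == RTE_PTYPE_L4_UDP)
                        return 1;
        return 0;
}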
Commit e86a699cf6 missed two further libm dependencies: ceil() used
by librte_meter is typically inlined, so the missing dependency does not
actually cause failures, and librte_pmd_nfp is not built by default,
so it's easy to miss.
This causes duplicates in LDLIBS in many configurations, so it's vital
that they are removed before being passed to the linker.
Fixes: e86a699cf6 ("mk: fix shared library dependencies on libm and librt")
Reported-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
The comment for the "ierrors" counter says that it counts erroneous received
packets, but for some reason the "imissed" counter is added to the "ierrors"
counter in most drivers.
This is a mistake, because missed packets are obviously not received.
This patch fixes it.
Fixes: 70bdb18657 ("ethdev: add Rx error counters for missed, badcrc and badlen packets")
Fixes: 6bfe648406 ("i40e: add Rx error statistics")
Fixes: 856505d303 ("cxgbe: add port statistics")
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
If a bonded device is created when there are no slave devices
there is a loop in bond_ethdev_promiscuous_enable() which results
in a segmentation fault.
The solution is to initialise the current_primary_port to an
invalid port value when the bonded port is created.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The current code for detecting link during slave addition can cause a
slave interface to be activated twice -- once during slave_configure()
and again at the end of __eth_bond_slave_add_lock_free(). This will
either cause the active slave count to be incorrect or will cause the
802.3ad activation function to panic. Ensure that the interface is not
activated more than once.
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
If the link state of a slave is "up" when added, it is added to the list
of active slaves but, even if it is the only slave, is not selected as
the primary interface. Generally, handling of link state interrupts
selects an interface to be primary, but only if the active count is zero.
This change avoids the situation where there are active slaves but
no primary.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The bonding PMD in mode 4 puts all enslaved interfaces into promiscuous
mode in order to receive LACPDUs and must filter unwanted packets
after the traffic has been "collected". Allow broadcast and multicast
through so that ARP and IPv6 neighbor discovery continue to work.
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Copy all needed fields from the mode8023ad_private structure in
bond_mode_8023ad_conf_get(). This helps ensure that a subsequent call
to rte_eth_bond_8023ad_setup() is not passed uninitialized data that
would result in either incorrect behavior or a failed sanity check.
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Ensure that a bonded slave device is not detached,
until it is removed from the bonded device.
Fixes: 2efb58cbab ("bond: new link bonding library")
Fixes: a45b288ef2 ("bond: support link status polling")
Fixes: 494adb7f63 ("ethdev: add device fields from PCI layer")
Fixes: b1fb53a39d ("ethdev: remove some PCI specific handling")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Check that the bonded device has no slaves before detaching it.
Fixes: 8d30fe7fa7 ("bonding: support port hotplug")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
When a device is created with "CREATE" as the action, new rings are
allocated for it, so it is good practice to free them when the
rte_eth_dev_detach method is invoked by the application.
Rings are not freed when "ATTACH" is used or when the device is
created by means of the rte_eth_from_rings function.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Rename the nb_rx/tx_queues fields in the internals struct to max_rx/tx_queues.
The renamed fields are needed to keep the maximum configured queue numbers;
for the current queue numbers, the data->nb_rx/tx_queues fields are used.
Some checkpatch corrections and code cleanup.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
1- Remove duplicate nb_rx/tx_queues fields from internals
2- Move duplicate code into a common function
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Nicolás Pernas Maradei <nicolas.pernas.maradei@emutex.com>
The actual captured length is header.caplen, whereas header.len is
the original length on the wire.
Fixes: 4c173302c3 ("pcap: add new driver")
Signed-off-by: Dror Birkman <dror.birkman@lightcyber.com>
Acked-by: Nicolás Pernas Maradei <nicolas.pernas.maradei@emutex.com>
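A standalone libpcap sketch of the distinction (not the PMD code): caplen is the number of bytes actually captured and must bound any copy, while len is the original length on the wire.

#include <pcap/pcap.h>
#include <stdint.h>
#include <string.h>

/* Copy a captured packet into buf (of size buflen); returns bytes copied. */
static uint32_t
copy_captured_example(const struct pcap_pkthdr *header, const u_char *data,
                      uint8_t *buf, uint32_t buflen)
{
        /* header->len can exceed header->caplen when the snap length
         * truncates the capture; only caplen bytes are valid in data. */
        uint32_t n = header->caplen < buflen ? header->caplen : buflen;

        memcpy(buf, data, n);
        return n;
}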
Allow dynamic deallocation of af_packet device through proper
API functions. To achieve this:
* set device flag to RTE_ETH_DEV_DETACHABLE
* implement rte_pmd_af_packet_devuninit() and expose it
through rte_driver.uninit()
* copy device name to ethdev->data to make discoverable with
rte_eth_dev_allocated()
Moreover, make af_packet init function static, as there is no
reason to keep it public.
Signed-off-by: Wojciech Zmuda <woz@semihalf.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Allow overriding the base mac address of the device.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Remy Horton <remy.horton@intel.com>
During an MTU change, the adapter is restarted. If hardware VLAN offload
is in use, the existing filter table would also be cleared. Instead,
set up the shadow table once during device initialization and just update
it during restart.
vmxnet3_dev_vlan_offload_set(dev, mask) was incorrectly treating the
mask parameter as the bitmask for vlan_strip and vlan_filter, whereas
the mask indicates only what has changed; the values for
vlan_stripping and vlan_filter need to be taken from dev_conf.rxmode.
Fixes: f003fc3834 ("vmxnet3: enable vlan filtering")
Signed-off-by: Charles (Chas) Williams <ciwillia@brocade.com>
Signed-off-by: Nachiketa Prachanda <nprachan@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Remy Horton <remy.horton@intel.com>
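A schematic of the mask semantics (generic ethdev fields, not the vmxnet3 register programming): the mask only tells which offloads changed, and the desired values are read from dev_conf.rxmode.

#include <rte_ethdev.h>
#include <rte_log.h>

/* Illustrative handler skeleton for a vlan_offload_set callback. */
static void
vlan_offload_set_sketch(struct rte_eth_dev *dev, int mask)
{
        struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

        if (mask & ETH_VLAN_STRIP_MASK)
                RTE_LOG(INFO, PMD, "VLAN stripping is now %s\n",
                        rxmode->hw_vlan_strip ? "on" : "off");
        if (mask & ETH_VLAN_FILTER_MASK)
                RTE_LOG(INFO, PMD, "VLAN filtering is now %s\n",
                        rxmode->hw_vlan_filter ? "on" : "off");
}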
Add support for linking multi-segment buffers together to
handle Jumbo packets. The vmxnet3 API supports having header
and body buffer types. What this patch does is fill the primary
ring completely with header buffers and the secondary ring
with body buffers. This allows non-jumbo frames to use only
one mbuf (from the primary ring), while jumbo frames have their
first mbuf from the primary ring and following mbufs from the other
ring.
This could be optimized in the future if DPDK had an API
to supply different-sized mbufs (two pools) to the driver.
Signed-off-by: Stephen Hemminger <shemming@brocade.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Release note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This commit adds vmxnet3 TSO support.
Verified with test-pmd (set fwd csum) that both TSO and
non-TSO packets can be successfully transmitted and that all
segments for a TSO packet are correct on the receiver side.
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Tx data ring support was removed in a previous change that
added multi-seg transmit. This change adds it back.
According to the original commit (2e849373), 64B pkt
rate with l2fwd improved by ~20% on an Ivy Bridge
server at which point we start to hit some bottleneck
on the rx side.
I also re-did the same test on a different setup (Haswell
processor, ~2.3GHz clock rate) on top of the master
and still observed ~17% performance gains.
Fixes: 7ba5de417e ("vmxnet3: support multi-segment transmit")
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
All the error checks in virtqueue_enqueue_xmit are already done
by the caller. Therefore they can be removed to improve performance.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Virtio supports a feature that allows the sender to prepend the transmit
header to the data. It requires that the mbuf be writable and correctly
aligned, and that the feature has been negotiated. If all this works out,
then it is the optimum way to transmit a single-segment packet.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
The virtio ring in QEMU/KVM is usually limited to 256 entries
and the normal way that virtio driver was queuing mbufs required
nsegs + 1 ring elements. By using the indirect ring element feature
if available, each packet will take only one ring slot even for
multi-segment packets.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Applied with coding standards fixes:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The virtio_net_hdr descriptors all pointed to the same buffer. This does not
cause an issue because in the simple TX mode the header is not used. This
patch makes the header descriptors point to different buffers.
Fixes: b4ae9c505f ("virtio: optimize ring layout")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The initialisation of nb_rx_queues and nb_tx_queues has been removed
from eth_virtio_dev_init.
The nb_rx_queues and nb_tx_queues were being initialised in
eth_virtio_dev_init before the tx_queues and rx_queues arrays were
allocated.
The arrays are allocated when the ethdev port is configured and the
nb_tx_queues and nb_rx_queues are initialised.
If any of the following functions were called before the ethdev
port was configured there was a segmentation fault because
rx_queues and tx_queues were NULL:
rte_eth_stats_get
rte_eth_stats_reset
rte_eth_xstats_get
rte_eth_xstats_reset
Fixes: 823ad64795 ("virtio: support multiple queues")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Fix the issue that virtio device cannot be started after stopped.
The field, hw->started, should be changed by virtio_dev_start/stop instead
of virtio_dev_close.
Fixes: a85786dc81 ("virtio: fix states handling during initialization")
Reported-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Pavel Fedin <p.fedin@samsung.com>