Commit b8eb345378 ("pci: ignore devices already managed in Linux when
mapping x86 ioport") did not update other parts of the ioport api.
The application is not supposed to call these read/write/unmap ioport
functions if map call failed but I prefer aligning the code for the sake
of consistency.
Signed-off-by: David Marchand <david.marchand@6wind.com>
If a bonded device is created when there are no slave devices, a loop
in bond_ethdev_promiscuous_enable() results in a segmentation fault.
The solution is to initialise the current_primary_port to an
invalid port value when the bonded port is created.
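A minimal sketch of the idea (struct reduced for illustration; using
RTE_MAX_ETHPORTS as the sentinel is an assumption, not necessarily the
exact value the patch picks):

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Reduced stand-in for the bonding PMD's private data. */
    struct bond_state {
        uint16_t current_primary_port;
    };

    static void
    bond_dev_create_sketch(struct bond_state *internals)
    {
        /* Left at 0, current_primary_port silently aliases a valid
         * port id; an out-of-range sentinel makes "no primary yet"
         * explicit until the first slave is added. */
        internals->current_primary_port = RTE_MAX_ETHPORTS;
    }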
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The current code for detecting link during slave addition can cause a
slave interface to be activated twice -- once during slave_configure()
and again at the end of __eth_bond_slave_add_lock_free(). This will
either cause the active slave count to be incorrect or will cause the
802.3ad activation function to panic. Ensure that the interface is not
activated more than once.
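A self-contained sketch of the guard (struct and names are stand-ins,
not the driver's actual code):

    #include <stdint.h>

    #define MAX_SLAVES 8                    /* illustration only */

    struct bond_state {                     /* reduced stand-in */
        uint16_t active_slaves[MAX_SLAVES];
        unsigned int active_slave_count;
    };

    /* Activate a slave only if it is not already on the active list;
     * adding it twice is what skews the count or panics mode 4. */
    static void
    activate_slave_once(struct bond_state *s, uint16_t slave_port_id)
    {
        unsigned int i;

        for (i = 0; i < s->active_slave_count; i++)
            if (s->active_slaves[i] == slave_port_id)
                return;
        if (s->active_slave_count < MAX_SLAVES)
            s->active_slaves[s->active_slave_count++] = slave_port_id;
    }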
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
If the link state of a slave is "up" when added, it is added to the list
of active slaves but, even if it is the only slave, is not selected as
the primary interface. Generally, handling of link state interrupts
selects an interface to be primary, but only if the active count is zero.
This change avoids the situation where there are active slaves but
no primary.
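Sketched in isolation (stand-in struct; the real patch may hook this
elsewhere):

    #include <stdint.h>

    struct bond_state {                 /* reduced stand-in */
        uint16_t current_primary_port;
        unsigned int active_slave_count;
    };

    /* Called right after a link-up slave is appended to the active
     * list: if it is the first active slave, promote it to primary
     * instead of waiting for a link interrupt, which only elects a
     * primary when the active count is zero. */
    static void
    maybe_promote_primary(struct bond_state *s, uint16_t slave_port_id)
    {
        if (s->active_slave_count == 1)
            s->current_primary_port = slave_port_id;
    }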
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The bonding PMD in mode 4 puts all enslaved interfaces into promiscuous
mode in order to receive LACPDUs and must filter unwanted packets
after the traffic has been "collected". Allow broadcast and multicast
through so that ARP and IPv6 neighbor discovery continue to work.
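A sketch of the filtering rule, using the 1.x-era rte_ether helpers
(the surrounding RX burst loop is omitted):

    #include <rte_ether.h>
    #include <rte_mbuf.h>

    /* Return 1 if the frame should be dropped: in promiscuous mode the
     * bond sees everything, so discard unicast not addressed to the
     * bond MAC, but keep multicast (which includes broadcast, since
     * broadcast addresses have the multicast bit set) so that ARP and
     * IPv6 ND still reach the application. */
    static int
    mode4_rx_drop(struct rte_mbuf *m, const struct ether_addr *bond_mac)
    {
        struct ether_hdr *hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);

        return !is_same_ether_addr(&hdr->d_addr, bond_mac) &&
               !is_multicast_ether_addr(&hdr->d_addr);
    }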
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Copy all needed fields from the mode8023ad_private structure in
bond_mode_8023ad_conf_get(). This helps ensure that a subsequent call
to rte_eth_bond_8023ad_setup() is not passed uninitialized data that
would result in either incorrect behavior or a failed sanity check.
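A sketch of the unit conversion involved (field list abridged; names
follow struct rte_eth_bond_8023ad_conf, while the tick-based inputs
stand in for the private state):

    #include <stdint.h>
    #include <rte_cycles.h>
    #include <rte_eth_bond_8023ad.h>

    /* Convert the private mode 4 timeouts (kept in TSC ticks) back
     * into the millisecond fields of the public conf; the fix is that
     * every field later consumed by rte_eth_bond_8023ad_setup() is
     * filled in. */
    static void
    conf_get_sketch(struct rte_eth_bond_8023ad_conf *conf,
                    uint64_t fast_periodic_ticks,
                    uint64_t slow_periodic_ticks)
    {
        uint64_t ms_ticks = rte_get_tsc_hz() / 1000;

        conf->fast_periodic_ms = fast_periodic_ticks / ms_ticks;
        conf->slow_periodic_ms = slow_periodic_ticks / ms_ticks;
        /* ...and likewise for the remaining timeout/period fields. */
    }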
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Ensure that a bonded slave device is not detached until it has been
removed from the bonded device.
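One plausible mechanism, assuming the RTE_ETH_DEV_DETACHABLE flag is
what gates detach (a sketch, not necessarily the patch's exact code):

    /* When the port becomes a slave, make it non-detachable... */
    slave_eth_dev->data->dev_flags &= ~RTE_ETH_DEV_DETACHABLE;

    /* ...and restore the flag once it is removed from the bond. */
    slave_eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;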
Fixes: 2efb58cbab ("bond: new link bonding library")
Fixes: a45b288ef2 ("bond: support link status polling")
Fixes: 494adb7f63 ("ethdev: add device fields from PCI layer")
Fixes: b1fb53a39d ("ethdev: remove some PCI specific handling")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Check that the bonded device has no slaves before detaching it.
Fixes: 8d30fe7fa7 ("bonding: support port hotplug")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
When a device is created with "CREATE" as the action, new rings are
allocated for it, so it is good practice to free them when the
rte_eth_dev_detach() method is invoked by the application.
Rings are not freed when "ATTACH" is used or when the device is
created by means of the rte_eth_from_rings() function.
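A sketch of the conditional free in the uninit path (the action enum
and internals fields are assumed names):

    /* Free the rings only if this driver allocated them via "CREATE";
     * with "ATTACH" or rte_eth_from_rings() they are owned elsewhere. */
    if (internals->action == DEV_CREATE) {
        unsigned int i;

        for (i = 0; i < internals->max_rx_queues; i++)
            rte_ring_free(internals->rx_ring_queues[i].rng);
        for (i = 0; i < internals->max_tx_queues; i++)
            rte_ring_free(internals->tx_ring_queues[i].rng);
    }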
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Rename the nb_rx/tx_queues fields in the internals struct to
max_rx/tx_queues. The renamed fields keep the maximum number of queues
configured; the current queue counts are taken from the
data->nb_rx/tx_queues fields.
Also some checkpatch corrections and code cleanup.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
1- Remove duplicate nb_rx/tx_queues fields from internals
2- Move duplicate code into a common function
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Nicolás Pernas Maradei <nicolas.pernas.maradei@emutex.com>
The actual captured length is header.caplen, whereas header.len is
the original length on the wire.
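Illustrative use in the RX path (simplified, error handling omitted):

    /* header.caplen is how many bytes libpcap actually captured;
     * header.len is the packet's original length on the wire. */
    rte_memcpy(rte_pktmbuf_mtod(mbuf, void *), packet, header.caplen);
    mbuf->data_len = (uint16_t)header.caplen;
    mbuf->pkt_len = mbuf->data_len;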
Fixes: 4c173302c3 ("pcap: add new driver")
Signed-off-by: Dror Birkman <dror.birkman@lightcyber.com>
Acked-by: Nicolás Pernas Maradei <nicolas.pernas.maradei@emutex.com>
Allow dynamic deallocation of af_packet device through proper
API functions. To achieve this:
* set device flag to RTE_ETH_DEV_DETACHABLE
* implement rte_pmd_af_packet_devuninit() and expose it
through rte_driver.uninit()
* copy device name to ethdev->data to make it discoverable with
  rte_eth_dev_allocated()
Moreover, make af_packet init function static, as there is no
reason to keep it public.
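A sketch of the hookup in the 2.x vdev driver style (uninit body
abridged; the init name mirrors the existing function):

    static int
    rte_pmd_af_packet_devuninit(const char *name)
    {
        struct rte_eth_dev *eth_dev;

        /* Only works because devinit now copies the name into
         * eth_dev->data, making the port discoverable here. */
        eth_dev = rte_eth_dev_allocated(name);
        if (eth_dev == NULL)
            return -1;

        rte_free(eth_dev->data->dev_private);
        rte_free(eth_dev->data);
        rte_eth_dev_release_port(eth_dev);
        return 0;
    }

    static struct rte_driver pmd_af_packet_drv = {
        .name = "eth_af_packet",
        .type = PMD_VDEV,
        .init = rte_pmd_af_packet_devinit,
        .uninit = rte_pmd_af_packet_devuninit,
    };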
Signed-off-by: Wojciech Zmuda <woz@semihalf.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Allow overriding the base mac address of the device.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Remy Horton <remy.horton@intel.com>
During an MTU change, the adapter is restarted. If hardware VLAN offload
is in use, the existing filter table would also be cleared. Instead,
set up the shadow table once during device initialization and just
update it during restart.
vmxnet3_dev_vlan_offload_set(dev, mask) was incorrectly treating the
mask parameter as the bitmask for vlan_strip and vlan_filter, whereas
the mask indicates only what has changed; the values for vlan_strip
and vlan_filter need to be taken from dev_conf.rxmode.
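A sketch of the corrected interpretation (register writes elided
behind assumed helpers):

    static void
    vlan_offload_set_sketch(struct rte_eth_dev *dev, int mask)
    {
        struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

        /* 'mask' only says which offloads changed; the new on/off
         * values must be read from dev_conf.rxmode. */
        if (mask & ETH_VLAN_STRIP_MASK)
            set_vlan_strip(dev, rxmode->hw_vlan_strip);   /* assumed */

        if (mask & ETH_VLAN_FILTER_MASK)
            set_vlan_filter(dev, rxmode->hw_vlan_filter); /* assumed */
    }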
Fixes: f003fc3834 ("vmxnet3: enable vlan filtering")
Signed-off-by: Charles (Chas) Williams <ciwillia@brocade.com>
Signed-off-by: Nachiketa Prachanda <nprachan@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Remy Horton <remy.horton@intel.com>
Add support for linking multi-segment buffers together to
handle Jumbo packets. The vmxnet3 API supports having header
and body buffer types. What this patch does is fill the primary
ring completely with header buffers and the secondary ring
with body buffers. This allows for non-jumbo frames to only
use one mbuf (from primary ring); and jumbo frames will have
first mbuf from primary ring and following mbufs from other
ring.
This could be optimized in future if the DPDK had API
to supply different sized mbufs (two pools) into driver.
Signed-off-by: Stephen Hemminger <shemming@brocade.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Release note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This commit adds vmxnet3 TSO support.
Verified with testpmd (set fwd csum) that both TSO and non-TSO
packets can be successfully transmitted and that all segments of a
TSO packet are correct on the receiver side.
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Tx data ring support was removed in a previous change that
added multi-seg transmit. This change adds it back.
According to the original commit (2e849373), 64B pkt
rate with l2fwd improved by ~20% on an Ivy Bridge
server at which point we start to hit some bottleneck
on the rx side.
I also re-did the same test on a different setup (Haswell
processor, ~2.3GHz clock rate) on top of the master
and still observed ~17% performance gains.
Fixes: 7ba5de417e ("vmxnet3: support multi-segment transmit")
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
All the error checks in virtqueue_enqueue_xmit are already done
by the caller. Therefore they can be removed to improve performance.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Virtio supports a feature that allows the sender to prepend the
transmit header to the packet data. It requires that the mbuf be
writable and correctly aligned, and that the feature has been
negotiated. If all of this works out, it is the optimal way to
transmit a single-segment packet.
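A sketch of the eligibility test (variable names follow the virtio TX
path; treating VIRTIO_F_ANY_LAYOUT as the negotiated feature is an
assumption):

    /* Prepend the virtio_net_hdr into the mbuf headroom only when
     * safe: a single direct segment, no other references to the data,
     * enough headroom, and a host that accepts any layout. */
    int can_push = vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) &&
        RTE_MBUF_DIRECT(txm) &&
        rte_mbuf_refcnt_read(txm) == 1 &&
        txm->nb_segs == 1 &&
        rte_pktmbuf_headroom(txm) >= sizeof(struct virtio_net_hdr);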
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
The virtio ring in QEMU/KVM is usually limited to 256 entries
and the normal way the virtio driver queues mbufs requires
nsegs + 1 ring elements. By using the indirect ring element feature
if available, each packet will take only one ring slot even for
multi-segment packets.
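The slot arithmetic, as a hedged sketch (descriptor table setup
elided):

    /* Without indirect descriptors a packet costs nb_segs + 1 slots
     * (header + each segment); with VIRTIO_RING_F_INDIRECT_DESC the
     * header and all segments live in a side table and the main ring
     * spends exactly one slot. */
    int use_indirect =
        vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
        txm->nb_segs > 1;                   /* assumed policy */
    uint16_t needed = use_indirect ? 1 : txm->nb_segs + 1;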
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Applied with coding standards fixes:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The virtio_net_hdr descriptors all pointed to the same buffer. It
doesn't cause an issue because in the simple TX mode we don't use the
header. This patch makes each header descriptor point to a different
buffer.
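A heavily simplified sketch of the fix (ring layout and variable names
are placeholders):

    /* Give each TX header descriptor its own virtio_net_hdr slot in
     * the header memzone instead of pointing all of them at offset 0. */
    for (i = 0; i < nb_desc; i++)
        vq->vq_ring.desc[i].addr =
            hdr_mem_phys + i * sizeof(struct virtio_net_hdr);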
Fixes: b4ae9c505f ("virtio: optimize ring layout")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The initialisation of nb_rx_queues and nb_tx_queues has been removed
from eth_virtio_dev_init.
The nb_rx_queues and nb_tx_queues were being initialised in
eth_virtio_dev_init before the tx_queues and rx_queues arrays were
allocated.
The arrays are allocated when the ethdev port is configured and the
nb_tx_queues and nb_rx_queues are initialised.
If any of the following functions were called before the ethdev
port was configured there was a segmentation fault because
rx_queues and tx_queues were NULL:
rte_eth_stats_get
rte_eth_stats_reset
rte_eth_xstats_get
rte_eth_xstats_reset
Fixes: 823ad64795 ("virtio: support multiple queues")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Fix the issue that virtio device cannot be started after stopped.
The field, hw->started, should be changed by virtio_dev_start/stop instead
of virtio_dev_close.
Fixes: a85786dc81 ("virtio: fix states handling during initialization")
Reported-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Pavel Fedin <p.fedin@samsung.com>
Mmap PCI resource file and add inline functions for reading from and
writing to PCI resource address space.
Add description of IBUF and OBUF address space.
Add configuration option for setting which firmware type will be used.
The correct address space values for IBUF and OBUF offsets are used
according to the CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS configuration option.
Setting link up/down and getting info about link status is done through
mmapped PCI resource address space.
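A sketch of such accessors (function names are assumptions;
little-endian device registers assumed):

    #include <stdint.h>
    #include <rte_byteorder.h>

    static inline uint32_t
    szedata2_read32(const volatile void *addr)
    {
        return rte_le_to_cpu_32(*(const volatile uint32_t *)addr);
    }

    static inline void
    szedata2_write32(uint32_t value, volatile void *addr)
    {
        *(volatile uint32_t *)addr = rte_cpu_to_le_32(value);
    }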
Signed-off-by: Matej Vido <vido@cesnet.cz>
The PMD was of type PMD_VDEV, which means the PCI device is not
recognised automatically during EAL initialization but has to be
created with the EAL option --vdev.
Now the PMD is of type PMD_PDEV, which means the PCI device is probed
and recognised automatically during EAL initialization.
The path to the szedata2 device file is matched with the device, and
the count of available RX and TX DMA channels is determined during
device initialization.
Initialization, starting and stopping of queues are changed to better
correspond with the Ethernet device API model. The function callbacks
(rx|tx)_queue_(start|stop) are added. Unnecessary items are removed
from the Ethernet device private data structure.
Signed-off-by: Matej Vido <vido@cesnet.cz>
When using the start-stop functionality, the per-queue fields need to
be properly reset.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Even with tx checksum offload available, do not set the flag by default.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
The mbuf ol_flags field was changed to uint64_t in DPDK version 1.8.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
The file sys/io.h was included, but it can be unavailable in some
non-x86 toolchains.
Like the other system includes in nfp_net.c, it seems unnecessary,
so the easy fix is to remove it.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Change rxq_cq_to_ol_flags() to set the checksum flags according to the
packet type, so that the mbuf checksum-bad flags are not set for
non-L3/L4 packets.
Fixes: 67fa62bc67 ("mlx5: support checksum offload")
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
RSS configuration should not be freed when priv is NULL.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Or Ami <ora@mellanox.com>
The first and last memory pool elements are usually cache-aligned but not
page-aligned, particularly when using huge pages.
Hardware performance can be improved significantly by registering memory
regions starting and ending on page boundaries.
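A sketch of the rounding (RTE_ALIGN_FLOOR/CEIL are existing macros;
the verbs registration call is shown only as a comment):

    #include <unistd.h>
    #include <stdint.h>
    #include <rte_common.h>

    /* Expand [addr, addr + len) outward to page boundaries before
     * registering it, so the first and last mempool elements fall
     * inside a page-aligned memory region. */
    static void
    region_bounds(void *addr, size_t len, uintptr_t *start, uintptr_t *end)
    {
        uintptr_t page_size = (uintptr_t)sysconf(_SC_PAGESIZE);

        *start = RTE_ALIGN_FLOOR((uintptr_t)addr, page_size);
        *end = RTE_ALIGN_CEIL((uintptr_t)addr + len, page_size);
        /* ibv_reg_mr(pd, (void *)*start, *end - *start, access); */
    }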
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>