If the link state of a slave is "up" when added, it is added to the list
of active slaves but, even if it is the only slave, is not selected as
the primary interface. Generally, handling of link state interrupts
selects an interface to be primary, but only if the active count is zero.
This change avoids the situation where there are active slaves but
no primary.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The bonding PMD in mode 4 puts all enslaved interfaces into promiscuous
mode in order to receive LACPDUs and must filter unwanted packets
after the traffic has been "collected". Allow broadcast and multicast
through so that ARP and IPv6 neighbor discovery continue to work.
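A conceptual sketch of the filtering rule described above (the helper name
and parameters are illustrative, not the actual bonding code):
    #include <rte_ether.h>
    /* Accept frames addressed to the bonded MAC plus broadcast and
     * multicast so ARP and IPv6 neighbor discovery keep working; drop
     * the rest of the promiscuous-mode noise after "collection". */
    static inline int
    keep_collected_pkt(struct ether_hdr *eth, struct ether_addr *bond_mac)
    {
        return is_same_ether_addr(&eth->d_addr, bond_mac) ||
               is_broadcast_ether_addr(&eth->d_addr) ||
               is_multicast_ether_addr(&eth->d_addr);
    }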
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Copy all needed fields from the mode8023ad_private structure in
bond_mode_8023ad_conf_get(). This helps ensure that a subsequent call
to rte_eth_bond_8023ad_setup() is not passed uninitialized data that
would result in either incorrect behavior or a failed sanity check.
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Ensure that a bonded slave device is not detached until it has been
removed from the bonded device.
Fixes: 2efb58cbab ("bond: new link bonding library")
Fixes: a45b288ef2 ("bond: support link status polling")
Fixes: 494adb7f63 ("ethdev: add device fields from PCI layer")
Fixes: b1fb53a39d ("ethdev: remove some PCI specific handling")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Check that the bonded device has no slaves before detaching it.
Fixes: 8d30fe7fa7 ("bonding: support port hotplug")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
When a device is created with "CREATE" as the action, new rings are
allocated for it, so it is good practice to free them when the
rte_ethdev_dettach method is invoked by the application.
Rings are not freed when "ATTACH" is used or when the device is
created by means of the rte_eth_from_rings function.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Rename the nb_rx/tx_queues fields in the internals struct to
max_rx/tx_queues. The renamed fields are needed to keep the maximum
number of queues configured; for the current queue counts, the
data->nb_rx/tx_queues fields are used.
Some checkpatch corrections and code cleanup.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
1- Remove duplicate nb_rx/tx_queues fields from internals
2- Move duplicate code into a common function
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Nicolás Pernas Maradei <nicolas.pernas.maradei@emutex.com>
The actual captured length is header.caplen, whereas header.len is
the original length on the wire.
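A minimal sketch of the distinction (variable names are illustrative):
    /* header is a struct pcap_pkthdr filled in by libpcap; copy only
     * the bytes actually captured, not the original wire length. */
    rte_memcpy(rte_pktmbuf_mtod(mbuf, void *), packet, header.caplen);
    mbuf->data_len = header.caplen;
    mbuf->pkt_len = header.caplen;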
Fixes: 4c173302c3 ("pcap: add new driver")
Signed-off-by: Dror Birkman <dror.birkman@lightcyber.com>
Acked-by: Nicolás Pernas Maradei <nicolas.pernas.maradei@emutex.com>
Allow dynamic deallocation of af_packet device through proper
API functions. To achieve this:
* set device flag to RTE_ETH_DEV_DETACHABLE
* implement rte_pmd_af_packet_devuninit() and expose it
through rte_driver.uninit()
* copy device name to ethdev->data to make discoverable with
rte_eth_dev_allocated()
Moreover, make af_packet init function static, as there is no
reason to keep it public.
Signed-off-by: Wojciech Zmuda <woz@semihalf.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Allow overriding the base mac address of the device.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Remy Horton <remy.horton@intel.com>
During an MTU change, the adapter is restarted. If hardware VLAN offload
is in use, the existing filter table would also be cleared. Instead,
set up the shadow table once during device initialization and just update
it during restart.
vmxnet3_dev_vlan_offload_set(dev, mask) was incorrectly treating the
mask parameter as the bitmask for vlan_strip and vlan_filter, whereas
the mask indicates only what has changed; the values for
vlan_stripping and vlan_filter need to be taken from dev_conf.rxmode.
Fixes: f003fc3834 ("vmxnet3: enable vlan filtering")
Signed-off-by: Charles (Chas) Williams <ciwillia@brocade.com>
Signed-off-by: Nachiketa Prachanda <nprachan@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Remy Horton <remy.horton@intel.com>
Add support for linking multi-segment buffers together to
handle Jumbo packets. The vmxnet3 API supports having header
and body buffer types. What this patch does is fill the primary
ring completely with header buffers and the secondary ring
with body buffers. This allows for non-jumbo frames to only
use one mbuf (from primary ring); and jumbo frames will have
first mbuf from primary ring and following mbufs from other
ring.
This could be optimized in the future if DPDK had an API
to supply different-sized mbufs (two pools) to the driver.
Signed-off-by: Stephen Hemminger <shemming@brocade.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Release note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This commit adds vmxnet3 TSO support.
Verified with test-pmd (set fwd csum) that both tso and
non-tso pkts can be successfully transmitted and all
segments for a tso pkt are correct on the receiver side.
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Tx data ring support was removed in a previous change that
added multi-seg transmit. This change adds it back.
According to the original commit (2e849373), 64B pkt
rate with l2fwd improved by ~20% on an Ivy Bridge
server at which point we start to hit some bottleneck
on the rx side.
I also re-did the same test on a different setup (Haswell
processor, ~2.3GHz clock rate) on top of the master
and still observed ~17% performance gains.
Fixes: 7ba5de417e ("vmxnet3: support multi-segment transmit")
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
All the error checks in virtqueue_enqueue_xmit are already done
by the caller. Therefore they can be removed to improve performance.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Virtio supports a feature that allows the sender to put the transmit
header prepended to the data. It requires that the mbuf be writeable and
correctly aligned, and that the feature has been negotiated. If all this
works out, then it is the optimal way to transmit a single-segment packet.
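A minimal sketch of the kind of test described above, assuming the
single-segment fast path (struct virtio_net_hdr is the PMD's own header
definition; the surrounding variable names are illustrative):
    struct virtio_net_hdr *hdr = NULL;
    /* Use the mbuf headroom for the virtio net header only when the
     * mbuf is not shared and has room for it. */
    if (rte_mbuf_refcnt_read(m) == 1 && m->nb_segs == 1 &&
        rte_pktmbuf_headroom(m) >= sizeof(struct virtio_net_hdr))
        hdr = (struct virtio_net_hdr *)
              rte_pktmbuf_prepend(m, sizeof(struct virtio_net_hdr));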
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
The virtio ring in QEMU/KVM is usually limited to 256 entries
and the normal way that virtio driver was queuing mbufs required
nsegs + 1 ring elements. By using the indirect ring element feature
if available, each packet will take only one ring slot even for
multi-segment packets.
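A conceptual sketch of how one ring slot can describe a whole chain via
an indirect table, per the virtio spec (variable names are illustrative):
    /* desc_table is a per-packet array of vring descriptors covering
     * the net header plus every mbuf segment; the ring itself only
     * consumes a single slot pointing at that table. */
    start_dp[idx].addr  = desc_table_phys_addr;
    start_dp[idx].len   = (nb_segs + 1) * sizeof(struct vring_desc);
    start_dp[idx].flags = VRING_DESC_F_INDIRECT;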
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Applied with coding standards fixes:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The virtio_net_hdr descriptors all pointed to the same buffer. This does
not cause an issue because in the simple TX mode the header is not used.
This patch makes each header descriptor point to a different buffer.
Fixes: b4ae9c505f ("virtio: optimize ring layout")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The initialisation of nb_rx_queues and nb_tx_queues has been removed
from eth_virtio_dev_init.
The nb_rx_queues and nb_tx_queues were being initialised in
eth_virtio_dev_init before the tx_queues and rx_queues arrays were
allocated.
The arrays are allocated when the ethdev port is configured and the
nb_tx_queues and nb_rx_queues are initialised.
If any of the following functions were called before the ethdev
port was configured there was a segmentation fault because
rx_queues and tx_queues were NULL:
rte_eth_stats_get
rte_eth_stats_reset
rte_eth_xstats_get
rte_eth_xstats_reset
Fixes: 823ad64795 ("virtio: support multiple queues")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Fix the issue that virtio device cannot be started after stopped.
The field, hw->started, should be changed by virtio_dev_start/stop instead
of virtio_dev_close.
Fixes: a85786dc81 ("virtio: fix states handling during initialization")
Reported-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Pavel Fedin <p.fedin@samsung.com>
Mmap PCI resource file and add inline functions for reading from and
writing to PCI resource address space.
Add description of IBUF and OBUF address space.
Add configuration option for setting which firmware type will be used.
The correct address space values for IBUF and OBUF offsets are used
according to the configuration option CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS.
Setting link up/down and getting info about link status is done through
mmapped PCI resource address space.
Signed-off-by: Matej Vido <vido@cesnet.cz>
The PMD was of type PMD_VDEV, which means that the PCI device is not
recognised automatically during EAL initialization but has to be created
with the EAL option --vdev.
Now the PMD is of type PMD_PDEV, which means that the PCI device is probed
and recognised automatically during EAL initialization.
The path to the szedata2 device file is matched with the device, and the
count of available RX and TX DMA channels is determined during device
initialization.
Initialization, starting and stopping of queues are changed to better
correspond with the Ethernet device API model. Function callbacks
(rx|tx)_queue_(start|stop) are added. Unnecessary items are removed
from the Ethernet device private data structure.
Signed-off-by: Matej Vido <vido@cesnet.cz>
When using start-stop functionality the per queue fields need to
be properly reset.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Even with tx checksum offload available, do not set the flag by default.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
The mbuf ol_flags field was changed to uint64_t in DPDK version 1.8.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
The file sys/io.h was included but it can be unavailable in some
non-x86 toolchains.
Like other system includes in the file nfp_net.c, it seems unneeded,
so the easy fix is to remove it.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Change rxq_cq_to_ol_flags() to set checksum flags according to packet type,
so for non L3/L4 packets the mbuf chksum_bad flags will not be set.
Fixes: 67fa62bc67 ("mlx5: support checksum offload")
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
RSS configuration should not be freed when priv is NULL.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Or Ami <ora@mellanox.com>
The first and last memory pool elements are usually cache-aligned but not
page-aligned, particularly when using huge pages.
Hardware performance can be improved significantly by registering memory
regions starting and ending on page boundaries.
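A minimal sketch of the rounding described above (mlx5_mr_register is an
illustrative name, not the actual driver function):
    #include <unistd.h>
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    /* Round the region start down and the end up to page boundaries. */
    uintptr_t start = (uintptr_t)mp_first_elt & ~(page - 1);
    uintptr_t end = ((uintptr_t)mp_last_elt_end + page - 1) & ~(page - 1);
    mlx5_mr_register((void *)start, end - start);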
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Avoid dereferencing pointers twice to get to fast Verbs functions by
storing them directly in RX/TX queue structures.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Allows HW to strip the 802.1Q header from incoming frames and report it
through the mbuf structure.
This feature requires MLNX_OFED >= 3.2.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Upcoming flow director support will reuse this function to generate filter
rules.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Until now, broadcast frames were handled like unicast. Moving the related
flow to the special flows table frees up the related unicast MAC entry.
The same method is used to handle IPv6 multicast frames.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Merge redundant code by adding a static initialization table to manage
promiscuous and allmulticast (special) flows.
New function priv_rehash_flows() implements the logic to enable/disable
relevant flows in one place from any context.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The documentation specifies that the hardware only supports RX queue
counts that are a power of 2.
Since ibv_exp_create_qp may not return an error when the number of
queues is unsupported by hardware, sanitize the value in dev_configure.
Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
When compiling with clang 3.6, the mlx4 driver gives the following error
message about an unneeded function.
CC mlx4.o
.../drivers/net/mlx4/mlx4.c:136:20: fatal error: function
'wr_id_t_check' is not needed and will not be emitted
[-Wunneeded-internal-declaration]
static inline void wr_id_t_check(void)
^
1 error generated.
The function is to compile-time check the size of wr_id_t, so use
the standard DPDK BUILD_BUG_ON macro to do so in the init function
instead.
Fixes: 7fae69eeff ("mlx4: new poll mode driver")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This patch enables reading sglort (global resource tag) info into the
mbuf for RX and inserting an FTAG (Fabric Tag) at the beginning of the
packet for TX. The vlan_tci_outer field of the rte_mbuf structure is
used to carry the sglort, since it is not otherwise used in fm10k.
In FTAG based forwarding mode, the switch forwards packets according
to the glort info in the FTAG rather than the MAC and VLAN table.
To activate this feature, the user needs to pass a devargs parameter to
EAL for the fm10k device, such as "-w 0000:84:00.0,enable_ftag=1".
Currently this feature is supported only on the PF, because the
FM10K_PFVTCTL register is read-only for the VF.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Remove the unused element request_lport_map in struct fm10k_mac_ops.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Some cleanups to better reflect the code that was actually pushed out to
the upstream Linux community.
Among the above cleanups, a few macros such as FM10K_RXINT_TIMER_SHIFT are
removed, but they are needed in dpdk/fm10k, so we have to put all these
necessary macros into fm10k_osdep.h.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Per comments from an upstream kernel patch, and looking at how the TLV
LE_STRUCT code works, we actually want these structures to be 4-byte
aligned, not 1-byte aligned.
In practice, 1-byte alignment has worked so far because all our
structures end up being a multiple of 4 bytes. But if a future TLV
structure were added that had a u8 or similar sticking on the end, things
would break. Fix this by using 4-byte alignment, which will prevent the
TLV LE_STRUCT code from breaking. Update the comment explaining that we
need 4-byte alignment of our structures.
Fixes: 925c862cbc ("fm10k/base: pack TLV overlay structures")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The comment for fm10k_iov_msg_lport_state_pf was changed during
review of kernel driver, and the new wording is slightly clearer.
Re-write the comment in base code based on this new wording.
Fix a number of mailbox comment issues with function header comments,
lower-case acronyms (i.e. FIFO, TLV), incorrect function names in
DEBUGFUNC(), duplicate comments and a stubbed-out header comment for
fm10k_sm_mbx_init.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The vid variable name is shorthand for VLAN ID, so we should use this in
comments explaining what is happening.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The Linux kernel provides a call, pcie_get_minimum_link, which
can crawl the PCIe tree and determine the actual minimum link speed of a
device. This is a more general check than the one provided by
is_slot_appropriate, so the kernel driver does not use or want the
is_slot_appropriate function call. Add a NO_IS_SLOT_APPROPRIATE_CHECK
definition which can be defined to remove the code.
If left undefined (the default) then the code will all be active and no
driver changes should be necessary.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Use memcpy instead of copying MAC address byte-by-byte.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Using the BIT macro can simplify the bit-shifting operation and make the
code look clean. Similar to how this is handled in the i40e base code,
define a macro for it in DPDK, so it can be used here too.
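A minimal sketch of the helper being described (the exact definition in
fm10k_osdep.h may differ):
    #define BIT(n)  (1UL << (n))   /* e.g. BIT(3) == 0x8 */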
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
"else" is not generally useful after a break or return.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The recommended maximum line length is 80 characters.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Add comments which properly explain the undocumented use of bits in
TDLEN register prior to VF initializing it to the correct value. Note
that the mechanism is entirely software-defined and explain its purpose
to help reduce confusion in the future.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
VF drivers must detect how many queues are available. Previously, the
driver assumed that each VF has at minimum 1 queue. This assumption is
incorrect, since it is possible that the PF has not yet assigned the
queues to the VF by the time the VF checks.
To resolve this, we added a check first to ensure that the first queue
is, in fact, owned by the VF at init_hw_vf time.
However, the code flow did not reset hw->mac.max_queues to 0.
In some cases, such as during reinit flows, we call init_hw_vf
without clearing the previous value of hw->mac.max_queues. Due to this,
when init_hw_vf errors out, if its error code is not properly handled
the VF driver may still believe it has queues which no longer belong to
it. Fix this by clearing the hw->mac.max_queues on exit due to errors.
Fixes: 8b8264bdb9 ("fm10k/base: check VF has a queue")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Use bitshift instead of a divisor, because this is faster, and
eliminates any need for a '0' check. In our case, this even works
out because default Gen3 will be 0.
Because of this, we are also able to remove the check for non-zero value
in the VF code path since that will already be the default Gen3 case.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Make functions that are only referenced locally static.
Wrap fm10k_msg_data fm10k_iov_msg_data_pf[] in the new ifndef
NO_DEFAULT_SRIOV_MSG_HANDLERS so that drivers with custom SR-IOV
message handlers can strip it.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Since the resultant data type of the mac_update.mac_upper field is u16,
it does not make sense to typecast u8 variables to u32 first.
Fixes: 7223d200c2 ("fm10k: add base driver")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The new shared code makes the fm10k_msg_update_pvid_pf function static,
so it can no longer be referenced in fm10k_ethdev.c. The registered PF
handler is almost the same as the default PF handler, so removing it has
no impact on the mailbox.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Use SSE instructions to parse the error flags in the HW Rx descriptor,
then set the corresponding bits in the mbuf.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
When the TX function tries to free a bunch of mbufs, it frees them
one by one. This change scans the free list and merges the requests
when they belong to the same pool, then frees them in a single call,
which reduces the cycles spent on freeing mbufs.
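A minimal sketch of the approach, assuming single-segment mbufs whose
reference count is known to be one (the helper name and batch size are
illustrative, not the actual fm10k code):
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #define TX_FREE_BATCH 64
    static inline void
    tx_free_bulk(struct rte_mbuf **pkts, unsigned int n)
    {
        void *batch[TX_FREE_BATCH];
        struct rte_mempool *pool = NULL;
        unsigned int i, cnt = 0;
        for (i = 0; i < n; i++) {
            struct rte_mbuf *m = pkts[i];
            /* flush the batch whenever the pool changes or it fills */
            if (m->pool != pool || cnt == TX_FREE_BATCH) {
                if (cnt)
                    rte_mempool_put_bulk(pool, batch, cnt);
                pool = m->pool;
                cnt = 0;
            }
            batch[cnt++] = m;
        }
        if (cnt)
            rte_mempool_put_bulk(pool, batch, cnt);
    }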
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
The fm10k switch core uses source MAC + VID + SGLORT to do a
lookup in the MAC table. If there is no match, an exception
interrupt is sent to the switch manager. Too many of these
exception interrupts cause high CPU usage on the switch manager
side.
To reproduce this issue, run one DPDK testpmd on a server
with one fm10k NIC and mac-forward test traffic from one of the
fm10k ports to another port. The CPU usage of the switch
manager goes up to about 20% for a test traffic rate of
10 Gbps, compared to near 0% with no test traffic.
This patch fixes the issue. A default SGLORT is assigned
to each TX queue. This default value works for non-VMDq mode
and the current VMDq example. For advanced VMDq usage, e.g.
different source MAC addresses for different TX queues, the FTAG
forwarding function can be used to change this default
SGLORT value.
Fixes: 9ae6068c86 ("fm10k: add dev start/stop")
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
In FM10K, a single PCIe port can derive several logical ports,
such as SRIOV PF/VF devices and VMDQ objects. To better manage them, the
FM10K silicon assigns a unique GLORT ID to each logical port.
When a logical port sends a broadcast packet, the silicon floods
it to all logical ports, including the one that sent the broadcast packet.
To prevent this, the silicon has an rxq register to store the glort id of
the logical port that the queue binds to.
FM10K has a switch core inside, which has a loopback suppression
mechanism at the switch level. Switch level loopback suppression mostly
works for ether port traffic.
This patch assigns a SGLORT to each RX queue and enables PCIe port
level loopback suppression.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
When the PF establishes a connection with the Switch Manager (SM), it receives
a logical port range from SM, and registers certain logical ports from
that range. Then a default VID will be sent back from the SM.
This whole transaction - finishing with the default VID being set -
needs to be completed before dev_init returns. If not, the interrupt
setting will subsequently be changed in dev_start according to the RX
queue number, and that can cause this transaction to fail.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
Interrupt mode framework has per-queue enable/disable functions.
Implement these two functions for fm10k driver.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
The previous dev_stop function stops the rx/tx queues. This patch adds
logic to disable the rx queue interrupt and clean up the datapath event
and queue/vector map.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
In interrupt mode, each rx queue can have one interrupt to notify the
application when packets are available in that queue. Some queues
also can share one interrupt.
Currently, fm10k needs one separate interrupt for mailbox. So, only those
drivers which support multiple interrupt vectors e.g. vfio-pci can work
in fm10k interrupt mode.
This patch uses the RXINT/INT_MAP registers to map interrupt causes
(rx queue and other events) to vectors, and enable these interrupts
through kernel drivers like vfio-pci.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
rx_descriptor_done is used by the interrupt mode example application
(l3fwd-power) to check the rxd DD bit and determine the RX trend;
l3fwd-power then adjusts the CPU frequency according to
the result.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
In fm10k, a PF, VF, VMDQ, or the queues bound to a flow director rule can
be considered a logical port. The original implementation only creates
a single port for all cases. This change creates 128 logical ports:
the first 64 for PF and VMDQ, the second 64 for flow director.
The DGLORTDEC/DGLORTMAP registers define rules for how to classify packets
into different queues. Currently only the PF and VMDQ cases are considered.
This change adds rules for flow director.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
In the fm10k_recv_scattered_pkts function, a packet is stored in a linked
list of segments; offload flags such as PKT_RX_VLAN_PKT should be set in
the first segment.
Fixes: 6b59a3bc82 ("fm10k: fix VLAN in Rx mbuf")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
This patch implements the ops for adding and removing a mac
address in the i40evf driver. The functions are assigned as follows:
.mac_addr_add = i40evf_add_mac_addr,
.mac_addr_remove = i40evf_del_mac_addr,
To support setting multiple mac addresses, this patch also
extends the mac address adding and deletion on device
start and stop. Each VF can have a maximum of 64 mac
addresses.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
The VEB switching feature for i40e enables switching between the
VSIs connected to the virtual bridge. The old implementation set the
virtual bridge mode to VEPA, which is port aggregation. Enable the
switching ability by setting the loopback mode for the specific VSIs
which connect to the PF or VFs.
VEB/VSI/VEPA are concepts not specific to the i40e HW; they come from the
802.1Qbg spec.
IEEE EVB tutorial:
http://www.ieee802.org/802_tutorials/2009-11/evb-tutorial-draft-20091116_v09.pdf
VEB: a virtual switch that can forward packets based on specific match
fields.
VSI: a virtual interface connecting the VEB/VEPA and a virtual machine.
VEPA: a virtual Ethernet port aggregator that forwards packets upstream
from a VSI to the LAN port.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
This patch fixes a typo in a comment in the definition of
the i40e_pf struct.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Rami Rosen <rami.rosen@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Previously, DCB (Data Center Bridging) was only enabled on the PF;
queue mapping and BW configuration were only done on the PF.
This patch enables DCB for VMDQ VSIs(Virtual Station Interfaces)
by following steps:
1. Take BW and ETS(Enhanced Transmission Selection)
configuration on VEB(Virtual Ethernet Bridge).
2. Take BW and ETS configuration on VMDQ VSIs.
3. Update TC(Traffic Class) and queues mapping on VMDQ VSIs.
To enable DCB on VMDQ, the number of TCs should not be larger than
the number of queues in VMDQ pools, and the number of queues per
VMDQ pool is specified by CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
in config/common_* file.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
It removes the i40evf_set_mac_type() defined in PMD, and reuses
i40e_set_mac_type() defined in base driver.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
It adds base driver release information such as release date,
for better tracking in the future.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Several structures and macros are added or updated, such
as 'struct i40e_aqc_get_link_status',
'struct i40e_aqc_run_phy_activity' and
'struct i40e_aqc_lldp_set_local_mib_resp'.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
It adds the new AQ command and struct for managing a
thermal sensor.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
X722 supports an expanded version of the TCP and UDP PCTYPES for RSS.
Add a Virtchnl offload to support this.
Without this patch, VF drivers will not be able to support
the correct PCTYPES for X722 and UDP flows will not fan out.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch adds 7 new register definitions for programming the
parser, flow director and RSS blocks in the HW.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
RX control register read/write functions are added, as direct
read/write may fail under the stress of small-packet traffic. After the
adminq is ready, all rx control registers should be read/written
by the dedicated functions.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
When updating a VSI, save off the number of allocated and
unallocated VSIs as we do when adding a VSI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Fix the driver load failure when linking with some
PHY types, as the amount of time it takes for
GLGEN_RSTAT_DEVSTATE to be set increases greatly on those PHY
types, which can lead to a timeout.
Fixes: 9aeefed055 ("i40e/base: support ESS")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
In Multi-Function Mode (MFP), particularly when the PF VSI is set
in limited promiscuous mode, the HW switch was still mirroring the
outgoing packets from other VSIs (VF/VMDq) onto the PF VSI.
This sets a new bit to avoid the above mirroring when the PF VSI is in
limited promiscuous mode in MFP, similar to the default port
VSI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch adds functions to blink the LED on devices using a new
PHY, since the MAC registers used in other designs do not work in
this device configuration.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Add the support code for calling the AdminQ API call
aq_set_switch_config.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
With the latest firmware, statistics gathering can now be enabled and
disabled in the HW switch, so we need to add a parameter to allow the
driver to set it as desired. At the same time, the L2 cloud filtering
parameter has been removed as it was never used.
Older drivers working with the newer firmware and newer drivers working
with older firmware will not run into problems with these bits as the
defaults are reasonable and there is no overlap in the bit definitions.
Also, newer drivers will be forced to update because of the change in
function call parameters, a reminder that the functionality exists.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch implements the necessary functions related to port
mirroring features, such as adding/deleting a mirror rule and a function
to set promiscuous VLAN mode for a VSI if the mirror rule_type is
"VLAN Mirroring".
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Add the use of the new shared MAC filter bit for multicast
and broadcast filters in order to make better use of the
filters available from the device.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch fixes a problem where the NVMUpdate Tool, when
using the PHY NVM feature, gets bad data from the PHY because
of contention on the MDIO interface from get phy capability
calls from the driver during regular operations. The problem
is fixed by adding a check that media is available before calling the
get phy capability function, because that bit is not set when the
device is in PHY interaction mode.
Fixes: 842ea19963 ("i40e/base: save link module type")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
The device capabilities were defined in two places, and neither had
all the definitions. It really belongs with the AQ API definition,
so this patch removes the other set of definitions and fills out the
missing item.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
The recently added proxy opcodes should be available only with
X722_SUPPORT, so move them into the #ifdef, and reorder these
to be in numerical order with the rest of the opcodes. Several
structs that were added are unnecessary, so they are removed
here.
Fixes: 788fc17b2d ("i40e/base: support proxy config for X722")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
The recently added Wakeup On Line (WOL) opcodes should be
available only with X722_SUPPORT, so move them into the #ifdef,
and reorder these to be in numerical order with the rest of the
opcodes. Several structs that were added are unnecessary, so
they are removed here.
Fixes: 3c89193a36 ("i40e/base: support WOL config for X722")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Add new device IDs for backplane and QSFP+ adapters, and delete
the deprecated one for backplane.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
In one obscure corner case, it was possible to clear the NVM update
wait flag when no update_done message was actually received. This
patch cleans the event descriptor before use, and moves the opcode
check to where it won't get done if there was no event to clean.
Fixes: 8db9e2a1b2 ("i40e: base driver")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
The standard way to check if the AQ is enabled is to look at
the count field. So it should only set this field after it has
successfully allocated memory.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
It's possible that while waiting for the spinlock, another
entity (that owns the spinlock) has shut down the admin queue.
If it then attempts to use the queue, it will panic.
It adds a check for this condition on the receive side. This
matches an existing check on the send queue side.
Fixes: 8db9e2a1b2 ("i40e: base driver")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
XL710/X710 devices require FW version checks to properly handle
DCB configurations from the FW, while other devices (e.g. X722)
do not, so limit these checks to XL710/X710 only.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
In X722, NVM reads can't be done through the SRCTL registers; they
require AQ calls, which require grabbing the NVM lock.
Unfortunately some paths need to acquire the lock once,
do a whole bunch of work, and then release it.
This patch creates an unsafe version of the read calls, so
that they can be called from the paths that need bulk access.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Instead of doing the MAC check, use a flag that gets set per
MAC. This way there are fewer chances of user error and it
can enable multiple MACs with the capability in a single place
rather than cluttering the code with MAC checks.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
SW needs to acquire NVM ownership before issuing an AQ read
to the X722 NVM, otherwise it will get EBUSY from the firmware.
The ownership should also be released when done.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Fix compilation warnings in base code on some platforms.
Fixes: bd6651c2d2 ("i40e/base: use bit shift macros")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Use the ether API 'is_valid_assigned_ether_addr' to validate the
MAC address.
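Illustrative use of the API mentioned above (mac_addr is assumed to be
the address handed in by the caller):
    #include <rte_ether.h>
    /* Reject all-zero, multicast and broadcast MAC addresses. */
    if (!is_valid_assigned_ether_addr(&mac_addr))
        return -EINVAL;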
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Generate a MAC address for each VF during PF host
initialization.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
VLAN filtering was always performed, even if hw_vlan_filter was
disabled. During device initialization, default filter
RTE_MACVLAN_PERFECT_MATCH was applied. In this situation, all incoming
VLAN frames were dropped by the card (increase of the register RUPP - Rx
Unsupported Protocol).
In order to restore default behavior, if HW VLAN filtering is activated,
set a filter to match MAC and VLAN. If not, set a filter to only match
MAC.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: 912b595146 ("i40e: mac vlan filter")
Signed-off-by: Julien Meunier <julien.meunier@6wind.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The no-refcount path was being taken without the application opting
in to it.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Reported-by: Mike Stolarchuk <mike.stolarchuk@bigswitch.com>
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The multi queue mode ETH_MQ_RX_VMDQ_DCB_RSS is not supported in
ixgbe driver.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Call the ixgbevf_remove_mac_addr() function in the ixgbevf_dev_close()
function to ensure that the VF traffic goes to the PF after stop,
close and detach of the VF.
Fixes: af75078fec ("first public release")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add the nb_rx_q and nb_tx_q values to the error message
to give details about the error.
Fixes: 27b609cbd1 ("ethdev: move the multi-queue mode check to specific drivers")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Releasing the rx and tx queues is already done in ixgbe_dev_close()
so it does not need to be done in eth_ixgbevf_dev_uninit().
Fixes: 2866c5f1b8 ("ixgbe: support port hotplug")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When a secondary DPDK process initializes ixgbevf, it always
uses the simple RX function or the LRO RX function; this is not
the same RX/TX function selection logic as for the primary process.
Use ixgbe_set_tx_function and ixgbe_set_rx_function to select the
RX/TX functions when a secondary process calls the eth dev init function.
Fixes: 9d8a92628f ("ixgbe: remove simple scalar scattered Rx method")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Normally auto-negotiation is supported by the FW and SW need not care
about it. But on x550em_x, the FW doesn't support auto-neg. As the
x550em_x ports are 10G, if we connect the port with a peer which is 1G,
the link will always be down.
We need to support auto-neg in SW to avoid this link down issue. As we
already have the code to handle the link speed setting, all we need is a
trigger. When the advertised link speed changes, a PHY interrupt is
triggered. So we should handle this interrupt and call ixgbe_handle_lasi
to set the link speed correctly.
Please be aware this works when auto-neg is on. If auto-neg of the
peer port is turned off and its speed is set manually, we should also
set the speed of our own port manually.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Add multicast promiscuous mode support to the ixgbe VF driver.
Please note that to use this promiscuous mode, both the PF and VF
drivers need to support it, because this VF feature is configured
on the PF.
If using a kernel PF driver + DPDK VF driver, make sure the kernel PF
driver supports VF multicast promiscuous mode. If using DPDK PF + DPDK
VF, it is best to make sure the PF driver is the same version as the VF.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
It's more valuable to abstract the link read/write interface. As such,
change the following method names, and add them to a new link info
structure:
read_i2c_combined => read_link
read_i2c_combined_unlocked => read_link_unlocked
write_i2c_combined => write_link
write_i2c_combined_unlocked => write_link_unlocked
This will allow X550EM_a to override these methods for MDIO access
while X550EM_x provides methods to use I2C combined access.
Initially the structure is just method pointers and a bus
address.
Two functions involved in combined I2C accesses were moved from
ixgbe_phy.c to ixgbe_x550.c. The underlying functions that carry
out the combined I2C accesses were left in ixgbe_phy.c because
they share some functions with other I2C methods.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The MDIO clock speed must be reconfigured after the MAC reset.
The MDIO clock speed becomes invalid, therefore the driver reads
invalid PHY register values. The driver now sets the MDIO clock
speed prior to initializing PHY ops and again after the MAC reset.
As now the MDIO speed gets set in more than one place, make a
function for it so it will always be done correctly.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Do not set FDIRCTRL.DROP_NO_MATCH in ixgbe_init_fdir_perfect_82599(),
this bit is already set in ixgbe_set_fdir_drop_queue_82599() which
makes more sense for drivers that call that function.
This resolves an issue where packets were being dropped when switching
to perfect filters mode.
Setting this bit makes no sense in perfect filters mode for the
driver as we do not want to route all packets that don't match an FDIR
rule to a single queue and instead fall back to RSS.
Drivers that need this bit set can call ixgbe_set_fdir_drop_queue_82599(),
and the ones that don't can preserve the old behavior.
Fixes: 2241ce2816 ("ixgbe/base: add flow director drop queue")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The X550EM_a device provides the MAC_SGMII_BUSY register to
indicate when slow SGMII register writes complete. Add
definitions for the register. No definitions are provided for
the individual bits under the theory that it is better to wait
for everything to complete when needed rather than try to map
out which reads need to wait for which writes. So we should wait
when anything is marked as "busy".
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Instead of not defining the callback for set_phy_power when
manageability is enabled, put the check in the set_phy_power
function so that only turning the power off is conditional on
management, but not turning the PHY on.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch resolves an issue where VF mac address is zeroed out
in cases where the VF driver is loaded while the PF interface
is down.
The solution is to only set it when we get an ACK from the PF.
Fixes: 6202266e56 ("ixgbe/base: vf changes")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Only x550em_x V1 was supported before. Now V2 is supported.
A mask for V1 and V2 is defined and used to support both.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Add new X550EM_a devices and their mac types, X550EM_a
and X550EM_a_vf.
Update the code to use the new devices and mac types.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Currently, the ixgbe vf and pf disable the interrupt twice, in the
stop stage and in the uninit stage. This causes an error:
testpmd> quit
Shutting down port 0...
Stopping ports...
Done
Closing ports...
EAL: Error disabling MSI-X interrupts for fd 26
Done
because the interrupt has already been disabled in the stop stage.
Since it is enabled in the init stage, it is better to remove the
disable from the stop stage.
Fixes: 0eb609239e ("ixgbe: enable Rx queue interrupts for PF and VF")
Signed-off-by: Michael Qiu <michael.qiu@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The freeing of mbufs in ixgbe is one of the observable hot spots
under load. Optimize it by doing bulk free of mbufs using code similar
to i40e and fm10k.
Drop the no longer needed micro-optimization for the no refcount flag.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This brings the DPDK igb driver in line with the behavior used by
the current Linux driver. The IGB hardware has several different
MAC types and the threshold values that work vary based on the hardware.
Since DPDK 1.8 it has been up to devices to provide the correct default
configuration parameters. But the igb driver gives values that are broken
on some devices and always cause a warning message at startup.
Please test this on real hardware; I don't have the luxury of a
hardware lab full of variations of this chip.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Allow reprogramming of the RAR with a zero mac address,
to ensure that the VF traffic goes to the PF after
stop, close and detach of the VF.
Fixes: be2d648a2d ("igb: add PF support")
Fixes: d82170d279 ("igb: add VF support")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Enable promiscuous and allmulticast mode control from the VF using
rte_eth_promiscuous_enable()/rte_eth_promiscuous_disable() and
rte_eth_allmulticast_enable()/rte_eth_allmulticast_disable().
For promiscuous mode, the host/PF igb driver should be built with
IGB_ENABLE_VF_PROMISC.
For allmulticast mode, the "allmulti" flag should be set on the
appropriate PF, e.g.:
ifconfig eth0 allmulti
Signed-off-by: Yury Kylulin <yury.kylulin@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Modified driver and eal code to support I217 and I218 Intel NICs.
Compiled and tested (via testpmd) on Ubuntu 14.04 for target
x86_64-native-linuxapp-gcc
Compiled for target x86_64-native-linuxapp-clang
Signed-off-by: Ravi Kerur <rkerur@gmail.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The last packet of the tx burst was not being
emitted until the subsequent call. The nic descriptor index
was being set to the current tx descriptor instead of one past
the descriptor as required by the nic.
Fixes: d739ba4c6a ("enic: improve Tx packet rate")
Signed-off-by: John Daley <johndale@cisco.com>
This is a wholesale replacement of the Enic PMD receive path in order
to improve performance and code clarity. The changes are:
- Simplify and reduce code path length of receive function.
- Put most of the fast-path receive functions in one file.
- Reduce the number of posted_index updates (pay attention to
rx_free_thresh)
- Remove the unneeded container structure around the RQ mbuf ring
- Prefetch next Mbuf and descriptors while processing the current one
- Use a lookup table for converting CQ flags to mbuf flags.
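A conceptual sketch of such a flag lookup table (the table contents here
are illustrative, not the actual enic mapping):
    /* Translate two hardware error bits into mbuf offload flags with a
     * single indexed load instead of a chain of conditionals. */
    static const uint64_t cq_to_mbuf_flags[4] = {
        0,
        PKT_RX_IP_CKSUM_BAD,
        PKT_RX_L4_CKSUM_BAD,
        PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
    };
    mb->ol_flags = cq_to_mbuf_flags[cq_err_bits & 0x3];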
Signed-off-by: John Daley <johndale@cisco.com>
The enic PMD driver send function uses a constant offset instead
of relying on the data_off in the mbuf to find the start of the packet.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Yoann Desmouceaux <ydesmouc@cisco.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Chelsio NIC ports share a single PF. Move rte_eth_copy_pci_info()
to copy the pci device information to the remaining ports as well.
Also update license year to 2016.
Fixes: eeefe73f0a ("drivers: copy PCI device info to ethdev data")
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
max_rx_pkt_len already includes ETHER_HDR_LEN and ETHER_CRC_LEN for the
mtu. But, the firmware also adds ETHER_HDR_LEN and ETHER_CRC_LEN to the
mtu specified. Fix by subtracting these values from the mtu before
passing it to firmware.
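A minimal sketch of the adjustment (not the literal cxgbe code):
    /* The firmware adds Ethernet header and CRC overhead itself, so
     * strip it from the value derived from max_rx_pkt_len. */
    new_mtu = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
              (ETHER_HDR_LEN + ETHER_CRC_LEN);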
Fixes: 4b2eff452d ("cxgbe: enable jumbo frames")
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
The size of each entry in the port's rss table is actually 2 bytes
and not 1 byte. A segfault occurs when accessing part of port 0's rss
table because it gets overwritten by subsequent port 1's part of the
rss table. Fix by setting the size of each entry appropriately.
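A minimal sketch of the sizing fix (variable names are illustrative):
    #include <rte_malloc.h>
    /* Each entry in the port's RSS indirection table is 16 bits wide,
     * so allocate accordingly instead of one byte per entry. */
    rss_table = rte_zmalloc(NULL, rss_size * sizeof(uint16_t),
                            RTE_CACHE_LINE_SIZE);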
Fixes: 92c8a63223 ("cxgbe: add device configuration and Rx support")
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
The VF needs to determine the queue sizes before .dev_infos_get
so that it can hint the proper sizes to the upper layer. Move
bnx2x_vf_get_resources() to .eth_dev_init and probe with the guesses
from bnx2x_init_rte().
Signed-off-by: Chas Williams <3chas3@gmail.com>
Acked-by: Rasesh Mody <rasesh.mody@qlogic.com>
bnx2x_loop_obtain_resources() returns a struct containing the status and
the error message. If bnx2x_do_req4pf() fails, it shouldn't return both
of these fields set to 0 indicating failure and no error.
Further, bnx2x_do_req4pf() needs to be able to fail and return NO_RESOURCES
so that bnx2x_loop_obtain_resources() can negotiate reduced resource
requirements. This requires additional checking around bnx2x_do_req4pf().
Fixes: 540a211084 ("bnx2x: driver core")
Signed-off-by: Chas Williams <3chas3@gmail.com>
Acked-by: Rasesh Mody <rasesh.mody@qlogic.com>
The mbuf_alloc_size is leftover from BSD or some other code base.
It is set but never used in the DPDK driver. With it gone, the related
defines can also be eliminated.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Harish Patil <harish.patil@qlogic.com>
Fixes the hang/crash issue when quitting testpmd under a high
traffic rate. The following issues were found and fixed.
1. edesc->size is not initialized properly in mpipe_do_xmit() and could
cause buffer leak or corruption when HW buffer return is used.
2. Check the 'idesc.be' error bit in mpipe_recv_flush() to make sure
buffer is valid before releasing it. This is to avoid issues when
running out of buffers.
3. priv->rx_buffers counter is not accurate when HW buffer return is
used. Remove this counter to simplify the code.
Signed-off-by: Liming Sun <lsun@ezchip.com>
Acked-by: Zhigang Lu <zlu@ezchip.com>
The mpipe link structure is initialized in the function mpipe_link_init().
Currently it is only called from eth_dev_ops.dev_start, which
caused crashes when link mgmt APIs (like promiscuous_enable)
were called before eth_dev_ops.dev_start(). This submit fixes this
by calling mpipe_link_init() in rte_pmd_mpipe_devinit().
Fixes: a8dd50513d ("mpipe: add TILE-Gx mPIPE poll mode driver")
Signed-off-by: Liming Sun <lsun@ezchip.com>
Acked-by: Zhigang Lu <zlu@ezchip.com>
This submit optimizes the mpipe buffer return. When a packet is
received, instead of allocating and refilling the buffer stack right
away, it tracks the number of pending buffers and uses HW buffer return
as an optimization when the pending number is below a certain threshold,
thus saving two MMIO writes and improving performance, especially for
the bidirectional traffic case.
Signed-off-by: Liming Sun <lsun@ezchip.com>
Acked-by: Zhigang Lu <zlu@ezchip.com>
Declare dst as type uint32_t instead of uint64_t, otherwise we will get
random upper 32 feature bits, as the following io port read only reads
the lower 32 bits. It could lead to feature bits that include
VIRTIO_F_VERSION_1 (the 32nd bit) for legacy virtio, which is obviously
wrong.
Fixes: b8f04520ad ("virtio: use PCI ioport API")
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
Reviewed-by: David Marchand <david.marchand@6wind.com>
Change the outer_mac and inner_mac fields in struct
rte_eth_tunnel_filter_conf from pointers to structs in order to
keep the code readable.
Signed-off-by: Xutao Sun <xutao.sun@intel.com>
Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch adds VxLAN & NVGRE TX checksum off-load. When the
outer IP header checksum offload flag is set, we set the context
descriptor to enable this checksum off-load.
Also update the release notes for VxLAN & NVGRE checksum off-load support.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
X550 will do VxLAN & NVGRE RX checksum off-load automatically.
This patch exposes the result of the checksum off-load.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add UDP tunnel port add/del support on ixgbe. Currently only
VxLAN port configuration is supported.
Although according to the specification the VxLAN port has
a default value of 4789, it can be changed. We support VxLAN
port configuration to accommodate such a change.
Note that the default value of the VxLAN port in ixgbe NICs is 0, so
please set it when using VxLAN off-load.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The names of the functions for tunnel port configuration are not
accurate. They are tunnel_add/del; better to rename them to
tunnel_port_add/del.
The old functions are directly replaced because the API and ABI
compatibility of ethdev is already broken in 16.04.
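For illustration, a minimal usage sketch with the renamed API (struct
and enum names follow the ethdev headers of this release):

    #include <rte_ethdev.h>

    /* Register the standard VxLAN UDP port on the given port. */
    static int
    add_vxlan_port(uint8_t port_id)
    {
        struct rte_eth_udp_tunnel tunnel = {
            .udp_port  = 4789,                  /* IANA-assigned VxLAN port */
            .prot_type = RTE_TUNNEL_TYPE_VXLAN,
        };

        return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
    }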
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add support of l2 tunnel configuration and operations.
1. Support modifying the ether type of a type of l2 tunnel.
2. Support enabling and disabling the support of a type of l2 tunnel.
3. Support enabling/disabling l2 tunnel tag insertion/stripping.
4. Support enabling/disabling l2 tunnel packet forwarding.
5. Support adding/deleting forwarding rules for l2 tunnel packets.
Only E-tag is supported now.
Also update the release note.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
On X550, as required by the datasheet, E-tag packets are not expected
when double VLAN is used. So modify the register PFVTCTL after enabling
double VLAN to select the pool by MAC only, not MAC or E-tag.
An introduction to E-tag:
It is defined in IEEE 802.1BR. Please refer to
http://www.ieee802.org/1/pages/802.1br.html.
A brief description:
E-tag means external tag, and it is a kind of l2 tunnel: a tag is
inserted in the l2 header, like below,
|31 24|23 16|15 8|7 0|
0| Destination MAC address |
4| Dest MAC address(cont.) | Src MAC address |
8| Source MAC address(cont.) |
12| E-tag Ethernet type (0x893f) | E-tag header |
16| E-tag header(cont.) |
20| VLAN Ethertype(optional) | VLAN header(optional) |
24| Original type | ...... |
...| ...... |
The E-tag format is like below,
|0 15|16 18|19 |20 31|
| Ethertype - 0x893f | E-PCP |DEI| Ingress E-CID_base |
|32 33|34 35|36 47|48 55 |56 63|
| RSV | GRP |E-CID_base|Ingress_E-CID_ext| E-CID_ext |
The Ingress_E-CID_ext and E-CID_ext are always zero for endpoints
and are effectively reserved.
More details of E-tag are in IEEE 802.1BR, which replaces 802.1Qbh.
802.1BR is a standard for Bridge Port Extension. It specifies the
operation of Bridge Port Extenders, including management, protocols,
and algorithms. Bridge Port Extenders operate in support of the MAC
Service by Extended Bridges.
The E-tag is added to the l2 header to identify the VM channel and
the virtual port.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
The array 'ptype_table' was defined with a depth of 'UINT8_MAX', which
is 255, while the query index can range from 0 to 255. The issue is
fixed by expanding the array by one element.
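A before/after sketch of the off-by-one (the real table carries an
initializer; the declaration here is illustrative only):

    /* before: UINT8_MAX (255) entries, so index 255 reads past the end */
    static uint32_t ptype_table[UINT8_MAX];
    /* after: one extra element covers the full 0..255 index range */
    static uint32_t ptype_table[UINT8_MAX + 1];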
Fixes: 9571ea0284 ("i40e: replace some offload flags with unified packet type")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
In order to set the VLAN ether type for single VLAN, inner VLAN and
outer VLAN, a VLAN type input parameter is added to
'rte_eth_dev_set_vlan_ether_type()'.
In addition, corresponding changes in e1000, ixgbe and i40e are also
added.
It is an ABI break, but the ethdev library is already bumped for 16.04.
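A usage sketch of the extended API (enum value names follow the ethdev
header of this release and are assumptions here):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Set the outer VLAN (S-tag) ether type to 0x88A8 on a port. */
    static void
    set_outer_vlan_type(uint8_t port_id)
    {
        int ret = rte_eth_dev_set_vlan_ether_type(port_id,
                                                  ETH_VLAN_TYPE_OUTER, 0x88A8);
        if (ret != 0)
            printf("failed to set outer VLAN ether type: %d\n", ret);
    }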
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Parse the device parameters from rte_eal_vdev_init instead of the
config file, so the user can change the parameters at runtime.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch adds a mechanism for discovery of crypto device features and
supported crypto operations and algorithms. It also provides a method
for a crypto PMD to publish any data range limitations it may have for
the operations and algorithms it supports.
The feature_flags parameter added to the rte_cryptodev struct is used to
capture features such as the operations supported (symmetric crypto,
operation chaining, etc.) as well as properties such as whether the
device is hardware accelerated or uses SIMD instructions.
The capabilities parameter allows a PMD to define an array of supported
operations with any limitations which that implementation may have.
Finally, the rte_cryptodev_info struct has been extended to allow
retrieval of these parameters using the existing
rte_cryptodev_info_get() API.
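As a usage sketch (flag names follow the cryptodev header of this
release):

    #include <stdio.h>
    #include <rte_cryptodev.h>

    static void
    print_crypto_features(uint8_t dev_id)
    {
        struct rte_cryptodev_info info;

        rte_cryptodev_info_get(dev_id, &info);

        if (info.feature_flags & RTE_CRYPTODEV_FF_HW_ACCELERATED)
            printf("dev %u: hardware accelerated\n", dev_id);
        if (info.feature_flags & RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING)
            printf("dev %u: supports symmetric operation chaining\n", dev_id);
    }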
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
If the experimental CONFIG_RTE_LIBRTE_CRYPTODEV is disabled, the build
of any crypto PMD will fail because of the missing dependency.
This has been present for a while now but hidden until the addition of
null_crypto, since all the other crypto PMDs have been disabled by
default.
Conditionalize the entire drivers/crypto directory on
CONFIG_RTE_LIBRTE_CRYPTODEV to fix this.
Fixes: 1703e94ac5 ("qat: add driver for QuickAssist devices")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
This patch provides the implementation of a NULL crypto PMD, which
supports NULL cipher and NULL authentication operations, which can be
chained together as follows:
- Authentication Only
- Cipher Only
- Authentication then Cipher
- Cipher then Authentication
As this is a NULL operation device, the crypto operations submitted for
processing are not actually modified and are stored in the queue pair's
processed-packets ring, ready for collection when
rte_cryptodev_dequeue_burst() is called.
The patch also contains the related unit test functions to test the
PMD's supported operations.
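For illustration, a "Cipher then Authentication" chain with the NULL
algorithms might be built like this (struct and enum names follow the
cryptodev symmetric API of this release):

    #include <rte_crypto.h>

    static struct rte_crypto_sym_xform auth_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_AUTH,
        .next = NULL,
        .auth = {
            .op = RTE_CRYPTO_AUTH_OP_GENERATE,
            .algo = RTE_CRYPTO_AUTH_NULL,
        },
    };

    static struct rte_crypto_sym_xform cipher_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
        .next = &auth_xform,    /* cipher first, then authentication */
        .cipher = {
            .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
            .algo = RTE_CRYPTO_CIPHER_NULL,
        },
    };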
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
AES GCM on the cryptodev API was giving invalid results in some cases,
due to an incorrect IV setting.
Added AES GCM to the QAT supported algorithms, as encryption/decryption
is fully functional.
Fixes: 1703e94ac5 ("qat: add driver for QuickAssist devices")
Signed-off-by: John Griffin <john.griffin@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Remove support for AES GMAC, which was added to the code in error.
AES GMAC will be added in a subsequent release when testing completes.
Fixes: 1703e94ac5 ("qat: add driver for QuickAssist devices")
Signed-off-by: John Griffin <john.griffin@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch provides the implementation of an AES-NI accelerated crypto
PMD which depends on Intel's multi-buffer library; see the white paper
"Fast Multi-buffer IPsec Implementations on Intel® Architecture
Processors".
This PMD supports AES_GCM authenticated encryption and authenticated
decryption using 128-bit AES keys.
The patch also contains the related unit test functions.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
Wireless algorithms like SNOW 3G need input in bits.
In this patch, changes have been made to incorporate this requirement
in both the QAT and SW PMDs.
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added a new SW PMD which makes use of the libsso SW library,
which provides wireless algorithms SNOW 3G UEA2 and UIA2
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_SNOW3G_UEA2
- RTE_CRYPTO_SYM_AUTH_SNOW3G_UIA2
The SNOW 3G hash and cipher algorithms, which are enabled
by this crypto PMD are implemented by Intel's libsso software
library. For library download and build instructions,
see the documentation included (doc/guides/cryptodevs/snow3g.rst)
The patch also contains the related unit test functions to test the
PMD's supported operations.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Refactored the existing functionality into a modular form to support
the cipher-only and auth-only functionalities.
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
As the cryptodev library no longer depends on the mbuf_offload
library, this patch removes it.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
This patch modifies the crypto burst enqueue/dequeue APIs to operate on
bursts of rte_crypto_op's rather than on rte_mbuf bursts as in the
current implementation. This simplifies the burst processing in the
crypto PMDs and the use of crypto operations in general, and includes
new functions for managing rte_crypto_op pools.
These changes continue the separation of the symmetric operation
parameters from the more general operation parameters, which will
simplify the integration of asymmetric crypto operations in the future.
PMDs, unit tests and sample applications are also modified to work with
the modified and new API.
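A minimal sketch of the new burst path (pool sizing and names here are
illustrative):

    #include <rte_crypto.h>
    #include <rte_cryptodev.h>
    #include <rte_lcore.h>

    #define BURST_SIZE 32

    static void
    crypto_burst_sketch(uint8_t dev_id, uint16_t qp_id)
    {
        struct rte_mempool *op_pool;
        struct rte_crypto_op *ops[BURST_SIZE];
        uint16_t nb_enq, nb_deq;

        /* Operations now come from a dedicated rte_crypto_op pool. */
        op_pool = rte_crypto_op_pool_create("crypto_op_pool",
                RTE_CRYPTO_OP_TYPE_SYMMETRIC, 8192, 128, 0, rte_socket_id());
        if (op_pool == NULL)
            return;

        if (rte_crypto_op_bulk_alloc(op_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC,
                ops, BURST_SIZE) == 0)
            return;

        /* ... attach sessions and source mbufs to each op here ... */

        nb_enq = rte_cryptodev_enqueue_burst(dev_id, qp_id, ops, BURST_SIZE);
        nb_deq = rte_cryptodev_dequeue_burst(dev_id, qp_id, ops, nb_enq);
        (void)nb_deq;
    }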
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
This patch splits symmetric-specific definitions and functions away
from the common crypto APIs to facilitate the future extension and
expansion of the cryptodev framework, in order to allow asymmetric
crypto operations to be introduced at a later date, as well as to clean
up the logical structure of the public includes. The patch also
introduces the _sym prefix to symmetric-specific structures and
functions to improve clarity in the API.
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
- Fixed >80char lines in test file
- Removed unused elements from stats struct
- Removed unused objects in rte_cryptodev_pmd.h
- Renamed variables
- Replaced leading spaces with tabs
- Improved performance results display in test
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The virtio PMD can use IO ports to configure the virtio device without
using a UIO/VFIO driver in legacy mode.
There are two issues with the previous implementation:
1) the virtio PMD will take over the virtio device(s) blindly even if
they are not intended for DPDK.
2) driver conflict between the virtio PMD and the virtio-net kernel
driver.
This patch checks if there is a kernel driver other than UIO/VFIO
managing the virtio device before using port IO.
If legacy_virtio_resource_init fails and a kernel driver other than
VFIO/UIO is managing the device, return 1 to tell the upper layer we do
not take over this device.
For all other IO port mapping errors, return -1.
Note that if VFIO/UIO fails, we now do not fall back to port IO.
Fixes: da978dfdc4 ("virtio: use port IO to get PCI resource")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
The PCIe 'Extended Tag' feature is important for 40G performance.
This patch enables it during each port initialization to ensure high
performance.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Fixed a byte order issue in the ethdev library: the structures for
setting the flow director's mask and flow entry were inconsistent.
Made the mask inputs big endian.
Fixes: 2d4c1a9ea2 ("ethdev: add new flow director masks")
Fixes: 76c6f89e80 ("ixgbe: support new flow director masks")
Reported-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The macros RTE_MBUF_DATA_DMA_ADDR and RTE_MBUF_DATA_DMA_ADDR_DEFAULT
are defined in each PMD driver file. Convert the macros to inline
functions and move them to the common lib/librte_mbuf/rte_mbuf.h file.
PMD drivers include the rte_mbuf.h file directly or indirectly, hence
no additional header file inclusion is necessary.
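The resulting helpers in rte_mbuf.h likely look like this (a sketch;
function names and mbuf field names follow the layout of this period):

    static inline phys_addr_t
    rte_mbuf_data_dma_addr(const struct rte_mbuf *mb)
    {
        /* physical address of the first byte of packet data */
        return mb->buf_physaddr + mb->data_off;
    }

    static inline phys_addr_t
    rte_mbuf_data_dma_addr_default(const struct rte_mbuf *mb)
    {
        /* physical address of the default data start (fixed headroom) */
        return mb->buf_physaddr + RTE_PKTMBUF_HEADROOM;
    }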
Signed-off-by: Ravi Kerur <rkerur@gmail.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
ConnectX-4 NICs can handle at most 512 entries in RETA table.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The physically linked-together combined library has been an increasing
source of problems, as was predicted when library and symbol versioning
was introduced. Replace the complex and fragile construction with a
simple linker script which achieves the same without all the problems,
and remove the related kludges from e.g. the mlx drivers.
Since creating the linker script is practically zero cost, remove the
config option and just create it always.
Based on a patch by Sergio Gonzalez Monroy; the linker script approach
was initially suggested by Neil Horman.
Suggested-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Suggested-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Fix the build on 32-bit systems for the QuickAssist driver, for
example:
drivers/crypto/qat/qat_crypto.c: In function ‘qat_alg_write_mbuf_entry’:
drivers/crypto/qat/qat_crypto.c:408:34: error:
cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
Fixes: 1703e94ac5 ("qat: add driver for QuickAssist devices")
Signed-off-by: John Griffin <john.griffin@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
When compiling the AESNI_MB PMD with GCC 4.4.7 on Centos 6.7 a "dereferencing
pointer ‘obj_p’ does break strict-aliasing rules" warning occurs in the
get_session() function.
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
cryptodev_aesni_mb_init was returning the device id of the device just
created, but rte_eal_vdev_init (the function that calls the former) was
expecting 0 or a negative value.
This made it impossible to create more than one aesni_mb device from
the command line.
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The variable AESNI_MULTI_BUFFER_LIB_PATH is not required for
'make clean'.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Move all OS/arch specifics to EAL.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Reviewed-by: Santosh Shukla <sshukla@mvista.com>
Tested-by: Santosh Shukla <sshukla@mvista.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
According to the API, rte_eal_pci_map_device is only successful when
returning 0.
Fixes: 6ba1f63b5a ("virtio: support specification 1.0")
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Fixes: c52afa68d7 ("virtio: move left PCI stuff in the right file")
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The file rte_config.h is automatically generated and included. There
is no need to #include it.
The performance-thread example needs a makefile fix to avoid
overwriting the default cflags.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
As discussed on the list, switch the numbering scheme to be based on
year/month. Release 2.3 then becomes 16.04.
Ref: http://dpdk.org/ml/archives/dev/2015-December/030336.html
Also, added zero padding to the month so that it appears as 16.04 and
not 16.4 in "make showversion" and rte_version().
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Fix the error reported by checkpatch:
"ERROR: return is not a function, parentheses are not required"
Remove the parentheses in returns like:
"return (logical expressions)"
and in returning a function result like:
"return (rte_mempool_lookup(...))"
Fixes: 6307b909b8 ("lib: remove extra parenthesis after return")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
A modern (v1.0) virtio pci device defines several pci capabilities.
Each capability has a configuration structure corresponding to it, and
the cap.bar and cap.offset fields tell us where to find it.
First, we map the pci resources with rte_eal_pci_map_device().
We can then easily locate a cfg structure by:
cfg_addr = dev->mem_resource[cap.bar].addr + cap.offset;
Therefore, the entry point for enabling modern (v1.0) pci device support
is to iterate the pci capability list and locate the configs we care
about; they are:
- common cfg
For generic virtio and virtqueue configuration, such as setting/getting
features, enabling a specific queue, and so on.
- notify cfg
Combined with `queue_notify_off' from the common cfg, we can use it to
notify a specific virt queue.
- device cfg
Where the virtio_net_config structure is located.
- isr cfg
Where to read the isr (interrupt status).
If any of the above caps is not found, we fall back to the legacy virtio
handling.
If they are all found, hw->vtpci_ops is assigned to modern_ops, where
all operations are implemented by reading/writing one (or a few)
specific configuration structures from the above 4 cfg structures. And
that is basically how this patch works.
Besides those changes, virtio 1.0 introduces a new status field:
FEATURES_OK, which is set after feature negotiation is done.
Last, set the VIRTIO_F_VERSION_1 feature flag.
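A compact sketch of the lookup described above (the capability layout
follows the virtio 1.0 spec; the DPDK structures are those of this
period):

    #include <stdint.h>
    #include <rte_pci.h>

    struct virtio_pci_cap {
        uint8_t  cap_vndr;   /* generic PCI capability id */
        uint8_t  cap_next;   /* offset of the next capability */
        uint8_t  cap_len;
        uint8_t  cfg_type;   /* common/notify/device/isr cfg */
        uint8_t  bar;        /* BAR holding the structure */
        uint8_t  padding[3];
        uint32_t offset;     /* offset within that BAR */
        uint32_t length;
    };

    static void *
    get_cfg_addr(struct rte_pci_device *dev, const struct virtio_pci_cap *cap)
    {
        /* cfg_addr = BAR base + capability offset, as outlined above */
        return (uint8_t *)dev->mem_resource[cap->bar].addr + cap->offset;
    }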
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
Reviewed-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Tested-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Huawei Xie <huawei.xie@intel.com>
The mergeable virtio net hdr format has been the standard and the only
virtio net hdr format since virtio 1.0. Therefore, we cannot hardcode
hdr_size to "sizeof(struct virtio_net_hdr)" any more at
virtio_recv_pkts(); otherwise there would be a mismatch of hdr size
between rte_vhost_enqueue_burst() and virtio_recv_pkts(), leading to
packet corruption.
Instead, we should retrieve it from hw->vtnet_hdr_size; we will do the
proper settings at eth_virtio_dev_init() in later patches.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
Reviewed-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Tested-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Switch to 64-bit features, which virtio 1.0 supports.
While legacy virtio only supports 32-bit features, it complains aloud
and quits when trying to set a feature beyond 32 bits.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
Reviewed-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Tested-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Huawei Xie <huawei.xie@intel.com>
virtio_pci.c is a more proper place for pci stuff; virtio_ethdev is not.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
Reviewed-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Tested-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Introduce struct virtio_pci_ops, to let legacy virtio (v0.95) and
modern virtio (1.0) have different implementations of a specific pci
action, such as reading the host status.
With that, this patch reimplements all exported pci functions in a way
like:
    vtpci_foo_bar(struct virtio_hw *hw)
    {
        hw->vtpci_ops->foo_bar(hw);
    }
so that we only need to pay attention to those pci related functions
while adding virtio 1.0 support.
This patch introduces a new vtpci function, vtpci_init(), to do the
proper virtio pci settings. It is pretty simple so far: it just sets
hw->vtpci_ops to legacy_ops, as we do not support 1.0 yet.
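The ops table then has roughly this shape (an abridged, illustrative
sketch):

    #include <stddef.h>
    #include <stdint.h>

    struct virtio_hw;

    struct virtio_pci_ops {
        void    (*read_dev_cfg)(struct virtio_hw *hw, size_t offset,
                                void *dst, int len);
        void    (*write_dev_cfg)(struct virtio_hw *hw, size_t offset,
                                 const void *src, int len);
        uint8_t (*get_status)(struct virtio_hw *hw);
        void    (*set_status)(struct virtio_hw *hw, uint8_t status);
        /* ... feature negotiation, queue setup/teardown, notify hooks ... */
    };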
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
Reviewed-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Tested-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Huawei Xie <huawei.xie@intel.com>
The offset arg of vtpci_read/write_dev_config is derived from
offsetof(), which is of type size_t, not uint64_t. So, define it as
size_t.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
Reviewed-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Tested-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Huawei Xie <huawei.xie@intel.com>
We have already set it up at virtio_dev_queue_setup(), and a vq
restart will not reset the settings.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Qian Xu <qian.q.xu@intel.com>
Reviewed-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Tested-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Huawei Xie <huawei.xie@intel.com>
The x550 MDIO clock speed must be configured prior to first MDIO read or
write. The default MDIO clock speed is not valid, therefore the driver
is configuring a valid speed prior to reading the copper PHY device id.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch removes the KR PHY reset from ixgbe_init_phy_ops_X550em,
since this function is meant to initialize function pointers for the
detected PHY type. The internal PHY reset was moved to
ixgbe_setup_internal_phy_t_x550em, which now detects which mode the
internal PHY works in and sets it up as required.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
KR auto-neg mode is what we will be using going forward. The SW
interface for this mode is different than what was used for iXFI.
While debugging, it was determined that the ucode diagnostic was
no longer needed. This code has been removed to simplify the init
flow.
A subtle semaphore error in the CS4227 reset flow was fixed.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Avoid a needless PHY access on copper phys to save the 10ms wait
time for each PHY access. A helper function is introduced to
actually do the register access and process the contents.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch changes code to use registers offsets stored in mvals table
instead of values defined statically.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch adds ixgbe_set_fdir_drop_queue_82599 for enabling and
setting the flow director drop queue, and sets 'drop no match' in
ixgbe_init_fdir_perfect_82599 for x550.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Waiting for FDIRCMD completion is an expensive thing to do in the
transmit hot path. This wait was added to catch problems with perfect
filter rules, and, at least in the Linux driver, there is no error
check anyway, so there is no point in adding the delay. Do not wait for
completion. Change the return of the function to void, since it has
no meaningful return value.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch adds the flow control ethertype to the defines for the
ETQF filter list. This only adds the define. Each driver
can add this ethertype to the filter. This is needed to prevent
denial of service by malicious VFs sending out flow control
packets.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Currently credit_refill and credit_max could be zero for a TC, which
causes a Tx hang for the CEE mode configuration. To fix that, assign at
least the minimum credit to a TC, as IEEE mode already does.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Coverity issue reported like
CID 119268 (#1 of 1): Unintended sign extension
(SIGN_EXTENSION)sign_extension: Suspicious implicit sign extension:
vsi_id with type unsigned short (16 bits, unsigned) is promoted in
vsi_id << 23 to type int (32 bits, signed), then sign-extended to type
unsigned long (64 bits, unsigned). If vsi_id << 23 is greater than
0x7FFFFFFF, the upper bits of the result will all be 1.
Fixes: 88ebc2b7f9 ("i40e: extend flow director to support VF")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
In the FreeBSD driver, the max frame size is changed to the MTU
instead of keeping the default value defined in the datasheet. When
DPDK runs on such a NIC, the configured value is not as expected.
This patch sets the max frame size to the default at initialization.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This counter was left unmodified. Restore it in ixgbe_dev_stats_get.
The ierrors counter still includes imissed for ixgbe. This behaviour is
not consistent amongst all drivers. Another patch may be needed to unify
the meaning of the ierrors counter.
Fixes: 5e50ad1c1b ("ixgbe: add specific stats")
Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Fragmented IPv4 packets have no TCP/UDP headers, so we hashed random
data, introducing reordering of the fragments.
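A minimal illustration of the idea (not the bonding driver's exact
code): fold the L4 ports into the hash only when the packet is not a
fragment.

    #include <rte_ip.h>
    #include <rte_udp.h>
    #include <rte_byteorder.h>

    static inline uint32_t
    l3l4_hash(const struct ipv4_hdr *ip, const struct udp_hdr *udp)
    {
        /* fragment offset bits plus the MF flag: non-zero means fragment */
        const uint16_t frag_bits = rte_cpu_to_be_16(0x3fff);
        uint32_t hash = ip->src_addr ^ ip->dst_addr;

        if ((ip->fragment_offset & frag_bits) == 0)
            hash ^= udp->src_port ^ udp->dst_port; /* L4 header is present */

        return hash;
    }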
Signed-off-by: Andriy Berestovskyy <aber@semihalf.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The following messages might appear after some idle time:
"PMD: Failed to allocate LACP packet from pool"
The fix ensures the mempool size is greater than the total number of
TX descriptors.
Signed-off-by: Andriy Berestovskyy <aber@semihalf.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Add the BNX2X PMD version and print it as part of the adapter info.
Adjusted the adapter info output formatting.
This patch versions the BNX2X PMD at 1.0.0.
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
The periodic debug option is used to collect periodic
events like statistics, register access etc and won't
interfere with user-level messages.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Fix for the following clang build error:
drivers/net/bnx2x/elink.c:10384:41: error: shifting a
negative signed value is undefined [-Werror,-Wshift-negative-value]
vars->eee_status &= ~SHMEM_EEE_1G_ADV <<
~~~~~~~~~~~~~~~~~ ^
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
SR-IOV is supported using the bnx2x poll mode driver running as the VF
driver and the native Linux driver running as the PF (in the
host/hypervisor). There is no issue when running with a PF driver at
the same base version as the PMD. However, there is a compatibility
issue between newer out-of-box PF drivers and older VF drivers. So
newer VFs also need to send BNX2X_VF_TLV_PHYS_PORT_ID (among other
TLVs) to differentiate between newer and older VFs.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
If you stop and start the driver, the rx queue will have the previous
index values when programming the adapter. Therefore, we should always
reset the queue indices when the rx ring is set up. Note: we need to
clear (write) the status block's completion queue index since it is
possibly in a read cache.
Tidy some init code to make it clearer what the defaults are.
Signed-off-by: Chas Williams <3chas3@gmail.com>
When refilling the freelists for the first time, if it fails, the rxq
is freed and ENOMEM is returned. There is a check while freeing the
hardware rxq that passes the freelist context id if the freelist
exists, or 0xffff otherwise. The error path is only reached if the
freelist exists, so the fix is to remove the useless check for freelist
existence.
Coverity issue: 107108
Fixes: 92c8a63223 ("cxgbe: add device configuration and Rx support")
Reported-by: John McNamara <john.mcnamara@intel.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Acked-by: John McNamara <john.mcnamara@intel.com>