Since PKT_TX_TCP_SEG implies PKT_TX_TCP_CKSUM, the PMD must force this
flag.
The fix applies to both tunneled and non-tunneled packets.
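Conceptually, the forced flag looks like the sketch below, written against the
generic mbuf offload flags rather than the actual mlx5 Tx path (the helper name
is hypothetical):

#include <stdint.h>
#include <rte_mbuf.h>

/*
 * Sketch only: when TSO is requested (PKT_TX_TCP_SEG), the hardware must
 * also compute the TCP checksum, so force PKT_TX_TCP_CKSUM as well.
 */
static inline uint64_t
force_tcp_cksum_for_tso(uint64_t ol_flags)
{
	if (ol_flags & PKT_TX_TCP_SEG)
		ol_flags |= PKT_TX_TCP_CKSUM;
	return ol_flags;
}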
Fixes: 3f13f8c23a ("net/mlx5: support hardware TSO")
Fixes: b247f34601 ("net/mlx5: support hardware TSO for VXLAN and GRE")
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Multiple IES API resets can cause a race condition where the mailbox
interrupt request bits can be cleared before being handled. This can
leave certain mailbox messages from the PF untreated, and the PF
will enter an inactive state. If this situation occurs, the IES API
will initiate a mailbox version reset which then triggers a mailbox
state change. Once this mailbox transition occurs (from OPEN to CONNECT
state), a request for reset will be returned.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Other shared code bases are planning on using
IS_MULTICAST_ETHER_ADDR and friends without leaving the driver
name in the macro.
Remove reference to FM10K here so that we can re-use the specific
compat flags from Linux.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Avoid potential FUM fault errors on a VF when updating MAC address
and VLAN information. Only use the register flow when the mailbox is
disconnected, by checking if the enqueue_tx returns
FM10K_MBX_ERR_NO_MBX. If the mailbox message can be sent, there is no
reason to bother with the register writes which are only intended to
be used during VF driver initialization.
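A simplified sketch of that decision (the names below are placeholders, not
the actual base-code functions):

/* Placeholder for the mailbox-disconnected error code (FM10K_MBX_ERR_NO_MBX). */
#define MBX_ERR_NO_MBX (-1)

/* Stub: pretend to enqueue the MAC/VLAN message on the PF mailbox. */
static int queue_mac_vlan_msg(void) { return MBX_ERR_NO_MBX; }
/* Stub: legacy direct register writes, used only while the mailbox is down. */
static int write_mac_vlan_registers(void) { return 0; }

static int
update_mac_vlan(void)
{
	int err = queue_mac_vlan_msg();

	/* Fall back to register writes only when the mailbox is disconnected. */
	if (err == MBX_ERR_NO_MBX)
		return write_mac_vlan_registers();
	return err;
}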
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Don't report FM10K_ERR_REQUESTS_PENDING when we fail to disable queues
within the timeout. This can occur due to a hardware Tx hang, or when
the switch ethernet fabric is resetting while we are transmitting
traffic. It can sometimes take up to 500ms before the Tx DMA engine
gives up. Instead, just skip the DMA engine check and perform
a data-path reset anyway. Add a statistic counter to keep track of the
number of resets occurring while we have pending DMA on the rings.
In order to prevent having to assign err = FM10K_SUCCESS, re-order the
last few items of the reset_hw_pf function so that we don't perform
"return err" at the end.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
If the fm10k interface is brought up, but the switch manager software is
not running, the driver will continuously request the lport map every
few seconds in the base driver watchdog routine. Eventually after
several minutes the switch mailbox Tx FIFO will fill up and the mailbox
will time out, resulting in a reset. This reset will appear to occur for no
reason, and occurs regularly every few minutes until the switch manager
software is loaded.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
The VF uses a multi-bit update request to clear unused VLANs whenever it
resets. However, an accident in a previous refactor broke multi-bit
updates for VFs, due to misreading a comment in fm10k_vf.c and
attempting to reduce code duplication. The problem occurs because
a multi-bit request has a non-zero length, and the PF would simply drop
any request with the upper 16 bits set. In addition, a multi-bit VLAN
update does not have a concept of "VLAN 0" as the single-bit update
does.
A previous revision of this patch resolved the issue by simply removing
the upper 16 bit check and the iov_select_vid checks. However, this would
remove the checks for default VID and for ensuring no other VLANs can be
enabled except pf_vid when it has been set. To resolve that issue, this
revision uses the iov_select_vid when we have a single-bit update, and
denies any multi-bit update when the VLAN was administratively set by
the PF. This should be ok since the PF properly updates VLAN_TABLE when
it assigns the PF vid. This ensures that requests to add or "remove" the
PF vid work as expected, but a rogue VF could not use the multi-bit
update as a loophole to attempt receiving traffic on other VLANs.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
The original comment may be read incorrectly as referring to checking
that the *entire* length is zero. However, it only checks the reserved
bits of both the length and reserved fields, in a small amount of code.
Update the comment to indicate this is a clever trick and clearly spell
out that it only checks the reserved bits.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Use a new #define FM10K_VLAN_OVERRIDE even though we're using
the exact same bit. The reason for this is clarity in the code,
otherwise you can read FM10K_VLAN_CLEAR and think it should be
removed. Also add a comment explaining why the FM10K_VLAN_OVERRIDE
bit is set.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
The diagram represents the bit layout of the multi-bit VLAN update
message format. Re-draw the numbers using base 8, and mark the
bit values every 8 bits at the top. This should make the table
easier to grasp quickly.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Add FM10K_PF_ATTR_ID_ERR, since it is possible for the switch manager
to send out an error message indicating the status of the LPORT_MAP due to
zero allocated bandwidth.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Clean up the logic in fm10k_tlv_attr_parse; we
should not rely on FM10K_NOT_IMPLEMENTED being
greater than zero, as this can easily cause confusion.
The patch also corrects a minor documentation error.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Deleting the lport when the multicast mode is configured to
FM10K_XCAST_MODE_ALLMULTI or FM10K_XCAST_MODE_PROMISC will
result in generating orphaned multicast-group entries in the
switch manager.
Before deleting the lport, reset multicast mode to
FM10K_XCAST_MODE_NONE to flush out these multicast-group
entries.
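In other words, the teardown order becomes (hypothetical wrapper names,
standing in for the actual mailbox requests):

/* Hypothetical wrappers around the xcast-mode and lport mailbox requests. */
static void set_xcast_mode_none(void) { /* request FM10K_XCAST_MODE_NONE */ }
static void delete_lport(void)        { /* request logical port removal  */ }

static void
teardown_lport(void)
{
	/* Flush multicast-group entries first, then delete the logical port. */
	set_xcast_mode_none();
	delete_lport();
}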
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Store the device name in dev->data->name, to have symmetrical behavior
between rte_pmd_tap_probe(name) and rte_pmd_tap_remove(name).
The netdevice name (Linux interface name) is stored in the name field of
struct pmd_internals.
The snprintf(data->name) call has been moved closer to rte_eth_dev_allocate()
as it should use the same name.
Fixes: 02f96a0a82 ("net/tap: add TUN/TAP device PMD")
Signed-off-by: Pascal Mazon <pascal.mazon@6wind.com>
Acked-by: Keith Wiles <keith.wiles@intel.com>
The EF10 family supported by the PMD has no limitation on address boundary
crossing by Tx DMA descriptors.
Fixes: 428c7ddd2f ("net/sfc: send bursts of packets")
Fixes: fec33d5bb3 ("net/sfc: support firmware-assisted TSO")
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Siena has limitations on the maximum byte count and on 4K boundary crossing
(the latter is stricter than the maximum byte count).
EF10 has a limitation on the maximum byte count only.
Fixes: f7dc06bf35 ("net/sfc/base: import 5xxx/6xxx family support")
Fixes: e7cd430c86 ("net/sfc/base: import SFN7xxx family support")
Fixes: 94190e3543 ("net/sfc/base: import SFN8xxx family support")
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
If a libefx-based driver needs some way to clear port statistics,
then an MCDI agnostic method is required.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
All kinds of filters need a hardware MAC type check
to make sure the hardware supports that type of filter.
If not, it may cause serious issues.
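A sketch of the kind of check added to the parser functions, built inside the
ixgbe driver sources; the exact set of MAC types allowed per filter is an
assumption here:

#include <errno.h>
#include "ixgbe_ethdev.h"

/* Reject the filter early when the MAC type does not support it. */
static int
check_filter_mac_type(struct ixgbe_hw *hw)
{
	if (hw->mac.type != ixgbe_mac_82599EB &&
	    hw->mac.type != ixgbe_mac_X540 &&
	    hw->mac.type != ixgbe_mac_X550 &&
	    hw->mac.type != ixgbe_mac_X550EM_x)
		return -ENOTSUP;
	return 0;
}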
Fixes: 11777435c7 ("net/ixgbe: parse flow director filter")
Fixes: 672be56d76 ("net/ixgbe: parse n-tuple filter")
Fixes: eb3539fc85 ("net/ixgbe: parse ethertype filter")
Fixes: 429f6ebb42 ("net/ixgbe: parse TCP SYN filter")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Move two ixgbe MAC type check macros to ixgbe_ethdev.h in
order to be used by the filter parser functions in
ixgbe_flow.c.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Delete useless function declarations in ixgbe_flow.c and
adjust the function definition positions to avoid a compile error.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
With all vmxnet3 version 3 changes incorporated in the vmxnet3 driver,
the driver can configure the emulation to run at vmxnet3 version 3, provided
the emulation advertises support for version 3.
This patch also updates release notes.
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Acked-by: Jin Heo <heoj@vmware.com>
In vmxnet3 version 3, the emulation added support for the vmxnet3 driver
to communicate information about the memory regions the driver will use
for rx/tx buffers. The driver can also indicate which rx/tx queue the
memory region is applicable for. If this information is communicated
to the emulation, the emulation will always keep these memory regions
mapped, thereby avoiding the mapping/unmapping overhead for every packet.
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Signed-off-by: Guolin Yang <gyang@vmware.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Acked-by: Jin Heo <heoj@vmware.com>
This command is reserved.
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Acked-by: Jin Heo <heoj@vmware.com>
vmxnet3 driver preallocates buffers for receiving packets and posts the
buffers to the emulation. In order to deliver a received packet to the
guest, the emulation must map the buffer(s) and copy the packet into them.
To avoid this memory mapping overhead, this patch introduces the receive
data ring - a set of small sized buffers that are always mapped by
the emulation. If a packet fits into the receive data ring buffer, the
emulation delivers the packet via the receive data ring (which must be
copied by the guest driver), or else the usual receive path is used.
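Conceptually, the completion path decision looks like the sketch below
(structure and field names are illustrative, not the actual vmxnet3
definitions):

#include <stdint.h>
#include <string.h>

/* Illustrative model of the receive data ring. */
struct rx_data_ring {
	uint8_t *base;      /* small, permanently mapped buffers */
	uint16_t buf_size;  /* size of each data ring buffer     */
};

static void
deliver_packet(struct rx_data_ring *dr, uint16_t idx, uint16_t pkt_len,
	       uint8_t *mbuf_data)
{
	/*
	 * Small packets arrive via the always-mapped data ring and must be
	 * copied into the mbuf; larger packets were DMA'd into the regular
	 * Rx buffer and need no copy here.
	 */
	if (pkt_len <= dr->buf_size)
		memcpy(mbuf_data, dr->base + (size_t)idx * dr->buf_size, pkt_len);
}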
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Acked-by: Jin Heo <heoj@vmware.com>
The vmxnet3 driver supports a transmit data ring, viz. a set of fixed-size
buffers used by the driver to copy packet headers. Small packets that
fit these buffers are copied into these buffers entirely.
Currently this buffer size is fixed at 128 bytes. This patch extends
transmit data ring implementation to allow variable length transmit
data ring buffers. The length of the buffer is read from the emulation
during initialization.
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Acked-by: Jin Heo <heoj@vmware.com>
Shared memory is used to exchange information between the vmxnet3 driver
and the emulation. In order to request emulation to perform a task, the
driver first populates specific fields in this shared memory and then
issues the corresponding command by writing to the command register (CMD). The
layout of the shared memory was defined by vmxnet3 version 1 and cannot
be extended for every new command without breaking backward compatibility.
To address this problem, in vmxnet3 version 3, the emulation repurposed
a reserved field in the shared memory to represent command information
instead. For new commands, the driver first populates the command
information field in the shared memory and then issues the command. The
emulation interprets the data written to the command information
depending on the type of the command. This patch exposes this capability
to the driver.
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Acked-by: Jin Heo <heoj@vmware.com>
Clean up some code in preparation for the vmxnet3 version 3 changes.
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Acked-by: Jin Heo <heoj@vmware.com>
Enable DCB on SRIOV VFs, including:
- UP and TC mapping according to dcb_tc in struct rte_eth_dcb_rx_conf.
- TC and queue mapping: queues are divided equally among the TCs.
- UP insertion when sending packets, according to the TC the Tx queue
  belongs to.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Fix the UP and TC mapping to distribute multiple UPs across the TCs instead
of mapping the UPs that are larger than num_tcs to TC0.
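The corrected mapping distributes the UPs with a simple modulo; a sketch of
the idea (array size and names are illustrative):

#include <stdint.h>

#define NUM_USER_PRIORITIES 8  /* 802.1p user priorities */

/* Map every UP to a TC instead of clamping UPs >= num_tcs to TC 0. */
static void
map_up_to_tc(uint8_t dcb_tc[NUM_USER_PRIORITIES], uint8_t num_tcs)
{
	if (num_tcs == 0)
		return;
	for (uint8_t up = 0; up < NUM_USER_PRIORITIES; up++)
		dcb_tc[up] = up % num_tcs;
}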
Fixes: 1a572499be ("app/testpmd: setup DCB forwarding based on traffic class")
Cc: stable@dpdk.org
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
In SRIOV mode, the mq_mode of rte_eth_rxmode should not carry VMDQ info
without rx_adv_conf setting.
Fixes: a30979f6ad ("app/testpmd: set Rx VMDq RSS mode")
Cc: stable@dpdk.org
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
In the SRIOV case, ETH_MQ_RX_VMDQ_DCB and ETH_MQ_RX_DCB should be considered
to have the same meaning, because the multi-queue mapping is the same for
SRIOV and VMDq in ixgbe.
Fixes: 27b609cbd1 ("ethdev: move the multi-queue mode check to specific drivers")
Cc: stable@dpdk.org
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
DCB is split into RX and TX modes. All-queues-drop is set for the TX mode.
That is not appropriate because all-queues-drop is an RX feature.
Move this setting from TX to RX.
Fixes: f3f9b17bb8 ("net/ixgbe: support multiqueue mode VMDq DCB with SRIOV")
Cc: stable@dpdk.org
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Checking whether the counter is an IB counter was performed with the
wrong index.
Fixes: 859081d3fb ("net/mlx5: add out of buffer counter to extended statistic")
Cc: stable@dpdk.org
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This commit adds support for hardware TSO for tunneled packets.
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Prior to this commit Tx checksum offload was supported only for the
inner headers.
This commit adds support for the hardware to compute the checksum for the
outer headers as well.
The support is for tunneling protocols GRE and VXLAN.
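From the application side, requesting the outer checksum on a
VXLAN-encapsulated IPv4/TCP packet looks roughly like the sketch below, using
the generic mbuf offload fields (header sizes and the helper name are
assumptions):

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_tcp.h>

#define VXLAN_HDR_LEN 8

static void
request_outer_cksum(struct rte_mbuf *m)
{
	/* Outer header lengths so the NIC can locate the outer IPv4 checksum. */
	m->outer_l2_len = sizeof(struct ether_hdr);
	m->outer_l3_len = sizeof(struct ipv4_hdr);
	/* Inner lengths: l2_len covers UDP + VXLAN + inner Ethernet. */
	m->l2_len = sizeof(struct udp_hdr) + VXLAN_HDR_LEN +
		    sizeof(struct ether_hdr);
	m->l3_len = sizeof(struct ipv4_hdr);
	m->l4_len = sizeof(struct tcp_hdr);
	m->ol_flags |= PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM |
		       PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
}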
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
The function is not defined anywhere, remove it.
Fixes: 0eb609239e ("ixgbe: enable Rx queue interrupts for PF and VF")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
In the IOV scenario, multiple Rx queues can be assigned to one VF.
If dropping is not enabled and no descriptors are available
for one queue, this queue can block the others.
Fixes: 00e30184da ("ixgbe: add PF support")
Cc: stable@dpdk.org
Suggested-by: Liang-Min Larry Wang <liang-min.wang@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
During PF initialization, the PF generates an initial MAC address
for the VFs; the purpose is to help a VF keep a constant MAC address across
its startup/shutdown cycles. This is no longer necessary, since we already
provide an API to set a VF's MAC address from the PF side
(rte_pmd_i40e_set_vf_mac_addr).
An application can use this API to lock down a VF's MAC address (of course
this should happen before VF initialization).
Even without this patch, rte_pmd_i40e_set_vf_mac_addr can still be used
to overwrite the random one, but this patch aligns DPDK's default behavior
with the kernel PF driver's, which helps to give an identical experience
when working with the kernel VF driver.
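A usage sketch from the PF application side (the exact parameter types follow
the rte_pmd_i40e.h of the corresponding release and may differ between
versions):

#include <rte_ether.h>
#include <rte_pmd_i40e.h>

/* Pin VF 0's MAC address from the PF application, before the VF driver
 * initializes, so the VF always sees a deterministic address. */
static int
pin_vf0_mac(uint8_t pf_port_id)
{
	struct ether_addr mac = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
	};

	return rte_pmd_i40e_set_vf_mac_addr(pf_port_id, 0, &mac);
}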
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Having a drop queue per drop flow consumes a lot of memory and reduces the
NIC's ability to handle such cases at speed.
To avoid this and reduce memory consumption, a single RSS drop queue is
created for all drop flows.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Implement a basic flow RSS action. This commit does not handle the default
RSS queues already created by the control plane, this last part being huge.
Any new RSS flow request will be added using a higher priority
than the default one to make sure this rule will be the one used.
The default rules (those created by dev_start()) remain, but as they have a
lower priority they will not receive any new packets.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>