Break ixgbe_setup_eee_X550 down to better handle the change from if
statements to switch statements that is needed to add X550em_a KR support.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
When software sends a command to firmware using the host
slave command interface, firmware fails to receive the
command with a checksum error, because the checksum is
not set correctly by the driver software.
This patch sets the command checksum to the default value of
0xFF which, per the datasheet, tells firmware not to verify
the checksum.
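For illustration, a minimal sketch of the change, assuming an illustrative
command header layout (the real structure and constant names in the base
code may differ):

    #include <stdint.h>

    /* Illustrative host interface command header; the field layout is an
     * assumption for the purpose of this sketch. */
    struct hic_hdr {
        uint8_t cmd;
        uint8_t buf_len;
        uint8_t cmd_or_resp;
        uint8_t checksum;
    };

    /* Per the datasheet, 0xFF makes firmware skip checksum verification. */
    #define FW_DEFAULT_CHECKSUM 0xFF

    static void
    prepare_host_cmd(struct hic_hdr *hdr, uint8_t cmd, uint8_t buf_len)
    {
        hdr->cmd = cmd;
        hdr->buf_len = buf_len;
        hdr->cmd_or_resp = 0;
        /* Previously left unset, which firmware rejected as a bad checksum. */
        hdr->checksum = FW_DEFAULT_CHECKSUM;
    }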
Fixes: 86b8fb293f ("ixgbe/base: add sw-firmware sync for resource sharing on X550em_a")
Fixes: 0790adeb56 ("ixgbe/base: support X550em_a device")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
If the PF has already assigned a MAC address to the VF, an error
code indicating that the PF rejects the MAC address change should
be returned.
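A minimal sketch of the intended behaviour on the PF side (the structure,
helper name and error code below are illustrative, not the driver's actual
ones):

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative per-VF state kept by the PF. */
    struct vf_info {
        uint8_t mac[6];
        int mac_assigned_by_pf; /* non-zero once the PF assigned a MAC */
    };

    /* Returning a negative value lets the mailbox handler NACK the request
     * instead of silently ignoring it. */
    static int
    pf_handle_vf_set_mac(struct vf_info *vf, const uint8_t new_mac[6])
    {
        if (vf->mac_assigned_by_pf && memcmp(vf->mac, new_mac, 6) != 0)
            return -EPERM; /* PF rejects the MAC address change */
        memcpy(vf->mac, new_mac, 6);
        return 0;
    }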
Fixes: af75078fec ("first public release")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch adds a new PHY type and media type to support
SGMII links on X550, and adds ixgbe_setup_sgmii to handle
SGMII link setup.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch adds two new VF requests, IXGBE_VF_GET_RETA and
IXGBE_VF_GET_RSS_KEY, to the mailbox API.
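A minimal sketch of what the VF side of such a request could look like
(the message framing, buffer size and transfer helper are illustrative;
only the opcode names come from this commit):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define VF_MBX_WORDS 16 /* illustrative mailbox message size */

    /* Word 0 carries the request opcode (e.g. IXGBE_VF_GET_RSS_KEY); the
     * reply payload follows in the remaining words. */
    static int
    vf_request_rss_key(uint32_t get_rss_key_opcode,
                       int (*mbx_xfer)(uint32_t *msg, unsigned int words),
                       uint8_t *key, size_t key_len)
    {
        uint32_t msg[VF_MBX_WORDS] = { 0 };
        int ret;

        if (key_len > sizeof(msg) - sizeof(msg[0]))
            return -1;
        msg[0] = get_rss_key_opcode;
        ret = mbx_xfer(msg, VF_MBX_WORDS); /* post request, read reply */
        if (ret != 0)
            return ret;
        memcpy(key, &msg[1], key_len);
        return 0;
    }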
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This commit brings back Rx scatter and the related support in the MTU
update function. The maximum number of segments per packet is no longer
a fixed value (previously MLX5_PMD_SGE_WR_N, set to 4 by default), which
caused performance issues when fewer segments were actually needed and
limited the maximum packet size that could be received with the default
mbuf size (at most 8576 bytes).
These limitations are now lifted as the number of SGEs is derived from the
MTU (which implies MRU) at queue initialization and during MTU update.
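A sketch of the underlying arithmetic, with illustrative names (the PMD may
additionally round the result up to a power of two for its ring layout):

    #include <stdint.h>

    /* Number of SGEs needed to receive a frame of mru bytes when each mbuf
     * provides buf_size bytes of data room (ceiling division). */
    static unsigned int
    rx_sges_for_mru(uint32_t mru, uint32_t buf_size)
    {
        return (mru + buf_size - 1) / buf_size;
    }

For example, with a 2048-byte data room, a 9000-byte MRU now yields 5 SGEs
instead of being capped at the previous fixed value of 4.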
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
The primary purpose of the rxq_rehash() function is to stop and restart
reception on a queue after re-posting buffers. This may fail if the array
that temporarily stores existing buffers for reuse cannot be allocated.
Update rxq_rehash() to work on the target queue directly (not through a
template copy) and avoid this allocation.
rxq_alloc_elts() is modified accordingly to take buffers from an existing
queue directly and update their refcount.
Unlike rxq_rehash(), rxq_setup() must work on a temporary structure but
should not allocate new mbufs from the pool while reinitializing an
existing queue. This is achieved by using the refcount-aware
rxq_alloc_elts() before overwriting queue data.
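A minimal sketch of the refcount-aware allocation path (function and
parameter names are illustrative): when the caller supplies buffers taken
from the existing queue, they are reused and their reference count reset;
otherwise fresh mbufs come from the pool.

    #include <errno.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Error unwinding is omitted for brevity. */
    static int
    rxq_fill_elts(struct rte_mbuf **elts, unsigned int elts_n,
                  struct rte_mempool *mp, struct rte_mbuf **reuse)
    {
        unsigned int i;

        for (i = 0; i != elts_n; ++i) {
            struct rte_mbuf *buf;

            if (reuse != NULL) {
                buf = reuse[i];              /* buffer from the old queue */
                rte_mbuf_refcnt_set(buf, 1); /* queue becomes the sole owner */
            } else {
                buf = rte_pktmbuf_alloc(mp); /* fresh buffer */
                if (buf == NULL)
                    return -ENOMEM;
            }
            elts[i] = buf;
        }
        return 0;
    }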
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Toggling RX checksum offloads is already done at initialization time. This
code does not belong in rxq_rehash().
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Compared to its previous incarnation, the software limit on the number of
mbuf segments (previously MLX5_PMD_SGE_WR_N, set to 4 by default) is gone,
hence there is no need for the linearization code and related buffers that
permanently consumed a non-negligible amount of memory to handle oversized
mbufs.
The resulting code is both lighter and faster.
With the addition of this code, older GCC versions (such
as 4.8.5) may complain about the 'wqe' variable being used
uninitialized, so initialize it preemptively even though doing so is not
strictly necessary.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
The space necessary to store segmented packets cannot be known in advance
and must be verified for each of them.
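A minimal sketch of the per-packet check (names are illustrative): the
burst loop stops as soon as the remaining descriptor slots cannot hold
every segment of the next packet.

    #include <stdbool.h>

    /* 'free_slots' is the room left in the ring; a packet made of segs_n
     * segments is accepted only if it fits entirely, otherwise the burst
     * ends and the packet is retried on the next call. */
    static bool
    tx_room_for_packet(unsigned int *free_slots, unsigned int segs_n)
    {
        if (*free_slots < segs_n)
            return false;
        *free_slots -= segs_n;
        return true;
    }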
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This feature enables the TX burst function to emit up to 5 packets using
only two work queue entries (WQEs) on devices that support it. This saves
PCI bandwidth and improves performance.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
Implement the send inline feature, which copies packet data directly into
work queue entries (WQEs) for improved latency. The maximum packet
size and the minimum number of Tx queues required to qualify for inline
send are user-configurable.
This feature is effective when HW causes a performance bottleneck.
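A minimal sketch of the qualification test, with illustrative parameter
names standing in for the two user-configurable settings:

    #include <stdbool.h>
    #include <stdint.h>

    /* max_inline: largest packet size copied into the WQE.
     * min_txqs:   minimum number of Tx queues for inlining to apply. */
    static bool
    should_inline_packet(uint32_t pkt_len, uint32_t max_inline,
                         unsigned int txqs_n, unsigned int min_txqs)
    {
        return max_inline > 0 && txqs_n >= min_txqs && pkt_len <= max_inline;
    }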
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Replacing the variable countdown (which depends on the number of
descriptors) with a fixed relative threshold known at compile time improves
performance by reducing the TX queue structure footprint and the amount of
code to manage completions during a burst.
Completions are now requested at most once per burst after the threshold
is reached.
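A minimal sketch of the scheme (the threshold value and names are
illustrative): completion reports are requested based on a compile-time
constant rather than a per-queue countdown, and the check runs once per
burst.

    #include <stdbool.h>

    #define TX_COMP_THRESH 32 /* fixed at compile time; value illustrative */

    /* Called once at the end of a burst; returns true when a completion
     * request should be added to the last WQE of this burst. */
    static bool
    need_completion(unsigned int *pkts_since_comp, unsigned int sent)
    {
        *pkts_since_comp += sent;
        if (*pkts_since_comp < TX_COMP_THRESH)
            return false;
        *pkts_since_comp = 0;
        return true;
    }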
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Mini (compressed) completion queue entries (CQEs) are returned by the
NIC when PCI back pressure is detected, in which case the first CQE64
contains common packet information, followed by a number of CQE8 entries
providing the rest, followed by a matching number of empty CQE64
entries to be used by software for decompression.
Before decompression:
    0           1           2           6          7          8
+-------+  +---------+  +-------+   +-------+  +-------+  +-------+
| CQE64 |  |  CQE64  |  | CQE64 |   | CQE64 |  | CQE64 |  | CQE64 |
|-------|  |---------|  |-------|   |-------|  |-------|  |-------|
| ..... |  | cqe8[0] |  |       | . |       |  |       |  | ..... |
| ..... |  | cqe8[1] |  |       | . |       |  |       |  | ..... |
| ..... |  | ....... |  |       | . |       |  |       |  | ..... |
| ..... |  | cqe8[7] |  |       |   |       |  |       |  | ..... |
+-------+  +---------+  +-------+   +-------+  +-------+  +-------+
After decompression:
    0          1      ...      8
+-------+  +-------+       +-------+
| CQE64 |  | CQE64 |       | CQE64 |
|-------|  |-------|       |-------|
| ..... |  | ..... |   .   | ..... |
| ..... |  | ..... |   .   | ..... |
| ..... |  | ..... |   .   | ..... |
| ..... |  | ..... |       | ..... |
+-------+  +-------+       +-------+
This patch does not perform the entire decompression step, as it would be
really expensive; instead, the first CQE64 is consumed and an internal
context is maintained to interpret the following CQE8 entries directly.
Intermediate empty CQE64 entries are handed back to HW without further
processing.
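A minimal sketch of such a context (structures and fields are illustrative,
not the PMD's actual definitions): it keeps the shared fields from the first
CQE64 and an index into the CQE8 array, so each poll can emit the next
packet without reading the placeholder entries.

    #include <stdint.h>

    /* Illustrative mini CQE carrying only per-packet fields. */
    struct mini_cqe8 {
        uint32_t byte_cnt; /* packet length */
        uint32_t flags;    /* per-packet checksum/hash flags */
    };

    /* Per-Rx-queue decompression session state (illustrative). */
    struct cqe_zip_ctx {
        uint32_t shared_flags; /* fields common to the whole session,
                                * taken from the first CQE64 */
        uint16_t total;        /* number of mini CQEs in the session */
        uint16_t next;         /* next CQE8 index to consume */
    };

    /* Returns 1 while packets remain in the current compressed session,
     * 0 once a regular CQE64 must be polled again. */
    static int
    zip_next(struct cqe_zip_ctx *zip, const struct mini_cqe8 *cqe8,
             struct mini_cqe8 *out)
    {
        if (zip->next >= zip->total)
            return 0;
        *out = cqe8[zip->next++];
        return 1;
    }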
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
The intent is to replace the remaining compile-time options and environment
variables with a common means of runtime configuration. This commit only
adds the kvargs handling code; subsequent commits will update the rest.
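For reference, a minimal sketch of the handling with the standard
rte_kvargs helpers (the key name and handler are placeholders; the actual
device arguments arrive in later commits):

    #include <stdlib.h>
    #include <rte_kvargs.h>

    static const char *const valid_keys[] = { "example_arg", NULL };

    static int
    handle_example_arg(const char *key, const char *val, void *opaque)
    {
        unsigned long *out = opaque;

        (void)key;
        *out = strtoul(val, NULL, 0);
        return 0;
    }

    static int
    parse_devargs(const char *args, unsigned long *out)
    {
        struct rte_kvargs *kvlist = rte_kvargs_parse(args, valid_keys);
        int ret;

        if (kvlist == NULL)
            return -1;
        ret = rte_kvargs_process(kvlist, "example_arg",
                                 handle_example_arg, out);
        rte_kvargs_free(kvlist);
        return ret;
    }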
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
These structures and macros extend those exposed by libmlx5 (in mlx5_hw.h)
to let the PMD manage work queue and completion queue elements directly.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The latest version of Mellanox OFED exposes hardware definitions necessary
to implement data path operation bypassing Verbs. Update the minimum
version requirement to MLNX_OFED >= 3.3 and clean up compatibility checks
for previous releases.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
To keep the data path as efficient as possible, move fields only useful to
the control path into new structure rxq_ctrl.
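A minimal sketch of the resulting layout (field names are illustrative):
the data path structure keeps only what the Rx burst function touches and
is embedded in the control structure, so one allocation still covers both.

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* Hot fields used by the Rx burst function. */
    struct rxq {
        struct rte_mbuf **elts; /* posted buffers */
        uint16_t elts_n;        /* ring size */
        uint16_t ci;            /* consumer index */
    };

    /* Control-path-only state, never touched per packet. */
    struct rxq_ctrl {
        void *priv;             /* back-pointer to device private data */
        void *cq;               /* Verbs completion queue */
        void *wq;               /* Verbs work queue */
        unsigned int socket;    /* NUMA socket of the queue */
        struct rxq rxq;         /* embedded data path view */
    };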
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
To keep the data path as efficient as possible, move fields only useful to
the control path into new structure txq_ctrl.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Inline TX will be fully managed by the PMD after Verbs is bypassed in the
data path. Remove the current code until then.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Since there is no scatter/gather support anymore,
CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N has no purpose and can be removed.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This is done in preparation of bypassing Verbs entirely for the data path
as a performance improvement. RX scatter cannot be maintained during the
transition and will be reimplemented later.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This is done in preparation of bypassing Verbs entirely for the data path
as a performance improvement. TX gather cannot be maintained during the
transition and will be reimplemented later.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Except for the first time when memory registration occurs, the lkey is
always cached. Since memory registration is slow and performs system calls,
performance can be improved by moving that code to its own function outside
of the data path so only the lookup code is left in the original inlined
function.
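A minimal sketch of the resulting split (cache layout and names are
illustrative): only the lookup loop stays inline in the data path; the
registration slow path is a separate, non-inlined function.

    #include <stdint.h>

    #define MP2MR_CACHE_SZ 8 /* illustrative per-queue cache size */

    struct mp2mr_entry {
        const void *mp; /* mempool the region was registered for */
        uint32_t lkey;  /* key returned by memory registration */
    };

    /* Slow path, defined elsewhere and not inlined: performs the actual
     * (system-call heavy) memory registration and fills the cache. */
    uint32_t slow_mp2mr_register(struct mp2mr_entry *cache, const void *mp);

    /* Fast path kept in the TX burst function: cache lookup only. */
    static inline uint32_t
    lookup_lkey(struct mp2mr_entry *cache, const void *mp)
    {
        unsigned int i;

        for (i = 0; i != MP2MR_CACHE_SZ; ++i)
            if (cache[i].mp == mp)
                return cache[i].lkey;           /* hit: no system call */
        return slow_mp2mr_register(cache, mp);  /* miss: register region */
    }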
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Use the RTE_PCI_DEVICE macro to set all fields rather than setting them
individually in the code. This shortens the code while helping to
future-proof it against changes to the rte_pci_id structure.
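For illustration, the ID table then reduces to entries of the following
form (the vendor/device IDs below are examples only):

    #include <rte_pci.h>

    static const struct rte_pci_id example_pci_id_map[] = {
        { RTE_PCI_DEVICE(0x15b3, 0x1013) }, /* example vendor/device pair */
        { RTE_PCI_DEVICE(0x15b3, 0x1015) }, /* example vendor/device pair */
        { .vendor_id = 0 }                  /* sentinel */
    };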
Fixes: 701c8d80c8 ("pci: support class id probing")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Remove the 0x prefix from %p formats to prevent a double 0x in logs.
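A before/after illustration of the fix (the message text is made up):

    #include <stdio.h>

    static void
    log_mapping(const void *addr)
    {
        /* Before: "%p" already prints a "0x" prefix on Linux, so this
         * produced output like "mapped at 0x0x7f25a0c00000". */
        /* printf("mapped at 0x%p\n", addr); */

        /* After: single prefix in the output. */
        printf("mapped at %p\n", addr);
    }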
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Remove the dependency on the nfp_uio kernel module. The igb_uio
kernel module can be used instead.
Fixes: 80bc1752f1 ("nfp: add guide")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Previously, a single VLAN header was treated as the inner VLAN,
but generally a single VLAN header should be treated as the outer
VLAN header.
The patch fixes the ether type used for a single VLAN, and
enables configuring inner and outer TPIDs for double VLAN.
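From the application side, the outer and inner TPIDs can then be
configured through the existing ethdev call, e.g. (the port id and TPID
values are examples):

    #include <rte_ethdev.h>

    static int
    configure_qinq_tpids(uint8_t port_id)
    {
        int ret;

        /* Outer (service) tag, 0x88A8 for standard QinQ. */
        ret = rte_eth_dev_set_vlan_ether_type(port_id, ETH_VLAN_TYPE_OUTER,
                                              0x88A8);
        if (ret != 0)
            return ret;
        /* Inner (customer) tag, 0x8100. */
        return rte_eth_dev_set_vlan_ether_type(port_id, ETH_VLAN_TYPE_INNER,
                                               0x8100);
    }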
Fixes: 19b16e2f64 ("ethdev: add vlan type when setting ether type")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
In the current i40e codebase, if a single VLAN header is added to a packet,
it is treated as the inner VLAN. Generally, a single VLAN header is
treated as the outer VLAN header, so update the driver behaviour
accordingly.
Fixes: 19b16e2f64 ("ethdev: add vlan type when setting ether type")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch adds support for Cumulus+ Ethernet adapters, which
support 10Gb/25Gb/40Gb/50Gb speeds.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
A negative array index write could occur when variable pos is used as an
index into the enic->fdir.nodes array. Fixed by adding an array index
check.
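A minimal sketch of the shape of the fix (the bound name is illustrative):
reject the position before using it as an array index.

    #include <errno.h>

    #define FDIR_NODES_MAX 2048 /* illustrative size of the nodes array */

    /* 'pos' typically comes from a hash add/delete that may return a
     * negative errno; validate it before writing enic->fdir.nodes[pos]. */
    static int
    fdir_pos_valid(int pos)
    {
        if (pos < 0 || pos >= FDIR_NODES_MAX)
            return -EINVAL;
        return 0;
    }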
Coverity issue: 13270
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
Make second cache line access behavior the same as on IA.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Jianbo Liu <jianbo.liu@linaro.org>
The Rx function should not be setting the mbuf buffer length, so remove
the assignment.
Fixes: 540a211084 ("bnx2x: driver core")
Signed-off-by: Charles (Chas) Williams <ciwillia@brocade.com>
Acked-by: Harish Patil <harish.patil@qlogic.com>
For performance reasons, this patch uses 2 VIC RQs per RQ presented to
DPDK.
The VIC requires that each descriptor be marked as either a start of
packet (SOP) descriptor or a non-SOP descriptor. A one RQ solution
requires skipping descriptors when receiving small packets and results
in bad performance when receiving many small packets.
The 2 RQ solution makes use of the VIC feature that allows a receive
on the primary queue to 'spill over' into another queue if the receive is
too large to fit in the buffer assigned to the descriptor on the
primary queue. This means that no descriptors are skipped
when receiving small packets, which results in much better performance.
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
This patch enables configuring the outer TPID for double VLAN.
Note that all other TPID values, for single VLANs or inner VLAN in the
QinQ case, are read only.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch enables VF to VF traffic with unmatched destination addresses.
The steps to enable this are:
- Enable promiscuous mode filter settings.
- Check for VF mode and enable promiscuous mode settings for VF.
- Check filter configuration to ensure conflicting filter modes
are not set.
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
- Add device id to the PCI table
- Add polling for the slowpath events for CMT mode device
- Add prerequisites to allow 100g mode
* Min number of queues needed is 2
* Only an even number of queues is allowed (see the sketch after this list)
- Update documentation
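A minimal sketch of the queue-count prerequisite (names are illustrative):

    #include <errno.h>

    /* 100G (CMT) mode needs at least two queues and an even queue count. */
    static int
    check_cmt_queue_count(unsigned int nb_queues)
    {
        if (nb_queues < 2 || (nb_queues & 1) != 0)
            return -EINVAL;
        return 0;
    }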
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Add support for setting hash configuration based on adapter capability
and update corresponding NIC documentation.
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The ixgbe PMD RX function(s) miss some packet types that are:
- correctly recognised by the underlying HW.
- marked as supported by ixgbe_dev_supported_ptypes_get().
Fixes: 9586ebd358 ("ixgbe: replace some offload flags with packet type")
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: Olivier Matz <olivier.matz@6wind.com>
Intel stopped supporting the Match Interface; remove references to it
from the documentation.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>