This patch adds support for the VLAN pool filter (VLVF) to be
bypassed when adding or removing a VLAN filter table array (VFTA) entry.
The PF can utilize the default pool while preserving the VLVF for the
VFs' use.
Meanwhile, update the VF operations and drivers where corresponding
functionality is invoked.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch simplifies adding and removing VLANs in the
VFTA/VLVF/VLVFB registers. The logic to determine which registers to use
has been simplified to (vid / 32) and (1 - vid / 32). Many conditional
paths and checks are no longer needed with this patch.
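For reference, the indexing boils down to the following: (id / 32) selects
the register, (id % 32) selects the bit within it, and (1 - id / 32) selects
the companion register of a two-register pair. A minimal standalone sketch
(the array and the index value are purely illustrative, not the driver code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t regs[2] = { 0, 0 }; /* e.g. the two 32-bit halves of one entry */
        unsigned int id = 37;        /* hypothetical pool/VLAN index, 0..63 */

        regs[id / 32] |= (uint32_t)1 << (id % 32); /* set the bit for this id */
        uint32_t other = regs[1 - id / 32];        /* the paired register */

        printf("reg[%u]=0x%08x other=0x%08x\n", id / 32, regs[id / 32], other);
        return 0;
    }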
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch limits getting and putting the PHY Token to PHY MDIO
access only by adding ixgbe_read_phy_reg_x550a and
ixgbe_write_phy_reg_x550a. The PHY Token is only needed to
synchronize access to the MDIO shared between the two MAC instances.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch corrects the FLA/GSCL/GSCN access offset values according
to the datasheet.
Fixes: 0790adeb56 ("ixgbe/base: support X550em_a device")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch fixes a possible race issue between ports, when issuing host
interface commands, by acquiring/releasing the management host interface
semaphore in ixgbe_host_interface_command.
Fixes: 36f43e8679 ("ixgbe/base: refactor manageability block communication")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
For X540 onwards, it is possible for a system reset occurring at the
wrong time to leave the SWFW semaphore held (set). This new function will
attempt to grab and release the semaphore. If the grab times out, it
will still release the semaphore, placing it in a known good state.
The idea is to call this when it is known that no one should be holding
the semaphore (i.e. at probe time).
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Two device IDs are changed from 15C6/15C7 to 15E4/15E5 because of
PHY info changes. The 15C6/15C7 IDs are now used for the backplane
SGMII versions.
Also, clean up some discovery kludges from the previously shared IDs,
and add 15C6/15C7 to ixgbe_set_mdio_speed as a precaution,
to control the MDIO speed even though nothing should be attached.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
The NW_MNG_IF_SEL register is a PHY link configuration register.
Add ixgbe_read_mng_if_sel_x550em to read NW_MNG_IF_SEL, validate
the register values and save fields such as the PHY MDIO address. This
centralises the reading and checking of the register in one place.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
The ixgbe_vf.h file did not use _<FILENAME>_ and instead used
__<FILENAME>__ which is not the standard used in every other file.
Fixes: af75078fec ("first public release")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
When there is an error getting the PHY token, the error path
fails to release the locks that it has taken. Release those
locks in that failure case.
Fixes: 86b8fb293f ("ixgbe/base: add sw-firmware sync for resource sharing on X550em_a")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch renames IXGBE_PVFTTDLEN to IXGBE_PVFTDLEN according to
abbreviation of Transmit Descriptor Length in datasheet.
Fixes: d2e72774e5 ("ixgbe/base: support X550")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch adds clearing the pool mappings when configuring default
MAC addresses for the interface. Without this, there is a risk of
leaking an address into pool 0, which really belongs to VF 0 when
SR-IOV is enabled.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch updates ixgbe_setup_mac_link_sfp_x550a for X550 SFP+.
ixgbe_set_lan_id_multi_port_pcie has been updated to set the MAC
instance (0/1), which is needed when configuring the external PHY,
since X550a has two instances of MGPK. The MAC instance is read
from the EEPROM.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Use the method pointers instead of direct function calls for IOSF
access so that the right functions can be called on X550EM_a,
compared to other devices using the driver.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Break ixgbe_setup_eee_X550 down to better handle a change from if
statements to switch statements needed to add X550em_a KR support.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
When software sends commands to firmware using the host
slave command interface, firmware fails to receive the
command due to a checksum failed error, as the checksum is
not being correctly set by the driver software.
This patch sets the command checksum to the default value of
0xFF; as per the datasheet, the checksum then won't
be checked by the firmware.
Fixes: 86b8fb293f ("ixgbe/base: add sw-firmware sync for resource sharing on X550em_a")
Fixes: 0790adeb56 ("ixgbe/base: support X550em_a device")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
An error code indicating that the PF rejects the MAC address change
should be returned in case the PF has already assigned a MAC
address to the VF.
Fixes: af75078fec ("first public release")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch adds a new PHY type and media type to support
SGMII links for X550, and adds ixgbe_setup_sgmii to support
SGMII link setup.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This patch adds two new VF requests of IXGBE_VF_GET_RETA and
IXGBE_VF_GET_RSS_KEY to the mailbox API.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
This commit brings back Rx scatter and related support by the MTU update
function. The maximum number of segments per packet is not a fixed value
anymore (previously MLX5_PMD_SGE_WR_N, set to 4 by default) as it caused
performance issues when fewer segments were actually needed as well as
limitations on the maximum packet size that could be received with the
default mbuf size (supporting at most 8576 bytes).
These limitations are now lifted as the number of SGEs is derived from the
MTU (which implies MRU) at queue initialization and during MTU update.
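A rough sketch of the arithmetic involved (the values and the overhead
handling are illustrative assumptions, not the PMD code):

    #include <stdio.h>

    int main(void)
    {
        unsigned int mru = 9000 + 18;       /* hypothetical MTU plus L2 overhead */
        unsigned int seg_size = 2048 - 128; /* mbuf data room minus headroom */
        /* Number of SGEs needed so that a maximum-size frame always fits. */
        unsigned int sges = (mru + seg_size - 1) / seg_size;

        printf("%u segments per packet\n", sges);
        return 0;
    }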
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
The primary purpose of rxq_rehash() function is to stop and restart
reception on a queue after re-posting buffers. This may fail if the array
that temporarily stores existing buffers for reuse cannot be allocated.
Update rxq_rehash() to work on the target queue directly (not through a
template copy) and avoid this allocation.
rxq_alloc_elts() is modified accordingly to take buffers from an existing
queue directly and update their refcount.
Unlike rxq_rehash(), rxq_setup() must work on a temporary structure but
should not allocate new mbufs from the pool while reinitializing an
existing queue. This is achieved by using the refcount-aware
rxq_alloc_elts() before overwriting queue data.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Toggling RX checksum offloads is already done at initialization time. This
code does not belong in rxq_rehash().
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Compared to its previous incarnation, the software limit on the number of
mbuf segments is no more (previously MLX5_PMD_SGE_WR_N, set to 4 by
default) hence no need for linearization code and related buffers that
permanently consumed a non negligible amount of memory to handle oversized
mbufs.
The resulting code is both lighter and faster.
With the addition of this code, older GCC versions (such
as 4.8.5) may complain about 'wqe' variable being uninitialized, so
initialize it preemptively, even though it is not necessary to do so.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
The space necessary to store segmented packets cannot be known in advance
and must be verified for each of them.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
This feature enables the TX burst function to emit up to 5 packets using
only two work queue entries (WQEs) on devices that support it. Saves PCI
bandwidth and improves performance.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
Implement the send inline feature, which copies packet data directly into
work queue entries (WQEs) for improved latency. The maximum packet
size and the minimum number of Tx queues to qualify for inline send
are user-configurable.
This feature is effective when HW causes a performance bottleneck.
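The decision described above can be sketched as follows (the parameter
names are illustrative assumptions, not the PMD's actual fields):

    #include <stdbool.h>
    #include <stdint.h>

    /* A packet qualifies for inline send when it is small enough to be copied
     * into the WQE and enough Tx queues are configured for inlining to be
     * worthwhile; both thresholds are user-configurable in the feature
     * described above. */
    static bool
    use_inline_send(uint32_t pkt_len, unsigned int nb_txq,
                    uint32_t max_inline, unsigned int txqs_min_inline)
    {
        return max_inline > 0 && pkt_len <= max_inline &&
               nb_txq >= txqs_min_inline;
    }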
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Replacing the variable countdown (which depends on the number of
descriptors) with a fixed relative threshold known at compile time improves
performance by reducing the TX queue structure footprint and the amount of
code to manage completions during a burst.
Completions are now requested at most once per burst after threshold is
reached.
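In outline (field and constant names are made up for illustration, not the
PMD structures):

    #include <stdbool.h>
    #include <stdint.h>

    #define TX_COMP_THRESH 32 /* hypothetical compile-time threshold */

    struct txq_sketch {
        uint16_t elts_comp; /* descriptors used since the last completion request */
    };

    /* Called once at the end of a burst with the number of descriptors the
     * burst consumed; returns true when a completion should be requested. */
    static bool
    need_completion(struct txq_sketch *txq, uint16_t used)
    {
        txq->elts_comp += used;
        if (txq->elts_comp < TX_COMP_THRESH)
            return false;
        txq->elts_comp = 0;
        return true;
    }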
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
Mini (compressed) completion queue entries (CQEs) are returned by the
NIC when PCI back pressure is detected, in which case the first CQE64
contains common packet information followed by a number of CQE8
providing the rest, followed by a matching number of empty CQE64
entries to be used by software for decompression.
Before decompression:

      0          1          2           6         7         8
  +-------+ +---------+ +-------+   +-------+ +-------+ +-------+
  | CQE64 | |  CQE64  | | CQE64 |   | CQE64 | | CQE64 | | CQE64 |
  |-------| |---------| |-------|   |-------| |-------| |-------|
  | ..... | | cqe8[0] | |       | . |       | |       | | ..... |
  | ..... | | cqe8[1] | |       | . |       | |       | | ..... |
  | ..... | | ....... | |       | . |       | |       | | ..... |
  | ..... | | cqe8[7] | |       |   |       | |       | | ..... |
  +-------+ +---------+ +-------+   +-------+ +-------+ +-------+

After decompression:

      0         1     ...     8
  +-------+ +-------+     +-------+
  | CQE64 | | CQE64 |     | CQE64 |
  |-------| |-------|     |-------|
  | ..... | | ..... |  .  | ..... |
  | ..... | | ..... |  .  | ..... |
  | ..... | | ..... |  .  | ..... |
  | ..... | | ..... |     | ..... |
  +-------+ +-------+     +-------+
This patch does not perform the entire decompression step as it would be
really expensive; instead, the first CQE64 is consumed and an internal
context is maintained to interpret the following CQE8 entries directly.
Intermediate empty CQE64 entries are handed back to HW without further
processing.
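A simplified sketch of such an internal context (all names and fields are
illustrative assumptions, not the mlx5 data structures):

    #include <stdint.h>

    struct mini_cqe {                /* stand-in for a CQE8 entry */
        uint32_t byte_cnt;
    };

    /* One compressed session: the first CQE64 told us how many mini CQEs
     * follow; consume them one by one without re-parsing full CQE64s. */
    struct zip_ctx {
        const struct mini_cqe *mini; /* mini CQEs carried by the first CQE64 */
        unsigned int left;           /* mini CQEs not yet consumed */
        unsigned int idx;            /* next mini CQE to read */
    };

    /* Returns the byte count of the next packet in the compressed session,
     * or 0 once the session is exhausted and normal CQE parsing resumes. */
    static uint32_t
    zip_next(struct zip_ctx *zip)
    {
        if (zip->left == 0)
            return 0;
        zip->left--;
        return zip->mini[zip->idx++].byte_cnt;
    }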
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Vasily Philipov <vasilyf@mellanox.com>
The intent is to replace the remaining compile-time options and environment
variables with a common means of runtime configuration. This commit only
adds the kvargs handling code; subsequent commits will update the rest.
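For reference, device arguments are typically consumed with the kvargs
library along these lines (the key name and handler are illustrative, not
the PMD's actual parameters):

    #include <rte_kvargs.h>

    static int
    parse_bool_arg(const char *key, const char *val, void *opaque)
    {
        int *out = opaque;

        (void)key;
        *out = (val != NULL && val[0] == '1');
        return 0;
    }

    /* devargs is the string passed after the device name, e.g. "some_flag=1";
     * "some_flag" is a made-up key used only for illustration. */
    static int
    parse_devargs(const char *devargs, int *some_flag)
    {
        const char *keys[] = { "some_flag", NULL };
        struct rte_kvargs *kvlist = rte_kvargs_parse(devargs, keys);

        if (kvlist == NULL)
            return -1;
        rte_kvargs_process(kvlist, "some_flag", parse_bool_arg, some_flag);
        rte_kvargs_free(kvlist);
        return 0;
    }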
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
These structures and macros extend those exposed by libmlx5 (in mlx5_hw.h)
to let the PMD manage work queue and completion queue elements directly.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The latest version of Mellanox OFED exposes hardware definitions necessary
to implement data path operation bypassing Verbs. Update the minimum
version requirement to MLNX_OFED >= 3.3 and clean up compatibility checks
for previous releases.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
To keep the data path as efficient as possible, move fields only useful to
the control path into new structure rxq_ctrl.
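The resulting split can be pictured roughly as follows (field names are
illustrative; only the separation of hot and cold data matters):

    /* Data-path ("hot") fields only: touched for every received packet. */
    struct rxq_sketch {
        volatile void *cq;      /* completion queue ring */
        volatile void *wq;      /* work queue ring */
        unsigned int ci;        /* consumer index */
        unsigned int elts_n;    /* number of descriptors */
    };

    /* Control-path fields: Verbs objects, configuration, back-pointers.
     * Embeds the hot structure so the data path never touches the rest. */
    struct rxq_ctrl_sketch {
        void *priv;             /* device private data */
        void *ibv_cq;           /* Verbs completion queue object */
        void *ibv_wq;           /* Verbs work queue object */
        struct rxq_sketch rxq;  /* data-path part, used by the Rx burst */
    };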
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
To keep the data path as efficient as possible, move fields only useful to
the control path into new structure txq_ctrl.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Inline TX will be fully managed by the PMD after Verbs is bypassed in the
data path. Remove the current code until then.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
There is no scatter/gather support anymore, CONFIG_RTE_LIBRTE_MLX5_SGE_WR_N
has no purpose and can be removed.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This is done in preparation of bypassing Verbs entirely for the data path
as a performance improvement. RX scatter cannot be maintained during the
transition and will be reimplemented later.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This is done in preparation of bypassing Verbs entirely for the data path
as a performance improvement. TX gather cannot be maintained during the
transition and will be reimplemented later.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Except for the first time when memory registration occurs, the lkey is
always cached. Since memory registration is slow and performs system calls,
performance can be improved by moving that code to its own function outside
of the data path so only the lookup code is left in the original inlined
function.
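In outline, the split looks like this (a hedged sketch with made-up names;
the real per-queue cache is more elaborate):

    #include <stdint.h>
    #include <stddef.h>

    struct mr_cache_entry {
        const void *start;
        const void *end;
        uint32_t lkey;
    };

    /* Slow path (stubbed here): would perform the memory registration system
     * calls and fill a cache entry; kept out of line so the hot path stays
     * small. */
    static uint32_t
    mr_register_and_cache(struct mr_cache_entry *cache, size_t n,
                          const void *addr)
    {
        (void)cache; (void)n; (void)addr;
        return 0; /* placeholder lkey */
    }

    /* Hot path: remains inlined in the data path; only does the lookup and
     * falls back to the slow path on a miss. */
    static inline uint32_t
    mr_get_lkey(struct mr_cache_entry *cache, size_t n, const void *addr)
    {
        size_t i;

        for (i = 0; i < n; ++i)
            if ((uintptr_t)addr >= (uintptr_t)cache[i].start &&
                (uintptr_t)addr < (uintptr_t)cache[i].end)
                return cache[i].lkey;
        return mr_register_and_cache(cache, n, addr);
    }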
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Use RTE_PCI_DEVICE macro to set all fields rather than explicitly setting
them individually in the code. This shortens the code while helping to
future-proof against future changes to the rte_pci_id structure.
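For illustration, a PCI ID table using the macro looks roughly like this
(the vendor/device values are just examples):

    #include <rte_pci.h>

    static const struct rte_pci_id example_pci_id_map[] = {
        { RTE_PCI_DEVICE(0x15b3, 0x1013) }, /* example vendor/device IDs */
        { .vendor_id = 0 },                 /* sentinel entry */
    };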
Fixes: 701c8d80c8 ("pci: support class id probing")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Remove the 0x prefix from the %p format to prevent a double 0x in logs.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Yong Wang <yongwang@vmware.com>
Previously, a single VLAN header was treated as the inner VLAN,
but generally a single VLAN header should be treated as the outer
VLAN header.
The patch fixes the ether type of a single VLAN type, and
enables configuring inner and outer TPID for double VLAN.
Fixes: 19b16e2f64 ("ethdev: add vlan type when setting ether type")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
In the current i40e codebase, if a single VLAN header is added to a packet,
it is treated as the inner VLAN. Generally, a single VLAN header is
treated as the outer VLAN header, so update the driver behaviour
accordingly.
Fixes: 19b16e2f64 ("ethdev: add vlan type when setting ether type")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
A negative array index write could occur when using the variable pos as
an index into the enic->fdir.nodes array. Fix this by adding an array
index check.
Coverity issue: 13270
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
Make the second cache line access behavior the same as on IA.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Jianbo Liu <jianbo.liu@linaro.org>
The Rx function should not be setting the mbuf buffer length, so remove
the assignment.
Fixes: 540a211084 ("bnx2x: driver core")
Signed-off-by: Charles (Chas) Williams <ciwillia@brocade.com>
Acked-by: Harish Patil <harish.patil@qlogic.com>
For performance reasons, this patch uses 2 VIC RQs per RQ presented to
DPDK.
The VIC requires that each descriptor be marked as either a start of
packet (SOP) descriptor or a non-SOP descriptor. A one RQ solution
requires skipping descriptors when receiving small packets and results
in bad performance when receiving many small packets.
The 2 RQ solution makes use of the VIC feature that allows a receive
on primary queue to 'spill over' into another queue if the receive is
too large to fit in the buffer assigned to the descriptor on the
primary queue. This means that there is no skipping of descriptors
when receiving small packets and results in much better performance.
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
This patch enables configuring the outer TPID for double VLAN.
Note that all other TPID values, for single VLANs or inner VLAN in the
QinQ case, are read only.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch enables VF to VF traffic with unmatched destination addresses.
The steps to enable this are:
- Enable promiscuous mode filter settings.
- Check for VF mode and enable promiscuous mode settings for VF.
- Check filter configuration to ensure conflicting filter modes
are not set.
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
- Add the device ID to the PCI table
- Add polling for slowpath events for CMT mode devices
- Add prerequisites to allow 100G mode
  * The minimum number of queues needed is 2
  * Only an even number of queues is allowed
- Update documentation
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Add support for setting hash configuration based on adapter capability
and update corresponding NIC documentation.
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The ixgbe PMD Rx function(s) miss some packet types that are:
- correctly recognised by the underlying HW.
- marked as supported by ixgbe_dev_supported_ptypes_get().
Fixes: 9586ebd358 ("ixgbe: replace some offload flags with packet type")
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: Olivier Matz <olivier.matz@6wind.com>
When trying to release the mbufs, the function was incorrectly
iterating over the max size configured instead of the actual size
of the ring.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
- Add L4 ptypes to the ones we report as supported.
- Report/use RTE_PTYPE_L3_IPV4_EXT_UNKNOWN and
RTE_PTYPE_L3_IPV6_EXT_UNKNOWN instead of RTE_PTYPE_L3_IPV4 and
RTE_PTYPE_L3_IPV6, as the VIC can't distinguish between packets with
extensions and those without.
- Correctly set the ptype bits on packets that are both TCP/UDP
and a fragment.
- Set RTE_PTYPE_L4_NONFRAG on IP packets we know are not UDP, TCP,
or fragments.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The flags for RSS and flow director are not set correctly in the vector
Rx function, so applications which use these flags will not work
correctly.
The problem is caused by incorrect constants used for masking,
shuffling and shifting the descriptor bytes to create the resulting
flags in the mbuf. Correcting the constants fixes the problem.
Fixes: 9ed94e5bb0 ("i40e: add vector Rx")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Reviewed-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
PAGE_SIZE constant is not defined on ARM since multiple values
are possible, so DPDK needs to dynamically get the page size.
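A portable way to obtain it at runtime (standard POSIX, shown purely as an
illustration):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page_size = sysconf(_SC_PAGESIZE); /* page size of the running system */

        printf("page size: %ld bytes\n", page_size);
        return 0;
    }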
Signed-off-by: Ricardo Salveti <ricardo.salveti@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
If MSIX is available, the vector count given by the table size is one
less than the actual count. This count also limits the receive and
transmit queue resources the VF can support.
Fixes: 540a211084 ("bnx2x: driver core")
Signed-off-by: Charles (Chas) Williams <ciwillia@brocade.com>
Acked-by: Harish Patil <harish.patil@qlogic.com>
Provide functions to allow an external 802.3ad state machine to transmit
and receive LACPDUs and to set the collection/distribution flags on
slave interfaces.
Signed-off-by: Eric Kinzie <ehkinzie@gmail.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Instead of a hard-coded maximum receive length, allow the bonded interface
to inherit this limit from the slave interfaces. This allows
an application that uses jumbo frames to pass realistic values to
rte_eth_dev_configure without causing an error.
Before the bonding interface is configured, allow slaves with any
max_rx_pktlen to be added and remember the lowest of these values as
a candidate value. During dev_configure, set the bond device's
max_rx_pktlen to the candidate value. After this point only slaves
with a max_rx_pktlen greater than or equal to that of the bonding device
can be added.
If all slaves are removed, the bond device's pktlen is cleared.
Signed-off-by: Eric Kinzie <ehkinzie@gmail.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Introduce driver initialization and enable the build infrastructure for
the nicvf PMD driver.
By default, it is enabled only for the defconfig_arm64-thunderx-*
config, as it is an inbuilt NIC device.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Signed-off-by: Maciej Czekaj <maciej.czekaj@caviumnetworks.com>
Signed-off-by: Kamil Rytarowski <kamil.rytarowski@caviumnetworks.com>
Signed-off-by: Zyta Szpak <zyta.szpak@semihalf.com>
Signed-off-by: Slawomir Rosek <slawomir.rosek@semihalf.com>
Signed-off-by: Radoslaw Biernacki <rad@semihalf.com>
When app tries to change promisc/allmulti setting, fm10k will check if a
valid glort is acquired, and exit without doing anything if not.
For VFs, this glort value is not necessary, and so the check can be
removed. This saves having unnecessary failures of the API call, as well as
saving the time taken for the mailbox communication between VF and PF in
the case when the glort check passes.
Fixes: df02ba8646 ("fm10k: support promiscuous mode")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The field elt_va_start has been removed from the mempool structure,
and it was not replaced in xenvirt.
Fix this by getting the address of the mempool objects from the
first memory chunk in the list.
Note that it won't work with mempools composed of several chunks,
but that was already the case before.
Fixes: 84121f1971 ("mempool: store memory chunks in a list")
Reported-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Add flow_ctrl_get and flow_ctrl_set dev_ops.
Uses the bnxt_set_hwrm_link_config() HWRM API added in an earlier patch.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add rss_hash_update and rss_hash_conf_get dev_ops
Uses the bnxt_hwrm_vnic_rss_cfg() HWRM API added in the previous patch.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add dev_ops to update/query the RETA.
Uses the bnxt_hwrm_vnic_rss_cfg() HWRM API added earlier.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Adds dev_ops to set link Up or Down as appropriate.
Uses the bnxt_set_hwrm_link_config() API added previously.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds dev_ops to add/remove MAC addresses.
These use the bnxt_hwrm_set_filter() function defined previously.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds code to free all resources except the ones corresponding
to the HWRM, which are required to notify the HWRM that the driver is unloaded
(these are freed in uninit()).
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds dev_ops to enable/disable multicast traffic.
Uses the bnxt_hwrm_cfa_l2_set_rx_mask() API added earlier.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds the promiscuous mode enable and disable dev_ops.
Uses the bnxt_hwrm_cfa_l2_set_rx_mask() API added in an earlier commit.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds the start, stop and link update dev_ops.
Also adds wrapper functions like bnxt_init_chip(), bnxt_init_nic(),
bnxt_alloc_mem() and bnxt_free_mem().
The BNXT driver will now minimally pass traffic with testpmd.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add HWRM calls to query the port's PHY and link configuration.
New HWRM call:
bnxt_hwrm_port_phy_qcfg
This command queries the PHY configuration for the port
Also add helper functions like bnxt_get_hwrm_link_config()
and bnxt_parse_hw_link_speed() to parse the link state.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add top-level functions to initialize ring groups, and functions
to allocate and free all the rings via the HWRM.
A ring group is identified by an index. It consists of Rx or Tx ring id,
completion ring id and a statistics context. Once a ring group is
initialized, use this group index while creating the rings in the ASIC
using the appropriate HWRM API added via earlier patches.
Functions added:
bnxt_free_cp_ring
Calls the generic HWRM ring free function with arguments specific
to a completion ring and sanitizes the host completion structure
bnxt_free_all_hwrm_rings
Frees all the HWRM-allocated hardware rings
bnxt_free_all_hwrm_resources
Frees all the resources allocated via the HWRM in the hardware
bnxt_alloc_hwrm_rings
Allocates all the HWRM rings needed in the current configuration
This should be the last functionality needed to add start/stop
device operations.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
New HWRM call:
bnxt_clear_hwrm_vnic_filters
This patch adds code to set and clear L2 filters from the
corresponding VNIC. These filters will determine the Rx flows
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add function and associated structures and definitions to free
statistics context from the ASIC.
New HWRM call:
bnxt_hwrm_stat_ctx_free
This command is used to free a stat context.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add HWRM API for ring group alloc/free functions, associated structs and
definitions.
This API allocates and does basic preparation for a ring group in ASIC.
A ring group is identified by an index. It consists of Rx ring id,
completion ring id and a statistics context.
New HWRM calls:
bnxt_hwrm_ring_grp_alloc
Allocates and does basic preparation for a ring group
bnxt_hwrm_ring_grp_free
Frees and does cleanup resources of a ring group
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add HWRM API calls to allocate and free TX, RX and Completion rings
in the hardware along with the associated structs and definitions.
This informs the hardware of how the specific rings were set up in the
host and allocates them in the HWRM, setting up the doorbell registers
etc. as needed, returning an ID for the ring.
Basic ring alloc/free calls:
bnxt_hwrm_ring_alloc
This command allocates and does basic preparation for a ring.
bnxt_hwrm_ring_free
This command is used to free a ring and associated resources.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add HWRM API code to allocate a statistics context in the ASIC.
This API will be called by the previously submitted "add statistics
operations patch".
New HWRM call:
bnxt_hwrm_stat_ctx_alloc:
This command allocates and does basic preparation for a stat
context.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add API to set/clear L2 Rx mask.
New HWRM calls:
bnxt_hwrm_cfa_l2_clear_rx_mask
bnxt_hwrm_cfa_l2_set_rx_mask
These HWRM APIs allow setting and clearing of Rx masks in L2 context
per VNIC.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
New HWRM call:
bnxt_hwrm_vnic_rss_cfg:
Used to enable RSS configuration of the VNIC.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds APIs to allow configuration of a VNIC.
The functions alloc and free the Class of Service or COS and
Load Balance context corresponding to the VNIC in the chip.
New HWRM calls:
bnxt_hwrm_vnic_ctx_alloc:
Used to allocate COS/Load Balance context of VNIC
bnxt_hwrm_vnic_ctx_free:
Used to free COS/Load Balance context of VNIC
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch configures the properties and actions of the VNIC
allocated by the vnic_alloc function from the previous patch.
bnxt_hwrm_vnic_cfg:
Configure the VNIC structure in hardware.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
In this patch we add a new HWRM API to free a VNIC.
New HWRM call:
bnxt_hwrm_vnic_free:
Frees a vnic allocated by the bnxt_hwrm_vnic_alloc() function.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This requires a group info array in struct bnxt, so add that; we can save
the max size from the func_qcap response, and alloc/free the array in
init/uninit.
New HWRM call:
bnxt_hwrm_vnic_alloc:
Allocates a VNIC resource in the hardware.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add bnxt_hwrm_func_reset() function and supporting structs and macros.
New HWRM calls:
bnxt_hwrm_func_reset:
This command puts the function into the reset state.
In the reset state, global and port related features of the
chip are not available.
This command resets a hardware function (PCIe function) and
frees any resources used by the function. This command, initiated by
the driver, prepares the function for re-use. It may also be
initiated by a driver prior to doing its own configuration.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Perform allocation and freeing of the ring and information structures
for the TX, RX, and completion rings. The previous patches had
so far provided top-level stubs and generic ring support, while this
patch does the real allocation and freeing of the memory specific to
each different type of generic ring.
For example, bnxt_init_tx_ring_struct() and bnxt_init_rx_ring_struct()
now allocate memory based on the socket_id being provided.
bnxt_tx_queue_setup_op() and bnxt_rx_queue_setup_op() have gone through
some reformatting to perform a graceful cleanup in case memory
allocation fails.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds initial implementation of rx_pkt_burst() function for Rx.
bnxt_recv_pkts() is the top level function for doing Rx.
This patch also adds code to allocate rings in the ASIC.
For each Rx queue allocated in the PMD driver, a corresponding ring
in hardware will be created. Every time a frame is received a Rx ring
is selected based on the hardware configuration like RSS, MAC or VLAN,
COS and such. The hardware uses a completion ring to indicate the
availability of a packet.
This patch also brings in functions like bnxt_init_one_rx_ring()
bnxt_init_rx_ring_struct() which initializes various structures before
a Rx can begin.
bnxt_init_rxbds() initializes the Rx Buffer Descriptors while
bnxt_alloc_rx_data() allocates a buffer in the host to receive the
incoming packet.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Initial implementation of tx_pkt_burst for transmit.
bnxt_xmit_pkts() is the top level function that is called during Tx.
bnxt_handle_tx_cp() is used to check and process the Tx completions
generated for the Tx Buffer Descriptors sent by the hardware.
This patch also adds code to allocate rings in the hardware.
For each Tx queue allocated in the PMD driver, a corresponding ring
in hardware will be created. Every time a Tx request is initiated
via the bnxt_xmit_pkts() call, a Buffer Descriptor is created and
is sent to the hardware via the associated Tx ring.
On completing the Tx operation, the hardware will generate the status
in the form of a completion. This completion is processed by the
bnxt_handle_tx_cp() function.
Functions like bnxt_init_tx_ring_struct() and bnxt_init_one_tx_ring()
are used to initialize various members of the structure before
starting Tx operations.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add the bnxt_stats_get_op() and bnxt_stats_reset_op() dev_ops to
get and reset statistics. It also brings in the associated HWRM calls
to handle the requests appropriately.
We also have the bnxt_free_stats() function which will be used in the
follow on patches to free the memory allocated by the driver for
statistics.
New HWRM calls:
bnxt_hwrm_stat_clear:
This command clears statistics of a context
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
In this patch we are adding the bnxt_rx_queue_setup_op() and
bnxt_rx_queue_release_op() functions. These will be tied to the
rx_queue_setup and rx_queue_release dev_ops in a subsequent patch.
In these functions we allocate/free memory for the RX queues.
This still requires support to create a RX ring in the ASIC which
will be completed in a future commit. Each Rx queue created via the
rx_queue_setup dev_op will have an associated Rx ring in the hardware.
The Rx logic in the hardware picks a Rx ring for each Rx frame received
by the hardware depending on the properties like RSS, MAC and VLAN
settings configured in the hardware. These packets in the end arrive
on the Rx queue corresponding to the Rx ring in the hardware.
We are also adding some functions like bnxt_mq_rx_configure(),
bnxt_free_rx_mbufs() and bnxt_free_rxq_stats(), which will be used in
subsequent patches.
We are also adding the hwrm_vnic_rss_cfg_* structures, which will be used
in subsequent patches to enable RSS configuration.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
In this patch we are adding the bnxt_tx_queue_setup_op() and
bnxt_tx_queue_release_op() functions. These will be tied to the
tx_queue_setup and tx_queue_release dev_ops in a subsequent patch.
In these functions we allocate/free memory for the TX queues.
This still requires support to create a TX ring in the ASIC which
will be completed in a future commit. Each Tx queue created via the
tx_queue_setup dev_op will have an associated Tx ring in the hardware.
A Tx request coming on the Tx queue gets sent to the corresponding
Tx ring in the ASIC for subsequent transmission.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add the L2 filter structure and the alloc/init/free functions for
dealing with them.
A filter is used to identify traffic that contains a matching set of
parameters like unicast or broadcast MAC address or a VLAN tag amongst
other things which then allows the ASIC to direct the incoming traffic
to an appropriate VNIC or Rx ring.
New HWRM calls:
bnxt_hwrm_clear_filter:
Free a L2 filter.
bnxt_hwrm_set_filter:
Allocate an L2 filter or an L2 context.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Structures, macros, and functions for working with completion rings
in the driver.
Completion Ring is used by the Ethernet controller to provide the
status of transmitted & received packets, report errors, report
status changes to the host software, and inter-function forwarding
requests. In addition to the generic ring features, a completion ring
can have a statistics context that has statistics periodically DMAed
to host memory, along with a consumer index.
bnxt_handle_async_event() handles completions not related to a specific
transmit or receive ring such as link status changes which arrive on
the default completion ring.
Other physical or virtual functions on the same device may send an HWRM
command forward request. In this case, we will pass it through
unvalidated. In the future, we will be able to have the PF monitor and
control VF access to the HWRM interface if needed.
New HWRM Calls:
bnxt_hwrm_exec_fwd_resp:
Execute an encapsulated command and forward the response.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Declare generic ring structures and a free() function. These are
generic ring management functions which will be used to create Tx,
Rx and Completion rings in the subsequent patches, and tie them to
the HWRM managed ring resources.
This generic ring structure is shared by all the ring types and tracks
the host Buffer Descriptors (BDs) and the HWRM-assigned ID.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add functions to allocate, initialize, and free vnics.
A VNIC represents a virtual interface. It is a resource in the RX path
of the chip and is used to setup various target actions such as RSS,
MAC filtering etc. for the physical function in use.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
The dev_configure_op function calls bnxt_set_hwrm_link_config() to
setup the PHY. This calls the new bnxt_parse_eth_link_*() functions
to translate from the DPDK macro values to those used by HWRM calls,
then calls bnxt_hwrm_port_phy_cfg() to issue the HWRM call.
New HWRM calls:
bnxt_hwrm_port_phy_cfg:
This command configures the PHY device for the port.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Gets device info from the bp structure filled in the init() function.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Move init() cleanup into uninit() function
Fix .dev_private_size
New HWRM calls:
bnxt_hwrm_func_driver_register:
This command is used by the function driver to register
its information with the HWRM.
bnxt_hwrm_func_driver_unregister:
This command is used by the function driver to unregister
with the HWRM.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Start adding support to use the HWRM API.
The Hardware Resource Manager, or HWRM for short, is a set of APIs
provided by the firmware running in the ASIC to manage the various
resources.
This initial commit just performs the necessary HWRM queries for init,
then fails as before.
Now that struct bnxt is non-zero size, we can set dev_private_size
correctly.
The HWRM calls used so far:
bnxt_hwrm_func_qcaps:
This command returns capabilities of a function.
bnxt_hwrm_ver_get:
This function is called by a driver to determine the HWRM
interface version supported by the HWRM firmware, the
version of HWRM firmware implementation, the name of HWRM
firmware, the versions of other embedded firmwares, and
the names of other embedded firmwares, etc. Gets the
firmware version and interface specifications. Returns
an error if the firmware on the device is not supported
by the driver and ensures the response space is large
enough for the largest possible response.
bnxt_hwrm_queue_qportcfg:
This function is called by a driver to query queue
configuration of a port.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds the initial skeleton for bnxt driver along with the
nic guide, and ties the driver into the build system.
At this point, the driver simply fails init.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
[Release Note Addition]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
When using a kernel PF with a DPDK VF, and the PF driver detects a link
state change (up -> down or down -> up), it will send a
message to the VF by mailbox. This link state change may be
triggered by PHY disconnection/reconnection, a user config change
like *ifconfig down/up*, or an interface parameter change, like the MTU.
This patch enables support for the mailbox interrupt,
so the VF driver can receive the message for link up/down.
After the VF receives this message, the VF port needs to be reset to
recover. This needs to be handled by the application, so this patch
allows the app to register a reset callback so it can reset the VF port.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
When using a kernel PF with a DPDK VF, and the PF driver detects a link
state change (up -> down or down -> up), it will send a
message to the VF by mailbox. This link state change may be
triggered by PHY disconnection/reconnection, a user config change
like *ifconfig down/up*, or an interface parameter change, like the MTU.
This patch enables support for the mailbox interrupt,
so the VF driver can receive the message for link up/down.
After the VF receives this message, the VF port needs to be reset to
recover. This needs to be handled by the application, so this patch
allows the app to register a reset callback so it can reset the VF port.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Update the documentation and comments with brief details on the base
code version included in this release.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Add a flag which can be used to tell the firmware to disable the link on
all ports.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Add opcodes and structures to support RSS configuration
by PF driver on behalf of the VF drivers.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Adds input set mask definitions for RSS, flow director
and flex bytes.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Adds more device capabilities for NVM management.
- if update is available
- if security check is needed
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Correct the number of MSIX vectors in the debug info.
Fixes: 889bc9f0cd ("i40e/base: unify the capability function")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Remove a problematic mirror rule ID check. The check
returned an error if the mirror rule ID is 0, which is
a valid value.
Fixes: 0bf2dbbe07 ("i40e/base: support mirroring rules")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Allow egress traffic to be mirrored to VSIs in promiscuous mode, as latest
firmware supports that from API version 1.5.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The hardware doesn't lay out the Geneve VNI (Virtual Network
Identifier) quite the same as the VxLAN VNI, so the driver needs to
adjust it before sending it through the Admin Queue commands as a
workaround.
Fixes: 8db9e2a1b2 ("i40e: base driver")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch trims the source code by limiting pieces of code to the
PF or VF driver only, along with code style fixes and annotation
rewording.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch refactors the NVM update command processing by adding
a new nvm_wait_opcode element to struct i40e_hw to indicate
the opcode being waited on, and by putting the wait event check into
a function. In addition, that element needs to be initialized
or updated properly.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch centralizes all NVM update status info into a single
structure, by moving nvm_release_on_done from struct
i40e_adminq_info to struct i40e_hw, for better management.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The Host Memory Cache (HMC) admin queue APIs were removed from the latest
datasheet, so remove their implementation.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
SR-IOV mode is currently set when dealing with VF devices. PF devices must
be taken into account as well if they have active VFs.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
A hardware capability check is missing before enabling RX VLAN stripping
during queue setup.
Also, while dev_conf.rxmode.hw_vlan_strip is currently a single bit that
can be stored in priv->hw_vlan_strip directly, it should be interpreted as
a boolean value for safety.
Fixes: f3db948918 ("mlx5: support Rx VLAN stripping")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
There is no guarantee that the new MTU is effective after writing its value
to sysfs. Retrieve it to be sure.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Memory regions are always local with raw Ethernet queues, drop the remote
property as it adds extra processing on the hardware side.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
If configuration fails due to a lack of resources, be more specific
about which resources are lacking: work queues, receive queues or
completion queues. Return -EINVAL instead of -1 if more queues
are requested than are available.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
If device configuration failed due to a lack of resources, such as
if more queues are requested than are available, the queue release
functions are called with NULL pointers which were being dereferenced.
Skip releasing queues if they are NULL pointers.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
Following the discussions from:
http://dpdk.org/ml/archives/dev/2015-July/021721.html
http://dpdk.org/ml/archives/dev/2016-April/038143.html
The value of these flags is 0, making them useless. Today, no example
application checks them on Rx, and only a few drivers set them and
silently give wrong packets to the application, which should not happen.
This patch removes the unused flags from rte_mbuf and their use in the
drivers. The i40e and fm10k are kept as they are today and should be
fixed to drop bad packets. The enic driver is managed by its maintainer
in another patch.
Fixes: c22265f6 ("mbuf: add new packet flags for i40e")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch modifies the bond_mode_alb_enable function.
When the mempool allocation fails, an errno code is returned
instead of calling rte_panic. This allows the application to decide
whether it should quit or retry the mempool allocation.
Signed-off-by: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Ensure that the port field is set in mbufs received from the null PMD.
Signed-off-by: Sean Harte <sean.harte@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Since _BSD_SOURCE was deprecated in favor of _DEFAULT_SOURCE in Glibc 2.19
and entirely removed in 2.20, various BSD ioctl macros are not exposed
anymore when _XOPEN_SOURCE is defined, and linux/if.h now conflicts with
net/if.h.
Add _DEFAULT_SOURCE and keep _BSD_SOURCE for compatibility with older
versions.
Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The testpmd application will crash in fclose() upon quit after running
the below command.
"sudo gdb --args ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf0 -n 4
--vdev 'eth_pcap0,tx_iface=enp1s0f1,rx_pcap=/tmp/test.pcap' --
--port-topology=chained -i"
The reason is that pcap vdev creation with the tx stream type "iface",
as in the above command, doesn't need the "dumpers" member of
"struct tx_pcaps", and hence no memory is allocated for it.
It contains garbage values, as the local object of struct tx_pcaps
is not initialized to 0 inside rte_pmd_pcap_dev_init().
So calling pcap_dump_close() on the dumper as part of eth_dev_stop()
causes a segfault in fclose().
The fix is to initialize the local object of struct tx_pcaps to 0.
Also initialize the local object of struct rx_pcaps to 0.
This way, during eth_dev_stop(), pcap_dump_close() will not be called if
the dumper is NULL.
Fixes: 4c173302c3 ("pcap: add new driver")
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Private/conflicting ol_flags were used to enable UDP/TCP Tx
offloads. Use the common flags in PKT_TX_L4_MASK to support them instead.
While updating the flags, also do some minor code rearranging for
slightly better performance.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
The offload flags variable (ol_flags) in the rte_mbuf structure is 64 bits
wide, so the local copy of it must be 64 bits too. Moreover, a bit comparison
between a 16-bit variable and a 64-bit value makes no sense. This breaks Tx
VLAN, IP and L4 offloads.
Coverity issue: 13218
Fixes: fefed3d1e6 ("enic: new driver")
Suggested-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Signed-off-by: John Daley <johndale@cisco.com>
Acked-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Add an ASSERT macro for the enic driver which is enabled when the log
level is >= RTE_LOG_DEBUG. Assert that number of mbufs to return to
the pool in the Tx function is never greater than the max allowed.
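Such a macro can be sketched as follows (the macro name and the build flag
are illustrative; the driver's actual macro may differ):

    #include <rte_debug.h>

    /* Compiled in only when a debug build flag is set; otherwise a no-op. */
    #ifdef EXAMPLE_DEBUG
    #define EXAMPLE_ASSERT(cond)                              \
        do {                                                  \
            if (!(cond))                                      \
                rte_panic("assertion failed: %s\n", #cond);   \
        } while (0)
    #else
    #define EXAMPLE_ASSERT(cond) do {} while (0)
    #endif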
Signed-off-by: John Daley <johndale@cisco.com>
Reduce the host CPU overhead of Tx packet processing:
* Use local variables inside the per-packet loop instead of fields in structs.
* Factor bookkeeping and conditionals out of the per-packet loop where
possible.
* Post buffers to the NIC at most once every 64 packets.
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Signed-off-by: John Daley <johndale@cisco.com>
Mbufs were returned to the pool one at a time. Use rte_mempool_put_bulk
instead. There were multiple function calls for each buffer returned.
Refactor this code into just 2 functions.
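The bulk return boils down to something like this (a simplified sketch that
assumes all mbufs in the batch come from the same pool and are not chained):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Return a batch of completed Tx mbufs to their pool in a single call
     * instead of freeing them one at a time. */
    static void
    free_mbufs_bulk(struct rte_mbuf **mbufs, unsigned int n)
    {
        if (n == 0)
            return;
        rte_mempool_put_bulk(mbufs[0]->pool, (void **)mbufs, n);
    }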
Signed-off-by: John Daley <johndale@cisco.com>
The NIC can either DMA a separate completion message for each completed
send or periodically just DMA the index of the last completed send.
Switch to the latter method which improves cache locality and performance.
Signed-off-by: John Daley <johndale@cisco.com>
The list of mbufs held by the driver on Tx was allocated in chunks
(a hold-over from the enic kernel mode driver). The structure used
next pointers across chunks which led to cache misses.
Allocate the array used to hold mbufs in flight on Tx with
rte_zmalloc_socket(). Remove unnecessary fields from the structure
and use head and tail pointers instead of next pointers.
Signed-off-by: John Daley <johndale@cisco.com>
Functions existed which were never called; remove them. Also
remove 'pmd' from the name of the Tx function to improve clarity.
Signed-off-by: John Daley <johndale@cisco.com>
The Tx functions were in enic_ethdev.c and enic_main.c - files in which
they did not logically belong. To make things consistent with most
other drivers, we therefore extract them and place them with the equivalent
Rx functions into a file called enic_rxtx.c.
Signed-off-by: John Daley <johndale@cisco.com>
Truncated packets occur on enic if an mbuf is not big enough to
receive a packet, or if there aren't enough mbufs when Rx scatter is in use.
They show up as error packets, but unlike other error packets (like
packets with a bad FCS) no NIC drop counts are incremented for them.
Truncated packets are calculated by subtracting hardware errors from
software errors. Note: this causes transient inaccuracies in the
ipackets count. Also, the lengths of truncated packets are counted
in ibytes even though truncated packets are dropped, which can make
ibytes slightly higher than it should be.
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Signed-off-by: John Daley <johndale@cisco.com>
Following the discussions from:
http://dpdk.org/ml/archives/dev/2015-July/021721.html
http://dpdk.org/ml/archives/dev/2016-April/038143.html
Remove the unused flag from enic driver. Also, the enic driver is
now modified to drop bad packets instead of using a non-existent
flag to try and identify them as bad.
Fixes: 947d860c82 ("enic: improve Rx performance")
Fixes: 5776c30293 ("enic: fix error packets handling")
Fixes: 50765c820e ("enic: remove packet error conditional")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: John Daley <johndale@cisco.com>
rx_no_bufs is a hardware counter of packets dropped on the
interface due to no host buffers and should be used to update
r_stats->imissed counter instead of rx_nombuf.
Include rx_drop in ierrors. rx_drop is incremented if packets
arrive when the receive queue is disabled.
Add a structure and functions for initializing and clearing
software counters. Add count of Rx mbuf allocation failures
(rx_nombuf) as the first counter.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
GCC_VERSION is empty in case of clang:
/bin/sh: line 0: test: -ge: unary operator expected
It is the same issue as http://dpdk.org/dev/patchwork/patch/5994/
Fixes: 366113dbfb ("e1000: suppress misleading indentation warning")
Signed-off-by: Hiroyuki Mikita <h.mikita89@gmail.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: John W. Linville <linville@tuxdriver.com>
Suspicious implicit sign extension: pf->fdir.match_counter_index
with type unsigned short (16 bits, unsigned) is promoted in
"pf->fdir.match_counter_index << 20" to type int (32 bits, signed),
then sign-extended to type unsigned long (64 bits, unsigned).
If "pf->fdir.match_counter_index << 20" is greater than 0x7FFFFFFF,
the upper bits of the result will all be 1.
To fix the issue, explicitly cast pf->fdir.match_counter_index to uint32_t.
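For illustration, the shape of the fix (the helper below is hypothetical,
not the driver code):
    #include <stdint.h>

    /* Widen the 16-bit counter index before shifting so the promotion to
     * signed int cannot sign-extend into the upper 32 bits of the result. */
    static inline uint64_t
    fdir_counter_field(uint16_t match_counter_index)
    {
            return (uint64_t)((uint32_t)match_counter_index << 20);
    }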
Coverity issue: 13315
Fixes: 05999aab4c ("i40e: add or delete flow director")
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch enables configuring MTU for i40e.
Since changing the MTU requires reconfiguring the queues, the port must be
stopped before configuring the MTU.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
When setting up the flexible payload selection rules, the value
NONUSE_FLX_PIT_DEST_OFF (== 63) is meant to disable the rule.
However, since the MK_FLX_PIT macro always added an additional
offset of I40E_FLX_OFFSET_IN_FIELD_VECTOR (== 50) to the value passed,
the functionality to disable the rule was broken.
This patch fixes this by checking for the disable value and not adding
the offset in that case.
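A minimal sketch of the check (the helper and its use are simplified from
the real driver, but the two named constants keep their documented values):
    #include <stdint.h>

    #define NONUSE_FLX_PIT_DEST_OFF         63 /* value meaning "rule disabled" */
    #define I40E_FLX_OFFSET_IN_FIELD_VECTOR 50 /* offset added for active rules */

    /* Only add the field-vector offset for rules that are actually in use. */
    static inline uint32_t
    flx_pit_dest_off(uint32_t dst_offset)
    {
            if (dst_offset == NONUSE_FLX_PIT_DEST_OFF)
                    return dst_offset;              /* leave the rule disabled */
            return dst_offset + I40E_FLX_OFFSET_IN_FIELD_VECTOR;
    }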
Fixes: d8b90c4eab ("i40e: take flow director flexible payload configuration")
Reported-by: Michael Habibi <mikehabibi@gmail.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Previously, there was a known issue "On Intel® 40G Ethernet
Controller stopping the port does not really down the port link."
There were two reasons why the port was always kept up.
1. Old firmware versions had issues when "Set PHY config command"
was used on 40G NICs.
2. The kernel i40e driver didn't call "Set PHY config command" when
ifconfig up/down was used, as it assumes the link is always up. But
in DPDK, ports are forced down when an application quits. So if
the port is then switched to being controlled by the kernel driver,
the port cannot be brought up through "ifconfig <ethx> up".
This patch fixes this issue by adding in "Set PHY config command"
into our driver. This is now possible because with newer firmware
there is no longer a problem using this command.
With this fix, after DPDK quits, if the port is switched to being used
by the kernel driver, "ethtool -s <ethx> autoneg on" can be used to
turn on auto-negotiation, and the port can then be brought up through
"ifconfig <ethx> up".
NOTE: requires kernel i40e driver version >= 1.4.X
Fixes: 2f1e228174 ("i40e: skip link control as firmware workaround")
Fixes: 16c979f9ad ("i40e: disable setting of PHY configuration")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Change the Tx routine to ring the doorbell once per burst
and not on every Tx packet. This driver-level optimization
is necessary to achieve line rates for larger frame
sizes (1k or more).
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
- Process Tx completions based on the configured Tx free threshold and
determine how many Tx BDs are required before invoking bnx2x_tx_encap()
- Change bnx2x_tx_encap() to a void function as it can now never fail
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Under certain scenarios, management firmware (MFW) periodically polls
the driver for LAN statistics. This patch implements the osal hook to
fill in the stats.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Program the PCIe completion timeout to 4 sec to give enough time
to allow completions to be received successfully in some older systems.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
To be consistent with the naming for ARM NEON implementation,
ixgbe_rxtx_vec.c is renamed to ixgbe_rxtx_vec_sse.c.
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Previously, if an adminq message was sent successfully but no response was
received, the function "i40evf_execute_vf_cmd" would return without error.
The root cause is that the value of "err" is overwritten. This patch fixes
this by ensuring the value of err is set appropriately for each cmd.
Fixes: ae19955e7c ("i40evf: support reporting PF reset")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Use ARM NEON intrinsic to implement ixgbe vPMD
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
[style fixes as highlighted by checkpatch.pl]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Move scalar code which does not use x86 intrinsic functions to a new file,
"ixgbe_rxtx_vec_common.h", while keeping the x86 code in ixgbe_rxtx_vec.c.
This allows the scalar code to be shared among vector drivers for
different platforms.
Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The VLAN tag information should be stored in the first mbuf of a chain
of buffers, not in the last one.
Fixes: 9fd5e98b62 ("vmxnet3: support RSS and refactor Rx offload")
Signed-off-by: John Guzik <john@shieldxnetworks.com>
Acked-by: Yong Wang <yongwang@vmware.com>
When linking drivers as shared libraries, the dependencies need
to be marked as DT_NEEDED entries.
The crypto dependencies (libsso and libIPSec) are static libraries.
To make them linkable into the shared PMDs, the code must be relocatable:
- libIPSec_MB.a must be built with -fPIC
- libsso_kasumi.a must be built with KASUMI_CFLAGS=-DKASUMI_C
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Fixes: eec136f3c5 ("aesni_gcm: add driver for AES-GCM crypto operations")
Fixes: 3aafc423cf ("snow3g: add driver for SNOW 3G library")
Fixes: 2773c86d06 ("crypto/kasumi: add driver for KASUMI library")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some libraries were missing their dependency on eal, mbuf, mempool,
ring and kvargs.
It is revealed by the linker option "-z defs".
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
There is no need to have this parsing inlined in the header.
It brings a kvargs dependency to every crypto driver.
The functions are moved into rte_cryptodev.c.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The compilation for 32-bit fails when CONFIG_RTE_VIRTIO_USER is enabled:
drivers/net/virtio/virtio_user_ethdev.c:84:47:
error: format ‘%llu’ expects argument of type ‘long long unsigned int’,
but argument 5 has type ‘size_t {aka unsigned int}’
Fixes: e9efa4d938 ("net/virtio-user: add new virtual PCI driver")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
NSH packet can be recognized by Intel X710/XL710 series.
This patch enables the new packet type.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yulong Pei <yulong.pei@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
In the following loop:
while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
...
}
There is no external function call or any explicit memory barrier
in the loop, so the re-read of used->idx might be optimized away and
the value retrieved only once.
Use of volatile should normally be prohibited; ACCESS_ONCE is the Linux
kernel's way of handling this issue. Once we have such a macro in DPDK,
we could switch to that style.
virtio_recv_mergeable_pkts might also have the same issue, so fix
it as well.
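The shape of the fix, sketched on the loop shown above (simplified; the
exact driver code may differ):
    /* Force a fresh read of used->idx on every iteration; without the
     * volatile cast the compiler may hoist the load out of the loop. */
    while (vq->vq_used_cons_idx !=
           *((volatile uint16_t *)&vq->vq_ring.used->idx)) {
            /* dequeue one used descriptor */
    }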
Fixes: 823ad64795 ("virtio: support multiple queues")
Fixes: 13ce5e7eb9 ("virtio: mergeable buffers")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Trying to access xstats_names after "if (xstats_names == NULL)" is
obviously wrong, which would result to a crash while running "show
port xstats 0" in testpmd with virtio PMD.
The fix is straightforward; just reverse the check.
Fixes: baf91c395b ("net/virtio: fetch extended statistics with integer ids")
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
In the virtio-user driver, when notifying the ctrl-queue, invoke the API of
the virtio-user device emulation to handle the ctrl-queue command.
Besides, multi-queue requires the ctrl-queue, so the ctrl-queue will be
enabled automatically when multi-queue is specified.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The main purpose of this patch is to enable multi-queue. But
multi-queue requires ctrl-queue so that driver can send how many
queues will be enabled through ctrl-queue messages.
So we partially implement ctrl-queue to handle control command
with class of VIRTIO_NET_CTRL_MQ and with cmd of
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET to handle mq support. This patch
provides a function, virtio_user_handle_cq(), for driver to handle
ctrl-queue messages.
Besides, multi-queue requires that VIRTIO_NET_F_MQ and VIRTIO_NET_F_CTRL_VQ
are enabled during feature negotiation.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch mainly adds method in vhost user adapter to communicate
enable/disable queues messages with vhost user backend, aka,
VHOST_USER_SET_VRING_ENABLE.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add a new virtual device named virtio-user, which can be used just like
eth_ring, eth_null, etc. To reuse the code of the original virtio, we make
some adjustments in virtio_ethdev.c, such as removing the static keyword
from eth_virtio_dev_init() so that it can be reused by the virtual device;
and we add some checks to make sure it will not crash.
Configured parameters include:
- queues (optional, 1 by default), number of queue pairs, multi-queue
not supported for now.
- cq (optional, 0 by default), not supported for now.
- mac (optional), random value will be given if not specified.
- queue_size (optional, 256 by default), size of virtqueues.
- path (mandatory), path of vhost user.
When CONFIG_RTE_VIRTIO_USER is enabled (it is enabled by default), the
compiled library can be used in both VM and container environments.
Examples:
path_vhost=<path_to_vhost_user> # use vhost-user as a backend
sudo ./examples/l2fwd/build/l2fwd -c 0x100000 -n 4 \
--socket-mem 0,1024 --no-pci --file-prefix=l2fwd \
--vdev=virtio-user0,mac=00:01:02:03:04:05,path=$path_vhost -- -p 0x1
Known issues:
- Control queue and multi-queue are not supported yet.
- Cannot work with --huge-unlink.
- Cannot work with no-huge.
- Cannot work when there are more than VHOST_MEMORY_MAX_NREGIONS(8)
hugepages.
- Root privilege is a must (mainly because of sorting hugepages according
to physical address).
- Applications should not use file names like HUGEFILE_FMT ("%smap_%d").
- Cannot work with vhost-net backend.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch is related to how to calculate the relative address for the
vhost backend.
The principle is that, based on one or multiple shared memory regions,
vhost maintains a reference system with the frontend start address,
backend start address, and length for each segment, so that each
frontend address (GPA, Guest Physical Address) can be translated into a
vhost-recognizable backend address. To make the address translation
efficient, we need to maintain as few regions as possible. In the case
of a VM, GPAs are always locally continuous. But for some other cases, like
virtio-user, GPA continuity is not guaranteed; therefore, we use virtual
addresses here.
It basically means:
a. when set_base_addr, VA address is used;
b. when preparing RX's descriptors, VA address is used;
c. when transmitting packets, VA is filled in TX's descriptors;
d. in TX and CQ's header, VA is used.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch moves the phys addr check from virtio_dev_queue_setup
to the pci ops. To make that happen, make sure virtio_ops.setup_queue
returns the result, so callers know whether the check passed.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
We skip kernel-managed virtio devices if they aren't whitelisted.
Before checking whether the virtio device is whitelisted, check whether
devargs is specified.
Fixes: ac5e1d838d ("virtio: skip error when probing kernel managed device")
Reported-by: Vincent Li <vincent.mc.li@gmail.com>
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add a new parameter (flags) to rte_vhost_driver_register(). DPDK
vhost-user acts in client mode when the RTE_VHOST_USER_CLIENT flag
is set. The flags would also allow future extensions without
breaking the API (again).
The rest is straightforward then: allocate a unix socket, and
bind/listen for server, connect for client.
This extension is for vhost-user only, therefore we simply quit
and report an error when any flags are given for vhost-cuse.
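Example usage with the new flags argument (the socket path is illustrative;
the vhost header name is the one used in this release):
    #include <rte_virtio_net.h>

    static int
    register_vhost_socket(const char *path, int client)
    {
            /* flags == 0 keeps the old server-mode behaviour;
             * RTE_VHOST_USER_CLIENT makes DPDK connect to an existing socket. */
            return rte_vhost_driver_register(path,
                            client ? RTE_VHOST_USER_CLIENT : 0);
    }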
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
With all the previous preparation work, we are just one step away from
the final ABI refactoring. That is, to change the current APIs to make them
stick to vid instead of the old virtio_net dev.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
This change could let us avoid the dependency on the "virtio_net"
struct, to prepare for the ABI refactoring.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_ifname() to export the ifname to the
application.
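Illustrative usage with the final vid-based signature (the buffer size is
arbitrary):
    #include <stdio.h>
    #include <rte_virtio_net.h>

    static void
    print_vhost_ifname(int vid)
    {
            char ifname[64];

            if (rte_vhost_get_ifname(vid, ifname, sizeof(ifname)) == 0)
                    printf("vhost device %d: %s\n", vid, ifname);
    }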
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_queue_num() to export the number of
queues.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_numa_node() to get the numa node
from which the virtio_net struct is allocated.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
It does not make sense to ask the application to set/unset the flag
VIRTIO_DEV_RUNNING (which is used internally only) in the new_device()/
destroy_device() callbacks.
Instead, it should be set after new_device() succeeds and reset before
destroy_device() is invoked inside the vhost lib. This patch fixes that.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
We keep a common vq structure, containing only vq related fields,
and then split others into RX, TX and control queue respectively.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
[Jianfeng Tan: found and fixed 2 bugs]
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The commit dd856dfcb9 introduced an optimization that prepends virtio
header to mbuf data. It can be used when the tx mbuf is writeable, so we
need to check that the mbuf is direct (i.e. it embeds its own data).
Fixes: dd856dfcb9 ("virtio: use any layout on Tx")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The class id is not filled in, which makes probing fail.
Update the code to use RTE_PCI_DEVICE, which fills
the class id with a wildcard value.
Fixes: 701c8d80c8 ("pci: support class id probing")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso library supports the SSE4.1 instruction set,
so the feature flags of the crypto device must be updated
to reflect this.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso_snow3g library now supports bit-level
operations, so the PMD has been updated to allow them.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
In order to avoid using magic numbers, macros for
the IV and digest lengths for Snow3G have been added.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso library that SNOW3G PMD uses has been updated,
so now it is called libsso_snow3g. Also, the path variable for the library
has been renamed to reflect these changes (now called LIBSSO_SNOW3G_PATH).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Added new SW PMD which makes use of the libsso_kasumi SW library,
which provides wireless algorithms KASUMI F8 and F9
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
- RTE_CRYPTO_SYM_AUTH_KASUMI_F9
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The variables AESNI_MULTI_BUFFER_LIB_PATH and LIBSSO_PATH
are not required for "make clean".
It is the same fix as in the commit e277b2397.
Fixes: eec136f3c5 ("aesni_gcm: add driver for AES-GCM crypto operations")
Fixes: 3aafc423cf ("snow3g: add driver for SNOW 3G library")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the test-pmd
and proc_info applications to use the new xstats API, and removes
deprecated code associated with the old API.
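A minimal sketch of the two-call pattern with the new API (error handling
trimmed; the return-value conventions are as I recall them for this release):
    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_ethdev.h>

    static void
    dump_xstats(uint8_t port_id)
    {
            int i, n;
            struct rte_eth_xstat_name *names;
            struct rte_eth_xstat *values;

            n = rte_eth_xstats_get_names(port_id, NULL, 0); /* query the count */
            names = calloc(n, sizeof(*names));
            values = calloc(n, sizeof(*values));

            rte_eth_xstats_get_names(port_id, names, n); /* strings: fetch once */
            rte_eth_xstats_get(port_id, values, n);      /* values: fetch per poll */

            for (i = 0; i < n; i++)
                    printf("%s: %" PRIu64 "\n",
                           names[values[i].id].name, values[i].value);
            free(names);
            free(values);
    }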
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the virtio driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the i40e driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the fm10k driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the e1000 driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the ixgbe driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
Although ppc supports both endiannesses, qemu assumes that the cpu is
big endian and enforces this for the virtio-net devices.
Fix PCI accesses in legacy mode. Only ppc64le is supported at the moment.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The behavior of PKT_RX_VLAN_PKT was not very well defined, resulting in
PMDs not advertising the same flags in similar conditions.
Following discussion in [1], introduce 2 new flags PKT_RX_VLAN_STRIPPED
and PKT_RX_QINQ_STRIPPED that are better defined:
PKT_RX_VLAN_STRIPPED: a vlan has been stripped by the hardware and its
tci is saved in mbuf->vlan_tci. This can only happen if vlan stripping
is enabled in the RX configuration of the PMD.
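Typical application-side check (illustrative):
    #include <rte_mbuf.h>

    /* Read the stripped tag only when the new flag says it is valid. */
    static inline uint16_t
    stripped_vlan_tci(const struct rte_mbuf *m)
    {
            return (m->ol_flags & PKT_RX_VLAN_STRIPPED) ? m->vlan_tci : 0;
    }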
For now, the old flag PKT_RX_VLAN_PKT is kept but marked as deprecated.
It should be removed from applications and PMDs in a future revision.
This patch also updates the drivers. For PKT_RX_VLAN_PKT:
- e1000, enic, i40e, mlx5, nfp, vmxnet3: done, PKT_RX_VLAN_PKT already
had the same meaning as PKT_RX_VLAN_STRIPPED, so only a minor update is
required.
- fm10k: done, PKT_RX_VLAN_PKT already had the same meaning as
PKT_RX_VLAN_STRIPPED, and vlan stripping is always enabled on fm10k.
- ixgbe: modification done (vector and normal), the old flag was set
when a vlan was recognized, even if vlan stripping was disabled.
- the other drivers do not support vlan stripping.
For PKT_RX_QINQ_PKT, it was only supported on i40e, and the behavior was
already correct, so we can reuse the same bit value for
PKT_RX_QINQ_STRIPPED.
[1] http://dpdk.org/ml/archives/dev/2016-April/037837.html
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
SYSFS_PCI_DEVICES is a constant that makes PCI testing
difficult as it points to an absolute path. We remove uses of this
constant and introduce a function, pci_get_sysfs_path(), that returns
the same value. However, the user can set a SYSFS_PCI_DEVICES environment
variable to override the path. It is now possible to create a fake
sysfs hierarchy for testing.
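Illustrative use of the new accessor together with the environment override
(the paths are examples only; the accessor name follows the patch
description):
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_pci.h>

    static void
    use_fake_sysfs(void)
    {
            /* Must be set before EAL scans the PCI bus. */
            setenv("SYSFS_PCI_DEVICES", "/tmp/fake-sysfs/bus/pci/devices", 1);
            printf("PCI devices will be scanned under %s\n",
                   pci_get_sysfs_path());
    }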
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
This patch adds missing DEPDIRS to avoid any library referring to
symbols they are not linked against.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Add the missing external dependency to pthread to avoid referring to
symbols the library is not linked against.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Up to now dependencies between DPDK internal libraries have been
untracked at shared library level, requiring applications to know
about library internal dependencies and often consequently overlinking.
Since the dependencies are already recorded for build ordering in the
makefiles with DEPDIRS-y we can use that information to generate LDLIBS
entries for internal libraries automatically.
Also revert commit 8180554d82 ("vhost: fix linkage of driver with
library") which is made redundant by this change.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
This patch provides counter mode support to AES-NI multi-buffer library.
The following cipher algorithm is enabled:
- RTE_CRYPTO_CIPHER_AES_CTR
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added possibility for AES to work in counter mode
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
To avoid the GCC warning about "dereferencing type-punned pointer will break
strict-aliasing rules", an aad_len pointer is dereferenced instead of directly
dereferencing a uint32_t* cast of the middle of an array.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Fix an error in the computation of the physical address of the
content descriptor in the symmetric operations session.
Fixes: 1703e94ac5 ("qat: add driver for QuickAssist devices")
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Fix a possible null pointer dereference by reporting the null pointer and
exiting the function.
Coverity issue: 126584
Fixes: c0f87eb525 ("cryptodev: change burst API to be crypto op oriented")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Fix wrong indentation for return value
Coverity issue: 126585
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Removed the comparison against $CC in Makefiles as,
in cross-compiling mode, CC can be a string
other than "gcc".
Suggested-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The qede driver depends on libz but the LDLIBS entry in makefile
was missing. Also because of the external dependency, make it
disabled in default config as per common DPDK policy on external deps.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some 64-bit variables are printed for debug.
The PRIx64 format specifier must be used because %lx is not wide enough
on 32-bit systems.
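For example:
    #include <inttypes.h>
    #include <stdio.h>

    static void
    print_addr(uint64_t addr)
    {
            /* %lx would be too narrow on 32-bit targets; PRIx64 is portable. */
            printf("addr = 0x%" PRIx64 "\n", addr);
    }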
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Harish Patil <harish.patil@qlogic.com>
The RTE_ETH_VALID_PORTID_OR_ERR_RET macro is used in some places
to check if a port id is valid or not. This commit makes use of it in
some new parts of the code.
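Typical usage inside an ethdev-level function (illustrative):
    #include <rte_ethdev.h>

    static int
    example_port_op(uint8_t port_id)
    {
            /* Returns -EINVAL from this function if port_id is invalid. */
            RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
            return 0;
    }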
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some architectures (ex: Power8) have a cache line size of 128 bytes,
so the drivers should not expect that prefetching the second part of
the mbuf with rte_prefetch0(&m->cacheline1) is valid.
This commit adds helpers that can be used by drivers to prefetch the
rx or tx part of the mbuf, whatever the cache line size.
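A sketch of the intended driver usage; the helper names below
(rte_mbuf_prefetch_part1/part2) are my reading of the patch, used instead of
prefetching &m->cacheline1 directly:
    #include <rte_mbuf.h>

    static inline void
    prefetch_rx_mbuf(struct rte_mbuf *m)
    {
            rte_mbuf_prefetch_part1(m); /* first part of the mbuf */
            rte_mbuf_prefetch_part2(m); /* second part, only prefetched if it
                                         * sits on a separate cache line */
    }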
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Introduce rte_mempool_populate_default() which allocates
mempool objects in several memzones.
The mempool header is now always allocated in a specific memzone
(not with its objects). Thanks to this modification, we can remove
much of the specific behavior that was required when hugepages are not
enabled and rte_mempool_xmem_create() is used.
This change requires updating how the kni and mellanox drivers look up
mbuf memory. For now, this will only work if there is only one memory
chunk (like today), but we could make use of rte_mempool_mem_iter() to
support more memory chunks.
We can also remove RTE_MEMPOOL_OBJ_NAME that is not required anymore for
the lookup, as memory chunks are referenced by the mempool.
Note that rte_mempool_create() is still broken (as it was before)
when there is no hugepage support (rte_mempool_xmem_create() has to be
used). This is fixed in the next commit.
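A minimal sketch of the create-empty/populate flow (sizes and names are
illustrative; on releases with pluggable mempool handlers a
rte_mempool_set_ops_byname() call may also be needed before populating):
    #include <rte_mempool.h>

    static struct rte_mempool *
    create_pool(void)
    {
            struct rte_mempool *mp;

            mp = rte_mempool_create_empty("example_pool", 8192, 2048,
                                          256, 0, SOCKET_ID_ANY, 0);
            if (mp == NULL)
                    return NULL;
            /* Objects may now be spread over several memzones. */
            if (rte_mempool_populate_default(mp) < 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }
            return mp;
    }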
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Do not use the paddr table to store the mempool memory chunks.
This will allow having several chunks with different virtual addresses.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Now that the mempool objects are chained into a list, we can use it to
browse them. This implies a rework of the rte_mempool_obj_iter() API, which
no longer needs to take as many arguments as before. The previous function
is kept as a private function and renamed in this commit. It will be
removed in a later commit of the patch series.
The only internal users of this function are the mellanox drivers. The
code is updated accordingly.
Introducing API compatibility for this function has been considered,
but it is not easy to do without keeping the old code, as the previous
function could also be used to browse elements that were not added to a
mempool. Moreover, the API is already broken by other patches in this
version.
The library version was already updated in
commit 213af31e09 ("mempool: reduce structure size if no cache needed")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
This commit removes the const qualifier for the mempool in
rte_mempool_walk() callback prototype.
Indeed, most operations that can be done on a mempool require a non-const
mempool pointer, except the dump and the audit. Therefore,
rte_mempool_walk() is more useful if the mempool pointer is not const.
This is required by next commit where the mellanox drivers use
rte_mempool_walk() to iterate the mempools, then rte_mempool_obj_iter()
to iterate the objects in each mempool.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
This commit renames mempool_obj_ctor_t as mempool_obj_cb_t.
In next commits, we will add the ability to populate the
mempool and iterate through objects using the same function.
We will use the same callback type for that. As the callback is
not a constructor anymore, rename it into rte_mempool_obj_cb_t.
The rte_mempool_obj_iter_t that was used to iterate over objects
will be removed in next commits.
No functional change.
In this commit, the API is preserved through a compat typedef.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Many drivers provide their own implementation of rte_mbuf_raw_alloc(),
duplicating the code. Introduce a new public function in rte_mbuf to
allocate a raw mbuf (uninitialized).
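Illustrative usage:
    #include <rte_mbuf.h>

    /* Allocate an uninitialized mbuf; the caller is expected to set up
     * data_off, lengths, etc. itself (as Rx paths typically do). */
    static inline struct rte_mbuf *
    rx_alloc_raw(struct rte_mempool *mp)
    {
            return rte_mbuf_raw_alloc(mp);
    }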
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
With gcc >= 6.0, qede base driver fails to build with:
drivers/net/qede/base/ecore_cxt.c: In function 'ecore_cdu_init_common':
cc1: error: left shift of negative value [-Werror=shift-negative-value]
Since the base drivers are untouchable, work around by disabling
the warning.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
When virtio was proposed in DPDK, there was no API to free memzones.
But this has changed since rte_memzone_free() was implemented by
commit ff909fe21f ("mem: introduce memzone freeing").
This patch makes sure memzones in struct virtqueue, like mz and
virtio_net_hdr_mz, are freed when the queue is released or setup fails.
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The "drv_flags" is set with device as the input, which means different
device (say, modern vs legacy) could end up with a different value. And
the fact that "drv_flags" is shared by all devices means that every time
we add a new device, it simply overwrites the value configured from the
last device.
Therefore, when two virtio devices have different flags, it may lead to
wrong result, such as virtio would set irq config when it's not supported.
Making the flag per device (using "dev->data->dev_flags") could let us
have different value for each device, which would avoid the above issue.
Fixes: da978dfdc4 ("virtio: use port IO to get PCI resource")
Reported-by: David Marchand <david.marchand@6wind.com>
Suggested-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The avail ring is updated by the frontend and consumed by the backend.
There are frequent core-to-core cache transfers for the avail ring.
This optimization avoids the avail ring entry index update if the entry
already holds the same value.
As the DPDK virtio PMD implements a FIFO free descriptor list (also for
cache performance reasons), in which descriptors are allocated
from the head and freed to the tail, with this patch the avail ring will
in most cases remain the same, and thus stay valid in the caches of both
frontend and backend.
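The core of the optimization, sketched (simplified from the driver code):
    /* Only write the avail ring entry when the value would actually change,
     * so the cache line usually stays clean and shared between frontend
     * and backend. */
    if (unlikely(vq->vq_ring.avail->ring[avail_idx] != desc_idx))
            vq->vq_ring.avail->ring[avail_idx] = desc_idx;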
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Currently, the vhost PMD doesn't have linkage for librte_vhost, even though
it depends on librte_vhost APIs. This causes a linkage error if the below
conditions are fulfilled.
- DPDK libraries are compiled as shared libraries.
- The DPDK application doesn't link librte_vhost.
- The above application tries to link the vhost PMD using the '-d' DPDK option.
The patch adds linkage for librte_vhost to the vhost PMD so that the above
error does not occur.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Check the mergeable virtio-net header size, as the mergeable feature is now
supported. Previously we didn't support the mergeable feature, so the
non-mergeable header size was checked.
Fixes: 13ce5e7eb9 ("virtio: mergeable buffers")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
After the do-while loop, idx could be VQ_RING_DESC_CHAIN_END (32768)
when it's the last vring desc buf we can get. Therefore, the following
expression could lead to a segfault, as it tries to access
beyond the desc memory boundary.
start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
This bug can be reproduced easily with "set fwd txonly" in the
guest PMD, where the dequeue on the host is slower than the guest Tx,
so running out of free desc bufs is pretty easy.
The fix is straightforward and easy: just remove that line, as we have
already set the desc flags properly inside the do-while loop.
Fixes: dd856dfcb9 ("virtio: use any layout on Tx")
[Yuanhan Liu: commit log reword]
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Issue: output of applications and debug info of DPDK may be mixed up
on the same line when enabling the below debug options of virtio:
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_TX
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_DRIVER
This patch adds "\n" in the tail of definitions like PMD_RX_LOG,
PMD_TX_LOG, and PMD_DRV_LOG, and removes some "\n" when using these
macros.
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Previously, for tunnel packets such as VXLAN/NVGRE, the vlan
tags of the inner header would be stripped without putting the vlan
info into the descriptor, which is not the expected behaviour.
This patch fixes it by changing the hardware configuration to leave
the inner packet alone.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Add three new functions to the vf ops structure:
* rx_queue_count
* rxq_info_get
* txq_info_get
In all cases, the corresponding PF APIs can be reused.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
The code to provide mbufs for RX used m->data_off instead of
RTE_PKTMBUF_HEADROOM as the position inside the mbuf for the data to be
written. As the mbuf is uninitialised, this could potentially cause Rx
data to be placed at the wrong address in the mbuf - or even outside it.
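The shape of the fix, sketched (mbuf field names as in this release; the
helper itself is illustrative):
    #include <rte_mbuf.h>

    /* The mbuf is not initialized here, so do not trust m->data_off;
     * place Rx data at the canonical headroom offset instead. */
    static inline uint64_t
    rx_buf_dma_addr(const struct rte_mbuf *m)
    {
            return m->buf_physaddr + RTE_PKTMBUF_HEADROOM;
    }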
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
RTE_PCI_DRV_DETACHABLE flag is required to indicate that a device
can be detached during execution.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Mbufs were not properly released post-Tx when they were chained.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Some applications calling functions from different threads at the
same time could lead to reconfig problems. The reconfig mechanism is
based on a hardware queue where incrementing a counter signals the
firmware to do the reconfig. If there is a second increment before the
first one has been processed, the firmware will stop and a device
reset is necessary.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
librte_malloc was long since merged into librte_eal, mop up the
leftover references to it from driver Makefiles.
Fixes: 2f9d47013e ("mem: move librte_malloc to eal/common")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Applied with qede Makefile update too.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
On hosts running a non-DPDK PF driver, the VF has no means of changing
the HW CRC strip setting for a RX queue. It's implicitly enabled.
This patch checks if the host is running a non-DPDK PF kernel driver,
and returns an error, if HW CRC stripping was not requested in the port
configuration.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
In i40evf_config_vlan_pvid the check for NULL for the dev value is
unnecessary, since this value is passed in from the ethdev API which
will ensure that a valid rte_eth_dev structure is provided.
Furthermore, all code paths leading to this function already use the
dev value.
Issue identified by Coverity.
Coverity ID 13302:
There may be a null pointer dereference, or else the comparison against
null is unnecessary.
In i40evf_config_vlan_pvid: All paths that lead to this null pointer
comparison already dereference the pointer earlier
Fixes: 2b12431b53 ("i40e: add vlan stripping and insertion to VF")
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
In Table 8-16 of the "Intel® Ethernet Controller XL710 Datasheet" it is
stated that when the whole packet is written to a single buffer, the
header length field in the descriptor will be 0. This means that when
extracting the packet/data_len field from the descriptor in the driver
we do not need to mask out the extra header-length bits.
Inside the vector driver, this reduces the need to pull all four pktlen
fields into a single register to work on. Instead of a shift and mask,
we now need to only do a shift. Therefore, we can work on each descriptor
independently, processing each using one shift intrinsic and a blend.
This change makes the code shorter and easier to read, so we can pull it
into the main descriptor processing loop instead of needing its own
function. This in turn makes the descriptor processing in the loop as a
whole slightly easier to read as it's more linear.
In terms of performance, in testing this change shows little effect, with
single-core perf tests showing a very slight improvement.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
An analysis of the i40e code using Intel® VTune™ Amplifier 2016 showed
that the code was unexpectedly causing stalls due to "Loads blocked by
Store Forwards". This can occur when a load from memory has to wait
due to the prior store being to the same address, but being of a smaller
size i.e. the stored value cannot be directly returned to the loader.
[See ref: https://software.intel.com/en-us/node/544454]
These stalls are due to the way in which the data_len values are handled
in the driver. The lengths are extracted using vector operations, but those
16-bit lengths are then assigned using scalar operations i.e. 16-bit
stores.
These regular 16-bit stores actually have two effects in the code:
* they cause the "Loads blocked by Store Forwards" issues reported
* they also cause the previous loads in the RX function to actually be a
load followed by a store to an address on the stack, because the 16-bit
assignment can't be done to an xmm register.
By converting the 16-bit store operations into a sequence of SSE blend
operations, we can ensure that the descriptor loads only occur once, and
avoid both the additional stores and loads from the stack, as well as the
stalls due to the blocked loads.
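As a schematic illustration of replacing the 16-bit scalar stores with a
blend (the lane selection mask here is hypothetical, not the actual i40e
code):
    #include <smmintrin.h> /* SSE4.1: _mm_blend_epi16 */

    /* Merge selected 16-bit lanes of 'lens' into 'desc' with one blend,
     * avoiding separate 16-bit stores that block store-to-load forwarding. */
    static inline __m128i
    merge_len_fields(__m128i desc, __m128i lens)
    {
            /* 0x0C selects 16-bit lanes 2 and 3 from 'lens' */
            return _mm_blend_epi16(desc, lens, 0x0C);
    }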
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Later commits to improve the driver will make use of the SSE4.1
_mm_blend_epi16 intrinsic, so:
* set the compilation level to always have SSE4.1 support,
* and add in a runtime check for SSE4.1 as part of the condition checks
for vector driver selection.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
This patch removes several redundant forward declarations
in i40e_fdir.c.
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Signed-off-by: Rami Rosen <rami.rosen@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds LLDP and DCBX capabilities to the qede PMD.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The physical link is handled by the management Firmware.
This patch lays the infrastructure for interrupt/attention handling in
the driver, as link change notifications arrive via async interrupts,
as well as the handling of such notifications. It adds async event
notification handler interfaces to the PMD.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
This patch adds the features to support configuration of various Layer 2
elements, such as channels and filtering options.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The Qlogic Everest Driver for Ethernet(QEDE) Poll Mode Driver(PMD) is
the DPDK specific module for QLogic FastLinQ QL4xxxx 25G/40G CNA family
of adapters as well as their virtual functions (VF) in SR-IOV context.
This patch adds QEDE PMD, which interacts with base driver and
initialises the HW.
This patch content also includes:
- eth_dev_ops callbacks
- Rx/Tx support for the driver
- link default configuration
- change link property
- link up/down/update notifications
- vlan offload and filtering capability
- device/function/port statistics
- qede NIC guide and updated overview.rst
Note that the follow-on commits contain the code for the features mentioned
in the documents but not implemented in this patch.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The base driver is the backend module for the QLogic FastLinQ QL4xxxx
25G/40G CNA family of adapters as well as their virtual functions (VF)
in SR-IOV context.
The purpose of the base module is to:
- provide all the common code that will be shared between the various
drivers that would be used with said line of products. Flows such as
chip initialization and de-initialization fall under this category.
- abstract the protocol-specific HW & FW components, allowing the
protocol drivers to have clean APIs, which are detached in its
slowpath configuration from the actual Hardware Software Interface(HSI).
This patch adds a base module without any protocol-specific bits.
I.e., this adds a basic implementation that almost entirely falls under
the first category.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
Fix issue reported by Coverity.
Coverity ID 13193: Bad bit shift operation (BAD_SHIFT)
large_shift: In expression 1 << pool, left shifting by more than 31 bits
has undefined behavior. The shift amount, pool, is at least 32.
This patch is a rework of the register addr selection logic and mask
computation to make it more readable and avoid bit overflow when a 32-bit
value is shifted over its size for pool > 31.
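The reworked pattern, sketched (register names omitted):
    #include <stdint.h>

    /* Split the pool number into a 32-bit register selector and a bit inside
     * that register, so the shift amount never reaches 32. */
    static inline void
    pool_to_reg_bit(unsigned int pool, uint32_t *reg_idx, uint32_t *bit)
    {
            *reg_idx = pool / 32;                /* which 32-bit register */
            *bit = UINT32_C(1) << (pool & 0x1F); /* bit inside that register */
    }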
Fixes: fe3a45fd41 ("ixgbe: add VMDq support")
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The statistics queried by calling rte_eth_stats_get are zero when
the API is first called on the port. The root cause is that the
offset_loaded flag is not set correctly after device start.
This patch fixes the issue by resetting statistics at initialization
time. The resetting process sets the offset_loaded flag.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
When building a chain of mbufs for a multi-segment packet, the
packet_type field resides at the end of the chain. It should be
copied forward to the head of the list.
Also, use RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE to guard the packet-type
computation. The mbuf fields are not copied when this define is not set.
Fixes: fe65e1e1ce ("fm10k: add vector scatter Rx")
Signed-off-by: Michael Frasca <michael.frasca@oracle.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The masking for the RX/TX enable bit was incorrect in the Rx and Tx
queue stop functions. Instead of using "& MASK" they used "| MASK", which
always returns true. This error was found by Coverity scan.
CID 13215 : Wrong operator used (CONSTANT_EXPRESSION_RESULT)
operator_confusion: txdctl | 33554432 is always 1/true regardless of the
values of its operand. This occurs as the logical second operand of
'&&'.
CID 13216 : Wrong operator used (CONSTANT_EXPRESSION_RESULT)
operator_confusion: rxdctl | 33554432 is always 1/true regardless of the
values of its operand. This occurs as the logical second operand of
'&&'.
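The corrected test, sketched (the bit definition is taken from the message
above; the helper itself is illustrative):
    #include <stdint.h>

    #define IXGBE_TXDCTL_ENABLE 0x02000000 /* bit 25, i.e. 33554432 */

    /* The enable bit must be tested with '&'; using '|' here is always
     * non-zero regardless of the register value. */
    static inline int
    tx_queue_still_enabled(uint32_t txdctl)
    {
            return (txdctl & IXGBE_TXDCTL_ENABLE) != 0;
    }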
Coverity issue: 13215
Coverity issue: 13216
Fixes: 029fd06d40 ("ixgbe: queue start and stop")
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>