Adds dev_ops to set link Up or Down as appropriate.
Uses the bnxt_set_hwrm_link_config() API added previously.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds dev_ops to Add/Remove MAC addresses.
These use the bnxt_hwrm_set_filter() API defined previously.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds code to free all resources except the ones used by the
HWRM, which are required to notify the HWRM that the driver is being
unloaded (those are freed in uninit()).
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds dev_ops to enable/disable multicast traffic.
Uses the bnxt_hwrm_cfa_l2_set_rx_mask() API added earlier.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds the promiscuous mode enable and disable dev_ops.
Uses the bnxt_hwrm_cfa_l2_set_rx_mask() API added in an earlier commit.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds the start, stop and link update dev_ops.
Also adds wrapper functions like bnxt_init_chip(), bnxt_init_nic(),
bnxt_alloc_mem() and bnxt_free_mem().
The BNXT driver will now minimally pass traffic with testpmd.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
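As a rough illustration of how these pieces could fit together (a sketch only; it assumes the wrappers take a struct bnxt * and return 0 on success, and is not the literal driver code):

    /* Sketch only: assumes bnxt_alloc_mem()/bnxt_init_nic()/bnxt_free_mem()
     * take a struct bnxt * and return 0 on success. */
    static int bnxt_dev_start_sketch(struct rte_eth_dev *eth_dev)
    {
        struct bnxt *bp = eth_dev->data->dev_private;
        int rc;

        rc = bnxt_alloc_mem(bp);   /* host-side ring and stats memory */
        if (rc)
            return rc;
        rc = bnxt_init_nic(bp);    /* wraps bnxt_init_chip() */
        if (rc)
            bnxt_free_mem(bp);
        return rc;
    }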
Add HWRM calls to query the port's PHY and link configuration.
New HWRM call:
bnxt_hwrm_port_phy_qcfg
This command queries the PHY configuration for the port
Also add helper functions like bnxt_get_hwrm_link_config()
and bnxt_parse_hw_link_speed() to parse the link state.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add top-level functions to initialize ring groups, and functions
to allocate and free all the rings via HWRM.
A ring group is identified by an index. It consists of Rx or Tx ring id,
completion ring id and a statistics context. Once a ring group is
initialized, use this group index while creating the rings in the ASIC
using the appropriate HWRM API added via earlier patches.
Functions added:
bnxt_free_cp_ring
Calls the HWRM function generic ring free with arguments specific
to a completion ring and sanitizes the host completion structure
bnxt_free_all_hwrm_rings
Frees all the HWRM allocated hardware rings
bnxt_free_all_hwrm_resources
Frees all the resources allocated via the HWRM in the hardware
bnxt_alloc_hwrm_rings
Allocates all the HWRM rings needed in the current configuration
This should be the last functionality needed to add start/stop
device operations.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds code to set and clear L2 filters in the
corresponding VNIC. These filters determine the Rx flows.
New HWRM call:
bnxt_clear_hwrm_vnic_filters
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add a function and associated structures and definitions to free
a statistics context in the ASIC.
New HWRM call:
bnxt_hwrm_stat_ctx_free
This command is used to free a stat context.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add HWRM API for ring group alloc/free functions, associated structs and
definitions.
This API allocates and does basic preparation for a ring group in the ASIC.
A ring group is identified by an index. It consists of Rx ring id,
completion ring id and a statistics context.
New HWRM calls:
bnxt_hwrm_ring_grp_alloc
Allocates and does basic preparation for a ring group
bnxt_hwrm_ring_grp_free
Frees a ring group and cleans up its resources
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add HWRM API calls to allocate and free TX, RX and Completion rings
in the hardware along with the associated structs and definitions.
This informs the hardware of how the specific rings were set up in the
host and allocates them in the HWRM, setting up the doorbell registers
etc. as needed, returning an ID for the ring.
Basic ring alloc/free calls:
bnxt_hwrm_ring_alloc
This command allocates and does basic preparation for a ring.
bnxt_hwrm_ring_free
This command is used to free a ring and associated resources.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
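The ID handshake these calls perform looks roughly as follows. This is a toy, self-contained illustration; every structure, field and helper name here is an assumption, not the real HWRM definitions:

    /* Toy illustration of the alloc ID handshake; all names are illustrative. */
    #include <stdint.h>
    #include <string.h>

    struct ring_alloc_req  { uint8_t ring_type; uint64_t page_tbl_addr; uint32_t length; };
    struct ring_alloc_resp { uint16_t ring_id; };

    /* Stand-in for the driver's "send request, wait for response" helper. */
    static int hwrm_send(const struct ring_alloc_req *req, struct ring_alloc_resp *resp)
    {
        (void)req;
        resp->ring_id = 42;          /* firmware-assigned ID in real life */
        return 0;
    }

    static int ring_alloc_sketch(uint64_t bd_dma, uint32_t size, uint16_t *fw_ring_id)
    {
        struct ring_alloc_req req;
        struct ring_alloc_resp resp;

        memset(&req, 0, sizeof(req));
        req.ring_type = 0;           /* e.g. TX */
        req.page_tbl_addr = bd_dma;  /* host BD area the hardware should use */
        req.length = size;

        if (hwrm_send(&req, &resp) != 0)
            return -1;
        *fw_ring_id = resp.ring_id;  /* pass this ID back later to free the ring */
        return 0;
    }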
Add HWRM API code to allocate a statistics context in the ASIC.
This API will be called by the previously submitted "add statistics
operations patch".
New HWRM call:
bnxt_hwrm_stat_ctx_alloc:
This command allocates and does basic preparation for a stat
context.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add API to set/clear L2 Rx mask.
New HWRM calls:
bnxt_hwrm_cfa_l2_clear_rx_mask
bnxt_hwrm_cfa_l2_set_rx_mask
These HWRM APIs allow setting and clearing of Rx masks in L2 context
per VNIC.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
New HWRM call:
bnxt_hwrm_vnic_rss_cfg:
Used to enable RSS configuration of the VNIC.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds APIs to allow configuration of a VNIC.
The functions allocate and free the Class of Service (COS) and
Load Balance context corresponding to the VNIC in the chip.
New HWRM calls:
bnxt_hwrm_vnic_ctx_alloc:
Used to allocate COS/Load Balance context of VNIC
bnxt_hwrm_vnic_ctx_free:
Used to free COS/Load Balance context of VNIC
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch configures the properties and actions of the VNIC
allocated by the vnic_alloc function from the previous patch.
New HWRM call:
bnxt_hwrm_vnic_cfg:
Configure the VNIC structure in hardware.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
In this patch we add a new HWRM API to free a VNIC.
New HWRM call:
bnxt_hwrm_vnic_free:
Frees a vnic allocated by the bnxt_hwrm_vnic_alloc() function.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This requires a group info array in struct bnxt, so add that; we can save
the max size from the func_qcaps response, and alloc/free it in init()/uninit().
New HWRM call:
bnxt_hwrm_vnic_alloc:
Allocates a VNIC resource in the hardware.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add bnxt_hwrm_func_reset() function and supporting structs and macros.
New HWRM call:
bnxt_hwrm_func_reset:
This command puts the function into the reset state.
In the reset state, global and port related features of the
chip are not available.
This command resets a hardware function (PCIe function) and
frees any resources used by the function. When initiated by
the driver, it prepares the function for re-use. This command may also be
issued by a driver prior to doing its own configuration.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Perform allocation and free()ing of ring and information structures
for the TX, RX, and completion rings. The previous patches had
so far provided top level stubs and generic ring support, while this
patch does the real allocation and freeing of the memory specific to
each different type of generic ring.
For example, bnxt_init_tx_ring_struct() and bnxt_init_rx_ring_struct()
now allocate memory based on the socket_id provided.
bnxt_tx_queue_setup_op() or bnxt_rx_queue_setup_op() have gone through
some reformatting to perform a graceful cleanup in case memory
allocation fails.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds initial implementation of rx_pkt_burst() function for Rx.
bnxt_recv_pkts() is the top level function for doing Rx.
This patch also adds code to allocate rings in the ASIC.
For each Rx queue allocated in the PMD driver, a corresponding ring
in hardware will be created. Every time a frame is received a Rx ring
is selected based on the hardware configuration like RSS, MAC or VLAN,
COS and such. The hardware uses a completion ring to indicate the
availability of a packet.
This patch also brings in functions like bnxt_init_one_rx_ring() and
bnxt_init_rx_ring_struct(), which initialize various structures before
Rx can begin.
bnxt_init_rxbds() initializes the Rx Buffer Descriptors while
bnxt_alloc_rx_data() allocates a buffer in the host to receive the
incoming packet.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
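In outline, and only as a hedged sketch of the flow described above (not the actual bnxt code), the receive burst looks like this:

    #include <rte_mbuf.h>

    /* Simplified outline of a completion-ring driven receive burst. */
    static uint16_t recv_pkts_outline(void *rx_queue, struct rte_mbuf **rx_pkts,
                                      uint16_t nb_pkts)
    {
        uint16_t nb_rx = 0;

        (void)rx_queue;
        while (nb_rx < nb_pkts) {
            struct rte_mbuf *mbuf = NULL;
            /* 1. Poll the completion ring; stop if no new Rx completion. */
            /* 2. Locate the Rx BD/mbuf the completion refers to.         */
            /* 3. Fill in pkt_len, data_len, RSS hash and ol_flags.       */
            /* 4. Replenish the Rx ring with a fresh buffer
             *    (what bnxt_alloc_rx_data() is for).                     */
            if (mbuf == NULL)
                break;               /* completion ring drained */
            rx_pkts[nb_rx++] = mbuf;
        }
        return nb_rx;
    }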
Initial implementation of tx_pkt_burst for transmit.
bnxt_xmit_pkts() is the top level function that is called during Tx.
bnxt_handle_tx_cp() is used to check and process the Tx completions
generated by the hardware for the Tx Buffer Descriptors that were sent.
This patch also adds code to allocate rings in the hardware.
For each Tx queue allocated in the PMD driver, a corresponding ring
in hardware will be created. Every time a Tx request is initiated
via the bnxt_xmit_pkts() call, a Buffer Descriptor is created and
is sent to the hardware via the associated Tx ring.
On completing the Tx operation, the hardware generates the status
in the form of a completion. This completion is processed by the
bnxt_handle_tx_cp() function.
Functions like bnxt_init_tx_ring_struct() and bnxt_init_one_tx_ring()
are used to initialize various members of the structure before
starting Tx operations.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
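A similarly hedged outline of the transmit side (a sketch of the flow described above, not the literal bnxt code):

    #include <rte_mbuf.h>

    /* Simplified outline of the transmit burst flow. */
    static uint16_t xmit_pkts_outline(void *tx_queue, struct rte_mbuf **tx_pkts,
                                      uint16_t nb_pkts)
    {
        uint16_t nb_tx;

        (void)tx_queue;
        (void)tx_pkts;
        /* Reclaim BDs for sends the hardware has already completed,
         * which is what bnxt_handle_tx_cp() does. */
        for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
            /* 1. Stop early if the Tx ring has no free BDs left. */
            /* 2. Build the BD(s) describing tx_pkts[nb_tx].      */
            /* 3. Advance the ring's producer index.              */
        }
        /* Notify the hardware (ring the Tx doorbell) once at the end. */
        return nb_tx;
    }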
Add the bnxt_stats_get_op() and bnxt_stats_reset_op() dev_ops to
get and reset statistics. It also brings in the associated HWRM calls
to handle the requests appropriately.
We also add the bnxt_free_stats() function, which will be used in
follow-on patches to free the memory allocated by the driver for
statistics.
New HWRM calls:
bnxt_hwrm_stat_clear:
This command clears statistics of a context
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
In this patch we are adding the bnxt_rx_queue_setup_op() and
bnxt_rx_queue_release_op() functions. These will be tied to the
rx_queue_setup and rx_queue_release dev_ops in a subsequent patch.
In these functions we allocate/free memory for the RX queues.
This still requires support to create a RX ring in the ASIC which
will be completed in a future commit. Each Rx queue created via the
rx_queue_setup dev_op will have an associated Rx ring in the hardware.
The Rx logic in the hardware picks a Rx ring for each Rx frame received
by the hardware depending on the properties like RSS, MAC and VLAN
settings configured in the hardware. These packets in the end arrive
on the Rx queue corresponding to the Rx ring in the hardware.
We are also adding some functions like bnxt_mq_rx_configure(),
bnxt_free_rx_mbufs() and bnxt_free_rxq_stats(), which will be used in
subsequent patches.
We are also adding hwrm_vnic_rss_cfg_* structures, which will be used
in subsequent patches to enable RSS configuration.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
In this patch we are adding the bnxt_tx_queue_setup_op() and
bnxt_tx_queue_release_op() functions. These will be tied to the
tx_queue_setup and tx_queue_release dev_ops in a subsequent patch.
In these functions we allocate/free memory for the TX queues.
This still requires support to create a TX ring in the ASIC which
will be completed in a future commit. Each Tx queue created via the
tx_queue_setup dev_op will have an associated Tx ring in the hardware.
A Tx request coming on the Tx queue gets sent to the corresponding
Tx ring in the ASIC for subsequent transmission.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Add the L2 filter structure and the alloc/init/free functions for
dealing with them.
A filter is used to identify traffic that contains a matching set of
parameters like unicast or broadcast MAC address or a VLAN tag amongst
other things which then allows the ASIC to direct the incoming traffic
to an appropriate VNIC or Rx ring.
New HWRM calls:
bnxt_hwrm_clear_filter:
Free an L2 filter.
bnxt_hwrm_set_filter:
Allocate an L2 filter or an L2 context.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Structures, macros, and functions for working with completion rings
in the driver.
A completion ring is used by the Ethernet controller to provide the
status of transmitted and received packets, report errors, report
status changes to the host software, and deliver inter-function forwarding
requests. In addition to the generic ring features, a completion ring
can have a statistics context that has statistics periodically DMAed
to host memory, along with a consumer index.
bnxt_handle_async_event() handles completions not related to a specific
transmit or receive ring such as link status changes which arrive on
the default completion ring.
Other physical or virtual functions on the same device may send an HWRM
command forward request. In this case, we will pass it through
unvalidated. In the future, we will be able to have the PF monitor and
control VF access to the HWRM interface if needed.
New HWRM Calls:
bnxt_hwrm_exec_fwd_resp:
Execute an encapsulated command and forward the response.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Declare generic ring structures and a free() function. These are
generic ring management functions which will be used to create Tx,
Rx and Completion rings in the subsequent patches, and tie them to
the HWRM managed ring resources.
This generic ring structure is shared by all the ring types and tracks
the host Buffer Descriptors (BDs) and the HWRM-assigned ID.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
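A minimal, self-contained sketch of what such a shared ring descriptor could track (field names are illustrative, not the driver's exact ones):

    #include <stdint.h>

    /* Illustrative only: one generic descriptor shared by Tx, Rx and
     * completion rings. */
    struct generic_ring_sketch {
        void     *bd;           /* host virtual address of the BD array */
        uint64_t  bd_dma;       /* DMA (bus) address of the BD array */
        uint32_t  ring_size;    /* number of BD entries (power of two) */
        uint32_t  ring_mask;    /* ring_size - 1, for cheap index wrap */
        uint16_t  fw_ring_id;   /* ID assigned by the HWRM when allocated */
    };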
Add functions to allocate, initialize, and free vnics.
A VNIC represents a virtual interface. It is a resource in the RX path
of the chip and is used to setup various target actions such as RSS,
MAC filtering etc. for the physical function in use.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
The dev_configure_op function calls bnxt_set_hwrm_link_config() to
set up the PHY. This calls the new bnxt_parse_eth_link_*() functions
to translate from the DPDK macro values to those used by HWRM calls,
then calls bnxt_hwrm_port_phy_cfg() to issue the HWRM call.
New HWRM calls:
bnxt_hwrm_port_phy_cfg:
This command configures the PHY device for the port.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
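A hedged sketch of the DPDK-to-HWRM translation step (the HWRM_SPEED_* names and values are placeholders, not the driver's real macros):

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Placeholder speed codes; the real driver uses HWRM-defined values. */
    #define HWRM_SPEED_10GB  0x64
    #define HWRM_SPEED_40GB  0x190

    /* Map a DPDK ETH_LINK_SPEED_* request to an HWRM speed code. */
    static uint16_t parse_eth_link_speed_sketch(uint32_t conf_link_speed)
    {
        switch (conf_link_speed) {
        case ETH_LINK_SPEED_10G:
            return HWRM_SPEED_10GB;
        case ETH_LINK_SPEED_40G:
            return HWRM_SPEED_40GB;
        default:
            return 0;    /* 0: let the firmware auto-negotiate */
        }
    }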
Gets device info from the bp structure filled in the init() function.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Move init() cleanup into uninit() function
Fix .dev_private_size
New HWRM calls:
bnxt_hwrm_func_driver_register:
This command is used by the function driver to register
its information with the HWRM.
bnxt_hwrm_func_driver_unregister:
This command is used by the function driver to unregister
with the HWRM.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Start adding support to use the HWRM API.
The Hardware Resource Manager, or HWRM for short, is a set of APIs provided
by the firmware running in the ASIC to manage the various resources.
Initial commit just performs necessary HWRM queries for init, then
fails as before.
Now that struct bnxt is non-zero size, we can set dev_private_size
correctly.
The HWRM calls used so far:
bnxt_hwrm_func_qcaps:
This command returns capabilities of a function.
bnxt_hwrm_ver_get:
This function is called by a driver to determine the HWRM
interface version supported by the HWRM firmware, the
version of HWRM firmware implementation, the name of HWRM
firmware, the versions of other embedded firmwares, and
the names of other embedded firmwares, etc. Gets the
firmware version and interface specifications. Returns
an error if the firmware on the device is not supported
by the driver and ensures the response space is large
enough for the largest possible response.
bnxt_hwrm_queue_qportcfg:
This function is called by a driver to query queue
configuration of a port.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
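In rough order (an outline, not the literal init code, assuming each call takes a struct bnxt * and returns 0 on success), the bring-up uses these calls like so:

    /* Outline of the HWRM bring-up sequence using the calls named above. */
    static int hwrm_init_outline(struct bnxt *bp)
    {
        int rc;

        rc = bnxt_hwrm_ver_get(bp);          /* firmware/interface version check */
        if (rc)
            return rc;
        rc = bnxt_hwrm_func_qcaps(bp);       /* learn the function's capabilities */
        if (rc)
            return rc;
        return bnxt_hwrm_queue_qportcfg(bp); /* query the port's queue config */
    }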
This patch adds the initial skeleton for the bnxt driver along with the
NIC guide, and ties the driver into the build system.
At this point, the driver simply fails init.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
[Release Note Addition]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
When using a kernel PF and a DPDK VF, if the PF driver detects a link
state change (up -> down or down -> up), it will send a
message to the VF by mailbox. This link state change may be
triggered by PHY disconnection/reconnection, or by a user config change
like *ifconfig down/up* or an interface parameter change, such as the MTU.
This patch enables support for the mailbox interrupt,
so the VF driver can receive the message for link up/down.
After the VF receives this message, the VF port needs to be reset to
recover. This needs to be handled by the application, so this patch
allows the app to register a reset callback so it can reset the VF port.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
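A small application-side sketch of registering for that reset event (assuming the RTE_ETH_EVENT_INTR_RESET event and the callback prototype of this DPDK release):

    #include <rte_ethdev.h>

    /* React to the VF reset event by resetting the port. */
    static void vf_reset_cb(uint8_t port_id, enum rte_eth_event_type event,
                            void *arg)
    {
        (void)arg;
        if (event != RTE_ETH_EVENT_INTR_RESET)
            return;
        rte_eth_dev_stop(port_id);
        /* ... reconfigure queues here if needed ... */
        rte_eth_dev_start(port_id);       /* bring the VF port back up */
    }

    /* Registration, typically done right after rte_eth_dev_configure():
     * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RESET,
     *                               vf_reset_cb, NULL);
     */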
When using a kernel PF and a DPDK VF, if the PF driver detects a link
state change (up -> down or down -> up), it will send a
message to the VF by mailbox. This link state change may be
triggered by PHY disconnection/reconnection, or by a user config change
like *ifconfig down/up* or an interface parameter change, such as the MTU.
This patch enables support for the mailbox interrupt,
so the VF driver can receive the message for link up/down.
After the VF receives this message, the VF port needs to be reset to
recover. This needs to be handled by the application, so this patch
allows the app to register a reset callback so it can reset the VF port.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Update the documentation and comments with brief details on the base
code version included in this release.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Add a flag which can be used to tell the firmware to disable the link on
all ports.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Add opcodes and structures to support RSS configuration
by PF driver on behalf of the VF drivers.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Adds input set mask definitions for RSS, flow director
and flex bytes.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Adds more device capabilities for NVM management.
- if update is available
- if security check is needed
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Correct the number of MSIX vectors in debug info.
Fixes: 889bc9f0cd ("i40e/base: unify the capability function")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Remove a problematic mirror rule ID check. The check
returned an error if the mirror rule ID is 0, which is
a valid value.
Fixes: 0bf2dbbe07 ("i40e/base: support mirroring rules")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Allow egress traffic to be mirrored to VSIs in promiscuous mode, as latest
firmware supports that from API version 1.5.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The hardware doesn't lay out the Geneve VNI (Virtual Network
Identifier) quite the same as the VxLAN VNI, so the driver needs to
adjust it before sending it through the Admin Queue commands as a
workaround.
Fixes: 8db9e2a1b2 ("i40e: base driver")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch trims the source code by limiting pieces of code to the
PF or VF driver only, with code style fixes and annotation
rewording.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch refactors the NVM update command processing, adding
a new nvm_wait_opcode element in struct i40e_hw to indicate
the opcode it waits on, and putting the wait event check into
a function. In addition, that element needs to be initialized
or updated properly.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
This patch centralizes all NVM update status info into a single
structure, by moving nvm_release_on_done from struct
i40e_adminq_info to struct i40e_hw, for better management.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Host Memory Cache (HMC) admin queue APIs were removed from the latest
datasheet, and hence remove its implementation.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
SR-IOV mode is currently set when dealing with VF devices. PF devices must
be taken into account as well if they have active VFs.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
A hardware capability check is missing before enabling RX VLAN stripping
during queue setup.
Also, while dev_conf.rxmode.hw_vlan_strip is currently a single bit that
can be stored in priv->hw_vlan_strip directly, it should be interpreted as
a boolean value for safety.
Fixes: f3db948918 ("mlx5: support Rx VLAN stripping")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
There is no guarantee that the new MTU is effective after writing its value
to sysfs. Retrieve it to be sure.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Memory regions are always local with raw Ethernet queues, drop the remote
property as it adds extra processing on the hardware side.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
If configuration fails due to lack of resources, be more specific
about which resources are lacking - work queues, receive queues or
completion queues. Return -EINVAL instead of -1 if more queues
are requested than are available.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
If device configuration failed due to a lack of resources, such as
if more queues are requested than are available, the queue release
functions are called with NULL pointers which were being dereferenced.
Skip releasing queues if they are NULL pointers.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
Following the discussions from:
http://dpdk.org/ml/archives/dev/2015-July/021721.html
http://dpdk.org/ml/archives/dev/2016-April/038143.html
The value of these flags is 0, making them useless. Today, no example
application checks them on Rx, and only a few drivers set them and
silently give wrong packets to the application, which should not happen.
This patch removes the unused flags from rte_mbuf and their use in the
drivers. The i40e and fm10k are kept as they are today and should be
fixed to drop bad packets. The enic driver is managed by its maintainer
in another patch.
Fixes: c22265f6 ("mbuf: add new packet flags for i40e")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch modifies the bond_mode_alb_enable function.
When mempool allocation fails, an errno code is returned
instead of calling rte_panic. This allows the application to decide
whether it should quit or retry the mempool allocation.
Signed-off-by: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Ensure that the port field is set in mbufs received from the null PMD.
Signed-off-by: Sean Harte <sean.harte@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Since _BSD_SOURCE was deprecated in favor of _DEFAULT_SOURCE in Glibc 2.19
and entirely removed in 2.20, various BSD ioctl macros are not exposed
anymore when _XOPEN_SOURCE is defined, and linux/if.h now conflicts with
net/if.h.
Add _DEFAULT_SOURCE and keep _BSD_SOURCE for compatibility with older
versions.
Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Testpmd application will crash in fclose() upon quit after running
the below command.
"sudo gdb --args ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf0 -n 4
--vdev 'eth_pcap0,tx_iface=enp1s0f1,rx_pcap=/tmp/test.pcap' --
--port-topology=chained -i"
The reason is that pcap vdev creation with the tx stream type "iface",
as in the above command, doesn't need the "dumpers" member of
"struct tx_pcaps", hence no memory is allocated for it.
It contains garbage values, as the local struct tx_pcaps object
is not initialized to 0 inside rte_pmd_pcap_dev_init().
So calling pcap_dump_close() on the dumper as part of eth_dev_stop()
causes a segfault in fclose().
The fix is to initialize the local struct tx_pcaps object to 0.
Also initialize the local struct rx_pcaps object to 0.
So during eth_dev_stop(), pcap_dump_close() will not be called if the dumper
is NULL.
Fixes: 4c173302c3 ("pcap: add new driver")
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Private/conflicting ol_flags were used to enable UDP/TCP Tx
offloads. Use the common flags in PKT_TX_L4_MASK to support them.
When updating the flags, also do some minor code rearranging for
slightly better performance.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
The offload flags variable (ol_flags) in the rte_mbuf structure is 64 bits,
so a local copy of it must be 64 bits too. Moreover, a bit comparison between
a 16-bit variable and a 64-bit value makes no sense. This breaks Tx VLAN,
IP and L4 offloads.
Coverity issue: 13218
Fixes: fefed3d1e6 ("enic: new driver")
Suggested-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Signed-off-by: John Daley <johndale@cisco.com>
Acked-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Add an ASSERT macro for the enic driver which is enabled when the log
level is >= RTE_LOG_DEBUG. Assert that number of mbufs to return to
the pool in the Tx function is never greater than the max allowed.
Signed-off-by: John Daley <johndale@cisco.com>
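A sketch of what such a macro can look like (the macro name is illustrative; the guard follows the compile-time log level check described above):

    #include <rte_log.h>
    #include <rte_debug.h>

    /* Illustrative debug-only assert, compiled in when the log level is
     * at least RTE_LOG_DEBUG. */
    #if RTE_LOG_LEVEL >= RTE_LOG_DEBUG
    #define ENIC_ASSERT_SKETCH(cond) \
        do { if (!(cond)) rte_panic("enic: assert failed\n"); } while (0)
    #else
    #define ENIC_ASSERT_SKETCH(cond) do {} while (0)
    #endif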
Reduce host CPU overhead of Tx packet processing:
* Use local variables inside per-packet loop instead of fields in structs.
* Factor bookkeeping and conditionals out of the per-packet loop where
possible.
* Post buffers to the NIC at a maximum of every 64 packets.
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Signed-off-by: John Daley <johndale@cisco.com>
Mbufs were returned to the pool one at a time. Use rte_mempool_put_bulk
instead. There were multiple function calls for each buffer returned.
Refactor this code into just 2 functions.
Signed-off-by: John Daley <johndale@cisco.com>
The NIC can either DMA a separate completion message for each completed
send or periodically just DMA the index of the last completed send.
Switch to the latter method which improves cache locality and performance.
Signed-off-by: John Daley <johndale@cisco.com>
The list of mbufs held by the driver on Tx was allocated in chunks
(a hold-over from the enic kernel mode driver). The structure used
next pointers across chunks which led to cache misses.
Allocate the array used to hold mbufs in flight on Tx with
rte_zmalloc_socket(). Remove unnecessary fields from the structure
and use head and tail pointers instead of next pointers.
Signed-off-by: John Daley <johndale@cisco.com>
Functions existed which were never called. Remove them. Also
drop 'pmd' from the name of the Tx function to improve clarity.
Signed-off-by: John Daley <johndale@cisco.com>
The Tx functions were in enic_ethdev.c and enic_main.c - files in which
they did not logically belong. To make things consistent with most
other drivers, we therefore extract them and place them with the equivalent
Rx functions into a file called enic_rxtx.c.
Signed-off-by: John Daley <johndale@cisco.com>
Truncated packets occur on enic if an mbuf is not big enough to
receive the packet, or if there aren't enough mbufs when Rx scatter is
in use. They show up as error packets, but unlike other error packets
(like packets with a bad FCS) there are no NIC drop counts incremented
for them.
Truncated packets are calculated by subtracting hardware errors from
software errors. Note: this causes transient inaccuracies in the
ipackets count. Also, the lengths of truncated packets are counted
in ibytes even though truncated packets are dropped, which can make
ibytes slightly higher than it should be.
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Signed-off-by: John Daley <johndale@cisco.com>
Following the discussions from:
http://dpdk.org/ml/archives/dev/2015-July/021721.html
http://dpdk.org/ml/archives/dev/2016-April/038143.html
Remove the unused flag from enic driver. Also, the enic driver is
now modified to drop bad packets instead of using a non-existent
flag to try and identify them as bad.
Fixes: 947d860c82 ("enic: improve Rx performance")
Fixes: 5776c30293 ("enic: fix error packets handling")
Fixes: 50765c820e ("enic: remove packet error conditional")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: John Daley <johndale@cisco.com>
rx_no_bufs is a hardware counter of packets dropped on the
interface due to no host buffers and should be used to update
r_stats->imissed counter instead of rx_nombuf.
Include rx_drop in ierrors. rx_drop is incremented if packets
arrive when the receive queue is disabled.
Add a structure and functions for initializing and clearing
software counters. Add count of Rx mbuf allocation failures
(rx_nombuf) as the first counter.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
GCC_VERSION is empty in case of clang:
/bin/sh: line 0: test: -ge: unary operator expected
It is the same issue as http://dpdk.org/dev/patchwork/patch/5994/
Fixes: 366113dbfb ("e1000: suppress misleading indentation warning")
Signed-off-by: Hiroyuki Mikita <h.mikita89@gmail.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: John W. Linville <linville@tuxdriver.com>
Suspicious implicit sign extension: pf->fdir.match_counter_index
with type unsigned short (16 bits, unsigned) is promoted in
"pf->fdir.match_counter_index << 20" to type int (32 bits, signed),
then sign-extended to type unsigned long (64 bits, unsigned).
If "pf->fdir.match_counter_index << 20" is greater than 0x7FFFFFFF,
the upper bits of the result will all be 1.
To fix the issue, explicitly cast pf->fdir.match_counter_index to uint32_t.
Coverity issue: 13315
Fixes: 05999aab4c ("i40e: add or delete flow director")
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch enables configuring the MTU for i40e.
Since changing the MTU requires reconfiguring the queues, the port must be
stopped before configuring the MTU.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
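From the application side, the sequence is simply (a usage sketch):

    #include <rte_ethdev.h>

    /* Usage sketch: the MTU can only be changed while the port is stopped. */
    static int set_port_mtu(uint8_t port_id, uint16_t new_mtu)
    {
        int ret;

        rte_eth_dev_stop(port_id);               /* queues get reconfigured */
        ret = rte_eth_dev_set_mtu(port_id, new_mtu);
        rte_eth_dev_start(port_id);
        return ret;
    }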
When setting up the flexible payload selection rules, the value
NONUSE_FLX_PIT_DEST_OFF (== 63) is meant to disable the rule.
However, since the MK_FLX_PIT macro always added an additional
offset of I40E_FLX_OFFSET_IN_FIELD_VECTOR (== 50) to the value passed,
the functionality to disable the rule was broken.
This patch fixes this by checking for the disable value and not adding
the offset in that case.
Fixes: d8b90c4eab ("i40e: take flow director flexible payload configuration")
Reported-by: Michael Habibi <mikehabibi@gmail.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Previously, there was a known issue "On Intel® 40G Ethernet
Controller stopping the port does not really down the port link."
There were two reasons why the port was always kept up.
1. Old firmware versions had issues when "Set PHY config command"
was used on 40G NICs.
2. The kernel i40e driver didn't call "Set PHY config command" when
ifconfig up/down was used; it assumed the link is always up. But
in DPDK, ports are forced down when an application quits. So if
the port is then switched to being controlled by the kernel driver,
the port cannot be brought up through "ifconfig <ethx> up".
This patch fixes this issue by adding in "Set PHY config command"
into our driver. This is now possible because with newer firmware
there is no longer a problem using this command.
With this fix, after DPDK quit, if the port is switched to being used
by the kernel driver, "ethtool -s <ethx> autoneg on" can be used to
turn on the auto negotiation, and then port can be brought up through
"ifconfig <ethx> up".
NOTE: requires kernel i40e driver version >= 1.4.X
Fixes: 2f1e228174 ("i40e: skip link control as firmware workaround")
Fixes: 16c979f9ad ("i40e: disable setting of PHY configuration")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Change the Tx routine to ring the doorbell once per burst
and not on every Tx packet. This driver-level optimization
is necessary to achieve line rates for larger frame
sizes (1k or more).
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
- Process Tx completions based on the configured Tx free threshold and
determine how many Tx BDs are required before invoking bnx2x_tx_encap()
- Change bnx2x_tx_encap() to a void function as it can now never fail
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Under certain scenarios, management firmware (MFW) periodically polls
the driver for LAN statistics. This patch implements the osal hook to
fill in the stats.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Program the PCIe completion timeout to 4 sec to give enough time
to allow completions to be received successfully in some older systems.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
To be consistent with the naming for ARM NEON implementation,
ixgbe_rxtx_vec.c is renamed to ixgbe_rxtx_vec_sse.c.
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Previously, if an adminq message was sent successfully but no response was
received, the function "i40evf_execute_vf_cmd" would return without error.
The root cause is that the value of "err" was overwritten. This patch fixes
this by ensuring the value of err is set appropriately for each cmd.
Fixes: ae19955e7c ("i40evf: support reporting PF reset")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Use ARM NEON intrinsics to implement the ixgbe vPMD
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
[style fixes as highlighted by checkpatch.pl]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Move scalar code which does not use x86 intrinsic functions to a new file,
"ixgbe_rxtx_vec_common.h", while keeping the x86 code in ixgbe_rxtx_vec.c.
This allows the scalar code to be shared among vector drivers for
different platforms.
Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The VLAN tag information should be stored in the first mbuf of a chain
of buffers, not in the last one.
Fixes: 9fd5e98b62 ("vmxnet3: support RSS and refactor Rx offload")
Signed-off-by: John Guzik <john@shieldxnetworks.com>
Acked-by: Yong Wang <yongwang@vmware.com>
When linking drivers as shared libraries, the dependencies need
to be marked as DT_NEEDED entries.
The crypto dependencies (libsso and libIPSec) are static libraries.
To link them into the shared PMDs, the code must be relocatable:
- libIPSec_MB.a must be built with -fPIC
- libsso_kasumi.a must be built with KASUMI_CFLAGS=-DKASUMI_C
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Fixes: eec136f3c5 ("aesni_gcm: add driver for AES-GCM crypto operations")
Fixes: 3aafc423cf ("snow3g: add driver for SNOW 3G library")
Fixes: 2773c86d06 ("crypto/kasumi: add driver for KASUMI library")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some libraries were missing their dependency on eal, mbuf, mempool,
ring and kvargs.
It is revealed by the linker option "-z defs".
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
There is no need to have this parsing inlined in the header.
It brings kvargs dependency to every crypto drivers.
The functions are moved into rte_cryptodev.c.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The compilation for 32-bit fails when CONFIG_RTE_VIRTIO_USER is enabled:
drivers/net/virtio/virtio_user_ethdev.c:84:47:
error: format ‘%llu’ expects argument of type ‘long long unsigned int’,
but argument 5 has type ‘size_t {aka unsigned int}’
Fixes: e9efa4d938 ("net/virtio-user: add new virtual PCI driver")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
NSH packet can be recognized by Intel X710/XL710 series.
This patch enables the new packet type.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Yulong Pei <yulong.pei@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
In the following loop:
while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
...
}
There is no external function call or any explicit memory barrier
in the loop, so the re-read of used->idx might be optimized away and
the value retrieved only once.
Use of volatile normally should be prohibited, and ACCESS_ONCE
is the Linux kernel's style of handling this issue; once we have that
macro in DPDK, we could change to that style.
virtio_recv_mergeable_pkts might also have the same issue, so fix
it as well.
Fixes: 823ad64795 ("virtio: support multiple queues")
Fixes: 13ce5e7eb9 ("virtio: mergeable buffers")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
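As an illustration of the kernel-style approach mentioned above (a sketch, not necessarily the exact fix applied), the load can be forced through a volatile-qualified pointer so the compiler re-reads used->idx on every iteration:

    /* Force a fresh load of used->idx on every loop iteration
     * (kernel ACCESS_ONCE style; typeof is a GCC/Clang extension). */
    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

    while (vq->vq_used_cons_idx != ACCESS_ONCE(vq->vq_ring.used->idx)) {
        /* ... process one used descriptor ... */
    }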
Trying to access xstats_names after "if (xstats_names == NULL)" is
obviously wrong, and would result in a crash while running "show
port xstats 0" in testpmd with the virtio PMD.
The fix is straightforward; just reverse the check.
Fixes: baf91c395b ("net/virtio: fetch extended statistics with integer ids")
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
In the virtio-user driver, when the ctrl queue is notified, invoke the
virtio-user device emulation API to handle the ctrl-queue command.
Besides, multi-queue requires the ctrl queue, and the ctrl queue will be
enabled automatically when multi-queue is specified.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The main purpose of this patch is to enable multi-queue. But
multi-queue requires ctrl-queue so that driver can send how many
queues will be enabled through ctrl-queue messages.
So we partially implement ctrl-queue to handle control command
with class of VIRTIO_NET_CTRL_MQ and with cmd of
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET to handle mq support. This patch
provides a function, virtio_user_handle_cq(), for driver to handle
ctrl-queue messages.
Besides, multi-queue requires VIRTIO_NET_F_MQ and VIRTIO_NET_F_CTRL_VQ
are enabled when we do feature negotiation.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch mainly adds method in vhost user adapter to communicate
enable/disable queues messages with vhost user backend, aka,
VHOST_USER_SET_VRING_ENABLE.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add a new virtual device named virtio-user, which can be used just like
eth_ring, eth_null, etc. To reuse the code of the original virtio, we make
some adjustments in virtio_ethdev.c, such as removing the _static_ keyword
from eth_virtio_dev_init() so that it can be reused by the virtual device;
and we add some checks to make sure it will not crash.
Configured parameters include:
- queues (optional, 1 by default), number of queue pairs, multi-queue
not supported for now.
- cq (optional, 0 by default), not supported for now.
- mac (optional), random value will be given if not specified.
- queue_size (optional, 256 by default), size of virtqueues.
- path (mandatory), path of vhost user.
When CONFIG_RTE_VIRTIO_USER is enabled (the default), the compiled
library can be used in both VM and container environments.
Examples:
path_vhost=<path_to_vhost_user> # use vhost-user as a backend
sudo ./examples/l2fwd/build/l2fwd -c 0x100000 -n 4 \
--socket-mem 0,1024 --no-pci --file-prefix=l2fwd \
--vdev=virtio-user0,mac=00:01:02:03:04:05,path=$path_vhost -- -p 0x1
Known issues:
- Control queue and multi-queue are not supported yet.
- Cannot work with --huge-unlink.
- Cannot work with no-huge.
- Cannot work when there are more than VHOST_MEMORY_MAX_NREGIONS(8)
hugepages.
- Root privilege is a must (mainly because of sorting hugepages according
to physical address).
- Applications should not use file name like HUGEFILE_FMT ("%smap_%d").
- Cannot work with vhost-net backend.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
This patch is related to how to calculate relative address for vhost
backend.
The principle is that: based on one or multiple shared memory regions,
vhost maintains a reference system with the frontend start address,
backend start address, and length for each segment, so that each
frontend address (GPA, Guest Physical Address) can be translated into
vhost-recognizable backend address. To make the address translation
efficient, we need to maintain as few regions as possible. In the case
of a VM, the GPA is always locally contiguous. But in some other cases, like
virtio-user, GPA contiguity is not guaranteed; therefore, we use virtual
addresses here.
It basically means:
a. when set_base_addr, VA address is used;
b. when preparing RX's descriptors, VA address is used;
c. when transmitting packets, VA is filled in TX's descriptors;
d. in TX and CQ's header, VA is used.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
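A self-contained sketch of the region-based translation described above (structure and function names are hypothetical, not the vhost library's real ones):

    #include <stdint.h>

    struct region_sketch {
        uint64_t guest_addr;    /* frontend (GPA or VA) start */
        uint64_t host_addr;     /* backend start address */
        uint64_t size;
    };

    /* Walk the registered regions and convert a frontend address into a
     * backend one; fewer regions means fewer iterations per translation. */
    static uint64_t frontend_to_backend(const struct region_sketch *r,
                                        unsigned int nregions, uint64_t addr)
    {
        unsigned int i;

        for (i = 0; i < nregions; i++)
            if (addr >= r[i].guest_addr &&
                addr < r[i].guest_addr + r[i].size)
                return addr - r[i].guest_addr + r[i].host_addr;
        return 0;       /* not found */
    }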
This patch moves phys addr check from virtio_dev_queue_setup
to pci ops. To make that happen, make sure virtio_ops.setup_queue
returns the result if we pass through the check.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
We skip kernel-managed virtio devices if they aren't whitelisted.
Before checking if the virtio device is whitelisted, check if devargs
is specified.
Fixes: ac5e1d838d ("virtio: skip error when probing kernel managed device")
Reported-by: Vincent Li <vincent.mc.li@gmail.com>
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Add a new parameter (flags) to rte_vhost_driver_register(). DPDK
vhost-user acts in client mode when the RTE_VHOST_USER_CLIENT flag
is set. The flags would also allow future extensions without
breaking the API (again).
The rest is straightforward then: allocate a Unix socket, and
bind/listen for the server, connect for the client.
This extension is for vhost-user only, therefore we simply quit
and report an error when any flags are given for vhost-cuse.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
With all the previous prepare works, we are just one step away from
the final ABI refactoring. That is, to change current API to let them
stick to vid instead of the old virtio_net dev.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
This change could let us avoid the dependency of "virtio_net"
struct, to prepare for the ABI refactoring.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_ifname() to export the ifname to
application.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_queue_num() to export the number of
queues.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Introduce a new API rte_vhost_get_numa_node() to get the numa node
from which the virtio_net struct is allocated.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
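A small usage sketch of the new vid-based query APIs (the header name and prototypes are as assumed here; check rte_virtio_net.h for the exact signatures):

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <rte_virtio_net.h>

    /* Query and print basic information about a vhost device by vid. */
    static void dump_vhost_device(int vid)
    {
        char ifname[PATH_MAX];
        int numa = rte_vhost_get_numa_node(vid);
        uint32_t queues = rte_vhost_get_queue_num(vid);

        if (rte_vhost_get_ifname(vid, ifname, sizeof(ifname)) == 0)
            printf("vhost dev %d: %s, %u queues, NUMA node %d\n",
                   vid, ifname, queues, numa);
    }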
It does not make sense to ask the application to set/unset the flag
VIRTIO_DEV_RUNNING (which is used internally only) in the new_device()/
destroy_device() callbacks.
Instead, it should be set after new_device() succeeds and reset before
destroy_device() is invoked, inside the vhost lib. This patch fixes it.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
We keep a common vq structure, containing only vq related fields,
and then split others into RX, TX and control queue respectively.
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
[Jianfeng Tan: found and fixed 2 bugs]
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The commit dd856dfcb9 introduced an optimization that prepends the virtio
header to the mbuf data. It can be used when the Tx mbuf is writable, so we
need to check that the mbuf is direct (i.e. it embeds its own data).
Fixes: dd856dfcb9 ("virtio: use any layout on Tx")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The class id is not filled in, which makes probing fail.
Updated the code to use RTE_PCI_DEVICE, which fills
the class id with a wildcard value.
Fixes: 701c8d80c8 ("pci: support class id probing")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso library supports the SSE4.1 instruction set,
so the feature flags of the crypto device must be updated
to reflect this.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Underlying libsso_snow3g library now supports bit-level
operations, so PMD has been updated to allow them.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
In order to avoid using magic numbers, macros for
the IV and digest lengths for Snow3G have been added.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The underlying libsso library that the SNOW3G PMD uses has been updated,
so now it is called libsso_snow3g. Also, the path to the library
has been renamed to reflect this change (now called LIBSSO_SNOW3G_PATH).
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Added new SW PMD which makes use of the libsso_kasumi SW library,
which provides wireless algorithms KASUMI F8 and F9
in software.
This PMD supports cipher-only, hash-only and chained operations
("cipher then hash" and "hash then cipher") of the following
algorithms:
- RTE_CRYPTO_SYM_CIPHER_KASUMI_F8
- RTE_CRYPTO_SYM_AUTH_KASUMI_F9
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
The variables AESNI_MULTI_BUFFER_LIB_PATH and LIBSSO_PATH
are not required for "make clean".
It is the same fix as in the commit e277b2397.
Fixes: eec136f3c5 ("aesni_gcm: add driver for AES-GCM crypto operations")
Fixes: 3aafc423cf ("snow3g: add driver for SNOW 3G library")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the test-pmd
and proc_info applications to use the new xstats API, and removes
deprecated code associated with the old API.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the virtio driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the i40e driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the fm10k driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the e1000 driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
The current extended ethernet statistics fetching involves doing several
string operations, which causes performance issues if there are lots of
statistics and/or network interfaces. This patch changes the ixgbe driver
to use the new API that separates name string and value queries.
Signed-off-by: Remy Horton <remy.horton@intel.com>
Although ppc supports both endiannesses, qemu assumes that the cpu is
big endian and enforces this for the virtio-net stuff.
Fix PCI accesses in legacy mode. Only ppc64le is supported at the moment.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
The behavior of PKT_RX_VLAN_PKT was not very well defined, resulting in
PMDs not advertising the same flags in similar conditions.
Following discussion in [1], introduce 2 new flags PKT_RX_VLAN_STRIPPED
and PKT_RX_QINQ_STRIPPED that are better defined:
PKT_RX_VLAN_STRIPPED: a vlan has been stripped by the hardware and its
tci is saved in mbuf->vlan_tci. This can only happen if vlan stripping
is enabled in the RX configuration of the PMD.
For now, the old flag PKT_RX_VLAN_PKT is kept but marked as deprecated.
It should be removed from applications and PMDs in a future revision.
This patch also updates the drivers. For PKT_RX_VLAN_PKT:
- e1000, enic, i40e, mlx5, nfp, vmxnet3: done, PKT_RX_VLAN_PKT already
had the same meaning as PKT_RX_VLAN_STRIPPED, only a minor update is
required.
- fm10k: done, PKT_RX_VLAN_PKT already had the same meaning as
PKT_RX_VLAN_STRIPPED, and vlan stripping is always enabled on fm10k.
- ixgbe: modification done (vector and normal), the old flag was set
when a vlan was recognized, even if vlan stripping was disabled.
- the other drivers do not support vlan stripping.
For PKT_RX_QINQ_PKT, it was only supported on i40e, and the behavior was
already correct, so we can reuse the same bit value for
PKT_RX_QINQ_STRIPPED.
[1] http://dpdk.org/ml/archives/dev/2016-April/037837.html
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
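On the application side the change boils down to checking the new flag (a small sketch):

    #include <stdio.h>
    #include <rte_mbuf.h>

    /* Check the new, better-defined flag instead of the deprecated
     * PKT_RX_VLAN_PKT. */
    static void show_vlan(const struct rte_mbuf *m)
    {
        if (m->ol_flags & PKT_RX_VLAN_STRIPPED)
            printf("VLAN stripped by hw, tci=0x%04x\n", m->vlan_tci);
    }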
SYSFS_PCI_DEVICES is a constant that makes PCI testing
difficult as it points to an absolute path. We stop using this
constant and introduce a function, pci_get_sysfs_path(), that gives
the same value. However, the user can pass a SYSFS_PCI_DEVICES env
variable to override the path. It is now possible to create a fake
sysfs hierarchy for testing.
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
This patch adds missing DEPDIRS to avoid any library referring to
symbols it is not linked against.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Add the missing external dependency to pthread to avoid referring to
symbols the library is not linked against.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Up to now dependencies between DPDK internal libraries have been
untracked at shared library level, requiring applications to know
about library internal dependencies and often consequently overlinking.
Since the dependencies are already recorded for build ordering in the
makefiles with DEPDIRS-y we can use that information to generate LDLIBS
entries for internal libraries automatically.
Also revert commit 8180554d82 ("vhost: fix linkage of driver with
library") which is made redundant by this change.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
This patch provides counter mode support to AES-NI multi-buffer library.
The following cipher algorithm is enabled:
- RTE_CRYPTO_CIPHER_AES_CTR
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added possibility for AES to work in counter mode
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
To avoid the GCC warning about "dereferencing type-punned pointer will break
strict-aliasing rules", the aad_len pointer is dereferenced instead of
directly dereferencing a uint32_t* cast of the middle of an array.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Fix an error in the computation of the physical address of the
content descriptor in the symmetric operations session.
Fixes: 1703e94ac5 ("qat: add driver for QuickAssist devices")
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Fix null pointer dereferencing by reporting if null and
exiting the function.
Coverity issue: 126584
Fixes: c0f87eb525 ("cryptodev: change burst API to be crypto op oriented")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Fix wrong indentation for return value
Coverity issue: 126585
Fixes: 924e84f873 ("aesni_mb: add driver for multi buffer based crypto")
Signed-off-by: Deepak Kumar Jain <deepak.k.jain@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Removed comparison against $CC in Makefiles, as in cross-compiling mode
CC can be a string other than "gcc".
Suggested-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The qede driver depends on libz but the LDLIBS entry in makefile
was missing. Also because of the external dependency, make it
disabled in default config as per common DPDK policy on external deps.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Some 64-bit variables are printed for debug.
The PRIx64 format macro must be used because %lx is not long enough
on 32-bit systems.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Harish Patil <harish.patil@qlogic.com>
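For example (a generic sketch of the fix, not the exact qede code):

#include <inttypes.h>
#include <stdio.h>

static void
dump_reg(uint64_t val)
{
	/* PRIx64 expands to the correct conversion on both 32-bit and
	 * 64-bit targets, unlike a hard-coded "%lx". */
	printf("reg = 0x%" PRIx64 "\n", val);
}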
The RTE_ETH_VALID_PORTID_OR_ERR_RET macro is used in some places
to check if a port id is valid or not. This commit makes use of it in
some new parts of the code.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
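Typical usage at the top of an ethdev entry point looks like this (a sketch;
the function itself is hypothetical):

#include <rte_ethdev.h>

static int
example_port_op(uint8_t port_id)
{
	/* Return -EINVAL immediately if port_id is not a valid port. */
	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);

	/* ... operate on the port ... */
	return 0;
}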
Some architectures (ex: Power8) have a cache line size of 128 bytes,
so the drivers should not expect that prefetching the second part of
the mbuf with rte_prefetch0(&m->cacheline1) is valid.
This commit adds helpers that can be used by drivers to prefetch the
Rx or Tx part of the mbuf, whatever the cache line size.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
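Assuming the helper names rte_mbuf_prefetch_part1()/rte_mbuf_prefetch_part2(),
a driver Rx loop can prefetch like this (a sketch):

#include <rte_mbuf.h>

static inline void
prefetch_rx_mbuf(struct rte_mbuf *m)
{
	/* First part of the mbuf: always in the first cache line. */
	rte_mbuf_prefetch_part1(m);
	/* Second part: the helper knows whether this is a separate
	 * cache line (64B systems) or not (128B systems). */
	rte_mbuf_prefetch_part2(m);
}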
Introduce rte_mempool_populate_default() which allocates
mempool objects in several memzones.
The mempool header is now always allocated in a specific memzone
(not with its objects). Thanks to this modification, we can remove
much of the specific behavior that was required when hugepages are not
enabled, in the case where rte_mempool_xmem_create() is used.
This change requires to update how kni and mellanox drivers lookup for
mbuf memory. For now, this will only work if there is only one memory
chunk (like today), but we could make use of rte_mempool_mem_iter() to
support more memory chunks.
We can also remove RTE_MEMPOOL_OBJ_NAME that is not required anymore for
the lookup, as memory chunks are referenced by the mempool.
Note that rte_mempool_create() is still broken (it was the case before)
when there is no hugepage support (rte_mempool_xmem_create() has to be
used). This is fixed in the next commit.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Do not use paddr table to store the mempool memory chunks.
This will allow to have several chunks with different virtual addresses.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Now that the mempool objects are chained into a list, we can use it to
browse them. This implies a rework of rte_mempool_obj_iter() API, that
does not need to take as many arguments as before. The previous function
is kept as a private function, and renamed in this commit. It will be
removed in a next commit of the patch series.
The only internal users of this function are the mellanox drivers. The
code is updated accordingly.
Introducing an API compatibility for this function has been considered,
but it is not easy to do without keeping the old code, as the previous
function could also be used to browse elements that were not added in a
mempool. Moreover, the API is already broken by other patches in this
version.
The library version was already updated in
commit 213af31e09 ("mempool: reduce structure size if no cache needed")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
This commit removes the const qualifier for the mempool in
rte_mempool_walk() callback prototype.
Indeed, most operations that can be done on a mempool require a non-const
mempool pointer, except the dump and the audit. Therefore,
rte_mempool_walk() is more useful if the mempool pointer is not const.
This is required by next commit where the mellanox drivers use
rte_mempool_walk() to iterate the mempools, then rte_mempool_obj_iter()
to iterate the objects in each mempool.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
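A sketch of that combined iteration pattern (the callbacks here are
hypothetical):

#include <rte_mempool.h>

/* Hypothetical per-object callback, e.g. to register each object's
 * memory with a NIC. */
static void
register_obj(struct rte_mempool *mp, void *opaque, void *obj,
	     unsigned int obj_idx)
{
	(void)mp; (void)opaque; (void)obj; (void)obj_idx;
}

/* Per-mempool callback passed to rte_mempool_walk(). */
static void
walk_cb(struct rte_mempool *mp, void *arg)
{
	rte_mempool_obj_iter(mp, register_obj, arg);
}

static void
register_all_mempools(void *arg)
{
	rte_mempool_walk(walk_cb, arg);
}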
This commit renames mempool_obj_ctor_t as mempool_obj_cb_t.
In next commits, we will add the ability to populate the
mempool and iterate through objects using the same function.
We will use the same callback type for that. As the callback is
not a constructor anymore, rename it into rte_mempool_obj_cb_t.
The rte_mempool_obj_iter_t that was used to iterate over objects
will be removed in next commits.
No functional change.
In this commit, the API is preserved through a compat typedef.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Many drivers provide their own implementation of rte_mbuf_raw_alloc(),
duplicating the code. Introduce a new public function in rte_mbuf to
allocate a raw mbuf (uninitialized).
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
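A sketch of the kind of per-driver code this new function replaces when
refilling an Rx ring:

#include <rte_mbuf.h>

static struct rte_mbuf *
rx_refill_one(struct rte_mempool *mp)
{
	/* Raw allocation: the mbuf is not reset; the PMD fills in the
	 * fields it needs (data_off, lengths, ...) itself. */
	struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);

	return m; /* may be NULL if the pool is empty */
}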
With gcc >= 6.0, qede base driver fails to build with:
drivers/net/qede/base/ecore_cxt.c: In function 'ecore_cdu_init_common':
cc1: error: left shift of negative value [-Werror=shift-negative-value]
Since the base drivers are untouchable, work around by disabling
the warning.
Fixes: ec94dbc573 ("qede: add base driver")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
When virtio was proposed in DPDK, there is no API to free memzones.
But this has changed since rte_memzone_free() has been implemented by
commit ff909fe21f ("mem: introduce memzone freeing").
This patch is to make sure memzones in struct virtqueue, like mz and
virtio_net_hdr_mz, are freed when queue is released or setup fails.
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The "drv_flags" is set with device as the input, which means different
device (say, modern vs legacy) could end up with a different value. And
the fact that "drv_flags" is shared by all devices means that every time
we add a new device, it simply overwrites the value configured from the
last device.
Therefore, when two virtio devices have different flags, it may lead to
wrong result, such as virtio would set irq config when it's not supported.
Making the flag per device (using "dev->data->dev_flags") could let us
have different value for each device, which would avoid the above issue.
Fixes: da978dfdc4 ("virtio: use port IO to get PCI resource")
Reported-by: David Marchand <david.marchand@6wind.com>
Suggested-by: David Marchand <david.marchand@6wind.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Avail ring is updated by the frontend and consumed by the backend.
There are frequent core to core cache transfers for the avail ring.
This optimization avoids an avail ring entry index update if the entry
already holds the same value.
As the DPDK virtio PMD implements a FIFO free descriptor list (also for
cache performance reasons), in which descriptors are allocated
from the head and freed to the tail, with this patch in most cases
avail ring will remain the same, then it would be valid in both caches
of frontend and backend.
Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
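The optimization amounts to a conditional store when publishing a descriptor
index, roughly as below (a sketch over a simplified ring layout; the real
driver uses the vring structures):

#include <stdint.h>

struct avail_ring {
	uint16_t num;     /* ring size, power of two */
	uint16_t ring[];  /* descriptor indexes */
};

static inline void
update_avail_entry(struct avail_ring *avail, uint16_t avail_idx,
		   uint16_t desc_idx)
{
	uint16_t slot = avail_idx & (avail->num - 1);

	/* Skip the store if the slot already holds the same value, so
	 * the cache line is not needlessly dirtied and transferred. */
	if (avail->ring[slot] != desc_idx)
		avail->ring[slot] = desc_idx;
}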
Currently, vhost PMD doesn't have linkage for librte_vhost, even though
it depends on librte_vhost APIs. This causes a linkage error if the
conditions below are fulfilled.
- DPDK libraries are compiled as shared libraries.
- DPDK application doesn't link librte_vhost.
- Above application tries to link vhost PMD using '-d' DPDK option.
The patch adds linkage for librte_vhost to the vhost PMD so that the
above error does not occur.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Check the merge-able header size, as the merge-able feature is now
supported. Previously we did not support the merge-able feature, so the
non merge-able header was checked.
Fixes: 13ce5e7eb9 ("virtio: mergeable buffers")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
After the do-while loop, idx could be VQ_RING_DESC_CHAIN_END (32768)
when it's the last vring desc buf we can get. Therefore, following
expression could lead to a segfault error, as it tries to access
beyond the desc memory boundary.
start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
This bug can be reproduced easily with "set fwd txonly" in the
guest PMD, where the dequeue on the host is slower than the guest Tx,
so that running out of free desc bufs is pretty easy.
The fix is straightforward and easy, just remove it, as we have
already set desc flags properly inside the do-while loop.
Fixes: dd856dfcb9 ("virtio: use any layout on Tx")
[Yuanhan Liu: commit log reword]
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Issue: output of applications and debug info of DPDK may be mixed up
in the same line when enabling below debug options of virtio:
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_TX
CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_DRIVER
This patch adds "\n" in the tail of definitions like PMD_RX_LOG,
PMD_TX_LOG, and PMD_DRV_LOG, and removes some "\n" when using these
macros.
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
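For instance, a driver log macro along these lines now appends the newline
itself (a sketch, not the exact virtio definition):

#include <rte_log.h>

/* Callers no longer pass "\n"; the macro terminates the line so that
 * debug output cannot interleave mid-line with application output. */
#define PMD_DRV_LOG(level, fmt, args...) \
	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)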
Previously, for tunnel packets, such as VXLAN/NVGRE, the vlan
tags of the inner header were stripped without putting the vlan
info into the descriptor, which is not the expected behaviour.
This patch fixes it by changing hardware configuration to leave
the inner packet alone.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Add three new functions to the vf ops structure:
* rx_queue_count
* rxq_info_get
* txq_info_get
In all cases, the corresponding PF APIs can be reused.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
The code to provide mbufs for RX used m->data_off instead of
RTE_PKTMBUF_HEADROOM as the position inside the mbuf for the data to be
written. As the mbuf is uninitialised, this could potentially cause Rx
data to be placed at the wrong address in the mbuf - or even outside it.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
RTE_PCI_DRV_DETACHABLE flag is required to indicate that a device
can be detached during execution.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
mbufs were not properly released post-tx when they are chained.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Some apps calling some functions from different threads at the
same time could lead to reconfig problems. Reconfig mechanism is
based on a hardware queue where incrementing a counter signals the
firmware to do the reconfig. If there is a second increment before the
first one has been processed the firmware will stop and a device
reset is necessary.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
librte_malloc was long since merged into librte_eal, mop up the
leftover references to it from driver Makefiles.
Fixes: 2f9d47013e ("mem: move librte_malloc to eal/common")
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Applied with qede Makefile update too.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
On hosts running a non-DPDK PF driver, the VF has no means of changing
the HW CRC strip setting for a RX queue. It's implicitly enabled.
This patch checks if the host is running a non-DPDK PF kernel driver,
and returns an error, if HW CRC stripping was not requested in the port
configuration.
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
In i40evf_config_vlan_pvid the check for NULL for the dev value is
unnecessary, since this value is passed in from the ethdev API which
will ensure that a valid rte_eth_dev structure is provided.
Furthermore, all code paths leading to this function already use the
dev value.
Issue identified by Coverity.
Coverity ID 13302:
There may be a null pointer dereference, or else the comparison against
null is unnecessary.
In i40evf_config_vlan_pvid: All paths that lead to this null pointer
comparison already dereference the pointer earlier
Fixes: 2b12431b53 ("i40e: add vlan stripping and insertion to VF")
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
In Table 8-16 of the "Intel® Ethernet Controller XL710 Datasheet" it is
stated that when the whole packet is written to a single buffer, the
header length field in the descriptor will be 0. This means that when
extracting the packet/data_len field from the descriptor in the driver
we do not need to mask out the extra header-length bits.
Inside the vector driver, this reduces the need to pull all four pktlen
fields into a single register to work on. Instead of a shift and mask,
we now need to only do a shift. Therefore, we can work on each descriptor
independently, processing each using one shift intrinsic and a blend.
This change makes the code shorter and easier to read, so we can pull it
into the main descriptor processing loop instead of needing its own
function. This in turn makes the descriptor processing in the loop as a
whole slightly easier to read as it's more linear.
In terms of performance, in testing this change shows little effect, with
single-core perf tests showing a very slight improvement.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
An analysis of the i40e code using Intel® VTune™ Amplifier 2016 showed
that the code was unexpectedly causing stalls due to "Loads blocked by
Store Forwards". This can occur when a load from memory has to wait
due to the prior store being to the same address, but being of a smaller
size i.e. the stored value cannot be directly returned to the loader.
[See ref: https://software.intel.com/en-us/node/544454]
These stalls are due to the way in which the data_len values are handled
in the driver. The lengths are extracted using vector operations, but those
16-bit lengths are then assigned using scalar operations i.e. 16-bit
stores.
These regular 16-bit stores actually have two effects in the code:
* they cause the "Loads blocked by Store Forwards" issues reported
* they also cause the previous loads in the RX function to actually be a
load followed by a store to an address on the stack, because the 16-bit
assignment can't be done to an xmm register.
By converting the 16-bit store operations into a sequence of SSE blend
operations, we can ensure that the descriptor loads only occur once, and
avoid both the additional stores and loads from the stack, as well as the
stalls due to the blocked loads.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
Later commits to improve the driver will make use of the SSE4.1
_mm_blend_epi16 intrinsic, so:
* set the compilation level to always have SSE4.1 support,
* and add in a runtime check for SSE4.1 as part of the condition checks
for vector driver selection.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
This patch removes several redundant forward declarations
in i40e_fdir.c.
Fixes: a778a1fa2e ("i40e: set up and initialize flow director")
Signed-off-by: Rami Rosen <rami.rosen@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds LLDP and DCBX capabilities to the qede PMD.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The physical link is handled by the management Firmware.
This patch lays the infrastructure for interrupt/attention handling in
the driver, as link change notifications arrive via async interrupts,
as well as the handling of such notifications. It adds async event
notification handler interfaces to the PMD.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
This patch adds features to support configuration of various Layer 2
elements, such as channels and filtering options.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The Qlogic Everest Driver for Ethernet(QEDE) Poll Mode Driver(PMD) is
the DPDK specific module for QLogic FastLinQ QL4xxxx 25G/40G CNA family
of adapters as well as their virtual functions (VF) in SR-IOV context.
This patch adds the QEDE PMD, which interacts with the base driver and
initialises the HW.
This patch content also includes:
- eth_dev_ops callbacks
- Rx/Tx support for the driver
- link default configuration
- change link property
- link up/down/update notifications
- vlan offload and filtering capability
- device/function/port statistics
- qede nic guide and updated overview.rst
Note that the follow-on commits contain the code for the features mentioned
in the documents but not implemented in this patch.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
The base driver is the backend module for the QLogic FastLinQ QL4xxxx
25G/40G CNA family of adapters as well as their virtual functions (VF)
in SR-IOV context.
The purpose of the base module is to:
- provide all the common code that will be shared between the various
drivers that would be used with said line of products. Flows such as
chip initialization and de-initialization fall under this category.
- abstract the protocol-specific HW & FW components, allowing the
protocol drivers to have clean APIs, which are detached in their
slowpath configuration from the actual Hardware Software Interface (HSI).
This patch adds a base module without any protocol-specific bits.
I.e., this adds a basic implementation that almost entirely falls under
the first category.
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
Fix issue reported by Coverity.
Coverity ID 13193: Bad bit shift operation (BAD_SHIFT)
large_shift: In expression 1 << pool, left shifting by more than 31 bits
has undefined behavior. The shift amount, pool, is at least 32.
This patch is a rework of the register addr selection logic and mask
computation to make it more readable and to avoid bit overflow when a
32-bit value is shifted past its size for pool > 31.
Fixes: fe3a45fd41 ("ixgbe: add VMDq support")
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
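The idea of the rework is to pick the register half first and shift by the
remainder, so a 32-bit value is never shifted by 32 or more. A sketch,
assuming the ixgbe base-driver register macros and types:

#include <stdint.h>

static void
vlvf_set_pool_bit(struct ixgbe_hw *hw, uint32_t vlvf_index, uint32_t pool)
{
	/* Two 32-bit VLVFB registers cover pools 0..63: pool / 32 picks
	 * the register, pool % 32 keeps the shift below 32 bits. */
	uint32_t reg_offset = IXGBE_VLVFB(vlvf_index * 2 + pool / 32);
	uint32_t bits = IXGBE_READ_REG(hw, reg_offset);

	bits |= UINT32_C(1) << (pool % 32);
	IXGBE_WRITE_REG(hw, reg_offset, bits);
}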
The statistics queried by calling rte_eth_stats_get are zero when
the API is first called on the port. The root cause is that the
offset_loaded flag is not set correctly after device start.
This patch fixes this issue by resetting statistics at initialization
time. The resetting process will set offset_loaded flag.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
When building a chain of mbufs for a multi-segment packet, the
packet_type field resides at the end of the chain. It should be
copied forward to the head of the list.
Also, RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE is used to guard packet-type
computation. The mbuf fields are not copied when this define is not set.
Fixes: fe65e1e1ce ("fm10k: add vector scatter Rx")
Signed-off-by: Michael Frasca <michael.frasca@oracle.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The masking for the RX/TX enable bit was incorrect in the rx and tx
queue stop functions. Instead of using "& MASK" it used "| MASK", which
would always return true. This error was found by a Coverity scan.
CID 13215 : Wrong operator used (CONSTANT_EXPRESSION_RESULT)
operator_confusion: txdctl | 33554432 is always 1/true regardless of the
values of its operand. This occurs as the logical second operand of
'&&'.
CID 13216 : Wrong operator used (CONSTANT_EXPRESSION_RESULT)
operator_confusion: rxdctl | 33554432 is always 1/true regardless of the
values of its operand. This occurs as the logical second operand of
'&&'.
Coverity issue: 13215
Coverity issue: 13216
Fixes: 029fd06d40 ("ixgbe: queue start and stop")
Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
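The corrected wait loop tests the enable bit with a bitwise AND, along these
lines (a sketch, assuming the ixgbe base-driver register macros and types):

#include <rte_cycles.h>

static int
wait_tx_queue_disabled(struct ixgbe_hw *hw, uint16_t reg_idx)
{
	int poll_ms = 10;
	uint32_t txdctl;

	do {
		rte_delay_ms(1);
		txdctl = IXGBE_READ_REG(hw, IXGBE_TXDCTL(reg_idx));
		/* "& ENABLE" tests the bit; the old "| ENABLE" was always
		 * non-zero, so the loop condition was always true. */
	} while (--poll_ms && (txdctl & IXGBE_TXDCTL_ENABLE));

	return poll_ms ? 0 : -1; /* -1: the queue did not stop in time */
}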
The position of register values within i40e register dumps is
supposed to reflect the register addresses. These were not being
correctly calculated.
Fixes: d9efd0136a ("i40e: add EEPROM and registers dumping")
Signed-off-by: Remy Horton <remy.horton@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Run the ixgbe driver through checkpatch and fix the issues highlighted.
Fix line spacing, some bad indentation, and in a couple of cases use a
short-circuit (already there) return to lessen indentation.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Applied with four additional fixes for issues highlighted by checkpatch
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The macro RTE_VERIFY always checks a condition.
It is optimized with "unlikely" hint.
While this macro is well suited for test applications, it is preferred
in libraries and examples to enable such checks only in debug mode.
That's why the macro RTE_ASSERT is introduced to call RTE_VERIFY only
if built with debug logs enabled.
A lot of assert macros were duplicated and enabled with a specific flag.
Removing these #ifdefs makes it easier to test these code branches
and avoids dead code pitfalls.
The ENA_ASSERT is kept (in debug mode only) because it has more
parameters to log.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
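Conceptually, the relation between the two macros is as follows (an
illustrative sketch, not the exact EAL definitions; rte_panic() and
unlikely() come from EAL, and the exact build guard may differ):

/* RTE_VERIFY always checks. */
#define RTE_VERIFY(exp) do {                                  \
	if (unlikely(!(exp)))                                 \
		rte_panic("assert \"%s\" failed\n", #exp);    \
} while (0)

/* RTE_ASSERT only checks when debug logs are enabled. */
#if RTE_LOG_LEVEL >= RTE_LOG_DEBUG
#define RTE_ASSERT(exp) RTE_VERIFY(exp)
#else
#define RTE_ASSERT(exp) do { } while (0)
#endif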
Some statistics were deprecated since release 2.1 (49f386542a).
The last deprecated counter to be used was imcasts.
The VF loopback statistics are also removed as they are used only
in igb and duplicated in extended statistics.
The new counters should be added to extended statistics.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Compilation fails on 32 bits on Xen driver, due to wrong casting:
drivers/net/xenvirt/virtqueue.h: In function ‘virtqueue_enqueue_xmit’:
drivers/net/xenvirt/virtqueue.h:234:24: error: cast from pointer to integer
of different size [-Werror=pointer-to-int-cast]
start_dp[idx].addr = rte_pktmbuf_mtod(cookie, uint64_t);
^
Fixes: d6b324c00f ("mbuf: get DMA address")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
VxLAN & NVGRE are supported by x550. As we know HW can parse
the packet and tell SW the type info. For VxLAN & NVGRE packets
there's some change: HW will not tell SW the info of the outer
header but of the inner header instead. But we always treat the
info as if it were for the outer header. So the packet type info is
not right when x550 receives VxLAN & NVGRE packets.
As x550 only supports IPv4 VxLAN & NVGRE packets, we can tell
the outer header of VxLAN is IPv4 + UDP, and the outer header
of NVGRE is IPv4 only. What we don't know is if there's
optional field in the outer IPv4 header.
This patch implements the support of packet type for VxLAN &
NVGRE. And it fixes the wrong packet type issue as well.
BTW:
It doesn't fix any existing commit as, although it resolves an
issue, it's more like a new feature than a fix.
Reported-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
If the vhost PMD were configured with more queues than the guest, the old
code would segfault in rte_vhost_enable_guest_notification due to a NULL
virtqueue pointer.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
renamed rte_cryptodev_sym_session.type -> dev_type
(as it's not a session type, but a device type)
renamed rte_crypto_sym_op.type -> sess_type
(as it's not an op type, but a session type)
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
AES-GCM PMD only supports 128-bit keys, but not 192 and 256,
which the capabilities structure was reporting.
Fixes: 26c2e4ad5a ("cryptodev: add capabilities discovery")
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
After some testing, it was found that retrieving numa information
about a vhost device via a call to get_mempolicy is more
accurate when performed during the new_device callback versus
the vring_state_changed callback, in particular upon initial boot
of the VM. Performing this check during new_device is also
potentially more efficient as this callback is only triggered once
during device initialisation, compared with vring_state_changed
which may be called multiple times depending on the number of
queues assigned to the device.
Reorganise the code to perform this check and assign the correct
socket_id to the device during the new_device callback.
Fixes: ee584e9710 ("vhost: add driver on top of the library")
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
With (ICC) 16.0.2 20160204, getting following warnings:
.../drivers/net/ena/base/ena_com.c(492): error #3656: variable
"flags" may be used before its value is set
ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags);
.../drivers/net/ena/base/ena_com.c(1971): error #3656: variable
"mem_handle" may be used before its value is set
ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, len,
For both warnings the variable value is ignored, so there is no defect.
To silence the compiler warning, an initial value is provided to the variables.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Issue:
when using the following CLI in testpmd to enable ipv6 TSO feature
(set --txqflags=0 in the testpmd command)
set verbose 1
csum set ip hw 0
csum set udp hw 0
csum set tcp hw 0
csum set sctp hw 0
csum set outer-ip hw 0
csum parse_tunnel on 0
tso set 800 0
set fwd csum
start
We will not get what we want: the ipv6 packets sent out from IXIA can be
received by i40e, but cannot be forwarded to another port.
The root cause is that when HW does the TSO offload for packets, it does not
only depend on the context descriptor to define the MSS and TSO payload size,
it also needs to know whether the packet is ipv4 or ipv6; we use
i40e_txd_enable_checksum to generate the related fields of the data descriptor.
But the PMD will not call i40e_txd_enable_checksum if only the TSO offload flag
is set. The reason ipv4 works fine for TSO in testpmd csum mode is that the csum
engine will set the ip csum flag when the packet is ipv4 and TSO is enabled, but
it will not set the flag for ipv6, and this flag is what causes
i40e_txd_enable_checksum to be invoked. In both cases the TSO flag will be
set, so we need to use the TSO flag to trigger i40e_txd_enable_checksum.
The right logic here is to enable csum offload for both ipv4 and ipv6 when the
TSO flag is set.
Fixes: e3f0151f ("i40e: enable Tx checksum only for offloaded packets")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
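In essence, the decision to fill the checksum context now also keys off the
TSO flag, roughly like below (a sketch; the helper name is hypothetical, the
mbuf flags are the standard ones):

#include <rte_mbuf.h>

static inline int
needs_cksum_context(uint64_t ol_flags)
{
	/* TSO (PKT_TX_TCP_SEG) now triggers the checksum context on its
	 * own, so IPv6 TSO no longer depends on an IP checksum flag. */
	return (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK |
			    PKT_TX_OUTER_IP_CKSUM | PKT_TX_TCP_SEG)) != 0;
}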
The issue is that the VF's link speed was kept at 10G and the status was
always up. It did not change even when the physical link's status changed.
This patch fixes this issue to make VF's link info consistent with
physical link.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
A problem was found on the i350 VF: TX will happen only once
per 4 packets. If only 1~3 packets are received, they will
not be forwarded. But the real problem is on the RX side. The
reason is that the default RX write-back threshold is changed to
4, so the first 1~3 packets may be held there.
This patch checks the RX wthresh when setting up the RX
queue, and forces it to be 1, so every packet can be handled
immediately.
Fixes: 4a41c17dba ("igb: set default thresholds based on MAC type")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
For simple TX the virtio-net header must be zeroed, but it was using memory
that had been initialized with indirect descriptor tables. This resulted in
"unsupported gso type" errors from librte_vhost.
We can use the same memory for every descriptor to save cachelines in the
vswitch.
Fixes: 6dc5de3a ("virtio: use indirect ring elements")
Signed-off-by: Rich Lane <rich.lane@bigswitch.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
The link speed configuration is now done with bitmaps so 100G speed
requires only a new bit flag.
The actual link speed is a number so its size must be increased from
16-bit to 32-bit.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Tested-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Tested-by: Matej Vido <vido@cesnet.cz>
This patch redesigns the API to set the link speed/s configuration
of an ethernet port. Specifically:
- it allows defining a set of advertised speeds for
auto-negotiation.
- it allows disabling link auto-negotiation (single fixed speed).
- default: auto-negotiate all supported speeds.
A flag autoneg in struct rte_eth_link indicates whether the link speed was a
result of auto-negotiation or was fixed by configuration.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Tested-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Tested-by: Beilei Xing <beilei.xing@intel.com>
Tested-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
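With the reworked API an application either fixes a single speed or
advertises a set of speeds via bit flags, e.g. (a sketch using the flag
names introduced by this series):

#include <rte_ethdev.h>

/* Force a fixed 10G link, auto-negotiation disabled. */
static void
set_fixed_10g(struct rte_eth_conf *conf)
{
	conf->link_speeds = ETH_LINK_SPEED_FIXED | ETH_LINK_SPEED_10G;
}

/* Default behaviour: advertise all supported speeds. */
static void
set_autoneg_all(struct rte_eth_conf *conf)
{
	conf->link_speeds = ETH_LINK_SPEED_AUTONEG;
}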
The speed capabilities of a device can be retrieved with
rte_eth_dev_info_get().
The new field speed_capa is initialized in the drivers without
taking care of device characteristics in this patch.
When the capabilities of a driver are accurate, the table in
overview.rst must be filled.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
The speed numbers ETH_LINK_SPEED_ are renamed ETH_SPEED_NUM_.
The prefix ETH_LINK_SPEED_ is kept for AUTONEG and will be used
for bit flags in next patch.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Some duplex values are changed from 0 to half-duplex when the link is down.
Some drivers are still using their own constants for duplex modes.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Define and use ETH_LINK_UP and ETH_LINK_DOWN where appropriate.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Loop that calculates total number of tx descriptors in slave tx queues
should iterate up to nb_tx_queues, not nb_rx_queues.
Fixes: 3ef7955700 ("bonding: fix LACP mempool size")
Signed-off-by: Vladyslav Buslov <vladyslav.buslov@harmonicinc.com>
Stopping then re-starting a bond interface containing slaves that
used polling for link detection caused the bond to think all slave
links were down and inactive.
Move the start of the polling for link from slave_add() to
bond_ethdev_start() and in bond_ethdev_stop() make sure we clear
the last_link_status of the slaves.
Fixes: a45b288ef2 ("bond: support link status polling")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Signed-off-by: John Daley <johndale@cisco.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch adds out-of-place operations to qat symmetric crypto PMD,
i.e. the result of the operation can be written to the destination buffer
instead of overwriting the source buffer as done in "in-place" operation.
Both buffers can be of different sizes.
Previously the qat PMD assumed that m_src and m_dst in rte_crypto_sym_op
were identical.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: John Griffin <john.griffin@intel.com>
In SUSE11-SP3 i686 platform, with gcc 4.5.1, there are compile issues, e.g:
null_crypto_pmd_ops.c:44:3: error:
unknown field 'sym' specified in initializer
cc1: warnings being treated as errors
The member in anonymous union initialization should be inside '{}',
otherwise it will report an error.
Fixes: 26c2e4ad5a ("cryptodev: add capabilities discovery")
Signed-off-by: Michael Qiu <michael.qiu@intel.com>
Signed-off-by: Fiona Trahe <fiona.trahe@intel.com>
These asserts are only for debugging and never fired during
any testing, but they confuse coverity's null tracking.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
Silence a compiler warning that this variable may be used uninitialized.
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tell the compiler to use an unsigned constant for the config shifts.
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Tell the compiler to use an unsigned constant for the config shifts.
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The device lsc interrupt check has misleading whitespace around it, which
can be improved by adding appropriate braces to the check. Since the ret
variable was checked after previous assignment, this introduces no functional
change.
Fixes: 921a72008f ("e1000: add Rx interrupt handler")
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Panu Matilainen <pmatilai@redhat.com>
The register read/write mphy functions have misleading whitespace around
the `locked` check. This cleanup merely preserves the existing functionality
and suppresses future gcc versions' "misleading indentation" warning.
Suggested-by: Panu Matilainen <pmatilai@redhat.com>
Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
If the provided configuration does not call for RSS, then RSS is
explicitly disabled. Without this change, the device continues to
operate under the previous RSS configuration.
Fixes: 57033cdf8f ("fm10k: add PF RSS")
Signed-off-by: Michael Frasca <michael.frasca@oracle.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Now that vmxnet3 supports TCP/UDP checksum offload, let's update
the default txq flags to allow such offloads. Also fixed the tx
queue setup check to allow TCP/UDP checksum and only error out
if SCTP checksum is requested.
Fixes: f598fd063b ("vmxnet3: add Tx L4 checksum offload")
Reported-by: Heng Ding <hengx.ding@intel.com>
Signed-off-by: Yong Wang <yongwang@vmware.com>
The NFP driver (unlike other PCI devices) was not copying the pci info
from the pci_dev to the eth_dev. This would make the driver_name be
null (along with other unset fields) when an application uses dev_info_get.
This was found by code review; do not have the hardware.
Fixes: d4a27a3b09 ("nfp: add basic features")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Alejandro Lucero <alejandro.lucero@netronome.com>
When the number of RX queues is not a power of two,
the RETA table is configured to its maximum size for
better balancing.
Testing showed that limiting its size to 256 improves performance
noticeably with little to no impact on balancing results.
Fixes: ebb30ec64a ("mlx5: increase RETA table size")
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Once freed, completed mbuf pointers are not set to NULL in the TX queue.
The clean up function must take this into account.
Fixes: 2e22920b85 ("mlx5: support non-scattered Tx and Rx")
Fixes: 7fae69eeff ("mlx4: new poll mode driver")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Calling rte_eth_dev_get_dcb_info to get dcb info from i40e
driver if VMDQ is disabled, results in a segmentation fault.
This patch fixes it by treating VMDQ and No-VMDQ respectively
when querying dcb information.
Fixes: 5135f3ca49 ("i40e: enable DCB in VMDQ VSIs")
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
The lower 16 bits of EICR register are used for queue interrupts,
the DPDK framework takes over the first bit for other interrupts like
LSC, so there are only 15 bits left for queue interrupt mapping.
This patch adds a check for the number of interrupt queues at
dev_start.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
When starting testpmd with multiple queues on a ixgbe VF
port, it failed with this printing, "nb_rxq(4) is greater
than max_rx_queues(1)".
The root cause is the VF doesn't get the right max rx queue
number from PF and it uses the default value 1.
VF max rx queue number is set by PF through mailbox messages.
The message for this setting only supports version 1.1. As the
message version was updated to 1.2, the VF cannot parse the rx queue
number setting message correctly.
This patch brings in a specific base code update for this issue.
Fixes: 72dec9e37a ("ixgbe: support multicast promiscuous mode on VF")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Update the 'imissed' counter with the number of packets dropped
by the NIC.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Nelson Escobar <neescoba@cisco.com>
When the enic was disabled, link notification was correctly disabled
in the NIC but the software indicator that it was disabled was not
updated (vdev->notify_pa not set to 0). When the link came back up,
enic did not re-enable notification in the NIC.
This affected bonding when an enic slave device link bounced.
The fix is to unconditionally enable notification when the enic is
enabled.
Fixes: 9913fbb91d ("enic/base: common code")
Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Nelson Escobar <neescoba@cisco.com>
If the nb_pkts parameter to rte_eth_tx_burst() was greater than
the TX descriptor count, a completion was not being requested
from the NIC, so descriptors would not be released back to the
host causing a lock-up.
Introduce a limit of how many TX descriptors can be used in a single
call to the enic PMD burst TX function before requesting a completion.
Fixes: d739ba4c6a ("enic: improve Tx packet rate")
Signed-off-by: John Daley <johndale@cisco.com>
FreeBSD support was not defined in ena_plat.h, and ETIME is not defined
on FreeBSD.
In file included from DPDK/drivers/net/ena/base/ena_com.h:37:0,
from DPDK/drivers/net/ena/ena_ethdev.h:39,
from DPDK/drivers/net/ena/ena_ethdev.c:41:
DPDK/drivers/net/ena/base/ena_plat.h:48:2: error: #error "Invalid platform"
Fixes: 99ecfbf845 ("ena: import communication layer")
Fixes: 9ba7981ec9 ("ena: add communication layer for DPDK")
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Fix for multiple compilation errors for ICC:
error #188: enumerated type mixed with another type
error #592: variable "flags" is used before its value is set
Fixes: 99ecfbf845 ("ena: import communication layer")
Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
Build log:
/root/dpdk/app/test-pmd/cmdline.c:6687:45: error: no member named
's6_addr32' in 'struct in6_addr'
rte_be_to_cpu_32(res->ip_value.addr.ipv6.s6_addr32[i]);
This is caused by the macro "s6_addr32" not being defined on FreeBSD,
while testpmd swaps the big endian parameter to host endian. Moving the
swap action to the i40e ethdev fixes this issue.
Fixes: 7b1312891b ("ethdev: add IP in GRE tunnel")
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Tested-by: Bruce Richardson <bruce.richardson@intel.com>
The code uses "entry" in the next loop of LIST_FOREACH after calling free()
on it in i40e_res_pool_destroy().
Change to a safe way to free the entry, similar to LIST_FOREACH_SAFE
in FreeBSD.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Jiangu Zhao <zhaojg@arraynetworks.com.cn>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
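The idiom is to save the next pointer before freeing the current entry,
e.g. (a generic sketch; the list and entry types are hypothetical):

#include <sys/queue.h>
#include <rte_malloc.h>

struct pool_entry {
	LIST_ENTRY(pool_entry) next;
};
LIST_HEAD(pool_head, pool_entry);

static void
free_all_entries(struct pool_head *head)
{
	struct pool_entry *entry, *next_entry;

	/* Fetch the next element before freeing the current one, so a
	 * freed node is never dereferenced by the loop. */
	for (entry = LIST_FIRST(head); entry != NULL; entry = next_entry) {
		next_entry = LIST_NEXT(entry, next);
		LIST_REMOVE(entry, next);
		rte_free(entry);
	}
}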
This solves issues when an active device is added to a bond.
If a device to be enslaved already has transmit and/or receive queues
allocated, use those and then create any additional queues that are
necessary.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Eric Kinzie <ehkinzie@gmail.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
On the 82575 chipset, there is a pool of global TX contexts instead of 2
per queue on 82576. See Table A-1 "Changes in Programming Interface
Relative to 82575" of Intel® 82576EB GbE Controller datasheet (*).
In the driver, the contexts are attributed to a TX queue: 0-1 for txq0,
2-3 for txq1, and so on.
In igbe_set_xmit_ctx(), the variable ctx_curr contains the index of the
per-queue context (0 or 1), and ctx_idx contains the index to be given
to the hardware (0 to 7). The size of txq->ctx_cache[] is 2, and must
be indexed with ctx_curr to avoid an out-of-bound access.
Also, the index returned by what_advctx_update() is the per-queue
index (0 or 1), so we need to add txq->ctx_start before sending it
to the hardware.
(*) The datasheet says 16 global contexts, however the IDX fields in TX
descriptors are 3 bits, which gives a total of 8 contexts. The
driver assumes there are 8 contexts on 82575: 2 per queue, 4 txqs.
Fixes: 4c8db5f09a ("igb: enable TSO support")
Fixes: af75078fec ("first public release")
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When using RSS, the number of rxqs has to be a power of two.
This is a problem because there is no API in DPDK that makes
the application aware of that.
A good compromise is to allow the application to request a
number of rxqs that is not a power of 2, but having inactive
queues that will never receive packets. In this configuration,
a warning will be issued to users to let them know that
this is not an optimal configuration.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
l2 tunnel and e-tag are not supported on the new x550em_a NICs, due
to missing checks for that mac type in the code.
This patch adds in the necessary conditional checks to enable the features
for x550em_a.
Fixes: 22e77d4501 ("ixgbe: support L2 tunnel operations")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
An issue is found on x550em NICs: ieee1588 is not working, the time is
always reported as 0.
The root cause is that the timer is only supported by the driver for x550,
switch statement entries are missing for x550em_x and x550em_a. This patch
adds those missing entries.
Fixes: a7740dc130 ("ixgbe: support new devices and MAC types")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
The current_primary_port is initialised to an invalid value
during bonded device creation.
It must be set to a valid value later.
This fix sets it to a valid value when the first slave port
is added to the bonding device.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
icc (icc (ICC) 16.0.1 20151021) is generating the following compile error:
CC ixgbe_rxtx.o
.../drivers/net/ixgbe/ixgbe_rxtx.c(153): error #3656: variable
"free" may be used before its value is set
(nb_free > 0 && m->pool != free[0]->pool)) {
^
Indeed this is a false positive and the code is correct.
The "nb_free" check prevents the free[] access before its value is set.
This icc warning (#3656) is disabled for the file ixgbe_rxtx.c.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Ixgbe HW supports 128 TX queues. However, the full 128 queues are only
available in VT and DCB mode. In normal default "none" mode (VT/DCB off)
the maximum number of available queues is only 64.
The driver doesn't check the mode when reporting the available
number of queues, allowing more than 64 queues to be used in all cases.
If a queue number >= 64 is used in default mode, the TX packets will be dropped
silently.
This change adds a check to forbid using a queue number larger than 64
during device configuration (in default mode), so that the problem is
reported as early as possible.
Fixes: 27b609cbd1 ("ethdev: move the multi-queue mode check to specific drivers")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Internal variable containing the number of TX queues for a device,
was being incorrectly assigned the number of RX queues, instead of TX.
Fixes: 27b609cbd1 ("ethdev: move the multi-queue mode check to specific drivers")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
In the function set_rx_mode, the pointer of device data points
to the wrong address as found in ixgbe code, and fixed in commit:
"ixgbe: fix PF promiscuous mode after VF closed"
Fixes: be2d648a2d ("igb: add PF support")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
There's an issue reported. In the scenario DPDK PF + DPDK VF,
if the VF port is closed, PF port cannot receive packets.
I found that at that time the promiscuous mode is disabled on the PF
port. But it should be enabled.
When the VF port is closed, it sends a message to its PF port to
reset it. During this, the PF port will also reset its own
promiscuous mode. Which promiscuous mode should be set depends on
the parameter stored in the device data. In the function
set_rx_mode, the pointer of device data points to the wrong
address. So, the promiscuous mode is wrong.
Fixes: 00e30184da ("ixgbe: add PF support")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Reported-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Current vector RX can't always set the packet_type properly.
To be more specific:
a) it never sets RTE_PTYPE_L2_ETHER
b) it doesn't handle tunnel ipv4/ipv6 case correctly.
c) it doesn't check whether IXGBE_RXDADV_PKTTYPE_ETQF is set or not.
While a) is pretty easy to fix, b) and c) are not that straightforward
in terms of SIMD ops (especially b).
So far I wasn't able to make vRX support packet_type properly without
noticeable performance loss.
So for now, just remove that functionality from vector RX and
update dev_supported_ptypes_get().
Fixes: 3962541758 ("mbuf: redefine packet type")
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
Notify user otherwise. A similar check has already been added to mlx5 in
commit "mlx5: check port is configured as ethernet device".
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Currently, the maximum number of rx/tx queues is kept by EAL. But
the value is used as below with different meanings in the vhost PMD.
- The maximum number of currently enabled queues.
- The maximum number of currently supported queues.
This wrong double meaning causes an issue with the steps below.
* Invoke application with below option.
--vdev 'eth_vhost0,iface=<socket path>,queues=4'
* Configure queues like below.
rte_eth_dev_configure(portid, 2, 2, ...);
* Configure queues again like below.
rte_eth_dev_configure(portid, 4, 4, ...);
The second rte_eth_dev_configure() will fail because both
the maximum value of current enabled queues and supported queues
will be '2' after calling first rte_eth_dev_configure().
To fix the issue, the patch adds another variable to keep the maximum
number of supported queues in vhost PMD.
Fixes: 23981fb0d78b ("vhost: Add vhost PMD")
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ciara Loftus <ciara.loftus@intel.com>
Issue:
When CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=n in the config file, there
will be a build error:
'i40e_recv_pkts_bulk_alloc' undeclared
Now DPDK i40e PMD uses the preprocessor to choose whether or not to define
the bulk recv functions, but for selection of the RX function, PMD only
depends on a C variable. This causes the inconsistency and leads to the
build error due to the bulk recv function not being defined.
Fixes: 8e109464c0 ("i40e: allow vector Rx and Tx usage")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch extends flow director to select vlan id as part of
filter's input set and program the filter rule with vlan id.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds missing VLAN bitmask for inner frame in case of
tunneling and fixes VLAN tags bitmasks for single or outer frame
in case of tunneling.
Fixes: 98f0557076 ("i40e: configure input fields for RSS or flow director")
Signed-off-by: Andrey Chilikin <andrey.chilikin@intel.com>
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch extends flow director to select more IP Header fields
as filter input set.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds a new function to set the fdir input set to default
at initialization time.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
In this patch, flex payload is removed from valid fdir input set
values. This is because all flex payload configuration can be set
in struct rte_fdir_conf during device configure phase, which is
a more flexible way of setting this up.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
For the input set selection, Hash filter and Flow director shared
the same function, i.e. i40e_filter_inset_select.
For code readability, this patch replaces i40e_filter_inset_select
with two new functions: i40e_hash_filter_inset_select and
i40e_fdir_filter_inset_select for Hash filter and Flow director
respectively.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Virtio has an mbuf descriptor ring containing mbufs to be used for
receiving traffic. When the host queues traffic to be sent to the guest, it
consumes these descriptors. If none exist, it discards the packet.
The virtio pmd allocates mbufs to the descriptor ring every time it
successfully receives a packet. However, it never does it if it does not
receive a valid packet. If the descriptor ring is exhausted, and the mbuf
mempool does not have any mbufs free (which can happen for various reasons,
such as queueing along the processing pipeline), then the receive call will
not allocate any mbufs to the descriptor ring, and when it finishes, the
descriptor ring will be empty. The ring being empty means that we will
never receive a packet again, which means we will never allocate mbufs to
the ring: we are stuck.
Ultimately, the problem arises because there is a dependency between
receiving packets and making the descriptor ring not be empty, and a
dependency between the descriptor ring not being empty, and receiving
packets.
To fix the problem, this patch makes virtio always try to allocate mbufs
to the descriptor ring, if necessary, when polling for packets. Do this by
removing the early exit if no packets were received. Since the packet loop
later will do nothing if there are no packets, this is fine.
I reproduced the problem by pushing packets through a pipelined system
(such as the client_server sample application) after artificially
decreasing the size of the mbuf pool and introducing a delay in a secondary
stage.
Without the fix, the process stops receiving packets fairly quickly. With
the fix, it continues to receive packets.
Fixes: c1f86306a0 ("virtio: add new driver")
Signed-off-by: Kyle Larose <klarose@sandvine.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
This structure has immutable function pointers.
Also fix indentation.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
On initialization, the rq descriptor count was set to the limit
of the vic. When the requested number of rx descriptors was
less than this count, enic_alloc_rq() was incorrectly setting
the count to the lower value. This results in later calls to
enic_alloc_rq() incorrectly using the lower value as the adapter
limit.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
The update function can be called with no key to enable or disable an RSS
protocol, or with a key to be applied to the desired protocols.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
RSS configuration provided by the application should not be used as storage
by the PMD.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
For the x550 device, the reta table has 512 entries, but in the functions
ixgbe_dev_rss_reta_query and ixgbe_dev_rss_reta_update a
"uint8_t i" is used to traverse the entries, which leads the function
into an endless loop.
This patch changes the data type from uint8_t to uint16_t to fix
the issue.
Fixes: 4bee94a6c2 ("ixgbe: support 512 RSS entries on x550")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
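For reference, the loop counter simply has to be wide enough for 512
entries (a sketch):

static void
visit_reta_entries(uint16_t reta_size) /* 512 on x550 */
{
	/* A uint8_t counter wraps at 256 and never reaches 512, so the
	 * loop never terminates; uint16_t covers the full table. */
	uint16_t i;

	for (i = 0; i < reta_size; i++) {
		/* ... read or update RETA entry i ... */
	}
}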
If the packet_error bit in the completion descriptor is set, the
remainder of the descriptor and data are invalid. PKT_RX_MAC_ERR
was set in the mbuf->ol_flags if packet_error was set and used
later to indicate an error packet. But since PKT_RX_MAC_ERR is
defined as 0, mbuf flags and packet types and length were being
misinterpreted.
Make the function enic_cq_rx_to_pkt_err_flags() return true for error
packets and use the return value instead of mbuf->ol_flags to indicate
error packets. Also remove warning for error packets and rely on
rx_error stats.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
In the receive path, the function to set mbuf ol_flags used the
mbuf packet_type before it was set.
Fixes: 947d860c82 ("enic: improve Rx performance")
Signed-off-by: John Daley <johndale@cisco.com>
Add checks to make sure we don't try to allocate more tx or rx queues
than we support.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Add the missing '\n' character to the end of a few print statements.
Fixes: fefed3d1e6 ("enic: new driver")
Signed-off-by: Nelson Escobar <neescoba@cisco.com>
Acked-by: John Daley <johndale@cisco.com>
VLAN insertion can be done in hardware when supported in Verbs. A software
fallback is provided otherwise. The software implementation is also used
when multi-packet send is enabled on a queue, as both features are mutually
exclusive.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Environment variable MLX5_PMD_ENABLE_PADDING enables HW packet padding
in PCI bus transactions.
When packet size is cache aligned and CRC stripping is enabled, 4 fewer
bytes are written to the PCI bus. Enabling padding makes such packets
aligned again.
In cases where PCI bandwidth is the bottleneck, padding can improve
performance by 10%.
It is disabled by default since it can also decrease performance for
unaligned packet sizes.
Signed-off-by: Olga Shern <olgas@mellanox.com>
fix packet padding macro check
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Secondary processes are expected to use queues and other resources
allocated by the primary, however Verbs resources can only be shared
between processes when inherited through fork().
This limitation can be worked around for TX by configuring separate queues
from secondary processes.
Signed-off-by: Or Ami <ora@mellanox.com>
Add driver functions to set link state up or down.
Burst functions are updated to make sure applications cannot attempt to
send/receive after link is brought down.
Signed-off-by: Or Ami <ora@mellanox.com>
When a Linux PF is used with a DPDK VF for the i40e PMD and a PF reset
occurs, an interrupt is delivered via an adminq event to inform the VF of
the reset.
A callback mechanism is introduced for the VF to allow it to invoke a
registered callback when a PF reset happens.
Users can register a callback for this interrupt event using:
    rte_eth_dev_callback_register(portid,
                                  RTE_ETH_EVENT_INTR_RESET,
                                  reset_event_callback,
                                  arg);
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
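A minimal sketch of such a callback is shown below; the recovery steps in the
body are illustrative and application specific:

    #include <rte_ethdev.h>

    static void
    reset_event_callback(uint8_t port_id, enum rte_eth_event_type type, void *arg)
    {
            (void)arg;

            if (type != RTE_ETH_EVENT_INTR_RESET)
                    return;

            /* Example recovery: stop the port, wait for the PF reset to finish,
             * then reconfigure and restart it. */
            rte_eth_dev_stop(port_id);
            /* ... reconfigure queues as needed ... */
            rte_eth_dev_start(port_id);
    }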
Currently, the i40evf PMD uses a global static buffer, shared by multiple
VFs, to send virtchnl commands to the host driver.
This patch changes it to allocate a virtchnl command buffer for each VF.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch introduces a new PMD, implemented as a thin wrapper around
librte_vhost, which means librte_vhost is also needed to compile the PMD.
Vhost messages are handled only when a port is started, so start a port
first, then invoke QEMU.
The PMD has 2 parameters.
- iface: The parameter is used to specify a path to connect to a
virtio-net device.
- queues: The parameter is used to specify the number of the queues
virtio-net device has.
(Default: 1)
Here is an example.
$ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i
To connect to the above testpmd instance, here is an example QEMU command.
$ qemu-system-x86_64 \
<snip>
-chardev socket,id=chr0,path=/tmp/sock0 \
-netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=1 \
-device virtio-net-pci,netdev=net0,mq=on
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Rich Lane <rich.lane@bigswitch.com>
Tested-by: Rich Lane <rich.lane@bigswitch.com>
Update for queue state event name:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This is a PMD for the Amazon ENA (Elastic Network Adapter) Ethernet
family.
The driver operates a variety of ENA adapters through feature negotiation
with the adapter and an upgradable command set.
The ENA driver handles both physical and virtual PCI ENA functions.
Signed-off-by: Evgeny Schemeilin <evgenys@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Release Note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Implementation of platform specific code for ENA communication layer.
Signed-off-by: Evgeny Schemeilin <evgenys@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Low level common abstraction for ENA device communication.
Signed-off-by: Netanel Belgazal <netanel@amazon.com>
Signed-off-by: Jan Medala <jan@semihalf.com>
Signed-off-by: Jakub Palider <jpa@semihalf.com>
Add a new API, rte_eth_dev_get_supported_ptypes, to query which packet types
can be filled by a given device. The device should already be started, or
its PMD RX burst function already decided, since the supported packet types
may vary depending on the RX function.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
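A usage sketch of the new API; the mask, buffer size and helper name are
illustrative:

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    print_l3_ptypes(uint8_t port_id)
    {
            uint32_t ptypes[16];
            int num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L3_MASK,
                                                        ptypes, 16);
            int i;

            for (i = 0; i < num; i++)   /* num < 0 means the query failed */
                    printf("port %u supports ptype 0x%08x\n",
                           (unsigned int)port_id, ptypes[i]);
    }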
Commit e86a699cf6 missed two further libm dependencies: ceil() used
by librte_meter is typically inlined, so the missing dependency does not
actually cause failures, and librte_pmd_nfp is not built by default,
so it is easy to miss.
This causes duplicates in LDLIBS in many configurations, so it is vital
that they are removed before being passed to the linker.
Fixes: e86a699cf6 ("mk: fix shared library dependencies on libm and librt")
Reported-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Comment for "ierrors" counter says that it counts erroneous received
packets. But for some reason "imissed" counter is added to "ierrors"
counter in most drivers.
It is a mistake, because missed packets are obviously not received.
This patch fixes it.
Fixes: 70bdb18657 ("ethdev: add Rx error counters for missed, badcrc and badlen packets")
Fixes: 6bfe648406 ("i40e: add Rx error statistics")
Fixes: 856505d303 ("cxgbe: add port statistics")
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
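A sketch of the intended accounting; the hw_* counter names are illustrative,
only the rte_eth_stats fields are real:

    #include <rte_ethdev.h>

    static void
    fill_basic_stats(struct rte_eth_stats *stats, uint64_t hw_missed,
                     uint64_t hw_crc_err, uint64_t hw_len_err)
    {
            /* Missed packets were dropped before reception; keep them separate. */
            stats->imissed = hw_missed;
            /* Erroneous received packets only; do not add imissed here. */
            stats->ierrors = hw_crc_err + hw_len_err;
    }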
If a bonded device is created when there are no slave devices
there is a loop in bond_ethdev_promiscuous_enable() which results
in a segmentation fault.
The solution is to initialise the current_primary_port to an
invalid port value when the bonded port is created.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The current code for detecting link during slave addition can cause a
slave interface to be activated twice -- once during slave_configure()
and again at the end of __eth_bond_slave_add_lock_free(). This will
either cause the active slave count to be incorrect or will cause the
802.3ad activation function to panic. Ensure that the interface is not
activated more than once.
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
If the link state of a slave is "up" when added, it is added to the list
of active slaves but, even if it is the only slave, is not selected as
the primary interface. Generally, handling of link state interrupts
selects an interface to be primary, but only if the active count is zero.
This change avoids the situation where there are active slaves but
no primary.
Fixes: 2efb58cbab ("bond: new link bonding library")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
The bonding PMD in mode 4 puts all enslaved interfaces into promiscuous
mode in order to receive LACPDUs and must filter unwanted packets
after the traffic has been "collected". Allow broadcast and multicast
through so that ARP and IPv6 neighbor discovery continue to work.
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Copy all needed fields from the mode8023ad_private structure in
bond_mode_8023ad_conf_get(). This helps ensure that a subsequent call
to rte_eth_bond_8023ad_setup() is not passed uninitialized data that
would result in either incorrect behavior or a failed sanity check.
Fixes: 46fb436836 ("bond: add mode 4")
Signed-off-by: Eric Kinzie <ekinzie@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Ensure that a bonded slave device is not detached,
until it is removed from the bonded device.
Fixes: 2efb58cbab ("bond: new link bonding library")
Fixes: a45b288ef2 ("bond: support link status polling")
Fixes: 494adb7f63 ("ethdev: add device fields from PCI layer")
Fixes: b1fb53a39d ("ethdev: remove some PCI specific handling")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Check that the bonded device has no slaves before detaching it.
Fixes: 8d30fe7fa7 ("bonding: support port hotplug")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
When a device is created with "CREATE" as action, new rings are
allocated for it, then it is a good practice to free them when the
rte_ethdev_dettach method is invoked by the application.
Rings are not freeded when "ATTACH" is used or when the device is
created by means of the rte_eth_from_rings function.
Signed-off-by: Mauricio Vasquez B <mauricio.vasquezbernal@studenti.polito.it>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Rename the nb_rx/tx_queues fields in the internals struct to max_rx/tx_queues.
The renamed fields are required to keep the maximum configured queue numbers;
for the current queue counts, the data->nb_rx/tx_queues fields are used.
Also some checkpatch corrections and code cleanup.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
1- Remove duplicate nb_rx/tx_queues fields from internals
2- Move duplicate code into a common function
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Nicolás Pernas Maradei <nicolas.pernas.maradei@emutex.com>
The actual captured length is header.caplen, whereas header.len is
the original length on the wire.
Fixes: 4c173302c3 ("pcap: add new driver")
Signed-off-by: Dror Birkman <dror.birkman@lightcyber.com>
Acked-by: Nicolás Pernas Maradei <nicolas.pernas.maradei@emutex.com>
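A sketch of the relevant part of the RX path, with the mbuf handling
simplified and the helper name illustrative:

    #include <pcap.h>
    #include <rte_mbuf.h>
    #include <rte_memcpy.h>

    /* Only header->caplen bytes of 'packet' are valid; header->len is the
     * original on-wire length and may be larger than what was captured. */
    static void
    copy_capture(struct rte_mbuf *mbuf, const struct pcap_pkthdr *header,
                 const u_char *packet)
    {
            rte_memcpy(rte_pktmbuf_mtod(mbuf, void *), packet, header->caplen);
            mbuf->data_len = (uint16_t)header->caplen;
            mbuf->pkt_len = header->caplen;
    }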
Allow dynamic deallocation of af_packet device through proper
API functions. To achieve this:
* set device flag to RTE_ETH_DEV_DETACHABLE
* implement rte_pmd_af_packet_devuninit() and expose it
through rte_driver.uninit()
* copy device name to ethdev->data to make it discoverable with
rte_eth_dev_allocated()
Moreover, make af_packet init function static, as there is no
reason to keep it public.
Signed-off-by: Wojciech Zmuda <woz@semihalf.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Allow overriding the base mac address of the device.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Remy Horton <remy.horton@intel.com>
During an MTU change, the adapter is restarted. If hardware VLAN offload
is in use, the existing filter table would also be cleared. Instead,
set up the shadow table once during device initialization and just update
it during restart.
vmxnet3_dev_vlan_offload_set(dev, mask) was incorrectly treating the
mask parameter as the bitmask for vlan_strip and vlan_filter, whereas
the mask indicates only what has changed - the values for
vlan_stripping and vlan_filter need to be taken from dev_conf.rxmode.
Fixes: f003fc3834 ("vmxnet3: enable vlan filtering")
Signed-off-by: Charles (Chas) Williams <ciwillia@brocade.com>
Signed-off-by: Nachiketa Prachanda <nprachan@brocade.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Remy Horton <remy.horton@intel.com>
Add support for linking multi-segment buffers together to
handle Jumbo packets. The vmxnet3 API supports having header
and body buffer types. What this patch does is fill the primary
ring completely with header buffers and the secondary ring
with body buffers. This allows non-jumbo frames to use only
one mbuf (from the primary ring), while jumbo frames will have the
first mbuf from the primary ring and the following mbufs from the other
ring.
This could be optimized in the future if DPDK had an API
to supply different-sized mbufs (two pools) to the driver.
Signed-off-by: Stephen Hemminger <shemming@brocade.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Release note addition:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This commit adds vmxnet3 TSO support.
Verified with testpmd (set fwd csum) that both TSO and
non-TSO packets can be successfully transmitted and all
segments of a TSO packet are correct on the receiver side.
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Tx data ring support was removed in a previous change that
added multi-seg transmit. This change adds it back.
According to the original commit (2e849373), 64B pkt
rate with l2fwd improved by ~20% on an Ivy Bridge
server at which point we start to hit some bottleneck
on the rx side.
I also re-did the same test on a different setup (Haswell
processor, ~2.3GHz clock rate) on top of the master
and still observed ~17% performance gains.
Fixes: 7ba5de417e ("vmxnet3: support multi-segment transmit")
Signed-off-by: Yong Wang <yongwang@vmware.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
All the error checks in virtqueue_enqueue_xmit are already done
by the caller. Therefore they can be removed to improve performance.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Virtio supports a feature that allows the sender to prepend the transmit
header to the data. It requires that the mbuf is writable and correctly
aligned, and that the feature has been negotiated. If all this works out,
it is the optimal way to transmit a single-segment packet.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
The virtio ring in QEMU/KVM is usually limited to 256 entries
and the normal way that virtio driver was queuing mbufs required
nsegs + 1 ring elements. By using the indirect ring element feature
if available, each packet will take only one ring slot even for
multi-segment packets.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Applied with coding standards fixes:
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The virtio_net_hdr descriptors all pointed to the same buffer. This does not
cause an issue because the header is not used in the simple TX mode. This
patch makes each header descriptor point to a different buffer.
Fixes: b4ae9c505f ("virtio: optimize ring layout")
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
The initialisation of nb_rx_queues and nb_tx_queues has been removed
from eth_virtio_dev_init.
The nb_rx_queues and nb_tx_queues were being initialised in
eth_virtio_dev_init before the tx_queues and rx_queues arrays were
allocated.
The arrays are allocated when the ethdev port is configured and the
nb_tx_queues and nb_rx_queues are initialised.
If any of the following functions were called before the ethdev
port was configured there was a segmentation fault because
rx_queues and tx_queues were NULL:
rte_eth_stats_get
rte_eth_stats_reset
rte_eth_xstats_get
rte_eth_xstats_reset
Fixes: 823ad64795 ("virtio: support multiple queues")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Fix the issue that the virtio device cannot be started after being stopped.
The field hw->started should be changed by virtio_dev_start/stop instead
of virtio_dev_close.
Fixes: a85786dc81 ("virtio: fix states handling during initialization")
Reported-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Tested-by: Pavel Fedin <p.fedin@samsung.com>
Mmap the PCI resource file and add inline functions for reading from and
writing to the PCI resource address space.
Add a description of the IBUF and OBUF address space.
Add a configuration option for selecting which firmware type is used:
the correct address space values for IBUF and OBUF offsets are chosen
according to the configuration option CONFIG_RTE_LIBRTE_PMD_SZEDATA2_AS.
Setting the link up/down and getting link status information is done
through the mmapped PCI resource address space.
Signed-off-by: Matej Vido <vido@cesnet.cz>
The PMD was of type PMD_VDEV, which means the PCI device was not recognised
automatically during EAL initialization but had to be created with the
EAL option --vdev.
Now the PMD is of type PMD_PDEV, which means the PCI device is probed
and recognised automatically during EAL initialization.
The path to the szedata2 device file is matched with the device, and the
number of available RX and TX DMA channels is discovered during device
initialization.
Initialization, starting and stopping of queues are changed to better
correspond with the Ethernet device API model. The function callbacks
(rx|tx)_queue_(start|stop) are added. Unnecessary items are removed
from the ethernet device private data structure.
Signed-off-by: Matej Vido <vido@cesnet.cz>
When using start-stop functionality the per queue fields need to
be properly reset.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Even with tx checksum offload available, do not set the flag by default.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
The mbuf ol_flags field was changed to uint64_t with DPDK version 1.8.
Fixes: b812daadad ("nfp: add Rx and Tx")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
The file sys/io.h was included, but it can be unavailable in some
non-x86 toolchains.
Like other system includes in nfp_net.c, it appears to be unnecessary,
so the easy fix is to remove it.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Change rxq_cq_to_ol_flags() to set checksum flags according to packet type,
so for non L3/L4 packets the mbuf chksum_bad flags will not be set.
Fixes: 67fa62bc67 ("mlx5: support checksum offload")
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
RSS configuration should not be freed when priv is NULL.
Fixes: 2f97422e77 ("mlx5: support RSS hash update and get")
Signed-off-by: Or Ami <ora@mellanox.com>
The first and last memory pool elements are usually cache-aligned but not
page-aligned, particularly when using huge pages.
Hardware performance can be improved significantly by registering memory
regions starting and ending on page boundaries.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
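A sketch of the page-boundary rounding; the helper name and access flags are
illustrative, not the PMD's actual registration path:

    #include <stdint.h>
    #include <unistd.h>
    #include <infiniband/verbs.h>

    static struct ibv_mr *
    register_aligned_region(struct ibv_pd *pd, void *start, void *end)
    {
            uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);
            uintptr_t lo = (uintptr_t)start & ~(page - 1);            /* round down */
            uintptr_t hi = ((uintptr_t)end + page - 1) & ~(page - 1); /* round up */

            return ibv_reg_mr(pd, (void *)lo, hi - lo, IBV_ACCESS_LOCAL_WRITE);
    }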
Avoid dereferencing pointers twice to get to fast Verbs functions by
storing them directly in RX/TX queue structures.
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Allows HW to strip the 802.1Q header from incoming frames and report it
through the mbuf structure.
This feature requires MLNX_OFED >= 3.2.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Upcoming flow director support will reuse this function to generate filter
rules.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Until now, broadcast frames were handled like unicast. Moving the related
flow to the special flows table frees up the related unicast MAC entry.
The same method is used to handle IPv6 multicast frames.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Merge redundant code by adding a static initialization table to manage
promiscuous and allmulticast (special) flows.
New function priv_rehash_flows() implements the logic to enable/disable
relevant flows in one place from any context.
Signed-off-by: Yaacov Hazan <yaacovh@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The documentation specifies that the hardware only supports RX queue
counts that are a power of 2.
Since ibv_exp_create_qp may not return an error when the number of
queues is unsupported by the hardware, sanitize the value in dev_configure.
Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
When compiling with clang 3.6, the mlx4 driver gives the following error
message about an unneeded function.
  CC mlx4.o
.../drivers/net/mlx4/mlx4.c:136:20: fatal error: function
      'wr_id_t_check' is not needed and will not be emitted
      [-Wunneeded-internal-declaration]
static inline void wr_id_t_check(void)
                   ^
1 error generated.
The function exists only to check the size of wr_id_t at compile time,
so use the standard DPDK BUILD_BUG_ON macro to do so in the init function
instead.
Fixes: 7fae69eeff ("mlx4: new poll mode driver")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
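One way such a compile-time check can be expressed with DPDK's
RTE_BUILD_BUG_ON is sketched below; the placement and the assumption that
wr_id_t (the union from mlx4.c) must match the 64-bit wr_id field are
illustrative:

    #include <stdint.h>
    #include <rte_common.h>

    static void
    mlx4_size_checks(void)   /* illustrative placement in the init path */
    {
            RTE_BUILD_BUG_ON(sizeof(wr_id_t) != sizeof(uint64_t));
    }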
This patch enables reading sglort (global resource tag) info into the
mbuf for RX and inserting an FTAG (Fabric Tag) at the beginning of the
packet for TX. The vlan_tci_outer field of the rte_mbuf structure, selected
to hold the sglort, is not otherwise used by fm10k for now.
In FTAG-based forwarding mode, the switch forwards packets according
to the glort info in the FTAG rather than the MAC and VLAN table.
To activate this feature, the user needs to pass a devargs parameter to EAL
for the fm10k device, e.g. "-w 0000:84:00.0,enable_ftag=1". Currently this
feature is supported only on the PF, because the FM10K_PFVTCTL register is
read-only for the VF.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Remove the unused element request_lport_map in struct fm10k_mac_ops.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Some cleanups to better reflect the code that was actually pushed out to
the upstream Linux community.
Among the above cleanups, a few macros such as FM10K_RXINT_TIMER_SHIFT are
removed, but they are needed in dpdk/fm10k, so we have to put all these
necessary macros into fm10k_osdep.h.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Per comments from an upstream kernel patch, and looking at how the TLV
LE_STRUCT code works, we actually want these structures to be 4-byte
aligned, not 1-byte aligned.
In practice, 1-byte alignment has worked so far because all our
structures end up being a multiple of 4 bytes. But if a future TLV
structure were added with a u8 or similar sticking on the end, things
would break. Fix this by using 4-byte alignment, which prevents the
TLV LE_STRUCT code from breaking, and update the comment to explain that
our structures need 4-byte alignment.
Fixes: 925c862cbc ("fm10k/base: pack TLV overlay structures")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The comment for fm10k_iov_msg_lport_state_pf was changed during
review of kernel driver, and the new wording is slightly clearer.
Re-write the comment in base code based on this new wording.
Fix a number of mailbox comment issues with function header comments,
lower-case acronyms (i.e. FIFO, TLV), incorrect function names in
DEBUGFUNC(), duplicate comments and a stubbed-out header comment for
fm10k_sm_mbx_init.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The vid variable name is shorthand for VLAN ID, so we should use this in
comments explaining what is happening.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The Linux kernel provides a call, "pcie_get_minimum_link", which
can crawl the PCIe tree and determine the actual minimum link speed of a
device; this is a more general check than the one provided by
is_slot_appropriate. Thus, the kernel driver does not use or want the
is_slot_appropriate function call. Add a NO_IS_SLOT_APPROPRIATE_CHECK
definition which can be defined to remove the code.
If left undefined (the default), all the code remains active and no
driver changes should be necessary.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Use memcpy instead of copying MAC address byte-by-byte.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Using a BIT macro simplifies bit-shifting operations and makes the
code look cleaner. Similar to how this is handled in the i40e base code,
define such a macro in DPDK so it can be used here too.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
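A minimal sketch of such a macro and its use; the macro and function names
are illustrative:

    #include <stdint.h>

    /* Illustrative name; sets bit 'n' and replaces open-coded (1 << n) shifts. */
    #define FM10K_BIT(n)    (1UL << (n))

    static inline uint32_t
    enable_feature(uint32_t reg)
    {
            return reg | FM10K_BIT(5);   /* example: set bit 5 */
    }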
"else" is not generally useful after a break or return.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The recommended maximum line length is 80 characters.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Add comments which properly explain the undocumented use of bits in
TDLEN register prior to VF initializing it to the correct value. Note
that the mechanism is entirely software-defined and explain its purpose
to help reduce confusion in the future.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
VF drivers must detect how many queues are available. Previously, the
driver assumed that each VF has at minimum 1 queue. This assumption is
incorrect, since it is possible that the PF has not yet assigned the
queues to the VF by the time the VF checks.
To resolve this, we added a check first to ensure that the first queue
is, in fact, owned by the VF at init_hw_vf time.
However, the code flow did not reset hw->mac.max_queues to 0.
In some cases, such as during reinit flows, we call init_hw_vf
without clearing the previous value of hw->mac.max_queues. Due to this,
when init_hw_vf errors out, if its error code is not properly handled
the VF driver may still believe it has queues which no longer belong to
it. Fix this by clearing the hw->mac.max_queues on exit due to errors.
Fixes: 8b8264bdb9 ("fm10k/base: check VF has a queue")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Use a bit shift instead of a divisor, because it is faster and
eliminates any need for a '0' check. In our case this even works
out because the default, Gen3, will be 0.
Because of this, we are also able to remove the check for a non-zero value
in the VF code path, since that will already be the default Gen3 case.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Make functions that are only referenced locally static.
Wrap fm10k_msg_data fm10k_iov_msg_data_pf[] in the new ifndef
NO_DEFAULT_SRIOV_MSG_HANDLERS so that drivers with custom SR-IOV
message handlers can strip it.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Since the resultant data type of the mac_update.mac_upper field is u16,
it does not make sense to typecast u8 variables to u32 first.
Fixes: 7223d200c2 ("fm10k: add base driver")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
The new shared code makes the fm10k_msg_update_pvid_pf function static, so
we can no longer refer to it in fm10k_ethdev.c. The registered PF handler is
almost the same as the default PF handler, so removing it has no impact on
the mailbox.
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Tested-by: Heng Ding <hengx.ding@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Use SSE instructions to parse the error flags in the HW Rx descriptor,
then set the corresponding bits in the mbuf.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
When the TX function tries to free a bunch of mbufs, it frees
them one by one. This change scans the free list and merges the
requests when the mbufs belong to the same pool, then frees them in a
single call, which reduces the cycles spent freeing mbufs.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
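A sketch of the merge-and-free idea; the batch size, helper name, and the
assumption that the mbufs come straight off the TX ring with a reference
count of one are illustrative:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define FREE_BATCH 64

    static void
    free_merged(struct rte_mbuf **mbufs, unsigned int n)
    {
            void *batch[FREE_BATCH];
            unsigned int nb = 0;
            unsigned int i;

            for (i = 0; i < n; i++) {
                    struct rte_mbuf *m = mbufs[i];

                    /* Flush the batch when it is full or the pool changes. */
                    if (nb == FREE_BATCH ||
                        (nb > 0 && m->pool != ((struct rte_mbuf *)batch[0])->pool)) {
                            rte_mempool_put_bulk(((struct rte_mbuf *)batch[0])->pool,
                                                 batch, nb);
                            nb = 0;
                    }
                    batch[nb++] = m;
            }
            if (nb > 0)
                    rte_mempool_put_bulk(((struct rte_mbuf *)batch[0])->pool,
                                         batch, nb);
    }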
The fm10k switch core uses source MAC + VID + SGLORT to do a
lookup in the MAC table. If there is no match, an exception interrupt
is sent to the switch manager. Too many of these exception
interrupts cause high CPU usage on the switch manager side.
To reproduce this issue, run one DPDK testpmd instance on a server
with one fm10k NIC and MAC-forward test traffic from one of the
fm10k ports to another port. The CPU usage of the switch
manager goes up to about 20% at a test traffic rate of
10 Gbps, compared to near 0% with no test traffic.
This patch fixes the issue by assigning a default SGLORT
to each TX queue. This default value works for non-VMDq mode
and the current VMDq example. For advanced VMDq usage, e.g. a
different source MAC address per TX queue, the FTAG
forwarding function could be used to change this default
SGLORT value.
Fixes: 9ae6068c86 ("fm10k: add dev start/stop")
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
In FM10K, a single PCIe port can expose several logical ports,
such as SR-IOV PF/VF devices and VMDq objects. To better manage them, the
FM10K silicon assigns a unique GLORT ID to each logical port.
When a logical port sends a broadcast packet, the silicon floods
it to all logical ports, including the one that sent the broadcast packet.
To prevent this, the silicon has an rxq register to store the GLORT ID of
the logical port that the queue binds to.
FM10K has a switch core inside, which has a loopback suppression
mechanism at the switch level. Switch-level loopback suppression mostly
works for Ethernet port traffic.
This patch assigns an SGLORT to each RX queue and enables PCIe port
level loopback suppression.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
When the PF establishes a connection with the Switch Manager (SM), it
receives a logical port range from the SM and registers certain logical
ports from that range. A default VID is then sent back from the SM.
This whole transaction - finishing with the default VID being set -
needs to complete before dev_init returns. If it does not, the interrupt
setting will subsequently be changed in dev_start according to the RX
queue number, and that can cause this transaction to fail.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
Interrupt mode framework has per-queue enable/disable functions.
Implement these two functions for fm10k driver.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
The previous dev_stop function only stops the rx/tx queues. This patch adds
logic to disable the rx queue interrupt and clean up the datapath event and
queue/vector map.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
In interrupt mode, each rx queue can have one interrupt to notify the
application when packets are available in that queue. Some queues
can also share one interrupt.
Currently, fm10k needs one separate interrupt for the mailbox, so only
those drivers which support multiple interrupt vectors, e.g. vfio-pci,
can work in fm10k interrupt mode.
This patch uses the RXINT/INT_MAP registers to map interrupt causes
(rx queue and other events) to vectors, and enable these interrupts
through kernel drivers like vfio-pci.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
rx_descriptor_done is used by the interrupt mode example application
(l3fwd-power) to check the rxd DD bit to determine the RX trend;
l3fwd-power then adjusts the CPU frequency according to
the result.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
In fm10k, a PF, VF, VMDq object, or the queues bound to a flow director rule
can be considered a logical port. The original implementation only creates
a single port for all cases. This change creates 128 logical ports;
the first 64 for PF and VMDq, the second 64 for flow director.
The DGLORTDEC/DGLORTMAP registers define rules for how packets are
classified into different queues. Currently only the PF and VMDq cases are
considered; this change adds rules for flow director.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
In the fm10k_recv_scattered_pkts function, a packet is stored as a linked
list of segments, and offload flags such as PKT_RX_VLAN_PKT should be set
in the first segment.
Fixes: 6b59a3bc82 ("fm10k: fix VLAN in Rx mbuf")
Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
Acked-by: Shaopeng He <shaopeng.he@intel.com>
This patch implements the ops for adding and removing MAC
addresses in the i40evf driver. The functions are assigned as follows:
.mac_addr_add = i40evf_add_mac_addr,
.mac_addr_remove = i40evf_del_mac_addr,
To support setting multiple MAC addresses, this patch also
extends MAC address addition and deletion during device
start and stop. Each VF can have a maximum of 64 MAC
addresses.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Zhe Tao <zhe.tao@intel.com>
The VEB switching feature for i40e enables switching between the
VSIs connected to the virtual bridge. The old implementation set the
virtual bridge mode to VEPA, which is port aggregation. Enable the switching
ability by setting the loopback mode for the specific VSIs which connect
to the PF or VFs.
VEB/VSI/VEPA are concepts not specific to the i40e HW; they come
from the 802.1Qbg spec.
IEEE EVB tutorial:
http://www.ieee802.org/802_tutorials/2009-11/evb-tutorial-draft-20091116_v09.pdf
VEB: a virtual switch that can forward packets based on specific match
fields.
VSI: a virtual interface connecting the VEB/VEPA and a virtual machine.
VEPA: a virtual Ethernet port aggregator that forwards packets upstream from
the VSIs to the LAN port.
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
This patch fixes a typo in a comment in the definition of
the i40e_pf struct.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Signed-off-by: Rami Rosen <rami.rosen@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Previously, DCB (Data Center Bridging) was only enabled on the PF, and
queue mapping and BW configuration were only done on the PF.
This patch enables DCB for VMDQ VSIs(Virtual Station Interfaces)
by following steps:
1. Take BW and ETS(Enhanced Transmission Selection)
configuration on VEB(Virtual Ethernet Bridge).
2. Take BW and ETS configuration on VMDQ VSIs.
3. Update TC(Traffic Class) and queues mapping on VMDQ VSIs.
To enable DCB on VMDQ, the number of TCs should not be larger than
the number of queues in VMDQ pools, and the number of queues per
VMDQ pool is specified by CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
in config/common_* file.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
It removes the i40evf_set_mac_type() defined in PMD, and reuses
i40e_set_mac_type() defined in base driver.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
It adds base driver release information such as release date,
for better tracking in the future.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
Several structures and macros are added or updated, such
as 'struct i40e_aqc_get_link_status',
'struct i40e_aqc_run_phy_activity' and
'struct i40e_aqc_lldp_set_local_mib_resp'.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
It adds the new AQ command and struct for managing a
thermal sensor.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
X722 supports an expanded version of the TCP and UDP PCTYPEs for RSS.
Add a virtchnl offload to support this.
Without this patch, VF drivers are not able to use the correct PCTYPEs
for X722 and UDP flows will not fan out.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
This patch adds 7 new register definitions for programming the
parser, flow director and RSS blocks in the HW.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
RX control register read/write functions are added, as direct reads and
writes may fail under stress with small traffic. After the
adminq is ready, all RX control registers should be read/written
through the dedicated functions.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>