Add the corresponding data structure and logic to support
the offload of the push_vlan action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structure and logic to support
the offload of the pop_vlan action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload of
the set destination MAC action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structure and logic to support
the offload of the set source MAC action.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload
of the SCTP item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding logic to support the offload
of the UDP item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structure and logic to support
the offload of the TCP item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structure and logic to support
the offload of the IPv6 item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structure and logic to support
the offload of the IPv4 item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the corresponding data structure and logic to support
the offload of the VLAN item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add offload support for the very basic actions: count, drop
and output.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add offload support for the very basic items: Ethernet and
port ID.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the flow validate/create/query/destroy/flush APIs of the nfp PMD.
The flow create API constructs a control cmsg and sends it to the
firmware, then adds this flow to the hash table.
The flow query API gets the flow stats from the flow_priv structure.
Note there is an rte_spinlock to prevent the update and query
actions from occurring at the same time.
The flow destroy API constructs a control cmsg and sends it to the
firmware, then deletes this flow from the hash table.
The flow flush API just iterates over the flows in the hash table and
calls the flow destroy API for each of them.
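As a rough sketch of the query path described above (structure and
field names here are assumptions, not the actual driver code), the
stats copy is done under the spinlock so a concurrent stats update
from the firmware cannot race with it:

    #include <rte_spinlock.h>

    struct nfp_fl_stats {
        uint64_t pkts;
        uint64_t bytes;
    };

    struct nfp_flow_priv {
        rte_spinlock_t stats_lock;
        struct nfp_fl_stats *stats;   /* indexed by flow stats context id */
    };

    static void
    flow_stats_query_sketch(struct nfp_flow_priv *priv, uint32_t ctx_id,
                            struct nfp_fl_stats *out)
    {
        rte_spinlock_lock(&priv->stats_lock);
        *out = priv->stats[ctx_id];   /* copy packet/byte counters */
        rte_spinlock_unlock(&priv->stats_lock);
    }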
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the flow stats processing logic in the ctrl VNIC service.
The flower firmware passes the flow stats to the nfp driver through
control messages, and we store them in the flow_priv structure.
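A minimal sketch of the update side, assuming flow_priv holds a
per-flow packet/byte counter array protected by the same spinlock the
query path takes (counter layout and names are illustrative only):

    static void
    flow_stats_update_sketch(struct nfp_flow_priv *priv, uint32_t ctx_id,
                             uint64_t pkt_delta, uint64_t byte_delta)
    {
        /* called from the ctrl VNIC service when a stats cmsg arrives */
        rte_spinlock_lock(&priv->stats_lock);
        priv->stats[ctx_id].pkts  += pkt_delta;
        priv->stats[ctx_id].bytes += byte_delta;
        rte_spinlock_unlock(&priv->stats_lock);
    }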
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Add the structures and functions to process the mask table, flow
table, and flow stats ID, which are used in the rte_flow
offload logic.
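For illustration only, such a flow table could be built on the DPDK
hash library, keyed by the firmware flow cookie; the table name, size
and key choice below are assumptions, not the actual driver code:

    #include <rte_hash.h>
    #include <rte_jhash.h>

    static struct rte_hash *
    flow_table_create_sketch(void)
    {
        struct rte_hash_parameters params = {
            .name = "nfp_flow_table",
            .entries = 1024,                 /* assumed table size */
            .key_len = sizeof(uint64_t),     /* flow cookie as key */
            .hash_func = rte_jhash,
            .hash_func_init_val = 0,
        };

        return rte_hash_create(&params);
    }

    /* insert: rte_hash_add_key_data(table, &cookie, flow);
     * lookup: rte_hash_lookup_data(table, &cookie, (void **)&flow); */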
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
The services don't have a way to break out of their infinite loops,
which prevents the DPDK application from exiting normally.
Fixes: a36634e87e ("net/nfp: add flower ctrl VNIC Rx/Tx")
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
The CPP (Command Pull Push) bridge service is needed for some debug
tools, and should be optional, so remove the mandatory requirement of
the service lcore parameter.
Fixes: b188042195 ("net/nfp: add initial flower firmware support")
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
When the MTU is bigger than hw->flbufsz, the NIC can't work, and
hw->flbufsz is only set in nfp_net_rx_queue_setup().
Previously, nfp_net_configure() compared the MTU against hw->flbufsz
before that value had been set, which is unreasonable.
Now nfp_net_configure() only checks that the MTU is not more than
NFP_FRAME_SIZE_MAX, and the check against hw->flbufsz is done in
nfp_net_start(), after hw->flbufsz has been set.
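A simplified sketch of the resulting checks (field and function names
follow the description above; surrounding code is omitted):

    /* nfp_net_configure(): hw->flbufsz is not known yet, so only bound
     * the MTU by the absolute hardware limit. */
    if (dev->data->dev_conf.rxmode.mtu > NFP_FRAME_SIZE_MAX)
        return -ERANGE;

    /* nfp_net_start(): hw->flbufsz has been set by
     * nfp_net_rx_queue_setup(), so the real check can happen here. */
    if (dev->data->dev_conf.rxmode.mtu > hw->flbufsz)
        return -ERANGE;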
Fixes: 5c305e218f ("net/nfp: fix initialization")
Cc: stable@dpdk.org
Signed-off-by: Peng Zhang <peng.zhang@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
The jumbo offload was removed in patch [1], but testpmd still contains
jumbo offload related code. This patch removes it and also updates
the rst file.
[1] ethdev: remove jumbo offload flag
Fixes: b563c14212 ("ethdev: remove jumbo offload flag")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Normally, to use the HW offload capabilities (e.g. checksum and TSO) in
the Tx direction, the application needs to call rte_eth_tx_prepare() to
make some adjustments to the packets before sending them. But the
tx_prepare callback of the bonding driver is not implemented. Therefore,
the sent packets may have errors (e.g. checksum errors).
However, it is difficult to design the tx_prepare callback for the
bonding driver, because when a bonded device sends packets, it
distributes them to different slave devices based on the real-time
link status and bonding mode. That is, it is very difficult for the
bonded device to determine which slave device's prepare function should
be invoked.
So in this patch, the tx_prepare callback of the bonding driver is not
implemented. Instead, rte_eth_tx_prepare() is called before
rte_eth_tx_burst(). In this way, all Tx offloads can be processed
correctly for all NIC devices.
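A minimal illustration of that pattern (the helper, port and queue
variables are illustrative only; error handling is omitted):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static uint16_t
    send_burst(uint16_t port_id, uint16_t queue_id,
               struct rte_mbuf **bufs, uint16_t nb_bufs)
    {
        /* run the device's Tx prepare before its Tx burst so per-PMD
         * offload fixups (checksums, TSO pseudo-headers, ...) are applied */
        uint16_t nb_prep = rte_eth_tx_prepare(port_id, queue_id, bufs, nb_bufs);

        return rte_eth_tx_burst(port_id, queue_id, bufs, nb_prep);
    }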
Note: because it is rare to bond different PMDs together, tx_prepare
is only called once in broadcast bonding mode.
Also the following description was added to the rte_eth_tx_burst()
function:
"@note This function must not modify mbufs (including packets data)
unless the refcnt is 1. The exception is the bonding PMD, which does not
have a tx-prepare function; in this case, mbufs may be modified."
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Chas Williams <3chas3@gmail.com>
The current code first removes all back-end devices of
the bonded device and then invokes flush operation to
remove flows in such back-end devices, which makes no
sense. Fix that by re-ordering the steps accordingly.
Fixes: 49dad9028e ("net/bonding: support flow API")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
According to the documentation, rte_eth_dev_configure()
can be invoked repeatedly while in stopped state.
The current implementation in the bonding driver
allows for that (technically), but the user sees
warnings which say that back-end devices have
already been harnessed. Re-factor the code
to have cleanup before each (re-)configure.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Chas Williams <3chas3@gmail.com>
Ring the doorbell again for the following scenarios:
* No receives posted but Rx queue not empty after deadline
* No transmits posted but Tx work still pending after deadline
* Admin queue work still pending after deadline
This will help the queues recover in the extremely rare case that
a doorbell is missed by the FW.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
In some configurations, the FW may return EAGAIN if it is not able
to respond to commands immediately. Retry the init commands in this
case to prevent errors from reaching the client.
Fix up some return-code handling while here, for clarity.
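A rough sketch of the retry pattern (the helper name, adapter type and
retry limit here are hypothetical, not the actual driver code):

    #include <rte_cycles.h>

    static int
    init_cmd_with_retry(struct ionic_adapter *adapter)
    {
        int err, tries = 0;

        do {
            err = ionic_run_init_cmd(adapter);   /* hypothetical helper */
            if (err != -EAGAIN)
                break;
            rte_delay_ms(100);   /* give the FW time to catch up */
        } while (++tries < 10);

        return err;
    }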
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
The code is very similar, but the simple case can skip a few branches
in the hot path. This improves PPS when 10KB mbufs are used.
S/G is enabled on the Rx side by the DEV_RX_OFFLOAD_SCATTER offload.
S/G is enabled on the Tx side by the DEV_TX_OFFLOAD_MULTI_SEGS offload.
S/G is automatically enabled on the Rx side if the provided mbufs are
too small to hold the maximum possible frame.
To enable S/G in testpmd, add these args:
--rx-offloads=0x2000 --tx-offloads=0x8000
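The same offloads can also be requested programmatically when
configuring the port; the bit values match the testpmd masks above
(flag names as in this message; port and queue counts are illustrative):

    #include <rte_ethdev.h>

    struct rte_eth_conf port_conf = { 0 };
    uint16_t port_id = 0, nb_rxq = 1, nb_txq = 1;   /* illustrative */

    port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;      /* 0x2000 */
    port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;   /* 0x8000 */
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);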
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Signed-off-by: R Mohamed Shah <mohamedshah.r@amd.com>
Some clients want to control how often the transmit ring is
flushed.
The default value is the number of Tx descriptors minus the
default Tx burst size.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
When 'ionic_cmb' is set to '1', queue memory will be allocated from
the device's onboard memory (Controller Memory Buffer). In some
configurations, this will dramatically reduce packet latency and
increase PPS.
Add the WC_ACTIVATE flag to the PCI driver flags.
Write combining must be enabled to achieve the maximum PPS.
When the queue is in the CMB, descriptors cannot be prefetched.
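For example, the device argument can be passed on the EAL allow list
(the PCI address below is just a placeholder):

    dpdk-testpmd -a 0000:b5:00.0,ionic_cmb=1 -- -i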
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Signed-off-by: Neel Patel <neel.patel@amd.com>
Linearize Tx mbuf chains in the info array.
This avoids walking the mbuf chain during flush.
Move a few branches out of the hot path.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Linearize Rx mbuf chains in the expanded info array.
Clean one and fill one per CQE (completions are not coalesced).
Touch the mbufs as little as possible in the fill stage.
When touching the mbuf in the clean stage, use the rearm_data unions.
Ring the doorbell once at the end of the bulk clean/fill.
Signed-off-by: Neel Patel <neel.patel@amd.com>
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
The first (header) segment includes the standard headroom.
Subsequent segments do not.
Store the fragment counts in the queue structure.
Precalculating improves performance by reducing
how much work must be done in the hot path.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Free all of the mbufs in the receive queue when the queue is
stopped. This will allow them to be resized when the MTU is
changed.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
This makes the code safer by helping the compiler catch errors.
Rename the variables, too; they're not callbacks anymore.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Enable the interrupt if the platform & device support it.
This prevents spurious interrupts on virtual platforms.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
For future support of virtual devices, move the PCI code to its own
file. Create a new device interface, struct ionic_dev_intf, to plug
in to common code.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Signed-off-by: Neel Patel <neel.patel@amd.com>
Signed-off-by: R Mohamed Shah <mohamedshah.r@amd.com>
There is no need to allocate the interrupt vector list if
datapath packet interrupts are not enabled.
This conserves resources.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
These bits are not used. Remove them to simplify the code.
Fix the spacing on the IONIC_ALIGN #define.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Test min and max MTU against values read from firmware, for correctness.
Update the firmware field name, for clarity.
The device must be stopped before changing MTU, for correctness.
Store the calculated frame size in the queue, for performance.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Signed-off-by: R Mohamed Shah <mohamedshah.r@amd.com>