The struct rte_memzone field .phys_addr is renamed to .iova.
The deprecated name is kept in an anonymous union to avoid breaking
the API.
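A minimal sketch of the renaming pattern described above (not the full
rte_memzone definition; the typedefs are stand-ins for the EAL types):
both names alias the same storage, so code using the deprecated
.phys_addr keeps compiling.

    #include <stdint.h>

    typedef uint64_t rte_iova_t;   /* stand-in for the EAL typedef */
    typedef uint64_t phys_addr_t;  /* stand-in for the EAL typedef */

    struct memzone_sketch {
        union {
            rte_iova_t iova;       /* new field name */
            phys_addr_t phys_addr; /* deprecated alias, same storage */
        };
        /* ... remaining rte_memzone fields ... */
    };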
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
The function rte_mem_virt2phy() is kept and used in functions which
work only with physical addresses.
For all other calls this function is replaced by rte_mem_virt2iova(),
which does a direct mapping (no conversion) in the VA case.
Note: the new rte_mem_virt2iova() function matches the
behaviour implemented in rte_mem_virt2phy() by the commit
680f6c12600f ("mem: honor IOVA mode in virt2phy")
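A sketch of the described behaviour, assuming only the public helpers
rte_eal_iova_mode() and rte_mem_virt2phy(); this is not the actual EAL
source:

    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_memory.h>

    /* Sketch only: in IOVA-as-VA mode the IOVA is the virtual address
     * itself (direct mapping), otherwise fall back to the physical
     * address lookup. */
    static rte_iova_t
    virt2iova_sketch(const void *addr)
    {
        if (rte_eal_iova_mode() == RTE_IOVA_VA)
            return (uintptr_t)addr;
        return rte_mem_virt2phy(addr);
    }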
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
This patch adds support to enable and disable LRO.
To support this feature, the driver creates an aggregator ring.
When the hardware starts doing LRO, it sends a tpa_start completion.
When the driver receives a tpa_end completion, the LRO chaining
is complete.
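As a hedged usage sketch (field names follow the ethdev API of that
period), an application requests LRO at configure time and the PMD then
sets up the aggregator ring and TPA handling described above:

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    configure_port_with_lro(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.enable_lro = 1; /* ask the PMD to aggregate TCP flows */
        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }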
Signed-off-by: Steeven Li <steeven.li@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch adds support to modify MTU using the set_mtu dev_op.
To support frames > 2k, the PMD creates an aggregator ring.
When a frame greater than 2k is received, it is fragmented
and the resulting fragments are DMA'ed to the aggregator ring.
Now the driver can support jumbo frames up to 9500 bytes.
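A brief usage sketch: the new set_mtu dev_op is reached through the
generic ethdev call below; 9000 is just an illustrative jumbo MTU within
the 9500-byte limit, and the helper name is hypothetical.

    #include <stdio.h>
    #include <rte_ethdev.h>

    static int
    enable_jumbo_frames(uint16_t port_id)
    {
        int ret = rte_eth_dev_set_mtu(port_id, 9000);

        if (ret != 0)
            printf("failed to set MTU on port %d: %d\n", port_id, ret);
        return ret;
    }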
Signed-off-by: Steeven Li <steeven.li@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
rte_malloc_virt2phy() does not return a physical address if huge pages
aren't in use. Further, rte_memzone->phys_addr is not a physical address.
Use rte_mem_virt2phy() and manually lock pages to support lack of
huge pages.
Also check the return value of rte_mem_virt2phy(): verify that the
function returns a valid address; otherwise log a message and
return an error.
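A minimal sketch of the approach, assuming a hypothetical helper built
on the public APIs (mlock() plus rte_mem_virt2phy() with its
RTE_BAD_PHYS_ADDR error value):

    #include <sys/mman.h>
    #include <rte_log.h>
    #include <rte_memory.h>

    static phys_addr_t
    lock_and_map_page(void *addr, size_t len)
    {
        phys_addr_t pa;

        /* Lock the backing page so it cannot move or be swapped out. */
        if (mlock(addr, len) != 0)
            return RTE_BAD_PHYS_ADDR;

        pa = rte_mem_virt2phy(addr);
        if (pa == RTE_BAD_PHYS_ADDR)
            RTE_LOG(ERR, PMD, "cannot map virtual address to physical\n");
        return pa;
    }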
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
1) For a VF, query the firmware to determine if a MAC address is
already configured. If not, configure a random default MAC address
(see the sketch after this list).
2) Do not initialize the default completion ring in
bnxt_alloc_hwrm_rings().
3) While registering for async events with the firmware,
use func_vf_cfg for a VF and use func_cfg for a PF.
4) Query the VNIC plcmode configuration using bnxt_hwrm_vnic_plcmodes_qcfg
before a VNIC is updated, and reconfigure the VNIC with the plcmode
configuration queried earlier. Not doing this could overwrite
the plcmodes in some cases.
5) Reorganize bnxt_handle_fwd_req() to properly handle forwarded
requests. The previous code did not handle them completely.
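A hedged sketch of item 1 (the helper name and the address storage are
illustrative; the ether-address helpers follow the rte_ether.h names of
that period):

    #include <rte_ether.h>

    /* If the firmware reported an all-zero MAC for the VF, fall back
     * to a random locally administered address. */
    static void
    assign_default_mac(struct ether_addr *mac)
    {
        if (is_zero_ether_addr(mac))
            eth_random_addr(mac->addr_bytes);
    }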
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This patch updates the Broadcom bnxt PMD to version 1.7.7.
Most of the changes in the patch are in hsi_struct_def_dpdk.h, an
autogenerated file. The changes in the *.c files are because of changes
in the macro names.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Use existing information about pci and interrupt handle to minimize
the number of places that assume eth_dev contains pci_device
information.
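A hedged sketch of the pattern this moves toward (macro and field names
as in the PCI and ethdev headers of that period): derive the PCI device
from the generic device back-pointer instead of assuming the ethdev
embeds PCI information.

    #include <rte_ethdev.h>
    #include <rte_pci.h>

    static struct rte_pci_device *
    eth_dev_to_pci(struct rte_eth_dev *eth_dev)
    {
        /* eth_dev->device points at the generic rte_device, which is
         * embedded in rte_pci_device for PCI-probed ports. */
        return RTE_DEV_TO_PCI(eth_dev->device);
    }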
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Jan Blunck <jblunck@infradead.org>
Add top level functions to initialize ring groups, and functions
to allocate and free all the rings via HWRM.
A ring group is identified by an index. It consists of an Rx or Tx ring ID,
a completion ring ID, and a statistics context. Once a ring group is
initialized, use this group index while creating the rings in the ASIC
using the appropriate HWRM API added via earlier patches.
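A hypothetical illustration (not the driver's actual definition) of the
information a ring group ties together:

    #include <stdint.h>

    struct ring_group_sketch {
        uint16_t grp_id;       /* index identifying the group */
        uint16_t rx_ring_id;   /* Rx (or Tx) ring ID assigned by the HWRM */
        uint16_t cp_ring_id;   /* completion ring ID */
        uint16_t stats_ctx_id; /* statistics context */
    };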
Functions added:
bnxt_free_cp_ring
    Calls the generic HWRM ring free function with arguments specific
    to a completion ring and sanitizes the host completion structure.
bnxt_free_all_hwrm_rings
    Frees all the HWRM-allocated hardware rings.
bnxt_free_all_hwrm_resources
    Frees all the resources allocated in the hardware via the HWRM.
bnxt_alloc_hwrm_rings
    Allocates all the HWRM rings needed in the current configuration.
This should be the last functionality needed to add start/stop
device operations.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Perform allocation and free()ing of ring and information structures
for the TX, RX, and completion rings. The previous patches had
so far provided top level stubs and generic ring support, while this
patch does the real allocation and freeing of the memory specific to
each different type of generic ring.
For example, bnxt_init_tx_ring_struct() and bnxt_init_rx_ring_struct()
now allocate memory based on the socket_id being provided.
bnxt_tx_queue_setup_op() and bnxt_rx_queue_setup_op() have gone through
some reformatting to perform a graceful cleanup in case memory
allocation fails.
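A minimal sketch of the socket-aware allocation described above
('struct example_ring' and the helper are illustrative only); alignment 0
requests the default cache-line alignment.

    #include <rte_malloc.h>

    struct example_ring {
        void *bd_area;
        unsigned int ring_size;
    };

    static struct example_ring *
    alloc_ring_on_socket(unsigned int socket_id)
    {
        /* Zeroed allocation placed on the NUMA socket of the queue. */
        return rte_zmalloc_socket("example_ring", sizeof(struct example_ring),
                                  0, socket_id);
    }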
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
This patch adds an initial implementation of the rx_pkt_burst() function for Rx.
bnxt_recv_pkts() is the top level function for doing Rx.
This patch also adds code to allocate rings in the ASIC.
For each Rx queue allocated in the PMD, a corresponding ring
in hardware will be created. Every time a frame is received, an Rx ring
is selected based on the hardware configuration, such as RSS, MAC or
VLAN, and CoS. The hardware uses a completion ring to indicate the
availability of a packet.
This patch also brings in functions like bnxt_init_one_rx_ring() and
bnxt_init_rx_ring_struct(), which initialize various structures before
Rx can begin.
bnxt_init_rxbds() initializes the Rx Buffer Descriptors while
bnxt_alloc_rx_data() allocates a buffer in the host to receive the
incoming packet.
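A usage sketch from the application side: rte_eth_rx_burst() dispatches
to the PMD's rx_pkt_burst handler, which for this driver is
bnxt_recv_pkts(); the processing loop here is only a placeholder.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    static void
    poll_rx_queue(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        uint16_t nb_rx, i;

        nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);
        for (i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]); /* placeholder for real processing */
    }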
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Initial implementation of tx_pkt_burst for transmit.
bnxt_xmit_pkts() is the top level function that is called during Tx.
bnxt_handle_tx_cp() is used to check and process the Tx completions
generated by the hardware for the Tx Buffer Descriptors sent to it.
This patch also adds code to allocate rings in the hardware.
For each Tx queue allocated in the PMD, a corresponding ring
in hardware will be created. Every time a Tx request is initiated
via the bnxt_xmit_pkts() call, a Buffer Descriptor is created and
is sent to the hardware via the associated Tx ring.
On completing the Tx operation, the hardware generates the status
in the form of a completion. This completion is processed by the
bnxt_handle_tx_cp() function.
Functions like bnxt_init_tx_ring_struct() and bnxt_init_one_tx_ring()
are used to initialize various members of the structure before
starting Tx operations.
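A usage sketch from the application side: rte_eth_tx_burst() dispatches
to the PMD's tx_pkt_burst handler (bnxt_xmit_pkts() here); mbufs the
driver does not accept are freed by the caller.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    send_burst(uint16_t port_id, uint16_t queue_id,
               struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

        /* Drop whatever could not be queued in this burst. */
        while (sent < nb_pkts)
            rte_pktmbuf_free(pkts[sent++]);
    }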
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>
Declare generic ring structures and a free() function. These are
generic ring management functions which will be used to create Tx,
Rx and Completion rings in the subsequent patches, and tie them to
the HWRM managed ring resources.
This generic ring structure is shared by all the ring types and tracks
the host Buffer Descriptors (BDs) and the HWRM-assigned ID.
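A hypothetical illustration (not the driver's actual definition) of what
the shared ring structure tracks:

    #include <stdint.h>

    struct generic_ring_sketch {
        void *bd;            /* host Buffer Descriptor area */
        uint64_t bd_dma;     /* bus address of the BD area */
        uint32_t ring_size;  /* number of BDs in the ring */
        uint16_t fw_ring_id; /* ring ID assigned by the HWRM */
    };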
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Stephen Hurd <stephen.hurd@broadcom.com>
Reviewed-by: David Christensen <david.christensen@broadcom.com>