Many updates to cxgbe(4)
- Device configuration via plain text config file.  Also able to operate when
  not attached to the chip as the master driver.
- Generic "work request" queue that serves as the base for both ctrl and ofld
  tx queues.
- Generic interrupt handler routine that can process any event on any kind of
  ingress queue (via a dispatch table).
- A couple of new driver ioctls.  cxgbetool can now install a firmware to the
  card ("loadfw" command) and can read the card's memory ("memdump" and "tcb"
  commands).
- Lots of assorted information within dev.t4nex.X.misc.*  This is primarily
  for debugging and won't show up in sysctl -a.
- Code to manage the L2 tables on the chip.
- Updates to cxgbe(4) man page to go with the tunables that have changed.
- Updates to the shared code in common/
- Updates to the driver-firmware interface (now at fw 1.4.16.0)

MFC after:	1 month
parent 22ea9f58f0
commit 733b92779e
@@ -99,18 +99,29 @@ Tunables can be set at the
prompt before booting the kernel or stored in
.Xr loader.conf 5 .
.Bl -tag -width indent
.It Va hw.cxgbe.max_ntxq_10G_port
The maximum number of tx queues to use for a 10Gb port.
The default value is 8.
.It Va hw.cxgbe.max_nrxq_10G_port
The maximum number of rx queues to use for a 10Gb port.
The default value is 8.
.It Va hw.cxgbe.max_ntxq_1G_port
The maximum number of tx queues to use for a 1Gb port.
The default value is 2.
.It Va hw.cxgbe.max_nrxq_1G_port
The maximum number of rx queues to use for a 1Gb port.
The default value is 2.
.It Va hw.cxgbe.ntxq10g
The number of tx queues to use for a 10Gb port.  The default is 16 or the number
of CPU cores in the system, whichever is less.
.It Va hw.cxgbe.nrxq10g
The number of rx queues to use for a 10Gb port.  The default is 8 or the number
of CPU cores in the system, whichever is less.
.It Va hw.cxgbe.ntxq1g
The number of tx queues to use for a 1Gb port.  The default is 4 or the number
of CPU cores in the system, whichever is less.
.It Va hw.cxgbe.nrxq1g
The number of rx queues to use for a 1Gb port.  The default is 2 or the number
of CPU cores in the system, whichever is less.
.It Va hw.cxgbe.nofldtxq10g
The number of TOE tx queues to use for a 10Gb port.  The default is 8 or the
number of CPU cores in the system, whichever is less.
.It Va hw.cxgbe.nofldrxq10g
The number of TOE rx queues to use for a 10Gb port.  The default is 2 or the
number of CPU cores in the system, whichever is less.
.It Va hw.cxgbe.nofldtxq1g
The number of TOE tx queues to use for a 1Gb port.  The default is 2 or the
number of CPU cores in the system, whichever is less.
.It Va hw.cxgbe.nofldrxq1g
The number of TOE rx queues to use for a 1Gb port.  The default is 1.
.It Va hw.cxgbe.holdoff_timer_idx_10G
.It Va hw.cxgbe.holdoff_timer_idx_1G
The timer index value to use to delay interrupts.
@@ -119,6 +130,8 @@ by default (all values are in microseconds) and the index selects a
value from this list.
The default value is 1 for both 10Gb and 1Gb ports, which means the
timer value is 5us.
Different cxgbe interfaces can be assigned different values at any time via the
dev.cxgbe.X.holdoff_tmr_idx sysctl.
.It Va hw.cxgbe.holdoff_pktc_idx_10G
.It Va hw.cxgbe.holdoff_pktc_idx_1G
The packet-count index value to use to delay interrupts.
@@ -127,6 +140,11 @@ and the index selects a value from this list.
The default value is 2 for both 10Gb and 1Gb ports, which means 16
packets (or the holdoff timer going off) before an interrupt is
generated.
-1 disables packet counting.
Different cxgbe interfaces can be assigned different values via the
dev.cxgbe.X.holdoff_pktc_idx sysctl.
This sysctl works only when the interface has never been marked up (as done by
ifconfig up).
.It Va hw.cxgbe.qsize_txq
The size, in number of entries, of the descriptor ring used for a tx
queue.
@@ -134,10 +152,46 @@ A buf_ring of the same size is also allocated for additional
software queuing.  See
.Xr ifnet 9 .
The default value is 1024.
Different cxgbe interfaces can be assigned different values via the
dev.cxgbe.X.qsize_txq sysctl.
This sysctl works only when the interface has never been marked up (as done by
ifconfig up).
.It Va hw.cxgbe.qsize_rxq
The size, in number of entries, of the descriptor ring used for an
rx queue.
The default value is 1024.
Different cxgbe interfaces can be assigned different values via the
dev.cxgbe.X.qsize_rxq sysctl.
This sysctl works only when the interface has never been marked up (as done by
ifconfig up).
.It Va hw.cxgbe.interrupt_types
The interrupt types that the driver is allowed to use.
Bit 0 represents INTx (line interrupts), bit 1 MSI, bit 2 MSI-X.
The default is 7 (all allowed).
The driver will select the best possible type out of the allowed types by
itself.
.It Va hw.cxgbe.config_file
Select a pre-packaged device configuration file.
A configuration file contains a recipe for partitioning and configuring the
hardware resources on the card.
This tunable is for specialized applications only and should not be used in
normal operation.
The configuration profile currently in use is available in the dev.t4nex.X.cf
and dev.t4nex.X.cfcsum sysctls.
.It Va hw.cxgbe.linkcaps_allowed
.It Va hw.cxgbe.niccaps_allowed
.It Va hw.cxgbe.toecaps_allowed
.It Va hw.cxgbe.rdmacaps_allowed
.It Va hw.cxgbe.iscsicaps_allowed
.It Va hw.cxgbe.fcoecaps_allowed
Disallowing capabilities provides a hint to the driver and firmware to not
reserve hardware resources for that feature.
Each of these is a bit field with a bit for each sub-capability within the
capability.
This tunable is for specialized applications only and should not be used in
normal operation.
The capabilities for which hardware resources have been reserved are listed in
dev.t4nex.X.*caps sysctls.
.El
.Sh SUPPORT
For general information and support,
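For reference, the tunables documented above are ordinary loader(8) variables; a minimal sketch of a /boot/loader.conf fragment using a few of them (the values below are illustrative only, not recommendations):

```
# /boot/loader.conf -- illustrative values only
hw.cxgbe.ntxq10g="8"                # NIC tx queues per 10Gb port
hw.cxgbe.nrxq10g="8"                # NIC rx queues per 10Gb port
hw.cxgbe.holdoff_timer_idx_10G="1"  # index 1 in the holdoff timer list (5us)
hw.cxgbe.interrupt_types="4"        # only bit 2 set: allow MSI-X only
```

Per-interface settings such as dev.cxgbe.0.holdoff_tmr_idx can then be adjusted later with sysctl(8), subject to the "never been marked up" restriction noted above for some of them.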
@@ -31,6 +31,7 @@
#ifndef __T4_ADAPTER_H__
#define __T4_ADAPTER_H__

#include <sys/kernel.h>
#include <sys/bus.h>
#include <sys/rman.h>
#include <sys/types.h>
@@ -46,8 +47,9 @@
#include <netinet/tcp_lro.h>

#include "offload.h"
#include "common/t4fw_interface.h"
#include "firmware/t4fw_interface.h"

#define T4_CFGNAME "t4fw_cfg"
#define T4_FWNAME "t4fw"

MALLOC_DECLARE(M_CXGBE);
@@ -110,25 +112,21 @@ enum {
	FW_IQ_QSIZE = 256,
	FW_IQ_ESIZE = 64,	/* At least 64 mandated by the firmware spec */

	INTR_IQ_QSIZE = 64,
	INTR_IQ_ESIZE = 64,	/* Handles some CPLs too, do not reduce */

	CTRL_EQ_QSIZE = 128,
	CTRL_EQ_ESIZE = 64,

	RX_IQ_QSIZE = 1024,
	RX_IQ_ESIZE = 64,	/* At least 64 so CPL_RX_PKT will fit */

	RX_FL_ESIZE = 64,	/* 8 64bit addresses */
	EQ_ESIZE = 64,		/* All egress queues use this entry size */

	RX_FL_ESIZE = EQ_ESIZE,	/* 8 64bit addresses */
#if MJUMPAGESIZE != MCLBYTES
	FL_BUF_SIZES = 4,	/* cluster, jumbop, jumbo9k, jumbo16k */
#else
	FL_BUF_SIZES = 3,	/* cluster, jumbo9k, jumbo16k */
#endif

	CTRL_EQ_QSIZE = 128,

	TX_EQ_QSIZE = 1024,
	TX_EQ_ESIZE = 64,
	TX_SGL_SEGS = 36,
	TX_WR_FLITS = SGE_MAX_WR_LEN / 8
};
@@ -144,13 +142,16 @@ enum {
	/* adapter flags */
	FULL_INIT_DONE = (1 << 0),
	FW_OK = (1 << 1),
	INTR_SHARED = (1 << 2),	/* one set of intrq's for all ports */
	INTR_DIRECT = (1 << 2),	/* direct interrupts for everything */
	MASTER_PF = (1 << 3),
	ADAP_SYSCTL_CTX = (1 << 4),

	CXGBE_BUSY = (1 << 9),

	/* port flags */
	DOOMED = (1 << 0),
	VI_ENABLED = (1 << 1),
	PORT_INIT_DONE = (1 << 1),
	PORT_SYSCTL_CTX = (1 << 2),
};

#define IS_DOOMED(pi) (pi->flags & DOOMED)
@@ -186,6 +187,12 @@ struct port_info {
	int first_txq;		/* index of first tx queue */
	int nrxq;		/* # of rx queues */
	int first_rxq;		/* index of first rx queue */
#ifndef TCP_OFFLOAD_DISABLE
	int nofldtxq;		/* # of offload tx queues */
	int first_ofld_txq;	/* index of first offload tx queue */
	int nofldrxq;		/* # of offload rx queues */
	int first_ofld_rxq;	/* index of first offload rx queue */
#endif
	int tmr_idx;
	int pktc_idx;
	int qsize_rxq;
@@ -194,11 +201,8 @@ struct port_info {
	struct link_config link_cfg;
	struct port_stats stats;

	struct taskqueue *tq;
	struct callout tick;
	struct sysctl_ctx_list ctx;	/* lives from ifconfig up to down */
	struct sysctl_oid *oid_rxq;
	struct sysctl_oid *oid_txq;
	struct sysctl_ctx_list ctx;	/* from ifconfig up to driver detach */

	uint8_t hw_addr[ETHER_ADDR_LEN]; /* factory MAC address, won't change */
};
@@ -222,17 +226,26 @@ struct tx_map {
	bus_dmamap_t map;
};

/* DMA maps used for tx */
struct tx_maps {
	struct tx_map *maps;
	uint32_t map_total;	/* # of DMA maps */
	uint32_t map_pidx;	/* next map to be used */
	uint32_t map_cidx;	/* reclaimed up to this index */
	uint32_t map_avail;	/* # of available maps */
};

struct tx_sdesc {
	uint8_t desc_used;	/* # of hardware descriptors used by the WR */
	uint8_t credits;	/* NIC txq: # of frames sent out in the WR */
};

typedef void (iq_intr_handler_t)(void *);

enum {
	/* iq flags */
	IQ_ALLOCATED = (1 << 1),	/* firmware resources allocated */
	IQ_STARTED = (1 << 2),		/* started */
	IQ_ALLOCATED = (1 << 0),	/* firmware resources allocated */
	IQ_HAS_FL = (1 << 1),		/* iq associated with a freelist */
	IQ_INTR = (1 << 2),		/* iq takes direct interrupt */
	IQ_LRO_ENABLED = (1 << 3),	/* iq is an eth rxq with LRO enabled */

	/* iq state */
	IQS_DISABLED = 0,
@@ -252,26 +265,35 @@ struct sge_iq {
	uint16_t abs_id;	/* absolute SGE id for the iq */
	int8_t intr_pktc_idx;	/* packet count threshold index */
	int8_t pad0;
	iq_intr_handler_t *handler;
	__be64 *desc;		/* KVA of descriptor ring */

	volatile uint32_t state;
	volatile int state;
	struct adapter *adapter;
	const __be64 *cdesc;	/* current descriptor */
	uint8_t gen;		/* generation bit */
	uint8_t intr_params;	/* interrupt holdoff parameters */
	uint8_t intr_next;	/* holdoff for next interrupt */
	uint8_t intr_next;	/* XXX: holdoff for next interrupt */
	uint8_t esize;		/* size (bytes) of each entry in the queue */
	uint16_t qsize;		/* size (# of entries) of the queue */
	uint16_t cidx;		/* consumer index */
	uint16_t cntxt_id;	/* SGE context id for the iq */
	uint16_t cntxt_id;	/* SGE context id for the iq */

	STAILQ_ENTRY(sge_iq) link;
};

enum {
	EQ_CTRL = 1,
	EQ_ETH = 2,
#ifndef TCP_OFFLOAD_DISABLE
	EQ_OFLD = 3,
#endif

	/* eq flags */
	EQ_ALLOCATED = (1 << 1),	/* firmware resources allocated */
	EQ_STARTED = (1 << 2),		/* started */
	EQ_CRFLUSHED = (1 << 3),	/* expecting an update from SGE */
	EQ_TYPEMASK = 7,		/* 3 lsbits hold the type */
	EQ_ALLOCATED = (1 << 3),	/* firmware resources allocated */
	EQ_DOOMED = (1 << 4),		/* about to be destroyed */
	EQ_CRFLUSHED = (1 << 5),	/* expecting an update from SGE */
	EQ_STALLED = (1 << 6),		/* out of hw descriptors or dmamaps */
};

/*
@@ -281,10 +303,11 @@ enum {
 * consumes them) but it's special enough to have its own struct (see sge_fl).
 */
struct sge_eq {
	unsigned int flags;	/* MUST be first */
	unsigned int cntxt_id;	/* SGE context id for the eq */
	bus_dma_tag_t desc_tag;
	bus_dmamap_t desc_map;
	char lockname[16];
	unsigned int flags;
	struct mtx eq_lock;

	struct tx_desc *desc;	/* KVA of descriptor ring */
@@ -297,9 +320,24 @@ struct sge_eq {
	uint16_t pidx;		/* producer idx (desc idx) */
	uint16_t pending;	/* # of descriptors used since last doorbell */
	uint16_t iqid;		/* iq that gets egr_update for the eq */
	unsigned int cntxt_id;	/* SGE context id for the eq */
	uint8_t tx_chan;	/* tx channel used by the eq */
	struct task tx_task;
	struct callout tx_callout;

	/* stats */

	uint32_t egr_update;	/* # of SGE_EGR_UPDATE notifications for eq */
	uint32_t unstalled;	/* recovered from stall */
};

enum {
	FL_STARVING = (1 << 0),	/* on the adapter's list of starving fl's */
	FL_DOOMED = (1 << 1),	/* about to be destroyed */
};

#define FL_RUNNING_LOW(fl) (fl->cap - fl->needed <= fl->lowat)
#define FL_NOT_RUNNING_LOW(fl) (fl->cap - fl->needed >= 2 * fl->lowat)

struct sge_fl {
	bus_dma_tag_t desc_tag;
	bus_dmamap_t desc_map;
@@ -307,6 +345,7 @@ struct sge_fl {
	uint8_t tag_idx;
	struct mtx fl_lock;
	char lockname[16];
	int flags;

	__be64 *desc;		/* KVA of descriptor ring, ptr to addresses */
	bus_addr_t ba;		/* bus address of descriptor ring */
@@ -317,8 +356,10 @@ struct sge_fl {
	uint32_t cidx;		/* consumer idx (buffer idx, NOT hw desc idx) */
	uint32_t pidx;		/* producer idx (buffer idx, NOT hw desc idx) */
	uint32_t needed;	/* # of buffers needed to fill up fl. */
	uint32_t lowat;		/* # of buffers <= this means fl needs help */
	uint32_t pending;	/* # of bufs allocated since last doorbell */
	unsigned int dmamap_failed;
	TAILQ_ENTRY(sge_fl) link; /* All starving freelists */
};

/* txq: SGE egress queue + what's needed for Ethernet NIC */
@@ -330,14 +371,8 @@ struct sge_txq {
	struct buf_ring *br;	/* tx buffer ring */
	struct tx_sdesc *sdesc;	/* KVA of software descriptor ring */
	struct mbuf *m;		/* held up due to temporary resource shortage */
	struct task resume_tx;

	/* DMA maps used for tx */
	struct tx_map *maps;
	uint32_t map_total;	/* # of DMA maps */
	uint32_t map_pidx;	/* next map to be used */
	uint32_t map_cidx;	/* reclaimed up to this index */
	uint32_t map_avail;	/* # of available maps */
	struct tx_maps txmaps;

	/* stats for common events first */

@@ -354,20 +389,14 @@ struct sge_txq {

	uint32_t no_dmamap;	/* no DMA map to load the mbuf */
	uint32_t no_desc;	/* out of hardware descriptors */
	uint32_t egr_update;	/* # of SGE_EGR_UPDATE notifications for txq */
} __aligned(CACHE_LINE_SIZE);

enum {
	RXQ_LRO_ENABLED = (1 << 0)
};

/* rxq: SGE ingress queue + SGE free list + miscellaneous items */
struct sge_rxq {
	struct sge_iq iq;	/* MUST be first */
	struct sge_fl fl;
	struct sge_fl fl;	/* MUST follow iq */

	struct ifnet *ifp;	/* the interface this rxq belongs to */
	unsigned int flags;
#ifdef INET
	struct lro_ctrl lro;	/* LRO state */
#endif
@@ -381,12 +410,28 @@ struct sge_rxq {

} __aligned(CACHE_LINE_SIZE);

/* ctrlq: SGE egress queue + stats for control queue */
struct sge_ctrlq {
#ifndef TCP_OFFLOAD_DISABLE
/* ofld_rxq: SGE ingress queue + SGE free list + miscellaneous items */
struct sge_ofld_rxq {
	struct sge_iq iq;	/* MUST be first */
	struct sge_fl fl;	/* MUST follow iq */
} __aligned(CACHE_LINE_SIZE);
#endif

/*
 * wrq: SGE egress queue that is given prebuilt work requests.  Both the control
 * and offload tx queues are of this type.
 */
struct sge_wrq {
	struct sge_eq eq;	/* MUST be first */

	struct adapter *adapter;
	struct mbuf *head;	/* held up due to lack of descriptors */
	struct mbuf *tail;	/* valid only if head is valid */

	/* stats for common events first */

	uint64_t tx_wrs;	/* # of tx work requests */

	/* stats for not-that-common events */

@@ -394,20 +439,28 @@ struct sge_ctrlq {
} __aligned(CACHE_LINE_SIZE);

struct sge {
	uint16_t timer_val[SGE_NTIMERS];
	uint8_t counter_val[SGE_NCOUNTERS];
	int timer_val[SGE_NTIMERS];
	int counter_val[SGE_NCOUNTERS];
	int fl_starve_threshold;

	int nrxq;	/* total rx queues (all ports and the rest) */
	int ntxq;	/* total tx queues (all ports and the rest) */
	int niq;	/* total ingress queues */
	int neq;	/* total egress queues */
	int nrxq;	/* total # of Ethernet rx queues */
	int ntxq;	/* total # of Ethernet tx queues */
#ifndef TCP_OFFLOAD_DISABLE
	int nofldrxq;	/* total # of TOE rx queues */
	int nofldtxq;	/* total # of TOE tx queues */
#endif
	int niq;	/* total # of ingress queues */
	int neq;	/* total # of egress queues */

	struct sge_iq fwq;	/* Firmware event queue */
	struct sge_ctrlq *ctrlq;/* Control queues */
	struct sge_iq *intrq;	/* Interrupt queues */
	struct sge_wrq mgmtq;	/* Management queue (control queue) */
	struct sge_wrq *ctrlq;	/* Control queues */
	struct sge_txq *txq;	/* NIC tx queues */
	struct sge_rxq *rxq;	/* NIC rx queues */
#ifndef TCP_OFFLOAD_DISABLE
	struct sge_wrq *ofld_txq;	/* TOE tx queues */
	struct sge_ofld_rxq *ofld_rxq;	/* TOE rx queues */
#endif

	uint16_t iq_start;
	int eq_start;
@@ -415,7 +468,12 @@ struct sge {
	struct sge_eq **eqmap;	/* eq->cntxt_id to eq mapping */
};

struct rss_header;
typedef int (*cpl_handler_t)(struct sge_iq *, const struct rss_header *,
    struct mbuf *);

struct adapter {
	SLIST_ENTRY(adapter) link;
	device_t dev;
	struct cdev *cdev;

@@ -444,27 +502,47 @@ struct adapter {

	struct sge sge;

	struct taskqueue *tq[NCHAN];	/* taskqueues that flush data out */
	struct port_info *port[MAX_NPORTS];
	uint8_t chan_map[NCHAN];
	uint32_t filter_mode;

#ifndef TCP_OFFLOAD_DISABLE
	struct uld_softc tom;
	struct tom_tunables tt;
#endif
	struct l2t_data *l2t;	/* L2 table */
	struct tid_info tids;

	int registered_device_map;
	int open_device_map;
#ifndef TCP_OFFLOAD_DISABLE
	int offload_map;
#endif
	int flags;

	char fw_version[32];
	unsigned int cfcsum;
	struct adapter_params params;
	struct t4_virt_res vres;

	struct sysctl_ctx_list ctx;	/* from first_port_up to last_port_down */
	struct sysctl_oid *oid_fwq;
	struct sysctl_oid *oid_ctrlq;
	struct sysctl_oid *oid_intrq;
	uint16_t linkcaps;
	uint16_t niccaps;
	uint16_t toecaps;
	uint16_t rdmacaps;
	uint16_t iscsicaps;
	uint16_t fcoecaps;

	struct sysctl_ctx_list ctx;	/* from adapter_full_init to full_uninit */

	struct mtx sc_lock;
	char lockname[16];

	/* Starving free lists */
	struct mtx sfl_lock;	/* same cache-line as sc_lock? but that's ok */
	TAILQ_HEAD(, sge_fl) sfl;
	struct callout sfl_callout;

	cpl_handler_t cpl_handler[256] __aligned(CACHE_LINE_SIZE);
};

#define ADAPTER_LOCK(sc) mtx_lock(&(sc)->sc_lock)
@@ -506,11 +584,15 @@ struct adapter {
#define for_each_rxq(pi, iter, rxq) \
	rxq = &pi->adapter->sge.rxq[pi->first_rxq]; \
	for (iter = 0; iter < pi->nrxq; ++iter, ++rxq)
#define for_each_ofld_txq(pi, iter, ofld_txq) \
	ofld_txq = &pi->adapter->sge.ofld_txq[pi->first_ofld_txq]; \
	for (iter = 0; iter < pi->nofldtxq; ++iter, ++ofld_txq)
#define for_each_ofld_rxq(pi, iter, ofld_rxq) \
	ofld_rxq = &pi->adapter->sge.ofld_rxq[pi->first_ofld_rxq]; \
	for (iter = 0; iter < pi->nofldrxq; ++iter, ++ofld_rxq)

/* One for errors, one for firmware events */
#define T4_EXTRA_INTR 2
#define NINTRQ(sc) ((sc)->intr_count > T4_EXTRA_INTR ? \
    (sc)->intr_count - T4_EXTRA_INTR : 1)

static inline uint32_t
t4_read_reg(struct adapter *sc, uint32_t reg)
@@ -589,29 +671,52 @@ static inline bool is_10G_port(const struct port_info *pi)
	return ((pi->link_cfg.supported & FW_PORT_CAP_SPEED_10G) != 0);
}

static inline int tx_resume_threshold(struct sge_eq *eq)
{
	return (eq->qsize / 4);
}

/* t4_main.c */
void cxgbe_txq_start(void *, int);
void t4_tx_task(void *, int);
void t4_tx_callout(void *);
int t4_os_find_pci_capability(struct adapter *, int);
int t4_os_pci_save_state(struct adapter *);
int t4_os_pci_restore_state(struct adapter *);
void t4_os_portmod_changed(const struct adapter *, int);
void t4_os_link_changed(struct adapter *, int, int);
void t4_iterate(void (*)(struct adapter *, void *), void *);
int t4_register_cpl_handler(struct adapter *, int, cpl_handler_t);

/* t4_sge.c */
void t4_sge_modload(void);
void t4_sge_init(struct adapter *);
int t4_sge_init(struct adapter *);
int t4_create_dma_tag(struct adapter *);
int t4_destroy_dma_tag(struct adapter *);
int t4_setup_adapter_queues(struct adapter *);
int t4_teardown_adapter_queues(struct adapter *);
int t4_setup_eth_queues(struct port_info *);
int t4_teardown_eth_queues(struct port_info *);
int t4_setup_port_queues(struct port_info *);
int t4_teardown_port_queues(struct port_info *);
int t4_alloc_tx_maps(struct tx_maps *, bus_dma_tag_t, int, int);
void t4_free_tx_maps(struct tx_maps *, bus_dma_tag_t);
void t4_intr_all(void *);
void t4_intr(void *);
void t4_intr_err(void *);
void t4_intr_evt(void *);
int t4_mgmt_tx(struct adapter *, struct mbuf *);
int t4_wrq_tx_locked(struct adapter *, struct sge_wrq *, struct mbuf *);
int t4_eth_tx(struct ifnet *, struct sge_txq *, struct mbuf *);
void t4_update_fl_bufsize(struct ifnet *);
int can_resume_tx(struct sge_eq *);

static inline int t4_wrq_tx(struct adapter *sc, struct sge_wrq *wrq, struct mbuf *m)
{
	int rc;

	TXQ_LOCK(wrq);
	rc = t4_wrq_tx_locked(sc, wrq, m);
	TXQ_UNLOCK(wrq);
	return (rc);
}

#endif
@@ -42,6 +42,15 @@ enum {

enum { MEM_EDC0, MEM_EDC1, MEM_MC };

enum {
	MEMWIN0_APERTURE = 2048,
	MEMWIN0_BASE = 0x1b800,
	MEMWIN1_APERTURE = 32768,
	MEMWIN1_BASE = 0x28000,
	MEMWIN2_APERTURE = 65536,
	MEMWIN2_BASE = 0x30000,
};

enum dev_master { MASTER_CANT, MASTER_MAY, MASTER_MUST };

enum dev_state { DEV_STATE_UNINIT, DEV_STATE_INIT, DEV_STATE_ERR };
@@ -53,8 +62,8 @@ enum {
};

#define FW_VERSION_MAJOR 1
#define FW_VERSION_MINOR 3
#define FW_VERSION_MICRO 10
#define FW_VERSION_MINOR 4
#define FW_VERSION_MICRO 16

struct port_stats {
	u64 tx_octets;		/* total # of octets in good frames */
@@ -190,7 +199,6 @@ struct tp_proxy_stats {
struct tp_cpl_stats {
	u32 req[4];
	u32 rsp[4];
	u32 tx_err[4];
};

struct tp_rdma_stats {
@@ -214,9 +222,9 @@ struct vpd_params {
};

struct pci_params {
	unsigned int vpd_cap_addr;
	unsigned char speed;
	unsigned char width;
	unsigned int vpd_cap_addr;
	unsigned short speed;
	unsigned short width;
};

/*
@@ -239,20 +247,20 @@ struct adapter_params {

	unsigned int fw_vers;
	unsigned int tp_vers;
	u8 api_vers[7];

	unsigned short mtus[NMTUS];
	unsigned short a_wnd[NCCTRL_WIN];
	unsigned short b_wnd[NCCTRL_WIN];

	unsigned int mc_size;	/* MC memory size */
	unsigned int nfilters;	/* size of filter region */
	unsigned int mc_size;	/* MC memory size */
	unsigned int nfilters;	/* size of filter region */

	unsigned int cim_la_size;

	unsigned int nports;	/* # of ethernet ports */
	/* Used as int in sysctls, do not reduce size */
	unsigned int nports;	/* # of ethernet ports */
	unsigned int portvec;
	unsigned int rev;	/* chip revision */
	unsigned int rev;	/* chip revision */
	unsigned int offload;

	unsigned int ofldq_wr_cred;
@@ -366,6 +374,9 @@ int t4_seeprom_wp(struct adapter *adapter, int enable);
int t4_read_flash(struct adapter *adapter, unsigned int addr, unsigned int nwords,
    u32 *data, int byte_oriented);
int t4_load_fw(struct adapter *adapter, const u8 *fw_data, unsigned int size);
int t4_load_boot(struct adapter *adap, const u8 *boot_data,
    unsigned int boot_addr, unsigned int size);
unsigned int t4_flash_cfg_addr(struct adapter *adapter);
int t4_load_cfg(struct adapter *adapter, const u8 *cfg_data, unsigned int size);
int t4_get_fw_version(struct adapter *adapter, u32 *vers);
int t4_get_tp_version(struct adapter *adapter, u32 *vers);
@@ -460,8 +471,8 @@ int t4_wol_pat_enable(struct adapter *adap, unsigned int port, unsigned int map,
int t4_fw_hello(struct adapter *adap, unsigned int mbox, unsigned int evt_mbox,
    enum dev_master master, enum dev_state *state);
int t4_fw_bye(struct adapter *adap, unsigned int mbox);
int t4_early_init(struct adapter *adap, unsigned int mbox);
int t4_fw_reset(struct adapter *adap, unsigned int mbox, int reset);
int t4_fw_initialize(struct adapter *adap, unsigned int mbox);
int t4_query_params(struct adapter *adap, unsigned int mbox, unsigned int pf,
    unsigned int vf, unsigned int nparams, const u32 *params,
    u32 *val);
@@ -30,10 +30,10 @@ __FBSDID("$FreeBSD$");
#include "common.h"
#include "t4_regs.h"
#include "t4_regs_values.h"
#include "t4fw_interface.h"
#include "firmware/t4fw_interface.h"

#undef msleep
#define msleep(x) DELAY((x) * 1000)
#define msleep(x) pause("t4hw", (x) * hz / 1000)

/**
 * t4_wait_op_done_val - wait until an operation is completed
@@ -187,7 +187,7 @@ int t4_wr_mbox_meat(struct adapter *adap, int mbox, const void *cmd, int size,
 * off to larger delays to a maximum retry delay.
 */
static const int delay[] = {
	1, 1, 3, 5, 10, 10, 20, 50, 100, 200
	1, 1, 3, 5, 10, 10, 20, 50, 100
};

u32 v;
@@ -625,17 +625,6 @@ enum {
	SF_RD_DATA_FAST = 0xb,	/* read flash */
	SF_RD_ID = 0x9f,	/* read ID */
	SF_ERASE_SECTOR = 0xd8,	/* erase sector */

	FW_START_SEC = 8,	/* first flash sector for FW */
	FW_END_SEC = 15,	/* last flash sector for FW */
	FW_IMG_START = FW_START_SEC * SF_SEC_SIZE,
	FW_MAX_SIZE = (FW_END_SEC - FW_START_SEC + 1) * SF_SEC_SIZE,

	FLASH_CFG_MAX_SIZE = 0x10000,	/* max size of the flash config file */
	FLASH_CFG_OFFSET = 0x1f0000,
	FLASH_CFG_START_SEC = FLASH_CFG_OFFSET / SF_SEC_SIZE,
	FPGA_FLASH_CFG_OFFSET = 0xf0000, /* if FPGA mode, then cfg file is at 1MB - 64KB */
	FPGA_FLASH_CFG_START_SEC = FPGA_FLASH_CFG_OFFSET / SF_SEC_SIZE,
};

/**
@@ -763,12 +752,15 @@ int t4_read_flash(struct adapter *adapter, unsigned int addr,
 * @addr: the start address to write
 * @n: length of data to write in bytes
 * @data: the data to write
 * @byte_oriented: whether to store data as bytes or as words
 *
 * Writes up to a page of data (256 bytes) to the serial flash starting
 * at the given address.  All the data must be written to the same page.
 * If @byte_oriented is set the write data is stored as byte stream
 * (i.e. matches what on disk), otherwise in big-endian.
 */
static int t4_write_flash(struct adapter *adapter, unsigned int addr,
    unsigned int n, const u8 *data)
    unsigned int n, const u8 *data, int byte_oriented)
{
	int ret;
	u32 buf[SF_PAGE_SIZE / 4];
@@ -788,6 +780,9 @@ static int t4_write_flash(struct adapter *adapter, unsigned int addr,
		for (val = 0, i = 0; i < c; ++i)
			val = (val << 8) + *data++;

		if (!byte_oriented)
			val = htonl(val);

		ret = sf1_write(adapter, c, c != left, 1, val);
		if (ret)
			goto unlock;
@@ -799,7 +794,8 @@ static int t4_write_flash(struct adapter *adapter, unsigned int addr,
	t4_write_reg(adapter, A_SF_OP, 0);	/* unlock SF */

	/* Read the page to verify the write succeeded */
	ret = t4_read_flash(adapter, addr & ~0xff, ARRAY_SIZE(buf), buf, 1);
	ret = t4_read_flash(adapter, addr & ~0xff, ARRAY_SIZE(buf), buf,
	    byte_oriented);
	if (ret)
		return ret;

@@ -825,7 +821,7 @@ static int t4_write_flash(struct adapter *adapter, unsigned int addr,
int t4_get_fw_version(struct adapter *adapter, u32 *vers)
{
	return t4_read_flash(adapter,
	    FW_IMG_START + offsetof(struct fw_hdr, fw_ver), 1,
	    FLASH_FW_START + offsetof(struct fw_hdr, fw_ver), 1,
	    vers, 0);
}

@@ -838,7 +834,7 @@ int t4_get_fw_version(struct adapter *adapter, u32 *vers)
 */
int t4_get_tp_version(struct adapter *adapter, u32 *vers)
{
	return t4_read_flash(adapter, FW_IMG_START + offsetof(struct fw_hdr,
	return t4_read_flash(adapter, FLASH_FW_START + offsetof(struct fw_hdr,
	    tp_microcode_ver),
	    1, vers, 0);
}
@@ -854,24 +850,17 @@ int t4_get_tp_version(struct adapter *adapter, u32 *vers)
 */
int t4_check_fw_version(struct adapter *adapter)
{
	u32 api_vers[2];
	int ret, major, minor, micro;

	ret = t4_get_fw_version(adapter, &adapter->params.fw_vers);
	if (!ret)
		ret = t4_get_tp_version(adapter, &adapter->params.tp_vers);
	if (!ret)
		ret = t4_read_flash(adapter,
		    FW_IMG_START + offsetof(struct fw_hdr, intfver_nic),
		    2, api_vers, 1);
	if (ret)
		return ret;

	major = G_FW_HDR_FW_VER_MAJOR(adapter->params.fw_vers);
	minor = G_FW_HDR_FW_VER_MINOR(adapter->params.fw_vers);
	micro = G_FW_HDR_FW_VER_MICRO(adapter->params.fw_vers);
	memcpy(adapter->params.api_vers, api_vers,
	    sizeof(adapter->params.api_vers));

	if (major != FW_VERSION_MAJOR) {	/* major mismatch - fail */
		CH_ERR(adapter, "card FW has major version %u, driver wants "
@@ -913,6 +902,21 @@ static int t4_flash_erase_sectors(struct adapter *adapter, int start, int end)
	return ret;
}

/**
 * t4_flash_cfg_addr - return the address of the flash configuration file
 * @adapter: the adapter
 *
 * Return the address within the flash where the Firmware Configuration
 * File is stored.
 */
unsigned int t4_flash_cfg_addr(struct adapter *adapter)
{
	if (adapter->params.sf_size == 0x100000)
		return FLASH_FPGA_CFG_START;
	else
		return FLASH_CFG_START;
}

/**
 * t4_load_cfg - download config file
 * @adap: the adapter
@@ -928,17 +932,8 @@ int t4_load_cfg(adap, const u8 *cfg_data, unsigned int size)
	unsigned int flash_cfg_start_sec;
	unsigned int sf_sec_size = adap->params.sf_size / adap->params.sf_nsec;

	if (adap->params.sf_size == 0x100000) {
		addr = FPGA_FLASH_CFG_OFFSET;
		flash_cfg_start_sec = FPGA_FLASH_CFG_START_SEC;
	} else {
		addr = FLASH_CFG_OFFSET;
		flash_cfg_start_sec = FLASH_CFG_START_SEC;
	}
	if (!size) {
		CH_ERR(adap, "cfg file has no data\n");
		return -EINVAL;
	}
	addr = t4_flash_cfg_addr(adap);
	flash_cfg_start_sec = addr / SF_SEC_SIZE;

	if (size > FLASH_CFG_MAX_SIZE) {
		CH_ERR(adap, "cfg file too large, max is %u bytes\n",
@@ -950,7 +945,11 @@ int t4_load_cfg(adap, const u8 *cfg_data, unsigned int size)
	    sf_sec_size);
	ret = t4_flash_erase_sectors(adap, flash_cfg_start_sec,
	    flash_cfg_start_sec + i - 1);
	if (ret)
	/*
	 * If size == 0 then we're simply erasing the FLASH sectors associated
	 * with the on-adapter Firmware Configuration File.
	 */
	if (ret || size == 0)
		goto out;

	/* this will write to the flash up to SF_PAGE_SIZE at a time */
@ -959,7 +958,7 @@ int t4_load_cfg(struct adapter *adap, const u8 *cfg_data, unsigned int size)
|
||||
n = size - i;
|
||||
else
|
||||
n = SF_PAGE_SIZE;
|
||||
ret = t4_write_flash(adap, addr, n, cfg_data);
|
||||
ret = t4_write_flash(adap, addr, n, cfg_data, 1);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
@ -969,7 +968,8 @@ int t4_load_cfg(struct adapter *adap, const u8 *cfg_data, unsigned int size)
|
||||
|
||||
out:
|
||||
if (ret)
|
||||
CH_ERR(adap, "config file download failed %d\n", ret);
|
||||
CH_ERR(adap, "config file %s failed %d\n",
|
||||
(size == 0 ? "clear" : "download"), ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@@ -1004,9 +1004,9 @@ int t4_load_fw(struct adapter *adap, const u8 *fw_data, unsigned int size)
		CH_ERR(adap, "FW image size differs from size in FW header\n");
		return -EINVAL;
	}
	if (size > FW_MAX_SIZE) {
	if (size > FLASH_FW_MAX_SIZE) {
		CH_ERR(adap, "FW image too large, max is %u bytes\n",
		    FW_MAX_SIZE);
		    FLASH_FW_MAX_SIZE);
		return -EFBIG;
	}

@@ -1020,7 +1020,8 @@ int t4_load_fw(struct adapter *adap, const u8 *fw_data, unsigned int size)
	}

	i = DIV_ROUND_UP(size, sf_sec_size);	/* # of sectors spanned */
	ret = t4_flash_erase_sectors(adap, FW_START_SEC, FW_START_SEC + i - 1);
	ret = t4_flash_erase_sectors(adap, FLASH_FW_START_SEC,
	    FLASH_FW_START_SEC + i - 1);
	if (ret)
		goto out;

@@ -1031,28 +1032,110 @@ int t4_load_fw(struct adapter *adap, const u8 *fw_data, unsigned int size)
	 */
	memcpy(first_page, fw_data, SF_PAGE_SIZE);
	((struct fw_hdr *)first_page)->fw_ver = htonl(0xffffffff);
	ret = t4_write_flash(adap, FW_IMG_START, SF_PAGE_SIZE, first_page);
	ret = t4_write_flash(adap, FLASH_FW_START, SF_PAGE_SIZE, first_page, 1);
	if (ret)
		goto out;

	addr = FW_IMG_START;
	addr = FLASH_FW_START;
	for (size -= SF_PAGE_SIZE; size; size -= SF_PAGE_SIZE) {
		addr += SF_PAGE_SIZE;
		fw_data += SF_PAGE_SIZE;
		ret = t4_write_flash(adap, addr, SF_PAGE_SIZE, fw_data);
		ret = t4_write_flash(adap, addr, SF_PAGE_SIZE, fw_data, 1);
		if (ret)
			goto out;
	}

	ret = t4_write_flash(adap,
	    FW_IMG_START + offsetof(struct fw_hdr, fw_ver),
	    sizeof(hdr->fw_ver), (const u8 *)&hdr->fw_ver);
	    FLASH_FW_START + offsetof(struct fw_hdr, fw_ver),
	    sizeof(hdr->fw_ver), (const u8 *)&hdr->fw_ver, 1);
out:
	if (ret)
		CH_ERR(adap, "firmware download failed, error %d\n", ret);
	return ret;
}

/* BIOS boot header */
typedef struct boot_header_s {
	u8 signature[2];	/* signature */
	u8 length;		/* image length (include header) */
	u8 offset[4];		/* initialization vector */
	u8 reserved[19];	/* reserved */
	u8 exheader[2];		/* offset to expansion header */
} boot_header_t;

enum {
	BOOT_FLASH_BOOT_ADDR = 0x0,	/* start address of boot image in flash */
	BOOT_SIGNATURE = 0xaa55,	/* signature of BIOS boot ROM */
	BOOT_SIZE_INC = 512,		/* image size measured in 512B chunks */
	BOOT_MIN_SIZE = sizeof(boot_header_t),	/* at least basic header */
	BOOT_MAX_SIZE = 1024*BOOT_SIZE_INC	/* 1 byte * length increment */
};
/*
 *	t4_load_boot - download boot flash
 *	@adapter: the adapter
 *	@boot_data: the boot image to write
 *	@size: image size
 *
 *	Write the supplied boot image to the card's serial flash.
 *	The boot image has the following sections: a 28-byte header and the
 *	boot image.
 */
int t4_load_boot(struct adapter *adap, const u8 *boot_data,
    unsigned int boot_addr, unsigned int size)
{
	int ret, addr;
	unsigned int i;
	unsigned int boot_sector = boot_addr * 1024;
	unsigned int sf_sec_size = adap->params.sf_size / adap->params.sf_nsec;

	/*
	 * Perform some primitive sanity testing to avoid accidentally
	 * writing garbage over the boot sectors. We ought to check for
	 * more but it's not worth it for now ...
	 */
	if (size < BOOT_MIN_SIZE || size > BOOT_MAX_SIZE) {
		CH_ERR(adap, "boot image too small/large\n");
		return -EFBIG;
	}

	/*
	 * Make sure the boot image does not encroach on the firmware region
	 */
	if ((boot_sector + size) >> 16 > FLASH_FW_START_SEC) {
		CH_ERR(adap, "boot image encroaching on firmware region\n");
		return -EFBIG;
	}

	i = DIV_ROUND_UP(size, sf_sec_size);	/* # of sectors spanned */
	ret = t4_flash_erase_sectors(adap, boot_sector >> 16,
	    (boot_sector >> 16) + i - 1);
	if (ret)
		goto out;

	/*
	 * Skip over the first SF_PAGE_SIZE worth of data and write it after
	 * we finish copying the rest of the boot image. This will ensure
	 * that the BIOS boot header will only be written if the boot image
	 * was written in full.
	 */
	addr = boot_sector;
	for (size -= SF_PAGE_SIZE; size; size -= SF_PAGE_SIZE) {
		addr += SF_PAGE_SIZE;
		boot_data += SF_PAGE_SIZE;
		ret = t4_write_flash(adap, addr, SF_PAGE_SIZE, boot_data, 0);
		if (ret)
			goto out;
	}

	ret = t4_write_flash(adap, boot_sector, SF_PAGE_SIZE, boot_data, 0);

out:
	if (ret)
		CH_ERR(adap, "boot image download failed, error %d\n", ret);
	return ret;
}
/**
 *	t4_read_cimq_cfg - read CIM queue configuration
 *	@adap: the adapter

@@ -1668,7 +1751,10 @@ static void sge_intr_handler(struct adapter *adapter)
	err = t4_read_reg(adapter, A_SGE_ERROR_STATS);
	if (err & F_ERROR_QID_VALID) {
		CH_ERR(adapter, "SGE error for queue %u\n", G_ERROR_QID(err));
		t4_write_reg(adapter, A_SGE_ERROR_STATS, F_ERROR_QID_VALID);
		if (err & F_UNCAPTURED_ERROR)
			CH_ERR(adapter, "SGE UNCAPTURED_ERROR set (clearing)\n");
		t4_write_reg(adapter, A_SGE_ERROR_STATS, F_ERROR_QID_VALID |
		    F_UNCAPTURED_ERROR);
	}

	if (v != 0)

@@ -2261,6 +2347,7 @@ int t4_config_rss_range(struct adapter *adapter, int mbox, unsigned int viid,
	 */
	while (n > 0) {
		int nq = min(n, 32);
		int nq_packed = 0;
		__be32 *qp = &cmd.iq0_to_iq2;

		/*

@@ -2282,25 +2369,28 @@ int t4_config_rss_range(struct adapter *adapter, int mbox, unsigned int viid,
		 * Ingress Queue ID array and insert them into the command.
		 */
		while (nq > 0) {
			unsigned int v;
			/*
			 * Grab up to the next 3 Ingress Queue IDs (wrapping
			 * around the Ingress Queue ID array if necessary) and
			 * insert them into the firmware RSS command at the
			 * current 3-tuple position within the command.
			 */
			v = V_FW_RSS_IND_TBL_CMD_IQ0(*rsp);
			if (++rsp >= rsp_end)
				rsp = rspq;
			v |= V_FW_RSS_IND_TBL_CMD_IQ1(*rsp);
			if (++rsp >= rsp_end)
				rsp = rspq;
			v |= V_FW_RSS_IND_TBL_CMD_IQ2(*rsp);
			if (++rsp >= rsp_end)
				rsp = rspq;
			u16 qbuf[3];
			u16 *qbp = qbuf;
			int nqbuf = min(3, nq);

			*qp++ = htonl(v);
			nq -= 3;
			nq -= nqbuf;
			qbuf[0] = qbuf[1] = qbuf[2] = 0;
			while (nqbuf && nq_packed < 32) {
				nqbuf--;
				nq_packed++;
				*qbp++ = *rsp++;
				if (rsp >= rsp_end)
					rsp = rspq;
			}
			*qp++ = cpu_to_be32(V_FW_RSS_IND_TBL_CMD_IQ0(qbuf[0]) |
			    V_FW_RSS_IND_TBL_CMD_IQ1(qbuf[1]) |
			    V_FW_RSS_IND_TBL_CMD_IQ2(qbuf[2]));
		}

		/*

@@ -2694,8 +2784,6 @@ void t4_tp_get_cpl_stats(struct adapter *adap, struct tp_cpl_stats *st)
{
	t4_read_indirect(adap, A_TP_MIB_INDEX, A_TP_MIB_DATA, st->req,
	    8, A_TP_MIB_CPL_IN_REQ_0);
	t4_read_indirect(adap, A_TP_MIB_INDEX, A_TP_MIB_DATA, st->tx_err,
	    4, A_TP_MIB_CPL_OUT_ERR_0);
}

/**

@@ -3298,6 +3386,7 @@ void t4_get_port_stats(struct adapter *adap, int idx, struct port_stats *p)
	t4_read_reg64(adap, PORT_REG(idx, A_MPS_PORT_STAT_##name##_L))
#define GET_STAT_COM(name) t4_read_reg64(adap, A_MPS_STAT_##name##_L)

	p->tx_pause = GET_STAT(TX_PORT_PAUSE);
	p->tx_octets = GET_STAT(TX_PORT_BYTES);
	p->tx_frames = GET_STAT(TX_PORT_FRAMES);
	p->tx_bcast_frames = GET_STAT(TX_PORT_BCAST);

@@ -3312,7 +3401,6 @@ void t4_get_port_stats(struct adapter *adap, int idx, struct port_stats *p)
	p->tx_frames_1024_1518 = GET_STAT(TX_PORT_1024B_1518B);
	p->tx_frames_1519_max = GET_STAT(TX_PORT_1519B_MAX);
	p->tx_drop = GET_STAT(TX_PORT_DROP);
	p->tx_pause = GET_STAT(TX_PORT_PAUSE);
	p->tx_ppp0 = GET_STAT(TX_PORT_PPP0);
	p->tx_ppp1 = GET_STAT(TX_PORT_PPP1);
	p->tx_ppp2 = GET_STAT(TX_PORT_PPP2);

@@ -3322,6 +3410,7 @@ void t4_get_port_stats(struct adapter *adap, int idx, struct port_stats *p)
	p->tx_ppp6 = GET_STAT(TX_PORT_PPP6);
	p->tx_ppp7 = GET_STAT(TX_PORT_PPP7);

	p->rx_pause = GET_STAT(RX_PORT_PAUSE);
	p->rx_octets = GET_STAT(RX_PORT_BYTES);
	p->rx_frames = GET_STAT(RX_PORT_FRAMES);
	p->rx_bcast_frames = GET_STAT(RX_PORT_BCAST);

@@ -3340,7 +3429,6 @@ void t4_get_port_stats(struct adapter *adap, int idx, struct port_stats *p)
	p->rx_frames_512_1023 = GET_STAT(RX_PORT_512B_1023B);
	p->rx_frames_1024_1518 = GET_STAT(RX_PORT_1024B_1518B);
	p->rx_frames_1519_max = GET_STAT(RX_PORT_1519B_MAX);
	p->rx_pause = GET_STAT(RX_PORT_PAUSE);
	p->rx_ppp0 = GET_STAT(RX_PORT_PPP0);
	p->rx_ppp1 = GET_STAT(RX_PORT_PPP1);
	p->rx_ppp2 = GET_STAT(RX_PORT_PPP2);
@@ -3683,28 +3771,114 @@ int t4_fw_hello(struct adapter *adap, unsigned int mbox, unsigned int evt_mbox,
{
	int ret;
	struct fw_hello_cmd c;
	u32 v;
	unsigned int master_mbox;
	int retries = FW_CMD_HELLO_RETRIES;

retry:
	memset(&c, 0, sizeof(c));
	INIT_CMD(c, HELLO, WRITE);
	c.err_to_mbasyncnot = htonl(
	c.err_to_clearinit = htonl(
	    V_FW_HELLO_CMD_MASTERDIS(master == MASTER_CANT) |
	    V_FW_HELLO_CMD_MASTERFORCE(master == MASTER_MUST) |
	    V_FW_HELLO_CMD_MBMASTER(master == MASTER_MUST ? mbox :
		M_FW_HELLO_CMD_MBMASTER) |
	    V_FW_HELLO_CMD_MBASYNCNOT(evt_mbox));
	    V_FW_HELLO_CMD_MBASYNCNOT(evt_mbox) |
	    V_FW_HELLO_CMD_STAGE(FW_HELLO_CMD_STAGE_OS) |
	    F_FW_HELLO_CMD_CLEARINIT);

	/*
	 * Issue the HELLO command to the firmware. If it's not successful
	 * but indicates that we got a "busy" or "timeout" condition, retry
	 * the HELLO until we exhaust our retry limit.
	 */
	ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
	if (ret == 0 && state) {
		u32 v = ntohl(c.err_to_mbasyncnot);
		if (v & F_FW_HELLO_CMD_INIT)
			*state = DEV_STATE_INIT;
		else if (v & F_FW_HELLO_CMD_ERR)
	if (ret != FW_SUCCESS) {
		if ((ret == -EBUSY || ret == -ETIMEDOUT) && retries-- > 0)
			goto retry;
		return ret;
	}

	v = ntohl(c.err_to_clearinit);
	master_mbox = G_FW_HELLO_CMD_MBMASTER(v);
	if (state) {
		if (v & F_FW_HELLO_CMD_ERR)
			*state = DEV_STATE_ERR;
		else if (v & F_FW_HELLO_CMD_INIT)
			*state = DEV_STATE_INIT;
		else
			*state = DEV_STATE_UNINIT;
		return G_FW_HELLO_CMD_MBMASTER(v);
	}
	return ret;

	/*
	 * If we're not the Master PF then we need to wait around for the
	 * Master PF Driver to finish setting up the adapter.
	 *
	 * Note that we also do this wait if we're a non-Master-capable PF and
	 * there is no current Master PF; a Master PF may show up momentarily
	 * and we wouldn't want to fail pointlessly. (This can happen when an
	 * OS loads lots of different drivers rapidly at the same time). In
	 * this case, the Master PF returned by the firmware will be
	 * M_PCIE_FW_MASTER so the test below will work ...
	 */
	if ((v & (F_FW_HELLO_CMD_ERR|F_FW_HELLO_CMD_INIT)) == 0 &&
	    master_mbox != mbox) {
		int waiting = FW_CMD_HELLO_TIMEOUT;

		/*
		 * Wait for the firmware to either indicate an error or
		 * initialized state. If we see either of these we bail out
		 * and report the issue to the caller. If we exhaust the
		 * "hello timeout" and we haven't exhausted our retries, try
		 * again. Otherwise bail with a timeout error.
		 */
		for (;;) {
			u32 pcie_fw;

			msleep(50);
			waiting -= 50;

			/*
			 * If neither Error nor Initialized are indicated
			 * by the firmware keep waiting till we exhaust our
			 * timeout ... and then retry if we haven't exhausted
			 * our retries ...
			 */
			pcie_fw = t4_read_reg(adap, A_PCIE_FW);
			if (!(pcie_fw & (F_PCIE_FW_ERR|F_PCIE_FW_INIT))) {
				if (waiting <= 0) {
					if (retries-- > 0)
						goto retry;

					return -ETIMEDOUT;
				}
				continue;
			}

			/*
			 * We either have an Error or Initialized condition
			 * report errors preferentially.
			 */
			if (state) {
				if (pcie_fw & F_PCIE_FW_ERR)
					*state = DEV_STATE_ERR;
				else if (pcie_fw & F_PCIE_FW_INIT)
					*state = DEV_STATE_INIT;
			}

			/*
			 * If we arrived before a Master PF was selected and
			 * there's not a valid Master PF, grab its identity
			 * for our caller.
			 */
			if (master_mbox == M_PCIE_FW_MASTER &&
			    (pcie_fw & F_PCIE_FW_MASTER_VLD))
				master_mbox = G_PCIE_FW_MASTER(pcie_fw);
			break;
		}
	}

	return master_mbox;
}
/**

@@ -3723,23 +3897,6 @@ int t4_fw_bye(struct adapter *adap, unsigned int mbox)
	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}

/**
 *	t4_init_cmd - ask FW to initialize the device
 *	@adap: the adapter
 *	@mbox: mailbox to use for the FW command
 *
 *	Issues a command to FW to partially initialize the device. This
 *	performs initialization that generally doesn't depend on user input.
 */
int t4_early_init(struct adapter *adap, unsigned int mbox)
{
	struct fw_initialize_cmd c;

	memset(&c, 0, sizeof(c));
	INIT_CMD(c, INITIALIZE, WRITE);
	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}

/**
 *	t4_fw_reset - issue a reset to FW
 *	@adap: the adapter

@@ -3758,6 +3915,23 @@ int t4_fw_reset(struct adapter *adap, unsigned int mbox, int reset)
	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}

/**
 *	t4_fw_initialize - ask FW to initialize the device
 *	@adap: the adapter
 *	@mbox: mailbox to use for the FW command
 *
 *	Issues a command to FW to partially initialize the device. This
 *	performs initialization that generally doesn't depend on user input.
 */
int t4_fw_initialize(struct adapter *adap, unsigned int mbox)
{
	struct fw_initialize_cmd c;

	memset(&c, 0, sizeof(c));
	INIT_CMD(c, INITIALIZE, WRITE);
	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}

/**
 *	t4_query_params - query FW or device parameters
 *	@adap: the adapter

@@ -4495,6 +4669,21 @@ static int __devinit get_flash_params(struct adapter *adapter)
	return 0;
}

static void __devinit set_pcie_completion_timeout(struct adapter *adapter,
    u8 range)
{
	u16 val;
	u32 pcie_cap;

	pcie_cap = t4_os_find_pci_capability(adapter, PCI_CAP_ID_EXP);
	if (pcie_cap) {
		t4_os_pci_read_cfg2(adapter, pcie_cap + PCI_EXP_DEVCTL2, &val);
		val &= 0xfff0;
		val |= range;
		t4_os_pci_write_cfg2(adapter, pcie_cap + PCI_EXP_DEVCTL2, val);
	}
}

/**
 *	t4_prep_adapter - prepare SW and HW for operation
 *	@adapter: the adapter

@@ -4541,6 +4730,8 @@ int __devinit t4_prep_adapter(struct adapter *adapter)
	adapter->params.portvec = 1;
	adapter->params.vpd.cclk = 50000;

	/* Set pci completion timeout value to 4 seconds. */
	set_pcie_completion_timeout(adapter, 0xd);
	return 0;
}
|
@ -182,4 +182,82 @@ struct pagepod {
|
||||
#define M_PPOD_OFST 0xFFFFFFFF
|
||||
#define V_PPOD_OFST(x) ((x) << S_PPOD_OFST)
|
||||
|
||||
/*
|
||||
* Flash layout.
|
||||
*/
|
||||
#define FLASH_START(start) ((start) * SF_SEC_SIZE)
|
||||
#define FLASH_MAX_SIZE(nsecs) ((nsecs) * SF_SEC_SIZE)
|
||||
|
||||
enum {
|
||||
/*
|
||||
* Various Expansion-ROM boot images, etc.
|
||||
*/
|
||||
FLASH_EXP_ROM_START_SEC = 0,
|
||||
FLASH_EXP_ROM_NSECS = 6,
|
||||
FLASH_EXP_ROM_START = FLASH_START(FLASH_EXP_ROM_START_SEC),
|
||||
FLASH_EXP_ROM_MAX_SIZE = FLASH_MAX_SIZE(FLASH_EXP_ROM_NSECS),
|
||||
|
||||
/*
|
||||
* iSCSI Boot Firmware Table (iBFT) and other driver-related
|
||||
* parameters ...
|
||||
*/
|
||||
FLASH_IBFT_START_SEC = 6,
|
||||
FLASH_IBFT_NSECS = 1,
|
||||
FLASH_IBFT_START = FLASH_START(FLASH_IBFT_START_SEC),
|
||||
FLASH_IBFT_MAX_SIZE = FLASH_MAX_SIZE(FLASH_IBFT_NSECS),
|
||||
|
||||
/*
|
||||
* Boot configuration data.
|
||||
*/
|
||||
FLASH_BOOTCFG_START_SEC = 7,
|
||||
FLASH_BOOTCFG_NSECS = 1,
|
||||
FLASH_BOOTCFG_START = FLASH_START(FLASH_BOOTCFG_START_SEC),
|
||||
FLASH_BOOTCFG_MAX_SIZE = FLASH_MAX_SIZE(FLASH_BOOTCFG_NSECS),
|
||||
|
||||
/*
|
||||
* Location of firmware image in FLASH.
|
||||
*/
|
||||
FLASH_FW_START_SEC = 8,
|
||||
FLASH_FW_NSECS = 8,
|
||||
FLASH_FW_START = FLASH_START(FLASH_FW_START_SEC),
|
||||
FLASH_FW_MAX_SIZE = FLASH_MAX_SIZE(FLASH_FW_NSECS),
|
||||
|
||||
/*
|
||||
* iSCSI persistent/crash information.
|
||||
*/
|
||||
FLASH_ISCSI_CRASH_START_SEC = 29,
|
||||
FLASH_ISCSI_CRASH_NSECS = 1,
|
||||
FLASH_ISCSI_CRASH_START = FLASH_START(FLASH_ISCSI_CRASH_START_SEC),
|
||||
FLASH_ISCSI_CRASH_MAX_SIZE = FLASH_MAX_SIZE(FLASH_ISCSI_CRASH_NSECS),
|
||||
|
||||
/*
|
||||
* FCoE persistent/crash information.
|
||||
*/
|
||||
FLASH_FCOE_CRASH_START_SEC = 30,
|
||||
FLASH_FCOE_CRASH_NSECS = 1,
|
||||
FLASH_FCOE_CRASH_START = FLASH_START(FLASH_FCOE_CRASH_START_SEC),
|
||||
FLASH_FCOE_CRASH_MAX_SIZE = FLASH_MAX_SIZE(FLASH_FCOE_CRASH_NSECS),
|
||||
|
||||
/*
|
||||
* Location of Firmware Configuration File in FLASH. Since the FPGA
|
||||
* "FLASH" is smaller we need to store the Configuration File in a
|
||||
* different location -- which will overlap the end of the firmware
|
||||
* image if firmware ever gets that large ...
|
||||
*/
|
||||
FLASH_CFG_START_SEC = 31,
|
||||
FLASH_CFG_NSECS = 1,
|
||||
FLASH_CFG_START = FLASH_START(FLASH_CFG_START_SEC),
|
||||
FLASH_CFG_MAX_SIZE = FLASH_MAX_SIZE(FLASH_CFG_NSECS),
|
||||
|
||||
FLASH_FPGA_CFG_START_SEC = 15,
|
||||
FLASH_FPGA_CFG_START = FLASH_START(FLASH_FPGA_CFG_START_SEC),
|
||||
|
||||
/*
|
||||
* Sectors 32-63 are reserved for FLASH failover.
|
||||
*/
|
||||
};
|
||||
|
||||
#undef FLASH_START
|
||||
#undef FLASH_MAX_SIZE
|
||||
|
||||
#endif /* __T4_HW_H */
sys/dev/cxgbe/firmware/t4fw_cfg.txt (new file, 132 lines)
@@ -0,0 +1,132 @@
# Firmware configuration file.
#
# Global limits (some are hardware limits, others are due to the firmware).
# Also note that the firmware reserves some of these resources for its own use
# so it's not always possible for the drivers to grab everything listed here.
# nvi = 128		virtual interfaces
# niqflint = 1023	ingress queues with freelists and/or interrupts
# nethctrl = 64K	Ethernet or ctrl egress queues
# neq = 64K		egress queues of all kinds, including freelists
# nexactf = 336		MPS TCAM entries, can oversubscribe.
#

[global]
	rss_glb_config_mode = basicvirtual
	rss_glb_config_options = tnlmapen, hashtoeplitz, tnlalllkp

	sge_timer_value = 1, 5, 10, 50, 100, 200	# usecs

	# TP_SHIFT_CNT
	reg[0x7dc0] = 0x64f8849

	filterMode = fragmentation, mpshittype, protocol, vlan, port, fcoe

	# TP rx and tx payload memory (% of the total EDRAM + DDR3).
	tp_pmrx = 40
	tp_pmtx = 60
	tp_pmrx_pagesize = 64K
	tp_pmtx_pagesize = 64K

# PFs 0-3. These get 8 MSI/8 MSI-X vectors each. VFs are supported by
# these 4 PFs only. Not used here at all.
[function "0"]
	nvf = 16
	nvi = 1
[function "0/*"]
	nvi = 1

[function "1"]
	nvf = 16
	nvi = 1
[function "1/*"]
	nvi = 1

[function "2"]
	nvf = 16
	nvi = 1
[function "2/*"]
	nvi = 1

[function "3"]
	nvf = 16
	nvi = 1
[function "3/*"]
	nvi = 1

# PF4 is the resource-rich PF that the bus/nexus driver attaches to.
# It gets 32 MSI/128 MSI-X vectors.
[function "4"]
	wx_caps = all
	r_caps = all
	nvi = 48
	niqflint = 256
	nethctrl = 128
	neq = 256
	nexactf = 300
	cmask = all
	pmask = all

	# driver will mask off features it won't use
	protocol = ofld

	tp_l2t = 100

	# TCAM has 8K cells; each region must start at a multiple of 128 cells.
	# Each entry in these categories takes 4 cells each. nhash will use the
	# TCAM iff there is room left (that is, the rest don't add up to 2048).
	nroute = 32
	nclip = 0	# needed only for IPv6 offload
	nfilter = 1504
	nserver = 512
	nhash = 16384
# PF5 is the SCSI Controller PF. It gets 32 MSI/40 MSI-X vectors.
# Not used right now.
[function "5"]
	nvi = 1

# PF6 is the FCoE Controller PF. It gets 32 MSI/40 MSI-X vectors.
# Not used right now.
[function "6"]
	nvi = 1

# MPS has 192K buffer space for ingress packets from the wire as well as
# loopback path of the L2 switch.
[port "0"]
	dcb = none
	bg_mem = 25
	lpbk_mem = 25
	hwm = 30
	lwm = 15
	dwm = 30

[port "1"]
	dcb = none
	bg_mem = 25
	lpbk_mem = 25
	hwm = 30
	lwm = 15
	dwm = 30

[port "2"]
	dcb = none
	bg_mem = 25
	lpbk_mem = 25
	hwm = 30
	lwm = 15
	dwm = 30

[port "3"]
	dcb = none
	bg_mem = 25
	lpbk_mem = 25
	hwm = 30
	lwm = 15
	dwm = 30

[fini]
	version = 0x1
	checksum = 0xb31cdfac
#
# $FreeBSD$
#
sys/dev/cxgbe/firmware/t4fw_cfg_uwire.txt (new file, 503 lines)
@@ -0,0 +1,503 @@
# Chelsio T4 Factory Default configuration file.
#
# Copyright (C) 2010 Chelsio Communications. All rights reserved.
#

# This file provides the default, power-on configuration for 4-port T4-based
# adapters shipped from the factory. These defaults are designed to address
# the needs of the vast majority of T4 customers. The basic idea is to have
# a default configuration which allows a customer to plug a T4 adapter in and
# have it work regardless of OS, driver or application except in the most
# unusual and/or demanding customer applications.
#
# Many of the T4 resources which are described by this configuration are
# finite. This requires balancing the configuration/operation needs of
# device drivers across OSes and a large number of customer applications.
#
# Some of the more important resources to allocate and their constraints are:
#  1. Virtual Interfaces: 128.
#  2. Ingress Queues with Free Lists: 1024. PCI-E SR-IOV Virtual Functions
#     must use a power of 2 Ingress Queues.
#  3. Egress Queues: 128K. PCI-E SR-IOV Virtual Functions must use a
#     power of 2 Egress Queues.
#  4. MSI-X Vectors: 1088. A complication here is that the PCI-E SR-IOV
#     Virtual Functions based off of a Physical Function all get the
#     same number of MSI-X Vectors as the base Physical Function.
#     Additionally, regardless of whether Virtual Functions are enabled or
#     not, their MSI-X "needs" are counted by the PCI-E implementation.
#     And finally, all Physical Functions capable of supporting Virtual
#     Functions (PF0-3) must have the same number of configured TotalVFs in
#     their SR-IOV Capabilities.
#  5. Multi-Port Support (MPS) TCAM: 336 entries to support MAC destination
#     address matching on Ingress Packets.
#
# Some of the important OS/Driver resource needs are:
#  6. Some OS Drivers will manage all resources through a single Physical
#     Function (currently PF0 but it could be any Physical Function). Thus,
#     this "Unified PF" will need to have enough resources allocated to it
#     to allow for this. And because of the MSI-X resource allocation
#     constraints mentioned above, this probably means we'll either have to
#     severely limit the TotalVFs if we continue to use PF0 as the Unified PF
#     or we'll need to move the Unified PF into the PF4-7 range since those
#     Physical Functions don't have any Virtual Functions associated with
#     them.
#  7. Some OS Drivers will manage different ports and functions (NIC,
#     storage, etc.) on different Physical Functions. For example, NIC
#     functions for ports 0-3 on PF0-3, FCoE on PF4, iSCSI on PF5, etc.
#
# Some of the customer application needs which need to be accommodated:
#  8. Some customers will want to support large CPU count systems with
#     good scaling. Thus, we'll need to accommodate a number of
#     Ingress Queues and MSI-X Vectors to allow up to some number of CPUs
#     to be involved per port and per application function. For example,
#     in the case where all ports and application functions will be
#     managed via a single Unified PF and we want to accommodate scaling up
#     to 8 CPUs, we would want:
#
#         4 ports *
#         3 application functions (NIC, FCoE, iSCSI) per port *
#         8 Ingress Queue/MSI-X Vectors per application function
#
#     for a total of 96 Ingress Queues and MSI-X Vectors on the Unified PF.
#     (Plus a few for Firmware Event Queues, etc.)
#
#  9. Some customers will want to use T4's PCI-E SR-IOV Capability to allow
#     Virtual Machines to directly access T4 functionality via SR-IOV
#     Virtual Functions and "PCI Device Passthrough" -- this is especially
#     true for the NIC application functionality. (Note that there is
#     currently no ability to use the TOE, FCoE, iSCSI, etc. via Virtual
#     Functions so this is in fact solely limited to NIC.)
#
|
||||
|
||||
|
||||
# Global configuration settings.
|
||||
#
|
||||
[global]
|
||||
rss_glb_config_mode = basicvirtual
|
||||
rss_glb_config_options = tnlmapen,hashtoeplitz,tnlalllkp
|
||||
|
||||
# The following Scatter Gather Engine (SGE) settings assume a 4KB Host
|
||||
# Page Size and a 64B L1 Cache Line Size. It programs the
|
||||
# EgrStatusPageSize and IngPadBoundary to 64B and the PktShift to 2.
|
||||
# If a Master PF Driver finds itself on a machine with different
|
||||
# parameters, then the Master PF Driver is responsible for initializing
|
||||
# these parameters to appropriate values.
|
||||
#
|
||||
# Notes:
|
||||
# 1. The Free List Buffer Sizes below are raw and the firmware will
|
||||
# round them up to the Ingress Padding Boundary.
|
||||
# 2. The SGE Timer Values below are expressed below in microseconds.
|
||||
# The firmware will convert these values to Core Clock Ticks when
|
||||
# it processes the configuration parameters.
|
||||
#
|
||||
reg[0x1008] = 0x40810/0x21c70 # SGE_CONTROL
|
||||
reg[0x100c] = 0x22222222 # SGE_HOST_PAGE_SIZE
|
||||
reg[0x10a0] = 0x01040810 # SGE_INGRESS_RX_THRESHOLD
|
||||
reg[0x1044] = 4096 # SGE_FL_BUFFER_SIZE0
|
||||
reg[0x1048] = 65536 # SGE_FL_BUFFER_SIZE1
|
||||
reg[0x104c] = 1536 # SGE_FL_BUFFER_SIZE2
|
||||
reg[0x1050] = 9024 # SGE_FL_BUFFER_SIZE3
|
||||
reg[0x1054] = 9216 # SGE_FL_BUFFER_SIZE4
|
||||
reg[0x1058] = 2048 # SGE_FL_BUFFER_SIZE5
|
||||
reg[0x105c] = 128 # SGE_FL_BUFFER_SIZE6
|
||||
reg[0x1060] = 8192 # SGE_FL_BUFFER_SIZE7
|
||||
reg[0x1064] = 16384 # SGE_FL_BUFFER_SIZE8
|
||||
reg[0x10a4] = 0xa000a000/0xf000f000 # SGE_DBFIFO_STATUS
|
||||
reg[0x10a8] = 0x2000/0x2000 # SGE_DOORBELL_CONTROL
|
||||
sge_timer_value = 5, 10, 20, 50, 100, 200 # SGE_TIMER_VALUE* in usecs
|
||||
|
||||
reg[0x7dc0] = 0x64f8849 # TP_SHIFT_CNT
|
||||
|
||||
# Selection of tuples for LE filter lookup, fields (and widths which
|
||||
# must sum to <= 36): { IP Fragment (1), MPS Match Type (3),
|
||||
# IP Protocol (8), [Inner] VLAN (17), Port (3), FCoE (1) }
|
||||
#
|
||||
filterMode = fragmentation, mpshittype, protocol, vnic_id, port, fcoe
|
||||
|
||||
# Percentage of dynamic memory (in either the EDRAM or external MEM)
|
||||
# to use for TP RX payload
|
||||
tp_pmrx = 30
|
||||
|
||||
# TP RX payload page size
|
||||
tp_pmrx_pagesize = 64K
|
||||
|
||||
# Percentage of dynamic memory (in either the EDRAM or external MEM)
|
||||
# to use for TP TX payload
|
||||
tp_pmtx = 50
|
||||
|
||||
# TP TX payload page size
|
||||
tp_pmtx_pagesize = 64K
|
||||
|
||||
# Some "definitions" to make the rest of this a bit more readable. We support
# 4 ports, 3 functions (NIC, FCoE and iSCSI), scaling up to 8 "CPU Queue Sets"
# per function per port ...
#
# NMSIX = 1088 # available MSI-X Vectors
# NVI = 128 # available Virtual Interfaces
# NMPSTCAM = 336 # MPS TCAM entries
#
# NPORTS = 4 # ports
# NCPUS = 8 # CPUs we want to support scalably
# NFUNCS = 3 # functions per port (NIC, FCoE, iSCSI)

# Breakdown of Virtual Interface/Queue/Interrupt resources for the "Unified
# PF" which many OS Drivers will use to manage most or all functions.
#
# Each Ingress Queue can use one MSI-X interrupt but some Ingress Queues can
# use Forwarded Interrupt Ingress Queues. For these latter, an Ingress Queue
# would be created and the Queue ID of a Forwarded Interrupt Ingress Queue
# will be specified as the "Ingress Queue Asynchronous Destination Index."
# Thus, the number of MSI-X Vectors assigned to the Unified PF will be less
# than or equal to the number of Ingress Queues ...
#
# NVI_NIC = 4 # NIC access to NPORTS
# NFLIQ_NIC = 32 # NIC Ingress Queues with Free Lists
# NETHCTRL_NIC = 32 # NIC Ethernet Control/TX Queues
# NEQ_NIC = 64 # NIC Egress Queues (FL, ETHCTRL/TX)
# NMPSTCAM_NIC = 16 # NIC MPS TCAM Entries (NPORTS*4)
# NMSIX_NIC = 32 # NIC MSI-X Interrupt Vectors (FLIQ)
#
# NVI_OFLD = 0 # Offload uses NIC function to access ports
# NFLIQ_OFLD = 16 # Offload Ingress Queues with Free Lists
# NETHCTRL_OFLD = 0 # Offload Ethernet Control/TX Queues
# NEQ_OFLD = 16 # Offload Egress Queues (FL)
# NMPSTCAM_OFLD = 0 # Offload MPS TCAM Entries (uses NIC's)
# NMSIX_OFLD = 16 # Offload MSI-X Interrupt Vectors (FLIQ)
#
# NVI_RDMA = 0 # RDMA uses NIC function to access ports
# NFLIQ_RDMA = 4 # RDMA Ingress Queues with Free Lists
# NETHCTRL_RDMA = 0 # RDMA Ethernet Control/TX Queues
# NEQ_RDMA = 4 # RDMA Egress Queues (FL)
# NMPSTCAM_RDMA = 0 # RDMA MPS TCAM Entries (uses NIC's)
# NMSIX_RDMA = 4 # RDMA MSI-X Interrupt Vectors (FLIQ)
#
# NEQ_WD = 128 # Wire Direct TX Queues and FLs
# NETHCTRL_WD = 64 # Wire Direct TX Queues
# NFLIQ_WD = 64 # Wire Direct Ingress Queues with Free Lists
#
# NVI_ISCSI = 4 # ISCSI access to NPORTS
# NFLIQ_ISCSI = 4 # ISCSI Ingress Queues with Free Lists
# NETHCTRL_ISCSI = 0 # ISCSI Ethernet Control/TX Queues
# NEQ_ISCSI = 4 # ISCSI Egress Queues (FL)
# NMPSTCAM_ISCSI = 4 # ISCSI MPS TCAM Entries (NPORTS)
# NMSIX_ISCSI = 4 # ISCSI MSI-X Interrupt Vectors (FLIQ)
#
# NVI_FCOE = 4 # FCOE access to NPORTS
# NFLIQ_FCOE = 34 # FCOE Ingress Queues with Free Lists
# NETHCTRL_FCOE = 32 # FCOE Ethernet Control/TX Queues
# NEQ_FCOE = 66 # FCOE Egress Queues (FL)
# NMPSTCAM_FCOE = 32 # FCOE MPS TCAM Entries (NPORTS)
# NMSIX_FCOE = 34 # FCOE MSI-X Interrupt Vectors (FLIQ)

# Two extra Ingress Queues per function for Firmware Events and Forwarded
# Interrupts, and two extra interrupts per function for Firmware Events (or a
# Forwarded Interrupt Queue) and General Interrupts per function.
#
# NFLIQ_EXTRA = 6 # "extra" Ingress Queues 2*NFUNCS (Firmware and
# # Forwarded Interrupts)
# NMSIX_EXTRA = 6 # extra interrupts 2*NFUNCS (Firmware and
# # General Interrupts)

# Microsoft HyperV resources. The HyperV Virtual Ingress Queues will have
# their interrupts forwarded to another set of Forwarded Interrupt Queues.
#
# NVI_HYPERV = 16 # VMs we want to support
# NVIIQ_HYPERV = 2 # Virtual Ingress Queues with Free Lists per VM
# NFLIQ_HYPERV = 40 # VIQs + NCPUS Forwarded Interrupt Queues
# NEQ_HYPERV = 32 # VIQs Free Lists
# NMPSTCAM_HYPERV = 16 # MPS TCAM Entries (NVI_HYPERV)
# NMSIX_HYPERV = 8 # NCPUS Forwarded Interrupt Queues

# Adding all of the above Unified PF resource needs together: (NIC + OFLD +
# RDMA + ISCSI + FCOE + EXTRA + HYPERV)
#
# NVI_UNIFIED = 28
# NFLIQ_UNIFIED = 106
# NETHCTRL_UNIFIED = 32
# NEQ_UNIFIED = 124
# NMPSTCAM_UNIFIED = 40
#
# The sum of all the MSI-X resources above is 74 MSI-X Vectors but we'll round
# that up to 128 to make sure the Unified PF doesn't run out of resources.
#
# NMSIX_UNIFIED = 128
#
# The Storage PFs could need up to NPORTS*NCPUS + NMSIX_EXTRA MSI-X Vectors
# which is 34 but they're probably safe with 32.
#
# NMSIX_STORAGE = 32

# Note: The UnifiedPF is PF4 which doesn't have any Virtual Functions
# associated with it. Thus, the MSI-X Vector allocations we give to the
# UnifiedPF aren't inherited by any Virtual Functions. As a result we can
# provision many more Virtual Functions than we can if the UnifiedPF were
# one of PF0-3.
#

# All of the below PCI-E parameters are actually stored in various *_init.txt
# files. We include them below essentially as comments.
#
# For PF0-3 we assign 8 vectors each for NIC Ingress Queues of the associated
# ports 0-3.
#
# For PF4, the Unified PF, we give it an MSI-X Table Size as outlined above.
#
# For PF5-6 we assign enough MSI-X Vectors to support FCoE and iSCSI
# storage applications across all four possible ports.
#
# Additionally, since the UnifiedPF isn't one of the per-port Physical
# Functions, we give the UnifiedPF and the PF0-3 Physical Functions
# different PCI Device IDs which will allow Unified and Per-Port Drivers
# to directly select the type of Physical Function to which they wish to be
# attached.
#
# Note that the actual values used for the PCI-E Intellectual Property will be
# 1 less than those below since that's the way it "counts" things. For
# readability, we use the number we actually mean ...
#
# PF0_INT = 8 # NCPUS
# PF1_INT = 8 # NCPUS
# PF2_INT = 8 # NCPUS
# PF3_INT = 8 # NCPUS
# PF0_3_INT = 32 # PF0_INT + PF1_INT + PF2_INT + PF3_INT
#
# PF4_INT = 128 # NMSIX_UNIFIED
# PF5_INT = 32 # NMSIX_STORAGE
# PF6_INT = 32 # NMSIX_STORAGE
# PF7_INT = 0 # Nothing Assigned
# PF4_7_INT = 192 # PF4_INT + PF5_INT + PF6_INT + PF7_INT
#
# PF0_7_INT = 224 # PF0_3_INT + PF4_7_INT
#
# With the above we can get 17 VFs/PF0-3 (limited by 336 MPS TCAM entries)
# but we'll lower that to 16 to make our total 64 and a nice power of 2 ...
#
# NVF = 16

# For those OSes which manage different ports on different PFs, we need
# only enough resources to support a single port's NIC application functions
# on PF0-3. The below assumes that we're only doing NIC with NCPUS "Queue
# Sets" for ports 0-3. The FCoE and iSCSI functions for such OSes will be
# managed on the "storage PFs" (see below).
#
[function "0"]
nvf = 16 # NVF on this function
wx_caps = all # write/execute permissions for all commands
r_caps = all # read permissions for all commands
nvi = 1 # 1 port
niqflint = 8 # NCPUS "Queue Sets"
nethctrl = 8 # NCPUS "Queue Sets"
neq = 16 # niqflint + nethctrl Egress Queues
nexactf = 8 # number of exact MPSTCAM MAC filters
cmask = all # access to all channels
pmask = 0x1 # access to only one port

[function "1"]
nvf = 16 # NVF on this function
wx_caps = all # write/execute permissions for all commands
r_caps = all # read permissions for all commands
nvi = 1 # 1 port
niqflint = 8 # NCPUS "Queue Sets"
nethctrl = 8 # NCPUS "Queue Sets"
neq = 16 # niqflint + nethctrl Egress Queues
nexactf = 8 # number of exact MPSTCAM MAC filters
cmask = all # access to all channels
pmask = 0x2 # access to only one port

[function "2"]
nvf = 16 # NVF on this function
wx_caps = all # write/execute permissions for all commands
r_caps = all # read permissions for all commands
nvi = 1 # 1 port
niqflint = 8 # NCPUS "Queue Sets"
nethctrl = 8 # NCPUS "Queue Sets"
neq = 16 # niqflint + nethctrl Egress Queues
nexactf = 8 # number of exact MPSTCAM MAC filters
cmask = all # access to all channels
pmask = 0x4 # access to only one port

[function "3"]
nvf = 16 # NVF on this function
wx_caps = all # write/execute permissions for all commands
r_caps = all # read permissions for all commands
nvi = 1 # 1 port
niqflint = 8 # NCPUS "Queue Sets"
nethctrl = 8 # NCPUS "Queue Sets"
neq = 16 # niqflint + nethctrl Egress Queues
nexactf = 8 # number of exact MPSTCAM MAC filters
cmask = all # access to all channels
pmask = 0x8 # access to only one port

# Some OS Drivers manage all application functions for all ports via PF4.
# Thus we need to provide a large number of resources here. For Egress
# Queues we need to account for both TX Queues as well as Free List Queues
# (because the host is responsible for producing Free List Buffers for the
# hardware to consume).
#
[function "4"]
wx_caps = all # write/execute permissions for all commands
r_caps = all # read permissions for all commands
nvi = 28 # NVI_UNIFIED
niqflint = 170 # NFLIQ_UNIFIED + NFLIQ_WD
nethctrl = 96 # NETHCTRL_UNIFIED + NETHCTRL_WD
neq = 252 # NEQ_UNIFIED + NEQ_WD
nexactf = 40 # NMPSTCAM_UNIFIED
cmask = all # access to all channels
pmask = all # access to all four ports ...
nroute = 32 # number of routing region entries
nclip = 32 # number of clip region entries
nfilter = 768 # number of filter region entries
nserver = 256 # number of server region entries
nhash = 0 # number of hash region entries
protocol = nic_vm, ofld, rddp, rdmac, iscsi_initiator_pdu, iscsi_target_pdu
tp_l2t = 100
tp_ddp = 2
tp_ddp_iscsi = 2
tp_stag = 2
tp_pbl = 5
tp_rq = 7

# We have FCoE and iSCSI storage functions on PF5 and PF6 each of which may
# need to have Virtual Interfaces on each of the four ports with up to NCPUS
# "Queue Sets" each.
#
[function "5"]
wx_caps = all # write/execute permissions for all commands
r_caps = all # read permissions for all commands
nvi = 4 # NPORTS
niqflint = 34 # NPORTS*NCPUS + NMSIX_EXTRA
nethctrl = 32 # NPORTS*NCPUS
neq = 64 # NPORTS*NCPUS * 2 (FL, ETHCTRL/TX)
nexactf = 4 # NPORTS
cmask = all # access to all channels
pmask = all # access to all four ports ...

[function "6"]
wx_caps = all # write/execute permissions for all commands
r_caps = all # read permissions for all commands
nvi = 4 # NPORTS
niqflint = 34 # NPORTS*NCPUS + NMSIX_EXTRA
nethctrl = 32 # NPORTS*NCPUS
neq = 66 # NPORTS*NCPUS * 2 (FL, ETHCTRL/TX) + 2 (EXTRA)
nexactf = 32 # NPORTS + adding 28 exact entries for FCoE
# which is OK since < MIN(SUM PF0..3, PF4)
# and we never load PF0..3 and PF4 concurrently
cmask = all # access to all channels
pmask = all # access to all four ports ...
nhash = 0
protocol = fcoe_initiator
tp_ddp = 2
fcoe_nfcf = 16
fcoe_nvnp = 32
fcoe_nssn = 1024

# For Virtual functions, we only allow NIC functionality and we only allow
# access to one port (1 << PF). Note that because of limitations in the
# Scatter Gather Engine (SGE) hardware which checks writes to VF KDOORBELL
# and GTS registers, the number of Ingress and Egress Queues must be a power
# of 2.
#
[function "0/*"] # NVF
wx_caps = 0x82 # DMAQ | VF
r_caps = 0x86 # DMAQ | VF | PORT
nvi = 1 # 1 port
niqflint = 4 # 2 "Queue Sets" + NXIQ
nethctrl = 2 # 2 "Queue Sets"
neq = 4 # 2 "Queue Sets" * 2
nexactf = 4
cmask = all # access to all channels
pmask = 0x1 # access to only one port ...

[function "1/*"] # NVF
wx_caps = 0x82 # DMAQ | VF
r_caps = 0x86 # DMAQ | VF | PORT
nvi = 1 # 1 port
niqflint = 4 # 2 "Queue Sets" + NXIQ
nethctrl = 2 # 2 "Queue Sets"
neq = 4 # 2 "Queue Sets" * 2
nexactf = 4
cmask = all # access to all channels
pmask = 0x2 # access to only one port ...

[function "2/*"] # NVF
wx_caps = 0x82 # DMAQ | VF
r_caps = 0x86 # DMAQ | VF | PORT
nvi = 1 # 1 port
niqflint = 4 # 2 "Queue Sets" + NXIQ
nethctrl = 2 # 2 "Queue Sets"
neq = 4 # 2 "Queue Sets" * 2
nexactf = 4
cmask = all # access to all channels
pmask = 0x4 # access to only one port ...

[function "3/*"] # NVF
wx_caps = 0x82 # DMAQ | VF
r_caps = 0x86 # DMAQ | VF | PORT
nvi = 1 # 1 port
niqflint = 4 # 2 "Queue Sets" + NXIQ
nethctrl = 2 # 2 "Queue Sets"
neq = 4 # 2 "Queue Sets" * 2
nexactf = 4
cmask = all # access to all channels
pmask = 0x8 # access to only one port ...

# MPS features a 196608-byte ingress buffer that is used for ingress buffering
# for packets from the wire as well as the loopback path of the L2 switch. The
# following params control how the buffer memory is distributed and the L2 flow
# control settings:
#
# bg_mem: %-age of mem to use for port/buffer group
# lpbk_mem: %-age of port/bg mem to use for loopback
# hwm: high watermark; bytes available when starting to send pause
# frames (in units of 0.1 MTU)
# lwm: low watermark; bytes remaining when sending 'unpause' frame
# (in units of 0.1 MTU)
# dwm: minimum delta between high and low watermark (in units of 100
# Bytes)
#
[port "0"]
dcb = ppp, dcbx # configure for DCB PPP and enable DCBX offload
bg_mem = 25
lpbk_mem = 25
hwm = 30
lwm = 15
dwm = 30

[port "1"]
dcb = ppp, dcbx
bg_mem = 25
lpbk_mem = 25
hwm = 30
lwm = 15
dwm = 30

[port "2"]
dcb = ppp, dcbx
bg_mem = 25
lpbk_mem = 25
hwm = 30
lwm = 15
dwm = 30

[port "3"]
dcb = ppp, dcbx
bg_mem = 25
lpbk_mem = 25
hwm = 30
lwm = 15
dwm = 30

[fini]
version = 0x14250007
checksum = 0xfcbadefb

# Total resources used by above allocations:
# Virtual Interfaces: 104
# Ingress Queues/w Free Lists and Interrupts: 526
# Egress Queues: 702
# MPS TCAM Entries: 336
# MSI-X Vectors: 736
# Virtual Functions: 64
#
# $FreeBSD$
#
@@ -37,16 +37,23 @@
enum fw_retval {
FW_SUCCESS = 0, /* completed successfully */
FW_EPERM = 1, /* operation not permitted */
FW_ENOENT = 2, /* no such file or directory */
FW_EIO = 5, /* input/output error; hw bad */
FW_ENOEXEC = 8, /* Exec format error; inv microcode */
FW_ENOEXEC = 8, /* exec format error; inv microcode */
FW_EAGAIN = 11, /* try again */
FW_ENOMEM = 12, /* out of memory */
FW_EFAULT = 14, /* bad address; fw bad */
FW_EBUSY = 16, /* resource busy */
FW_EEXIST = 17, /* File exists */
FW_EEXIST = 17, /* file exists */
FW_EINVAL = 22, /* invalid argument */
FW_ENOSPC = 28, /* no space left on device */
FW_ENOSYS = 38, /* functionality not implemented */
FW_EPROTO = 71, /* protocol error */
FW_EADDRINUSE = 98, /* address already in use */
FW_EADDRNOTAVAIL = 99, /* cannot assign requested address */
FW_ENETDOWN = 100, /* network is down */
FW_ENETUNREACH = 101, /* network is unreachable */
FW_ENOBUFS = 105, /* no buffer space available */
FW_ETIMEDOUT = 110, /* timeout */
FW_EINPROGRESS = 115, /* fw internal */
FW_SCSI_ABORT_REQUESTED = 128, /* */
@@ -62,6 +69,8 @@ enum fw_retval {
FW_ERR_RDEV_IMPL_LOGO = 138, /* */
FW_SCSI_UNDER_FLOW_ERR = 139, /* */
FW_SCSI_OVER_FLOW_ERR = 140, /* */
FW_SCSI_DDP_ERR = 141, /* DDP error */
FW_SCSI_TASK_ERR = 142, /* No SCSI tasks available */
};

/******************************************************************************
@@ -89,7 +98,7 @@ enum fw_wr_opcodes {
FW_RI_INV_LSTAG_WR = 0x1a,
FW_RI_WR = 0x0d,
FW_ISCSI_NODE_WR = 0x4a,
FW_LASTC2E_WR = 0x4b
FW_LASTC2E_WR = 0x50
};

/*
@@ -512,8 +521,14 @@ struct fw_eth_tx_pkt_wr {
__be64 r3;
};

#define S_FW_ETH_TX_PKT_WR_IMMDLEN 0
#define M_FW_ETH_TX_PKT_WR_IMMDLEN 0x1ff
#define V_FW_ETH_TX_PKT_WR_IMMDLEN(x) ((x) << S_FW_ETH_TX_PKT_WR_IMMDLEN)
#define G_FW_ETH_TX_PKT_WR_IMMDLEN(x) \
(((x) >> S_FW_ETH_TX_PKT_WR_IMMDLEN) & M_FW_ETH_TX_PKT_WR_IMMDLEN)

struct fw_eth_tx_pkts_wr {
__be32 op_immdlen;
__be32 op_pkd;
__be32 equiq_to_len16;
__be32 r3;
__be16 plen;
@@ -537,7 +552,7 @@ enum fw_flowc_mnem {
FW_FLOWC_MNEM_RCVNXT,
FW_FLOWC_MNEM_SNDBUF,
FW_FLOWC_MNEM_MSS,
FW_FLOWC_MEM_TXDATAPLEN_MAX,
FW_FLOWC_MNEM_TXDATAPLEN_MAX,
};

struct fw_flowc_mnemval {
@@ -1469,22 +1484,129 @@ struct fw_ri_wr {
#define G_FW_RI_WR_P2PTYPE(x) \
(((x) >> S_FW_RI_WR_P2PTYPE) & M_FW_RI_WR_P2PTYPE)

#ifdef FOISCSI
/******************************************************************************
* S C S I W O R K R E Q U E S T s
**********************************************/


/******************************************************************************
* F O i S C S I W O R K R E Q U E S T s
**********************************************/

#define ISCSI_NAME_MAX_LEN 224
#define ISCSI_ALIAS_MAX_LEN 224

enum session_type {
ISCSI_SESSION_DISCOVERY = 0,
ISCSI_SESSION_NORMAL,
};

enum digest_val {
DIGEST_NONE = 0,
DIGEST_CRC32,
DIGEST_BOTH,
};

enum fw_iscsi_subops {
NODE_ONLINE = 1,
SESS_ONLINE,
CONN_ONLINE,
NODE_OFFLINE,
SESS_OFFLINE,
CONN_OFFLINE,
NODE_STATS,
SESS_STATS,
CONN_STATS,
UPDATE_IOHANDLE,
};

struct fw_iscsi_node_attr {
__u8 name_len;
__u8 node_name[ISCSI_NAME_MAX_LEN];
__u8 alias_len;
__u8 node_alias[ISCSI_ALIAS_MAX_LEN];
};

struct fw_iscsi_sess_attr {
__u8 sess_type;
__u8 seq_inorder;
__u8 pdu_inorder;
__u8 immd_data_en;
__u8 init_r2t_en;
__u8 erl;
__be16 max_conn;
__be16 max_r2t;
__be16 time2wait;
__be16 time2retain;
__be32 max_burst;
__be32 first_burst;
};

struct fw_iscsi_conn_attr {
__u8 hdr_digest;
__u8 data_digest;
__be32 max_rcv_dsl;
__be16 dst_port;
__be32 dst_addr;
__be16 src_port;
__be32 src_addr;
__be32 ping_tmo;
};

struct fw_iscsi_node_stats {
__be16 sess_count;
__be16 chap_fail_count;
__be16 login_count;
__be16 r1;
};

struct fw_iscsi_sess_stats {
__be32 rxbytes;
__be32 txbytes;
__be32 scmd_count;
__be32 read_cmds;
__be32 write_cmds;
__be32 read_bytes;
__be32 write_bytes;
__be32 scsi_err_count;
__be32 scsi_rst_count;
__be32 iscsi_tmf_count;
__be32 conn_count;
};

struct fw_iscsi_conn_stats {
__be32 txbytes;
__be32 rxbytes;
__be32 dataout;
__be32 datain;
};

struct fw_iscsi_node_wr {
__u8 opcode;
__u8 subop;
__u8 node_attr_to_compl;
__u8 len16;
__u8 status;
__u8 r2;
__be16 immd_len;
__be32 flowid_len16;
__be64 cookie;
__u8 node_attr_to_compl;
__u8 status;
__be16 r1;
__be32 node_id;
__be32 ctrl_handle;
__be32 io_handle;
__be32 r3;
};

#define S_FW_ISCSI_NODE_WR_FLOWID 8
#define M_FW_ISCSI_NODE_WR_FLOWID 0xfffff
#define V_FW_ISCSI_NODE_WR_FLOWID(x) ((x) << S_FW_ISCSI_NODE_WR_FLOWID)
#define G_FW_ISCSI_NODE_WR_FLOWID(x) \
(((x) >> S_FW_ISCSI_NODE_WR_FLOWID) & M_FW_ISCSI_NODE_WR_FLOWID)

#define S_FW_ISCSI_NODE_WR_LEN16 0
#define M_FW_ISCSI_NODE_WR_LEN16 0xff
#define V_FW_ISCSI_NODE_WR_LEN16(x) ((x) << S_FW_ISCSI_NODE_WR_LEN16)
#define G_FW_ISCSI_NODE_WR_LEN16(x) \
(((x) >> S_FW_ISCSI_NODE_WR_LEN16) & M_FW_ISCSI_NODE_WR_LEN16)

#define S_FW_ISCSI_NODE_WR_NODE_ATTR 7
#define M_FW_ISCSI_NODE_WR_NODE_ATTR 0x1
#define V_FW_ISCSI_NODE_WR_NODE_ATTR(x) ((x) << S_FW_ISCSI_NODE_WR_NODE_ATTR)
@@ -1527,7 +1649,109 @@ struct fw_iscsi_node_wr {
(((x) >> S_FW_ISCSI_NODE_WR_COMPL) & M_FW_ISCSI_NODE_WR_COMPL)
#define F_FW_ISCSI_NODE_WR_COMPL V_FW_ISCSI_NODE_WR_COMPL(1U)

#endif
#define FW_ISCSI_NODE_INVALID_ID 0xffffffff

struct fw_scsi_iscsi_data {
__u8 r0;
__u8 fbit_to_tattr;
__be16 r2;
__be32 r3;
__u8 lun[8];
__be32 r4;
__be32 dlen;
__be32 r5;
__be32 r6;
__u8 cdb[16];
};

#define S_FW_SCSI_ISCSI_DATA_FBIT 7
#define M_FW_SCSI_ISCSI_DATA_FBIT 0x1
#define V_FW_SCSI_ISCSI_DATA_FBIT(x) ((x) << S_FW_SCSI_ISCSI_DATA_FBIT)
#define G_FW_SCSI_ISCSI_DATA_FBIT(x) \
(((x) >> S_FW_SCSI_ISCSI_DATA_FBIT) & M_FW_SCSI_ISCSI_DATA_FBIT)
#define F_FW_SCSI_ISCSI_DATA_FBIT V_FW_SCSI_ISCSI_DATA_FBIT(1U)

#define S_FW_SCSI_ISCSI_DATA_RBIT 6
#define M_FW_SCSI_ISCSI_DATA_RBIT 0x1
#define V_FW_SCSI_ISCSI_DATA_RBIT(x) ((x) << S_FW_SCSI_ISCSI_DATA_RBIT)
#define G_FW_SCSI_ISCSI_DATA_RBIT(x) \
(((x) >> S_FW_SCSI_ISCSI_DATA_RBIT) & M_FW_SCSI_ISCSI_DATA_RBIT)
#define F_FW_SCSI_ISCSI_DATA_RBIT V_FW_SCSI_ISCSI_DATA_RBIT(1U)

#define S_FW_SCSI_ISCSI_DATA_WBIT 5
#define M_FW_SCSI_ISCSI_DATA_WBIT 0x1
#define V_FW_SCSI_ISCSI_DATA_WBIT(x) ((x) << S_FW_SCSI_ISCSI_DATA_WBIT)
#define G_FW_SCSI_ISCSI_DATA_WBIT(x) \
(((x) >> S_FW_SCSI_ISCSI_DATA_WBIT) & M_FW_SCSI_ISCSI_DATA_WBIT)
#define F_FW_SCSI_ISCSI_DATA_WBIT V_FW_SCSI_ISCSI_DATA_WBIT(1U)

#define S_FW_SCSI_ISCSI_DATA_TATTR 0
#define M_FW_SCSI_ISCSI_DATA_TATTR 0x7
#define V_FW_SCSI_ISCSI_DATA_TATTR(x) ((x) << S_FW_SCSI_ISCSI_DATA_TATTR)
#define G_FW_SCSI_ISCSI_DATA_TATTR(x) \
(((x) >> S_FW_SCSI_ISCSI_DATA_TATTR) & M_FW_SCSI_ISCSI_DATA_TATTR)

#define FW_SCSI_ISCSI_DATA_TATTR_UNTAGGED 0
#define FW_SCSI_ISCSI_DATA_TATTR_SIMPLE 1
#define FW_SCSI_ISCSI_DATA_TATTR_ORDERED 2
#define FW_SCSI_ISCSI_DATA_TATTR_HEADOQ 3
#define FW_SCSI_ISCSI_DATA_TATTR_ACA 4

#define FW_SCSI_ISCSI_TMF_OP 0x02
#define FW_SCSI_ISCSI_ABORT_FUNC 0x01
#define FW_SCSI_ISCSI_LUN_RESET_FUNC 0x05
#define FW_SCSI_ISCSI_RESERVED_TAG 0xffffffff

struct fw_scsi_iscsi_rsp {
__u8 r0;
__u8 sbit_to_uflow;
__u8 response;
__u8 status;
__be32 r4;
__u8 r5[32];
__be32 bidir_res_cnt;
__be32 res_cnt;
__u8 sense_data[128];
};

#define S_FW_SCSI_ISCSI_RSP_SBIT 7
#define M_FW_SCSI_ISCSI_RSP_SBIT 0x1
#define V_FW_SCSI_ISCSI_RSP_SBIT(x) ((x) << S_FW_SCSI_ISCSI_RSP_SBIT)
#define G_FW_SCSI_ISCSI_RSP_SBIT(x) \
(((x) >> S_FW_SCSI_ISCSI_RSP_SBIT) & M_FW_SCSI_ISCSI_RSP_SBIT)
#define F_FW_SCSI_ISCSI_RSP_SBIT V_FW_SCSI_ISCSI_RSP_SBIT(1U)

#define S_FW_SCSI_ISCSI_RSP_BIDIR_OFLOW 4
#define M_FW_SCSI_ISCSI_RSP_BIDIR_OFLOW 0x1
#define V_FW_SCSI_ISCSI_RSP_BIDIR_OFLOW(x) \
((x) << S_FW_SCSI_ISCSI_RSP_BIDIR_OFLOW)
#define G_FW_SCSI_ISCSI_RSP_BIDIR_OFLOW(x) \
(((x) >> S_FW_SCSI_ISCSI_RSP_BIDIR_OFLOW) & \
M_FW_SCSI_ISCSI_RSP_BIDIR_OFLOW)
#define F_FW_SCSI_ISCSI_RSP_BIDIR_OFLOW V_FW_SCSI_ISCSI_RSP_BIDIR_OFLOW(1U)

#define S_FW_SCSI_ISCSI_RSP_BIDIR_UFLOW 3
#define M_FW_SCSI_ISCSI_RSP_BIDIR_UFLOW 0x1
#define V_FW_SCSI_ISCSI_RSP_BIDIR_UFLOW(x) \
((x) << S_FW_SCSI_ISCSI_RSP_BIDIR_UFLOW)
#define G_FW_SCSI_ISCSI_RSP_BIDIR_UFLOW(x) \
(((x) >> S_FW_SCSI_ISCSI_RSP_BIDIR_UFLOW) & \
M_FW_SCSI_ISCSI_RSP_BIDIR_UFLOW)
#define F_FW_SCSI_ISCSI_RSP_BIDIR_UFLOW V_FW_SCSI_ISCSI_RSP_BIDIR_UFLOW(1U)

#define S_FW_SCSI_ISCSI_RSP_OFLOW 2
#define M_FW_SCSI_ISCSI_RSP_OFLOW 0x1
#define V_FW_SCSI_ISCSI_RSP_OFLOW(x) ((x) << S_FW_SCSI_ISCSI_RSP_OFLOW)
#define G_FW_SCSI_ISCSI_RSP_OFLOW(x) \
(((x) >> S_FW_SCSI_ISCSI_RSP_OFLOW) & M_FW_SCSI_ISCSI_RSP_OFLOW)
#define F_FW_SCSI_ISCSI_RSP_OFLOW V_FW_SCSI_ISCSI_RSP_OFLOW(1U)

#define S_FW_SCSI_ISCSI_RSP_UFLOW 1
#define M_FW_SCSI_ISCSI_RSP_UFLOW 0x1
#define V_FW_SCSI_ISCSI_RSP_UFLOW(x) ((x) << S_FW_SCSI_ISCSI_RSP_UFLOW)
#define G_FW_SCSI_ISCSI_RSP_UFLOW(x) \
(((x) >> S_FW_SCSI_ISCSI_RSP_UFLOW) & M_FW_SCSI_ISCSI_RSP_UFLOW)
#define F_FW_SCSI_ISCSI_RSP_UFLOW V_FW_SCSI_ISCSI_RSP_UFLOW(1U)

/******************************************************************************
* C O M M A N D s
@@ -1543,6 +1767,16 @@ struct fw_iscsi_node_wr {
*/
#define FW_CMD_MAX_TIMEOUT 10000

/*
* If a host driver does a HELLO and discovers that there's already a MASTER
* selected, we may have to wait for that MASTER to finish issuing RESET,
* configuration and INITIALIZE commands. Also, there's a possibility that
* our own HELLO may get lost if it happens right as the MASTER is issuing a
* RESET command, so we need to be willing to make a few retries of our HELLO.
*/
#define FW_CMD_HELLO_TIMEOUT (3 * FW_CMD_MAX_TIMEOUT)
#define FW_CMD_HELLO_RETRIES 3

enum fw_cmd_opcodes {
FW_LDST_CMD = 0x01,
FW_RESET_CMD = 0x03,
@@ -1575,10 +1809,11 @@ enum fw_cmd_opcodes {
FW_SCHED_CMD = 0x24,
FW_DEVLOG_CMD = 0x25,
FW_NETIF_CMD = 0x26,
FW_WATCHDOG_CMD = 0x27,
FW_CLIP_CMD = 0x28,
FW_LASTC2E_CMD = 0x40,
FW_ERROR_CMD = 0x80,
FW_DEBUG_CMD = 0x81,

};

enum fw_cmd_cap {
@@ -1696,7 +1931,7 @@ struct fw_ldst_cmd {
} addrval;
struct fw_ldst_idctxt {
__be32 physid;
__be32 msg_pkd;
__be32 msg_ctxtflush;
__be32 ctxt_data7;
__be32 ctxt_data6;
__be32 ctxt_data5;
@@ -1769,6 +2004,13 @@ struct fw_ldst_cmd {
(((x) >> S_FW_LDST_CMD_MSG) & M_FW_LDST_CMD_MSG)
#define F_FW_LDST_CMD_MSG V_FW_LDST_CMD_MSG(1U)

#define S_FW_LDST_CMD_CTXTFLUSH 30
#define M_FW_LDST_CMD_CTXTFLUSH 0x1
#define V_FW_LDST_CMD_CTXTFLUSH(x) ((x) << S_FW_LDST_CMD_CTXTFLUSH)
#define G_FW_LDST_CMD_CTXTFLUSH(x) \
(((x) >> S_FW_LDST_CMD_CTXTFLUSH) & M_FW_LDST_CMD_CTXTFLUSH)
#define F_FW_LDST_CMD_CTXTFLUSH V_FW_LDST_CMD_CTXTFLUSH(1U)

#define S_FW_LDST_CMD_PADDR 8
#define M_FW_LDST_CMD_PADDR 0x1f
#define V_FW_LDST_CMD_PADDR(x) ((x) << S_FW_LDST_CMD_PADDR)
@@ -1852,13 +2094,27 @@ struct fw_reset_cmd {
__be32 op_to_write;
__be32 retval_len16;
__be32 val;
__be32 r3;
__be32 halt_pkd;
};

#define S_FW_RESET_CMD_HALT 31
#define M_FW_RESET_CMD_HALT 0x1
#define V_FW_RESET_CMD_HALT(x) ((x) << S_FW_RESET_CMD_HALT)
#define G_FW_RESET_CMD_HALT(x) \
(((x) >> S_FW_RESET_CMD_HALT) & M_FW_RESET_CMD_HALT)
#define F_FW_RESET_CMD_HALT V_FW_RESET_CMD_HALT(1U)

enum {
FW_HELLO_CMD_STAGE_OS = 0,
FW_HELLO_CMD_STAGE_PREOS0 = 1,
FW_HELLO_CMD_STAGE_PREOS1 = 2,
FW_HELLO_CMD_STAGE_POSTOS = 3,
};

struct fw_hello_cmd {
__be32 op_to_write;
__be32 retval_len16;
__be32 err_to_mbasyncnot;
__be32 err_to_clearinit;
__be32 fwrev;
};

@@ -1909,6 +2165,19 @@ struct fw_hello_cmd {
#define G_FW_HELLO_CMD_MBASYNCNOT(x) \
(((x) >> S_FW_HELLO_CMD_MBASYNCNOT) & M_FW_HELLO_CMD_MBASYNCNOT)

#define S_FW_HELLO_CMD_STAGE 17
#define M_FW_HELLO_CMD_STAGE 0x7
#define V_FW_HELLO_CMD_STAGE(x) ((x) << S_FW_HELLO_CMD_STAGE)
#define G_FW_HELLO_CMD_STAGE(x) \
(((x) >> S_FW_HELLO_CMD_STAGE) & M_FW_HELLO_CMD_STAGE)

#define S_FW_HELLO_CMD_CLEARINIT 16
#define M_FW_HELLO_CMD_CLEARINIT 0x1
#define V_FW_HELLO_CMD_CLEARINIT(x) ((x) << S_FW_HELLO_CMD_CLEARINIT)
#define G_FW_HELLO_CMD_CLEARINIT(x) \
(((x) >> S_FW_HELLO_CMD_CLEARINIT) & M_FW_HELLO_CMD_CLEARINIT)
#define F_FW_HELLO_CMD_CLEARINIT V_FW_HELLO_CMD_CLEARINIT(1U)

struct fw_bye_cmd {
__be32 op_to_write;
__be32 retval_len16;
@@ -1989,6 +2258,8 @@ enum fw_caps_config_nic {
FW_CAPS_CONFIG_NIC = 0x00000001,
FW_CAPS_CONFIG_NIC_VM = 0x00000002,
FW_CAPS_CONFIG_NIC_IDS = 0x00000004,
FW_CAPS_CONFIG_NIC_UM = 0x00000008,
FW_CAPS_CONFIG_NIC_UM_ISGL = 0x00000010,
};

enum fw_caps_config_toe {
@@ -2015,9 +2286,16 @@ enum fw_caps_config_fcoe {
FW_CAPS_CONFIG_FCOE_CTRL_OFLD = 0x00000004,
};

enum fw_memtype_cf {
FW_MEMTYPE_CF_EDC0 = 0x0,
FW_MEMTYPE_CF_EDC1 = 0x1,
FW_MEMTYPE_CF_EXTMEM = 0x2,
FW_MEMTYPE_CF_FLASH = 0x4,
};

struct fw_caps_config_cmd {
__be32 op_to_write;
__be32 retval_len16;
__be32 cfvalid_to_len16;
__be32 r2;
__be32 hwmbitmap;
__be16 nbmcaps;
@@ -2030,10 +2308,34 @@ struct fw_caps_config_cmd {
__be16 r4;
__be16 iscsicaps;
__be16 fcoecaps;
__be32 r5;
__be64 r6;
__be32 cfcsum;
__be32 finiver;
__be32 finicsum;
};

#define S_FW_CAPS_CONFIG_CMD_CFVALID 27
|
||||
#define M_FW_CAPS_CONFIG_CMD_CFVALID 0x1
|
||||
#define V_FW_CAPS_CONFIG_CMD_CFVALID(x) ((x) << S_FW_CAPS_CONFIG_CMD_CFVALID)
|
||||
#define G_FW_CAPS_CONFIG_CMD_CFVALID(x) \
|
||||
(((x) >> S_FW_CAPS_CONFIG_CMD_CFVALID) & M_FW_CAPS_CONFIG_CMD_CFVALID)
|
||||
#define F_FW_CAPS_CONFIG_CMD_CFVALID V_FW_CAPS_CONFIG_CMD_CFVALID(1U)
|
||||
|
||||
#define S_FW_CAPS_CONFIG_CMD_MEMTYPE_CF 24
|
||||
#define M_FW_CAPS_CONFIG_CMD_MEMTYPE_CF 0x7
|
||||
#define V_FW_CAPS_CONFIG_CMD_MEMTYPE_CF(x) \
|
||||
((x) << S_FW_CAPS_CONFIG_CMD_MEMTYPE_CF)
|
||||
#define G_FW_CAPS_CONFIG_CMD_MEMTYPE_CF(x) \
|
||||
(((x) >> S_FW_CAPS_CONFIG_CMD_MEMTYPE_CF) & \
|
||||
M_FW_CAPS_CONFIG_CMD_MEMTYPE_CF)
|
||||
|
||||
#define S_FW_CAPS_CONFIG_CMD_MEMADDR64K_CF 16
|
||||
#define M_FW_CAPS_CONFIG_CMD_MEMADDR64K_CF 0xff
|
||||
#define V_FW_CAPS_CONFIG_CMD_MEMADDR64K_CF(x) \
|
||||
((x) << S_FW_CAPS_CONFIG_CMD_MEMADDR64K_CF)
|
||||
#define G_FW_CAPS_CONFIG_CMD_MEMADDR64K_CF(x) \
|
||||
(((x) >> S_FW_CAPS_CONFIG_CMD_MEMADDR64K_CF) & \
|
||||
M_FW_CAPS_CONFIG_CMD_MEMADDR64K_CF)

/*
 * params command mnemonics
 */
@@ -2056,15 +2358,17 @@ enum fw_params_param_dev {
	 * Lookup Engine
	 */
	FW_PARAMS_PARAM_DEV_FLOWC_BUFFIFO_SZ = 0x03,
	FW_PARAMS_PARAM_DEV_INTVER_NIC = 0x04,
	FW_PARAMS_PARAM_DEV_INTVER_VNIC = 0x05,
	FW_PARAMS_PARAM_DEV_INTVER_OFLD = 0x06,
	FW_PARAMS_PARAM_DEV_INTVER_RI = 0x07,
	FW_PARAMS_PARAM_DEV_INTVER_ISCSIPDU = 0x08,
	FW_PARAMS_PARAM_DEV_INTVER_ISCSI = 0x09,
	FW_PARAMS_PARAM_DEV_INTVER_FCOE = 0x0A,
	FW_PARAMS_PARAM_DEV_INTFVER_NIC = 0x04,
	FW_PARAMS_PARAM_DEV_INTFVER_VNIC = 0x05,
	FW_PARAMS_PARAM_DEV_INTFVER_OFLD = 0x06,
	FW_PARAMS_PARAM_DEV_INTFVER_RI = 0x07,
	FW_PARAMS_PARAM_DEV_INTFVER_ISCSIPDU = 0x08,
	FW_PARAMS_PARAM_DEV_INTFVER_ISCSI = 0x09,
	FW_PARAMS_PARAM_DEV_INTFVER_FCOE = 0x0A,
	FW_PARAMS_PARAM_DEV_FWREV = 0x0B,
	FW_PARAMS_PARAM_DEV_TPREV = 0x0C,
	FW_PARAMS_PARAM_DEV_CF = 0x0D,
	FW_PARAMS_PARAM_DEV_BYPASS = 0x0E,
};

/*
@@ -2119,6 +2423,23 @@ enum fw_params_param_dmaq {
	FW_PARAMS_PARAM_DMAQ_EQ_SCHEDCLASS_ETH = 0x12,
};

/*
 * dev bypass parameters; actions and modes
 */
enum fw_params_param_dev_bypass {

	/* actions
	 */
	FW_PARAMS_PARAM_DEV_BYPASS_PFAIL = 0x00,
	FW_PARAMS_PARAM_DEV_BYPASS_CURRENT = 0x01,

	/* modes
	 */
	FW_PARAMS_PARAM_DEV_BYPASS_NORMAL = 0x00,
	FW_PARAMS_PARAM_DEV_BYPASS_DROP = 0x1,
	FW_PARAMS_PARAM_DEV_BYPASS_BYPASS = 0x2,
};

#define S_FW_PARAMS_MNEM 24
#define M_FW_PARAMS_MNEM 0xff
#define V_FW_PARAMS_MNEM(x) ((x) << S_FW_PARAMS_MNEM)
@@ -2271,6 +2592,7 @@ struct fw_pfvf_cmd {
#define V_FW_PFVF_CMD_NETHCTRL(x) ((x) << S_FW_PFVF_CMD_NETHCTRL)
#define G_FW_PFVF_CMD_NETHCTRL(x) \
    (((x) >> S_FW_PFVF_CMD_NETHCTRL) & M_FW_PFVF_CMD_NETHCTRL)

/*
 * ingress queue type; the first 1K ingress queues can have associated 0,
 * 1 or 2 free lists and an interrupt, all other ingress queues lack these
@@ -3518,6 +3840,7 @@ struct fw_eq_ofld_cmd {
#define V_FW_EQ_OFLD_CMD_EQSIZE(x) ((x) << S_FW_EQ_OFLD_CMD_EQSIZE)
#define G_FW_EQ_OFLD_CMD_EQSIZE(x) \
    (((x) >> S_FW_EQ_OFLD_CMD_EQSIZE) & M_FW_EQ_OFLD_CMD_EQSIZE)

/* Macros for VIID parsing:
   VIID - [10:8] PFN, [7] VI Valid, [6:0] VI number */
#define S_FW_VIID_PFN 8
@@ -4081,8 +4404,10 @@ enum fw_port_action {
	FW_PORT_ACTION_L2_WOL_MODE_EN = 0x0012,
	FW_PORT_ACTION_LPBK_TO_NORMAL = 0x0020,
	FW_PORT_ACTION_L1_SS_LPBK_ASIC = 0x0021,
	FW_PORT_ACTION_MAC_LPBK = 0x0022,
	FW_PORT_ACTION_L1_WS_LPBK_ASIC = 0x0023,
	FW_PORT_ACTION_L1_EXT_LPBK = 0x0026,
	FW_PORT_ACTION_PCS_LPBK = 0x0028,
	FW_PORT_ACTION_PHY_RESET = 0x0040,
	FW_PORT_ACTION_PMA_RESET = 0x0041,
	FW_PORT_ACTION_PCS_RESET = 0x0042,
@@ -4164,7 +4489,8 @@ struct fw_port_cmd {
		struct fw_port_dcb_pgrate {
			__u8 type;
			__u8 apply_pkd;
			__u8 r10_lo[6];
			__u8 r10_lo[5];
			__u8 num_tcs_supported;
			__u8 pgrate[8];
		} pgrate;
		struct fw_port_dcb_priorate {
@@ -4181,11 +4507,12 @@ struct fw_port_cmd {
		} pfc;
		struct fw_port_app_priority {
			__u8 type;
			__u8 r10_lo[3];
			__u8 prio;
			__u8 sel;
			__u8 r10[2];
			__u8 idx;
			__u8 user_prio_map;
			__u8 sel_field;
			__be16 protocolid;
			__u8 r12[8];
			__be64 r12;
		} app_priority;
	} dcb;
	} u;
@@ -4337,20 +4664,6 @@ struct fw_port_cmd {
    (((x) >> S_FW_PORT_CMD_APPLY) & M_FW_PORT_CMD_APPLY)
#define F_FW_PORT_CMD_APPLY V_FW_PORT_CMD_APPLY(1U)

#define S_FW_PORT_CMD_APPLY 7
#define M_FW_PORT_CMD_APPLY 0x1
#define V_FW_PORT_CMD_APPLY(x) ((x) << S_FW_PORT_CMD_APPLY)
#define G_FW_PORT_CMD_APPLY(x) \
    (((x) >> S_FW_PORT_CMD_APPLY) & M_FW_PORT_CMD_APPLY)
#define F_FW_PORT_CMD_APPLY V_FW_PORT_CMD_APPLY(1U)

#define S_FW_PORT_CMD_APPLY 7
#define M_FW_PORT_CMD_APPLY 0x1
#define V_FW_PORT_CMD_APPLY(x) ((x) << S_FW_PORT_CMD_APPLY)
#define G_FW_PORT_CMD_APPLY(x) \
    (((x) >> S_FW_PORT_CMD_APPLY) & M_FW_PORT_CMD_APPLY)
#define F_FW_PORT_CMD_APPLY V_FW_PORT_CMD_APPLY(1U)

/*
 * These are configured into the VPD and hence tools that generate
 * VPD may use this enumeration.
@@ -4383,6 +4696,7 @@ enum fw_port_module_type {
	FW_PORT_MOD_TYPE_TWINAX_PASSIVE = 0x4,
	FW_PORT_MOD_TYPE_TWINAX_ACTIVE = 0x5,
	FW_PORT_MOD_TYPE_LRM = 0x6,
	FW_PORT_MOD_TYPE_ERROR = M_FW_PORT_CMD_MODTYPE - 3,
	FW_PORT_MOD_TYPE_UNKNOWN = M_FW_PORT_CMD_MODTYPE - 2,
	FW_PORT_MOD_TYPE_NOTSUPPORTED = M_FW_PORT_CMD_MODTYPE - 1,
	FW_PORT_MOD_TYPE_NONE = M_FW_PORT_CMD_MODTYPE
@@ -5189,15 +5503,12 @@ struct fw_rss_vi_config_cmd {
#define F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN \
    V_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN(1U)

#define S_FW_RSS_VI_CONFIG_CMD_UDPEN 0
#define M_FW_RSS_VI_CONFIG_CMD_UDPEN 0x1
#define V_FW_RSS_VI_CONFIG_CMD_UDPEN(x) \
    ((x) << S_FW_RSS_VI_CONFIG_CMD_UDPEN)
#define G_FW_RSS_VI_CONFIG_CMD_UDPEN(x) \
    (((x) >> S_FW_RSS_VI_CONFIG_CMD_UDPEN) & \
     M_FW_RSS_VI_CONFIG_CMD_UDPEN)
#define F_FW_RSS_VI_CONFIG_CMD_UDPEN \
    V_FW_RSS_VI_CONFIG_CMD_UDPEN(1U)
#define S_FW_RSS_VI_CONFIG_CMD_UDPEN 0
#define M_FW_RSS_VI_CONFIG_CMD_UDPEN 0x1
#define V_FW_RSS_VI_CONFIG_CMD_UDPEN(x) ((x) << S_FW_RSS_VI_CONFIG_CMD_UDPEN)
#define G_FW_RSS_VI_CONFIG_CMD_UDPEN(x) \
    (((x) >> S_FW_RSS_VI_CONFIG_CMD_UDPEN) & M_FW_RSS_VI_CONFIG_CMD_UDPEN)
#define F_FW_RSS_VI_CONFIG_CMD_UDPEN V_FW_RSS_VI_CONFIG_CMD_UDPEN(1U)

enum fw_sched_sc {
	FW_SCHED_SC_CONFIG = 0,
@@ -5352,103 +5663,97 @@ struct fw_devlog_cmd {
     M_FW_DEVLOG_CMD_MEMADDR16_DEVLOG)

struct fw_netif_cmd {
	__be32 op_portid;
	__be32 retval_to_len16;
	__be32 add_to_ipv4gw;
	__be32 vlanid_mtuval;
	__be32 op_to_ipv4gw;
	__be32 retval_len16;
	__be32 netifi_ifadridx;
	__be32 portid_to_mtuval;
	__be32 gwaddr;
	__be32 addr;
	__be32 nmask;
	__be32 bcaddr;
};

#define S_FW_NETIF_CMD_PORTID 0
#define M_FW_NETIF_CMD_PORTID 0xf
#define V_FW_NETIF_CMD_PORTID(x) ((x) << S_FW_NETIF_CMD_PORTID)
#define G_FW_NETIF_CMD_PORTID(x) \
    (((x) >> S_FW_NETIF_CMD_PORTID) & M_FW_NETIF_CMD_PORTID)

#define S_FW_NETIF_CMD_RETVAL 24
#define M_FW_NETIF_CMD_RETVAL 0xff
#define V_FW_NETIF_CMD_RETVAL(x) ((x) << S_FW_NETIF_CMD_RETVAL)
#define G_FW_NETIF_CMD_RETVAL(x) \
    (((x) >> S_FW_NETIF_CMD_RETVAL) & M_FW_NETIF_CMD_RETVAL)

#define S_FW_NETIF_CMD_IFIDX 16
#define M_FW_NETIF_CMD_IFIDX 0xff
#define V_FW_NETIF_CMD_IFIDX(x) ((x) << S_FW_NETIF_CMD_IFIDX)
#define G_FW_NETIF_CMD_IFIDX(x) \
    (((x) >> S_FW_NETIF_CMD_IFIDX) & M_FW_NETIF_CMD_IFIDX)

#define S_FW_NETIF_CMD_LEN16 0
#define M_FW_NETIF_CMD_LEN16 0xff
#define V_FW_NETIF_CMD_LEN16(x) ((x) << S_FW_NETIF_CMD_LEN16)
#define G_FW_NETIF_CMD_LEN16(x) \
    (((x) >> S_FW_NETIF_CMD_LEN16) & M_FW_NETIF_CMD_LEN16)

#define S_FW_NETIF_CMD_ADD 31
#define S_FW_NETIF_CMD_ADD 20
#define M_FW_NETIF_CMD_ADD 0x1
#define V_FW_NETIF_CMD_ADD(x) ((x) << S_FW_NETIF_CMD_ADD)
#define G_FW_NETIF_CMD_ADD(x) \
    (((x) >> S_FW_NETIF_CMD_ADD) & M_FW_NETIF_CMD_ADD)
#define F_FW_NETIF_CMD_ADD V_FW_NETIF_CMD_ADD(1U)

#define S_FW_NETIF_CMD_LINK 30
#define S_FW_NETIF_CMD_LINK 19
#define M_FW_NETIF_CMD_LINK 0x1
#define V_FW_NETIF_CMD_LINK(x) ((x) << S_FW_NETIF_CMD_LINK)
#define G_FW_NETIF_CMD_LINK(x) \
    (((x) >> S_FW_NETIF_CMD_LINK) & M_FW_NETIF_CMD_LINK)
#define F_FW_NETIF_CMD_LINK V_FW_NETIF_CMD_LINK(1U)

#define S_FW_NETIF_CMD_VLAN 29
#define S_FW_NETIF_CMD_VLAN 18
#define M_FW_NETIF_CMD_VLAN 0x1
#define V_FW_NETIF_CMD_VLAN(x) ((x) << S_FW_NETIF_CMD_VLAN)
#define G_FW_NETIF_CMD_VLAN(x) \
    (((x) >> S_FW_NETIF_CMD_VLAN) & M_FW_NETIF_CMD_VLAN)
#define F_FW_NETIF_CMD_VLAN V_FW_NETIF_CMD_VLAN(1U)

#define S_FW_NETIF_CMD_MTU 28
#define S_FW_NETIF_CMD_MTU 17
#define M_FW_NETIF_CMD_MTU 0x1
#define V_FW_NETIF_CMD_MTU(x) ((x) << S_FW_NETIF_CMD_MTU)
#define G_FW_NETIF_CMD_MTU(x) \
    (((x) >> S_FW_NETIF_CMD_MTU) & M_FW_NETIF_CMD_MTU)
#define F_FW_NETIF_CMD_MTU V_FW_NETIF_CMD_MTU(1U)

#define S_FW_NETIF_CMD_DHCP 27
#define S_FW_NETIF_CMD_DHCP 16
#define M_FW_NETIF_CMD_DHCP 0x1
#define V_FW_NETIF_CMD_DHCP(x) ((x) << S_FW_NETIF_CMD_DHCP)
#define G_FW_NETIF_CMD_DHCP(x) \
    (((x) >> S_FW_NETIF_CMD_DHCP) & M_FW_NETIF_CMD_DHCP)
#define F_FW_NETIF_CMD_DHCP V_FW_NETIF_CMD_DHCP(1U)

#define S_FW_NETIF_CMD_IPV4BCADDR 3
#define S_FW_NETIF_CMD_IPV4BCADDR 15
#define M_FW_NETIF_CMD_IPV4BCADDR 0x1
#define V_FW_NETIF_CMD_IPV4BCADDR(x) ((x) << S_FW_NETIF_CMD_IPV4BCADDR)
#define G_FW_NETIF_CMD_IPV4BCADDR(x) \
    (((x) >> S_FW_NETIF_CMD_IPV4BCADDR) & M_FW_NETIF_CMD_IPV4BCADDR)
#define F_FW_NETIF_CMD_IPV4BCADDR V_FW_NETIF_CMD_IPV4BCADDR(1U)

#define S_FW_NETIF_CMD_IPV4NMASK 2
#define S_FW_NETIF_CMD_IPV4NMASK 14
#define M_FW_NETIF_CMD_IPV4NMASK 0x1
#define V_FW_NETIF_CMD_IPV4NMASK(x) ((x) << S_FW_NETIF_CMD_IPV4NMASK)
#define G_FW_NETIF_CMD_IPV4NMASK(x) \
    (((x) >> S_FW_NETIF_CMD_IPV4NMASK) & M_FW_NETIF_CMD_IPV4NMASK)
#define F_FW_NETIF_CMD_IPV4NMASK V_FW_NETIF_CMD_IPV4NMASK(1U)

#define S_FW_NETIF_CMD_IPV4ADDR 1
#define S_FW_NETIF_CMD_IPV4ADDR 13
#define M_FW_NETIF_CMD_IPV4ADDR 0x1
#define V_FW_NETIF_CMD_IPV4ADDR(x) ((x) << S_FW_NETIF_CMD_IPV4ADDR)
#define G_FW_NETIF_CMD_IPV4ADDR(x) \
    (((x) >> S_FW_NETIF_CMD_IPV4ADDR) & M_FW_NETIF_CMD_IPV4ADDR)
#define F_FW_NETIF_CMD_IPV4ADDR V_FW_NETIF_CMD_IPV4ADDR(1U)

#define S_FW_NETIF_CMD_IPV4GW 0
#define S_FW_NETIF_CMD_IPV4GW 12
#define M_FW_NETIF_CMD_IPV4GW 0x1
#define V_FW_NETIF_CMD_IPV4GW(x) ((x) << S_FW_NETIF_CMD_IPV4GW)
#define G_FW_NETIF_CMD_IPV4GW(x) \
    (((x) >> S_FW_NETIF_CMD_IPV4GW) & M_FW_NETIF_CMD_IPV4GW)
#define F_FW_NETIF_CMD_IPV4GW V_FW_NETIF_CMD_IPV4GW(1U)

#define S_FW_NETIF_CMD_NETIFI 8
#define M_FW_NETIF_CMD_NETIFI 0xffffff
#define V_FW_NETIF_CMD_NETIFI(x) ((x) << S_FW_NETIF_CMD_NETIFI)
#define G_FW_NETIF_CMD_NETIFI(x) \
    (((x) >> S_FW_NETIF_CMD_NETIFI) & M_FW_NETIF_CMD_NETIFI)

#define S_FW_NETIF_CMD_IFADRIDX 0
#define M_FW_NETIF_CMD_IFADRIDX 0xff
#define V_FW_NETIF_CMD_IFADRIDX(x) ((x) << S_FW_NETIF_CMD_IFADRIDX)
#define G_FW_NETIF_CMD_IFADRIDX(x) \
    (((x) >> S_FW_NETIF_CMD_IFADRIDX) & M_FW_NETIF_CMD_IFADRIDX)

#define S_FW_NETIF_CMD_PORTID 28
#define M_FW_NETIF_CMD_PORTID 0xf
#define V_FW_NETIF_CMD_PORTID(x) ((x) << S_FW_NETIF_CMD_PORTID)
#define G_FW_NETIF_CMD_PORTID(x) \
    (((x) >> S_FW_NETIF_CMD_PORTID) & M_FW_NETIF_CMD_PORTID)

#define S_FW_NETIF_CMD_VLANID 16
#define M_FW_NETIF_CMD_VLANID 0xfff
#define V_FW_NETIF_CMD_VLANID(x) ((x) << S_FW_NETIF_CMD_VLANID)
@@ -5461,6 +5766,42 @@ struct fw_netif_cmd {
#define G_FW_NETIF_CMD_MTUVAL(x) \
    (((x) >> S_FW_NETIF_CMD_MTUVAL) & M_FW_NETIF_CMD_MTUVAL)

enum fw_watchdog_actions {
	FW_WATCHDOG_ACTION_FLR = 0x1,
	FW_WATCHDOG_ACTION_BYPASS = 0x2,
};

#define FW_WATCHDOG_MAX_TIMEOUT_SECS 60

struct fw_watchdog_cmd {
	__be32 op_to_write;
	__be32 retval_len16;
	__be32 timeout;
	__be32 actions;
};

struct fw_clip_cmd {
	__be32 op_to_write;
	__be32 alloc_to_len16;
	__be64 ip_hi;
	__be64 ip_lo;
	__be32 r4[2];
};

#define S_FW_CLIP_CMD_ALLOC 31
#define M_FW_CLIP_CMD_ALLOC 0x1
#define V_FW_CLIP_CMD_ALLOC(x) ((x) << S_FW_CLIP_CMD_ALLOC)
#define G_FW_CLIP_CMD_ALLOC(x) \
    (((x) >> S_FW_CLIP_CMD_ALLOC) & M_FW_CLIP_CMD_ALLOC)
#define F_FW_CLIP_CMD_ALLOC V_FW_CLIP_CMD_ALLOC(1U)

#define S_FW_CLIP_CMD_FREE 30
#define M_FW_CLIP_CMD_FREE 0x1
#define V_FW_CLIP_CMD_FREE(x) ((x) << S_FW_CLIP_CMD_FREE)
#define G_FW_CLIP_CMD_FREE(x) \
    (((x) >> S_FW_CLIP_CMD_FREE) & M_FW_CLIP_CMD_FREE)
#define F_FW_CLIP_CMD_FREE V_FW_CLIP_CMD_FREE(1U)

enum fw_error_type {
	FW_ERROR_TYPE_EXCEPTION = 0x0,
	FW_ERROR_TYPE_HWMODULE = 0x1,
@@ -5570,6 +5911,94 @@ struct fw_debug_cmd {
#define G_FW_DEBUG_CMD_TYPE(x) \
    (((x) >> S_FW_DEBUG_CMD_TYPE) & M_FW_DEBUG_CMD_TYPE)


/******************************************************************************
 *   P C I E   F W   R E G I S T E R
 **************************************/

/**
 * Register definitions for the PCIE_FW register which the firmware uses
 * to retain status across RESETs. This register should be considered
 * as a READ-ONLY register for Host Software and only to be used to
 * track firmware initialization/error state, etc.
 */
#define S_PCIE_FW_ERR 31
#define M_PCIE_FW_ERR 0x1
#define V_PCIE_FW_ERR(x) ((x) << S_PCIE_FW_ERR)
#define G_PCIE_FW_ERR(x) (((x) >> S_PCIE_FW_ERR) & M_PCIE_FW_ERR)
#define F_PCIE_FW_ERR V_PCIE_FW_ERR(1U)

#define S_PCIE_FW_INIT 30
#define M_PCIE_FW_INIT 0x1
#define V_PCIE_FW_INIT(x) ((x) << S_PCIE_FW_INIT)
#define G_PCIE_FW_INIT(x) (((x) >> S_PCIE_FW_INIT) & M_PCIE_FW_INIT)
#define F_PCIE_FW_INIT V_PCIE_FW_INIT(1U)

#define S_PCIE_FW_HALT 29
#define M_PCIE_FW_HALT 0x1
#define V_PCIE_FW_HALT(x) ((x) << S_PCIE_FW_HALT)
#define G_PCIE_FW_HALT(x) (((x) >> S_PCIE_FW_HALT) & M_PCIE_FW_HALT)
#define F_PCIE_FW_HALT V_PCIE_FW_HALT(1U)

#define S_PCIE_FW_STAGE 21
#define M_PCIE_FW_STAGE 0x7
#define V_PCIE_FW_STAGE(x) ((x) << S_PCIE_FW_STAGE)
#define G_PCIE_FW_STAGE(x) (((x) >> S_PCIE_FW_STAGE) & M_PCIE_FW_STAGE)

#define S_PCIE_FW_ASYNCNOT_VLD 20
#define M_PCIE_FW_ASYNCNOT_VLD 0x1
#define V_PCIE_FW_ASYNCNOT_VLD(x) \
    ((x) << S_PCIE_FW_ASYNCNOT_VLD)
#define G_PCIE_FW_ASYNCNOT_VLD(x) \
    (((x) >> S_PCIE_FW_ASYNCNOT_VLD) & M_PCIE_FW_ASYNCNOT_VLD)
#define F_PCIE_FW_ASYNCNOT_VLD V_PCIE_FW_ASYNCNOT_VLD(1U)

#define S_PCIE_FW_ASYNCNOTINT 19
#define M_PCIE_FW_ASYNCNOTINT 0x1
#define V_PCIE_FW_ASYNCNOTINT(x) \
    ((x) << S_PCIE_FW_ASYNCNOTINT)
#define G_PCIE_FW_ASYNCNOTINT(x) \
    (((x) >> S_PCIE_FW_ASYNCNOTINT) & M_PCIE_FW_ASYNCNOTINT)
#define F_PCIE_FW_ASYNCNOTINT V_PCIE_FW_ASYNCNOTINT(1U)

#define S_PCIE_FW_ASYNCNOT 16
#define M_PCIE_FW_ASYNCNOT 0x7
#define V_PCIE_FW_ASYNCNOT(x) ((x) << S_PCIE_FW_ASYNCNOT)
#define G_PCIE_FW_ASYNCNOT(x) \
    (((x) >> S_PCIE_FW_ASYNCNOT) & M_PCIE_FW_ASYNCNOT)

#define S_PCIE_FW_MASTER_VLD 15
#define M_PCIE_FW_MASTER_VLD 0x1
#define V_PCIE_FW_MASTER_VLD(x) ((x) << S_PCIE_FW_MASTER_VLD)
#define G_PCIE_FW_MASTER_VLD(x) \
    (((x) >> S_PCIE_FW_MASTER_VLD) & M_PCIE_FW_MASTER_VLD)
#define F_PCIE_FW_MASTER_VLD V_PCIE_FW_MASTER_VLD(1U)

#define S_PCIE_FW_MASTER 12
#define M_PCIE_FW_MASTER 0x7
#define V_PCIE_FW_MASTER(x) ((x) << S_PCIE_FW_MASTER)
#define G_PCIE_FW_MASTER(x) (((x) >> S_PCIE_FW_MASTER) & M_PCIE_FW_MASTER)

#define S_PCIE_FW_RESET_VLD 11
#define M_PCIE_FW_RESET_VLD 0x1
#define V_PCIE_FW_RESET_VLD(x) ((x) << S_PCIE_FW_RESET_VLD)
#define G_PCIE_FW_RESET_VLD(x) \
    (((x) >> S_PCIE_FW_RESET_VLD) & M_PCIE_FW_RESET_VLD)
#define F_PCIE_FW_RESET_VLD V_PCIE_FW_RESET_VLD(1U)

#define S_PCIE_FW_RESET 8
#define M_PCIE_FW_RESET 0x7
#define V_PCIE_FW_RESET(x) ((x) << S_PCIE_FW_RESET)
#define G_PCIE_FW_RESET(x) \
    (((x) >> S_PCIE_FW_RESET) & M_PCIE_FW_RESET)

#define S_PCIE_FW_REGISTERED 0
#define M_PCIE_FW_REGISTERED 0xff
#define V_PCIE_FW_REGISTERED(x) ((x) << S_PCIE_FW_REGISTERED)
#define G_PCIE_FW_REGISTERED(x) \
    (((x) >> S_PCIE_FW_REGISTERED) & M_PCIE_FW_REGISTERED)


/******************************************************************************
 *   B I N A R Y   H E A D E R   F O R M A T
 **********************************************/
@@ -5579,7 +6008,7 @@ struct fw_debug_cmd {
 */
struct fw_hdr {
	__u8 ver;
	__u8 reserved1;
	__u8 chip; /* terminator chip family */
	__be16 len512; /* bin length in units of 512-bytes */
	__be32 fw_ver; /* firmware version */
	__be32 tp_microcode_ver; /* tcp processor microcode version */
@@ -5591,7 +6020,16 @@ struct fw_hdr {
	__u8 intfver_iscsi;
	__u8 intfver_fcoe;
	__u8 reserved2;
	__be32 reserved3[27];
	__u32 reserved3;
	__u32 reserved4;
	__u32 reserved5;
	__be32 flags;
	__be32 reserved6[23];
};

enum fw_hdr_chip {
	FW_HDR_CHIP_T4,
	FW_HDR_CHIP_T5
};

#define S_FW_HDR_FW_VER_MAJOR 24
@@ -5622,4 +6060,18 @@ struct fw_hdr {
#define G_FW_HDR_FW_VER_BUILD(x) \
    (((x) >> S_FW_HDR_FW_VER_BUILD) & M_FW_HDR_FW_VER_BUILD)

enum fw_hdr_intfver {
	FW_HDR_INTFVER_NIC = 0x00,
	FW_HDR_INTFVER_VNIC = 0x00,
	FW_HDR_INTFVER_OFLD = 0x00,
	FW_HDR_INTFVER_RI = 0x00,
	FW_HDR_INTFVER_ISCSIPDU = 0x00,
	FW_HDR_INTFVER_ISCSI = 0x00,
	FW_HDR_INTFVER_FCOE = 0x00,
};

enum fw_hdr_flags {
	FW_HDR_FLAGS_RESET_HALT = 0x00000001,
};

#endif /* _T4FW_INTERFACE_H_ */
@@ -31,15 +31,18 @@
#ifndef __T4_OFFLOAD_H__
#define __T4_OFFLOAD_H__

/* CPL message priority levels */
enum {
	CPL_PRIORITY_DATA = 0, /* data messages */
	CPL_PRIORITY_SETUP = 1, /* connection setup messages */
	CPL_PRIORITY_TEARDOWN = 0, /* connection teardown messages */
	CPL_PRIORITY_LISTEN = 1, /* listen start/stop messages */
	CPL_PRIORITY_ACK = 1, /* RX ACK messages */
	CPL_PRIORITY_CONTROL = 1 /* control messages */
};
/* XXX: flagrant misuse of mbuf fields (during tx by TOM) */
#define MBUF_EQ(m) (*((void **)(&(m)->m_pkthdr.rcvif)))
/* These have to work for !M_PKTHDR so we use a field from m_hdr. */
#define MBUF_TX_CREDITS(m) ((m)->m_hdr.pad[0])
#define MBUF_DMA_MAPPED(m) ((m)->m_hdr.pad[1])

#define INIT_ULPTX_WR(w, wrlen, atomic, tid) do { \
	(w)->wr.wr_hi = htonl(V_FW_WR_OP(FW_ULPTX_WR) | V_FW_WR_ATOMIC(atomic)); \
	(w)->wr.wr_mid = htonl(V_FW_WR_LEN16(DIV_ROUND_UP(wrlen, 16)) | \
	    V_FW_WR_FLOWID(tid)); \
	(w)->wr.wr_lo = cpu_to_be64(0); \
} while (0)

#define INIT_TP_WR(w, tid) do { \
	(w)->wr.wr_hi = htonl(V_FW_WR_OP(FW_TP_WR) | \
@@ -49,13 +52,19 @@ enum {
	(w)->wr.wr_lo = cpu_to_be64(0); \
} while (0)

#define INIT_TP_WR_MIT_CPL(w, cpl, tid) do { \
	INIT_TP_WR(w, tid); \
	OPCODE_TID(w) = htonl(MK_OPCODE_TID(cpl, tid)); \
} while (0)

/*
 * Max # of ATIDs. The absolute HW max is 16K but we keep it lower.
 */
#define MAX_ATIDS 8192U

struct serv_entry {
union serv_entry {
	void *data;
	union serv_entry *next;
};

union aopen_entry {
@@ -71,8 +80,7 @@ struct tid_info {
	void **tid_tab;
	unsigned int ntids;

	struct serv_entry *stid_tab;
	unsigned long *stid_bmap;
	union serv_entry *stid_tab;
	unsigned int nstids;
	unsigned int stid_base;

@@ -84,10 +92,15 @@ struct tid_info {
	unsigned int ftid_base;
	unsigned int ftids_in_use;

	struct mtx atid_lock;
	union aopen_entry *afree;
	unsigned int atids_in_use;

	struct mtx stid_lock;
	union serv_entry *sfree;
	unsigned int stids_in_use;

	unsigned int tids_in_use;
};

struct t4_range {
@@ -101,6 +114,40 @@ struct t4_virt_res { /* virtualized HW resources */
	struct t4_range stag;
	struct t4_range rq;
	struct t4_range pbl;
	struct t4_range qp;
	struct t4_range cq;
	struct t4_range ocq;
};

#ifndef TCP_OFFLOAD_DISABLE
enum {
	ULD_TOM = 1,
};

struct adapter;
struct port_info;
struct uld_info {
	SLIST_ENTRY(uld_info) link;
	int refcount;
	int uld_id;
	int (*attach)(struct adapter *, void **);
	int (*detach)(void *);
};

struct uld_softc {
	struct uld_info *uld;
	void *softc;
};

struct tom_tunables {
	int sndbuf;
	int ddp;
	int indsz;
	int ddp_thres;
};

int t4_register_uld(struct uld_info *);
int t4_unregister_uld(struct uld_info *);
#endif

#endif
@@ -124,6 +124,7 @@ typedef boolean_t bool;
#define PCI_EXP_LNKSTA PCIR_EXPRESS_LINK_STA
#define PCI_EXP_LNKSTA_CLS PCIM_LINK_STA_SPEED
#define PCI_EXP_LNKSTA_NLW PCIM_LINK_STA_WIDTH
#define PCI_EXP_DEVCTL2 0x28

static inline int
ilog2(long x)
@@ -47,6 +47,8 @@ enum {
	T4_SET_FILTER, /* program a filter */
	T4_DEL_FILTER, /* delete a filter */
	T4_GET_SGE_CONTEXT, /* get SGE context for a queue */
	T4_LOAD_FW, /* flash firmware */
	T4_GET_MEM, /* read memory */
};

struct t4_reg {
@@ -62,6 +64,11 @@ struct t4_regdump {
	uint32_t *data;
};

struct t4_data {
	uint32_t len;
	uint8_t *data;
};

/*
 * A hardware filter is some valid combination of these.
 */
@@ -73,8 +80,8 @@ struct t4_regdump {
#define T4_FILTER_IP_DPORT 0x20 /* Destination IP port */
#define T4_FILTER_FCoE 0x40 /* Fibre Channel over Ethernet packet */
#define T4_FILTER_PORT 0x80 /* Physical ingress port */
#define T4_FILTER_OVLAN 0x100 /* Outer VLAN ID */
#define T4_FILTER_IVLAN 0x200 /* Inner VLAN ID */
#define T4_FILTER_VNIC 0x100 /* VNIC id or outer VLAN */
#define T4_FILTER_VLAN 0x200 /* VLAN ID */
#define T4_FILTER_IP_TOS 0x400 /* IPv4 TOS/IPv6 Traffic Class */
#define T4_FILTER_IP_PROTO 0x800 /* IP protocol */
#define T4_FILTER_ETH_TYPE 0x1000 /* Ethernet Type */
@@ -131,8 +138,8 @@ struct t4_filter_tuple {
	 * is used to select the global mode and all filters are limited to the
	 * set of fields allowed by the global mode.
	 */
	uint16_t ovlan; /* outer VLAN */
	uint16_t ivlan; /* inner VLAN */
	uint16_t vnic; /* VNIC id or outer VLAN tag */
	uint16_t vlan; /* VLAN tag */
	uint16_t ethtype; /* Ethernet type */
	uint8_t tos; /* TOS/Traffic Type */
	uint8_t proto; /* protocol type */
@@ -141,8 +148,8 @@ struct t4_filter_tuple {
	uint32_t matchtype:3; /* MPS match type */
	uint32_t frag:1; /* fragmentation extension header */
	uint32_t macidx:9; /* exact match MAC index */
	uint32_t ivlan_vld:1; /* inner VLAN valid */
	uint32_t ovlan_vld:1; /* outer VLAN valid */
	uint32_t vlan_vld:1; /* VLAN valid */
	uint32_t vnic_vld:1; /* VNIC id/outer VLAN tag valid */
};

struct t4_filter_specification {
@@ -199,6 +206,12 @@ struct t4_sge_context {
	uint32_t data[T4_SGE_CONTEXT_SIZE / 4];
};

struct t4_mem_range {
	uint32_t addr;
	uint32_t len;
	uint32_t *data;
};

#define CHELSIO_T4_GETREG _IOWR('f', T4_GETREG, struct t4_reg)
#define CHELSIO_T4_SETREG _IOW('f', T4_SETREG, struct t4_reg)
#define CHELSIO_T4_REGDUMP _IOWR('f', T4_REGDUMP, struct t4_regdump)
@@ -209,4 +222,6 @@ struct t4_sge_context {
#define CHELSIO_T4_DEL_FILTER _IOW('f', T4_DEL_FILTER, struct t4_filter)
#define CHELSIO_T4_GET_SGE_CONTEXT _IOWR('f', T4_GET_SGE_CONTEXT, \
    struct t4_sge_context)
#define CHELSIO_T4_LOAD_FW _IOW('f', T4_LOAD_FW, struct t4_data)
#define CHELSIO_T4_GET_MEM _IOW('f', T4_GET_MEM, struct t4_mem_range)
#endif
@@ -37,7 +37,9 @@ __FBSDID("$FreeBSD$");
#include <sys/mutex.h>
#include <sys/rwlock.h>
#include <sys/socket.h>
#include <sys/sbuf.h>
#include <net/if.h>
#include <net/if_types.h>
#include <net/ethernet.h>
#include <net/if_vlan_var.h>
#include <net/if_dl.h>
@@ -50,9 +52,26 @@ __FBSDID("$FreeBSD$");
#include "common/common.h"
#include "common/jhash.h"
#include "common/t4_msg.h"
#include "offload.h"
#include "t4_l2t.h"

/*
 * Module locking notes: There is a RW lock protecting the L2 table as a
 * whole plus a spinlock per L2T entry. Entry lookups and allocations happen
 * under the protection of the table lock, individual entry changes happen
 * while holding that entry's spinlock. The table lock nests outside the
 * entry locks. Allocations of new entries take the table lock as writers so
 * no other lookups can happen while allocating new entries. Entry updates
 * take the table lock as readers so multiple entries can be updated in
 * parallel. An L2T entry can be dropped by decrementing its reference count
 * and therefore can happen in parallel with entry allocation but no entry
 * can change state or increment its ref count during allocation as both of
 * these perform lookups.
 *
 * Note: We do not take references to ifnets in this module because both
 * the TOE and the sockets already hold references to the interfaces and the
 * lifetime of an L2T entry is fully contained in the lifetime of the TOE.
 */

/* identifies sync vs async L2T_WRITE_REQs */
#define S_SYNC_WR 12
#define V_SYNC_WR(x) ((x) << S_SYNC_WR)
@@ -76,34 +95,251 @@ struct l2t_data {
	struct l2t_entry l2tab[L2T_SIZE];
};

static int do_l2t_write_rpl(struct sge_iq *, const struct rss_header *,
    struct mbuf *);

#define VLAN_NONE 0xfff
#define SA(x) ((struct sockaddr *)(x))
#define SIN(x) ((struct sockaddr_in *)(x))
#define SINADDR(x) (SIN(x)->sin_addr.s_addr)

/*
 * Module locking notes: There is a RW lock protecting the L2 table as a
 * whole plus a spinlock per L2T entry. Entry lookups and allocations happen
 * under the protection of the table lock, individual entry changes happen
 * while holding that entry's spinlock. The table lock nests outside the
 * entry locks. Allocations of new entries take the table lock as writers so
 * no other lookups can happen while allocating new entries. Entry updates
 * take the table lock as readers so multiple entries can be updated in
 * parallel. An L2T entry can be dropped by decrementing its reference count
 * and therefore can happen in parallel with entry allocation but no entry
 * can change state or increment its ref count during allocation as both of
 * these perform lookups.
 *
 * Note: We do not take references to ifnets in this module because both
 * the TOE and the sockets already hold references to the interfaces and the
 * lifetime of an L2T entry is fully contained in the lifetime of the TOE.
 * Allocate a free L2T entry. Must be called with l2t_data.lock held.
 */
static struct l2t_entry *
alloc_l2e(struct l2t_data *d)
{
	struct l2t_entry *end, *e, **p;

	rw_assert(&d->lock, RA_WLOCKED);

	if (!atomic_load_acq_int(&d->nfree))
		return (NULL);

	/* there's definitely a free entry */
	for (e = d->rover, end = &d->l2tab[L2T_SIZE]; e != end; ++e)
		if (atomic_load_acq_int(&e->refcnt) == 0)
			goto found;

	for (e = d->l2tab; atomic_load_acq_int(&e->refcnt); ++e) ;
found:
	d->rover = e + 1;
	atomic_subtract_int(&d->nfree, 1);

	/*
	 * The entry we found may be an inactive entry that is
	 * presently in the hash table. We need to remove it.
	 */
	if (e->state < L2T_STATE_SWITCHING) {
		for (p = &d->l2tab[e->hash].first; *p; p = &(*p)->next) {
			if (*p == e) {
				*p = e->next;
				e->next = NULL;
				break;
			}
		}
	}

	e->state = L2T_STATE_UNUSED;
	return (e);
}
|
||||
|
||||
/*
|
||||
* Write an L2T entry. Must be called with the entry locked.
|
||||
* The write may be synchronous or asynchronous.
|
||||
*/
|
||||
static int
|
||||
write_l2e(struct adapter *sc, struct l2t_entry *e, int sync)
|
||||
{
|
||||
struct mbuf *m;
|
||||
struct cpl_l2t_write_req *req;
|
||||
|
||||
mtx_assert(&e->lock, MA_OWNED);
|
||||
|
||||
if ((m = m_gethdr(M_NOWAIT, MT_DATA)) == NULL)
|
||||
return (ENOMEM);
|
||||
|
||||
req = mtod(m, struct cpl_l2t_write_req *);
|
||||
m->m_pkthdr.len = m->m_len = sizeof(*req);
|
||||
|
||||
INIT_TP_WR(req, 0);
|
||||
OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_L2T_WRITE_REQ, e->idx |
|
||||
V_SYNC_WR(sync) | V_TID_QID(sc->sge.fwq.abs_id)));
|
||||
req->params = htons(V_L2T_W_PORT(e->lport) | V_L2T_W_NOREPLY(!sync));
|
||||
req->l2t_idx = htons(e->idx);
|
||||
req->vlan = htons(e->vlan);
|
||||
memcpy(req->dst_mac, e->dmac, sizeof(req->dst_mac));
|
||||
|
||||
t4_mgmt_tx(sc, m);
|
||||
|
||||
if (sync && e->state != L2T_STATE_SWITCHING)
|
||||
e->state = L2T_STATE_SYNC_WRITE;
|
||||
|
||||
return (0);
|
||||
}
|
||||
|
||||
/*
|
||||
* Allocate an L2T entry for use by a switching rule. Such need to be
|
||||
* explicitly freed and while busy they are not on any hash chain, so normal
|
||||
* address resolution updates do not see them.
|
||||
*/
|
||||
struct l2t_entry *
|
||||
t4_l2t_alloc_switching(struct l2t_data *d)
|
||||
{
|
||||
struct l2t_entry *e;
|
||||
|
||||
rw_rlock(&d->lock);
|
||||
e = alloc_l2e(d);
|
||||
if (e) {
|
||||
mtx_lock(&e->lock); /* avoid race with t4_l2t_free */
|
||||
e->state = L2T_STATE_SWITCHING;
|
||||
atomic_store_rel_int(&e->refcnt, 1);
|
||||
mtx_unlock(&e->lock);
|
||||
}
|
||||
rw_runlock(&d->lock);
|
||||
return e;
|
||||
}

/*
 * Sets/updates the contents of a switching L2T entry that has been allocated
 * with an earlier call to @t4_l2t_alloc_switching.
 */
int
t4_l2t_set_switching(struct adapter *sc, struct l2t_entry *e, uint16_t vlan,
    uint8_t port, uint8_t *eth_addr)
{
        int rc;

        e->vlan = vlan;
        e->lport = port;
        memcpy(e->dmac, eth_addr, ETHER_ADDR_LEN);
        mtx_lock(&e->lock);
        rc = write_l2e(sc, e, 0);
        mtx_unlock(&e->lock);
        return (rc);
}

int
t4_init_l2t(struct adapter *sc, int flags)
{
        int i;
        struct l2t_data *d;

        d = malloc(sizeof(*d), M_CXGBE, M_ZERO | flags);
        if (!d)
                return (ENOMEM);

        d->rover = d->l2tab;
        atomic_store_rel_int(&d->nfree, L2T_SIZE);
        rw_init(&d->lock, "L2T");

        for (i = 0; i < L2T_SIZE; i++) {
                d->l2tab[i].idx = i;
                d->l2tab[i].state = L2T_STATE_UNUSED;
                mtx_init(&d->l2tab[i].lock, "L2T_E", NULL, MTX_DEF);
                atomic_store_rel_int(&d->l2tab[i].refcnt, 0);
        }

        sc->l2t = d;
        t4_register_cpl_handler(sc, CPL_L2T_WRITE_RPL, do_l2t_write_rpl);

        return (0);
}

int
t4_free_l2t(struct l2t_data *d)
{
        int i;

        for (i = 0; i < L2T_SIZE; i++)
                mtx_destroy(&d->l2tab[i].lock);
        rw_destroy(&d->lock);
        free(d, M_CXGBE);

        return (0);
}

static inline unsigned int
vlan_prio(const struct l2t_entry *e)
{
        return e->vlan >> 13;
}
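vlan_prio() above returns the top three bits of the stored 16-bit TCI. As a minimal user-space sketch of the 802.1Q field split it relies on (the helper names are illustrative, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* An 802.1Q TCI packs a 3-bit priority, a 1-bit CFI/DEI, and a
 * 12-bit VLAN ID into 16 bits: PPP C VVVVVVVVVVVV. */
static unsigned int tci_prio(uint16_t tci) { return tci >> 13; }
static unsigned int tci_vid(uint16_t tci)  { return tci & 0xfff; }
```

This is why the sysctl dump later prints `e->vlan & 0xfff` and `vlan_prio(e)` as separate columns.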

static char
l2e_state(const struct l2t_entry *e)
{
        switch (e->state) {
        case L2T_STATE_VALID: return 'V';  /* valid, fast-path entry */
        case L2T_STATE_STALE: return 'S';  /* needs revalidation, but usable */
        case L2T_STATE_SYNC_WRITE: return 'W';
        case L2T_STATE_RESOLVING: return e->arpq_head ? 'A' : 'R';
        case L2T_STATE_SWITCHING: return 'X';
        default: return 'U';
        }
}

int
sysctl_l2t(SYSCTL_HANDLER_ARGS)
{
        struct adapter *sc = arg1;
        struct l2t_data *l2t = sc->l2t;
        struct l2t_entry *e;
        struct sbuf *sb;
        int rc, i, header = 0;
        char ip[60];

        if (l2t == NULL)
                return (ENXIO);

        rc = sysctl_wire_old_buffer(req, 0);
        if (rc != 0)
                return (rc);

        sb = sbuf_new_for_sysctl(NULL, NULL, 4096, req);
        if (sb == NULL)
                return (ENOMEM);

        e = &l2t->l2tab[0];
        for (i = 0; i < L2T_SIZE; i++, e++) {
                mtx_lock(&e->lock);
                if (e->state == L2T_STATE_UNUSED)
                        goto skip;

                if (header == 0) {
                        sbuf_printf(sb, " Idx IP address      "
                            "Ethernet address  VLAN/P LP State Users Port");
                        header = 1;
                }
                if (e->state == L2T_STATE_SWITCHING || e->v6)
                        ip[0] = 0;
                else
                        snprintf(ip, sizeof(ip), "%s",
                            inet_ntoa(*(struct in_addr *)&e->addr[0]));

                /* XXX: accessing lle probably not safe? */
                sbuf_printf(sb, "\n%4u %-15s %02x:%02x:%02x:%02x:%02x:%02x %4d"
                    " %u %2u %c %5u %s",
                    e->idx, ip, e->dmac[0], e->dmac[1], e->dmac[2],
                    e->dmac[3], e->dmac[4], e->dmac[5],
                    e->vlan & 0xfff, vlan_prio(e), e->lport,
                    l2e_state(e), atomic_load_acq_int(&e->refcnt),
                    e->lle ? e->lle->lle_tbl->llt_ifp->if_xname : "");
skip:
                mtx_unlock(&e->lock);
        }

        rc = sbuf_finish(sb);
        sbuf_delete(sb);

        return (rc);
}

#ifndef TCP_OFFLOAD_DISABLE
static inline void
l2t_hold(struct l2t_data *d, struct l2t_entry *e)
{
        if (atomic_fetchadd_int(&e->refcnt, 1) == 0) /* 0 -> 1 transition */
                atomic_subtract_int(&d->nfree, 1);
}
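The point of l2t_hold() is that the table's free-entry count is charged only on an entry's 0 -> 1 refcount transition, so repeated holds of a busy entry do not drain nfree. A user-space sketch of that bookkeeping with C11 atomics (the struct and function names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

struct entry { atomic_int refcnt; };
struct table { atomic_int nfree; };  /* count of unreferenced entries */

static void
hold(struct table *t, struct entry *e)
{
        /* fetch-add returns the previous value, like atomic_fetchadd_int(9);
         * only the first hold of an idle entry consumes a free slot. */
        if (atomic_fetch_add(&e->refcnt, 1) == 0)
                atomic_fetch_sub(&t->nfree, 1);
}
```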

@@ -153,38 +389,6 @@ addreq(const struct l2t_entry *e, const uint32_t *addr)
        return e->addr[0] ^ addr[0];
}

/*
 * Write an L2T entry.  Must be called with the entry locked (XXX: really?).
 * The write may be synchronous or asynchronous.
 */
static int
write_l2e(struct adapter *sc, struct l2t_entry *e, int sync)
{
        struct mbuf *m;
        struct cpl_l2t_write_req *req;

        if ((m = m_gethdr(M_NOWAIT, MT_DATA)) == NULL)
                return (ENOMEM);

        req = mtod(m, struct cpl_l2t_write_req *);
        m->m_pkthdr.len = m->m_len = sizeof(*req);

        INIT_TP_WR(req, 0);
        OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_L2T_WRITE_REQ, e->idx |
            V_SYNC_WR(sync) | V_TID_QID(sc->sge.fwq.abs_id)));
        req->params = htons(V_L2T_W_PORT(e->lport) | V_L2T_W_NOREPLY(!sync));
        req->l2t_idx = htons(e->idx);
        req->vlan = htons(e->vlan);
        memcpy(req->dst_mac, e->dmac, sizeof(req->dst_mac));

        t4_mgmt_tx(sc, m);

        if (sync && e->state != L2T_STATE_SWITCHING)
                e->state = L2T_STATE_SYNC_WRITE;

        return (0);
}

/*
 * Add a packet to an L2T entry's queue of packets awaiting resolution.
 * Must be called with the entry's lock held.
 */
@@ -194,53 +398,133 @@ arpq_enqueue(struct l2t_entry *e, struct mbuf *m)
{
        mtx_assert(&e->lock, MA_OWNED);

        KASSERT(m->m_nextpkt == NULL, ("%s: m_nextpkt not NULL", __func__));
        if (e->arpq_head)
                e->arpq_tail->m_nextpkt = m;
        else
                e->arpq_head = m;
        e->arpq_tail = m;
}
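arpq_enqueue() gets O(1) appends by maintaining both a head and a tail pointer on the pending-packet chain. A self-contained sketch of the same discipline, with plain structs standing in for mbufs (names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct pkt { struct pkt *nextpkt; int id; };
struct arpq { struct pkt *head, *tail; };

static void
enqueue(struct arpq *q, struct pkt *p)
{
        p->nextpkt = NULL;
        if (q->head)
                q->tail->nextpkt = p;   /* append after the current tail */
        else
                q->head = p;            /* first packet on the queue */
        q->tail = p;
}
```

The matching drain (send_pending() in the driver) walks from head, detaching each packet's nextpkt link before handing it off.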

static inline void
send_pending(struct adapter *sc, struct l2t_entry *e)
{
        struct mbuf *m, *next;

        mtx_assert(&e->lock, MA_OWNED);

        for (m = e->arpq_head; m; m = next) {
                next = m->m_nextpkt;
                m->m_nextpkt = NULL;
                t4_wrq_tx(sc, MBUF_EQ(m), m);
        }
        e->arpq_head = e->arpq_tail = NULL;
}

#ifdef INET
/*
 * Looks up and fills up an l2t_entry's lle.  We grab all the locks that we need
 * ourself, and update e->state at the end if e->lle was successfully filled.
 *
 * The lle passed in comes from arpresolve and is ignored as it does not appear
 * to be of much use.
 */
static int
l2t_fill_lle(struct adapter *sc, struct l2t_entry *e, struct llentry *unused)
{
        int rc = 0;
        struct sockaddr_in sin;
        struct ifnet *ifp = e->ifp;
        struct llentry *lle;

        bzero(&sin, sizeof(struct sockaddr_in));
        if (e->v6)
                panic("%s: IPv6 L2 resolution not supported yet.", __func__);

        sin.sin_family = AF_INET;
        sin.sin_len = sizeof(struct sockaddr_in);
        memcpy(&sin.sin_addr, e->addr, sizeof(struct sockaddr_in));

        mtx_assert(&e->lock, MA_NOTOWNED);
        KASSERT(e->addr && ifp, ("%s: bad prep before call", __func__));

        IF_AFDATA_LOCK(ifp);
        lle = lla_lookup(LLTABLE(ifp), LLE_EXCLUSIVE, SA(&sin));
        IF_AFDATA_UNLOCK(ifp);
        if (!LLE_IS_VALID(lle))
                return (ENOMEM);
        if (!(lle->la_flags & LLE_VALID)) {
                rc = EINVAL;
                goto done;
        }

        LLE_ADDREF(lle);

        mtx_lock(&e->lock);
        if (e->state == L2T_STATE_RESOLVING) {
                KASSERT(e->lle == NULL, ("%s: lle already valid", __func__));
                e->lle = lle;
                memcpy(e->dmac, &lle->ll_addr, ETHER_ADDR_LEN);
                write_l2e(sc, e, 1);
        } else {
                KASSERT(e->lle == lle, ("%s: lle changed", __func__));
                LLE_REMREF(lle);
        }
        mtx_unlock(&e->lock);
done:
        LLE_WUNLOCK(lle);
        return (rc);
}
#endif

int
t4_l2t_send(struct adapter *sc, struct mbuf *m, struct l2t_entry *e)
{
#ifndef INET
        return (EINVAL);
#else
        struct llentry *lle = NULL;
        struct sockaddr_in sin;
        struct ifnet *ifp = e->ifp;

        if (e->v6)
                panic("%s: IPv6 L2 resolution not supported yet.", __func__);

        bzero(&sin, sizeof(struct sockaddr_in));
        sin.sin_family = AF_INET;
        sin.sin_len = sizeof(struct sockaddr_in);
        memcpy(&sin.sin_addr, e->addr, sizeof(struct sockaddr_in));

again:
        switch (e->state) {
        case L2T_STATE_STALE:   /* entry is stale, kick off revalidation */
                if (arpresolve(ifp, NULL, NULL, SA(&sin), e->dmac, &lle) == 0)
                        l2t_fill_lle(sc, e, lle);

                /* Fall through */

        case L2T_STATE_VALID:   /* fast-path, send the packet on */
                return t4_wrq_tx(sc, MBUF_EQ(m), m);

        case L2T_STATE_RESOLVING:
        case L2T_STATE_SYNC_WRITE:
                mtx_lock(&e->lock);
                if (e->state != L2T_STATE_SYNC_WRITE &&
                    e->state != L2T_STATE_RESOLVING) {
                        /* state changed by the time we got here */
                        mtx_unlock(&e->lock);
                        goto again;
                }
                arpq_enqueue(e, m);
                mtx_unlock(&e->lock);

                if (e->state == L2T_STATE_RESOLVING &&
                    arpresolve(ifp, NULL, NULL, SA(&sin), e->dmac, &lle) == 0)
                        l2t_fill_lle(sc, e, lle);
        }

        return (0);
#endif
}

@@ -287,75 +571,214 @@ t4_l2t_release(struct l2t_entry *e)
                t4_l2e_free(e);
}

static int
do_l2t_write_rpl(struct sge_iq *iq, const struct rss_header *rss,
    struct mbuf *m)
{
        struct adapter *sc = iq->adapter;
        const struct cpl_l2t_write_rpl *rpl = (const void *)(rss + 1);
        unsigned int tid = GET_TID(rpl);
        unsigned int idx = tid & (L2T_SIZE - 1);

        if (__predict_false(rpl->status != CPL_ERR_NONE)) {
                log(LOG_ERR,
                    "Unexpected L2T_WRITE_RPL status %u for entry %u\n",
                    rpl->status, idx);
                return (EINVAL);
        }

        if (tid & F_SYNC_WR) {
                struct l2t_entry *e = &sc->l2t->l2tab[idx];

                mtx_lock(&e->lock);
                if (e->state != L2T_STATE_SWITCHING) {
                        send_pending(sc, e);
                        e->state = L2T_STATE_VALID;
                }
                mtx_unlock(&e->lock);
        }

        return (0);
}

/*
 * Reuse an L2T entry that was previously used for the same next hop.
 */
static void
reuse_entry(struct l2t_entry *e)
{
        struct llentry *lle;

        mtx_lock(&e->lock); /* avoid race with t4_l2t_free */
        lle = e->lle;
        if (lle) {
                KASSERT(lle->la_flags & LLE_VALID,
                    ("%s: invalid lle stored in l2t_entry", __func__));

                if (lle->la_expire >= time_uptime)
                        e->state = L2T_STATE_STALE;
                else
                        e->state = L2T_STATE_VALID;
        } else
                e->state = L2T_STATE_RESOLVING;
        mtx_unlock(&e->lock);
}

/*
 * The TOE wants an L2 table entry that it can use to reach the next hop over
 * the specified port.  Produce such an entry - create one if needed.
 *
 * Note that the ifnet could be a pseudo-device like if_vlan, if_lagg, etc. on
 * top of the real cxgbe interface.
 */
struct l2t_entry *
t4_l2t_get(struct port_info *pi, struct ifnet *ifp, struct sockaddr *sa)
{
        struct l2t_entry *e;
        struct l2t_data *d = pi->adapter->l2t;
        int addr_len;
        uint32_t *addr;
        int hash;
        struct sockaddr_in6 *sin6;
        unsigned int smt_idx = pi->port_id;

        if (sa->sa_family == AF_INET) {
                addr = (uint32_t *)&SINADDR(sa);
                addr_len = sizeof(SINADDR(sa));
        } else if (sa->sa_family == AF_INET6) {
                sin6 = (struct sockaddr_in6 *)sa;
                addr = (uint32_t *)&sin6->sin6_addr.s6_addr;
                addr_len = sizeof(sin6->sin6_addr.s6_addr);
        } else
                return (NULL);

        hash = addr_hash(addr, addr_len, ifp->if_index);

        rw_wlock(&d->lock);
        for (e = d->l2tab[hash].first; e; e = e->next) {
                if (!addreq(e, addr) && e->ifp == ifp && e->smt_idx == smt_idx){
                        l2t_hold(d, e);
                        if (atomic_load_acq_int(&e->refcnt) == 1)
                                reuse_entry(e);
                        goto done;
                }
        }

        /* Need to allocate a new entry */
        e = alloc_l2e(d);
        if (e) {
                mtx_lock(&e->lock); /* avoid race with t4_l2t_free */
                e->state = L2T_STATE_RESOLVING;
                memcpy(e->addr, addr, addr_len);
                e->ifindex = ifp->if_index;
                e->smt_idx = smt_idx;
                e->ifp = ifp;
                e->hash = hash;
                e->lport = pi->lport;
                e->v6 = (addr_len == 16);
                e->lle = NULL;
                atomic_store_rel_int(&e->refcnt, 1);
                if (ifp->if_type == IFT_L2VLAN)
                        VLAN_TAG(ifp, &e->vlan);
                else
                        e->vlan = VLAN_NONE;
                e->next = d->l2tab[hash].first;
                d->l2tab[hash].first = e;
                mtx_unlock(&e->lock);
        }
done:
        rw_wunlock(&d->lock);
        return e;
}
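t4_l2t_get() hashes the next-hop address together with the interface index into a bucket, walks that bucket's chain for an exact match, and pushes new entries onto the front of the chain. A self-contained sketch of the same lookup structure (the hash function and types here are stand-ins, not the driver's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NBUCKETS 8

struct ent { uint32_t addr; int ifindex; struct ent *next; };
static struct ent *bucket[NBUCKETS];

/* Toy hash mixing the address with the interface index. */
static unsigned int
addr_hash(uint32_t addr, int ifindex)
{
        return (addr ^ (uint32_t)ifindex) % NBUCKETS;
}

static struct ent *
lookup(uint32_t addr, int ifindex)
{
        struct ent *e;

        for (e = bucket[addr_hash(addr, ifindex)]; e; e = e->next)
                if (e->addr == addr && e->ifindex == ifindex)
                        return e;
        return NULL;
}

static void
insert(struct ent *e)
{
        unsigned int h = addr_hash(e->addr, e->ifindex);

        e->next = bucket[h];    /* push onto the front of the chain */
        bucket[h] = e;
}
```

The same (address, ifindex) pair on two different interfaces hashes and matches independently, which is why the driver keys its match on both.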

/*
 * Called when the host's neighbor layer makes a change to some entry that is
 * loaded into the HW L2 table.
 */
void
t4_l2t_update(struct adapter *sc, struct llentry *lle)
{
        struct l2t_entry *e;
        struct l2t_data *d = sc->l2t;
        struct sockaddr *sa = L3_ADDR(lle);
        struct llentry *old_lle = NULL;
        uint32_t *addr = (uint32_t *)&SINADDR(sa);
        struct ifnet *ifp = lle->lle_tbl->llt_ifp;
        int hash = addr_hash(addr, sizeof(*addr), ifp->if_index);

        KASSERT(d != NULL, ("%s: no L2 table", __func__));
        LLE_WLOCK_ASSERT(lle);
        KASSERT(lle->la_flags & LLE_VALID || lle->la_flags & LLE_DELETED,
            ("%s: entry neither valid nor deleted.", __func__));

        rw_rlock(&d->lock);
        for (e = d->l2tab[hash].first; e; e = e->next) {
                if (!addreq(e, addr) && e->ifp == ifp) {
                        mtx_lock(&e->lock);
                        if (atomic_load_acq_int(&e->refcnt))
                                goto found;
                        e->state = L2T_STATE_STALE;
                        mtx_unlock(&e->lock);
                        break;
                }
        }
        rw_runlock(&d->lock);

        /* The TOE has no interest in this LLE */
        return;

found:
        rw_runlock(&d->lock);

        if (atomic_load_acq_int(&e->refcnt)) {

                /* Entry is referenced by at least 1 offloaded connection. */

                /* Handle deletes first */
                if (lle->la_flags & LLE_DELETED) {
                        if (lle == e->lle) {
                                e->lle = NULL;
                                e->state = L2T_STATE_RESOLVING;
                                LLE_REMREF(lle);
                        }
                        goto done;
                }

                if (lle != e->lle) {
                        old_lle = e->lle;
                        LLE_ADDREF(lle);
                        e->lle = lle;
                }

                if (e->state == L2T_STATE_RESOLVING ||
                    memcmp(e->dmac, &lle->ll_addr, ETHER_ADDR_LEN)) {

                        /* unresolved -> resolved; or dmac changed */

                        memcpy(e->dmac, &lle->ll_addr, ETHER_ADDR_LEN);
                        write_l2e(sc, e, 1);
                } else {

                        /* +ve reinforcement of a valid or stale entry */

                }

                e->state = L2T_STATE_VALID;

        } else {
                /*
                 * Entry was used previously but is unreferenced right now.
                 * e->lle has been released and NULL'd out by t4_l2t_free, or
                 * l2t_release is about to call t4_l2t_free and do that.
                 *
                 * Either way this is of no interest to us.
                 */
        }

done:
        mtx_unlock(&e->lock);
        if (old_lle)
                LLE_FREE(old_lle);
}

int
t4_free_l2t(struct l2t_data *d)
{
        int i;

        for (i = 0; i < L2T_SIZE; i++)
                mtx_destroy(&d->l2tab[i].lock);
        rw_destroy(&d->lock);
        free(d, M_CXGBE);

        return (0);
}
#endif

@@ -54,18 +54,26 @@ struct l2t_entry {
        struct mbuf *arpq_head;         /* list of mbufs awaiting resolution */
        struct mbuf *arpq_tail;
        struct mtx lock;
        volatile int refcnt;            /* entry reference count */
        uint16_t hash;                  /* hash bucket the entry is on */
        uint8_t v6;                     /* whether entry is for IPv6 */
        uint8_t lport;                  /* associated offload logical port */
        uint8_t dmac[ETHER_ADDR_LEN];   /* next hop's MAC address */
};

int t4_init_l2t(struct adapter *, int);
int t4_free_l2t(struct l2t_data *);
struct l2t_entry *t4_l2t_alloc_switching(struct l2t_data *);
int t4_l2t_set_switching(struct adapter *, struct l2t_entry *, uint16_t,
    uint8_t, uint8_t *);
void t4_l2t_release(struct l2t_entry *);
int sysctl_l2t(SYSCTL_HANDLER_ARGS);

#ifndef TCP_OFFLOAD_DISABLE
struct l2t_entry *t4_l2t_get(struct port_info *, struct ifnet *,
    struct sockaddr *);
int t4_l2t_send(struct adapter *, struct mbuf *, struct l2t_entry *);
void t4_l2t_update(struct adapter *, struct llentry *);
#endif

#endif /* __T4_L2T_H */
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -3,5 +3,6 @@
#

SUBDIR = if_cxgbe
SUBDIR+= firmware

.include <bsd.subdir.mk>
sys/modules/cxgbe/firmware/Makefile (new file, 27 lines)
@@ -0,0 +1,27 @@
#
# $FreeBSD$
#

T4FW = ${.CURDIR}/../../../dev/cxgbe/firmware
.PATH: ${T4FW}

KMOD = t4fw_cfg
FIRMWS = ${KMOD}.txt:${KMOD}:1.0.0.0

# You can have additional configuration files in the ${T4FW} directory.
# t4fw_cfg_<name>.txt
CFG_FILES != cd ${T4FW} && echo ${KMOD}_*.txt
.for F in ${CFG_FILES}
.if exists(${F})
FIRMWS += ${F}:${F:C/.txt//}:1.0.0.0
.endif
.endfor

# The firmware binary is optional.
# t4fw-<a>.<b>.<c>.<d>.bin
FW_BIN != cd ${T4FW} && echo t4fw-*.bin
.if exists(${FW_BIN})
FIRMWS += ${FW_BIN}:t4fw:${FW_BIN:C/t4fw-//:C/.bin//}
.endif

.include <bsd.kmod.mk>
@@ -396,12 +396,12 @@ do_show_info_header(uint32_t mode)
                printf (" Port");
                break;

        case T4_FILTER_VNIC:
                printf (" vld:VNIC");
                break;

        case T4_FILTER_VLAN:
                printf (" vld:VLAN");
                break;

        case T4_FILTER_IP_TOS:
@@ -653,18 +653,18 @@ do_show_one_filter_info(struct t4_filter *t, uint32_t mode)
                printf(" %1d/%1d", t->fs.val.iport, t->fs.mask.iport);
                break;

        case T4_FILTER_VNIC:
                printf(" %1d:%1x:%02x/%1d:%1x:%02x",
                    t->fs.val.vnic_vld, (t->fs.val.vnic >> 7) & 0x7,
                    t->fs.val.vnic & 0x7f, t->fs.mask.vnic_vld,
                    (t->fs.mask.vnic >> 7) & 0x7,
                    t->fs.mask.vnic & 0x7f);
                break;

        case T4_FILTER_VLAN:
                printf(" %1d:%04x/%1d:%04x",
                    t->fs.val.vlan_vld, t->fs.val.vlan,
                    t->fs.mask.vlan_vld, t->fs.mask.vlan);
                break;

        case T4_FILTER_IP_TOS:
@@ -830,11 +830,11 @@ get_filter_mode(void)
        if (mode & T4_FILTER_IP_TOS)
                printf("tos ");

        if (mode & T4_FILTER_VLAN)
                printf("vlan ");

        if (mode & T4_FILTER_VNIC)
                printf("vnic ");

        if (mode & T4_FILTER_PORT)
                printf("iport ");
@@ -868,11 +868,12 @@ set_filter_mode(int argc, const char *argv[])
                if (!strcmp(argv[0], "tos"))
                        mode |= T4_FILTER_IP_TOS;

                if (!strcmp(argv[0], "vlan"))
                        mode |= T4_FILTER_VLAN;

                if (!strcmp(argv[0], "ovlan") ||
                    !strcmp(argv[0], "vnic"))
                        mode |= T4_FILTER_VNIC;

                if (!strcmp(argv[0], "iport"))
                        mode |= T4_FILTER_PORT;
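After the rename, set_filter_mode() keeps "ovlan" as an alias for the new "vnic" keyword while mapping each token onto a mode bit. A user-space sketch of that keyword-to-bitmask loop (the bit values and names here are illustrative, not cxgbetool's real constants):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define F_VLAN 0x1
#define F_VNIC 0x2
#define F_PORT 0x4

static unsigned int
parse_mode(int argc, const char *argv[])
{
        unsigned int mode = 0;
        int i;

        for (i = 0; i < argc; i++) {
                if (!strcmp(argv[i], "vlan"))
                        mode |= F_VLAN;
                /* "ovlan" is kept as a backward-compatible alias for "vnic" */
                if (!strcmp(argv[i], "ovlan") || !strcmp(argv[i], "vnic"))
                        mode |= F_VNIC;
                if (!strcmp(argv[i], "iport"))
                        mode |= F_PORT;
        }
        return mode;
}
```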
@@ -936,15 +937,20 @@ set_filter(uint32_t idx, int argc, const char *argv[])
                        t.fs.val.iport = val;
                        t.fs.mask.iport = mask;
                } else if (!parse_val_mask("ovlan", args, &val, &mask)) {
                        t.fs.val.vnic = val;
                        t.fs.mask.vnic = mask;
                        t.fs.val.vnic_vld = 1;
                        t.fs.mask.vnic_vld = 1;
                } else if (!parse_val_mask("vnic", args, &val, &mask)) {
                        t.fs.val.vnic = val;
                        t.fs.mask.vnic = mask;
                        t.fs.val.vnic_vld = 1;
                        t.fs.mask.vnic_vld = 1;
                } else if (!parse_val_mask("vlan", args, &val, &mask)) {
                        t.fs.val.vlan = val;
                        t.fs.mask.vlan = mask;
                        t.fs.val.vlan_vld = 1;
                        t.fs.mask.vlan_vld = 1;
                } else if (!parse_val_mask("tos", args, &val, &mask)) {
                        t.fs.val.tos = val;
                        t.fs.mask.tos = mask;