ixgbe(4): Update to version 3.1.13-k

Add support for two new devices: X552 SFP+ 10 GbE, and the single-port
version of the X550T.

Submitted by:	erj
Reviewed by:	gnn
Sponsored by:	Intel Corporation
Differential Revision:	https://reviews.freebsd.org/D4186
Sean Bruno 2015-12-23 22:45:17 +00:00
parent 7144d5cbc5
commit a9ca1c79c6
Notes: svn2git 2020-12-20 02:59:44 +00:00
svn path=/head/; revision=292674
23 changed files with 962 additions and 977 deletions


@ -1861,6 +1861,8 @@ dev/ixgbe/if_ixv.c optional ixv inet \
compile-with "${NORMAL_C} -I$S/dev/ixgbe -DSMP"
dev/ixgbe/ix_txrx.c optional ix inet | ixv inet \
compile-with "${NORMAL_C} -I$S/dev/ixgbe"
dev/ixgbe/ixgbe_osdep.c optional ix inet | ixv inet \
compile-with "${NORMAL_C} -I$S/dev/ixgbe"
dev/ixgbe/ixgbe_phy.c optional ix inet | ixv inet \
compile-with "${NORMAL_C} -I$S/dev/ixgbe"
dev/ixgbe/ixgbe_api.c optional ix inet | ixv inet \


@ -1,319 +0,0 @@
FreeBSD Driver for Intel(R) Ethernet 10 Gigabit PCI Express Server Adapters
============================================================================
/*$FreeBSD$*/
Jun 18, 2013
Contents
========
- Overview
- Supported Adapters
- Building and Installation
- Additional Configurations and Tuning
- Known Limitations
Overview
========
This file describes the FreeBSD* driver for the
Intel(R) Ethernet 10 Gigabit Family of Adapters.
For questions related to hardware requirements, refer to the documentation
supplied with your Intel 10GbE adapter. All hardware requirements listed
apply to use with FreeBSD.
Supported Adapters
==================
The driver in this release is compatible with 82598 and 82599-based Intel
Network Connections.
SFP+ Devices with Pluggable Optics
----------------------------------
82599-BASED ADAPTERS
NOTE: If your 82599-based Intel(R) Ethernet Network Adapter came with Intel
optics, or is an Intel(R) Ethernet Server Adapter X520-2, then it only supports
Intel optics and/or the direct attach cables listed below.
When 82599-based SFP+ devices are connected back to back, they should be set to
the same Speed setting. Results may vary if you mix speed settings.
Supplier Type Part Numbers
SR Modules
Intel DUAL RATE 1G/10G SFP+ SR (bailed) FTLX8571D3BCV-IT
Intel DUAL RATE 1G/10G SFP+ SR (bailed) AFBR-703SDZ-IN2
Intel DUAL RATE 1G/10G SFP+ SR (bailed) AFBR-703SDDZ-IN1
LR Modules
Intel DUAL RATE 1G/10G SFP+ LR (bailed) FTLX1471D3BCV-IT
Intel DUAL RATE 1G/10G SFP+ LR (bailed) AFCT-701SDZ-IN2
Intel DUAL RATE 1G/10G SFP+ LR (bailed) AFCT-701SDDZ-IN1
The following is a list of 3rd party SFP+ modules and direct attach cables that
have received some testing. Not all modules are applicable to all devices.
Supplier Type Part Numbers
Finisar SFP+ SR bailed, 10g single rate FTLX8571D3BCL
Avago SFP+ SR bailed, 10g single rate AFBR-700SDZ
Finisar SFP+ LR bailed, 10g single rate FTLX8571D3BCV-IT
Finisar DUAL RATE 1G/10G SFP+ SR (No Bail) FTLX8571D3QCV-IT
Avago DUAL RATE 1G/10G SFP+ SR (No Bail) AFBR-703SDZ-IN1
Finisar DUAL RATE 1G/10G SFP+ LR (No Bail) FTLX1471D3QCV-IT
Avago DUAL RATE 1G/10G SFP+ LR (No Bail) AFCT-701SDZ-IN1
Finisar 1000BASE-T SFP FCLF8522P2BTL
Avago 1000BASE-T SFP ABCU-5710RZ
NOTE: As of driver version 2.5.13 it is possible to allow the operation
of unsupported modules by setting the static variable 'allow_unsupported_sfp'
to TRUE and rebuilding the driver. If problems occur, please ensure that they
can be reproduced with fully supported optics first.
82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
Laser turns off for SFP+ when ifconfig down
--------------------------------------------------------
"ifconfig down" turns off the laser for 82599-based SFP+ fiber adapters.
"ifconfig up" turns on the later.
82598-BASED ADAPTERS
NOTES for 82598-Based Adapters:
- Intel(R) Ethernet Network Adapters that support removable optical modules
only support their original module type (i.e., the Intel(R) 10 Gigabit SR
Dual Port Express Module only supports SR optical modules). If you plug
in a different type of module, the driver will not load.
- Hot Swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module
types are not supported. Please see your system documentation for details.
The following is a list of 3rd party SFP+ modules and direct attach cables that have
received some testing. Not all modules are applicable to all devices.
Supplier Type Part Numbers
Finisar SFP+ SR bailed, 10g single rate FTLX8571D3BCL
Avago SFP+ SR bailed, 10g single rate AFBR-700SDZ
Finisar SFP+ LR bailed, 10g single rate FTLX1471D3BCL
82598-based adapters support all passive direct attach cables that comply
with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
cables are not supported.
Third party optic modules and cables referred to above are listed only for the
purpose of highlighting third party specifications and potential compatibility,
and are not recommendations or endorsements or sponsorship of any third party's
product by Intel. Intel is not endorsing or promoting products made by any
third party and the third party reference is provided only to share information
regarding certain optic modules and cables with the above specifications. There
may be other manufacturers or suppliers, producing or supplying optic modules
and cables with similar or matching descriptions. Customers must use their own
discretion and diligence to purchase optic modules and cables from any third
party of their choice. Customers are solely responsible for assessing the
suitability of the product and/or devices and for the selection of the vendor
for purchasing any product. INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL
DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF
SUCH THIRD PARTY PRODUCTS OR SELECTION OF VENDOR BY CUSTOMERS.
Configuration and Tuning
========================
The driver supports Transmit/Receive Checksum Offload and Jumbo Frames on
all 10 Gigabit adapters.
Jumbo Frames
------------
To enable Jumbo Frames, use the ifconfig utility to increase the MTU
beyond 1500 bytes.
NOTES:
- The Jumbo Frames setting on the switch must be set to at least
22 bytes larger than that of the adapter.
- There are known performance issues with this driver when running
UDP traffic with Jumbo Frames.
The Jumbo Frames MTU range for Intel Adapters is 1500 to 16114. The default
MTU is 1500. To modify the setting, enter the following:
ifconfig ix<interface_num> <hostname or IP address> mtu 9000
To confirm an interface's MTU value, use the ifconfig command. To confirm
the MTU used between two specific devices, use:
route get <destination_IP_address>
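To make an MTU setting persist across reboots, it can be added to the
interface's line in /etc/rc.conf. A minimal sketch (the interface name and
address here are illustrative, not taken from this commit):
ifconfig_ix0="inet 10.0.0.1 netmask 255.255.255.0 mtu 9000"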
VLANs
-----
To create a new VLAN pseudo-interface:
ifconfig <vlan_name> create
To associate the VLAN pseudo-interface with a physical interface and
assign a VLAN ID, IP address, and netmask:
ifconfig <vlan_name> <ip_address> netmask <subnet_mask> vlan
<vlan_id> vlandev <physical_interface>
Example:
ifconfig vlan10 10.0.0.1 netmask 255.255.255.0 vlan 10 vlandev ixgbe0
In this example, all packets will be marked on egress with 802.1Q VLAN
tags, specifying a VLAN ID of 10.
To remove a VLAN pseudo-interface:
ifconfig <vlan_name> destroy
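To have the VLAN pseudo-interface created and configured at boot, FreeBSD's
rc system can perform the same steps from /etc/rc.conf. A sketch, reusing
the illustrative names and addresses from the example above:
vlans_ixgbe0="10"
ifconfig_ixgbe0_10="inet 10.0.0.1 netmask 255.255.255.0"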
Checksum Offload
----------------
Checksum offloading supports both TCP and UDP packets and is
supported for both transmit and receive.
Checksum offloading can be enabled or disabled using ifconfig.
Both transmit and receive offloading will be either enabled or
disabled together. You cannot enable/disable one without the other.
To enable checksum offloading:
ifconfig <interface_num> rxcsum
To disable checksum offloading:
ifconfig <interface_num> -rxcsum
To confirm the current setting:
ifconfig <interface_num>
TSO
---
TSO is enabled by default.
To disable:
ifconfig <interface_num> -tso
To re-enable:
ifconfig <interface_num> tso
LRO
---
Large Receive Offload is available in the driver; it is on by default.
It can be disabled by using:
ifconfig <interface_num> -lro
To enable:
ifconfig <interface_num> lro
Important system configuration changes:
---------------------------------------
When there is a choice, run on a 64-bit OS rather than a 32-bit one; it
makes a significant difference in performance.
The interface can generate a high number of interrupts. To avoid running
into the limit set by the kernel, adjust hw.intr_storm_threshold
setting using sysctl:
sysctl hw.intr_storm_threshold=9000 (the default is 1000)
For this change to take effect on boot, edit /etc/sysctl.conf and add the
line:
hw.intr_storm_threshold=9000
If you still see "Interrupt Storm detected" messages, increase the limit to a
higher number, or disable the detection entirely by setting it to 0.
The default number of descriptors per ring is 2048; increasing or decreasing
it may improve performance in some workloads, but change it carefully.
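For example, if your driver build exposes the hw.ix.txd and hw.ix.rxd loader
tunables (an assumption; verify with "sysctl hw.ix"), the ring sizes can be
set from /boot/loader.conf:
hw.ix.txd="4096"
hw.ix.rxd="4096"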
Known Limitations
=================
For known hardware and troubleshooting issues, refer to the following website.
http://support.intel.com/support/go/network/adapter/home.htm
Either select the link for your adapter or perform a search for the adapter
number. The adapter's page lists many issues. For a complete list of hardware
issues download your adapter's user guide and read the Release Notes.
UDP stress test with 10GbE driver
---------------------------------
Under a small-packet UDP stress test with the 10GbE driver, the FreeBSD
system will drop UDP packets when the socket buffers fill. You may want to
change the driver's Flow Control variables to the minimum value for
controlling packet reception.
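For example, flow control can usually be adjusted per interface through the
dev.ix.<N>.fc sysctl (an assumption; the exact OID name may vary by driver
version), where 0 disables flow control entirely:
sysctl dev.ix.0.fc=0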
Attempting to configure larger MTUs with a large number of processors may
generate the error message "ix0:could not setup receive structures"
--------------------------------------------------------------------------
When using the ixgbe driver with RSS autoconfigured based on the number of
cores (the default setting) and that number is larger than 4, increase the
memory resources allocated for the mbuf pool as follows:
Add to the sysctl.conf file for the system:
kern.ipc.nmbclusters=262144
kern.ipc.nmbjumbop=262144
Lower than expected performance on dual port 10GbE devices
----------------------------------------------------------
Some PCI-E x8 slots are actually configured as x4 slots. These slots have
insufficient bandwidth for full 10GbE line rate with dual port 10GbE devices.
The driver will detect this situation and will write the following message in
the system log: "PCI-Express bandwidth available for this card is not
sufficient for optimal performance. For optimal performance a x8 PCI-Express
slot is required."
If this message appears, moving your adapter to a true x8 slot will resolve
the issue.
Support
=======
For general information and support, go to the Intel support website at:
www.intel.com/support/
If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related to
the issue to freebsd@intel.com
License
=======
This software program is released under the terms of a license agreement
between you ('Licensee') and Intel. Do not use or load this software or any
associated materials (collectively, the 'Software') until you have carefully
read the full terms and conditions of the LICENSE located in this software
package. By loading or using the Software, you agree to the terms of this
Agreement. If you do not agree with the terms of this Agreement, do not
install or use the Software.
* Other names and brands may be claimed as the property of others.

File diff suppressed because it is too large.


@ -43,7 +43,7 @@
/*********************************************************************
* Driver version
*********************************************************************/
char ixv_driver_version[] = "1.4.0";
char ixv_driver_version[] = "1.4.6-k";
/*********************************************************************
* PCI Device ID Table
@ -292,7 +292,7 @@ ixv_attach(device_t dev)
/* Allocate, clear, and link in our adapter structure */
adapter = device_get_softc(dev);
adapter->dev = adapter->osdep.dev = dev;
adapter->dev = dev;
hw = &adapter->hw;
#ifdef DEV_NETMAP
@ -322,7 +322,7 @@ ixv_attach(device_t dev)
/* Do base PCI setup - map BAR0 */
if (ixv_allocate_pci_resources(adapter)) {
device_printf(dev, "Allocation of PCI resources failed\n");
device_printf(dev, "ixv_allocate_pci_resources() failed!\n");
error = ENXIO;
goto err_out;
}
@ -353,6 +353,7 @@ ixv_attach(device_t dev)
/* Allocate our TX/RX Queues */
if (ixgbe_allocate_queues(adapter)) {
device_printf(dev, "ixgbe_allocate_queues() failed!\n");
error = ENOMEM;
goto err_out;
}
@ -363,7 +364,7 @@ ixv_attach(device_t dev)
*/
error = ixgbe_init_shared_code(hw);
if (error) {
device_printf(dev,"Shared Code Initialization Failure\n");
device_printf(dev, "ixgbe_init_shared_code() failed!\n");
error = EIO;
goto err_late;
}
@ -371,23 +372,37 @@ ixv_attach(device_t dev)
/* Setup the mailbox */
ixgbe_init_mbx_params_vf(hw);
ixgbe_reset_hw(hw);
/* Reset mbox api to 1.0 */
error = ixgbe_reset_hw(hw);
if (error == IXGBE_ERR_RESET_FAILED)
device_printf(dev, "ixgbe_reset_hw() failure: Reset Failed!\n");
else if (error)
device_printf(dev, "ixgbe_reset_hw() failed with error %d\n", error);
if (error) {
error = EIO;
goto err_late;
}
/* Get the Mailbox API version */
device_printf(dev,"MBX API %d negotiation: %d\n",
ixgbe_mbox_api_11,
ixgbevf_negotiate_api_version(hw, ixgbe_mbox_api_11));
/* Negotiate mailbox API version */
error = ixgbevf_negotiate_api_version(hw, ixgbe_mbox_api_11);
if (error) {
device_printf(dev, "MBX API 1.1 negotiation failed! Error %d\n", error);
error = EIO;
goto err_late;
}
error = ixgbe_init_hw(hw);
if (error) {
device_printf(dev,"Hardware Initialization Failure\n");
device_printf(dev, "ixgbe_init_hw() failed!\n");
error = EIO;
goto err_late;
}
error = ixv_allocate_msix(adapter);
if (error)
if (error) {
device_printf(dev, "ixv_allocate_msix() failed!\n");
goto err_late;
}
/* If no mac address was assigned, make a random one */
if (!ixv_check_ether_addr(hw->mac.addr)) {
@ -447,7 +462,7 @@ ixv_detach(device_t dev)
/* Make sure VLANS are not using driver */
if (adapter->ifp->if_vlantrunk != NULL) {
device_printf(dev,"Vlan in use, detach first\n");
device_printf(dev, "Vlan in use, detach first\n");
return (EBUSY);
}
@ -556,13 +571,13 @@ ixv_ioctl(struct ifnet * ifp, u_long command, caddr_t data)
#endif
case SIOCSIFMTU:
IOCTL_DEBUGOUT("ioctl: SIOCSIFMTU (Set Interface MTU)");
if (ifr->ifr_mtu > IXGBE_MAX_FRAME_SIZE - ETHER_HDR_LEN) {
if (ifr->ifr_mtu > IXGBE_MAX_FRAME_SIZE - IXGBE_MTU_HDR) {
error = EINVAL;
} else {
IXGBE_CORE_LOCK(adapter);
ifp->if_mtu = ifr->ifr_mtu;
adapter->max_frame_size =
ifp->if_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
ifp->if_mtu + IXGBE_MTU_HDR;
ixv_init_locked(adapter);
IXGBE_CORE_UNLOCK(adapter);
}
@ -643,9 +658,9 @@ ixv_init_locked(struct adapter *adapter)
struct ifnet *ifp = adapter->ifp;
device_t dev = adapter->dev;
struct ixgbe_hw *hw = &adapter->hw;
u32 mhadd, gpie;
int error = 0;
INIT_DEBUGOUT("ixv_init: begin");
INIT_DEBUGOUT("ixv_init_locked: begin");
mtx_assert(&adapter->core_mtx, MA_OWNED);
hw->adapter_stopped = FALSE;
ixgbe_stop_adapter(hw);
@ -662,12 +677,17 @@ ixv_init_locked(struct adapter *adapter)
/* Prepare transmit descriptors and buffers */
if (ixgbe_setup_transmit_structures(adapter)) {
device_printf(dev,"Could not setup transmit structures\n");
device_printf(dev, "Could not setup transmit structures\n");
ixv_stop(adapter);
return;
}
/* Reset VF and renegotiate mailbox API version */
ixgbe_reset_hw(hw);
error = ixgbevf_negotiate_api_version(hw, ixgbe_mbox_api_11);
if (error)
device_printf(dev, "MBX API 1.1 negotiation failed! Error %d\n", error);
ixv_initialize_transmit_units(adapter);
/* Setup Multicast table */
@ -684,7 +704,7 @@ ixv_init_locked(struct adapter *adapter)
/* Prepare receive descriptors and buffers */
if (ixgbe_setup_receive_structures(adapter)) {
device_printf(dev,"Could not setup receive structures\n");
device_printf(dev, "Could not setup receive structures\n");
ixv_stop(adapter);
return;
}
@ -692,12 +712,6 @@ ixv_init_locked(struct adapter *adapter)
/* Configure RX settings */
ixv_initialize_receive_units(adapter);
/* Enable Enhanced MSIX mode */
gpie = IXGBE_READ_REG(&adapter->hw, IXGBE_GPIE);
gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_EIAME;
gpie |= IXGBE_GPIE_PBA_SUPPORT | IXGBE_GPIE_OCD;
IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
/* Set the various hardware offload abilities */
ifp->if_hwassist = 0;
if (ifp->if_capenable & IFCAP_TSO4)
@ -709,19 +723,9 @@ ixv_init_locked(struct adapter *adapter)
#endif
}
/* Set MTU size */
if (ifp->if_mtu > ETHERMTU) {
mhadd = IXGBE_READ_REG(hw, IXGBE_MHADD);
mhadd &= ~IXGBE_MHADD_MFS_MASK;
mhadd |= adapter->max_frame_size << IXGBE_MHADD_MFS_SHIFT;
IXGBE_WRITE_REG(hw, IXGBE_MHADD, mhadd);
}
/* Set up VLAN offload and filter */
ixv_setup_vlan_support(adapter);
callout_reset(&adapter->timer, hz, ixv_local_timer, adapter);
/* Set up MSI/X routing */
ixv_configure_ivars(adapter);
@ -737,6 +741,9 @@ ixv_init_locked(struct adapter *adapter)
/* Config/Enable Link */
ixv_config_link(adapter);
/* Start watchdog */
callout_reset(&adapter->timer, hz, ixv_local_timer, adapter);
/* And now turn on interrupts */
ixv_enable_intr(adapter);
@ -1414,7 +1421,7 @@ ixv_allocate_pci_resources(struct adapter *adapter)
&rid, RF_ACTIVE);
if (!(adapter->pci_mem)) {
device_printf(dev,"Unable to allocate bus resource: memory\n");
device_printf(dev, "Unable to allocate bus resource: memory\n");
return (ENXIO);
}
@ -1422,12 +1429,11 @@ ixv_allocate_pci_resources(struct adapter *adapter)
rman_get_bustag(adapter->pci_mem);
adapter->osdep.mem_bus_space_handle =
rman_get_bushandle(adapter->pci_mem);
adapter->hw.hw_addr = (u8 *) &adapter->osdep.mem_bus_space_handle;
adapter->hw.hw_addr = (u8 *)&adapter->osdep.mem_bus_space_handle;
/* Pick up the tuneable queues */
adapter->num_queues = ixv_num_queues;
adapter->hw.back = &adapter->osdep;
adapter->hw.back = adapter;
/*
** Now setup MSI/X, should
@ -1535,7 +1541,7 @@ ixv_setup_interface(device_t dev, struct adapter *adapter)
ether_ifattach(ifp, adapter->hw.mac.addr);
adapter->max_frame_size =
ifp->if_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
ifp->if_mtu + IXGBE_MTU_HDR_VLAN;
/*
* Tell the upper layer(s) we support long frames.
@ -1556,7 +1562,6 @@ ixv_setup_interface(device_t dev, struct adapter *adapter)
*/
ifmedia_init(&adapter->media, IFM_IMASK, ixv_media_change,
ixv_media_status);
ifmedia_add(&adapter->media, IFM_ETHER | IFM_FDX, 0, NULL);
ifmedia_add(&adapter->media, IFM_ETHER | IFM_AUTO, 0, NULL);
ifmedia_set(&adapter->media, IFM_ETHER | IFM_AUTO);
@ -1567,19 +1572,11 @@ static void
ixv_config_link(struct adapter *adapter)
{
struct ixgbe_hw *hw = &adapter->hw;
u32 autoneg, err = 0;
u32 autoneg;
if (hw->mac.ops.check_link)
err = hw->mac.ops.check_link(hw, &autoneg,
hw->mac.ops.check_link(hw, &autoneg,
&adapter->link_up, FALSE);
if (err)
goto out;
if (hw->mac.ops.setup_link)
err = hw->mac.ops.setup_link(hw,
autoneg, adapter->link_up);
out:
return;
}
@ -1646,7 +1643,6 @@ ixv_initialize_receive_units(struct adapter *adapter)
struct ixgbe_hw *hw = &adapter->hw;
struct ifnet *ifp = adapter->ifp;
u32 bufsz, rxcsum, psrtype;
int max_frame;
if (ifp->if_mtu > ETHERMTU)
bufsz = 4096 >> IXGBE_SRRCTL_BSIZEPKT_SHIFT;
@ -1659,9 +1655,8 @@ ixv_initialize_receive_units(struct adapter *adapter)
IXGBE_WRITE_REG(hw, IXGBE_VFPSRTYPE, psrtype);
/* Tell PF our expected packet-size */
max_frame = ifp->if_mtu + IXGBE_MTU_HDR;
ixgbevf_rlpml_set_vf(hw, max_frame);
/* Tell PF our max_frame size */
ixgbevf_rlpml_set_vf(hw, adapter->max_frame_size);
for (int i = 0; i < adapter->num_queues; i++, rxr++) {
u64 rdba = rxr->rxdma.dma_paddr;
@ -1763,7 +1758,7 @@ ixv_setup_vlan_support(struct adapter *adapter)
{
struct ixgbe_hw *hw = &adapter->hw;
u32 ctrl, vid, vfta, retry;
struct rx_ring *rxr;
/*
** We get here thru init_locked, meaning
@ -1779,6 +1774,12 @@ ixv_setup_vlan_support(struct adapter *adapter)
ctrl = IXGBE_READ_REG(hw, IXGBE_VFRXDCTL(i));
ctrl |= IXGBE_RXDCTL_VME;
IXGBE_WRITE_REG(hw, IXGBE_VFRXDCTL(i), ctrl);
/*
* Let Rx path know that it needs to store VLAN tag
* as part of extra mbuf info.
*/
rxr = &adapter->rx_rings[i];
rxr->vtag_strip = TRUE;
}
/*
@ -1794,7 +1795,7 @@ ixv_setup_vlan_support(struct adapter *adapter)
** based on the bits set in each
** of the array ints.
*/
for ( int j = 0; j < 32; j++) {
for (int j = 0; j < 32; j++) {
retry = 0;
if ((vfta & (1 << j)) == 0)
continue;
@ -1821,10 +1822,10 @@ ixv_register_vlan(void *arg, struct ifnet *ifp, u16 vtag)
struct adapter *adapter = ifp->if_softc;
u16 index, bit;
if (ifp->if_softc != arg) /* Not our event */
if (ifp->if_softc != arg) /* Not our event */
return;
if ((vtag == 0) || (vtag > 4095)) /* Invalid */
if ((vtag == 0) || (vtag > 4095)) /* Invalid */
return;
IXGBE_CORE_LOCK(adapter);


@ -81,27 +81,6 @@ static bool ixgbe_rsc_enable = FALSE;
static int atr_sample_rate = 20;
#endif
/* Shared PCI config read/write */
inline u16
ixgbe_read_pci_cfg(struct ixgbe_hw *hw, u32 reg)
{
u16 value;
value = pci_read_config(((struct ixgbe_osdep *)hw->back)->dev,
reg, 2);
return (value);
}
inline void
ixgbe_write_pci_cfg(struct ixgbe_hw *hw, u32 reg, u16 value)
{
pci_write_config(((struct ixgbe_osdep *)hw->back)->dev,
reg, value, 2);
return;
}
/*********************************************************************
* Local Function prototypes
*********************************************************************/
@ -189,8 +168,8 @@ ixgbe_start(struct ifnet *ifp)
#else /* ! IXGBE_LEGACY_TX */
/*
** Multiqueue Transmit driver
**
** Multiqueue Transmit Entry Point
** (if_transmit function)
*/
int
ixgbe_mq_start(struct ifnet *ifp, struct mbuf *m)
@ -213,10 +192,14 @@ ixgbe_mq_start(struct ifnet *ifp, struct mbuf *m)
if (M_HASHTYPE_GET(m) != M_HASHTYPE_NONE) {
#ifdef RSS
if (rss_hash2bucket(m->m_pkthdr.flowid,
M_HASHTYPE_GET(m), &bucket_id) == 0)
/* TODO: spit out something if bucket_id > num_queues? */
M_HASHTYPE_GET(m), &bucket_id) == 0) {
i = bucket_id % adapter->num_queues;
else
#ifdef IXGBE_DEBUG
if (bucket_id > adapter->num_queues)
if_printf(ifp, "bucket_id (%d) > num_queues "
"(%d)\n", bucket_id, adapter->num_queues);
#endif
} else
#endif
i = m->m_pkthdr.flowid % adapter->num_queues;
} else
@ -448,6 +431,7 @@ ixgbe_xmit(struct tx_ring *txr, struct mbuf **m_headp)
}
#endif
olinfo_status |= IXGBE_ADVTXD_CC;
i = txr->next_avail_desc;
for (j = 0; j < nsegs; j++) {
bus_size_t seglen;
@ -742,8 +726,12 @@ ixgbe_tx_ctx_setup(struct tx_ring *txr, struct mbuf *mp,
struct adapter *adapter = txr->adapter;
struct ixgbe_adv_tx_context_desc *TXD;
struct ether_vlan_header *eh;
#ifdef INET
struct ip *ip;
#endif
#ifdef INET6
struct ip6_hdr *ip6;
#endif
u32 vlan_macip_lens = 0, type_tucmd_mlhl = 0;
int ehdrlen, ip_hlen = 0;
u16 etype;
@ -751,9 +739,11 @@ ixgbe_tx_ctx_setup(struct tx_ring *txr, struct mbuf *mp,
int offload = TRUE;
int ctxd = txr->next_avail_desc;
u16 vtag = 0;
caddr_t l3d;
/* First check if TSO is to be used */
if (mp->m_pkthdr.csum_flags & CSUM_TSO)
if (mp->m_pkthdr.csum_flags & (CSUM_IP_TSO|CSUM_IP6_TSO))
return (ixgbe_tso_setup(txr, mp, cmd_type_len, olinfo_status));
if ((mp->m_pkthdr.csum_flags & CSUM_OFFLOAD) == 0)
@ -796,17 +786,31 @@ ixgbe_tx_ctx_setup(struct tx_ring *txr, struct mbuf *mp,
if (offload == FALSE)
goto no_offloads;
/*
* If the first mbuf only includes the ethernet header, jump to the next one
* XXX: This assumes the stack splits mbufs containing headers on header boundaries
* XXX: And assumes the entire IP header is contained in one mbuf
*/
if (mp->m_len == ehdrlen && mp->m_next)
l3d = mtod(mp->m_next, caddr_t);
else
l3d = mtod(mp, caddr_t) + ehdrlen;
switch (etype) {
case ETHERTYPE_IP:
ip = (struct ip *)(mp->m_data + ehdrlen);
ip = (struct ip *)(l3d);
ip_hlen = ip->ip_hl << 2;
ipproto = ip->ip_p;
type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_IPV4;
/* Insert IPv4 checksum into data descriptors */
if (mp->m_pkthdr.csum_flags & CSUM_IP) {
ip->ip_sum = 0;
*olinfo_status |= IXGBE_TXD_POPTS_IXSM << 8;
}
break;
case ETHERTYPE_IPV6:
ip6 = (struct ip6_hdr *)(mp->m_data + ehdrlen);
ip6 = (struct ip6_hdr *)(l3d);
ip_hlen = sizeof(struct ip6_hdr);
/* XXX-BZ this will go badly in case of ext hdrs. */
ipproto = ip6->ip6_nxt;
type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_IPV6;
break;
@ -817,29 +821,32 @@ ixgbe_tx_ctx_setup(struct tx_ring *txr, struct mbuf *mp,
vlan_macip_lens |= ip_hlen;
/* No support for offloads for non-L4 next headers */
switch (ipproto) {
case IPPROTO_TCP:
if (mp->m_pkthdr.csum_flags & CSUM_TCP)
if (mp->m_pkthdr.csum_flags & (CSUM_IP_TCP | CSUM_IP6_TCP))
type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP;
else
offload = false;
break;
case IPPROTO_UDP:
if (mp->m_pkthdr.csum_flags & CSUM_UDP)
if (mp->m_pkthdr.csum_flags & (CSUM_IP_UDP | CSUM_IP6_UDP))
type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_UDP;
else
offload = false;
break;
#if __FreeBSD_version >= 800000
case IPPROTO_SCTP:
if (mp->m_pkthdr.csum_flags & CSUM_SCTP)
if (mp->m_pkthdr.csum_flags & (CSUM_IP_SCTP | CSUM_IP6_SCTP))
type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_SCTP;
else
offload = false;
break;
#endif
default:
offload = FALSE;
offload = false;
break;
}
if (offload) /* For the TX descriptor setup */
if (offload) /* Insert L4 checksum into data descriptors */
*olinfo_status |= IXGBE_TXD_POPTS_TXSM << 8;
no_offloads:
@ -884,7 +891,6 @@ ixgbe_tso_setup(struct tx_ring *txr, struct mbuf *mp,
#endif
struct tcphdr *th;
/*
* Determine where frame payload starts.
* Jump over vlan headers if already present
@ -1041,7 +1047,7 @@ ixgbe_txeof(struct tx_ring *txr)
BUS_DMASYNC_POSTREAD);
do {
union ixgbe_adv_tx_desc *eop= buf->eop;
union ixgbe_adv_tx_desc *eop = buf->eop;
if (eop == NULL) /* No work */
break;
@ -1283,6 +1289,7 @@ ixgbe_setup_hw_rsc(struct rx_ring *rxr)
rxr->hw_rsc = TRUE;
}
/*********************************************************************
*
* Refresh mbuf buffers for RX descriptor rings
@ -1418,7 +1425,6 @@ ixgbe_allocate_receive_buffers(struct rx_ring *rxr)
return (error);
}
static void
ixgbe_free_receive_ring(struct rx_ring *rxr)
{
@ -1438,7 +1444,6 @@ ixgbe_free_receive_ring(struct rx_ring *rxr)
}
}
/*********************************************************************
*
* Initialize a receive ring and its buffers.
@ -1916,13 +1921,17 @@ ixgbe_rxeof(struct ix_queue *que)
sendmp->m_pkthdr.flowid =
le32toh(cur->wb.lower.hi_dword.rss);
switch (pkt_info & IXGBE_RXDADV_RSSTYPE_MASK) {
case IXGBE_RXDADV_RSSTYPE_IPV4:
M_HASHTYPE_SET(sendmp,
M_HASHTYPE_RSS_IPV4);
break;
case IXGBE_RXDADV_RSSTYPE_IPV4_TCP:
M_HASHTYPE_SET(sendmp,
M_HASHTYPE_RSS_TCP_IPV4);
break;
case IXGBE_RXDADV_RSSTYPE_IPV4:
case IXGBE_RXDADV_RSSTYPE_IPV6:
M_HASHTYPE_SET(sendmp,
M_HASHTYPE_RSS_IPV4);
M_HASHTYPE_RSS_IPV6);
break;
case IXGBE_RXDADV_RSSTYPE_IPV6_TCP:
M_HASHTYPE_SET(sendmp,
@ -1932,14 +1941,11 @@ ixgbe_rxeof(struct ix_queue *que)
M_HASHTYPE_SET(sendmp,
M_HASHTYPE_RSS_IPV6_EX);
break;
case IXGBE_RXDADV_RSSTYPE_IPV6:
M_HASHTYPE_SET(sendmp,
M_HASHTYPE_RSS_IPV6);
break;
case IXGBE_RXDADV_RSSTYPE_IPV6_TCP_EX:
M_HASHTYPE_SET(sendmp,
M_HASHTYPE_RSS_TCP_IPV6_EX);
break;
#if __FreeBSD_version > 1100000
case IXGBE_RXDADV_RSSTYPE_IPV4_UDP:
M_HASHTYPE_SET(sendmp,
M_HASHTYPE_RSS_UDP_IPV4);
@ -1952,6 +1958,7 @@ ixgbe_rxeof(struct ix_queue *que)
M_HASHTYPE_SET(sendmp,
M_HASHTYPE_RSS_UDP_IPV6_EX);
break;
#endif
default:
M_HASHTYPE_SET(sendmp,
M_HASHTYPE_OPAQUE);
@ -2021,34 +2028,28 @@ ixgbe_rx_checksum(u32 staterr, struct mbuf * mp, u32 ptype)
{
u16 status = (u16) staterr;
u8 errors = (u8) (staterr >> 24);
bool sctp = FALSE;
bool sctp = false;
if ((ptype & IXGBE_RXDADV_PKTTYPE_ETQF) == 0 &&
(ptype & IXGBE_RXDADV_PKTTYPE_SCTP) != 0)
sctp = TRUE;
sctp = true;
/* IPv4 checksum */
if (status & IXGBE_RXD_STAT_IPCS) {
if (!(errors & IXGBE_RXD_ERR_IPE)) {
/* IP Checksum Good */
mp->m_pkthdr.csum_flags = CSUM_IP_CHECKED;
mp->m_pkthdr.csum_flags |= CSUM_IP_VALID;
} else
mp->m_pkthdr.csum_flags = 0;
mp->m_pkthdr.csum_flags |= CSUM_L3_CALC;
/* IP Checksum Good */
if (!(errors & IXGBE_RXD_ERR_IPE))
mp->m_pkthdr.csum_flags |= CSUM_L3_VALID;
}
/* TCP/UDP/SCTP checksum */
if (status & IXGBE_RXD_STAT_L4CS) {
u64 type = (CSUM_DATA_VALID | CSUM_PSEUDO_HDR);
#if __FreeBSD_version >= 800000
if (sctp)
type = CSUM_SCTP_VALID;
#endif
mp->m_pkthdr.csum_flags |= CSUM_L4_CALC;
if (!(errors & IXGBE_RXD_ERR_TCPE)) {
mp->m_pkthdr.csum_flags |= type;
mp->m_pkthdr.csum_flags |= CSUM_L4_VALID;
if (!sctp)
mp->m_pkthdr.csum_data = htons(0xffff);
}
}
}
return;
}
/********************************************************************


@ -140,12 +140,6 @@
/* Alignment for rings */
#define DBA_ALIGN 128
/*
* This parameter controls the maximum no of times the driver will loop in
* the isr. Minimum Value = 1
*/
#define MAX_LOOP 10
/*
* This is the max watchdog interval, ie. the time that can
* pass between any two TX clean operations, such only happening
@ -162,9 +156,11 @@
/* These defines are used in MTU calculations */
#define IXGBE_MAX_FRAME_SIZE 9728
#define IXGBE_MTU_HDR (ETHER_HDR_LEN + ETHER_CRC_LEN + \
#define IXGBE_MTU_HDR (ETHER_HDR_LEN + ETHER_CRC_LEN)
#define IXGBE_MTU_HDR_VLAN (ETHER_HDR_LEN + ETHER_CRC_LEN + \
ETHER_VLAN_ENCAP_LEN)
#define IXGBE_MAX_MTU (IXGBE_MAX_FRAME_SIZE - IXGBE_MTU_HDR)
#define IXGBE_MAX_MTU_VLAN (IXGBE_MAX_FRAME_SIZE - IXGBE_MTU_HDR_VLAN)
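/*
 * Worked example (with ETHER_HDR_LEN = 14, ETHER_CRC_LEN = 4 and
 * ETHER_VLAN_ENCAP_LEN = 4 from net/ethernet.h): IXGBE_MTU_HDR = 18,
 * so IXGBE_MAX_MTU = 9728 - 18 = 9710, and with a VLAN tag
 * IXGBE_MAX_MTU_VLAN = 9728 - 22 = 9706.
 */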
/* Flow control constants */
#define IXGBE_FC_PAUSE 0xFFFF
@ -181,6 +177,9 @@
* modern Intel CPUs, results in 40 bytes wasted and a significant drop
* in observed efficiency of the optimization, 97.9% -> 81.8%.
*/
#if __FreeBSD_version < 1002000
#define MPKTHSIZE (sizeof(struct m_hdr) + sizeof(struct pkthdr))
#endif
#define IXGBE_RX_COPY_HDR_PADDED ((((MPKTHSIZE - 1) / 32) + 1) * 32)
#define IXGBE_RX_COPY_LEN (MSIZE - IXGBE_RX_COPY_HDR_PADDED)
#define IXGBE_RX_COPY_ALIGN (IXGBE_RX_COPY_HDR_PADDED - MPKTHSIZE)
@ -211,7 +210,6 @@
#define MSIX_82598_BAR 3
#define MSIX_82599_BAR 4
#define IXGBE_TSO_SIZE 262140
#define IXGBE_TX_BUFFER_SIZE ((u32) 1514)
#define IXGBE_RX_HDR 128
#define IXGBE_VFTA_SIZE 128
#define IXGBE_BR_SIZE 4096
@ -221,8 +219,12 @@
#define IXV_EITR_DEFAULT 128
/* Offload bits in mbuf flag */
#if __FreeBSD_version >= 800000
/* Supported offload bits in mbuf flag */
#if __FreeBSD_version >= 1000000
#define CSUM_OFFLOAD (CSUM_IP_TSO|CSUM_IP6_TSO|CSUM_IP| \
CSUM_IP_UDP|CSUM_IP_TCP|CSUM_IP_SCTP| \
CSUM_IP6_UDP|CSUM_IP6_TCP|CSUM_IP6_SCTP)
#elif __FreeBSD_version >= 800000
#define CSUM_OFFLOAD (CSUM_IP|CSUM_TCP|CSUM_UDP|CSUM_SCTP)
#else
#define CSUM_OFFLOAD (CSUM_IP|CSUM_TCP|CSUM_UDP)
@ -243,7 +245,11 @@
#define IXGBE_LOW_LATENCY 128
#define IXGBE_AVE_LATENCY 400
#define IXGBE_BULK_LATENCY 1200
#define IXGBE_LINK_ITR 2000
/* Using 1FF (the max value), the interval is ~1.05ms */
#define IXGBE_LINK_ITR_QUANTA 0x1FF
#define IXGBE_LINK_ITR ((IXGBE_LINK_ITR_QUANTA << 3) & \
IXGBE_EITR_ITR_INT_MASK)
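/*
 * Sanity check (assuming an EITR quantum of 2.048 usec, per the
 * datasheet): 0x1FF = 511 quanta, and 511 * 2.048 usec ~= 1.046 ms,
 * matching the ~1.05 ms noted above.
 */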
/* MAC type macros */
#define IXGBE_IS_X550VF(_adapter) \
@ -449,11 +455,11 @@ struct ixgbe_vf {
/* Our adapter structure */
struct adapter {
struct ifnet *ifp;
struct ixgbe_hw hw;
struct ixgbe_osdep osdep;
struct device *dev;
struct ifnet *ifp;
struct resource *pci_mem;
struct resource *msix_mem;


@ -660,7 +660,7 @@ static s32 ixgbe_check_mac_link_82598(struct ixgbe_hw *hw,
hw->phy.ops.read_reg(hw, 0xC00C, IXGBE_TWINAX_DEV,
&adapt_comp_reg);
if (link_up_wait_to_complete) {
for (i = 0; i < IXGBE_LINK_UP_TIME; i++) {
for (i = 0; i < hw->mac.max_link_up_time; i++) {
if ((link_reg & 1) &&
((adapt_comp_reg & 1) == 0)) {
*link_up = TRUE;
@ -689,7 +689,7 @@ static s32 ixgbe_check_mac_link_82598(struct ixgbe_hw *hw,
links_reg = IXGBE_READ_REG(hw, IXGBE_LINKS);
if (link_up_wait_to_complete) {
for (i = 0; i < IXGBE_LINK_UP_TIME; i++) {
for (i = 0; i < hw->mac.max_link_up_time; i++) {
if (links_reg & IXGBE_LINKS_UP) {
*link_up = TRUE;
break;


@ -382,8 +382,8 @@ s32 ixgbe_init_ops_82599(struct ixgbe_hw *hw)
mac->max_tx_queues = IXGBE_82599_MAX_TX_QUEUES;
mac->max_msix_vectors = ixgbe_get_pcie_msix_count_generic(hw);
mac->arc_subsystem_valid = (IXGBE_READ_REG(hw, IXGBE_FWSM) &
IXGBE_FWSM_MODE_MASK) ? TRUE : FALSE;
mac->arc_subsystem_valid = !!(IXGBE_READ_REG(hw, IXGBE_FWSM_BY_MAC(hw))
& IXGBE_FWSM_MODE_MASK);
hw->mbx.ops.init_params = ixgbe_init_mbx_params_pf;
@ -1370,7 +1370,7 @@ s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl,
* Continue setup of fdirctrl register bits:
* Turn perfect match filtering on
* Report hash in RSS field of Rx wb descriptor
* Initialize the drop queue
* Initialize the drop queue to queue 127
* Move the flexible bytes to use the ethertype - shift 6 words
* Set the maximum length per hash bucket to 0xA filters
* Send interrupt when 64 (0x4 * 16) filters are left
@ -1381,6 +1381,9 @@ s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl,
(0x6 << IXGBE_FDIRCTRL_FLEX_SHIFT) |
(0xA << IXGBE_FDIRCTRL_MAX_LENGTH_SHIFT) |
(4 << IXGBE_FDIRCTRL_FULL_THRESH_SHIFT);
if ((hw->mac.type == ixgbe_mac_X550) ||
(hw->mac.type == ixgbe_mac_X550EM_x))
fdirctrl |= IXGBE_FDIRCTRL_DROP_NO_MATCH;
if (cloud_mode)
fdirctrl |=(IXGBE_FDIRCTRL_FILTERMODE_CLOUD <<
@ -1392,6 +1395,39 @@ s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl,
return IXGBE_SUCCESS;
}
/**
* ixgbe_set_fdir_drop_queue_82599 - Set Flow Director drop queue
* @hw: pointer to hardware structure
* @dropqueue: Rx queue index used for the dropped packets
**/
void ixgbe_set_fdir_drop_queue_82599(struct ixgbe_hw *hw, u8 dropqueue)
{
u32 fdirctrl;
DEBUGFUNC("ixgbe_set_fdir_drop_queue_82599");
/* Clear init done bit and drop queue field */
fdirctrl = IXGBE_READ_REG(hw, IXGBE_FDIRCTRL);
fdirctrl &= ~(IXGBE_FDIRCTRL_DROP_Q_MASK | IXGBE_FDIRCTRL_INIT_DONE);
/* Set drop queue */
fdirctrl |= (dropqueue << IXGBE_FDIRCTRL_DROP_Q_SHIFT);
if ((hw->mac.type == ixgbe_mac_X550) ||
(hw->mac.type == ixgbe_mac_X550EM_x))
fdirctrl |= IXGBE_FDIRCTRL_DROP_NO_MATCH;
IXGBE_WRITE_REG(hw, IXGBE_FDIRCMD,
(IXGBE_READ_REG(hw, IXGBE_FDIRCMD) |
IXGBE_FDIRCMD_CLEARHT));
IXGBE_WRITE_FLUSH(hw);
IXGBE_WRITE_REG(hw, IXGBE_FDIRCMD,
(IXGBE_READ_REG(hw, IXGBE_FDIRCMD) &
~IXGBE_FDIRCMD_CLEARHT));
IXGBE_WRITE_FLUSH(hw);
/* write hashes and fdirctrl register, poll for completion */
ixgbe_fdir_enable_82599(hw, fdirctrl);
}
/*
* These defines allow us to quickly generate all of the necessary instructions
* in the function below by simply calling out IXGBE_COMPUTE_SIG_HASH_ITERATION
@ -1492,16 +1528,15 @@ u32 ixgbe_atr_compute_sig_hash_82599(union ixgbe_atr_hash_dword input,
* Note that the tunnel bit in input must not be set when the hardware
* tunneling support does not exist.
**/
s32 ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw,
union ixgbe_atr_hash_dword input,
union ixgbe_atr_hash_dword common,
u8 queue)
void ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw,
union ixgbe_atr_hash_dword input,
union ixgbe_atr_hash_dword common,
u8 queue)
{
u64 fdirhashcmd;
u8 flow_type;
bool tunnel;
u32 fdircmd;
s32 err;
DEBUGFUNC("ixgbe_fdir_add_signature_filter_82599");
@ -1523,7 +1558,7 @@ s32 ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw,
break;
default:
DEBUGOUT(" Error on flow type input\n");
return IXGBE_ERR_CONFIG;
return;
}
/* configure FDIRCMD register */
@ -1542,15 +1577,9 @@ s32 ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw,
fdirhashcmd |= ixgbe_atr_compute_sig_hash_82599(input, common);
IXGBE_WRITE_REG64(hw, IXGBE_FDIRHASH, fdirhashcmd);
err = ixgbe_fdir_check_cmd_complete(hw, &fdircmd);
if (err) {
DEBUGOUT("Flow Director command did not complete!\n");
return err;
}
DEBUGOUT2("Tx Queue=%x hash=%x\n", queue, (u32)fdirhashcmd);
return IXGBE_SUCCESS;
return;
}
#define IXGBE_COMPUTE_BKT_HASH_ITERATION(_n) \


@ -35,8 +35,10 @@
#include "ixgbe_api.h"
#include "ixgbe_common.h"
#define IXGBE_EMPTY_PARAM
static const u32 ixgbe_mvals_base[IXGBE_MVALS_IDX_LIMIT] = {
IXGBE_MVALS_INIT()
IXGBE_MVALS_INIT(IXGBE_EMPTY_PARAM)
};
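/*
 * IXGBE_EMPTY_PARAM passes an empty token into IXGBE_MVALS_INIT(), so
 * the IXGBE_CAT(<reg>, m) entries paste nothing onto the register
 * names and this table is built from the base, unsuffixed registers.
 */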
static const u32 ixgbe_mvals_X540[IXGBE_MVALS_IDX_LIMIT] = {
@ -113,6 +115,7 @@ s32 ixgbe_init_shared_code(struct ixgbe_hw *hw)
status = IXGBE_ERR_DEVICE_NOT_SUPPORTED;
break;
}
hw->mac.max_link_up_time = IXGBE_LINK_UP_TIME;
return status;
}
@ -187,6 +190,7 @@ s32 ixgbe_set_mac_type(struct ixgbe_hw *hw)
hw->mvals = ixgbe_mvals_X540;
break;
case IXGBE_DEV_ID_X550T:
case IXGBE_DEV_ID_X550T1:
hw->mac.type = ixgbe_mac_X550;
hw->mvals = ixgbe_mvals_X550;
break;


@ -148,10 +148,10 @@ s32 ixgbe_reinit_fdir_tables_82599(struct ixgbe_hw *hw);
s32 ixgbe_init_fdir_signature_82599(struct ixgbe_hw *hw, u32 fdirctrl);
s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl,
bool cloud_mode);
s32 ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw,
union ixgbe_atr_hash_dword input,
union ixgbe_atr_hash_dword common,
u8 queue);
void ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw,
union ixgbe_atr_hash_dword input,
union ixgbe_atr_hash_dword common,
u8 queue);
s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
union ixgbe_atr_input *input_mask, bool cloud_mode);
s32 ixgbe_fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
@ -180,6 +180,7 @@ s32 ixgbe_read_i2c_combined_unlocked(struct ixgbe_hw *hw, u8 addr, u16 reg,
u16 *val);
s32 ixgbe_write_i2c_byte(struct ixgbe_hw *hw, u8 byte_offset, u8 dev_addr,
u8 data);
void ixgbe_set_fdir_drop_queue_82599(struct ixgbe_hw *hw, u8 dropqueue);
s32 ixgbe_write_i2c_byte_unlocked(struct ixgbe_hw *hw, u8 byte_offset,
u8 dev_addr, u8 data);
s32 ixgbe_write_i2c_combined(struct ixgbe_hw *hw, u8 addr, u16 reg, u16 val);


@ -70,7 +70,7 @@ s32 ixgbe_init_ops_generic(struct ixgbe_hw *hw)
{
struct ixgbe_eeprom_info *eeprom = &hw->eeprom;
struct ixgbe_mac_info *mac = &hw->mac;
u32 eec = IXGBE_READ_REG(hw, IXGBE_EEC);
u32 eec = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
DEBUGFUNC("ixgbe_init_ops_generic");
@ -188,6 +188,7 @@ bool ixgbe_device_supports_autoneg_fc(struct ixgbe_hw *hw)
case IXGBE_DEV_ID_X540T1:
case IXGBE_DEV_ID_X540_BYPASS:
case IXGBE_DEV_ID_X550T:
case IXGBE_DEV_ID_X550T1:
case IXGBE_DEV_ID_X550EM_X_10G_T:
supported = TRUE;
break;
@ -1038,7 +1039,7 @@ void ixgbe_set_lan_id_multi_port_pcie(struct ixgbe_hw *hw)
bus->lan_id = bus->func;
/* check for a port swap */
reg = IXGBE_READ_REG(hw, IXGBE_FACTPS);
reg = IXGBE_READ_REG(hw, IXGBE_FACTPS_BY_MAC(hw));
if (reg & IXGBE_FACTPS_LFS)
bus->func ^= 0x1;
}
@ -1164,7 +1165,7 @@ s32 ixgbe_init_eeprom_params_generic(struct ixgbe_hw *hw)
* Check for EEPROM present first.
* If not present leave as none
*/
eec = IXGBE_READ_REG(hw, IXGBE_EEC);
eec = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
if (eec & IXGBE_EEC_PRES) {
eeprom->type = ixgbe_eeprom_spi;
@ -1725,14 +1726,14 @@ static s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw)
status = IXGBE_ERR_SWFW_SYNC;
if (status == IXGBE_SUCCESS) {
eec = IXGBE_READ_REG(hw, IXGBE_EEC);
eec = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
/* Request EEPROM Access */
eec |= IXGBE_EEC_REQ;
IXGBE_WRITE_REG(hw, IXGBE_EEC, eec);
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), eec);
for (i = 0; i < IXGBE_EEPROM_GRANT_ATTEMPTS; i++) {
eec = IXGBE_READ_REG(hw, IXGBE_EEC);
eec = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
if (eec & IXGBE_EEC_GNT)
break;
usec_delay(5);
@ -1741,7 +1742,7 @@ static s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw)
/* Release if grant not acquired */
if (!(eec & IXGBE_EEC_GNT)) {
eec &= ~IXGBE_EEC_REQ;
IXGBE_WRITE_REG(hw, IXGBE_EEC, eec);
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), eec);
DEBUGOUT("Could not acquire EEPROM grant\n");
hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM);
@ -1752,7 +1753,7 @@ static s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw)
if (status == IXGBE_SUCCESS) {
/* Clear CS and SK */
eec &= ~(IXGBE_EEC_CS | IXGBE_EEC_SK);
IXGBE_WRITE_REG(hw, IXGBE_EEC, eec);
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), eec);
IXGBE_WRITE_FLUSH(hw);
usec_delay(1);
}
@ -1782,7 +1783,7 @@ static s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw)
* If the SMBI bit is 0 when we read it, then the bit will be
* set and we have the semaphore
*/
swsm = IXGBE_READ_REG(hw, IXGBE_SWSM);
swsm = IXGBE_READ_REG(hw, IXGBE_SWSM_BY_MAC(hw));
if (!(swsm & IXGBE_SWSM_SMBI)) {
status = IXGBE_SUCCESS;
break;
@ -1807,7 +1808,7 @@ static s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw)
* If the SMBI bit is 0 when we read it, then the bit will be
* set and we have the semaphore
*/
swsm = IXGBE_READ_REG(hw, IXGBE_SWSM);
swsm = IXGBE_READ_REG(hw, IXGBE_SWSM_BY_MAC(hw));
if (!(swsm & IXGBE_SWSM_SMBI))
status = IXGBE_SUCCESS;
}
@ -1815,17 +1816,17 @@ static s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw)
/* Now get the semaphore between SW/FW through the SWESMBI bit */
if (status == IXGBE_SUCCESS) {
for (i = 0; i < timeout; i++) {
swsm = IXGBE_READ_REG(hw, IXGBE_SWSM);
swsm = IXGBE_READ_REG(hw, IXGBE_SWSM_BY_MAC(hw));
/* Set the SW EEPROM semaphore bit to request access */
swsm |= IXGBE_SWSM_SWESMBI;
IXGBE_WRITE_REG(hw, IXGBE_SWSM, swsm);
IXGBE_WRITE_REG(hw, IXGBE_SWSM_BY_MAC(hw), swsm);
/*
* If we set the bit successfully then we got the
* semaphore.
*/
swsm = IXGBE_READ_REG(hw, IXGBE_SWSM);
swsm = IXGBE_READ_REG(hw, IXGBE_SWSM_BY_MAC(hw));
if (swsm & IXGBE_SWSM_SWESMBI)
break;
@ -1922,15 +1923,15 @@ static void ixgbe_standby_eeprom(struct ixgbe_hw *hw)
DEBUGFUNC("ixgbe_standby_eeprom");
eec = IXGBE_READ_REG(hw, IXGBE_EEC);
eec = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
/* Toggle CS to flush commands */
eec |= IXGBE_EEC_CS;
IXGBE_WRITE_REG(hw, IXGBE_EEC, eec);
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), eec);
IXGBE_WRITE_FLUSH(hw);
usec_delay(1);
eec &= ~IXGBE_EEC_CS;
IXGBE_WRITE_REG(hw, IXGBE_EEC, eec);
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), eec);
IXGBE_WRITE_FLUSH(hw);
usec_delay(1);
}
@ -1950,7 +1951,7 @@ static void ixgbe_shift_out_eeprom_bits(struct ixgbe_hw *hw, u16 data,
DEBUGFUNC("ixgbe_shift_out_eeprom_bits");
eec = IXGBE_READ_REG(hw, IXGBE_EEC);
eec = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
/*
* Mask is used to shift "count" bits of "data" out to the EEPROM
@ -1971,7 +1972,7 @@ static void ixgbe_shift_out_eeprom_bits(struct ixgbe_hw *hw, u16 data,
else
eec &= ~IXGBE_EEC_DI;
IXGBE_WRITE_REG(hw, IXGBE_EEC, eec);
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), eec);
IXGBE_WRITE_FLUSH(hw);
usec_delay(1);
@ -1988,7 +1989,7 @@ static void ixgbe_shift_out_eeprom_bits(struct ixgbe_hw *hw, u16 data,
/* We leave the "DI" bit set to "0" when we leave this routine. */
eec &= ~IXGBE_EEC_DI;
IXGBE_WRITE_REG(hw, IXGBE_EEC, eec);
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), eec);
IXGBE_WRITE_FLUSH(hw);
}
@ -2011,7 +2012,7 @@ static u16 ixgbe_shift_in_eeprom_bits(struct ixgbe_hw *hw, u16 count)
* the value of the "DO" bit. During this "shifting in" process the
* "DI" bit should always be clear.
*/
eec = IXGBE_READ_REG(hw, IXGBE_EEC);
eec = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
eec &= ~(IXGBE_EEC_DO | IXGBE_EEC_DI);
@ -2019,7 +2020,7 @@ static u16 ixgbe_shift_in_eeprom_bits(struct ixgbe_hw *hw, u16 count)
data = data << 1;
ixgbe_raise_eeprom_clk(hw, &eec);
eec = IXGBE_READ_REG(hw, IXGBE_EEC);
eec = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
eec &= ~(IXGBE_EEC_DI);
if (eec & IXGBE_EEC_DO)
@ -2045,7 +2046,7 @@ static void ixgbe_raise_eeprom_clk(struct ixgbe_hw *hw, u32 *eec)
* (setting the SK bit), then delay
*/
*eec = *eec | IXGBE_EEC_SK;
IXGBE_WRITE_REG(hw, IXGBE_EEC, *eec);
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), *eec);
IXGBE_WRITE_FLUSH(hw);
usec_delay(1);
}
@ -2064,7 +2065,7 @@ static void ixgbe_lower_eeprom_clk(struct ixgbe_hw *hw, u32 *eec)
* delay
*/
*eec = *eec & ~IXGBE_EEC_SK;
IXGBE_WRITE_REG(hw, IXGBE_EEC, *eec);
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), *eec);
IXGBE_WRITE_FLUSH(hw);
usec_delay(1);
}
@ -2079,19 +2080,19 @@ static void ixgbe_release_eeprom(struct ixgbe_hw *hw)
DEBUGFUNC("ixgbe_release_eeprom");
eec = IXGBE_READ_REG(hw, IXGBE_EEC);
eec = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
eec |= IXGBE_EEC_CS; /* Pull CS high */
eec &= ~IXGBE_EEC_SK; /* Lower SCK */
IXGBE_WRITE_REG(hw, IXGBE_EEC, eec);
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), eec);
IXGBE_WRITE_FLUSH(hw);
usec_delay(1);
/* Stop requesting EEPROM access */
eec &= ~IXGBE_EEC_REQ;
IXGBE_WRITE_REG(hw, IXGBE_EEC, eec);
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), eec);
hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM);
@ -3147,6 +3148,9 @@ s32 ixgbe_disable_pcie_master(struct ixgbe_hw *hw)
DEBUGOUT("GIO Master Disable bit didn't clear - requesting resets\n");
hw->mac.flags |= IXGBE_FLAGS_DOUBLE_RESET_REQUIRED;
if (hw->mac.type >= ixgbe_mac_X550)
goto out;
/*
* Before proceeding, make sure that the PCIe block does not have
* transactions pending.
@ -4069,7 +4073,7 @@ s32 ixgbe_check_mac_link_generic(struct ixgbe_hw *hw, ixgbe_link_speed *speed,
}
if (link_up_wait_to_complete) {
for (i = 0; i < IXGBE_LINK_UP_TIME; i++) {
for (i = 0; i < hw->mac.max_link_up_time; i++) {
if (links_reg & IXGBE_LINKS_UP) {
*link_up = TRUE;
break;
@ -4715,7 +4719,7 @@ bool ixgbe_mng_present(struct ixgbe_hw *hw)
if (hw->mac.type < ixgbe_mac_82599EB)
return FALSE;
fwsm = IXGBE_READ_REG(hw, IXGBE_FWSM);
fwsm = IXGBE_READ_REG(hw, IXGBE_FWSM_BY_MAC(hw));
fwsm &= IXGBE_FWSM_MODE_MASK;
return fwsm == IXGBE_FWSM_FW_MODE_PT;
}
@ -4730,7 +4734,7 @@ bool ixgbe_mng_enabled(struct ixgbe_hw *hw)
{
u32 fwsm, manc, factps;
fwsm = IXGBE_READ_REG(hw, IXGBE_FWSM);
fwsm = IXGBE_READ_REG(hw, IXGBE_FWSM_BY_MAC(hw));
if ((fwsm & IXGBE_FWSM_MODE_MASK) != IXGBE_FWSM_FW_MODE_PT)
return FALSE;
@ -4739,7 +4743,7 @@ bool ixgbe_mng_enabled(struct ixgbe_hw *hw)
return FALSE;
if (hw->mac.type <= ixgbe_mac_X540) {
factps = IXGBE_READ_REG(hw, IXGBE_FACTPS);
factps = IXGBE_READ_REG(hw, IXGBE_FACTPS_BY_MAC(hw));
if (factps & IXGBE_FACTPS_MNGCG)
return FALSE;
}


@ -148,6 +148,11 @@ s32 ixgbe_dcb_calculate_tc_credits_cee(struct ixgbe_hw *hw,
/* Calculate credit refill ratio using multiplier */
credit_refill = min(link_percentage * min_multiplier,
(u32)IXGBE_DCB_MAX_CREDIT_REFILL);
/* Refill at least minimum credit */
if (credit_refill < min_credit)
credit_refill = min_credit;
p->data_credits_refill = (u16)credit_refill;
/* Calculate maximum credit for the TC */
@ -158,7 +163,7 @@ s32 ixgbe_dcb_calculate_tc_credits_cee(struct ixgbe_hw *hw,
* of a TC is too small, the maximum credit may not be
* enough to send out a jumbo frame in data plane arbitration.
*/
if (credit_max && (credit_max < min_credit))
if (credit_max < min_credit)
credit_max = min_credit;
if (direction == IXGBE_DCB_TX_CONFIG) {


@ -31,3 +31,58 @@
******************************************************************************/
/*$FreeBSD$*/
#include "ixgbe_osdep.h"
#include "ixgbe.h"
inline device_t
ixgbe_dev_from_hw(struct ixgbe_hw *hw)
{
return ((struct adapter *)hw->back)->dev;
}
inline u16
ixgbe_read_pci_cfg(struct ixgbe_hw *hw, u32 reg)
{
return pci_read_config(((struct adapter *)hw->back)->dev,
reg, 2);
}
inline void
ixgbe_write_pci_cfg(struct ixgbe_hw *hw, u32 reg, u16 value)
{
pci_write_config(((struct adapter *)hw->back)->dev,
reg, value, 2);
}
inline u32
ixgbe_read_reg(struct ixgbe_hw *hw, u32 reg)
{
return bus_space_read_4(((struct adapter *)hw->back)->osdep.mem_bus_space_tag,
((struct adapter *)hw->back)->osdep.mem_bus_space_handle,
reg);
}
inline void
ixgbe_write_reg(struct ixgbe_hw *hw, u32 reg, u32 val)
{
bus_space_write_4(((struct adapter *)hw->back)->osdep.mem_bus_space_tag,
((struct adapter *)hw->back)->osdep.mem_bus_space_handle,
reg, val);
}
inline u32
ixgbe_read_reg_array(struct ixgbe_hw *hw, u32 reg, u32 offset)
{
return bus_space_read_4(((struct adapter *)hw->back)->osdep.mem_bus_space_tag,
((struct adapter *)hw->back)->osdep.mem_bus_space_handle,
reg + (offset << 2));
}
inline void
ixgbe_write_reg_array(struct ixgbe_hw *hw, u32 reg, u32 offset, u32 val)
{
bus_space_write_4(((struct adapter *)hw->back)->osdep.mem_bus_space_tag,
((struct adapter *)hw->back)->osdep.mem_bus_space_handle,
reg + (offset << 2), val);
}


@ -61,7 +61,7 @@
#define usec_delay(x) DELAY(x)
#define msec_delay(x) DELAY(1000*(x))
#define DBG 0
#define DBG 0
#define MSGOUT(S, A, B) printf(S "\n", A, B)
#define DEBUGFUNC(F) DEBUGOUT(F);
#if DBG
@ -165,7 +165,7 @@ void prefetch(void *x)
* non-overlapping regions and 32-byte padding on both src and dst.
*/
static __inline int
ixgbe_bcopy(void *_src, void *_dst, int l)
ixgbe_bcopy(void *restrict _src, void *restrict _dst, int l)
{
uint64_t *src = _src;
uint64_t *dst = _dst;
@ -183,11 +183,13 @@ struct ixgbe_osdep
{
bus_space_tag_t mem_bus_space_tag;
bus_space_handle_t mem_bus_space_handle;
struct device *dev;
};
/* These routines are needed by the shared code */
/* These routines need struct ixgbe_hw declared */
struct ixgbe_hw;
device_t ixgbe_dev_from_hw(struct ixgbe_hw *hw);
/* These routines are needed by the shared code */
extern u16 ixgbe_read_pci_cfg(struct ixgbe_hw *, u32);
#define IXGBE_READ_PCIE_WORD ixgbe_read_pci_cfg
@ -196,26 +198,18 @@ extern void ixgbe_write_pci_cfg(struct ixgbe_hw *, u32, u16);
#define IXGBE_WRITE_FLUSH(a) IXGBE_READ_REG(a, IXGBE_STATUS)
#define IXGBE_READ_REG(a, reg) (\
bus_space_read_4( ((struct ixgbe_osdep *)(a)->back)->mem_bus_space_tag, \
((struct ixgbe_osdep *)(a)->back)->mem_bus_space_handle, \
reg))
extern u32 ixgbe_read_reg(struct ixgbe_hw *, u32);
#define IXGBE_READ_REG(a, reg) ixgbe_read_reg(a, reg)
#define IXGBE_WRITE_REG(a, reg, value) (\
bus_space_write_4( ((struct ixgbe_osdep *)(a)->back)->mem_bus_space_tag, \
((struct ixgbe_osdep *)(a)->back)->mem_bus_space_handle, \
reg, value))
extern void ixgbe_write_reg(struct ixgbe_hw *, u32, u32);
#define IXGBE_WRITE_REG(a, reg, val) ixgbe_write_reg(a, reg, val)
extern u32 ixgbe_read_reg_array(struct ixgbe_hw *, u32, u32);
#define IXGBE_READ_REG_ARRAY(a, reg, offset) \
ixgbe_read_reg_array(a, reg, offset)
#define IXGBE_READ_REG_ARRAY(a, reg, offset) (\
bus_space_read_4( ((struct ixgbe_osdep *)(a)->back)->mem_bus_space_tag, \
((struct ixgbe_osdep *)(a)->back)->mem_bus_space_handle, \
(reg + ((offset) << 2))))
#define IXGBE_WRITE_REG_ARRAY(a, reg, offset, value) (\
bus_space_write_4( ((struct ixgbe_osdep *)(a)->back)->mem_bus_space_tag, \
((struct ixgbe_osdep *)(a)->back)->mem_bus_space_handle, \
(reg + ((offset) << 2)), value))
extern void ixgbe_write_reg_array(struct ixgbe_hw *, u32, u32, u32);
#define IXGBE_WRITE_REG_ARRAY(a, reg, offset, val) \
ixgbe_write_reg_array(a, reg, offset, val)
#endif /* _IXGBE_OS_H_ */
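With this change, hw->back points at the driver's struct adapter rather than
at struct ixgbe_osdep, and every register access funnels through the real
functions now living in ixgbe_osdep.c. A minimal sketch of a caller using
the new function-backed macros (example_link_is_up() is illustrative, not
part of the driver; IXGBE_LINKS and IXGBE_LINKS_UP are definitions used
elsewhere in this diff):

static inline bool
example_link_is_up(struct ixgbe_hw *hw)
{
	u32 links;

	/* Expands to ixgbe_read_reg(hw, IXGBE_LINKS) from ixgbe_osdep.c. */
	links = IXGBE_READ_REG(hw, IXGBE_LINKS);
	return ((links & IXGBE_LINKS_UP) != 0);
}

Routing the accesses through out-of-line functions gives a single choke
point for all MMIO, which presumably simplifies debugging and keeps the
shared code independent of the osdep layout.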


@ -508,7 +508,9 @@ enum ixgbe_phy_type ixgbe_get_phy_type_from_id(u32 phy_id)
case TN1010_PHY_ID:
phy_type = ixgbe_phy_tn;
break;
case X550_PHY_ID:
case X550_PHY_ID1:
case X550_PHY_ID2:
case X550_PHY_ID3:
case X540_PHY_ID:
phy_type = ixgbe_phy_aq;
break;
@ -830,7 +832,7 @@ s32 ixgbe_setup_phy_link_generic(struct ixgbe_hw *hw)
if (hw->mac.type == ixgbe_mac_X550) {
if (speed & IXGBE_LINK_SPEED_5GB_FULL) {
/* Set or unset auto-negotiation 1G advertisement */
/* Set or unset auto-negotiation 5G advertisement */
hw->phy.ops.read_reg(hw,
IXGBE_MII_AUTONEG_VENDOR_PROVISION_1_REG,
IXGBE_MDIO_AUTO_NEG_DEV_TYPE,
@ -848,7 +850,7 @@ s32 ixgbe_setup_phy_link_generic(struct ixgbe_hw *hw)
}
if (speed & IXGBE_LINK_SPEED_2_5GB_FULL) {
/* Set or unset auto-negotiation 1G advertisement */
/* Set or unset auto-negotiation 2.5G advertisement */
hw->phy.ops.read_reg(hw,
IXGBE_MII_AUTONEG_VENDOR_PROVISION_1_REG,
IXGBE_MDIO_AUTO_NEG_DEV_TYPE,
@ -950,54 +952,70 @@ s32 ixgbe_setup_phy_link_speed_generic(struct ixgbe_hw *hw,
hw->phy.autoneg_advertised |= IXGBE_LINK_SPEED_100_FULL;
/* Setup link based on the new speed settings */
hw->phy.ops.setup_link(hw);
ixgbe_setup_phy_link(hw);
return IXGBE_SUCCESS;
}
/**
* ixgbe_get_copper_speeds_supported - Get copper link speeds from phy
* @hw: pointer to hardware structure
*
* Determines the supported link capabilities by reading the PHY auto
* negotiation register.
**/
static s32 ixgbe_get_copper_speeds_supported(struct ixgbe_hw *hw)
{
s32 status;
u16 speed_ability;
status = hw->phy.ops.read_reg(hw, IXGBE_MDIO_PHY_SPEED_ABILITY,
IXGBE_MDIO_PMA_PMD_DEV_TYPE,
&speed_ability);
if (status)
return status;
if (speed_ability & IXGBE_MDIO_PHY_SPEED_10G)
hw->phy.speeds_supported |= IXGBE_LINK_SPEED_10GB_FULL;
if (speed_ability & IXGBE_MDIO_PHY_SPEED_1G)
hw->phy.speeds_supported |= IXGBE_LINK_SPEED_1GB_FULL;
if (speed_ability & IXGBE_MDIO_PHY_SPEED_100M)
hw->phy.speeds_supported |= IXGBE_LINK_SPEED_100_FULL;
switch (hw->mac.type) {
case ixgbe_mac_X550:
hw->phy.speeds_supported |= IXGBE_LINK_SPEED_2_5GB_FULL;
hw->phy.speeds_supported |= IXGBE_LINK_SPEED_5GB_FULL;
break;
case ixgbe_mac_X550EM_x:
hw->phy.speeds_supported &= ~IXGBE_LINK_SPEED_100_FULL;
break;
default:
break;
}
return status;
}
/**
* ixgbe_get_copper_link_capabilities_generic - Determines link capabilities
* @hw: pointer to hardware structure
* @speed: pointer to link speed
* @autoneg: boolean auto-negotiation value
*
* Determines the supported link capabilities by reading the PHY auto
* negotiation register.
**/
s32 ixgbe_get_copper_link_capabilities_generic(struct ixgbe_hw *hw,
ixgbe_link_speed *speed,
bool *autoneg)
{
s32 status;
u16 speed_ability;
s32 status = IXGBE_SUCCESS;
DEBUGFUNC("ixgbe_get_copper_link_capabilities_generic");
*speed = 0;
*autoneg = TRUE;
if (!hw->phy.speeds_supported)
status = ixgbe_get_copper_speeds_supported(hw);
status = hw->phy.ops.read_reg(hw, IXGBE_MDIO_PHY_SPEED_ABILITY,
IXGBE_MDIO_PMA_PMD_DEV_TYPE,
&speed_ability);
if (status == IXGBE_SUCCESS) {
if (speed_ability & IXGBE_MDIO_PHY_SPEED_10G)
*speed |= IXGBE_LINK_SPEED_10GB_FULL;
if (speed_ability & IXGBE_MDIO_PHY_SPEED_1G)
*speed |= IXGBE_LINK_SPEED_1GB_FULL;
if (speed_ability & IXGBE_MDIO_PHY_SPEED_100M)
*speed |= IXGBE_LINK_SPEED_100_FULL;
}
/* Internal PHY does not support 100 Mbps */
if (hw->mac.type == ixgbe_mac_X550EM_x)
*speed &= ~IXGBE_LINK_SPEED_100_FULL;
if (hw->mac.type == ixgbe_mac_X550) {
*speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
*speed |= IXGBE_LINK_SPEED_5GB_FULL;
}
*speed = hw->phy.speeds_supported;
return status;
}


@ -92,16 +92,23 @@
#define IXGBE_CS4227_GLOBAL_ID_LSB 0
#define IXGBE_CS4227_SCRATCH 2
#define IXGBE_CS4227_GLOBAL_ID_VALUE 0x03E5
#define IXGBE_CS4227_SCRATCH_VALUE 0x5aa5
#define IXGBE_CS4227_RETRIES 5
#define IXGBE_CS4227_RESET_PENDING 0x1357
#define IXGBE_CS4227_RESET_COMPLETE 0x5AA5
#define IXGBE_CS4227_RETRIES 15
#define IXGBE_CS4227_EFUSE_STATUS 0x0181
#define IXGBE_CS4227_LINE_SPARE22_MSB 0x12AD /* Reg to program speed */
#define IXGBE_CS4227_LINE_SPARE24_LSB 0x12B0 /* Reg to program EDC */
#define IXGBE_CS4227_HOST_SPARE22_MSB 0x1AAD /* Reg to program speed */
#define IXGBE_CS4227_HOST_SPARE24_LSB 0x1AB0 /* Reg to program EDC */
#define IXGBE_CS4227_EEPROM_STATUS 0x5001
#define IXGBE_CS4227_EEPROM_LOAD_OK 0x0001
#define IXGBE_CS4227_SPEED_1G 0x8000
#define IXGBE_CS4227_SPEED_10G 0
#define IXGBE_CS4227_EDC_MODE_CX1 0x0002
#define IXGBE_CS4227_EDC_MODE_SR 0x0004
#define IXGBE_CS4227_EDC_MODE_DIAG 0x0008
#define IXGBE_CS4227_RESET_HOLD 500 /* microseconds */
#define IXGBE_CS4227_RESET_DELAY 500 /* milliseconds */
#define IXGBE_CS4227_RESET_DELAY 450 /* milliseconds */
#define IXGBE_CS4227_CHECK_DELAY 30 /* milliseconds */
#define IXGBE_PE 0xE0 /* Port expander address */
#define IXGBE_PE_OUTPUT 1 /* Output register offset */


@ -131,6 +131,7 @@
#define IXGBE_DEV_ID_X540_BYPASS 0x155C
#define IXGBE_DEV_ID_X540T1 0x1560
#define IXGBE_DEV_ID_X550T 0x1563
#define IXGBE_DEV_ID_X550T1 0x15D1
#define IXGBE_DEV_ID_X550EM_X_KX4 0x15AA
#define IXGBE_DEV_ID_X550EM_X_KR 0x15AB
#define IXGBE_DEV_ID_X550EM_X_SFP 0x15AC
@ -1544,7 +1545,9 @@ struct ixgbe_dmac_config {
#define TN1010_PHY_ID 0x00A19410
#define TNX_FW_REV 0xB
#define X540_PHY_ID 0x01540200
#define X550_PHY_ID 0x01540220
#define X550_PHY_ID1 0x01540220
#define X550_PHY_ID2 0x01540223
#define X550_PHY_ID3 0x01540221
#define X557_PHY_ID 0x01540240
#define AQ_FW_REV 0x20
#define QT2022_PHY_ID 0x0043A400
@ -1937,6 +1940,7 @@ enum {
* FIP (0x8914): Filter 4
* LLDP (0x88CC): Filter 5
* LACP (0x8809): Filter 6
* FC (0x8808): Filter 7
*/
#define IXGBE_ETQF_FILTER_EAPOL 0
#define IXGBE_ETQF_FILTER_FCOE 2
@ -1944,6 +1948,7 @@ enum {
#define IXGBE_ETQF_FILTER_FIP 4
#define IXGBE_ETQF_FILTER_LLDP 5
#define IXGBE_ETQF_FILTER_LACP 6
#define IXGBE_ETQF_FILTER_FC 7
/* VLAN Control Bit Masks */
#define IXGBE_VLNCTRL_VET 0x0000FFFF /* bits 0-15 */
#define IXGBE_VLNCTRL_CFI 0x10000000 /* bit 28 */
@ -2803,7 +2808,9 @@ enum ixgbe_fdir_pballoc_type {
#define IXGBE_FDIRCTRL_REPORT_STATUS 0x00000020
#define IXGBE_FDIRCTRL_REPORT_STATUS_ALWAYS 0x00000080
#define IXGBE_FDIRCTRL_DROP_Q_SHIFT 8
#define IXGBE_FDIRCTRL_DROP_Q_MASK 0x00007F00
#define IXGBE_FDIRCTRL_FLEX_SHIFT 16
#define IXGBE_FDIRCTRL_DROP_NO_MATCH 0x00008000
#define IXGBE_FDIRCTRL_FILTERMODE_SHIFT 21
#define IXGBE_FDIRCTRL_FILTERMODE_MACVLAN 0x0001 /* bit 23:21, 001b */
#define IXGBE_FDIRCTRL_FILTERMODE_CLOUD 0x0002 /* bit 23:21, 010b */
@ -2907,6 +2914,11 @@ enum ixgbe_fdir_pballoc_type {
#define FW_DISABLE_RXEN_CMD 0xDE
#define FW_DISABLE_RXEN_LEN 0x1
#define FW_PHY_MGMT_REQ_CMD 0x20
#define FW_INT_PHY_REQ_CMD 0xB
#define FW_INT_PHY_REQ_LEN 10
#define FW_INT_PHY_REQ_READ 0
#define FW_INT_PHY_REQ_WRITE 1
/* Host Interface Command Structures */
struct ixgbe_hic_hdr {
@ -2975,6 +2987,21 @@ struct ixgbe_hic_disable_rxen {
u16 pad3;
};
struct ixgbe_hic_internal_phy_req {
struct ixgbe_hic_hdr hdr;
u8 port_number;
u8 command_type;
u16 address;
u16 rsv1;
u32 write_data;
u16 pad;
};
struct ixgbe_hic_internal_phy_resp {
struct ixgbe_hic_hdr hdr;
u32 read_data;
};
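A sketch of how these structures might drive the firmware mailbox, modeled
on the ixgbe_host_interface_command() flow used elsewhere in the shared
code (hedged: reg_addr is a placeholder, and the checksum, timeout, and
byte-order macros are assumed from the shared headers):

	struct ixgbe_hic_internal_phy_req req;
	s32 status;

	memset(&req, 0, sizeof(req));
	req.hdr.cmd = FW_INT_PHY_REQ_CMD;		/* 0xB */
	req.hdr.buf_len = FW_INT_PHY_REQ_LEN;		/* 10 */
	req.hdr.checksum = FW_DEFAULT_CHECKSUM;
	req.port_number = hw->bus.lan_id;
	req.command_type = FW_INT_PHY_REQ_READ;
	req.address = IXGBE_CPU_TO_BE16(reg_addr);
	status = ixgbe_host_interface_command(hw, (u32 *)&req, sizeof(req),
	    IXGBE_HI_COMMAND_TIMEOUT, TRUE);
	/* on success, the response's read_data holds the register value */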
/* Transmit Descriptor - Legacy */
struct ixgbe_legacy_tx_desc {
@ -3310,6 +3337,7 @@ union ixgbe_atr_hash_dword {
IXGBE_CAT(SRAMREL, m), \
IXGBE_CAT(FACTPS, m), \
IXGBE_CAT(SWSM, m), \
IXGBE_CAT(SWFW_SYNC, m), \
IXGBE_CAT(FWSM, m), \
IXGBE_CAT(SDP0_GPIEN, m), \
IXGBE_CAT(SDP1_GPIEN, m), \
@ -3792,6 +3820,7 @@ struct ixgbe_mac_info {
u8 flags;
struct ixgbe_dmac_config dmac_config;
bool set_lben;
u32 max_link_up_time;
};
struct ixgbe_phy_info {
@ -3806,6 +3835,7 @@ struct ixgbe_phy_info {
u32 phy_semaphore_mask;
bool reset_disable;
ixgbe_autoneg_advertised autoneg_advertised;
ixgbe_link_speed speeds_supported;
enum ixgbe_smart_speed smart_speed;
bool smart_speed_active;
bool multispeed_fiber;
@ -3918,15 +3948,15 @@ struct ixgbe_hw {
#define IXGBE_FUSES0_300MHZ (1 << 5)
#define IXGBE_FUSES0_REV1 (1 << 6)
#define IXGBE_KRM_PORT_CAR_GEN_CTRL(P) ((P) ? 0x8010 : 0x4010)
#define IXGBE_KRM_LINK_CTRL_1(P) ((P) ? 0x820C : 0x420C)
#define IXGBE_KRM_AN_CNTL_1(P) ((P) ? 0x822C : 0x422C)
#define IXGBE_KRM_DSP_TXFFE_STATE_4(P) ((P) ? 0x8634 : 0x4634)
#define IXGBE_KRM_DSP_TXFFE_STATE_5(P) ((P) ? 0x8638 : 0x4638)
#define IXGBE_KRM_RX_TRN_LINKUP_CTRL(P) ((P) ? 0x8B00 : 0x4B00)
#define IXGBE_KRM_PMD_DFX_BURNIN(P) ((P) ? 0x8E00 : 0x4E00)
#define IXGBE_KRM_TX_COEFF_CTRL_1(P) ((P) ? 0x9520 : 0x5520)
#define IXGBE_KRM_RX_ANA_CTL(P) ((P) ? 0x9A00 : 0x5A00)
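The (P) selector keys off the physical port: lane 0 registers live in the
0x4xxx/0x5xxx bank and lane 1 in the 0x8xxx/0x9xxx bank. A hedged usage
sketch, following the read_iosf_sb_reg pattern wired up later in this diff:

	status = hw->mac.ops.read_iosf_sb_reg(hw,
	    IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
	    IXGBE_SB_IOSF_TARGET_KR_PHY, &reg_val);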
#define IXGBE_KRM_PORT_CAR_GEN_CTRL_NELB_32B (1 << 9)
#define IXGBE_KRM_PORT_CAR_GEN_CTRL_NELB_KRPCS (1 << 11)
@ -3960,15 +3990,6 @@ struct ixgbe_hw {
#define IXGBE_KRM_TX_COEFF_CTRL_1_CZERO_EN (1 << 3)
#define IXGBE_KRM_TX_COEFF_CTRL_1_OVRRD_EN (1 << 31)
#define IXGBE_KX4_LINK_CNTL_1 0x4C
#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX (1 << 16)
#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX4 (1 << 17)
#define IXGBE_KX4_LINK_CNTL_1_TETH_EEE_CAP_KX (1 << 24)
#define IXGBE_KX4_LINK_CNTL_1_TETH_EEE_CAP_KX4 (1 << 25)
#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_ENABLE (1 << 29)
#define IXGBE_KX4_LINK_CNTL_1_TETH_FORCE_LINK_UP (1 << 30)
#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_RESTART (1 << 31)
#define IXGBE_SB_IOSF_INDIRECT_CTRL 0x00011144
#define IXGBE_SB_IOSF_INDIRECT_DATA 0x00011148
@ -3985,8 +4006,6 @@ struct ixgbe_hw {
#define IXGBE_SB_IOSF_CTRL_BUSY_SHIFT 31
#define IXGBE_SB_IOSF_CTRL_BUSY (1 << IXGBE_SB_IOSF_CTRL_BUSY_SHIFT)
#define IXGBE_SB_IOSF_TARGET_KR_PHY 0
#define IXGBE_SB_IOSF_TARGET_KX4_PHY 1
#define IXGBE_SB_IOSF_TARGET_KX4_PCS 2
#define IXGBE_NW_MNG_IF_SEL 0x00011178
#define IXGBE_NW_MNG_IF_SEL_INT_PHY_MODE (1 << 24)

View File

@ -225,8 +225,6 @@ s32 ixgbe_reset_hw_vf(struct ixgbe_hw *hw)
if (ret_val)
return ret_val;
msgbuf[0] &= ~IXGBE_VT_MSGTYPE_CTS;
if (msgbuf[0] != (IXGBE_VF_RESET | IXGBE_VT_MSGTYPE_ACK) &&
msgbuf[0] != (IXGBE_VF_RESET | IXGBE_VT_MSGTYPE_NACK))
return IXGBE_ERR_INVALID_MAC_ADDR;

View File

@ -138,8 +138,8 @@ s32 ixgbe_init_ops_X540(struct ixgbe_hw *hw)
* ARC supported; valid only if manageability features are
* enabled.
*/
mac->arc_subsystem_valid = !!(IXGBE_READ_REG(hw, IXGBE_FWSM_BY_MAC(hw))
& IXGBE_FWSM_MODE_MASK);
hw->mbx.ops.init_params = ixgbe_init_mbx_params_pf;
@ -356,7 +356,7 @@ s32 ixgbe_init_eeprom_params_X540(struct ixgbe_hw *hw)
eeprom->semaphore_delay = 10;
eeprom->type = ixgbe_flash;
eec = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
eeprom_size = (u16)((eec & IXGBE_EEC_SIZE) >>
IXGBE_EEC_SIZE_SHIFT);
eeprom->word_size = 1 << (eeprom_size +
@ -681,8 +681,8 @@ s32 ixgbe_update_flash_X540(struct ixgbe_hw *hw)
goto out;
}
flup = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw)) | IXGBE_EEC_FLUP;
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), flup);
status = ixgbe_poll_flash_update_done_X540(hw);
if (status == IXGBE_SUCCESS)
@ -691,11 +691,11 @@ s32 ixgbe_update_flash_X540(struct ixgbe_hw *hw)
DEBUGOUT("Flash update time out\n");
if (hw->mac.type == ixgbe_mac_X540 && hw->revision_id == 0) {
flup = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
if (flup & IXGBE_EEC_SEC1VAL) {
flup |= IXGBE_EEC_FLUP;
IXGBE_WRITE_REG(hw, IXGBE_EEC_BY_MAC(hw), flup);
}
status = ixgbe_poll_flash_update_done_X540(hw);
@ -724,7 +724,7 @@ static s32 ixgbe_poll_flash_update_done_X540(struct ixgbe_hw *hw)
DEBUGFUNC("ixgbe_poll_flash_update_done_X540");
for (i = 0; i < IXGBE_FLUDONE_ATTEMPTS; i++) {
reg = IXGBE_READ_REG(hw, IXGBE_EEC_BY_MAC(hw));
if (reg & IXGBE_EEC_FLUDONE) {
status = IXGBE_SUCCESS;
break;
@ -775,10 +775,11 @@ s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u32 mask)
if (ixgbe_get_swfw_sync_semaphore(hw))
return IXGBE_ERR_SWFW_SYNC;
swfw_sync = IXGBE_READ_REG(hw, IXGBE_SWFW_SYNC_BY_MAC(hw));
if (!(swfw_sync & (fwmask | swmask | hwmask))) {
swfw_sync |= swmask;
IXGBE_WRITE_REG(hw, IXGBE_SWFW_SYNC_BY_MAC(hw),
swfw_sync);
ixgbe_release_swfw_sync_semaphore(hw);
msec_delay(5);
return IXGBE_SUCCESS;
@ -805,10 +806,10 @@ s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u32 mask)
*/
if (ixgbe_get_swfw_sync_semaphore(hw))
return IXGBE_ERR_SWFW_SYNC;
swfw_sync = IXGBE_READ_REG(hw, IXGBE_SWFW_SYNC_BY_MAC(hw));
if (swfw_sync & (fwmask | hwmask)) {
swfw_sync |= swmask;
IXGBE_WRITE_REG(hw, IXGBE_SWFW_SYNC_BY_MAC(hw), swfw_sync);
ixgbe_release_swfw_sync_semaphore(hw);
msec_delay(5);
return IXGBE_SUCCESS;
@ -852,9 +853,9 @@ void ixgbe_release_swfw_sync_X540(struct ixgbe_hw *hw, u32 mask)
swmask |= mask & IXGBE_GSSR_I2C_MASK;
ixgbe_get_swfw_sync_semaphore(hw);
swfw_sync = IXGBE_READ_REG(hw, IXGBE_SWFW_SYNC_BY_MAC(hw));
swfw_sync &= ~swmask;
IXGBE_WRITE_REG(hw, IXGBE_SWFW_SYNC_BY_MAC(hw), swfw_sync);
ixgbe_release_swfw_sync_semaphore(hw);
msec_delay(5);
@ -881,7 +882,7 @@ static s32 ixgbe_get_swfw_sync_semaphore(struct ixgbe_hw *hw)
* If the SMBI bit is 0 when we read it, then the bit will be
* set and we have the semaphore
*/
swsm = IXGBE_READ_REG(hw, IXGBE_SWSM_BY_MAC(hw));
if (!(swsm & IXGBE_SWSM_SMBI)) {
status = IXGBE_SUCCESS;
break;
@ -892,7 +893,7 @@ static s32 ixgbe_get_swfw_sync_semaphore(struct ixgbe_hw *hw)
/* Now get the semaphore between SW/FW through the REGSMP bit */
if (status == IXGBE_SUCCESS) {
for (i = 0; i < timeout; i++) {
swsm = IXGBE_READ_REG(hw, IXGBE_SWFW_SYNC_BY_MAC(hw));
if (!(swsm & IXGBE_SWFW_REGSMP))
break;
@ -932,13 +933,13 @@ static void ixgbe_release_swfw_sync_semaphore(struct ixgbe_hw *hw)
/* Release both semaphores by writing 0 to the bits REGSMP and SMBI */
swsm = IXGBE_READ_REG(hw, IXGBE_SWFW_SYNC_BY_MAC(hw));
swsm &= ~IXGBE_SWFW_REGSMP;
IXGBE_WRITE_REG(hw, IXGBE_SWFW_SYNC_BY_MAC(hw), swsm);
swsm = IXGBE_READ_REG(hw, IXGBE_SWSM_BY_MAC(hw));
swsm &= ~IXGBE_SWSM_SMBI;
IXGBE_WRITE_REG(hw, IXGBE_SWSM_BY_MAC(hw), swsm);
IXGBE_WRITE_FLUSH(hw);
}
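For reference, the *_BY_MAC(hw) forms used throughout this file resolve a
register name to the per-MAC offset at runtime; conceptually (hedged, the
exact macros live in the shared type header):

	#define IXGBE_BY_MAC(_hw, _reg)	((_hw)->mvals[IXGBE_CAT(_reg, _IDX)])
	#define IXGBE_SWSM_BY_MAC(_hw)	IXGBE_BY_MAC((_hw), SWSM)

so the X540 semaphore code above can also serve the X550 family without
duplicating each register access.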

View File

@ -114,119 +114,6 @@ static s32 ixgbe_write_cs4227(struct ixgbe_hw *hw, u16 reg, u16 value)
return ixgbe_write_i2c_combined_unlocked(hw, IXGBE_CS4227, reg, value);
}
/**
* ixgbe_get_cs4227_status - Return CS4227 status
* @hw: pointer to hardware structure
*
* Returns error if CS4227 not successfully initialized
**/
static s32 ixgbe_get_cs4227_status(struct ixgbe_hw *hw)
{
s32 status;
u16 value = 0;
u16 reg_slice, reg_val;
u8 retry;
for (retry = 0; retry < IXGBE_CS4227_RETRIES; ++retry) {
status = ixgbe_read_cs4227(hw, IXGBE_CS4227_GLOBAL_ID_LSB,
&value);
if (status != IXGBE_SUCCESS)
return status;
if (value == IXGBE_CS4227_GLOBAL_ID_VALUE)
break;
msec_delay(IXGBE_CS4227_CHECK_DELAY);
}
if (value != IXGBE_CS4227_GLOBAL_ID_VALUE)
return IXGBE_ERR_PHY;
status = ixgbe_read_cs4227(hw, IXGBE_CS4227_SCRATCH, &value);
if (status != IXGBE_SUCCESS)
return status;
/* If this is the first time after power-on, check the ucode.
* Otherwise, this will disrupt link on all ports. Because we
* can only do this the first time, we must check all ports,
* not just our own.
*/
if (value != IXGBE_CS4227_SCRATCH_VALUE) {
reg_slice = IXGBE_CS4227_LINE_SPARE24_LSB;
reg_val = (IXGBE_CS4227_EDC_MODE_CX1 << 1) | 0x1;
status = ixgbe_write_cs4227(hw, reg_slice,
reg_val);
if (status != IXGBE_SUCCESS)
return status;
reg_slice = IXGBE_CS4227_HOST_SPARE24_LSB;
reg_val = (IXGBE_CS4227_EDC_MODE_CX1 << 1) | 0x1;
status = ixgbe_write_cs4227(hw, reg_slice,
reg_val);
if (status != IXGBE_SUCCESS)
return status;
reg_slice = IXGBE_CS4227_LINE_SPARE24_LSB + (1 << 12);
reg_val = (IXGBE_CS4227_EDC_MODE_SR << 1) | 0x1;
status = ixgbe_write_cs4227(hw, reg_slice,
reg_val);
if (status != IXGBE_SUCCESS)
return status;
reg_slice = IXGBE_CS4227_HOST_SPARE24_LSB + (1 << 12);
reg_val = (IXGBE_CS4227_EDC_MODE_SR << 1) | 0x1;
status = ixgbe_write_cs4227(hw, reg_slice,
reg_val);
if (status != IXGBE_SUCCESS)
return status;
msec_delay(10);
}
/* Verify that the ucode is operational on all ports. */
reg_slice = IXGBE_CS4227_LINE_SPARE24_LSB;
reg_val = 0xFFFF;
status = ixgbe_read_cs4227(hw, reg_slice, &reg_val);
if (status != IXGBE_SUCCESS)
return status;
if (reg_val != 0)
return IXGBE_ERR_PHY;
reg_slice = IXGBE_CS4227_HOST_SPARE24_LSB;
reg_val = 0xFFFF;
status = ixgbe_read_cs4227(hw, reg_slice, &reg_val);
if (status != IXGBE_SUCCESS)
return status;
if (reg_val != 0)
return IXGBE_ERR_PHY;
reg_slice = IXGBE_CS4227_LINE_SPARE24_LSB + (1 << 12);
reg_val = 0xFFFF;
status = ixgbe_read_cs4227(hw, reg_slice, &reg_val);
if (status != IXGBE_SUCCESS)
return status;
if (reg_val != 0)
return IXGBE_ERR_PHY;
reg_slice = IXGBE_CS4227_HOST_SPARE24_LSB + (1 << 12);
reg_val = 0xFFFF;
status = ixgbe_read_cs4227(hw, reg_slice, &reg_val);
if (status != IXGBE_SUCCESS)
return status;
if (reg_val != 0)
return IXGBE_ERR_PHY;
/* Set scratch for next time. */
status = ixgbe_write_cs4227(hw, IXGBE_CS4227_SCRATCH,
IXGBE_CS4227_SCRATCH_VALUE);
if (status != IXGBE_SUCCESS)
return status;
status = ixgbe_read_cs4227(hw, IXGBE_CS4227_SCRATCH, &value);
if (status != IXGBE_SUCCESS)
return status;
if (value != IXGBE_CS4227_SCRATCH_VALUE)
return IXGBE_ERR_PHY;
return IXGBE_SUCCESS;
}
/**
* ixgbe_read_pe - Read register from port expander
* @hw: pointer to hardware structure
@ -269,13 +156,17 @@ static s32 ixgbe_write_pe(struct ixgbe_hw *hw, u8 reg, u8 value)
* ixgbe_reset_cs4227 - Reset CS4227 using port expander
* @hw: pointer to hardware structure
*
* This function assumes that the caller has acquired the proper semaphore.
* Returns error code
**/
static s32 ixgbe_reset_cs4227(struct ixgbe_hw *hw)
{
s32 status;
u32 retry;
u16 value;
u8 reg;
/* Trigger hard reset. */
status = ixgbe_read_pe(hw, IXGBE_PE_OUTPUT, &reg);
if (status != IXGBE_SUCCESS)
return status;
@ -310,7 +201,29 @@ static s32 ixgbe_reset_cs4227(struct ixgbe_hw *hw)
if (status != IXGBE_SUCCESS)
return status;
/* Wait for the reset to complete. */
msec_delay(IXGBE_CS4227_RESET_DELAY);
for (retry = 0; retry < IXGBE_CS4227_RETRIES; retry++) {
status = ixgbe_read_cs4227(hw, IXGBE_CS4227_EFUSE_STATUS,
&value);
if (status == IXGBE_SUCCESS &&
value == IXGBE_CS4227_EEPROM_LOAD_OK)
break;
msec_delay(IXGBE_CS4227_CHECK_DELAY);
}
if (retry == IXGBE_CS4227_RETRIES) {
ERROR_REPORT1(IXGBE_ERROR_INVALID_STATE,
"CS4227 reset did not complete.");
return IXGBE_ERR_PHY;
}
status = ixgbe_read_cs4227(hw, IXGBE_CS4227_EEPROM_STATUS, &value);
if (status != IXGBE_SUCCESS ||
!(value & IXGBE_CS4227_EEPROM_LOAD_OK)) {
ERROR_REPORT1(IXGBE_ERROR_INVALID_STATE,
"CS4227 EEPROM did not load successfully.");
return IXGBE_ERR_PHY;
}
return IXGBE_SUCCESS;
}
@ -321,29 +234,75 @@ static s32 ixgbe_reset_cs4227(struct ixgbe_hw *hw)
**/
static void ixgbe_check_cs4227(struct ixgbe_hw *hw)
{
s32 status = IXGBE_SUCCESS;
u32 swfw_mask = hw->phy.phy_semaphore_mask;
u16 value = 0;
u8 retry;
for (retry = 0; retry < IXGBE_CS4227_RETRIES; retry++) {
status = hw->mac.ops.acquire_swfw_sync(hw, swfw_mask);
if (status != IXGBE_SUCCESS) {
ERROR_REPORT2(IXGBE_ERROR_CAUTION,
"semaphore failed with %d", status);
msec_delay(IXGBE_CS4227_CHECK_DELAY);
continue;
}
/* Get status of reset flow. */
status = ixgbe_read_cs4227(hw, IXGBE_CS4227_SCRATCH, &value);
if (status == IXGBE_SUCCESS &&
value == IXGBE_CS4227_RESET_COMPLETE)
goto out;
if (status != IXGBE_SUCCESS ||
value != IXGBE_CS4227_RESET_PENDING)
break;
/* Reset is pending. Wait and check again. */
hw->mac.ops.release_swfw_sync(hw, swfw_mask);
msec_delay(IXGBE_CS4227_CHECK_DELAY);
}
/* If still pending, assume other instance failed. */
if (retry == IXGBE_CS4227_RETRIES) {
status = hw->mac.ops.acquire_swfw_sync(hw, swfw_mask);
if (status != IXGBE_SUCCESS) {
ERROR_REPORT2(IXGBE_ERROR_CAUTION,
"semaphore failed with %d", status);
return;
}
}
/* Reset the CS4227. */
status = ixgbe_reset_cs4227(hw);
if (status != IXGBE_SUCCESS) {
ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE,
"CS4227 reset failed: %d", status);
goto out;
}
/* Reset takes so long, temporarily release semaphore in case the
* other driver instance is waiting for the reset indication.
*/
ixgbe_write_cs4227(hw, IXGBE_CS4227_SCRATCH,
IXGBE_CS4227_RESET_PENDING);
hw->mac.ops.release_swfw_sync(hw, swfw_mask);
msec_delay(10);
status = hw->mac.ops.acquire_swfw_sync(hw, swfw_mask);
if (status != IXGBE_SUCCESS) {
ERROR_REPORT2(IXGBE_ERROR_CAUTION,
"semaphore failed with %d", status);
return;
}
/* Record completion for next time. */
status = ixgbe_write_cs4227(hw, IXGBE_CS4227_SCRATCH,
IXGBE_CS4227_RESET_COMPLETE);
out:
hw->mac.ops.release_swfw_sync(hw, swfw_mask);
msec_delay(hw->eeprom.semaphore_delay);
}
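/* To summarize the handshake above: the CS4227 scratch register doubles as
 * a cross-port mailbox. Whichever driver instance gets there first writes
 * RESET_PENDING, performs the (slow) hard reset, then writes
 * RESET_COMPLETE; the other instance simply polls the scratch value and
 * skips its own reset once it sees RESET_COMPLETE.
 */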
/**
@ -452,8 +411,11 @@ s32 ixgbe_init_ops_X550EM(struct ixgbe_hw *hw)
hw->bus.type = ixgbe_bus_type_internal;
mac->ops.get_bus_info = ixgbe_get_bus_info_X550em;
if (hw->mac.type == ixgbe_mac_X550EM_x) {
mac->ops.read_iosf_sb_reg = ixgbe_read_iosf_sb_reg_x550;
mac->ops.write_iosf_sb_reg = ixgbe_write_iosf_sb_reg_x550;
}
mac->ops.get_media_type = ixgbe_get_media_type_X550em;
mac->ops.setup_sfp = ixgbe_setup_sfp_modules_X550em;
mac->ops.get_link_capabilities = ixgbe_get_link_capabilities_X550em;
@ -679,7 +641,7 @@ s32 ixgbe_setup_eee_X550(struct ixgbe_hw *hw, bool enable_eee)
if (enable_eee) {
eeer |= (IXGBE_EEER_TX_LPI_EN | IXGBE_EEER_RX_LPI_EN);
if (hw->mac.type == ixgbe_mac_X550) {
/* Advertise EEE capability */
hw->phy.ops.read_reg(hw, IXGBE_MDIO_AUTO_NEG_EEE_ADVT,
IXGBE_MDIO_AUTO_NEG_DEV_TYPE, &autoneg_eee_reg);
@ -717,7 +679,7 @@ s32 ixgbe_setup_eee_X550(struct ixgbe_hw *hw, bool enable_eee)
} else {
eeer &= ~(IXGBE_EEER_TX_LPI_EN | IXGBE_EEER_RX_LPI_EN);
if (hw->mac.type == ixgbe_mac_X550) {
/* Disable advertised EEE capability */
hw->phy.ops.read_reg(hw, IXGBE_MDIO_AUTO_NEG_EEE_ADVT,
IXGBE_MDIO_AUTO_NEG_DEV_TYPE, &autoneg_eee_reg);
@ -816,7 +778,7 @@ void ixgbe_set_ethertype_anti_spoofing_X550(struct ixgbe_hw *hw,
**/
static s32 ixgbe_iosf_wait(struct ixgbe_hw *hw, u32 *ctrl)
{
u32 i, command = 0;
/* Check every 10 usec to see if the address cycle completed.
* The SB IOSF BUSY bit will clear when the operation is
@ -1444,8 +1406,6 @@ static s32 ixgbe_setup_kr_speed_x550em(struct ixgbe_hw *hw,
return status;
reg_val |= IXGBE_KRM_LINK_CTRL_1_TETH_AN_ENABLE;
reg_val &= ~(IXGBE_KRM_LINK_CTRL_1_TETH_AN_FEC_REQ |
IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_FEC);
reg_val &= ~(IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KR |
IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KX);
@ -1492,14 +1452,10 @@ s32 ixgbe_init_phy_ops_X550em(struct ixgbe_hw *hw)
* to determine internal PHY mode.
*/
phy->nw_mng_if_sel = IXGBE_READ_REG(hw, IXGBE_NW_MNG_IF_SEL);
/* If internal PHY mode is KR, then initialize KR link */
if (phy->nw_mng_if_sel & IXGBE_NW_MNG_IF_SEL_INT_PHY_MODE) {
speed = IXGBE_LINK_SPEED_10GB_FULL |
IXGBE_LINK_SPEED_1GB_FULL;
ret_val = ixgbe_setup_kr_speed_x550em(hw, speed);
}
phy->ops.identify_sfp = ixgbe_identify_sfp_module_X550em;
}
@ -1516,7 +1472,7 @@ s32 ixgbe_init_phy_ops_X550em(struct ixgbe_hw *hw)
/* Set functions pointers based on phy type */
switch (hw->phy.type) {
case ixgbe_phy_x550em_kx4:
phy->ops.setup_link = NULL;
phy->ops.read_reg = ixgbe_read_phy_reg_x550em;
phy->ops.write_reg = ixgbe_write_phy_reg_x550em;
break;
@ -1543,7 +1499,11 @@ s32 ixgbe_init_phy_ops_X550em(struct ixgbe_hw *hw)
ret_val = ixgbe_setup_kr_speed_x550em(hw, speed);
}
/* setup SW LPLU only for first revision */
if (!(IXGBE_FUSES0_REV1 & IXGBE_READ_REG(hw,
IXGBE_FUSES0_GROUP(0))))
phy->ops.enter_lplu = ixgbe_enter_lplu_t_x550em;
phy->ops.handle_lasi = ixgbe_handle_lasi_ext_t_x550em;
phy->ops.reset = ixgbe_reset_phy_t_X550em;
break;
@ -1580,9 +1540,14 @@ s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
/* flush pending Tx transactions */
ixgbe_clear_tx_pending(hw);
/* PHY ops must be identified and initialized prior to reset */
if (hw->device_id == IXGBE_DEV_ID_X550EM_X_10G_T) {
/* Config MDIO clock speed before the first MDIO PHY access */
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
hlreg0 &= ~IXGBE_HLREG0_MDCSPD;
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
}
/* Identify PHY and related function pointers */
status = hw->phy.ops.init(hw);
if (status == IXGBE_ERR_SFP_NOT_SUPPORTED)
@ -1659,13 +1624,6 @@ s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
hw->mac.num_rar_entries = 128;
hw->mac.ops.init_rx_addrs(hw);
if (hw->device_id == IXGBE_DEV_ID_X550EM_X_SFP)
ixgbe_setup_mux_ctl(hw);
@ -1726,43 +1684,6 @@ s32 ixgbe_setup_kr_x550em(struct ixgbe_hw *hw)
return ixgbe_setup_kr_speed_x550em(hw, hw->phy.autoneg_advertised);
}
/**
* ixgbe_setup_kx4_x550em - Configure the KX4 PHY.
* @hw: pointer to hardware structure
*
* Configures the integrated KX4 PHY.
**/
s32 ixgbe_setup_kx4_x550em(struct ixgbe_hw *hw)
{
s32 status;
u32 reg_val;
status = ixgbe_read_iosf_sb_reg_x550(hw, IXGBE_KX4_LINK_CNTL_1,
IXGBE_SB_IOSF_TARGET_KX4_PCS, &reg_val);
if (status)
return status;
reg_val &= ~(IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX4 |
IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX);
reg_val |= IXGBE_KX4_LINK_CNTL_1_TETH_AN_ENABLE;
/* Advertise 10G support. */
if (hw->phy.autoneg_advertised & IXGBE_LINK_SPEED_10GB_FULL)
reg_val |= IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX4;
/* Advertise 1G support. */
if (hw->phy.autoneg_advertised & IXGBE_LINK_SPEED_1GB_FULL)
reg_val |= IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX;
/* Restart auto-negotiation. */
reg_val |= IXGBE_KX4_LINK_CNTL_1_TETH_AN_RESTART;
status = ixgbe_write_iosf_sb_reg_x550(hw, IXGBE_KX4_LINK_CNTL_1,
IXGBE_SB_IOSF_TARGET_KX4_PCS, reg_val);
return status;
}
/**
* ixgbe_setup_mac_link_sfp_x550em - Setup internal/external the PHY for SFP
* @hw: pointer to hardware structure
@ -1791,38 +1712,53 @@ s32 ixgbe_setup_mac_link_sfp_x550em(struct ixgbe_hw *hw,
if (ret_val != IXGBE_SUCCESS)
return ret_val;
if (!(hw->phy.nw_mng_if_sel & IXGBE_NW_MNG_IF_SEL_INT_PHY_MODE)) {
/* Configure CS4227 LINE side to 10G SR. */
reg_slice = IXGBE_CS4227_LINE_SPARE22_MSB +
(hw->bus.lan_id << 12);
reg_val = IXGBE_CS4227_SPEED_10G;
ret_val = ixgbe_write_i2c_combined(hw, IXGBE_CS4227, reg_slice,
reg_val);
reg_slice = IXGBE_CS4227_LINE_SPARE24_LSB +
(hw->bus.lan_id << 12);
reg_val = (IXGBE_CS4227_EDC_MODE_SR << 1) | 0x1;
ret_val = ixgbe_write_i2c_combined(hw, IXGBE_CS4227, reg_slice,
reg_val);
/* Configure CS4227 for HOST connection rate then type. */
reg_slice = IXGBE_CS4227_HOST_SPARE22_MSB +
(hw->bus.lan_id << 12);
reg_val = (speed & IXGBE_LINK_SPEED_10GB_FULL) ?
IXGBE_CS4227_SPEED_10G : IXGBE_CS4227_SPEED_1G;
ret_val = ixgbe_write_i2c_combined(hw, IXGBE_CS4227, reg_slice,
reg_val);
reg_slice = IXGBE_CS4227_HOST_SPARE24_LSB +
(hw->bus.lan_id << 12);
if (setup_linear)
reg_val = (IXGBE_CS4227_EDC_MODE_CX1 << 1) | 0x1;
else
reg_val = (IXGBE_CS4227_EDC_MODE_SR << 1) | 0x1;
ret_val = ixgbe_write_i2c_combined(hw, IXGBE_CS4227, reg_slice,
reg_val);
/* Setup XFI internal link. */
ret_val = ixgbe_setup_ixfi_x550em(hw, &speed);
} else {
/* Configure internal PHY for KR/KX. */
ixgbe_setup_kr_speed_x550em(hw, speed);
/* Configure CS4227 LINE side to proper mode. */
reg_slice = IXGBE_CS4227_LINE_SPARE24_LSB +
(hw->bus.lan_id << 12);
if (setup_linear)
reg_val = (IXGBE_CS4227_EDC_MODE_CX1 << 1) | 0x1;
else
reg_val = (IXGBE_CS4227_EDC_MODE_SR << 1) | 0x1;
ret_val = ixgbe_write_i2c_combined(hw, IXGBE_CS4227, reg_slice,
reg_val);
}
return ret_val;
}
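/* Note on the addressing above: the CS4227 exposes one register bank per
 * lan port, so adding (hw->bus.lan_id << 12) to each SPARE register offset
 * selects this port's copy of the LINE/HOST side registers.
 */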
@ -2743,6 +2679,10 @@ s32 ixgbe_enter_lplu_t_x550em(struct ixgbe_hw *hw)
u32 save_autoneg;
bool link_up;
/* SW LPLU not required on later HW revisions. */
if (IXGBE_FUSES0_REV1 & IXGBE_READ_REG(hw, IXGBE_FUSES0_GROUP(0)))
return IXGBE_SUCCESS;
/* If blocked by MNG FW, then don't restart AN */
if (ixgbe_check_reset_blocked(hw))
return IXGBE_SUCCESS;
@ -2924,7 +2864,7 @@ s32 ixgbe_setup_fc_X550em(struct ixgbe_hw *hw)
goto out;
}
if (hw->device_id == IXGBE_DEV_ID_X550EM_X_KR) {
ret_val = ixgbe_read_iosf_sb_reg_x550(hw,
IXGBE_KRM_AN_CNTL_1(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, &reg_val);
@ -2940,9 +2880,8 @@ s32 ixgbe_setup_fc_X550em(struct ixgbe_hw *hw)
IXGBE_KRM_AN_CNTL_1(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, reg_val);
/* This device does not fully support AN. */
hw->fc.disable_fc_autoneg = TRUE;
}
out:

View File

@ -69,7 +69,7 @@ void ixgbe_set_ethertype_anti_spoofing_X550(struct ixgbe_hw *hw,
s32 ixgbe_write_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr,
u32 device_type, u32 data);
s32 ixgbe_read_iosf_sb_reg_x550(struct ixgbe_hw *hw, u32 reg_addr,
u32 device_type, u32 *data);
void ixgbe_disable_mdd_X550(struct ixgbe_hw *hw);
void ixgbe_enable_mdd_X550(struct ixgbe_hw *hw);
void ixgbe_mdd_event_X550(struct ixgbe_hw *hw, u32 *vf_bitmap);
@ -82,7 +82,6 @@ void ixgbe_init_mac_link_ops_X550em(struct ixgbe_hw *hw);
s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw);
s32 ixgbe_init_phy_ops_X550em(struct ixgbe_hw *hw);
s32 ixgbe_setup_kr_x550em(struct ixgbe_hw *hw);
s32 ixgbe_setup_kx4_x550em(struct ixgbe_hw *hw);
s32 ixgbe_init_ext_t_x550em(struct ixgbe_hw *hw);
s32 ixgbe_setup_internal_phy_t_x550em(struct ixgbe_hw *hw);
s32 ixgbe_setup_phy_loopback_x550em(struct ixgbe_hw *hw);

View File

@ -5,7 +5,7 @@
KMOD = if_ix
SRCS = device_if.h bus_if.h pci_if.h
SRCS += opt_inet.h opt_inet6.h opt_rss.h
SRCS += if_ix.c ix_txrx.c ixgbe_osdep.c
# Shared source
SRCS += ixgbe_common.c ixgbe_api.c ixgbe_phy.c ixgbe_mbx.c ixgbe_vf.c
SRCS += ixgbe_dcb.c ixgbe_dcb_82598.c ixgbe_dcb_82599.c

View File

@ -5,7 +5,7 @@
KMOD = if_ixv
SRCS = device_if.h bus_if.h pci_if.h
SRCS += opt_inet.h opt_inet6.h opt_rss.h
SRCS += if_ixv.c ix_txrx.c ixgbe_osdep.c
# Shared source
SRCS += ixgbe_common.c ixgbe_api.c ixgbe_phy.c ixgbe_mbx.c ixgbe_vf.c
SRCS += ixgbe_dcb.c ixgbe_dcb_82598.c ixgbe_dcb_82599.c
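# Hypothetical rebuild/reload once ixgbe_osdep.c is in SRCS (paths assume
# a stock /usr/src tree; adjust for your checkout):
#   make -C /usr/src/sys/modules/ix   && kldload if_ix.ko
#   make -C /usr/src/sys/modules/ixv  && kldload if_ixv.ko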