Add functions to support ethtool ops:
- get_reg_length
- get_regs
- get_eeprom_length
- get_eeprom
- set_eeprom
Signed-off-by: Liang-Min Larry Wang <liang-min.wang@intel.com>
Acked-by: Andrew Harvey <agh@cisco.com>
Acked-by: David Harton <dharton@cisco.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
- set_mac_addr
Signed-off-by: Liang-Min Larry Wang <liang-min.wang@intel.com>
Acked-by: Andrew Harvey <agh@cisco.com>
Acked-by: David Harton <dharton@cisco.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
- set_mac_addr
Signed-off-by: Liang-Min Larry Wang <liang-min.wang@intel.com>
Acked-by: Andrew Harvey <agh@cisco.com>
Acked-by: David Harton <dharton@cisco.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Move malloc inside eal and create a new section in MAINTAINERS file for
Memory Allocation in EAL.
Create a dummy malloc library to avoid breaking applications that have
librte_malloc in their DT_NEEDED entries.
This is the first step towards using malloc to allocate memory directly
from memsegs. Memzones would then allocate memory through malloc,
making it possible to free memzones.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
The workaround for Tx tunnel offloading can now be replaced with packet
type flag checking.
The ol_flags for IPv4/IPv6 and tunnel Rx offloading are replaced with
packet type flags.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
by RTE_NEXT_ABI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
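For illustration only (not part of the patch), an application built with
RTE_NEXT_ABI could branch on the unified packet type roughly as sketched
below, while the legacy path keeps the old ol_flags bits; the helper name
is made up, and the _EXT ptype variants are ignored for brevity:

    #include <rte_mbuf.h>

    static inline int
    pkt_is_ipv4(const struct rte_mbuf *m)
    {
    #ifdef RTE_NEXT_ABI
            /* unified packet type: compare the L3 sub-field */
            return (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4;
    #else
            /* legacy path: packet type information lived in ol_flags */
            return (m->ol_flags & PKT_RX_IPV4_HDR) != 0;
    #endif
    }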
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
by RTE_NEXT_ABI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
by RTE_NEXT_ABI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
by RTE_NEXT_ABI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
by RTE_NEXT_ABI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
by RTE_NEXT_ABI.
Note that a performance drop of around 2.5% (64B packets) was observed when
doing IO forwarding over 4 ports (1 port per 82599 card) on the same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
To unify packet types among all PMDs, bit masks of packet type for
'ol_flags' are replaced by unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
by RTE_NEXT_ABI.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
In order to unify the packet type, the 'packet_type' field in
'struct rte_mbuf' needs to be extended from 16 to 32 bits.
Accordingly, some fields in 'struct rte_mbuf' are re-organized to support
this change for the vector PMD.
As 'struct rte_kni_mbuf' for KNI must map exactly onto
'struct rte_mbuf', it is modified accordingly.
In the ixgbe PMD, corresponding changes are added for the mbuf changes;
in particular, the bit masks of packet type in 'ol_flags' are replaced by
the unified packet type. In addition, more packet types (UDP, TCP and SCTP)
are supported in the vectorized ixgbe PMD.
To avoid breaking ABI compatibility, all the changes are enabled by
RTE_NEXT_ABI.
Note that a performance drop of around 2% (64B packets) was observed when
doing IO forwarding over 4 ports (1 port per 82599 card) on the same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Added RX and TX byte counter support to the PCAP statistics.
Added TX counter support for pcap dumper and interface functions.
Renamed RX and TX packet counters for consistency.
Signed-off-by: Klaus Degner <kd@allegro-packets.com>
Tested-by: John McNamara <john.mcnamara@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
The cuckoo hash has a fixed number of entries per bucket, so the
configuration parameter for this is unused. We change this field in the
parameters struct to "reserved" to indicate that there is now no such
parameter value, while at the same time keeping ABI consistency.
Fixes: 48a3991196 ("hash: replace with cuckoo hash implementation")
Suggested-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
This commit adds a poll mode driver for the mPIPE hardware present on
TILE-Gx SoCs.
Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
Extend eth_pcap rx and tx to support jumbo frames.
On the receive side read large packets into multiple mbufs and
on the transmit side convert them back to a single pcap buffer.
Signed-off-by: Tero Aho <tero.aho@coriant.com>
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
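A hedged sketch of the transmit-side idea (not the actual eth_pcap code):
copy a possibly multi-segment mbuf chain into one contiguous scratch buffer
before handing it to libpcap. The helper name and buffer are made up for
illustration:

    #include <stdint.h>
    #include <sys/types.h>
    #include <rte_mbuf.h>
    #include <rte_memcpy.h>

    /* Flatten an mbuf chain into 'buf'; returns the total length,
     * or -1 if the chain does not fit. */
    static ssize_t
    flatten_mbuf(const struct rte_mbuf *m, uint8_t *buf, size_t buf_len)
    {
            size_t off = 0;

            for (; m != NULL; m = m->next) {
                    if (off + rte_pktmbuf_data_len(m) > buf_len)
                            return -1;
                    rte_memcpy(buf + off, rte_pktmbuf_mtod(m, void *),
                               rte_pktmbuf_data_len(m));
                    off += rte_pktmbuf_data_len(m);
            }
            return off;
    }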
fm10k has 128 RETA entries in 32 registers, but only the first 32 were
initialized when configuring multiple RX queues. This fix initializes
all 128 entries instead.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
The default MAC address is read from hardware and copied to the
device Ethernet link address array in the device initialization phase,
which bypasses the fm10k MAC address number check mechanism
and causes an error message when adding the default VLAN:
"MAC address number not match"
Fix this by moving default MAC address registration to the device
initialization phase.
Fixes: f5c1a236a2 ("fm10k: fix default mac/vlan in switch")
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
If the descriptor the device driver is handling is a context descriptor,
its type value is 0x1.
When the logical-not operator (!) is used for the conditional check, a zero
expression value makes the driver consider the transaction for this
descriptor complete, even though its DD field is still 0x1, which means the
NIC has not finished the operation on this descriptor.
Check the DD status against 0xF instead to avoid this issue.
Fixes: 4861cde461 ("i40e: new poll mode driver")
Fixes: 05999aab4c ("i40e: add or delete flow director")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
There is a parameter "autoneg on|off" in the testpmd CLI command
"set flow_ctrl ...". This parameter is meant to enable/disable
auto-negotiation for flow control, but it is not supported yet.
Auto-negotiation is enabled by default and there is no way to disable it.
This patch makes the "autoneg on|off" parameter work.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
When initializing the hardware, the stats should be reset.
Otherwise, when a port is detached and then attached again,
the stats are not re-initialized to zero.
Signed-off-by: Michael Qiu <michael.qiu@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Update pci id table to include more supported Chelsio T5 devices.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
The CXGBE PMD RX path allocates a new mbuf every time, which can hurt
performance. Instead, allocate mbufs in bulk and re-use them.
Also, simplify the overall RX handler and update its logic to improve RX performance.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
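A minimal sketch of the bulk-allocation idea (illustrative names, not the
CXGBE code; refcount handling of the raw objects is omitted):

    #include <rte_mempool.h>
    #include <rte_mbuf.h>

    #define RX_REFILL_BURST 32

    /* Pull a burst of mbufs from the pool in one call instead of one
     * rte_pktmbuf_alloc() per received packet. */
    static int
    refill_free_list(struct rte_mempool *mp,
                     struct rte_mbuf *bufs[RX_REFILL_BURST])
    {
            unsigned int i;

            if (rte_mempool_get_bulk(mp, (void **)bufs, RX_REFILL_BURST) != 0)
                    return -1; /* pool cannot supply a full burst right now */
            for (i = 0; i < RX_REFILL_BURST; i++)
                    rte_pktmbuf_reset(bufs[i]); /* raw objects need a reset */
            return 0;
    }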
Add ixgbe support for new ethdev APIs to enable and read IEEE1588/
802.1AS PTP timestamps.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Add ixgbe support for new ethdev APIs to enable and read IEEE1588
PTP timestamps.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Add e1000/igb support for new ethdev APIs to enable and read
IEEE1588 PTP timestamps.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
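For context, a sketch of how an application drives the ethdev timesync API
that these PMD callbacks implement (rte_eth_timesync_enable() and
rte_eth_timesync_read_rx_timestamp() as assumed here; error handling trimmed):

    #include <stdio.h>
    #include <time.h>
    #include <rte_ethdev.h>

    static void
    show_rx_timestamp(uint8_t port_id)
    {
            struct timespec ts;

            rte_eth_timesync_enable(port_id);
            /* ... receive an IEEE1588/PTP frame on queue 0 ... */
            if (rte_eth_timesync_read_rx_timestamp(port_id, &ts, 0) == 0)
                    printf("RX timestamp: %ld.%09ld s\n",
                           (long)ts.tv_sec, (long)ts.tv_nsec);
    }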
There is no reason to inline large functions; the compiler already decides
based on the optimization level.
Also, the register array should be const.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
By defining the macro as a stub it is possible to get rid of #ifdefs
in the actual code. Always evaluate the argument (even in the stub)
so that there are no extra unused-variable errors.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
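The pattern looks roughly like this (the macro and the build-time switch
below are placeholders, not the driver's actual names); the stub still
evaluates its argument, so the variables it references never become unused:

    #include <rte_debug.h>

    #ifdef FOO_DEBUG /* placeholder for the real config option */
    #define FOO_ASSERT(x) do { \
            if (!(x)) \
                    rte_panic("assertion \"%s\" failed\n", #x); \
    } while (0)
    #else
    /* Stub: still evaluates 'x', so no unused-variable warnings appear. */
    #define FOO_ASSERT(x) do { (void)(x); } while (0)
    #endif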
Refactor the logic to compute receive offload flags to a simpler
function. And add support for putting RSS flow hash into packet.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Bill Hong <bhong@brocade.com>
Acked-by: Yong Wang <yongwang@vmware.com>
The Intel version of the VMXNET3 driver does not handle link state properly.
The VMXNET3 API returns 1 if connected and 0 if disconnected.
The driver also needs to return the correct value to indicate a state change.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
Change the sending loop to support multi-segment mbufs.
The VMXNET3 API has start-of-packet and end-of-packet flags, so it
is not hard to send multi-segment mbufs.
Also, update the descriptor as a 32-bit value rather than toggling
bitfields, which is slower and error prone.
Based on code in the earlier driver and on the Linux kernel driver.
Add a compiler barrier to make sure that updates of earlier descriptors
are completed prior to updating the generation bit on the start-of-packet descriptor.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
There are several stats here which are never set and have no way
to be displayed. In the future, xstats could be used for them.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
Remove the check for packets greater than the MTU. No other driver does
this; it should be handled at a higher layer.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
Support the VLAN filter functionality of the VMXNET3 interface.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
To support querying the hash key size per port, a new field
'hash_key_size' was added to 'struct rte_eth_dev_info' for storing
the hash key size in bytes.
The correct hash key size in bytes should be filled into
'struct rte_eth_dev_info' to support querying it.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
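A minimal usage sketch, assuming the standard rte_eth_dev_info_get() call:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_hash_key_size(uint8_t port_id)
    {
            struct rte_eth_dev_info dev_info;

            rte_eth_dev_info_get(port_id, &dev_info);
            /* 0 means the PMD did not report a key size. */
            printf("port %u RSS hash key size: %u bytes\n",
                   port_id, dev_info.hash_key_size);
    }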
This patch extends flow director to support l2_payload flow
type in i40e driver.
Test report: http://dpdk.org/ml/archives/dev/2015-June/020238.html
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch renames the mirror type in rte_eth_mirror_conf and its macros,
and reworks the mirror rule setting in the ixgbe driver to use the new definitions.
It also fixes some coding style.
Test report: http://dpdk.org/ml/archives/dev/2015-June/019118.html
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Rename rte_eth_vmdq_mirror_conf to rte_eth_mirror_conf and move
the maximum rule id check from ethdev level to driver.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Add checksum offload capability flags which have already been
supported for a long time.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
It configures specific registers to enable double VLAN stripping
on the RX side and insertion on the TX side.
RX descriptors are parsed, and the VLAN tags and flags are saved to the
corresponding mbuf fields if a VLAN tag is detected.
TX descriptors are configured according to the settings in the mbufs,
to trigger hardware insertion of double VLAN tags for each packet sent out.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
A little-endian to CPU order conversion had been added for reading the
VLAN tag from the RX descriptor, but the original source line was not
deleted. That discarded source line should be removed.
Fixes: 23fcffe8ff ("ixgbe: fix id and hash with flow director")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
The parameter tx_free_thresh is not consistent between the drivers:
some use it as rte_eth_tx_burst() requires, some release buffers when
the number of free descriptors drops below this value.
Let's use it as most fast-path code does, which is the latter, and update
comments throughout the code to reflect that.
Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
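Roughly, the convention kept here is the one the fast-path code already
follows; the struct and function below are illustrative stand-ins, not any
specific PMD's code:

    #include <stdint.h>

    struct txq { /* illustrative subset of a driver TX queue */
            uint16_t nb_tx_free;     /* descriptors currently free */
            uint16_t tx_free_thresh; /* reclaim when free count drops below this */
    };

    static void
    tx_free_bufs(struct txq *txq)
    {
            /* walk the ring and free mbufs whose descriptors are done (omitted) */
            (void)txq;
    }

    static inline void
    tx_maybe_reclaim(struct txq *txq)
    {
            /* Only scan for completed descriptors once the number of free
             * ones drops below tx_free_thresh, not on every burst. */
            if (txq->nb_tx_free < txq->tx_free_thresh)
                    tx_free_bufs(txq);
    }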
The function to clear the TX ring when a port was being closed, e.g. on
exit in testpmd, was not checking the mbuf refcnt before freeing it.
Since the function in the vector driver to clear the ring after TX does
not set the pointer to NULL post-free, this caused crashes if mbuf
debugging was turned on.
To reproduce the issue, ensure the following config options are set:
RTE_IXGBE_INC_VECTOR
RTE_LIBRTE_MBUF_DEBUG
Then compile up and run testpmd using 10G ports with the vector driver.
Start traffic and let some flow through, then type "stop" and "quit" at
the testpmd prompt, and crash will occur. Output below:
testpmd> quit
Stopping port 0...done
Stopping port 1...PANIC in rte_mbuf_sanity_check():
bad ref cnt
[New Thread 0x7fffabfff700 (LWP 145312)]
[New Thread 0x7fffb47fe700 (LWP 145311)]
[New Thread 0x7fffb4fff700 (LWP 145310)]
[New Thread 0x7ffff6cd5700 (LWP 145309)]
18: [/home/bruce/dpdk.org/x86_64-native-linuxapp-gcc/app/testpmd(_start+0x29)
<....snip for brevity...>
Program received signal SIGABRT, Aborted.
0x00007ffff7120a98 in raise () from /lib64/libc.so.6
A similar error occurs when clearing the RX ring, which is also fixed by
this patch.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
As well as the fast-path functions in the rxtx code, there are also
functions which set up and tear down the descriptor rings. Since these
are not performance critical functions, there is no need to have them
extensively optimized, so we add __attribute__((cold)) to their
definitions. This has the side-effect of making debugging them easier as
the compiler does not optimize them as heavily, so more variables are
accessible by default in gdb.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
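For illustration (stand-in names, not the actual driver functions), the
attribute is applied like this:

    #include <stdint.h>
    #include <string.h>

    struct ring { /* stand-in for a descriptor ring */
            uint8_t desc[4096];
            uint16_t tail;
    };

    /* Setup/teardown helpers are not performance critical, so mark them
     * cold: the compiler optimizes them less aggressively and more local
     * variables stay visible under gdb. */
    static void __attribute__((cold))
    ring_reset(struct ring *r)
    {
            memset(r->desc, 0, sizeof(r->desc));
            r->tail = 0;
    }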
When compiling the cxgbe driver with icc, multiple errors about using
enums as integers appear across a number of files, including in the base
code and in the DPDK-specific driver code.
.../drivers/net/cxgbe/cxgbe_main.c(386): error #188: enumerated type mixed
with another type
t4_get_port_type_description(pi->port_type));
^
For the errors in the base driver code we use the CFLAGS_BASE_DRIVER
approach used by other drivers to disable warnings.
For errors in the DPDK-specific code, typecasts are used to fix the
errors in the code itself.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
fm10k was failing to run in Xen domain0, because the physical
memory for DMA must be allocated and translated differently there.
Use rte_memzone_reserve_bounded() for DMA memory allocation,
and rte_mem_phy2mch() for DMA memory address translation,
to support running the fm10k PMD in Xen domain0.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
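A hedged sketch of the allocation/translation pattern described above
(NULL checks trimmed; the memzone field names and the rte_mem_phy2mch()
signature are as assumed here):

    #include <rte_memzone.h>
    #include <rte_memory.h>

    static phys_addr_t
    dma_zone_alloc(const char *name, size_t size, int socket_id,
                   const struct rte_memzone **mzp)
    {
            const struct rte_memzone *mz;

    #ifdef RTE_LIBRTE_XEN_DOM0
            /* In Xen dom0, bound the zone and translate to a machine address. */
            mz = rte_memzone_reserve_bounded(name, size, socket_id, 0,
                                             RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
            *mzp = mz;
            return rte_mem_phy2mch(mz->memseg_id, mz->phys_addr);
    #else
            mz = rte_memzone_reserve_aligned(name, size, socket_id, 0,
                                             RTE_CACHE_LINE_SIZE);
            *mzp = mz;
            return mz->phys_addr;
    #endif
    }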
Multicast loopback must be disabled on PF devices to prevent the adapter
from sending frames back. Required with MOFED 3.0.
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
RDs are a new feature of MOFED 3.0 that makes Verbs aware of how CQ and QP
resources are being used for internal performance tuning.
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Depending on adapters features and VXLAN support in the kernel, VXLAN frames
can be automatically recognized, in which case checksum validation and
generation occurs on inner and outer L3 and L4.
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
TX queue elements (struct txq_elt) contain WR and SGE structures required by
ibv_post_send(). This commit replaces them with a single pointer to the
related TX mbuf considering that:
- There is no need to keep these structures around forever since the
hardware doesn't access them after ibv_post_send() and send_pending*()
have returned.
- The TX queue index stored in the WR ID field is not used for completions
anymore since they use a separate counter (elts_comp_cd).
- The WR structure itself was only useful for ibv_post_send(); it is
currently only used to store the mbuf data address and an offset to the
mbuf structure in the WR ID field. send_pending*() callbacks only require
SGEs or buffer pointers.
Therefore for single segment mbufs, send_pending() or send_pending_inline()
can be used directly without involving SGEs. For scattered mbufs, SGEs are
allocated on the stack and passed to send_pending_sg_list().
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This commit makes scattered TX support entirely optional by moving it to a
separate function that is only available when MLX4_PMD_SGE_WR_N > 1.
Improves performance when scattered support is not needed.
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The "raw" post send interface was experimental and has been deprecated. This
commit replaces it with a new low level interface that dissociates post and
flush (doorbell) operations for improved QP performance.
The CQ polling function is updated as well.
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Instead of requesting a completion event for each TX burst, request it on a
fixed schedule once every MLX4_PMD_TX_PER_COMP_REQ (currently 64) packets to
improve performance.
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This commit replaces the CQ polling and QP posting functions
(mlx4_rx_burst() only) with a new low level interface to improve
performance.
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Gilad Berman <giladb@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Querying the netdevice instead of deriving the port's MAC address from its
GID is less prone to errors. There is no guarantee that the GID will always
contain it nor that the algorithm won't change.
Signed-off-by: Or Ami <ora@mellanox.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This commit fixes the "Multiple RX VLAN filters can be configured, but only
the first one works" bug. Since a single flow specification cannot contain
several VLAN definitions, the flows table is extended with MLX4_MAX_VLAN_IDS
possible specifications per configured MAC address.
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Or Ami <ora@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Starting from MLNX_OFED 3.0 FW 2.34.5000, when working with optimized
steering mode (-7), QPs can be attached to the port's MAC address,
therefore the check is no longer needed.
Signed-off-by: Or Ami <ora@mellanox.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The number of descriptors must be a multiple of MLX4_PMD_SGE_WR_N.
Signed-off-by: Or Ami <ora@mellanox.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This commit drops "exp" from related function and type names to stop using
the experimental API.
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Query interface properties using the ethtool API instead of Verbs
through ibv_query_port(). The returned information is more accurate for
Ethernet links since several link speeds cannot be mapped to Verbs
semantics.
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
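As background, the kind of ethtool query involved looks roughly like this
plain Linux ioctl (not the PMD's exact code; error handling trimmed, 'sock'
is any AF_INET datagram socket):

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    /* Fill '*speed' with the link speed in Mb/s; 0 on success, -1 on error. */
    static int
    ethtool_link_speed(int sock, const char *ifname, uint32_t *speed)
    {
            struct ifreq ifr;
            struct ethtool_cmd ecmd = { .cmd = ETHTOOL_GSET };

            memset(&ifr, 0, sizeof(ifr));
            strncpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name) - 1);
            ifr.ifr_data = (void *)&ecmd;
            if (ioctl(sock, SIOCETHTOOL, &ifr) < 0)
                    return -1;
            *speed = ethtool_cmd_speed(&ecmd);
            return 0;
    }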
Although using the PMD from a forked process is still unsupported, this
commit makes Verbs safe enough for applications to call fork() for other
purposes.
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Make rxq_setup_qp() handle inline support like rxq_setup_qp_rss() instead of
having two separate functions.
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This is done by storing the current index in the RX queue structure.
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
HAVE_EXP_QUERY_DEVICE is used to check whether ibv_exp_query_device() can be
used. RSS and inline receive features depend on it.
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Since Mellanox OFED 3.0 and Linux 3.15, interface port numbers are stored
in dev_port instead of dev_id sysfs files.
Signed-off-by: Or Ami <ora@mellanox.com>
Signed-off-by: Nitzan Weller <nitzanwe@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
When failing to allocate a segment, mlx4_rx_burst_sp() may call
rte_pktmbuf_free() on an incomplete scattered mbuf whose next pointer
in the last segment is not set.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
GCC_VERSION is empty in case of clang:
/bin/sh: line 0: test: -ge: unary operator expected
It cannot be quoted because an integer is expected.
So the fix is to check for an empty value in a separate test.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Adds cxgbe poll mode driver for DPDK under drivers/net/cxgbe directory.
This patch:
1. Adds the Makefile to compile cxgbe pmd.
2. Registers and initializes the cxgbe pmd driver.
Enable cxgbe PMD for compilation and linking with changes to:
1. config/common_linuxapp to add macros for cxgbe pmd.
2. drivers/net/Makefile to add cxgbe pmd to the compile list.
3. mk/rte.app.mk to add cxgbe pmd to link.
Update MAINTAINERS file to claim responsibility for the cxgbe PMD.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
[Thomas: add disabled config for bsdapp]
Adds the hardware-specific API for all Chelsio T5 adapters under the
drivers/net/cxgbe/base directory.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
Add missing initialization of the pci_dev driver pointer.
The link from pci_dev back to the ethernet driver was not being set.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
When stopping the bonded device we don't need to try to free the LACPDUs
from deactivated devices, since this is covered by
bond_mode_8023ad_deactivate_slave().
This fixes the following:
[ 0.100569] PANIC in bond_ethdev_stop():
[ 0.100589] line 1172 assert "port->rx_ring != NULL" failed
Signed-off-by: Jan Blunck <jblunck@infradead.org>
Acked-by: Declan Doherty <declan.doherty@intel.com>
On Fortville NICs, the link status change interrupt callback is not executed
when a bonding slave is (re-)started. As a result, the slave's NIC stays
inactive even if its link is up at start.
This patch invokes the LSC callback just after the port is started, to check
its initial link status and handle it properly.
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
This patch adds the max_link_up_time parameter to the mac structure.
This parameter is used to control the maximum link polling time in the
ixgbe_check_mac_link functions. It is required to prevent long wait
times on x550 PHYs that have no external link.
Since x550 is handled by software, we have to reset the internal (PHY to
PHY) link when the external link changes, and after the reset we have to wait
for this link to be established. As we have no interrupts, we have to poll
for the link state, and we know that this link comes up much faster than the
default 9 seconds. This parameter is added to prevent waiting 9 seconds
for the link when the external link is not established.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Since this 82599 device supports WoL (Wake on LAN), the driver needs
a define for the subdevice ID.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds support for new x550 PHY IDs:
0x0154 0x0223
0x0154 0x0221
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Add new values that vary by MAC type, introduced with
x540 and x550.
Also remove some meaningless comments along the way.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Add a const u32 *mvals pointer to the ixgbe_hw struct to point to an
array of mac-type-dependent values. These can include register
offsets, masks, whatever can be in a u32. When the ixgbe_hw struct
is initialized, a pointer to the appropriate array must be set.
The IXGBE_I2CCTL register references are changed to use it.
Use the mvals array to hold differing values used for
IXGBE_I2C_* symbols.
Use the mvals array to hold differing values used for
IXGBE_*_GPI* symbols.
Use the mvals array to hold differing values used for CIAA and
CIAD symbols.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
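The resulting pattern is roughly the following (identifiers and the offset
value are illustrative, not the exact base-code names):

    #include <stdint.h>

    enum mac_mvals { /* indexes into the per-MAC value table */
            MVAL_I2CCTL,
            MVAL_IDX_LIMIT
    };

    /* One const table per MAC type; only I2CCTL shown, offset made up. */
    static const uint32_t mvals_x550[MVAL_IDX_LIMIT] = {
            [MVAL_I2CCTL] = 0x15F5C,
    };

    struct hw { /* stand-in for the relevant part of ixgbe_hw */
            const uint32_t *mvals; /* set to the MAC-specific table at init */
    };

    /* Register accesses index the table instead of hard-coding offsets. */
    #define I2CCTL_BY_MAC(hw) ((hw)->mvals[MVAL_I2CCTL])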
According to the hardware, the LINE side of the cs4227 needs to be
set to 10 Gbps SR mode regardless of the configuration of the link.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The code in the if statement is specific to the KR PHY. However,
the conditional was only checking for backplane. If the driver
was using the KX4 backplane PHY, this code would execute but
would write to the wrong PHY.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch disables SW LPLU on x557 V2. It also sets the
enter_lplu function pointer to NULL on x557 V2. LPLU will be
implemented in FW for all x557 V2 interfaces. The SW LPLU
implementation must be disabled on V2 to avoid conflicts with
FW. SW LPLU support is still required for x557 V1.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch implements the ixgbe_led_on_t_X550em and ixgbe_led_off_t_X550em
functions for turning the LEDs on the X557 external PHY on and off.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The MDIO clock needs to be configured for a specific speed for
x550em. We expected this to be done automatically, but in
the early days of the project this was not happening. We put
code in to do this ourselves.
Eventually, we decided that there is no harm in having SW do
this all the time, so we may not remove this code in the future.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Check that the link is still up after getting the speed, to ensure that the
speed read is valid.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch removes the clearing of the FEC (Forward Error Correction)
bits in ixgbe_setup_kr_speed_x550em. FEC default enablement is
configured via the NVM and SW should not be overriding these defaults
in this function.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch adds support for x550em KR/iXFI internal link modes. The initial
x550em-10GBASET and x550em-SFP designs use iXFI internal link mode between
the internal PHY and the external PHY.
However future designs will use a KR internal link. This patch is intended
to future proof the driver by adding the KR internal link support in the
driver now.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
EEE (Energy Efficient Ethernet) is not supported on the initial revision
of IXGBE_DEV_ID_X550EM_X_KR. We determine the revision by reading a fuse
register.
Also, the requirements for FEC (Forward Error Correction) have changed
slightly. Now, we don't change the "request" bit at all. When EEE is
enabled, we advertise that we are capable. When EEE is disabled, we do
not advertise that we are capable. This change makes us consistent with
the power-on defaults that are in the NVM.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
The FEC (Forward Error Correction) feature had been disabled
because it increases power consumption. However, some customers
want to use it. This patch enables FEC when EEE (Energy Efficient
Ethernet) is disabled; FEC was already being disabled when EEE
was enabled, but now both are done in the same function. The two
features are not allowed to be enabled at the same time. The two
features cannot both be disabled. If this ability is ever
determined to be needed, we will need to define a new user parameter
to control FEC independently of EEE.
Fixes: d4c9ffd4fe ("ixgbe/base: disable X550em FEC to save power")
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
During init, check the ucode running in the CS4227. If
it is not responding correctly, reset the part. This is
a global reset so it must only be done the first time a
driver loads after power-on.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>