To unify packet types among all PMDs, the packet type bit masks in
'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
only when RTE_NEXT_ABI is defined.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
To unify the packet type among all PMDs, the packet type bit masks
in 'ol_flags' are replaced by the unified packet type.
To avoid breaking ABI compatibility, all the changes are enabled
only when RTE_NEXT_ABI is defined.
Note that a performance drop of around 2.5% (64B packets) was
observed when doing IO forwarding on 4 ports (1 port per 82599 card)
on the same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
In order to unify the packet type, the 'packet_type' field in
'struct rte_mbuf' needs to be extended from 16 to 32 bits.
Accordingly, some fields in 'struct rte_mbuf' are re-organized to
support this change for the vector PMD.
As 'struct rte_kni_mbuf' for KNI must map directly onto
'struct rte_mbuf', it is modified accordingly.
In the ixgbe PMD, corresponding changes are added for the mbuf
changes; in particular, the packet type bit masks in 'ol_flags' are
replaced by the unified packet type. In addition, more packet types
(UDP, TCP and SCTP) are supported in the vectorized ixgbe PMD.
To avoid breaking ABI compatibility, all the changes are enabled
only when RTE_NEXT_ABI is defined.
Note that a performance drop of around 2% (64B packets) was observed
when doing IO forwarding on 4 ports (1 port per 82599 card) on the
same SNB core.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
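As a rough illustration of what the unified packet type gives
applications, the sketch below (a minimal example, assuming the
RTE_NEXT_ABI mbuf layout) classifies a received mbuf by its
'packet_type' field instead of testing 'ol_flags' bits. The
RTE_PTYPE_* masks are those introduced by this series; note the
exact match on RTE_PTYPE_L3_IPV4 deliberately excludes the *_EXT
variants.

    #include <rte_mbuf.h>

    /* Classify an mbuf via the unified 32-bit packet_type field. */
    static inline int
    is_ipv4_udp(const struct rte_mbuf *m)
    {
        uint32_t ptype = m->packet_type;

        /* L3 and L4 sub-fields are extracted with dedicated masks. */
        return (ptype & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4 &&
               (ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP;
    }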
Added RX and TX byte counters to the pcap PMD statistics.
Added TX counter support for the pcap dumper and interface functions.
Renamed the RX and TX packet counters for consistency.
Signed-off-by: Klaus Degner <kd@allegro-packets.com>
Tested-by: John McNamara <john.mcnamara@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
The cuckoo hash has a fixed number of entries per bucket, so the
configuration parameter for this is unused. We change this field in
the parameters struct to "reserved" to indicate that there is no
longer such a parameter value, while at the same time keeping ABI
consistency.
Fixes: 48a399119619 ("hash: replace with cuckoo hash implementation")
Suggested-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
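For reference, a sketch of the renamed field in context; the
surrounding layout is assumed from DPDK headers of this era and is
illustrative rather than authoritative:

    /* Parameters used when creating the hash table (sketch). */
    struct rte_hash_parameters {
        const char *name;            /* Name of the hash. */
        uint32_t entries;            /* Total hash table entries. */
        uint32_t reserved;           /* Unused; was bucket_entries. */
        uint32_t key_len;            /* Length of hash key. */
        rte_hash_function hash_func; /* Primary hash function. */
        uint32_t hash_func_init_val; /* Init value for hash_func. */
        int socket_id;               /* NUMA socket for memory. */
    };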
This commit adds a poll mode driver for the mPIPE hardware present on
TILE-Gx SoCs.
Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
Extend eth_pcap RX and TX to support jumbo frames.
On the receive side, read large packets into multiple mbufs; on the
transmit side, convert them back into a single pcap buffer.
Signed-off-by: Tero Aho <tero.aho@coriant.com>
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
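A minimal sketch of the RX-side idea, with a hypothetical helper
(this is not the pcap PMD's actual code): a captured frame larger
than one mbuf's data room is copied into a chain of segments.

    #include <rte_mbuf.h>
    #include <rte_memcpy.h>

    /* Copy one captured frame into a chain of mbuf segments. */
    static struct rte_mbuf *
    frame_to_mbuf_chain(struct rte_mempool *pool,
                        const uint8_t *data, uint32_t len)
    {
        struct rte_mbuf *head = NULL, *tail = NULL;

        while (len > 0) {
            struct rte_mbuf *seg = rte_pktmbuf_alloc(pool);
            uint16_t copy;

            if (seg == NULL) {
                if (head != NULL)
                    rte_pktmbuf_free(head);
                return NULL;
            }
            copy = (uint16_t)RTE_MIN(len,
                    (uint32_t)rte_pktmbuf_tailroom(seg));
            rte_memcpy(rte_pktmbuf_mtod(seg, void *), data, copy);
            seg->data_len = copy;

            if (head == NULL) {
                head = seg;
            } else {
                tail->next = seg;
                head->nb_segs++;
            }
            tail = seg;
            head->pkt_len += copy;
            data += copy;
            len -= copy;
        }
        return head;
    }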
fm10k has 128 RETA entries in 32 registers, but only the first 32
entries were initialized when configuring multiple RX queues. This
fix initializes all 128 entries instead.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
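A sketch of the fix's shape, with driver-internal names assumed (the
FM10K_RETA/FM10K_WRITE_REG usage is illustrative, not the exact
code): each 32-bit register holds four 8-bit entries, so all 32
registers must be written to cover the 128-entry table.

    /* Spread nb_rx_queues across all 128 RETA entries. */
    static void
    init_reta(struct fm10k_hw *hw, uint16_t nb_rx_queues)
    {
        uint32_t i, j, reta;

        for (i = 0; i < 32; i++) {       /* 32 registers */
            reta = 0;
            for (j = 0; j < 4; j++)      /* 4 entries per register */
                reta |= ((i * 4 + j) % nb_rx_queues) << (j * 8);
            FM10K_WRITE_REG(hw, FM10K_RETA(0, i), reta);
        }
    }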
The default MAC address is read from hardware and copied to the
device Ethernet link address array in the device initialization
phase, which bypasses the fm10k MAC address number check mechanism
and causes an error message when adding the default VLAN:
"MAC address number not match"
Fix it by moving the default MAC address registration to the device
initialization phase.
Fixes: f5c1a236a218 ("fm10k: fix default mac/vlan in switch")
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Michael Qiu <michael.qiu@intel.com>
If the descriptor the device driver is handling is a context
descriptor, its type field value will be 0x1.
When the logical NOT operator (!) is used for the conditional check,
any non-zero type value is treated as "descriptor done", so the
driver considers the transaction for this descriptor complete even
though the type field is still 0x1, meaning the NIC has not finished
the operation on this descriptor.
Compare the type field against 0xF (descriptor done) instead to
avoid this issue.
Fixes: 4861cde46116 ("i40e: new poll mode driver")
Fixes: 05999aab4ca6 ("i40e: add or delete flow director")
Signed-off-by: Zhe Tao <zhe.tao@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
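A simplified before/after sketch of the check, using the real I40E
descriptor-type macros but not the driver's exact code:

    volatile struct i40e_tx_desc *txd = &txq->tx_ring[idx];

    /* Before: the "not done" branch triggers only when DTYPE == 0x0,
     * so a context descriptor (DTYPE 0x1) slips through and is
     * wrongly treated as completed. */
    if (!(txd->cmd_type_offset_bsz &
          rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)))
        return 0; /* not done yet */

    /* After: only DTYPE == 0xF (I40E_TX_DESC_DTYPE_DESC_DONE) means
     * the NIC has finished with the descriptor. */
    if ((txd->cmd_type_offset_bsz &
         rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
        rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
        return 0; /* not done yet */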
There's a parameter "autoneg on|off" in the testpmd CLI command
"set flow_ctrl ...". This parameter is meant to enable or disable
auto-negotiation for flow control, but it was not supported yet.
Auto-negotiation is enabled by default, and there was no way to
disable it. This patch makes the "autoneg on|off" parameter work.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Changchun Ouyang <changchun.ouyang@intel.com>
When the hardware is initialized, the stats should be reset.
Otherwise, when a port is detached and then attached again, the
stats are not re-initialized to zero.
Signed-off-by: Michael Qiu <michael.qiu@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Update the PCI ID table to include more supported Chelsio T5
devices.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
The CXGBE PMD RX path allocates a new mbuf every time, which can
hurt performance. Instead, do bulk allocation of mbufs and re-use
them.
Also, simplify the overall RX handler and update its logic to
improve RX performance.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Kumar Sanghvi <kumaras@chelsio.com>
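A minimal sketch of the bulk-refill idea, with a hypothetical helper
and ring layout (not the cxgbe driver's actual code): one bulk get
amortizes the mempool access cost over the whole burst.

    #include <errno.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define REFILL_BURST 64

    /* Refill a stretch of RX ring slots from the pool in one call. */
    static int
    refill_rx_ring(struct rte_mempool *mp, struct rte_mbuf **slots)
    {
        struct rte_mbuf *bufs[REFILL_BURST];
        unsigned int i;

        if (rte_mempool_get_bulk(mp, (void **)bufs, REFILL_BURST) != 0)
            return -ENOMEM; /* pool temporarily empty */

        for (i = 0; i < REFILL_BURST; i++) {
            rte_mbuf_refcnt_set(bufs[i], 1);
            rte_pktmbuf_reset(bufs[i]);
            slots[i] = bufs[i];
        }
        return 0;
    }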
Add ixgbe support for new ethdev APIs to enable and read IEEE1588/
802.1AS PTP timestamps.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Add ixgbe support for new ethdev APIs to enable and read IEEE1588
PTP timestamps.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Add e1000/igb support for new ethdev APIs to enable and read
IEEE1588 PTP timestamps.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
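Together these commits back the ethdev timesync API; a usage sketch
(error handling trimmed; the flags argument of the RX read is 0
here):

    #include <stdio.h>
    #include <time.h>
    #include <rte_ethdev.h>

    /* Enable IEEE1588 timesync and read back RX/TX timestamps. */
    static void
    show_ptp_timestamps(uint8_t port_id)
    {
        struct timespec ts;

        rte_eth_timesync_enable(port_id);

        /* After an IEEE1588 frame has been received... */
        if (rte_eth_timesync_read_rx_timestamp(port_id, &ts, 0) == 0)
            printf("RX timestamp: %ld.%09ld\n",
                   (long)ts.tv_sec, (long)ts.tv_nsec);

        /* ...and after a timestamped frame has been sent. */
        if (rte_eth_timesync_read_tx_timestamp(port_id, &ts) == 0)
            printf("TX timestamp: %ld.%09ld\n",
                   (long)ts.tv_sec, (long)ts.tv_nsec);
    }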
There is no reason to inline large functions; the compiler will
already decide based on the optimization level.
Also, the register array should be const.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
By defining the macro as a stub, it is possible to get rid of
#ifdefs in the actual code. Always evaluate the argument (even in
the stub) so that there are no unused-variable errors.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
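A generic sketch of the pattern (the macro name is illustrative, not
the driver's actual one): the disabled variant still compiles its
arguments inside an unreachable branch, so variables used only for
logging never trigger unused warnings.

    #include <stdio.h>

    #ifdef ENABLE_DEBUG_LOG
    #define DEBUG_LOG(fmt, ...) \
        fprintf(stderr, "debug: " fmt "\n", ##__VA_ARGS__)
    #else
    /* Arguments are compiled (so they count as used) but the if (0)
     * guarantees nothing executes and no output is produced. */
    #define DEBUG_LOG(fmt, ...) \
        do { \
            if (0) \
                fprintf(stderr, "debug: " fmt "\n", ##__VA_ARGS__); \
        } while (0)
    #endif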
Refactor the logic that computes receive offload flags into a
simpler function, and add support for putting the RSS flow hash into
the packet.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Bill Hong <bhong@brocade.com>
Acked-by: Yong Wang <yongwang@vmware.com>
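The RSS part amounts to the usual PMD pattern sketched below; the
descriptor names are hypothetical, while PKT_RX_RSS_HASH and the
mbuf's hash.rss field are the real mbuf interface:

    /* Expose the device-computed RSS hash to the application. */
    if (rxd_rss_valid) {            /* hypothetical descriptor bit */
        m->ol_flags |= PKT_RX_RSS_HASH;
        m->hash.rss = rxd_rss_hash; /* hypothetical descriptor field */
    }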
The Intel version of the VMXNET3 driver does not handle link state
properly. The VMXNET3 API returns 1 if connected and 0 if
disconnected.
The driver also needs to return the correct value to indicate a
state change.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
Change the sending loop to support multi-segment mbufs.
The VMXNET3 API has start-of-packet and end-of-packet flags, so it
is not hard to send multi-segment mbufs.
Also, update the descriptor in a single 32-bit value rather than
toggling bitfields, which is slower and error-prone.
Based on code in the earlier driver and in the Linux kernel driver.
Add a compiler barrier to make sure that updates of earlier
descriptors are completed prior to the update of the generation bit
on the start-of-packet descriptor.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
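A simplified sketch of the ordering requirement (the descriptor
field names are illustrative, not the vmxnet3 driver's actual
layout):

    /* Fill in every field of the chained descriptors first. */
    txd->addr = m->buf_physaddr + m->data_off; /* segment DMA address */
    txd->len = m->data_len;
    txd->flags = sop_eop_flags;

    /* Keep the writes above from being reordered past the ownership
     * flip; only then may the device see a fully formed descriptor. */
    rte_compiler_barrier();

    first_txd->gen = ring_gen; /* hand the packet over to the device */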
There are several stats here that are never set and have no way to
be displayed. Keep them on the assumption that xstats could expose
them in the future.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
Remove the check for packets greater than the MTU. No other driver
does this; it should be handled at a higher layer.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
Support the VLAN filter functionality of the VMXNET3 interface.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Yong Wang <yongwang@vmware.com>
To support querying the hash key size per port, a new field
'hash_key_size' was added in 'struct rte_eth_dev_info' for storing
the hash key size in bytes.
Each driver should fill the correct hash key size in bytes into
'struct rte_eth_dev_info' to support the query.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
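Querying the new field from an application is straightforward, as in
this sketch:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Read the per-port RSS hash key size reported by the PMD. */
    static void
    print_hash_key_size(uint8_t port_id)
    {
        struct rte_eth_dev_info dev_info;

        rte_eth_dev_info_get(port_id, &dev_info);
        printf("port %u hash key size: %u bytes\n",
               port_id, dev_info.hash_key_size);
    }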
This patch extends the flow director to support the l2_payload flow
type in the i40e driver.
Test report: http://dpdk.org/ml/archives/dev/2015-June/020238.html
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch renames the mirror type in rte_eth_mirror_conf and its
macros, and reworks the mirror setup in the ixgbe driver to use the
new definitions.
It also fixes some coding style issues.
Test report: http://dpdk.org/ml/archives/dev/2015-June/019118.html
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Rename rte_eth_vmdq_mirror_conf to rte_eth_mirror_conf and move the
maximum rule ID check from the ethdev layer to the drivers.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Add checksum offload capability flags which have already been
supported for a long time.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
It configures specific registers to enable double VLAN stripping on
the RX side and insertion on the TX side.
The RX descriptors are parsed, and the VLAN tags and flags are saved
to the corresponding mbuf fields if a VLAN tag is detected.
The TX descriptors are configured according to the settings in the
mbufs, to trigger hardware insertion of double VLAN tags for each
packet sent out.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
A little-endian to CPU order conversion had been added for reading
the VLAN tag from the RX descriptor, but the original source line it
replaced was never deleted. That leftover line should be removed.
Fixes: 23fcffe8ffac ("ixgbe: fix id and hash with flow director")
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
The parameter tx_free_thresh is not consistent between the drivers:
some use it as rte_eth_tx_burst() requires, others release buffers
when the number of free descriptors drops below this value.
Let's use it as most fast-path code does, which is the latter, and
update the comments throughout the code to reflect that.
Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
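Under the agreed semantics, the fast-path check looks like the
sketch below (the names follow the ixgbe fast path, but treat them
as illustrative):

    /* Reclaim completed TX descriptors only once the free count
     * drops below the threshold, amortizing cleanup across bursts. */
    if (txq->nb_tx_free < txq->tx_free_thresh)
        ixgbe_tx_free_bufs(txq);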
The function that clears the TX ring when a port is being closed,
e.g. on exit in testpmd, was not checking the mbuf refcnt before
freeing it.
Since the function in the vector driver that clears the ring after
TX does not set the pointer to NULL post-free, this caused crashes
if mbuf debugging was turned on.
To reproduce the issue, ensure the following config variables are
set:
RTE_IXGBE_INC_VECTOR
RTE_LIBRTE_MBUF_DEBUG
Then compile and run testpmd using 10G ports with the vector driver.
Start traffic and let some flow through, then type "stop" and "quit"
at the testpmd prompt; a crash will occur. Output below:
testpmd> quit
Stopping port 0...done
Stopping port 1...PANIC in rte_mbuf_sanity_check():
bad ref cnt
[New Thread 0x7fffabfff700 (LWP 145312)]
[New Thread 0x7fffb47fe700 (LWP 145311)]
[New Thread 0x7fffb4fff700 (LWP 145310)]
[New Thread 0x7ffff6cd5700 (LWP 145309)]
18: [/home/bruce/dpdk.org/x86_64-native-linuxapp-gcc/app/testpmd(_start+0x29)
<....snip for brevity...>
Program received signal SIGABRT, Aborted.
0x00007ffff7120a98 in raise () from /lib64/libc.so.6
A similar error occurs when clearing the RX ring, which is also fixed by
this patch.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
As well as the fast-path functions in the rxtx code, there are also
functions which set up and tear down the descriptor rings. Since
these are not performance-critical, there is no need to have them
extensively optimized, so we add __attribute__((cold)) to their
definitions. This has the side effect of making them easier to
debug, as the compiler does not optimize them as heavily, so more
variables are accessible by default in gdb.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
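A sketch of the attribute on a hypothetical teardown helper (note it
also NULLs freed ring entries, the omission behind the crash fixed
above):

    #include <rte_mbuf.h>

    /* Slow path: runs only on queue setup/teardown, so mark it cold. */
    static void __attribute__((cold))
    ring_release_mbufs(struct rte_mbuf **ring, unsigned int n)
    {
        unsigned int i;

        for (i = 0; i < n; i++) {
            if (ring[i] != NULL) {
                rte_pktmbuf_free(ring[i]);
                ring[i] = NULL; /* avoid double free on re-entry */
            }
        }
    }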
When compiling the cxgbe driver with icc, multiple errors about using
enums as integers appear across a number of files, including in the base
code and in the DPDK-specific driver code.
.../drivers/net/cxgbe/cxgbe_main.c(386): error #188: enumerated type mixed
with another type
t4_get_port_type_description(pi->port_type));
^
For the errors in the base driver code we use the CFLAGS_BASE_DRIVER
approach used by other drivers to disable warnings.
For errors in the DPDK-specific code, typecasts are used to fix the
errors in the code itself.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
fm10k was failing to run in XEN domain0, as the physical memory for
DMA should be allocated and translated in a different way there. Use
rte_memzone_reserve_bounded() for DMA memory allocation and
rte_mem_phy2mch() for DMA memory address translation, so that the
fm10k PMD can run in XEN domain0.
Signed-off-by: Shaopeng He <shaopeng.he@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
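A sketch of the allocation pattern this implies, guarded by the Xen
dom0 build option (the size/alignment choices here are illustrative
assumptions):

    const struct rte_memzone *mz;
    phys_addr_t dma_addr;

    #ifdef RTE_LIBRTE_XEN_DOM0
        /* Bounded reservation keeps the zone inside one 2M machine-
         * contiguous chunk; then translate the pseudo-physical
         * address to a machine address for the device. */
        mz = rte_memzone_reserve_bounded(name, size, socket_id, 0,
                RTE_CACHE_LINE_SIZE, RTE_PGSIZE_2M);
        dma_addr = rte_mem_phy2mch(mz->memseg_id, mz->phys_addr);
    #else
        mz = rte_memzone_reserve(name, size, socket_id, 0);
        dma_addr = mz->phys_addr;
    #endif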
Multicast loopback must be disabled on PF devices to prevent the adapter
from sending frames back. Required with MOFED 3.0.
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
RDs (resource domains) are a new feature of MOFED 3.0 that makes
Verbs aware of how CQ and QP resources are being used, for internal
performance tuning.
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Depending on adapter features and VXLAN support in the kernel, VXLAN
frames can be automatically recognized, in which case checksum
validation and generation occur on the inner and outer L3 and L4
headers.
Signed-off-by: Olga Shern <olgas@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
TX queue elements (struct txq_elt) contain WR and SGE structures required by
ibv_post_send(). This commit replaces them with a single pointer to the
related TX mbuf considering that:
- There is no need to keep these structures around forever since the
hardware doesn't access them after ibv_post_send() and send_pending*()
have returned.
- The TX queue index stored in the WR ID field is not used for completions
anymore since they use a separate counter (elts_comp_cd).
- The WR structure itself was only useful for ibv_post_send(); it is
currently only used to store the mbuf data address and an offset to
the mbuf structure in the WR ID field. send_pending*() callbacks
only require SGEs or buffer pointers.
Therefore for single segment mbufs, send_pending() or send_pending_inline()
can be used directly without involving SGEs. For scattered mbufs, SGEs are
allocated on the stack and passed to send_pending_sg_list().
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
This commit makes scattered TX support entirely optional by moving it to a
separate function that is only available when MLX4_PMD_SGE_WR_N > 1.
Improves performance when scattered support is not needed.
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
The "raw" post send interface was experimental and has been deprecated. This
commit replaces it with a new low level interface that dissociates post and
flush (doorbell) operations for improved QP performance.
The CQ polling function is updated as well.
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Instead of requesting a completion event for each TX burst, request
it on a fixed schedule, once every MLX4_PMD_TX_PER_COMP_REQ
(currently 64) packets, to improve performance.
Signed-off-by: Alex Rosenbaum <alexr@mellanox.com>
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
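A simplified sketch of the scheduling, reusing the elts_comp_cd
countdown mentioned earlier (the completion flag name follows MOFED
3.0 experimental Verbs; treat the exact identifiers as assumptions):

    /* Request a signaled completion only every
     * MLX4_PMD_TX_PER_COMP_REQ packets; reload the countdown when
     * it reaches zero. */
    if (unlikely(--txq->elts_comp_cd == 0)) {
        txq->elts_comp_cd = MLX4_PMD_TX_PER_COMP_REQ;
        send_flags |= IBV_EXP_QP_BURST_SIGNALED;
    }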