Commit Graph

64 Commits

Author SHA1 Message Date
Olivier Matz
ebb7bcabb8 drivers/net: do not touch mbuf next or nb segs on Rx
Now that the m->next pointer and m->nb_segs field are expected to be
set (to NULL and 1 respectively) after a mempool_get(), we can avoid
writing them in the Rx functions of drivers.

Only some drivers are patched; this is not an exhaustive change. It
shows how to do the same in the other drivers.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2017-04-05 11:30:29 +02:00
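A minimal sketch (not the actual driver diff) of what this change
allows, assuming the mempool invariant described above; rx_init_mbuf()
is a hypothetical helper:

  #include <rte_mbuf.h>

  /* With m->next == NULL and m->nb_segs == 1 guaranteed after
   * mempool_get(), an Rx path only sets the fields it produces. */
  static inline void
  rx_init_mbuf(struct rte_mbuf *m, uint16_t pkt_len)
  {
          /* m->next = NULL;   -- no longer needed here */
          /* m->nb_segs = 1;   -- guaranteed by the mempool */
          m->pkt_len = pkt_len;
          m->data_len = pkt_len;
  }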
Olivier Matz
54e9290269 mbuf: make segment prefree function public
Document the function and make it public, since it is used in several
places in the drivers. The old one is marked as deprecated.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2017-04-05 11:30:29 +02:00
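A hedged usage sketch of the now-public function in a driver Tx-free
path; tx_free_segment() is a hypothetical helper:

  #include <rte_mbuf.h>

  /* rte_pktmbuf_prefree_seg() returns the mbuf if this was the last
   * reference and it can go back to its pool, or NULL otherwise. */
  static inline void
  tx_free_segment(struct rte_mbuf *m)
  {
          m = rte_pktmbuf_prefree_seg(m);
          if (m != NULL)
                  rte_mempool_put(m->pool, m);
  }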
Bernard Iremonger
9e030de1d0 net/ixgbe: allocate TC bandwidth
ixgbe supports setting the relative bandwidth of the TCs. It is a
global setting for the PF and all the VFs of a physical port.
This patch provides the API to set that bandwidth.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2017-04-04 19:03:03 +02:00
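A usage sketch, assuming the rte_pmd_ixgbe_set_tc_bw_alloc() signature
added by this series; set_tc_bandwidth() is a hypothetical caller:

  #include <rte_pmd_ixgbe.h>

  static int
  set_tc_bandwidth(uint8_t port)
  {
          /* relative weights for 4 TCs; applies to the PF and all
           * VFs of the physical port */
          uint8_t bw_weight[4] = { 10, 20, 30, 40 };

          return rte_pmd_ixgbe_set_tc_bw_alloc(port, 4, bw_weight);
  }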
Zhiyong Yang
646412f9ff net/ixgbe: remove limit of Tx burst size
Add a wrapper function to remove the limit on Tx burst size and
implement a "make a best effort to transmit the packets" policy.
The patch makes the ixgbe vector function behave consistently with
ixgbe_xmit_pkts_simple and ixgbe_xmit_pkts.

Cc: Helin Zhang <helin.zhang@intel.com>
Cc: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2017-04-04 19:02:55 +02:00
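A sketch of the wrapper idea under stated assumptions: VEC_TX_MAX_BURST
and vec_xmit_fixed() are hypothetical stand-ins for the vector
routine's per-call limit and the size-limited Tx function:

  #include <stdint.h>
  #include <rte_common.h>
  #include <rte_mbuf.h>

  #define VEC_TX_MAX_BURST 32

  /* stand-in for the size-limited vector Tx routine (hypothetical) */
  uint16_t vec_xmit_fixed(void *txq, struct rte_mbuf **pkts, uint16_t n);

  static uint16_t
  xmit_pkts_wrapper(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
  {
          uint16_t nb_tx = 0;

          /* best effort: send in chunks, stop when the queue fills */
          while (nb_pkts > 0) {
                  uint16_t num = RTE_MIN(nb_pkts,
                                         (uint16_t)VEC_TX_MAX_BURST);
                  uint16_t ret = vec_xmit_fixed(txq, &tx_pkts[nb_tx], num);

                  nb_tx += ret;
                  nb_pkts -= ret;
                  if (ret < num)
                          break;
          }
          return nb_tx;
  }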
Wenzhuo Lu
1c4da4ef96 net/ixgbe: fix TC bandwidth setting
ixgbe supports 4 or 8 TCs, with 8 as the default, so the bandwidth
for 8 TCs is set when the device is initialized. When the TC number
is changed, however, only setting the bandwidth for 4 TCs was
considered. So if the user changes the number from 4 back to 8,
the TCs' bandwidth is not right.

Fixes: 0807f80d35 ("ixgbe: DCB / flow control")
Cc: stable@dpdk.org

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2017-04-04 18:59:47 +02:00
Jingjing Wu
1b51b0bbc2 net/ixgbe: fix multi-queue mode check in SRIOV mode
In the SRIOV case, ETH_MQ_RX_VMDQ_DCB and ETH_MQ_RX_DCB should be
treated as having the same meaning, because the multi-queue mapping
is the same for SRIOV and VMDq in ixgbe.

Fixes: 27b609cbd1 ("ethdev: move the multi-queue mode check to specific drivers")
Cc: stable@dpdk.org

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
2017-04-04 15:52:51 +02:00
Wenzhuo Lu
8552304c86 net/ixgbe: fix all queues drop setting of DCB
DCB configuration is split into Rx and Tx modes. All-queues-drop was
set for the Tx mode, which is not appropriate because all-queues-drop
is an Rx feature. Move this setting from Tx to Rx.

Fixes: f3f9b17bb8 ("net/ixgbe: support multiqueue mode VMDq DCB with SRIOV")
Cc: stable@dpdk.org

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2017-04-04 15:52:51 +02:00
Olivier Matz
a2919e13d9 net/ixgbe: implement descriptor status API
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2017-03-30 15:27:42 +02:00
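A hedged usage sketch of the generic API this commit wires up for
ixgbe (note the port/queue argument widths have varied across DPDK
releases); rx_desc_done() is a hypothetical helper:

  #include <rte_ethdev.h>

  /* Returns 1 if the descriptor 'offset' entries ahead of the next
   * Rx position has already been filled by the NIC. */
  static int
  rx_desc_done(uint16_t port_id, uint16_t queue_id, uint16_t offset)
  {
          return rte_eth_rx_descriptor_status(port_id, queue_id,
                                              offset) ==
                 RTE_ETH_RX_DESC_DONE;
  }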
Olivier Matz
0ef850c4f6 ethdev: move a queue id check to generic layer
The queue_id check is done in all drivers implementing
rte_eth_rx_queue_count(). Factorize this check into the generic function.

Note that the nfp driver was doing the check differently, which could
induce crashes if the queue index was too big.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2017-03-09 19:29:51 +01:00
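A simplified sketch of the factorized check, not the exact ethdev
code (error handling is reduced to a zero return here):

  #include <rte_ethdev.h>

  static uint32_t
  eth_rx_queue_count_checked(struct rte_eth_dev *dev, uint16_t queue_id)
  {
          /* done once here, so each PMD's callback can drop its copy */
          if (queue_id >= dev->data->nb_rx_queues)
                  return 0;
          return dev->dev_ops->rx_queue_count(dev, queue_id);
  }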
Jianbo Liu
a98212de4a net/ixgbe: fix received packets number for ARM
For better performance, the Rx bulk-alloc receive function scans 8
descriptors at a time, but the statuses are not consistent on the ARM
platform because the memory allocated for Rx descriptors is in
cacheable hugepages. This patch calculates the number of received
packets by scanning the DD bits sequentially, stopping at the first
packet whose DD bit is unset.

Fixes: 7431041062 ("ixgbe: allow rx bulk alloc")
Cc: stable@dpdk.org

Signed-off-by: Jianbo Liu <jianbo.liu@linaro.org>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2017-02-10 12:25:49 +01:00
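A sketch of the counting rule the fix applies, with the DD bit
redefined locally for self-containment:

  #include <stdint.h>

  #define IXGBE_RXDADV_STAT_DD 0x01 /* descriptor-done bit */

  /* Count packets by walking the DD bits in order; stop at the first
   * descriptor the NIC has not written back, so a stale status later
   * in the group can never be counted. */
  static int
  count_done_descs(const volatile uint32_t *status, int n)
  {
          int nb_dd = 0;

          while (nb_dd < n && (status[nb_dd] & IXGBE_RXDADV_STAT_DD))
                  nb_dd++;
          return nb_dd;
  }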
Jingjing Wu
859a17db7e net/ixgbe: fix bitmask of supported Tx flags
Add the missing PKT_TX_IEEE1588_TMST to the bitmask of all supported
packet Tx flags.

Fixes: 7829b8d52b ("net/ixgbe: add Tx preparation")

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2017-02-10 12:25:49 +01:00
Santosh Shukla
a66114965b net/ixgbe: use I/O device memory read/write API
Replace raw I/O device memory read/write access with the EAL
abstraction for I/O device memory read/write access, to fix
portability issues across different architectures.

CC: Helin Zhang <helin.zhang@intel.com>
CC: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
2017-01-18 17:18:26 +01:00
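A hedged before/after sketch using the EAL accessors from rte_io.h;
hw_addr and the helper names are hypothetical:

  #include <stdint.h>
  #include <rte_io.h>

  static uint32_t
  read_reg(const volatile void *hw_addr, unsigned int off)
  {
          /* before: return *(const volatile uint32_t *)
           *                ((const volatile char *)hw_addr + off); */
          return rte_read32((const volatile char *)hw_addr + off);
  }

  static void
  write_reg(volatile void *hw_addr, unsigned int off, uint32_t val)
  {
          /* the EAL accessor inserts the barrier each arch requires */
          rte_write32(val, (volatile char *)hw_addr + off);
  }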
Ilya Maximets
2652a9fb21 net/ixgbe: allow bulk alloc for the max size desc ring
The only reason bulk alloc is disabled for rings with more than
(IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST) descriptors is
possible out-of-bounds access to the DMA memory. But this is an
artificial limit and can easily be avoided by allocating
RTE_PMD_IXGBE_RX_MAX_BURST extra descriptors in memory. This does not
interfere with the HW and, as long as all rings' memory is zeroed,
the Rx functions work correctly.

This change allows using the vectorized Rx functions with 4096
descriptors in the Rx ring, which is important for achieving a zero
packet drop rate in high-load installations.

Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
2017-01-17 19:41:42 +01:00
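A sketch of the sizing idea, with the relevant constants redefined
locally (the real code uses the driver's own defines):

  #include <stddef.h>
  #include <stdint.h>

  #define MAX_RING_DESC  4096 /* IXGBE_MAX_RING_DESC */
  #define RX_MAX_BURST     32 /* RTE_PMD_IXGBE_RX_MAX_BURST */
  #define RX_DESC_BYTES    16 /* one advanced Rx descriptor */

  /* Reserve one extra burst's worth of (zeroed) descriptors past the
   * ring end, so the bulk-alloc scan can never read out of bounds,
   * even on a max-size (4096-descriptor) ring. */
  static size_t
  rx_ring_zone_size(uint16_t nb_desc)
  {
          return (size_t)(nb_desc + RX_MAX_BURST) * RX_DESC_BYTES;
  }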
Tiwei Bie
b35d309710 net/ixgbe: add MACsec offload
MACsec (or LinkSec), defined in IEEE 802.1AE, is a MAC-level
encryption/authentication scheme that uses symmetric cryptography.
This commit adds MACsec offload support to ixgbe.

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2017-01-15 19:16:37 +01:00
Tomasz Kulasek
7829b8d52b net/ixgbe: add Tx preparation
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2017-01-04 20:40:22 +01:00
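A hedged application-side sketch of the Tx preparation flow this
commit implements for ixgbe; send_burst() is a hypothetical caller:

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  static uint16_t
  send_burst(uint16_t port, uint16_t queue, struct rte_mbuf **pkts,
             uint16_t n)
  {
          /* let the PMD validate/fix offload metadata (e.g. the
           * pseudo-header checksums for TSO) before the real burst */
          uint16_t nb_prep = rte_eth_tx_prepare(port, queue, pkts, n);

          /* pkts[nb_prep] is the first rejected mbuf, if any */
          return rte_eth_tx_burst(port, queue, pkts, nb_prep);
  }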
Bernard Iremonger
f3f9b17bb8 net/ixgbe: support multiqueue mode VMDq DCB with SRIOV
Allow Data Center Bridging (DCB) configuration when SRIOV is enabled.

Signed-off-by: Rahul R Shah <rahul.r.shah@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2016-10-26 20:12:45 +02:00
Xiao Wang
9bc52f49fc net/ixgbe: implement new Rx checksum flag
Add the CKSUM_GOOD flag to distinguish a good checksum from an unknown one.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Jing Chen <jing.d.chen@intel.com>
2016-10-14 01:41:34 +02:00
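A sketch of what the new flag enables on the application side: three
distinct outcomes instead of two; ip_cksum_state() is hypothetical:

  #include <rte_mbuf.h>

  static int
  ip_cksum_state(const struct rte_mbuf *m)
  {
          if (m->ol_flags & PKT_RX_IP_CKSUM_GOOD)
                  return 1;   /* verified good by HW */
          if (m->ol_flags & PKT_RX_IP_CKSUM_BAD)
                  return 0;   /* verified bad: drop or count */
          return -1;          /* unknown: verify in SW if needed */
  }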
Amine Kherbouche
f03723017a remove unused ring includes
This patch removes all unused <rte_ring.h> headers.

Signed-off-by: Amine Kherbouche <amine.kherbouche@6wind.com>
2016-09-16 10:16:02 +02:00
Konstantin Ananyev
1160de6779 net/ixgbe: fix missed packet types on Rx
The ixgbe PMD Rx function(s) miss some packet types that are:
 - correctly recognised by the underlying HW;
 - marked as supported by ixgbe_dev_supported_ptypes_get().

Fixes: 9586ebd358 ("ixgbe: replace some offload flags with packet type")

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Tested-by: Olivier Matz <olivier.matz@6wind.com>
2016-06-23 15:59:02 +02:00
Olivier Matz
b37b528d95 mbuf: add new Rx flags for stripped VLAN
The behavior of PKT_RX_VLAN_PKT was not very well defined, resulting in
PMDs not advertising the same flags in similar conditions.

Following discussion in [1], introduce 2 new flags PKT_RX_VLAN_STRIPPED
and PKT_RX_QINQ_STRIPPED that are better defined:

  PKT_RX_VLAN_STRIPPED: a vlan has been stripped by the hardware and its
  tci is saved in mbuf->vlan_tci. This can only happen if vlan stripping
  is enabled in the RX configuration of the PMD.

For now, the old flag PKT_RX_VLAN_PKT is kept but marked as deprecated.
It should be removed from applications and PMDs in a future revision.

This patch also updates the drivers. For PKT_RX_VLAN_PKT:

- e1000, enic, i40e, mlx5, nfp, vmxnet3: done, PKT_RX_VLAN_PKT already
  had the same meaning as PKT_RX_VLAN_STRIPPED, only a minor update is
  required.
- fm10k: done, PKT_RX_VLAN_PKT already had the same meaning as
  PKT_RX_VLAN_STRIPPED, and vlan stripping is always enabled on fm10k.
- ixgbe: modification done (vector and normal), the old flag was set
  when a vlan was recognized, even if vlan stripping was disabled.
- the other drivers do not support vlan stripping.

For PKT_RX_QINQ_PKT, it was only supported on i40e, and the behavior was
already correct, so we can reuse the same bit value for
PKT_RX_QINQ_STRIPPED.

[1] http://dpdk.org/ml/archives/dev/2016-April/037837.html

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2016-06-15 17:18:57 +02:00
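A hedged application-side sketch of the better-defined semantics;
stripped_vlan_tci() is a hypothetical helper:

  #include <rte_mbuf.h>

  /* With PKT_RX_VLAN_STRIPPED, the tag is guaranteed to be gone from
   * the packet data and its TCI saved in metadata, whatever the PMD. */
  static uint16_t
  stripped_vlan_tci(const struct rte_mbuf *m)
  {
          if (m->ol_flags & PKT_RX_VLAN_STRIPPED)
                  return m->vlan_tci;
          return 0; /* any tag is still inline in the packet data */
  }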
Olivier Matz
fbfd99551c mbuf: add raw allocation function
Many drivers provide their own implementation of rte_mbuf_raw_alloc(),
duplicating the code. Introduce a new public function in rte_mbuf to
allocate a raw mbuf (uninitialized).

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2016-05-17 08:31:33 +02:00
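A hedged usage sketch; rx_replenish_one() is a hypothetical caller:

  #include <rte_mbuf.h>

  static struct rte_mbuf *
  rx_replenish_one(struct rte_mempool *mp)
  {
          /* skips the header reset of rte_pktmbuf_alloc(); Rx paths
           * overwrite those fields anyway */
          struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);

          /* if non-NULL, m is uninitialized: set the data offset and
           * lengths before handing it to anyone */
          return m;
  }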
Piotr Azarewicz
8ea3bacb37 ixgbe: fix bit masking in queue stop
The masking for the Rx/Tx enable bit was incorrect in the Rx and Tx
queue stop functions. Instead of using "& MASK" they used "| MASK",
which always returns true. This error was found by Coverity scan.

CID 13215 : Wrong operator used (CONSTANT_EXPRESSION_RESULT)
operator_confusion: txdctl | 33554432 is always 1/true regardless of the
values of its operand. This occurs as the logical second operand of
'&&'.

CID 13216 : Wrong operator used (CONSTANT_EXPRESSION_RESULT)
operator_confusion: rxdctl | 33554432 is always 1/true regardless of the
values of its operand. This occurs as the logical second operand of
'&&'.

Coverity issue: 13215
Coverity issue: 13216
Fixes: 029fd06d40 ("ixgbe: queue start and stop")

Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2016-05-06 15:51:22 +02:00
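The shape of the bug and the fix, sketched with the enable bit defined
locally (0x02000000 is the 33554432 from the Coverity report):

  #include <stdint.h>

  #define TXDCTL_ENABLE 0x02000000 /* queue-enable bit, = 33554432 */

  static int
  queue_stopped(uint32_t txdctl)
  {
          /* buggy: (txdctl | TXDCTL_ENABLE) is always non-zero, so
           * the poll loop concluded immediately */
          /* fixed: test the bit with '&' */
          return (txdctl & TXDCTL_ENABLE) == 0;
  }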
Stephen Hemminger
f83f50394b ixgbe: clean up code style
Run the ixgbe driver through checkpatch and fix the highlighted
issues: line spacing, some bad indentation, and in a couple of cases
using the (already present) short-circuit return to lessen indentation.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>

Applied with four additional fixes for issues highlighted by checkpatch
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
2016-05-06 15:51:22 +02:00
Wenzhuo Lu
533e050a06 ixgbe: fix packet type for VXLAN and NVGRE on X550
VxLAN & NVGRE are supported by x550. The HW can parse these packets
and report the type info to SW. For VxLAN & NVGRE packets there is a
difference: the HW reports the info of the inner header instead of
the outer header, but we always treat the info as belonging to the
outer header. So the packet type info is not right when x550 receives
VxLAN & NVGRE packets.

As x550 only supports IPv4 VxLAN & NVGRE packets, we know the outer
header of VxLAN is IPv4 + UDP, and the outer header of NVGRE is IPv4
only. What we don't know is whether there are optional fields in the
outer IPv4 header.

This patch implements packet type support for VxLAN & NVGRE, and it
also fixes the wrong packet type issue.

BTW:
It doesn't mark any existing commit as fixed because, although it
resolves an issue, it is more like a new feature than a fix.

Reported-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2016-04-08 22:33:19 +02:00
Aaron Conole
e64a0edb01 ixgbe: fix uninitialized warning
Silence a compiler warning that this variable may be used uninitialized.

Signed-off-by: Aaron Conole <aconole@redhat.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
2016-03-31 17:09:56 +02:00
Jianfeng Tan
78a38edf66 ethdev: query supported packet types
Add a new API, rte_eth_dev_get_supported_ptypes, to query which packet
types can be filled in by a given device. The device should already be
started or its PMD Rx burst function already chosen, since the
supported packet types may vary depending on the Rx function.

Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
2016-03-25 18:56:43 +01:00
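A hedged usage sketch (the port_id width has changed across releases);
supports_ipv4_ptype() is a hypothetical helper:

  #include <rte_ethdev.h>

  static int
  supports_ipv4_ptype(uint16_t port_id)
  {
          uint32_t ptypes[16];
          int i, n;

          /* returns the number of supported ptypes matching the mask;
           * only up to RTE_DIM(ptypes) entries are filled in */
          n = rte_eth_dev_get_supported_ptypes(port_id,
                          RTE_PTYPE_L3_MASK, ptypes, RTE_DIM(ptypes));
          for (i = 0; i < n && i < (int)RTE_DIM(ptypes); i++)
                  if (ptypes[i] == RTE_PTYPE_L3_IPV4)
                          return 1;
          return 0;
  }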
Wenzhuo Lu
a7740dc130 ixgbe: support new devices and MAC types
Add support for new devices and MAC types, as supported by the base
code update.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2016-03-16 17:09:27 +01:00
Stephen Hemminger
02fb58d4c7 ixgbe: fix whitespace
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Helin Zhang <helin.zhang@intel.com>
2016-03-16 17:04:37 +01:00
Stephen Hemminger
06554d381d ixgbe: speed up non-vector Tx
The freeing of mbufs in ixgbe is one of the observable hot spots
under load. Optimize it by doing bulk free of mbufs, using code
similar to i40e and fm10k.

Drop the no-longer-needed micro-optimization for the no-refcount flag.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2016-03-16 16:58:39 +01:00
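A sketch of the bulk-free pattern under stated assumptions:
TX_FREE_BULK is a local stand-in for the driver's batch size, and
today's public rte_pktmbuf_prefree_seg() is used for brevity:

  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  #define TX_FREE_BULK 32

  /* Gather same-pool segments and return them with one bulk mempool
   * operation instead of one put per mbuf. */
  static void
  bulk_free(struct rte_mbuf **pkts, unsigned int n)
  {
          struct rte_mbuf *stash[TX_FREE_BULK];
          unsigned int i, nb = 0;

          for (i = 0; i < n; i++) {
                  struct rte_mbuf *m = rte_pktmbuf_prefree_seg(pkts[i]);

                  if (m == NULL)
                          continue;
                  /* flush on overflow or on a pool change */
                  if (nb == TX_FREE_BULK ||
                      (nb > 0 && m->pool != stash[0]->pool)) {
                          rte_mempool_put_bulk(stash[0]->pool,
                                               (void **)stash, nb);
                          nb = 0;
                  }
                  stash[nb++] = m;
          }
          if (nb > 0)
                  rte_mempool_put_bulk(stash[0]->pool, (void **)stash, nb);
  }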
Wenzhuo Lu
d87c3058bb ixgbe: offload VxLAN and NVGRE Tx checksum on X550
This patch adds VxLAN & NVGRE Tx checksum offload. When the outer IP
header checksum offload flag is set, we set up the context descriptor
to enable this checksum offload.

Also update the release notes for VxLAN & NVGRE checksum offload support.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2016-03-13 11:52:58 +01:00
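A hedged application-side sketch of requesting the offload
(era-appropriate field and flag names; the 20-byte outer L3 length
assumes an option-less IPv4 header):

  #include <rte_ether.h>
  #include <rte_mbuf.h>

  static void
  request_outer_ip_cksum(struct rte_mbuf *m)
  {
          m->outer_l2_len = sizeof(struct ether_hdr);
          m->outer_l3_len = 20; /* outer IPv4 without options */
          /* the driver builds a context descriptor on seeing this */
          m->ol_flags |= PKT_TX_OUTER_IP_CKSUM;
  }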
Wenzhuo Lu
d909af8f72 ixgbe: offload VxLAN and NVGRE Rx checksum on X550
X550 will do VxLAN & NVGRE RX checksum off-load automatically.
This patch exposes the result of the checksum off-load.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2016-03-13 11:52:52 +01:00
Ravi Kerur
d6b324c00f mbuf: get DMA address
The macros RTE_MBUF_DATA_DMA_ADDR and RTE_MBUF_DATA_DMA_ADDR_DEFAULT
are defined in each PMD driver file. Convert the macros to inline
functions and move them to the common lib/librte_mbuf/rte_mbuf.h file.
PMD drivers include the rte_mbuf.h file directly/indirectly, hence no
additional header file inclusion is necessary.

Signed-off-by: Ravi Kerur <rkerur@gmail.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
2016-03-04 16:01:15 +01:00
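A sketch of what the new common inline helper computes, using the
mbuf field names of this era (buf_physaddr was later renamed):

  #include <rte_mbuf.h>

  /* bus address of the current data start; previously duplicated as
   * the RTE_MBUF_DATA_DMA_ADDR macro in each PMD */
  static inline uint64_t
  data_dma_addr(const struct rte_mbuf *mb)
  {
          return mb->buf_physaddr + mb->data_off;
  }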
Huawei Xie
693f715da4 remove extra parentheses in return statement
Fix the error reported by checkpatch:
  "ERROR: return is not a function, parentheses are not required"

Remove parentheses in returns like:
  "return (logical expressions)"

Remove parentheses when returning a function call like:
  "return (rte_mempool_lookup(...))"

Fixes: 6307b909b8 ("lib: remove extra parenthesis after return")

Signed-off-by: Huawei Xie <huawei.xie@intel.com>
2016-02-10 15:47:50 +01:00
Stephen Hemminger
1e0b2709fe ixgbe: use common functions to manage DMA zone
Adapt DMA memory for Xen at runtime.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
2015-11-13 11:47:20 +01:00
Konstantin Ananyev
71f39b07b6 ixgbe: fix Tx hang when RS distance exceeds HW limit
One of the ways to reproduce the issue:

testpmd <EAL-OPTIONS> -- -i --txqflags=0
testpmd> set fwd txonly
testpmd> set txpkts 64,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4
testpmd> set txsplit rand
testpmd> start

After some time, Tx on the ixgbe queue hangs,
and all packet transmission on that queue stops.

This bug was first reported and investigated by
Vlad Zolotarov <vladz@cloudius-systems.com>:
"We can reproduce this issue when stressed the xmit path with a lot of highly
fragmented TCP frames (packets with up to 33 fragments with non-headers
fragments as small as 4 bytes) with all offload features enabled."

The root cause is that ixgbe_xmit_pkts() in some cases violates the HW rule
that the distance between TDs with RS bit set should not exceed 40 TDs.

From the latest 82599 spec update:
"When WTHRESH is set to zero, the software device driver should set the RS bit
in the Tx descriptors with the EOP bit set and at least once in the 40
descriptors."

The fix is to make sure that the distance between TDs with the RS bit
set never exceeds the HW limit. As part of that fix, tx_rs_thresh for
the ixgbe PMD is not allowed to be greater than 32, to comply with the
HW restrictions.

With that fix, a slight slowdown of the full-featured ixgbe Tx path
might be observed (from our testing, up to 4%).

The ixgbe simple Tx path is unaffected by this patch.

Reported-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2015-11-12 00:22:26 +01:00
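A sketch of the rule the fix enforces, with the RS command bit defined
locally; last_desc_cmd() and its counter argument are hypothetical:

  #include <stdint.h>

  #define TXD_CMD_RS 0x08000000 /* Report Status: HW writes back */

  /* Set RS on the descriptor closing a packet whenever the distance
   * since the last RS reaches tx_rs_thresh (capped at 32 by this
   * fix), so it can never exceed the 40-descriptor HW limit. */
  static uint32_t
  last_desc_cmd(uint16_t *since_last_rs, uint16_t tx_rs_thresh)
  {
          if (*since_last_rs >= tx_rs_thresh) {
                  *since_last_rs = 0;
                  return TXD_CMD_RS;
          }
          return 0;
  }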
Pablo de Lara
6b6861c171 ethdev: check queue state before starting or stopping
Following the same approach taken with the dev_started field in the
rte_eth_dev_data structure, this patch adds two new fields to it, the
rx_queue_state and tx_queue_state arrays, which track which queues
have been started and which have not.

This is important to avoid trying to start/stop a queue twice, which
results in undefined behaviour (and may cause Rx/Tx disruption).

Mind that only the PMDs which have queue_start/stop functions
have been changed to update this field, as the functions will
check the queue state before switching it.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2015-11-04 17:52:14 +01:00
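A simplified sketch (not the exact ethdev code) of the guard the new
state arrays enable:

  #include <rte_ethdev.h>

  static int
  rx_queue_start_once(struct rte_eth_dev *dev, uint16_t qid)
  {
          /* already started: avoid the undefined double start */
          if (dev->data->rx_queue_state[qid] ==
              RTE_ETH_QUEUE_STATE_STARTED)
                  return 0;

          if (dev->dev_ops->rx_queue_start(dev, qid) == 0) {
                  dev->data->rx_queue_state[qid] =
                          RTE_ETH_QUEUE_STATE_STARTED;
                  return 0;
          }
          return -1;
  }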
Didier Pallard
6b39b947a9 ixgbe: remove useless fields in checksum offload
According to Table 7-38 (Valid Fields by Offload Option) of the
Intel 82599 10 GbE Controller Datasheet, the L4LEN field is not
needed for L4 XSUM computation by the hardware. So remove l4_len
from tx_offload_mask in the ixgbe_set_xmit_ctx function used to
build the context transmitted to the hardware.

Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2015-11-03 11:40:36 +01:00
Konstantin Ananyev
dee5f1fd5f ixgbe: get queue info and descriptor limits
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Remy Horton <remy.horton@intel.com>
2015-11-02 00:13:59 +01:00
Jingjing Wu
1f7b42e42e ixgbe: support DCB+RSS multi-queue mode
This patch enables the DCB+RSS multi-queue mode, and also fixes some
coding style issues.

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
2015-11-01 14:52:06 +01:00
Jingjing Wu
cb60ede6e3 ethdev: rename DCB field in config structs
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
2015-11-01 14:44:31 +01:00
Wenzhuo Lu
35ffb4ff14 ixgbevf: support RSS reta/hash query and update
This patch implements the VF RSS reta/hash query and update functions
on 10G NICs, but the update function is only provided for x550.
Because the other NICs don't have separate registers for the VF, we
don't want to let a VF change the shared RSS reta/hash registers;
doing so could change the behavior of the PF and the other VFs
without notice.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2015-10-28 18:30:05 +01:00
Wenzhuo Lu
f4d1598ee1 ixgbevf: support RSS config on x550
On x550 there are separate registers provided for VF RSS, while on the
other 10G NICs, for example the 82599, the VF and PF share the same
registers. This patch makes x550 use the VF-specific registers when
doing RSS configuration on a VF. The behavior of the other 10G NICs
doesn't change.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2015-10-28 18:30:05 +01:00
Wenzhuo Lu
4bee94a6c2 ixgbe: support 512 RSS entries on x550
Compared with the older NICs, x550's RSS redirection table is enlarged
to 512 entries. As the original code targets NICs with a 128-entry RSS
table, only part of the RSS table is set on x550, so RSS cannot work as
expected: it doesn't redirect the packets evenly. This patch configures
the entries beyond 128 on x550 so that RSS works well, and also updates
the query and update functions to support 512 entries.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2015-10-28 18:30:00 +01:00
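A hedged application-side sketch of programming a 512-entry table
(8 groups of RTE_RETA_GROUP_SIZE = 64 entries); spread_reta_512() is
a hypothetical helper:

  #include <string.h>
  #include <rte_ethdev.h>

  static int
  spread_reta_512(uint16_t port, uint16_t nb_queues)
  {
          struct rte_eth_rss_reta_entry64 conf[512 / RTE_RETA_GROUP_SIZE];
          uint16_t i;

          memset(conf, 0, sizeof(conf));
          for (i = 0; i < 512; i++) {
                  conf[i / RTE_RETA_GROUP_SIZE].mask |=
                          1ULL << (i % RTE_RETA_GROUP_SIZE);
                  conf[i / RTE_RETA_GROUP_SIZE].reta[i %
                          RTE_RETA_GROUP_SIZE] = i % nb_queues;
          }
          return rte_eth_dev_rss_reta_update(port, conf, 512);
  }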
Cunming Liang
45e73f4208 ixgbe: remove burst size restriction of vector Rx
On the receive side, the burst size now floor-aligns to
RTE_IXGBE_DESCS_PER_LOOP, a power of 2. Under this rule, a burst size
of less than 4 still won't receive anything (before this change, a
burst size of less than 32 couldn't receive anything).
_recv_*_pkts_vec returns no more than 32 (RTE_IXGBE_RXQ_REARM_THRESH)
packets.

On the transmit side, the max burst size is no longer bound to a
constant, but crossing the tx_rs_thresh boundary still needs to be
checked.

No obvious performance drop was found on either recv_pkts_vec or
recv_scattered_pkts_vec at burst size 32.

Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Reviewed-by: Zoltan Kiss <zoltan.kiss@linaro.org>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2015-09-09 15:17:41 +02:00
Wenzhuo Lu
b7fcd13c90 ixgbe: fix X550 DCB
There's a DCB issue on x550. For 8 TCs, if a packet with user priority
6 or 7 is injected to the NIC, the NIC puts 3 packets into the queue.
There's a similar issue for 4 TCs.
The root cause is that RXPBSIZE is not right: the RXPBSIZE of x550 is
384, different from the other 10G NICs. We need to set RXPBSIZE
according to the NIC type.

Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
2015-09-09 15:17:41 +02:00
Thomas Monjalon
ab351fe1c9 mbuf: remove packet type from offload flags
The extended unified packet type is now part of the standard ABI.
As the mbuf struct is changed, the mbuf library version is incremented.

Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
2015-09-03 19:22:48 +02:00
Konstantin Ananyev
33dd07136f ixgbe: fix Rx with buffer address not word aligned
Niantic HW expects the Header Buffer Address in the RXD to be word
aligned, so if an mbuf's buf_physaddr is not word aligned the Rx path
will not work properly.
Right now, the ixgbe PMD always sets the Packet Buffer Address (PBA)
and the Header Buffer Address (HBA) to the same value:
buf_physaddr + RTE_PKTMBUF_HEADROOM.
As the ixgbe PMD doesn't support the split header feature anyway,
the issue can be fixed simply by always setting the HBA in the RXD
to zero.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2015-08-03 22:45:52 +02:00
Stephen Hemminger
371fb68f18 ixgbe: fix log level of debug messages
All the debug chatter in the system log causes complaints from users.
Change the INFO messages to DEBUG for normal startup output.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2015-07-30 20:15:56 +02:00
Konstantin Ananyev
1b20b07d86 ixgbe: fix scalar scattered Rx with CRC
For the 2.1 release, in an attempt to minimize the number of Rx
routines to support, the ixgbe scatter and LRO Rx routines were merged
into one that can handle both cases.
However, I completely missed the fact that while LRO can only be used
when HW CRC stripping is enabled, scatter Rx should work in both cases
(HW CRC strip on/off).
This patch restores the missed functionality.

Fixes: 9d8a92628f ("ixgbe: remove simple scalar scattered Rx method")

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
2015-07-30 02:15:32 +02:00
Xuelin Shi
2e49ae79eb ixgbe: fix data access on big endian cpu
1. When the CPU uses data owned by ixgbe, it must use rte_le_to_cpu_xx(...).
2. When the CPU fills data for ixgbe, it must use rte_cpu_to_le_xx(...).
3. Check the PCI status with a converted constant.

Signed-off-by: Xuelin Shi <xuelin.shi@freescale.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2015-07-30 02:15:32 +02:00
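A sketch of rules 1 and 2 on a hypothetical descriptor word; the
helper names and dd_bit parameter are illustrative:

  #include <stdint.h>
  #include <rte_byteorder.h>

  /* NIC fields are little-endian: convert on every CPU read/write so
   * big-endian hosts behave the same as x86. */
  static int
  desc_done(const volatile uint32_t *status_le, uint32_t dd_bit)
  {
          uint32_t status = rte_le_to_cpu_32(*status_le); /* rule 1 */

          return (status & dd_bit) != 0; /* compare in CPU order */
  }

  static void
  desc_fill(volatile uint64_t *addr_le, uint64_t dma_addr)
  {
          *addr_le = rte_cpu_to_le_64(dma_addr); /* rule 2 */
  }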