The 40G NIC supports redirection table sizes (128/512/64 entries)
different from the single 128-entry table of the 1G and 10G NICs, so
support for multiple redirection table sizes is needed.
It includes:
* Redefine 'struct rte_eth_rss_reta' in ethdev.
- Replaced by 'struct rte_eth_rss_reta_entry64', which contains 64
entries and a 64-bit mask.
- An array of the new structure can describe any number of redirection
table entries, as long as that number is a multiple of 64. This leaves
room for future expansion of the redirection table (see the sketch
after this list).
* Redefinition of relevant interfaces in ethdev.
- Interface of reta update has been redefined with new parameters.
- Interface of reta query has been redefined with new parameters.
* Rework of 1G PMD in igb.
- reta update has been reworked.
- reta query has been reworked.
* Rework of 10G PMD in ixgbe.
- reta update has been reworked.
- reta query has been reworked.
* Rework of 40G PMD (PF only) in i40e.
- reta update has been reworked.
- reta query has been reworked.
* Implement relevant commands in testpmd.
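As a rough illustration of how the new layout is meant to be used, a
minimal sketch (assuming the reworked ethdev names:
rte_eth_dev_rss_reta_update, RTE_RETA_GROUP_SIZE and the mask/reta
fields; port id, table size and queue count are caller-supplied):

#include <string.h>
#include <rte_ethdev.h>

/* Spread a port's RETA entries evenly across nb_queues RX queues.
 * A 40G port with a 512-entry table needs 512 / 64 = 8 groups of
 * 'struct rte_eth_rss_reta_entry64'. */
static int
setup_reta(uint8_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[reta_size / RTE_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < reta_size; i++) {
		uint16_t grp = i / RTE_RETA_GROUP_SIZE;
		uint16_t idx = i % RTE_RETA_GROUP_SIZE;

		reta_conf[grp].mask |= 1ULL << idx;       /* mark entry as valid */
		reta_conf[grp].reta[idx] = i % nb_queues; /* target RX queue */
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}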
Test report: http://dpdk.org/ml/archives/dev/2014-November/008362.html
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Erlu Chen <erlu.chen@intel.com>
As more and more information differs between PF and VF, separate
'dev_infos_get' ops have been implemented for each. In addition, they
now also report the redirection table size.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The scattered_rx configuration is updated in dev_start().
For the execution sequence "stop, re-configure and then re-start",
the new configuration is expected to take effect, but during
re-configuration the stored value may still be the old one.
This patch clears the configuration in dev_stop() so that the best
Rx routine is always selected.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Configure the PFVML2FLT register in the ixgbe PMD to enable it to
receive broadcast and multicast packets; also factorize the common
logic with ixgbe_set_pool_rx_mode.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
When using the Intel C++ compiler (icc) 14.0.1.106 or the older icc
13.x versions, the mbuf initializer variable was not getting configured
correctly, as the mb_def variable was not set correctly. This is due to
an icc issue (DPD200249565, already fixed in icc 14.0.2 and newer
compiler releases) where it incorrectly calculates the field offsets
with initializers when zero-sized fields are used in a structure.
To work around this, the code in ixgbe_rxq_vec_setup does not set up
the fields using an initializer, but instead assigns the values
individually in code.
NOTE: There is no performance impact from this change, as the queue
setup functions are not data-plane APIs and are only used at app
initialization.
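As an illustration of the workaround, a minimal sketch (the exact set
of fields initialized in ixgbe_rxq_vec_setup may differ; port_id is
caller-supplied here):

#include <string.h>
#include <rte_mbuf.h>

/* Workaround: assign the template-mbuf fields one by one instead of
 * using a designated initializer, which icc < 14.0.2 lays out wrongly
 * when the structure contains zero-sized fields. */
static void
init_mbuf_template(struct rte_mbuf *mb_def, uint8_t port_id)
{
	memset(mb_def, 0, sizeof(*mb_def));
	mb_def->nb_segs = 1;                     /* single-segment template */
	mb_def->data_off = RTE_PKTMBUF_HEADROOM; /* default headroom */
	mb_def->port = port_id;                  /* owning port */
}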
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
If the vector PMD option is turned on in the compile-time config file,
then always call the vector rxq setup, since the vector PMD can now be
used for receiving jumbo frames that need chained mbufs, a.k.a.
scattered packets. Until now, this function was not being called when
receiving scattered packets, potentially leading to problems with mbufs
not being properly initialized.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Changchun Ouyang <Changchun.ouyang@intel.com>
Remove the "static" prefix to the template mbuf variable in
ixgbe_rxq_vec_setup function. This will then allow different
threads to initialize different RX queues at the same time,
without one overwriting the other's data.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Changchun Ouyang <Changchun.ouyang@intel.com>
An error has been introduced by commit 1f22652ca8
("fix perf regression due to moved pool ptr").
Fix the case where RTE_MBUF_REFCNT is disabled.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch silences a compiler complaint from older GCC versions
(less than 4.6).
Note: only GCC 4.x versions are supported.
Many sample apps use duplicated code to set up the rte_eth_txconf and
rte_eth_rxconf structures. This patch lets the user get a default
optimal RX/TX configuration through rte_eth_dev_info_get, while any
parameter can still be tweaked as desired before setting up the queues.
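For example, a minimal sketch of the intended flow (queue sizes, socket
id and mempool are caller-supplied; the tweak shown is arbitrary):

#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
setup_queues(uint8_t port_id, uint16_t nb_rxd, uint16_t nb_txd,
	     unsigned int socket_id, struct rte_mempool *mbuf_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	struct rte_eth_txconf txconf;
	int ret;

	rte_eth_dev_info_get(port_id, &dev_info);
	rxconf = dev_info.default_rxconf;   /* PMD's optimal RX defaults */
	txconf = dev_info.default_txconf;   /* PMD's optimal TX defaults */
	rxconf.rx_drop_en = 1;              /* tweak a single field */

	ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id,
				     &rxconf, mbuf_pool);
	if (ret < 0)
		return ret;
	return rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, &txconf);
}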
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
[Thomas: split patch]
Since commit aae1047905 ("use the right debug macro"),
DEBUGOUT was replaced by PMD_DRV_LOG which requires at least
2 arguments. But the level argument was missing.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
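I.e. roughly this shape of change (the message text is illustrative,
not taken from the patch):

-	PMD_DRV_LOG("RX queue setup failed");
+	PMD_DRV_LOG(ERR, "RX queue setup failed");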
Make a small improvement to slow path TX performance by adding in a
prefetch for the second mbuf cache line.
Also move assignment of l2/l3 length values only when needed.
What I've done with the prefetches is two-fold:
1) changed it from prefetching the mbuf (first cache line) to prefetching
the mbuf pool pointer (second cache line) so that when we go to access
the pool pointer to free transmitted mbufs we don't get a cache miss. When
clearing the ring and freeing mbufs, the pool pointer is the only mbuf
field used, so we don't need that first cache line.
2) changed the code to prefetch earlier - in effect to prefetch one mbuf
ahead. The original code prefetched the mbuf to be freed as soon as it
started processing the mbuf to replace it. Instead now, every time we
calculate what the next mbuf position is going to be we prefetch the mbuf
in that position (i.e. the mbuf pool pointer we are going to free the mbuf
to), even while we are still updating the previous mbuf slot on the ring.
This gives the prefetch much more time to resolve and get the data we need
in the cache before we need it.
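A rough sketch of the pattern from point 2), with names simplified from
the real TX refill code:

#include <rte_mbuf.h>
#include <rte_prefetch.h>

static void
tx_refill_sketch(struct rte_mbuf **sw_ring, struct rte_mbuf **tx_pkts,
		 uint16_t nb_pkts, uint16_t tx_id, uint16_t ring_mask)
{
	uint16_t i;

	for (i = 0; i < nb_pkts; i++) {
		uint16_t next_id = (tx_id + 1) & ring_mask;

		/* prefetch the cache line holding the *next* mbuf's pool
		 * pointer while the current slot is still being updated */
		rte_prefetch0(&sw_ring[next_id]->pool);

		/* free the mbuf previously transmitted from this slot and
		 * install the new one */
		if (sw_ring[tx_id] != NULL)
			rte_pktmbuf_free_seg(sw_ring[tx_id]);
		sw_ring[tx_id] = tx_pkts[i];

		tx_id = next_id;
	}
}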
In terms of performance difference, a quick sanity test using testpmd
on a Xeon (Sandy Bridge uarch) platform showed performance increases of
approximately 8-18%, depending on the particular RX path used in
conjunction with this TX path code.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The receive functions for packets do not modify the next pointer so
the next pointer should always be cleared on mbuf free, just in case.
The slow-path TX needs to clear it, and the standard mbuf free function
also needs to clear it. Fast-path TX does not handle chained mbufs, so
it is unaffected.
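A minimal sketch of the idea (helper name is illustrative, not the
actual ixgbe function):

#include <rte_mbuf.h>

static inline void
free_seg_clear_next(struct rte_mbuf *m)
{
	m->next = NULL;            /* RX paths never reset it */
	rte_pktmbuf_free_seg(m);   /* return the single segment to its pool */
}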
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Add a new file to support the X550 controller, and update the Makefile
and README accordingly. It also updates the API functions, DCB-related
functions, mailbox-related functions, etc. to support X550.
In addition, some new macros used by X550 are added.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: merge dependent patches]
- Implement functions to do I2C byte read and write
- Support 82599_QSFP_SF_QP and 82599_LS
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: split patch]
- Store lan_id and the PHY semaphore mask in the hardware PHY
information, and use them to control reading and writing PHY registers
in the IXGBE base code.
- Extend the mask from 16 bits to 32 bits for releasing or acquiring
the SWFW semaphore in the IXGBE base code. It is used when reading and
writing I2C bytes.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
[Thomas: merge dependent patches]
Remove unnecessary delay when setting up physical link and negotiating
in IXGBE base code.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Introduce a new argument to let the caller determine whether it needs
to read and return data after executing a host interface command in the
IXGBE base code.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Refine the EEPROM checksum calculation function in the IXGBE base code
so that it returns either a negative error code on failure or the
16-bit checksum.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Implement a function to check flow director command completion in the
IXGBE base code, and replace the related code snippet with this
function.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
This patch defines new error types in the IXGBE base code; they are
used to report different kinds of errors.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Clang fails with an error about a variable being used uninitialized:
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c:67:30:
error: variable 'dma_addr0' is uninitialized
when used here [-Werror,-Wuninitialized]
dma_addr0 = _mm_xor_si128(dma_addr0, dma_addr0);
^~~~~~~~~
This error can be fixed by replacing the call to xor, which takes two
parameters, with a call to setzero, which takes none.
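I.e. roughly:

-	dma_addr0 = _mm_xor_si128(dma_addr0, dma_addr0);
+	dma_addr0 = _mm_setzero_si128();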
Reported-by: Keith Wiles <keith.wiles@windriver.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Rename start_?x_per_q to ?x_deferred_start
and add comments.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
No need to restrict usage of non-Intel SFPs.
If (hw->phy.type == ixgbe_phy_sfp_intel) is false,
a warning will be logged.
The restriction was disabled for ixgbe and enabled but unused for i40e.
Since the introduction of vector PMD, a bug in ixgbe_rxq_rearm could
cause a crash. As long as the memory pool allocated to the RX queue
has mbufs available, there is no problem. After allocation of _all_
mbufs from the memory pool, previously returned mbufs by
rte_eth_rx_burst could be accessed by subsequent calls to the PMD and
could be returned by subsequent calls to rte_eth_rx_burst. From the
perspective of the application, this means that fields within the mbuf
could change and that previously allocated mbufs could appear multiple
times.
After a failed mbuf allocation, the dd bits should indicate that the
packets are not ready. To achieve this, the patch adds code to reset
the dd bits in the first RTE_IXGBE_DESCS_PER_LOOP packets of the next
RTE_IXGBE_RXQ_REARM_THRESH packets, but only if the next
RTE_IXGBE_RXQ_REARM_THRESH packets that will be accessed contain
previously allocated packets.
Setting the bits is not enough. The bits are checked _after_ setting
the mbuf fields, thus a mechanism is needed to prevent the previously
used mbuf pointers from being accessed during the speculative load of
the mbuf fields. For this reason, not only the dd bits are reset, but
also the mbufs associated to those descriptors are set to point to a
"fake" mbuf.
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
When Flow Director was used together with bulk alloc, the id and hash
were swapped when a packet matched a flow director filter, due to
improper fdir field initialization.
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Helin Zhang <helin.zhang@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
[Thomas: merged with mbuf changes]
'init' messages should always be logged and filtered at runtime by rte_log.
All the more so as these messages are not in the datapath.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
- remove leading \n in some messages,
- remove trailing \n in some messages,
- split multi-line messages.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Reviewed-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>