RSS offload types were defined separately for 1/10G and 40G NICs,
and had no relationship with flow types. These modifications unify
the RSS offload types across all PMDs. The unified RSS offload types
have new, common names which can be used by any PMD or
application, and are decoupled from specific hardware.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
[Thomas: merge with fm10k]
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch defines new functions dealing with ntuple filters, which
correspond to the 2-tuple filter on 82580 and i350 hardware, and to the
5-tuple filter on 82576 hardware.
It removes the old functions which deal with 2-tuple and 5-tuple filters in the igb driver.
Ntuple filters are handled through the eth_igb_filter_ctrl entry point.
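For illustration only (not part of the patch), an application could reach this
entry point through the generic filter_ctrl API roughly as below; the field
values and the helper name are hypothetical:

#include <stdint.h>
#include <netinet/in.h>
#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_ip.h>

/* Hypothetical example: add a 5-tuple filter via the generic filter_ctrl
 * API, which the igb driver now serves through eth_igb_filter_ctrl. */
static int
add_tcp_ntuple_filter(uint8_t port_id)
{
        struct rte_eth_ntuple_filter ntuple = {
                .flags       = RTE_5TUPLE_FLAGS,
                .dst_ip      = rte_cpu_to_be_32(IPv4(192, 168, 0, 1)),
                .dst_ip_mask = UINT32_MAX,   /* match the full address */
                .proto       = IPPROTO_TCP,
                .proto_mask  = UINT8_MAX,
                .priority    = 1,
                .queue       = 1,            /* steer matches to RX queue 1 */
        };

        return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_NTUPLE,
                                       RTE_ETH_FILTER_ADD, &ntuple);
}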
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch defines new functions dealing with the SYN filter.
It removes the old SYN filter functions in the igb driver.
The SYN filter is handled through the eth_igb_filter_ctrl entry point.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
This patch defines new functions dealing with the flex filter.
It removes the old flex filter functions in the igb driver.
The flex filter is handled through the eth_igb_filter_ctrl entry point.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
The e1000 registers are little endian, but the CPU may not be.
Add the necessary conversions:
rte_cpu_to_le_32(...) for PCI writes,
rte_le_to_cpu_32(...) for PCI reads.
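A minimal sketch of the conversion pattern, with hypothetical helper names
(the real macros live in the driver's osdep header):

#include <stdint.h>
#include <rte_byteorder.h>

/* Device registers are little endian regardless of the host CPU. */
static inline void
pci_reg_write(volatile uint32_t *addr, uint32_t value)
{
        *addr = rte_cpu_to_le_32(value);    /* convert before the PCI write */
}

static inline uint32_t
pci_reg_read(volatile uint32_t *addr)
{
        return rte_le_to_cpu_32(*addr);     /* convert after the PCI read */
}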
Signed-off-by: Xuelin Shi <xuelin.shi@freescale.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
To differentiate libraries that break ABI, we add a library version number
suffix to the library, which must be incremented when a given library's ABI is
broken. This patch enforces that addition, sets the initial ABI soname
extension to 1 for each library and creates a symlink to the base SONAME so that
the test applications will link properly.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Add linker version script files to each DPDK library to put a stake in the
ground from which we can start cleaning up APIs.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
rte_eth_stats_get() is the only API that should call the device
statistics function directly, and it already does a memset of the
resulting structure since commit 02331c16ec. Doing a
memset() in the driver is therefore redundant and should be removed.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
[David: remove also in igbvf and pcap PMDs]
Acked-by: David Marchand <david.marchand@6wind.com>
This patch removes the old functions which deal with the ethertype filter in the igb driver.
It also defines eth_igb_filter_ctrl, which is bound to the filter_ctrl API,
so that ethertype filters can be handled through this new entry point.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
CACHE_LINE_SIZE is a macro defined in machine/param.h in FreeBSD and
conflicts with the DPDK macro of the same name.
Add the RTE_ prefix to avoid the conflict.
CACHE_LINE_MASK and CACHE_LINE_ROUNDUP are also prefixed.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
[Thomas: updated on HEAD, including PPC]
Some of the NICs supported by DPDK can accelerate TCP
traffic by using segmentation offload. The application prepares a packet
with a valid TCP header of up to 64K and delegates the
segmentation to the NIC.
Implement the generic part of TCP segmentation offload in rte_mbuf. It
introduces 2 new fields in rte_mbuf: l4_len (length of L4 header in bytes)
and tso_segsz (MSS of packets).
To delegate the TCP segmentation to the hardware, the user has to
(see the sketch after this list):
- set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
PKT_TX_TCP_CKSUM)
- set the flag PKT_TX_IPV4 or PKT_TX_IPV6
- set PKT_TX_IP_CKSUM if it's IPv4, and set the IP checksum to 0 in
the packet
- fill the mbuf offload information: l2_len, l3_len, l4_len, tso_segsz
- calculate the pseudo header checksum without taking ip_len into account,
and set it in the TCP header, for instance by using
rte_ipv4_phdr_cksum(ip_hdr, ol_flags)
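A minimal sketch of these steps for an IPv4/TCP packet, using the flags and
fields listed above (the helper name is hypothetical):

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_tcp.h>

/* Hypothetical helper: prepare an IPv4/TCP mbuf for hardware TSO. */
static void
prepare_tso(struct rte_mbuf *m, struct ipv4_hdr *ip, struct tcp_hdr *tcp,
            uint16_t mss)
{
        m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
        ip->hdr_checksum = 0;                     /* HW fills the IP checksum */

        m->l2_len = sizeof(struct ether_hdr);
        m->l3_len = sizeof(struct ipv4_hdr);
        m->l4_len = (tcp->data_off & 0xf0) >> 2;  /* TCP header length in bytes */
        m->tso_segsz = mss;

        /* pseudo-header checksum, not including ip_len */
        tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
}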
The API is inspired by the ixgbe hardware (the next commit adds the
support for ixgbe), but it seems generic enough to be used for other
hw/drivers in the future.
This commit also reworks the way l2_len and l3_len are used in the igb
and ixgbe drivers, as l2_l3_len is not available anymore in the mbuf.
Signed-off-by: Mirek Walukiewicz <miroslaw.walukiewicz@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This definition is specific to the Intel PMDs, and its comment,
"indicate what bits required for building TX context", shows that it
should not be in the generic rte_mbuf.h but in the PMD driver.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
According to the Intel® 82599 10 GbE Controller Datasheet (Table 7-38), both
L2 and L3 lengths are needed to offload the IP checksum.
Note that the e1000 driver does not need to be patched as it already
contains the fix.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
As 40G NICs support redirection table sizes (128/512/64 entries)
different from that (128 entries) of 1G and 10G NICs, support for
multiple redirection table sizes is needed (a usage sketch follows
the summary below).
It includes:
* Redefine 'struct rte_eth_rss_reta' in ethdev.
- To 'struct rte_eth_rss_reta_entry64' which contains 64
entries and a 64-bit mask.
- An array of the above new structure can be used for any number of
redirection table entries, as long as the number is a multiple
of 64. This is quite flexible for future expansion of the
redirection table.
* Redefinition of relevant interfaces in ethdev.
- Interface of reta update has been redefined with new parameters.
- Interface of reta query has been redefined with new parameters.
* Rework of 1G PMD in igb.
- reta update has been reworked.
- reta query has been reworked.
* Rework of 10G PMD in ixgbe.
- reta update has been reworked.
- reta query has been reworked.
* Rework of 40G PMD (PF only) in i40e.
- reta update has been reworked.
- reta query has been reworked.
* Implement relevant commands in testpmd.
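For illustration only, a hedged sketch of how an application might fill the
new structure for a 128-entry table, assuming the mask/reta field layout
described above and the rte_eth_dev_rss_reta_update entry point:

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical example: spread a 128-entry redirection table over
 * nb_queues RX queues using two 64-entry groups. */
static int
setup_reta_128(uint8_t port_id, uint16_t nb_queues)
{
        struct rte_eth_rss_reta_entry64 reta_conf[2];  /* 128 / 64 = 2 groups */
        uint16_t i;

        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < 128; i++) {
                reta_conf[i / 64].mask |= 1ULL << (i % 64);
                reta_conf[i / 64].reta[i % 64] = i % nb_queues;
        }
        return rte_eth_dev_rss_reta_update(port_id, reta_conf, 128);
}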
Test report: http://dpdk.org/ml/archives/dev/2014-November/008362.html
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Erlu Chen <erlu.chen@intel.com>
As more and more information differs between PF and VF,
the 'dev_infos_get' op has been implemented separately for each. In
addition, a new 'reta_size' field has been added to
'struct rte_eth_dev_info' for returning the redirection table size.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Configure the VM offload register in the igb PMD to enable it to receive broadcast and multicast packets.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Many sample apps use duplicated code to set the rte_eth_txconf and rte_eth_rxconf
structures. This patch allows the user to get a default optimal RX/TX configuration
through rte_eth_dev_info_get(), while still allowing any parameter to be tweaked as
wished before setting up the queues.
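A minimal sketch, assuming the defaults are exposed as default_rxconf in
'struct rte_eth_dev_info':

#include <stdint.h>
#include <rte_ethdev.h>

/* Hypothetical example: start from the PMD's default RX configuration
 * and tweak only what the application cares about. */
static int
setup_rx_queue(uint8_t port_id, uint16_t nb_rxd, unsigned int socket_id,
               struct rte_mempool *mb_pool)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rxconf rxconf;

        rte_eth_dev_info_get(port_id, &dev_info);
        rxconf = dev_info.default_rxconf;   /* PMD-provided optimal defaults */
        rxconf.rx_drop_en = 1;              /* example of a local tweak */

        return rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id,
                                      &rxconf, mb_pool);
}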
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@6wind.com>
[Thomas: split patch]
igb_ethdev.c contains the function eth_igb_infos_get() which should set the
number of tx/rx queues supported by the hardware. It contains a huge
switch statement, but there is no i211 case!
Also, there are a few other places which mention i210 but not i211.
I didn't check it enough to say it is totally correct.
For now I see that it is just able to send and receive some packets.
Signed-off-by: Sergey Mironov <grrwlf@gmail.com>
Reviewed-by: Helin Zhang <helin.zhang@intel.com>
[Thomas: fix indent]
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
'init' messages should always be logged and filtered at runtime by rte_log.
All the more so as these messages are not in the datapath.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
- remove leading \n in some messages,
- remove trailing \n in some messages,
- split multi-line messages,
- introduce PMD_INIT_FUNC_TRACE macro and use it instead of
PMD_INIT_LOG(DEBUG, "some_func")
Signed-off-by: David Marchand <david.marchand@6wind.com>
Reviewed-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Prepare for the next commit: indent the sections where log messages will be
modified so that the next patch is only about \n changes.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Since the base driver always adds a trailing \n, add a PMD_DRV_LOG_RAW macro
that will not add one.
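A hedged sketch of the intended relationship between the two macros (the
actual definitions live in the driver's log header):

/* RAW variant: the caller controls the line ending. */
#define PMD_DRV_LOG_RAW(level, fmt, args...) \
        RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)

/* Regular variant: always appends the trailing newline itself. */
#define PMD_DRV_LOG(level, fmt, args...) \
        PMD_DRV_LOG_RAW(level, fmt "\n", ## args)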
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
- We should not use the DEBUGOUT* / DEBUGFUNC macros in the PMD.
These macros are compatibility wrappers for the base driver.
- We should avoid calling RTE_LOG directly as the PMD provides a wrapper for logs.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Reviewed-by: Jay Rolette <rolette@infiniteio.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The offload flags field (ol_flags) was 16-bits and had no further room
for expansion. This patch increases the field size to 64-bits, using up
the remaining reserved space in the single-cache-line mbuf.
NOTE: none of the values for existing flags have been changed, i.e. no
new numbers have been explicitly reserved between existing flag
definitions.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
The mbuf structure already contains a pointer to the beginning of the
buffer (m->buf_addr). It is not needed to spend 8 bytes again to store
another pointer to the beginning of the data.
Using a 16-bit unsigned integer offset is enough as we know that an mbuf is
never longer than 64KB. We gain 6 bytes in the structure thanks to
this modification.
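A hedged sketch of the idea, assuming the offset field is named data_off: the
data pointer is simply recomputed from the buffer start.

#include <rte_mbuf.h>

/* Hypothetical accessor: recover the data pointer from buf_addr plus the
 * 16-bit offset, instead of storing a second 8-byte pointer in the mbuf. */
static inline void *
mbuf_data(const struct rte_mbuf *m)
{
        return (char *)m->buf_addr + m->data_off;
}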
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
* Updated to apply to the latest code on mainline.
* Disabled the vector PMD in the config as it relies heavily on the mbuf layout.
This will be re-enabled in a subsequent commit once the vector PMD has been
reworked to take account of the mbuf changes.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The vlan_macip structure combined a vlan tag id with l2 and l3 headers
lengths for tracking offloads. However, this structure was only used as
a unit by the e1000 and ixgbe drivers, not generally.
This patch removes the structure from the mbuf header and places the
fields into the mbuf structure directly at the required point, without
any net effect on the structure layout. This allows us to treat the vlan
tags and header length fields as separate for future mbuf changes. The
drivers which were written to use the combined structure still do so,
using a driver-local definition of it.
Reduce the perf regression caused by splitting the vlan_macip field. This is
done by providing a single uint16_t value to allow writing/clearing
the l2 and l3 lengths together. There is still a small perf hit to the
slow path TX due to the reads from vlan_tci and the l2/l3 lengths being
separated. (<5% in my tests with testpmd with no extra params.)
Unfortunately, this cannot be eliminated without restoring the vlan
tags and l2/l3 lengths as a combined 32-bit field. That would prevent
us from ever looking to move those fields about and is an artificial tie
that applies only for performance in the igb and ixgbe drivers. Therefore,
this patch keeps the vlan_tci field separate from the lengths as the
best solution going forward.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
In some cases we may want to tag a packet for a particular destination
or output port, so rename the "in_port" field in the mbuf to just "port"
so that it can be re-used for this purpose if an application needs it.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The rte_pktmbuf structure was initially included in the rte_mbuf
structure. This was needed when there were 2 types of mbuf (ctrl and
packet). As the control mbuf has been removed, we can merge the
rte_pktmbuf into the rte_mbuf structure.
Advantages of doing this:
- the access to mbuf fields is easier (ex: m->data instead of m->pkt.data)
- it makes the structure more consistent: for instance, there was no reason
to have the ol_flags field in rte_mbuf
- it will allow a deeper reorganization of the rte_mbuf structure in the
next commits, allowing several bytes to be gained in it
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
[Bruce: updated for latest code and new example apps]
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
The initial role of rte_ctrlmbuf is to carry generic messages (data
pointer + data length), but it is not used by DPDK or its applications.
Keeping it implies:
- losing 1 byte in the rte_mbuf structure
- having some dead code in rte_mbuf.[ch]
This patch removes this feature. Thanks to it, it is now possible to
simplify the rte_mbuf structure by merging the rte_pktmbuf structure
into it. This is done in the next commit.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
* Updated patch to HEAD.
* Modified patch to retain the old function names for ctrl mbufs as
macros. This helps with app compatibility, and allows the concept
of a control mbuf to be reintroduced via a single-bit flag in
a future change.
* Updated the packet framework ip_pipeline example application to
work following this change.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The function rte_snprintf serves no useful purpose. It is the
same as snprintf() for all valid inputs. Deprecate it and
replace all uses in current code.
Leave the tests for the deprecated function in place.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Only some devices support the link state interrupt configuration option.
Link state control does not work in virtual drivers
(virtio, vmxnet3, igbvf, and ixgbevf). Instead of having the application
try to guess whether it will work or not, provide a driver flag that
can be checked instead.
Note: previously, if the device driver did not support link state control,
the code would simply never detect link transitions. This flag prevents
that.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
[Thomas: rename flag]
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Intel PMDs are built on top of the base drivers which are provided by Intel
and shouldn't be modified, so that batch upgrades from Intel remain easy.
The base driver is a "shared code" between many projects, but in DPDK
the "base driver" naming makes more sense.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
This patch adds two new functions to the ethdev API to retrieve the current MTU
and change the MTU of a port.
Only the .mtu_set function is PMD specific.
The PMD should update its max_rx_pkt_len if needed.
This operation has been implemented for rte_em_pmd, rte_igb_pmd and
rte_ixgbe_pmd.
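A minimal usage sketch; the function names below are assumed to be the two new
ethdev entry points:

#include <stdint.h>
#include <rte_ethdev.h>

/* Hypothetical example: read the current MTU and raise it to 9000 bytes. */
static int
bump_mtu(uint8_t port_id)
{
        uint16_t mtu;

        if (rte_eth_dev_get_mtu(port_id, &mtu) != 0)
                return -1;
        if (mtu < 9000)
                return rte_eth_dev_set_mtu(port_id, 9000);
        return 0;
}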
Signed-off-by: Samuel Gauthier <samuel.gauthier@6wind.com>
Signed-off-by: Ivan Boule <ivan.boule@6wind.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
We might want to be sure the scattered packets reception handler is selected in a
PMD. This makes it possible to change the MTU later, without needing to
restart the port.
It is then the PMD's duty to indicate that it enabled the scattered reception
handler by setting dev->data->scattered_rx to 1.
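A hedged sketch, assuming the request is exposed as an enable_scatter bit in
'struct rte_eth_rxmode':

#include <rte_ethdev.h>

/* Hypothetical example: request the scattered RX handler up front so the
 * MTU can be raised later without restarting the port. */
static const struct rte_eth_conf port_conf = {
        .rxmode = {
                .enable_scatter = 1,   /* assumed field name for the request */
        },
};

/* used later as: rte_eth_dev_configure(port_id, 1, 1, &port_conf); */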
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Add an autoneg field to the flow control parameters.
This makes it easier to understand why changing some parameters does not always
have the expected result.
Changing autoneg is not supported at the moment.
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This patch adds a new function to the ethdev API to retrieve the current flow
control configuration.
This operation has been implemented for rte_em_pmd, rte_igb_pmd and
rte_ixgbe_pmd.
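A minimal sketch of reading the current configuration (including the autoneg
field from the previous patch):

#include <stdint.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical example: dump the current flow control settings of a port. */
static int
show_flow_ctrl(uint8_t port_id)
{
        struct rte_eth_fc_conf fc_conf;

        if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) != 0)
                return -1;

        printf("mode=%d high_water=%u low_water=%u autoneg=%u\n",
               (int)fc_conf.mode, fc_conf.high_water, fc_conf.low_water,
               fc_conf.autoneg);
        return 0;
}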
Signed-off-by: Zijie Pan <zijie.pan@6wind.com>
Signed-off-by: David Marchand <david.marchand@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
- i40e RSS flags have been added (and the flags field enlarged to 64 bits).
- A new 'uint8_t rss_key_len' field has been added to
'struct rte_eth_rss_conf' to support different lengths of RSS keys
(see the sketch below).
- In each PMD, only the supported flags are masked.
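A hedged sketch of filling the extended structure, assuming the unified
ETH_RSS_* flag names:

#include <stdint.h>
#include <rte_ethdev.h>

/* Hypothetical example: configure RSS with an explicit key length using
 * the extended rte_eth_rss_conf. */
static uint8_t rss_key[40];    /* filled with a device-specific hash key */

static struct rte_eth_rss_conf rss_conf = {
        .rss_key     = rss_key,
        .rss_key_len = sizeof(rss_key),        /* new field */
        .rss_hf      = ETH_RSS_IP | ETH_RSS_TCP,
};

/* applied later, e.g. rte_eth_dev_rss_hash_update(port_id, &rss_conf); */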
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Signed-off-by: Jing Chen <jing.d.chen@intel.com>
Acked-by: Cunming Liang <cunming.liang@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Heqing Zhu <heqing.zhu@intel.com>
Tested-by: Waterman Cao <waterman.cao@intel.com>