This patch removes all references to RTE_MBUF_REFCNT, making the refcnt
field in the mbuf struct permanent.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
RSS offload types were defined separately for 1/10G and 40G NICs
and have no relationship with flow types. This patch unifies the RSS
offload types for all PMDs. The unified RSS offload types have new,
common names that can be used by any PMD or application and are
decoupled from specific hardware.
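As an illustration (not part of this patch), an application selects the
unified, hardware-agnostic types through rss_hf in the standard
rte_eth_conf; the values below are only an example:

#include <rte_ethdev.h>

static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,    /* use the default RSS key */
            .rss_hf = ETH_RSS_IP | ETH_RSS_TCP,
        },
    },
};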
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
[Thomas: merge with fm10k]
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Flow types were actually defined specifically for i40e hardware and
could not be used to define RSS offload types for all PMDs. This patch
removes the flow type enum and uses macros with new names instead.
The new macros can later be used to define RSS offload types.
Corresponding modifications are made in i40e and testpmd.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
[Thomas: merge with new flow director API]
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
This patch defines new functions dealing with ntuple filters, which
correspond to 5-tuple filters in hardware.
It removes the old functions which dealt with 5-tuple filters.
Ntuple filters are now handled through the ixgbe_dev_filter_ctrl entry point.
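As a usage sketch (not part of this patch), a 5-tuple rule steering TCP
traffic for 10.0.0.1:80 to queue 2 could be added through the new entry
point; the field values and byte ordering here are illustrative only:

#include <stdint.h>
#include <netinet/in.h>
#include <rte_eth_ctrl.h>
#include <rte_ethdev.h>
#include <rte_ip.h>

static int
add_ntuple_example(uint8_t port_id)
{
    struct rte_eth_ntuple_filter ntuple = {
        .flags = RTE_5TUPLE_FLAGS,
        .dst_ip = IPv4(10, 0, 0, 1),    /* byte order as expected by the API */
        .dst_ip_mask = UINT32_MAX,
        .dst_port = 80,
        .dst_port_mask = UINT16_MAX,
        .proto = IPPROTO_TCP,
        .proto_mask = UINT8_MAX,
        .priority = 1,
        .queue = 2,
    };

    return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_NTUPLE,
                                   RTE_ETH_FILTER_ADD, &ntuple);
}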
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
This patch defines new functions dealing with the syn filter.
It removes the old functions which dealt with the syn filter.
The syn filter is now handled through the ixgbe_dev_filter_ctrl entry point.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
This patch implements the RTE_ETH_FILTER_FLUSH operation to delete all
flow director filters in the ixgbe driver.
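A minimal usage sketch (assumed, not from the patch itself): the flush
operation takes no argument, so a caller only passes the filter type and op:

#include <rte_eth_ctrl.h>
#include <rte_ethdev.h>

static int
fdir_flush_all(uint8_t port_id)
{
    /* remove every flow director filter currently programmed on the port */
    return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
                                   RTE_ETH_FILTER_FLUSH, NULL);
}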
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch changes the get info operation to be implemented through
filter_ctrl API and RTE_ETH_FILTER_INFO/RTE_ETH_FILTER_STATS ops.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch implements the mask configuration of the flow director filter,
using the mask defined in rte_fdir_conf instead of the fdir_set_masks
callback function.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch implements the flexible payload configuration of the flow director filter.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
This patch changes the add/delete/update operations to be implemented through
filter_ctrl API and RTE_ETH_FILTER_ADD/RTE_ETH_FILTER_DELETE/RTE_ETH_FILTER_UPDATE ops.
It also removes the callback functions:
- ixgbe_eth_dev_ops.fdir_add_signature_filter
- ixgbe_eth_dev_ops.fdir_update_signature_filter
- ixgbe_eth_dev_ops.fdir_remove_signature_filter
- ixgbe_eth_dev_ops.fdir_add_perfect_filter
- ixgbe_eth_dev_ops.fdir_update_perfect_filter
- ixgbe_eth_dev_ops.fdir_remove_perfect_filter
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC is a prerequisite
of CONFIG_RTE_IXGBE_INC_VECTOR.
Reported-by: Alexander Belyakov <abelyako@gmail.com>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
When the vector pmd was receiving a mix of packets of various sizes,
some of which were split across multiple mbufs, there was an issue
with reassembly of the jumbo frames. This was due to a skipped increment
when using "continue" in a while loop. Changing the loop to a "for"
loop fixes this problem, by ensuring the increment is always performed.
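The pattern, reduced to a hypothetical sketch (is_eop() stands in for the
real end-of-packet test; this is not the driver code):

#include <stdbool.h>
#include <stdint.h>

static bool is_eop(uint16_t idx);    /* hypothetical helper */

static void
scan_while(uint16_t n)
{
    uint16_t i = 0;

    while (i < n) {
        if (!is_eop(i))
            continue;    /* bug: the i++ below is skipped */
        /* ... finish reassembly of the packet ending at i ... */
        i++;
    }
}

static void
scan_for(uint16_t n)
{
    uint16_t i;

    /* the increment now runs even when "continue" is taken */
    for (i = 0; i < n; i++) {
        if (!is_eop(i))
            continue;
        /* ... finish reassembly of the packet ending at i ... */
    }
}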
Reported-by: Prashant Upadhyaya <praupadhyaya@gmail.com>
Reported-by: Martin Weiser <martin.weiser@allegro-packets.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Martin Weiser <martin.weiser@allegro-packets.com>
The ixgbe hardware is little endian, but the CPU may not be, so add the
necessary conversions:
rte_cpu_to_le_32(...) for PCI writes,
rte_le_to_cpu_32(...) for PCI reads.
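A sketch of the conversion pattern with hypothetical 32-bit register
accessors (the real driver wraps this in its own register-access macros):

#include <stdint.h>
#include <rte_byteorder.h>

static inline void
reg_write(volatile uint32_t *addr, uint32_t value)
{
    *addr = rte_cpu_to_le_32(value);    /* CPU order -> little endian for the device */
}

static inline uint32_t
reg_read(volatile uint32_t *addr)
{
    return rte_le_to_cpu_32(*addr);     /* little endian from the device -> CPU order */
}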
Signed-off-by: Xuelin Shi <xuelin.shi@freescale.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
In loopback mode, the link is expected to be forced up even when no cable is connected.
But in the code, setup_sfp() rewrites the related register,
so in the 'multispeed_fiber' case the link cannot come up without a cable connected.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Acked-by: Patrick Lu <patrick.lu@intel.com>
To differentiate libraries that break ABI, we add a library version number
suffix to the library, which must be incremented whenever a given library's
ABI is broken. This patch enforces that addition, sets the initial ABI soname
extension to 1 for each library and creates a symlink to the base SONAME so that
the test applications will link properly.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Add linker version script files to each DPDK library to put a stake in the
ground from which we can start cleaning up APIs.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
For ixgbe, when a queue start fails (for example, on mbuf allocation
failure), the device still starts successfully, which can be an issue.
Add a check of the queue start return status to avoid this.
Signed-off-by: Michael Qiu <michael.qiu@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Following up on comments from Pawel Wodkowski, remove this unnecessary check:
check_mq_mode has already checked the queue number in the device configure stage.
If the VF queue number is not correct, it returns an error code and exits,
so there is no need to check again here in the device start stage
(note: pf_host_configure is called in the device start stage).
Fixes: 42d2f78abc ("configure VF RSS")
Suggested-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
The link_status variable is not set when the device is initialized.
This can lead to the link never being reported as up when using some
SFP modules where the link is instantly on.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
rte_eth_stats_get is the only API that should call the device
statistics function directly, and it already does a memset of the
resulting structure since commit 02331c16ec. Doing the memset()
in the driver is therefore redundant and should be removed.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
[David: remove also in igbvf and pcap PMDs]
Acked-By: David Marchand <david.marchand@6wind.com>
This patch removes the old functions which dealt with the ethertype filter in the ixgbe driver.
It also defines ixgbe_dev_filter_ctrl, which is bound to the filter_ctrl API,
so that the ethertype filter can be handled through this new entry point.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
Enabling VF RSS requires configuring RSS as well as the IXGBE_MRQC and IXGBE_VFPSRTYPE registers.
The psrtype determines how many queues the received packets are distributed to,
and its value should depend on two factors: the maximum VF Rx queue number
negotiated with the PF, and the number of Rx queues specified in the guest configuration.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Reviewed-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Get the available Rx and Tx queue number when receiving IXGBE_VF_GET_QUEUES
message from VF.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Reviewed-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Negotiate API version with VF when receiving the IXGBE_VF_API_NEGOTIATE message.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Reviewed-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Move the global register configuration out of the per-queue loop; also fix a typo and indentation.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Reviewed-by: Vlad Zolotarov <vladz@cloudius-systems.com>
This patch fixes checking the link state of a virtual function. If the
state has already been checked, it does not need to be checked
again. Previously, get_link_status in the ixgbe_hw struct was used to
track if the information had already been retrieved, but this field
was always set to false (signifying that the information was
up-to-date). The problem was introduced by commit 8ef32003 which was
part of a patch set to update the ixgbe portion of the PMD. This patch
does not break consistency with the ixgbevf driver. Instead, it fixes
the problem at the level of DPDK.
Applications that rely on the reported link speed could fail without
this patch. The qos_sched example application provided with DPDK did
not run when virtual functions were used. The output for this example
application is shown below:
EAL: Error - exiting with code: 1
Cause: Unable to config sched subport 0, err=-2
The problem and the effect of the patch can be seen by running the
l2fwd example application using the following command:
sudo ./build/l2fwd -c 0x3 -n 4 -- -p 0x3 -T 0
Before the patch has been applied (with both links up):
...
Checking link statusdone
Port 0 Link Up - speed 100 Mbps - half-duplex
Port 1 Link Up - speed 100 Mbps - half-duplex
L2FWD: entering main loop on lcore 1
...
After the patch has been applied (with both links up):
...
Checking link statusdone
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
L2FWD: entering main loop on lcore 1
...
Before the patch has been applied (with link 0 down, link 1 up):
...
Checking link statusdone
Port 0 Link Up - speed 100 Mbps - half-duplex
Port 1 Link Up - speed 100 Mbps - half-duplex
L2FWD: entering main loop on lcore 1
...
After the patch has been applied (with link 0 down, link 1 up):
...
Checking link status............................................................
..............................done
Port 0 Link Down
Port 1 Link Up - speed 10000 Mbps - full-duplex
...
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x7f18c2a00000
EAL: PCI memory mapped at 0x7f18c2a80000
Segmentation fault (core dumped)
This was introduced by commit 46bc9d75 ("ixgbe: fix multi-process support").
Start the primary process with the command line:
./app/test/test -n 1 -c ffff -m 64
then start the second one:
./app/test/test -n 1 --proc-type=secondary --file-prefix=rte
and this segmentation fault occurs.
The root cause is that the test app in the primary process only starts the device;
the queues still have to be initialized manually from the command line,
so the Tx queue is still NULL when the secondary process starts up.
Reported-by: Yong Liu <yong.liu@intel.com>
Signed-off-by: Michael Qiu <michael.qiu@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Add missing setup for the X540 MAC type when setting up a VF.
The corresponding check exists in the Linux driver but was missing in DPDK.
Signed-off-by: Bill Hong <bhong@brocade.com>
Signed-off-by: Stephen Hemminger <shemming@brocade.com>
When using multiple processes, the TX function used in all processes
should be the same, otherwise the secondary processes cannot transmit
more than tx-ring-size - 1 packets.
To achieve this, we extract out the code to select the ixgbe TX function
to be used into a separate function inside the ixgbe driver, and call
that from a secondary process when it is attaching to an
already-configured NIC.
Testing with symmetric MP app shows that we are able to RX and TX from
both primary and secondary processes once this patch is applied.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
Switch the order of the conditions in a while loop, so we check the
range of "i" against the max, before using it to index into the array.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The template mbuf_initializer is hard coded with a buflen which
might have been set differently by the application at the time of
mbuf pool creation.
- move buf_len fields out of rearm_data marker.
- make ixgbe_recv_pkts_vec() not touch buf_len field at all
(as all other RX functions behave).
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jean-Mickael Guerin <jean-mickael.guerin@6wind.com>
Add a compiler barrier to make sure all fields covered by
the marker rearm_data are assigned before the read.
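A sketch close to the driver's setup code (field values are illustrative):
the stores into the rearm_data fields must not be reordered past the 64-bit
read that snapshots them:

#include <stdint.h>
#include <rte_atomic.h>
#include <rte_mbuf.h>

static uint64_t
build_mbuf_initializer(uint8_t port_id)
{
    struct rte_mbuf mb_def = { .buf_addr = NULL };

    mb_def.nb_segs = 1;
    mb_def.data_off = RTE_PKTMBUF_HEADROOM;
    mb_def.port = port_id;
    rte_mbuf_refcnt_set(&mb_def, 1);

    rte_compiler_barrier();    /* keep the stores above before the read below */
    return *(uint64_t *)&mb_def.rearm_data;
}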
Fixes: 0ff3324da2 ("ixgbe: rework vector pmd following mbuf changes")
Signed-off-by: Jean-Mickael Guerin <jean-mickael.guerin@6wind.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Since commit aae1047905 ("use the right debug macro"),
DEBUGOUT was replaced by PMD_DRV_LOG which requires at least
2 arguments. But the level argument was missing.
Commit 7a10de5e27 fixed the logs but not the macros FUNC_PTR_OR_*
which are not preprocessed if RTE_LIBRTE_IXGBE_DEBUG_DRIVER is disabled.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: David Marchand <david.marchand@6wind.com>
Commit 1224decaa4 ("support TCP segmentation offload")
changed the way the bitfields are assigned in ixgbe, example:
tx_offload_mask.l2_len = ~0;
This results in a compilation error with clang:
error: implicit truncation from 'int' to bitfield
changes value from -1 to 127 [-Werror,-Wbitfield-constant-conversion]
Replacing the '=' with a '|=' fixes the issue.
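A minimal reproduction of the pattern (the field widths mirror the mbuf
tx_offload layout, but the union itself is illustrative):

#include <stdint.h>

union tx_offload_mask {
    uint64_t data;
    struct {
        uint64_t l2_len:7;
        uint64_t l3_len:9;
        uint64_t l4_len:8;
        uint64_t tso_segsz:16;
    };
};

static inline void
mask_l2_len(union tx_offload_mask *m)
{
    /* m->l2_len = ~0;   rejected by clang (-Wbitfield-constant-conversion) */
    m->l2_len |= ~0;    /* only the 7 bits of the field end up set */
}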
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The statistic reported through the rx_nombuf field in struct
rte_eth_stats was not set when the vector PMD was used. The statistic
should report the number of mbufs that could _not_ be allocated during
rearm of the Rx queue. The non-vector PMD reports it correctly. Whether
the vector or non-vector PMD is used depends on runtime configuration,
so a change in configuration could silently disable this statistic. To
prevent this, the statistic should be reported by both implementations.
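Usage sketch: with the fix, an application reading the standard statistics
sees allocation failures regardless of which receive path is active:

#include <stdint.h>
#include <rte_ethdev.h>

static uint64_t
rx_alloc_failures(uint8_t port_id)
{
    struct rte_eth_stats stats;

    rte_eth_stats_get(port_id, &stats);
    return stats.rx_nombuf;    /* mbufs that could not be allocated on Rx */
}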
Signed-off-by: Balazs Nemeth <balazs.nemeth@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
CACHE_LINE_SIZE is a macro defined in machine/param.h on FreeBSD and
conflicts with the DPDK macro of the same name.
Add the RTE_ prefix to avoid the conflict.
CACHE_LINE_MASK and CACHE_LINE_ROUNDUP are also prefixed.
Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
[Thomas: updated on HEAD, including PPC]
Implement TSO (TCP segmentation offload) in the ixgbe driver. The driver is
now able to use the PKT_TX_TCP_SEG mbuf flag and the mbuf hardware offload info
(l2_len, l3_len, l4_len, tso_segsz) to configure the hardware support of
TCP segmentation.
In ixgbe, when doing TSO, the IP length must not be included in the TCP
pseudo header checksum. A new function ixgbe_fix_tcp_phdr_cksum() is
used to fix the pseudo header checksum of the packet before giving it to
the hardware.
In the patch, the tx_desc_cksum_flags_to_olinfo() and
tx_desc_ol_flags_to_cmdtype() functions have been reworked to make them
clearer. This should not impact performance as gcc (version 4.8 in my
case) is smart enough to convert the tests into a code that does not
contain any branch instruction.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Some of the NICs supported by DPDK can accelerate TCP traffic by using
segmentation offload. The application prepares a packet with a valid TCP
header of a size up to 64K and delegates the segmentation to the NIC.
Implement the generic part of TCP segmentation offload in rte_mbuf. It
introduces 2 new fields in rte_mbuf: l4_len (length of L4 header in bytes)
and tso_segsz (MSS of packets).
To delegate the TCP segmentation to the hardware, the user has to:
- set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
PKT_TX_TCP_CKSUM)
- set the flag PKT_TX_IPV4 or PKT_TX_IPV6
- set PKT_TX_IP_CKSUM if it's IPv4, and set the IP checksum to 0 in
the packet
- fill the mbuf offload information: l2_len, l3_len, l4_len, tso_segsz
- calculate the pseudo header checksum without taking ip_len into account,
and set it in the TCP header, for instance by using
rte_ipv4_phdr_cksum(ip_hdr, ol_flags)
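A sketch of these steps for an IPv4/TCP packet (header pointers and lengths
are assumed to be already parsed by the caller):

#include <stdint.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_tcp.h>

static void
prepare_tso(struct rte_mbuf *m, struct ipv4_hdr *ip, struct tcp_hdr *tcp,
            uint16_t l2_len, uint16_t l3_len, uint16_t l4_len, uint16_t mss)
{
    m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_IP_CKSUM;

    /* offload information used by the driver */
    m->l2_len = l2_len;
    m->l3_len = l3_len;
    m->l4_len = l4_len;
    m->tso_segsz = mss;

    /* IP checksum is computed by hardware, so it must start at 0 */
    ip->hdr_checksum = 0;

    /* pseudo header checksum, not taking ip_len into account */
    tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
}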
The API is inspired from ixgbe hardware (the next commit adds the
support for ixgbe), but it seems generic enough to be used for other
hw/drivers in the future.
This commit also reworks the way l2_len and l3_len are used in igb
and ixgbe drivers as the l2_l3_len is not available anymore in mbuf.
Signed-off-by: Mirek Walukiewicz <miroslaw.walukiewicz@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
This definition is specific to Intel PMD drivers and its definition
"indicate what bits required for building TX context" shows that it
should not be in the generic rte_mbuf.h but in the PMD driver.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Since commit 4332beee9 "mbuf: expand ol_flags field to 64-bits", the
packet flags are now 64 bits wide. Some occurrences were forgotten in
the ixgbe driver.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
According to Intel® 82599 10 GbE Controller Datasheet (Table 7-38), both
L2 and L3 lengths are needed to offload the IP checksum.
Note that the e1000 driver does not need to be patched as it already
contains the fix.
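Usage sketch (an untagged Ethernet/IPv4 frame is assumed): both lengths
accompany the PKT_TX_IP_CKSUM request so the hardware can locate the IPv4 header:

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

static void
request_ip_cksum(struct rte_mbuf *m, struct ipv4_hdr *ip)
{
    m->ol_flags |= PKT_TX_IP_CKSUM;
    m->l2_len = sizeof(struct ether_hdr);    /* L2 header length */
    m->l3_len = sizeof(struct ipv4_hdr);     /* L3 header length */
    ip->hdr_checksum = 0;                    /* filled in by hardware */
}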
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
As the 40G NICs support redirection table sizes (128/512/64 entries)
different from that of the 1G and 10G NICs (128 entries), support for
multiple redirection table sizes is needed.
It includes,
* Redefine 'struct rte_eth_rss_reta' in ethdev.
- To 'struct rte_eth_rss_reta_entry64' which contains 64
entries and 64 bits mask.
- An array of the new structure can be used for any number of
redirection table entries, as long as the number is a multiple
of 64. This is quite flexible for future expansion of the
redirection table.
* Redefinition of relevant interfaces in ethdev.
- Interface of reta update has been redefined with new parameters.
- Interface of reta query has been redefined with new parameters.
* Rework of 1G PMD in igb.
- reta update has been reworked.
- reta query has been reworked.
* Rework of 10G PMD in ixgbe.
- reta update has been reworked.
- reta query has been reworked.
* Rework of 40G PMD (PF only) in i40e.
- reta update has been reworked.
- reta query has been reworked.
* Implement relevant commands in testpmd.
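A usage sketch of the new interface (the queue-spreading policy is just an
example): each 64-entry group carries its own mask selecting which entries
to update:

#include <stdint.h>
#include <string.h>
#include <rte_ethdev.h>

static int
reta_spread(uint8_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
    /* reta_size must be a multiple of RTE_RETA_GROUP_SIZE (64) */
    struct rte_eth_rss_reta_entry64 reta_conf[reta_size / RTE_RETA_GROUP_SIZE];
    uint16_t i;

    memset(reta_conf, 0, sizeof(reta_conf));
    for (i = 0; i < reta_size; i++) {
        uint16_t grp = i / RTE_RETA_GROUP_SIZE;
        uint16_t idx = i % RTE_RETA_GROUP_SIZE;

        reta_conf[grp].mask |= 1ULL << idx;
        reta_conf[grp].reta[idx] = i % nb_queues;
    }
    return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}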
Test report: http://dpdk.org/ml/archives/dev/2014-November/008362.html
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Erlu Chen <erlu.chen@intel.com>
As more and more information differs between PF and VF, the
'dev_infos_get' op is now implemented separately for each. In addition,
it now returns the redirection table size.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The scattered_rx configuration is updated in dev_start().
For the execution sequence "stop, re-configure and then re-start",
the new configuration is expected to be used, but during
re-configuration the stored data may still be the old one.
This patch clears the configuration in dev_stop() to make sure
the best Rx routine is always selected.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
Configure the PFVML2FLT register in the ixgbe PMD so that it can receive broadcast and multicast packets;
also factorize the common logic with ixgbe_set_pool_rx_mode.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
When using Intel C++ compiler (icc) 14.0.1.106 or the older icc 13.x
versions, the mbuf initializer variable was not configured correctly,
as the mb_def variable was not set correctly. This is due to an icc
issue (DPD200249565, already fixed in icc 14.0.2 and newer compiler
releases) where the compiler incorrectly calculates the field offsets
with initializers when zero-sized fields are used in a structure.
To work around this, the code in ixgbe_rxq_vec_setup does not set up the
fields using an initializer, but instead assigns the values individually
in code.
NOTE: There is no performance impact to this change as the queue
setup functions are not data-plane APIs, but are only used at app
initialization.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@6wind.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>
If the vector pmd option is turned on in the compile time config file,
then always call the vector rxq setup, since we can now use the vector
PMD for receiving jumbo frames that need chained mbufs, a.k.a scattered
packets. Up till now, this function was not being called when receiving
scattered packets, potentially leading to problems with mbufs not being
properly initialized.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Changchun Ouyang <Changchun.ouyang@intel.com>
Remove the "static" prefix to the template mbuf variable in
ixgbe_rxq_vec_setup function. This will then allow different
threads to initialize different RX queues at the same time,
without one overwriting the other's data.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Changchun Ouyang <Changchun.ouyang@intel.com>
An error has been introduced by commit 1f22652ca8
("fix perf regression due to moved pool ptr").
Fix the case where RTE_MBUF_REFCNT is disabled.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Acked-by: Thomas Monjalon <thomas.monjalon@6wind.com>