There is an 82599 erratum whereby UDP frames with a zero checksum are
incorrectly marked as checksum invalid by the hardware. This led to a
misleading PKT_RX_L4_CKSUM_BAD flag.
This patch reports such packets as PKT_RX_L4_CKSUM_UNKNOWN instead, so
the application has to recompute the checksum itself if needed.
Bugzilla ID: 629
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Reported-by: Paolo Valerio <pvalerio@redhat.com>
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: Paolo Valerio <pvalerio@redhat.com>
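A minimal sketch, not part of the patch, of how an application might handle the new report: trust GOOD/BAD and recompute only when the PMD says UNKNOWN. It assumes an untagged IPv4/UDP packet with all headers in the first segment; the helper name is hypothetical.
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>
    #include <rte_udp.h>

    /* Return 1 if the UDP checksum can be considered valid. */
    static int
    udp_cksum_ok(struct rte_mbuf *m)
    {
        uint64_t st = m->ol_flags & PKT_RX_L4_CKSUM_MASK;

        if (st == PKT_RX_L4_CKSUM_GOOD)
            return 1;
        if (st == PKT_RX_L4_CKSUM_BAD)
            return 0;

        /* UNKNOWN (or NONE): verify in software. */
        struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m,
                struct rte_ipv4_hdr *, sizeof(struct rte_ether_hdr));
        struct rte_udp_hdr *udp = (struct rte_udp_hdr *)((char *)ip +
                (ip->version_ihl & RTE_IPV4_HDR_IHL_MASK) *
                RTE_IPV4_IHL_MULTIPLIER);

        if (udp->dgram_cksum == 0) /* zero checksum is legal for UDP over IPv4 */
            return 1;
        return rte_ipv4_udptcp_cksum(ip, udp) == 0xffff;
    }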
Disable NFS header filtering whether or not NFS packet coalescing is
required, so that RSS can work on NFS packets.
The code without the patch does follow the datasheet, but it is not
consistent with the ixgbe kernel driver: NFS packets are filtered and
directed into queue 0 before RSS can act on them.
Fixes: b826efba6d ("net/ixgbe: align register setting when RSC is disabled")
Fixes: 8eecb3295a ("ixgbe: add LRO support")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
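For illustration only, the kind of register update this describes, using the RFCTL NFS-disable bits from the ixgbe base code; the helper name is hypothetical and the actual change sits in the existing RSC/RFCTL setup path.
    /* Uses IXGBE_RFCTL* and the register access macros from the ixgbe
     * base code (ixgbe_type.h / ixgbe_osdep.h).
     */
    static void
    ixgbe_disable_nfs_filtering(struct ixgbe_hw *hw)
    {
        uint32_t rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);

        /* Keep NFS packets eligible for RSS instead of steering them
         * to queue 0.
         */
        rfctl |= IXGBE_RFCTL_NFSW_DIS | IXGBE_RFCTL_NFSR_DIS;
        IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
    }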
The rte_ethdev_driver.h, rte_ethdev_vdev.h and rte_ethdev_pci.h files are
for drivers only and should be private to DPDK and not installed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Steven Webster <steven.webster@windriver.com>
The `data_sz` name is fine, but it looks out of place because nothing
else in that structure has a "data" prefix. Rename it to "size", and add
more clarity to the comments around each struct member.
Fixes: 6a17919b0e ("eal: change power intrinsics API")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
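For context, roughly what the structure looks like after the rename (fields and comments paraphrased, not the exact header):
    struct rte_power_monitor_cond {
        volatile void *addr; /**< Address to monitor for changes. */
        uint64_t val;        /**< If `mask` is non-zero, the (masked) value
                              *   read from `addr` is compared against this. */
        uint64_t mask;       /**< Mask applied to the value read from `addr`. */
        uint8_t size;        /**< Size (in bytes) to read from `addr`;
                              *   formerly named `data_sz`. */
    };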
Implement support for the power management API by providing a
`get_monitor_addr` callback that returns the address of an Rx ring's
status bit.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Signed-off-by: Liang Ma <liang.j.ma@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
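A condensed sketch of such a callback: it points the monitor condition at the DD status field of the next Rx descriptor (types and macros come from the ixgbe driver headers; details are abbreviated).
    int
    ixgbe_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc)
    {
        struct ixgbe_rx_queue *rxq = rx_queue;
        volatile union ixgbe_adv_rx_desc *rxdp = &rxq->rx_ring[rxq->rx_tail];

        /* Wake up when the DD bit of the next descriptor changes. */
        pmc->addr = &rxdp->wb.upper.status_error;
        pmc->val = rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD);
        pmc->mask = rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD);
        pmc->size = sizeof(uint32_t);

        return 0;
    }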
If a VF requests to set an invalid maximum packet length value,
the PF kernel driver may disable its reception.
This patch adds code to log the event and return an error status.
Fixes: 12cd0cccc3 ("ixgbevf: allow to set MTU")
Cc: stable@dpdk.org
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
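A hedged sketch of the kind of handling added in ixgbevf_dev_set_mtu(): check the result of the mailbox request instead of ignoring it (exact messages and return values may differ from the actual patch).
    /* After computing max_frame from the requested MTU. */
    if (ixgbevf_rlpml_set_vf(hw, max_frame)) {
        PMD_INIT_LOG(ERR, "Set max packet length to %u failed.", max_frame);
        return -EINVAL;
    }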
The device-specific metadata was stored in the deprecated field udata64.
It is moved to a dynamic mbuf field in order to allow removal of udata64.
The name rte_security_dynfield is not very descriptive
but it should be replaced later by separate fields for each type of data
that drivers pass to the upper layer.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
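For illustration, how a dynamic mbuf field of this kind is registered and accessed; the field name and wrappers are hypothetical, not the actual rte_security implementation.
    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    static int security_dynfield_offset = -1;

    static int
    security_dynfield_register(void)
    {
        static const struct rte_mbuf_dynfield desc = {
            .name = "example_security_metadata",
            .size = sizeof(uint64_t),
            .align = __alignof__(uint64_t),
        };

        security_dynfield_offset = rte_mbuf_dynfield_register(&desc);
        return security_dynfield_offset < 0 ? -1 : 0;
    }

    /* Read/write the per-packet metadata through the registered offset. */
    static inline uint64_t *
    security_dynfield(struct rte_mbuf *m)
    {
        return RTE_MBUF_DYNFIELD(m, security_dynfield_offset, uint64_t *);
    }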
The config options CONFIG_RTE_* are simple RTE_* defines with meson.
Now that make support is dropped, update the names in logs and comments.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: David Marchand <david.marchand@redhat.com>
Use the newer macros defined by meson in all DPDK source code, to ensure
there are no errors when the old non-standard macros are removed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Haiyue Wang <haiyue.wang@intel.com>
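A minimal sketch of the added condition; the existing hardware/offload checks are reduced to a placeholder argument.
    #include <stdbool.h>
    #include <rte_vect.h>

    /* Only take the 128-bit (SSE/NEON) vector path when the configured
     * max SIMD bitwidth allows it.
     */
    static bool
    vector_path_allowed(bool hw_conditions_ok)
    {
        return hw_conditions_ok &&
            rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128;
    }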
Performance improvement: use a write-combining store
instead of a regular MMIO write to update the queue tail
registers.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
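Roughly what the tail update becomes (the register address parameter is illustrative); rte_write32_wc() falls back to a regular MMIO write on platforms without write-combining support.
    #include <rte_byteorder.h>
    #include <rte_io.h>

    static inline void
    tx_tail_update(volatile void *tdt_reg_addr, uint16_t tail)
    {
        /* Write-combining store to the Tx tail (TDT) register. */
        rte_write32_wc(rte_cpu_to_le_32(tail), tdt_reg_addr);
    }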
The new macro __rte_cold, for compiler hinting,
is now used where appropriate for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
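For reference, the kind of use this implies; the function here is only an example.
    #include <rte_common.h>
    #include <rte_malloc.h>

    /* __rte_cold hints that the function is rarely executed (setup or
     * teardown paths), so the compiler can keep it off the hot path.
     */
    static void __rte_cold
    example_queue_release(void *queue)
    {
        rte_free(queue);
    }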
For ixgbe, there is a restriction that the data buffers of any
transmitted packet must include at least the 12 bytes of the src/dst
Ethernet MAC addresses as well as the 2 bytes of the Type/Len field;
otherwise, a Tx hang would happen.
This patch adds a check for such illegal packets and protects Tx from
hanging.
Fixes: 7829b8d52b ("net/ixgbe: add Tx preparation")
Cc: stable@dpdk.org
Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
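A sketch of the added validation, assuming a 14-byte minimum (12 bytes of MAC addresses plus the 2-byte Type/Len field); the constant and helper names are illustrative.
    #include <errno.h>
    #include <rte_errno.h>
    #include <rte_mbuf.h>

    #define IXGBE_TX_MIN_PKT_LEN 14

    /* Called per packet from the Tx prepare path. */
    static inline int
    tx_pkt_len_ok(const struct rte_mbuf *m)
    {
        if (m->pkt_len < IXGBE_TX_MIN_PKT_LEN) {
            rte_errno = EINVAL;
            return 0;
        }
        return 1;
    }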
Add stubs for ixgbe_xmit_fixed_burst_vec,
ixgbe_rx_queue_release_mbufs_vec and
ixgbe_txq_vec_setup.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
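Roughly what such stubs look like for builds without a vector implementation; the return values are chosen so the scalar paths are used (not the exact diff).
    #include <rte_common.h>
    #include <rte_mbuf.h>
    #include "ixgbe_rxtx.h"

    uint16_t
    ixgbe_xmit_fixed_burst_vec(void *tx_queue __rte_unused,
            struct rte_mbuf **tx_pkts __rte_unused, uint16_t nb_pkts __rte_unused)
    {
        return 0;
    }

    int
    ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq __rte_unused)
    {
        return -1;
    }

    void
    ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq __rte_unused)
    {
    }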
Remove weak symbols from the ixgbe_rxtx.c file, as was done for the
i40e driver in commit 02ad704708
("net/i40e: eliminate weak symbols in data path").
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
CONFIG_RTE_IXGBE_INC_VECTOR is enabled by default, so remove
it and use architecture specific flags.
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Build error:
In function ‘ixgbe_recv_pkts_lro_bulk_alloc’:
../drivers/net/ixgbe/ixgbe_rxtx.c:2209:24:
error: ‘next_sc_entry’ may be used uninitialized in this function
[-Werror=maybe-uninitialized]
next_sc_entry->fbuf = first_seg;
^
http://mails.dpdk.org/archives/test-report/2020-January/113891.html
This is a compiler false positive; the error is not seen with newer
compilers or with clang. Initialize the flagged variable to silence the
warning. According to git bisect the warning starts with the commit
below, though it is unclear how:
Fixes: ad43b7bce9 ("net/ixgbe: avoid multiple definitions of bool")
Reported-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Add support to the ixgbe driver for the rte_eth_tx_done_cleanup API,
to force freeing of consumed buffers on the Tx ring.
Signed-off-by: Chenxu Di <chenxux.di@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
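Usage sketch from an application's point of view (queue and count values are arbitrary):
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Ask the PMD to free up to `max` already-transmitted mbufs on the
     * given Tx queue; a negative return means unsupported or failure.
     */
    static void
    reclaim_tx_mbufs(uint16_t port_id, uint16_t queue_id, uint32_t max)
    {
        int ret = rte_eth_tx_done_cleanup(port_id, queue_id, max);

        if (ret < 0)
            printf("tx_done_cleanup failed on port %u: %d\n", port_id, ret);
    }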
Add DEV_RX_OFFLOAD_RSS_HASH flag for all PMDs that support RSS hash
delivery.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
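Usage sketch for the flag (helper names are illustrative): request the offload only when the PMD advertises it, then read the hash from received mbufs.
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    enable_rss_hash(struct rte_eth_conf *conf, const struct rte_eth_dev_info *info)
    {
        if (info->rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
            conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
    }

    static uint32_t
    pkt_rss_hash(const struct rte_mbuf *m)
    {
        /* Valid only when the PMD set PKT_RX_RSS_HASH for this packet. */
        return (m->ol_flags & PKT_RX_RSS_HASH) ? m->hash.rss : 0;
    }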
The device private pointer (dev_private) is of type void *
therefore no cast is necessary in C.
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
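For illustration (the ixgbe_adapter type is chosen arbitrarily):
    struct ixgbe_adapter *ad;

    ad = (struct ixgbe_adapter *)dev->data->dev_private;  /* before */
    ad = dev->data->dev_private;                          /* after  */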
Add 'RTE_' prefix to defines:
- rename ETHER_ADDR_LEN as RTE_ETHER_ADDR_LEN.
- rename ETHER_TYPE_LEN as RTE_ETHER_TYPE_LEN.
- rename ETHER_CRC_LEN as RTE_ETHER_CRC_LEN.
- rename ETHER_HDR_LEN as RTE_ETHER_HDR_LEN.
- rename ETHER_MIN_LEN as RTE_ETHER_MIN_LEN.
- rename ETHER_MAX_LEN as RTE_ETHER_MAX_LEN.
- rename ETHER_MTU as RTE_ETHER_MTU.
- rename ETHER_MAX_VLAN_FRAME_LEN as RTE_ETHER_MAX_VLAN_FRAME_LEN.
- rename ETHER_MAX_VLAN_ID as RTE_ETHER_MAX_VLAN_ID.
- rename ETHER_MAX_JUMBO_FRAME_LEN as RTE_ETHER_MAX_JUMBO_FRAME_LEN.
- rename ETHER_MIN_MTU as RTE_ETHER_MIN_MTU.
- rename ETHER_LOCAL_ADMIN_ADDR as RTE_ETHER_LOCAL_ADMIN_ADDR.
- rename ETHER_GROUP_ADDR as RTE_ETHER_GROUP_ADDR.
- rename ETHER_TYPE_IPv4 as RTE_ETHER_TYPE_IPv4.
- rename ETHER_TYPE_IPv6 as RTE_ETHER_TYPE_IPv6.
- rename ETHER_TYPE_ARP as RTE_ETHER_TYPE_ARP.
- rename ETHER_TYPE_VLAN as RTE_ETHER_TYPE_VLAN.
- rename ETHER_TYPE_RARP as RTE_ETHER_TYPE_RARP.
- rename ETHER_TYPE_QINQ as RTE_ETHER_TYPE_QINQ.
- rename ETHER_TYPE_ETAG as RTE_ETHER_TYPE_ETAG.
- rename ETHER_TYPE_1588 as RTE_ETHER_TYPE_1588.
- rename ETHER_TYPE_SLOW as RTE_ETHER_TYPE_SLOW.
- rename ETHER_TYPE_TEB as RTE_ETHER_TYPE_TEB.
- rename ETHER_TYPE_LLDP as RTE_ETHER_TYPE_LLDP.
- rename ETHER_TYPE_MPLS as RTE_ETHER_TYPE_MPLS.
- rename ETHER_TYPE_MPLSM as RTE_ETHER_TYPE_MPLSM.
- rename ETHER_VXLAN_HLEN as RTE_ETHER_VXLAN_HLEN.
- rename ETHER_ADDR_FMT_SIZE as RTE_ETHER_ADDR_FMT_SIZE.
- rename VXLAN_GPE_TYPE_IPV4 as RTE_VXLAN_GPE_TYPE_IPV4.
- rename VXLAN_GPE_TYPE_IPV6 as RTE_VXLAN_GPE_TYPE_IPV6.
- rename VXLAN_GPE_TYPE_ETH as RTE_VXLAN_GPE_TYPE_ETH.
- rename VXLAN_GPE_TYPE_NSH as RTE_VXLAN_GPE_TYPE_NSH.
- rename VXLAN_GPE_TYPE_MPLS as RTE_VXLAN_GPE_TYPE_MPLS.
- rename VXLAN_GPE_TYPE_GBP as RTE_VXLAN_GPE_TYPE_GBP.
- rename VXLAN_GPE_TYPE_VBNG as RTE_VXLAN_GPE_TYPE_VBNG.
- rename ETHER_VXLAN_GPE_HLEN as RTE_ETHER_VXLAN_GPE_HLEN.
Do not update the command line library to avoid adding a dependency to
librte_net.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
A Tx descriptor's DD status is not cleared by the NIC automatically
after the packet has been transmitted; it is only overwritten when
software refills the descriptor with a new packet on a later loop.
So when tx_free_thresh + tx_rs_thresh > nb_desc, it is possible that an
outdated DD status is read at tx_next_dd, and a segmentation fault then
happens due to freeing a NULL mbuf pointer.
This patch fixes the issue by:
1. trying to adapt tx_rs_thresh to an aggressive tx_free_thresh;
2. failing queue setup when tx_free_thresh + tx_rs_thresh > nb_desc.
Fixes: af75078fec ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
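A sketch of the second point as it might appear in ixgbe_dev_tx_queue_setup(); the message wording is illustrative.
    if ((uint32_t)tx_free_thresh + tx_rs_thresh > nb_desc) {
        PMD_INIT_LOG(ERR,
            "tx_free_thresh + tx_rs_thresh must not exceed nb_desc "
            "(%u + %u > %u)", tx_free_thresh, tx_rs_thresh, nb_desc);
        return -EINVAL;
    }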
Compiling on Fedora 30, we get the following warning, causing build failure
when Werror flag is set:
../drivers/net/ixgbe/ixgbe_rxtx.c:2141:14: warning:
‘nmb’ may be used uninitialized in this function [-Wmaybe-uninitialized]
2141 | rxe->mbuf = nmb;
| ~~~~~~~~~~^~~~~
Initializing the variable to NULL fixes the issue.
Fixes: 8eecb3295a ("ixgbe: add LRO support")
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
The ixgbevf VLAN strip and extend capabilities were removed when
migrating to the bit-flags implementation.
Restore the capability to enable the VLAN strip offload at
configuration time.
Fixes: ec3b1124d1 ("net/ixgbe: convert to new Rx offloads API")
Cc: stable@dpdk.org
Signed-off-by: David Harton <dharton@cisco.com>
Acked-by: Wei Zhao <wei.zhao1@intel.com>
Loopback mode is also supported on X540 and X550 NICs, according to
their datasheet (section 15.2). The way to set it up is a little
different from that of the 82599.
Signed-off-by: Julien Meunier <julien.meunier@nokia.com>
Acked-by: Wei Zhao <wei.zhao1@intel.com>
Only TX->RX loopback is currently supported on the 82599EB. If a user
wants to apply another loopback configuration (!= IXGBE_LPBK_82599_TX_RX),
the ixgbe PMD ignores it and continues the configuration without raising
any error.
Let's increase the robustness of this part by checking whether the
requested loopback mode is valid for the current device before starting
it. If it is not valid, the PMD will refuse to start.
Signed-off-by: Julien Meunier <julien.meunier@nokia.com>
Acked-by: Wei Zhao <wei.zhao1@intel.com>
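A condensed sketch of the added check (the real code also takes the MAC type into account; the helper name is abbreviated here).
    #include <errno.h>

    static int
    ixgbe_check_supported_loopback_mode(struct rte_eth_dev *dev)
    {
        uint32_t mode = dev->data->dev_conf.lpbk_mode;

        /* 0 means no loopback; 82599EB only supports Tx->Rx loopback. */
        if (mode == 0 || mode == IXGBE_LPBK_82599_TX_RX)
            return 0;
        return -ENOTSUP;
    }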
When starting the device, the RSS table is initialized. So the RSS
update before device_start would be overwritten. This patch allows users
to update the RSS reta table before device_start.
Fixes: db5b65301d ("ethdev: allow to set RSS hash computation flags and/or key")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
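Usage sketch made possible by this fix: program the redirection table before rte_eth_dev_start() and have it survive the start. It assumes reta_size is a multiple of RTE_RETA_GROUP_SIZE, as it is for ixgbe.
    #include <string.h>
    #include <rte_ethdev.h>

    static int
    setup_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
    {
        struct rte_eth_rss_reta_entry64 reta[reta_size / RTE_RETA_GROUP_SIZE];
        uint16_t i;

        memset(reta, 0, sizeof(reta));
        for (i = 0; i < reta_size; i++) {
            reta[i / RTE_RETA_GROUP_SIZE].mask |=
                1ULL << (i % RTE_RETA_GROUP_SIZE);
            reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
                i % nb_queues;
        }
        /* Works before rte_eth_dev_start() once this fix is applied. */
        return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
    }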
The only time that software should write to the TDH register
is after a reset (hardware reset or CTRL.RST) and
before enabling the transmit function (TXDCTL.ENABLE).
If software were to write to this register while the transmit
function was enabled, the on-chip descriptor buffers might
be invalidated and the hardware could become confused.
Fixes: 029fd06d40 ("ixgbe: queue start and stop")
Cc: stable@dpdk.org
Signed-off-by: Yanglong Wu <yanglong.wu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
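The rule above, as a condensed sketch of the Tx queue start sequence (not the literal diff): TDH/TDT are written only while the queue is still disabled, and only then is TXDCTL.ENABLE set.
    uint32_t txdctl;

    IXGBE_WRITE_REG(hw, IXGBE_TDH(txq->reg_idx), 0);
    IXGBE_WRITE_REG(hw, IXGBE_TDT(txq->reg_idx), 0);

    txdctl = IXGBE_READ_REG(hw, IXGBE_TXDCTL(txq->reg_idx));
    txdctl |= IXGBE_TXDCTL_ENABLE;
    IXGBE_WRITE_REG(hw, IXGBE_TXDCTL(txq->reg_idx), txdctl);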
The Tx offload mask was updated in commit 1037ed842c
("mbuf: fix Tx offload mask"). Currently, the newly added offload
flags are not supported in the PMD, and the application's call to the
PMD transmit prepare function will fail.
This patch updates IXGBE_TX_OFFLOAD_MASK accordingly.
Fixes: 1037ed842c ("mbuf: fix Tx offload mask")
Cc: stable@dpdk.org
Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
If the hash function is 0, RSS should be disabled and 0 returned.
Fixes: 518cc3927b ("net/ixgbe: move RSS to flow API")
Cc: stable@dpdk.org
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Tested-by: Yuan Peng <yuan.peng@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
- eal: add shorthand __rte_weak macro
- qat: update code to use __rte_weak macro
- avf: update code to use __rte_weak macro
- fm10k: update code to use __rte_weak macro
- i40e: update code to use __rte_weak macro
- ixgbe: update code to use __rte_weak macro
- mlx5: update code to use __rte_weak macro
- virtio: update code to use __rte_weak macro
- acl: update code to use __rte_weak macro
- bpf: update code to use __rte_weak macro
Signed-off-by: Keith Wiles <keith.wiles@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
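The shorthand itself, plus an illustrative (arbitrary) use:
    /* Shorthand added to EAL (rte_common.h). */
    #define __rte_weak __attribute__((__weak__))

    /* A default definition that a non-weak, arch-specific one can
     * override at link time.
     */
    uint16_t __rte_weak
    ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
            uint16_t nb_pkts)
    {
        (void)rx_queue;
        (void)rx_pkts;
        (void)nb_pkts;
        return 0;
    }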
At the tail end of the comment about barriers (I feel your pain),
remove the mild profanity.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Remove the DEV_RX_OFFLOAD_CRC_STRIP offload flag.
Without any specific Rx offload flag, the default behavior of PMDs is to
strip the CRC.
PMDs that support keeping the CRC should advertise the
DEV_RX_OFFLOAD_KEEP_CRC Rx offload capability.
Applications that require keeping the CRC should check the PMD capability
first and, if it is supported, enable this feature by setting
DEV_RX_OFFLOAD_KEEP_CRC in the Rx offload flags passed to
rte_eth_dev_configure().
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tomasz Duszynski <tdu@semihalf.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Jan Remes <remes@netcope.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
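Usage sketch for applications (helper name illustrative): check the capability first, then request KEEP_CRC; otherwise rely on the default (CRC stripped).
    #include <rte_ethdev.h>

    static void
    request_keep_crc(uint16_t port_id, struct rte_eth_conf *conf)
    {
        struct rte_eth_dev_info info;

        rte_eth_dev_info_get(port_id, &info);
        if (info.rx_offload_capa & DEV_RX_OFFLOAD_KEEP_CRC)
            conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
    }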
Remove queue id checks from the ixgbe_dev_[rx/tx]_queue_[start/stop]
dev_ops; these checks are already done by the ethdev APIs that call
these dev_ops.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
In DPDK 17.11, the ethdev offloads API has changed:
commit cba7f53b71 ("ethdev: introduce Tx queue offloads API")
commit ce17eddefc ("ethdev: introduce Rx queue offloads API")
The new API is documented in the programmer's guide:
http://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html#hardware-offload
As a reminder, the main concepts in the new API are:
- All offloads are disabled by default
- Distinction between per port and per queue offloads.
The transition bits are now removed:
- Translation of the old API in ethdev
- rte_eth_conf.rxmode.ignore_offload_bitfield
- ETH_TXQ_FLAGS_IGNORE
The old API bits are now removed:
- Rx per-port rte_eth_conf.rxmode.[bit-fields]
- Tx per-queue rte_eth_txconf.txq_flags
- ETH_TXQ_FLAGS_NO*
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Shahaf Shuler <shahafs@mellanox.com>
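A minimal sketch of what remains, i.e. the new API only (the offload choices are arbitrary): port-level offloads go in rte_eth_conf, extra per-queue offloads in the queue config passed to rte_eth_[rt]x_queue_setup().
    #include <rte_ethdev.h>

    struct rte_eth_conf port_conf = {
        .rxmode = { .offloads = DEV_RX_OFFLOAD_CHECKSUM },
        .txmode = { .offloads = DEV_TX_OFFLOAD_IPV4_CKSUM },
    };
    struct rte_eth_rxconf rxq_conf = {
        .offloads = DEV_RX_OFFLOAD_SCATTER, /* additional per-queue offload */
    };
    /* rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
     * rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket, &rxq_conf, mb_pool);
     */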
DEV_RX_OFFLOAD_KEEP_CRC offload flag is added. PMDs that support
keeping CRC should advertise this offload capability.
The DEV_RX_OFFLOAD_CRC_STRIP flag will remain for one more release;
the default behavior in PMDs is to keep the CRC until this flag is removed.
Until the DEV_RX_OFFLOAD_CRC_STRIP flag is removed:
- Setting both KEEP_CRC & CRC_STRIP is INVALID
- Setting only CRC_STRIP: the PMD should strip the CRC
- Setting only KEEP_CRC: the PMD should keep the CRC
- Setting neither: the PMD should keep the CRC
A helper function rte_eth_dev_is_keep_crc() has been added to be able to
change the no-flag behavior with minimal changes in PMDs.
PMDs that don't report the DEV_RX_OFFLOAD_KEEP_CRC offload can
remove the rte_eth_dev_is_keep_crc() checks in the next release; the
related code is commented to help with that maintenance task.
DEV_RX_OFFLOAD_CRC_STRIP has also been added to virtual drivers: since
they don't use the CRC at all, virtual PMDs should not return an error
when an application requests this offload.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Allain Legacy <allain.legacy@windriver.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
This patch checks whether a requested offload is valid or not.
Any requested offload must be supported in the device capabilities.
Any offload is disabled by default if it is not set in the parameter
dev_conf->[rt]xmode.offloads to rte_eth_dev_configure() and
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If any offload is enabled in rte_eth_dev_configure() by the application,
it is enabled on all queues no matter whether it is per-queue or
per-port type and no matter whether it is set or cleared in
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If a per-queue offload hasn't been enabled in rte_eth_dev_configure(),
it can be enabled or disabled for an individual queue in
rte_eth_[rt]x_queue_setup().
A newly added offload is one which hasn't been enabled in
rte_eth_dev_configure() and is requested to be enabled in
rte_eth_[rt]x_queue_setup(); it must be of per-queue type,
otherwise an error log is triggered.
The underlying PMD must be aware that the offloads passed to its
PMD-specific queue_setup() function only carry those
newly added offloads of per-queue type.
This patch performs the above checking in a common way in the rte_ethdev
layer to avoid duplicating the same checking in the underlying PMDs.
This patch assumes that all PMDs in 18.05-rc2 have already
converted to the offload API defined in 17.11. It also assumes
that all PMDs can return correct offload capabilities
in rte_eth_dev_infos_get().
In the beginning of [rt]x_queue_setup() of the underlying PMD,
add offloads = [rt]xconf->offloads |
dev->data->dev_conf.[rt]xmode.offloads; to keep the same behavior as
the offload API defined in 17.11 and avoid breaking upper applications
due to the offload API change.
A PMD can use the fact that the input [rt]xconf->offloads only carries
the newly added per-queue offloads to do some optimization or code
changes on top of this patch.
Signed-off-by: Wei Dai <wei.dai@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
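The Rx flavor of that line, as it appears at the top of a PMD's rx_queue_setup():
    uint64_t offloads;

    offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;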
The ixgbe PF and VF use the same default value
for EITR, defined in the header file.
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Since we moved to the new offload APIs, txq_flags is no longer needed.
This patch removes the dependency on it.
Fixes: 51215925a3 ("net/ixgbe: convert to new Tx offloads API")
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
RSS hash types (ETH_RSS_* macros defined in rte_ethdev.h) describe the
protocol header fields of a packet that must be taken into account while
computing RSS.
When facing encapsulated (e.g. tunneled) packets, there is an ambiguity as
to whether these should apply to inner or outer packets. Applications need
the ability to tell exactly "where" RSS must be performed.
This is addressed by adding encapsulation level information to the RSS flow
action. Its default value is 0 and stands for the usual unspecified
behavior. Other values provide a specific encapsulation level.
Contrary to the change announced by commit 676b605182 ("doc: announce
ethdev API change for RSS configuration"), this patch does not affect
struct rte_eth_rss_conf but struct rte_flow_action_rss as the former is not
used anymore by the RSS flow action. ABI impact is therefore limited to
rte_flow.
This breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
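Illustrative use of the new field (queue list and hash types are arbitrary): level 0 keeps the unspecified default, 1 is the outermost encapsulation level, 2 and above select inner levels.
    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static const uint16_t rss_queues[] = { 0, 1, 2, 3 };

    struct rte_flow_action_rss rss = {
        .level = 2,                 /* hash the inner (encapsulated) packet */
        .types = ETH_RSS_IP,
        .queue_num = RTE_DIM(rss_queues),
        .queue = rss_queues,
    };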
By definition, RSS involves some kind of hash algorithm, usually Toeplitz.
Until now it could not be modified on a flow rule basis and PMDs had to
always assume RTE_ETH_HASH_FUNCTION_DEFAULT, which remains the default
behavior when unspecified (0).
This breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Since its inception, the rte_flow RSS action has been relying in part on
external struct rte_eth_rss_conf for compatibility with the legacy RSS API.
This structure lacks parameters such as the hash algorithm to use, and more
recently, a method to tell which layer RSS should be performed on [1].
Given struct rte_eth_rss_conf will never be flexible enough to represent a
complete RSS configuration (e.g. RETA table), this patch supersedes it by
extending the rte_flow RSS action directly.
A subsequent patch will add a field to use a non-default RSS hash
algorithm. To that end, a field named "types" replaces the field formerly
known as "rss_hf" and standing for "RSS hash functions" as it was
confusing. Actual RSS hash function types are defined by enum
rte_eth_hash_function.
This patch updates all PMDs and example applications accordingly.
It breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
[1] commit 676b605182 ("doc: announce ethdev API change for RSS
configuration")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
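A combined illustration of the reworked action, with a non-default hash function, the renamed "types" field, an explicit key and a queue list (all values arbitrary):
    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static const uint8_t rss_key[40] = { 0x6d, 0x5a /* ... rest of the key */ };
    static const uint16_t rss_queues[] = { 0, 1 };

    struct rte_flow_action_rss rss = {
        .func = RTE_ETH_HASH_FUNCTION_TOEPLITZ,
        .level = 0,                         /* unspecified encapsulation level */
        .types = ETH_RSS_IP | ETH_RSS_UDP,  /* formerly rss_hf */
        .key_len = sizeof(rss_key),
        .key = rss_key,
        .queue_num = RTE_DIM(rss_queues),
        .queue = rss_queues,
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };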