The value of rte_errno must be positive in case of an error.
Fixes: d77d07391d ("net/sfc: support flow API RSS action")
Cc: stable@dpdk.org
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Such a message could be generated in two places
in the code, and the text differs although the
meaning is the same. These two messages must be
identical so that automated error log comparison
can be carried out without confusion.
Fixes: a9825ccf5b ("net/sfc: support flow API filters")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The number of descriptors in equal stride super-buffer Rx mode defines
the number of packet buffers to be used. Each HW Rx descriptor carries
many packet buffers; how many depends on the total mbuf size and the
CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB value.
Typically this comes to a bit fewer than 32 buffers per descriptor.
Since HW Rx descriptors must be pushed in multiples of 8, the required
minimum is about 256 buffers. Double that in the advertised minimum to
allow for at least 2 refill blocks.
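For illustration, a rough calculation under assumed sizes (the bucket and
mbuf sizes below are examples, not the driver's actual defaults):

    #include <stdio.h>

    int main(void)
    {
            /* Assumed values for illustration only. */
            unsigned int bucket_kb = 64;         /* CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB */
            unsigned int mbuf_total_size = 2176; /* total size of one mbuf, bytes */

            unsigned int bufs_per_desc = (bucket_kb * 1024) / mbuf_total_size;
            unsigned int min_bufs = bufs_per_desc * 8;  /* descriptors pushed by 8 */
            unsigned int advertised_min = min_bufs * 2; /* 2 refill blocks */

            printf("%u buffers/desc, ~%u required minimum, %u advertised minimum\n",
                   bufs_per_desc, min_bufs, advertised_min);
            return 0;
    }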
Fixes: 390f9b8d82 ("net/sfc: support equal stride super-buffer Rx mode")
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Fixes: 73280c1e4f ("net/sfc: support xstats retrieval by ID")
Fixes: 7b9891769f ("net/sfc: support extended statistics")
Cc: stable@dpdk.org
Signed-off-by: Andy Green <andy@warmcat.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
If an application uses the Tx offload API and sets the ETH_TXQ_FLAGS_IGNORE
flag, it should still get inner TCP/UDP checksum offload enabled if it is
supported and TCP/UDP checksum offload is requested.
Fixes: c78d280e88 ("net/sfc: convert to new Tx offload API")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch checks whether a requested offload is valid or not.
Any requested offload must be supported in the device capabilities.
An offload is disabled by default if it is not set in the parameter
dev_conf->[rt]xmode.offloads to rte_eth_dev_configure() or
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If an offload is enabled in rte_eth_dev_configure() by the application,
it is enabled on all queues no matter whether it is per-queue or
per-port type and no matter whether it is set or cleared in
[rt]x_conf->offloads to rte_eth_[rt]x_queue_setup().
If a per-queue offload hasn't been enabled in rte_eth_dev_configure(),
it can be enabled or disabled for an individual queue in
rte_eth_[rt]x_queue_setup().
A newly added offload is one which hasn't been enabled in
rte_eth_dev_configure() and is requested to be enabled in
rte_eth_[rt]x_queue_setup(); it must be of the per-queue type,
otherwise an error log is triggered.
The underlying PMD must be aware that the offloads passed to the
PMD-specific queue_setup() function carry only those newly added
per-queue offloads.
This patch performs the above checks in a common way in the rte_ethdev
layer to avoid duplicating them in each underlying PMD.
This patch assumes that all PMDs in 18.05-rc2 have already been
converted to the offload API defined in 17.11. It also assumes
that all PMDs can return correct offload capabilities
in rte_eth_dev_infos_get().
At the beginning of [rt]x_queue_setup() in the underlying PMD,
add offloads = [rt]xconf->offloads |
dev->data->dev_conf.[rt]xmode.offloads; to keep the behavior the same
as the offload API defined in 17.11 and avoid breaking upper
applications due to the offload API change.
A PMD can use the fact that the input [rt]xconf->offloads carries only
the newly added per-queue offloads to do some optimization or code
changes on the basis of this patch.
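For illustration, a minimal sketch of the recommended pattern in a PMD Rx
queue setup handler (the function name and local variable are illustrative,
not taken from any particular PMD):

    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    static int
    example_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
                           uint16_t nb_rx_desc, unsigned int socket_id,
                           const struct rte_eth_rxconf *rx_conf,
                           struct rte_mempool *mb_pool)
    {
            /*
             * rx_conf->offloads carries only the newly added per-queue
             * offloads; OR it with the per-port offloads to keep the same
             * semantics as the offload API defined in 17.11.
             */
            uint64_t offloads = rx_conf->offloads |
                                dev->data->dev_conf.rxmode.offloads;

            /* ... configure the queue using "offloads" ... */
            RTE_SET_USED(rx_queue_id);
            RTE_SET_USED(nb_rx_desc);
            RTE_SET_USED(socket_id);
            RTE_SET_USED(mb_pool);
            RTE_SET_USED(offloads);
            return 0;
    }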
Signed-off-by: Wei Dai <wei.dai@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
These stats are available on the Medford2 DPDK firmware variant
which supports equal stride super-buffer Rx mode. The RXDP_HLB_IDLE
capability bit is set when the stats are available.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@solarflare.com>
There is no need to fill in TxQ flags since ethdev maps
Tx offloads to TxQ flags on device info get for apps which are
not yet converted to the Tx offloads API.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Prepare the function that parses flow rule actions to support
non-fate-deciding actions.
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The mark value for MATCH_ACTION_MARK has a maximum value.
Requesting a value larger than the maximum will cause the
filter insertion to fail with EINVAL. This patch allows the
driver to check the value at filter validation time.
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch adds support for DPDK rte_flow "MARK" and "FLAG" filter
actions to filters on EF10 family NICs.
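For illustration, a flow rule using the MARK action might be set up like
this (the mark value and queue index are arbitrary examples):

    #include <rte_errno.h>
    #include <rte_flow.h>

    static int
    mark_all_eth_traffic(uint16_t port_id, struct rte_flow_error *error)
    {
            const struct rte_flow_attr attr = { .ingress = 1 };
            const struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            const struct rte_flow_action_mark mark = { .id = 0x2a };
            const struct rte_flow_action_queue queue = { .index = 0 };
            const struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
                    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            /* Returns 0 on success, negative rte_errno otherwise. */
            return rte_flow_create(port_id, &attr, pattern, actions, error)
                   != NULL ? 0 : -rte_errno;
    }

On receive, matched packets carry PKT_RX_FDIR_ID in ol_flags and the mark
value in mbuf->hash.fdir.hi.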
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Filter actions MARK and FLAG are supported on Medford2 by the DPDK
firmware variant.
Signed-off-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Equal stride super-buffer Rx mode allows packets to be marked in HW
using filters. Process the mark data on the datapath and advertise the
corresponding features to allow flow API support to implement it.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add a device argument to customize the Rx descriptor wait timeout which
is supported by the DPDK firmware variant in equal stride super-buffer
Rx mode only.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
The DPDK firmware variant supports equal stride super-buffer Rx mode which
provides a higher packet rate and packet marks but requires a dedicated
mempool manager with contiguous object block allocation (e.g. bucket).
The firmware also supports a subvariant without checksumming on Tx which
allows higher packet rates to be reached on transmit if checksumming is
not required.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Equal stride super-buffer requires a mempool with a contiguous object
block allocation mechanism. Bucket mempool is the only one which provides it.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
The callback is still a dummy since no Rx datapath provides its own
callback yet, so all pools are supported.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
A HW Rx descriptor represents many contiguous packet buffers which
follow each other. The number of buffers, the stride and the maximum DMA
length are configurable at setup time per Rx queue based on the provided
mempool. The mempool must support contiguous block allocation and the
get info API to retrieve the number of objects in a block.
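For illustration, a sketch of how the number of objects in a contiguous
block may be queried (assuming the mempool driver implements the get info
callback):

    #include <rte_mempool.h>

    static int
    objs_per_contig_block(const struct rte_mempool *mp, unsigned int *n_objs)
    {
            struct rte_mempool_info info;
            int rc;

            rc = rte_mempool_ops_get_info(mp, &info);
            if (rc != 0)
                    return rc; /* driver does not provide the info */

            *n_objs = info.contig_block_size;
            return 0;
    }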
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
The new argument will be used by the equal stride super-buffer
Rx datapath.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
One HW Rx descriptor has many packet buffers in the case of equal
stride super-buffer Rx mode. Each packet buffer is still treated
as a separate SW Rx descriptor. rxq_entries is the size of the HW Rx ring
whereas nb_rx_desc is the number of SW Rx descriptors.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
The equal stride super-buffer Rx datapath does not support tunnels, so the
code to parse tunnel packet types and inner checksum offload is not
required, and it is important to be able to compile it out at build time
to avoid extra CPU load.
Cutting out tunnel support relies on compiler optimizations to drop the
extra checks and branches when tun_ptype is always 0.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Equal stride super-buffer Rx datapath will use it as well.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Equal stride super-buffer Rx mode will be handled by the dedicated
Rx datapath and the mode has almost the same Rx event structure as
single packet Rx mode.
Restructure the code to allow the common parts to be shared.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@solarflare.com>
The function may be shared by different Rx datapath implementations.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@solarflare.com>
Equal stride super-buffer Rx mode is supported by the DPDK firmware
variant. One Rx descriptor provides many Rx buffers to the firmware;
the Rx buffers follow each other with a specified stride.
The mode also supports head-of-line blocking with a timeout to address
drops when no Rx descriptors are available, giving the driver extra time
to provide Rx descriptors before packets are dropped.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
The type is an internal interface. A single integer is insufficient
to carry RxQ type-specific information in the case of equal stride
super-buffer Rx mode (packet buffers per bucket, maximum DMA length,
packet stride, head-of-line blocking timeout).
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Equal stride super-buffer is the new name replacing the deprecated equal
stride packed stream, to avoid confusion with the previous packed stream.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
An RSS action with only one destination queue and no specific settings
for hash types and key does not require a dedicated RSS context and
may be simplified to a QUEUE action.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
mask is a simple bit-mask applied before interpreting the contents
of spec and last.
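For illustration, matching the destination IPv4 network 192.0.2.0/24, where
the mask is applied bit-wise before spec (and last) are interpreted:

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    static const struct rte_flow_item_ipv4 ipv4_spec = {
            .hdr = { .dst_addr = RTE_BE32(0xc0000200) }, /* 192.0.2.0 */
    };
    static const struct rte_flow_item_ipv4 ipv4_mask = {
            .hdr = { .dst_addr = RTE_BE32(0xffffff00) }, /* /24 prefix */
    };
    static const struct rte_flow_item ipv4_item = {
            .type = RTE_FLOW_ITEM_TYPE_IPV4,
            .spec = &ipv4_spec,
            .mask = &ipv4_mask,
    };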
Fixes: a9825ccf5b ("net/sfc: support flow API filters")
Cc: stable@dpdk.org
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@solarflare.com>
Reviewed-by: Roman Zhukov <roman.zhukov@oktetlabs.ru>
The current code has the following drawbacks:
- It is assumed that TCP 4-tuple hash is always supported,
which is untrue in the case of the packed stream FW variant.
- The driver is unaware of UDP hash support available
with the latest firmware.
In order to cope with the mentioned issues, this patch
implements a new approach to handling hash settings
using the advanced EFX RSS interface.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
RSS handling will need more sophisticated fields
in the adapter context storage in future patches.
This patch groups existing fields in a dedicated
structure and updates the rest of the code.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
RSS is one of the most valuable features in the
driver, and one would hardly need to disable it
at build time. This patch removes the unnecessary
build-time conditionals around RSS code.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
One may submit advanced RSS settings as part of
rte_eth_conf to customise the RSS configuration from
the very beginning. Currently the driver does not
check those settings and proceeds with default
choices for the RSS hash functions and RSS key.
This patch implements the required processing.
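For illustration, an application might submit such settings as follows
(key and hash types are arbitrary examples) and pass the result to
rte_eth_dev_configure():

    #include <rte_ethdev.h>

    static uint8_t rss_key[40] = { 0 }; /* example 40-byte Toeplitz key */

    static const struct rte_eth_conf port_conf = {
            .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
            .rx_adv_conf = {
                    .rss_conf = {
                            .rss_key = rss_key,
                            .rss_key_len = sizeof(rss_key),
                            .rss_hf = ETH_RSS_IP | ETH_RSS_TCP,
                    },
            },
    };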
Fixes: 4ec1fc3ba8 ("net/sfc: add basic stubs for RSS support on driver attach")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Packed stream firmware variant on EF10 adapters has a
number of properties which must be taken into account:
- Only one exclusive RSS context is available per port.
- Only IP addresses can contribute to the hash value.
Huntington and Medford have one more limitation which
is important for the drivers capable of packed stream:
- The hash algorithm is non-standard (i.e. non-Toeplitz).
This implies XORing together the source and destination
IP addresses (or their last four bytes in the case of IPv6)
and using the result as the input to a Toeplitz hash, as
sketched below.
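A minimal sketch of that pseudo-hash input computation (illustration only,
not actual libefx code):

    #include <stdint.h>

    /* The 32-bit source and destination address words (or the last four
     * bytes of IPv6 addresses) are XORed together and the result is fed
     * to the regular Toeplitz hash.
     */
    static uint32_t
    packed_stream_hash_input(uint32_t src_addr, uint32_t dst_addr)
    {
            return src_addr ^ dst_addr;
    }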
This patch provides a number of improvements in order
to handle the mentioned limitations in the common code.
If the firmware variant is packed stream, the list of
supported hash tuples will include fewer variants, and
the maximum number of RSS contexts will be set to one.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Modern firmware on EF10 adapters supports more
traffic classes eligible for hash computation.
It has also become possible to adjust hashing per
individual class and select the distinct packet fields
which contribute to the hash value.
This patch adds support for the mentioned features.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Currently, libefx has no support for additional RSS modes
available with later controllers. In order to support this,
libefx should be able to list available hash configurations.
This patch provides basic infrastructure for the new interface.
The client drivers will be able to query the list of supported
hash configurations for a particular hash algorithm. Also, it
will be possible to configure hashing by means of new definitions.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
clang 4.0.1-6 on Ubuntu generates a false positive warning that a shift
is negative. It does so regardless of the fact that the branch is
not taken because of a previous check.
The warning is generated in EFX_INSERT_NATIVE32, used by
EFX_INSERT_FIELD_NATIVE32. All similar cases are fixed as well.
It is undesirable to suppress the warning completely.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
This new attribute enables applications to create flow rules that do not
simply match traffic whose origin is specified in the pattern (e.g. some
non-default physical port or VF), but actively affect it by applying the
flow rule at the lowest possible level in the underlying device.
It breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
TPID handling in rte_flow VLAN and E_TAG pattern item definitions is not
consistent with the normal stacking order of pattern items, which is
confusing to applications.
The problem is that when followed by one of these layers, the EtherType
field of the preceding layer keeps its "inner" definition, and the "outer"
TPID is provided by the subsequent layer, which is the reverse of how a
packet looks on the wire:
Wire: [ ETH TPID = A | VLAN EtherType = B | B DATA ]
rte_flow: [ ETH EtherType = B | VLAN TPID = A | B DATA ]
Worse, when QinQ is involved, the stacking order of VLAN layers is
unspecified. It is unclear whether it should be reversed (innermost to
outermost) as well, given that TPID applies to the previous layer:
Wire: [ ETH TPID = A | VLAN TPID = B | VLAN EtherType = C | C DATA ]
rte_flow 1: [ ETH EtherType = C | VLAN TPID = B | VLAN TPID = A | C DATA ]
rte_flow 2: [ ETH EtherType = C | VLAN TPID = A | VLAN TPID = B | C DATA ]
While specifying EtherType/TPID is hopefully rarely necessary, the stacking
order in the case of QinQ and the lack of documentation remain an issue.
This patch replaces TPID in the VLAN pattern item with an inner
EtherType/TPID as is usually done everywhere else (e.g. struct vlan_hdr),
clarifies documentation and updates all relevant code.
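For illustration, matching a VLAN-tagged IPv4 packet with the reworked item
layout might look like this sketch (not taken from any particular PMD or
application):

    #include <rte_byteorder.h>
    #include <rte_ether.h>
    #include <rte_flow.h>

    /* The VLAN item now carries the inner EtherType; the ETH item no
     * longer holds the TPID in its type field.
     */
    static const struct rte_flow_item_vlan vlan_spec = {
            .inner_type = RTE_BE16(ETHER_TYPE_IPv4),
    };
    static const struct rte_flow_item_vlan vlan_mask = {
            .inner_type = RTE_BE16(0xffff),
    };
    static const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_VLAN,
              .spec = &vlan_spec, .mask = &vlan_mask },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };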
It breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Summary of changes for PMDs that implement ETH, VLAN or E_TAG pattern
items:
- bnxt: EtherType matching is supported with and without VLAN, but TPID
matching is not and triggers an error.
- e1000: EtherType matching is only supported with the ETHERTYPE filter,
which does not support VLAN matching, therefore no impact.
- enic: same as bnxt.
- i40e: same as bnxt with existing FDIR limitations on allowed EtherType
values. The remaining filter types (VXLAN, NVGRE, QINQ) do not support
EtherType matching.
- ixgbe: same as e1000, with additional minor change to rely on the new
E-Tag macro definition.
- mlx4: EtherType/TPID matching is not supported, no impact.
- mlx5: same as bnxt.
- mvpp2: same as bnxt.
- sfc: same as bnxt.
- tap: same as bnxt.
Fixes: b1a4b4cbc0 ("ethdev: introduce generic flow API")
Fixes: 99e7003831 ("net/ixgbe: parse L2 tunnel filter")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
RSS hash types (ETH_RSS_* macros defined in rte_ethdev.h) describe the
protocol header fields of a packet that must be taken into account while
computing RSS.
When facing encapsulated (e.g. tunneled) packets, there is an ambiguity as
to whether these should apply to inner or outer packets. Applications need
the ability to tell exactly "where" RSS must be performed.
This is addressed by adding encapsulation level information to the RSS flow
action. Its default value is 0 and stands for the usual unspecified
behavior. Other values provide a specific encapsulation level.
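For illustration, requesting RSS on the first inner encapsulation level
could look like this sketch (queues and hash types are arbitrary):

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static const uint16_t inner_rss_queues[] = { 0, 1 };

    static const struct rte_flow_action_rss inner_rss = {
            .level = 2, /* 0 = unspecified, 1 = outermost, 2 = first inner */
            .types = ETH_RSS_IP,
            .queue_num = 2,
            .queue = inner_rss_queues,
    };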
Contrary to the change announced by commit 676b605182 ("doc: announce
ethdev API change for RSS configuration"), this patch does not affect
struct rte_eth_rss_conf but struct rte_flow_action_rss as the former is not
used anymore by the RSS flow action. ABI impact is therefore limited to
rte_flow.
This breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
By definition, RSS involves some kind of hash algorithm, usually Toeplitz.
Until now it could not be modified on a flow rule basis and PMDs had to
always assume RTE_ETH_HASH_FUNCTION_DEFAULT, which remains the default
behavior when unspecified (0).
This breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Since its inception, the rte_flow RSS action has been relying in part on
external struct rte_eth_rss_conf for compatibility with the legacy RSS API.
This structure lacks parameters such as the hash algorithm to use, and more
recently, a method to tell which layer RSS should be performed on [1].
Given struct rte_eth_rss_conf will never be flexible enough to represent a
complete RSS configuration (e.g. RETA table), this patch supersedes it by
extending the rte_flow RSS action directly.
A subsequent patch will add a field to use a non-default RSS hash
algorithm. To that end, a field named "types" replaces the field formerly
known as "rss_hf" and standing for "RSS hash functions" as it was
confusing. Actual RSS hash function types are defined by enum
rte_eth_hash_function.
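For illustration, the extended action might be filled in as follows (key,
types and queues are arbitrary examples):

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static const uint8_t rss_hash_key[40] = { 0 }; /* example Toeplitz key */
    static const uint16_t rss_dest_queues[] = { 0, 1, 2, 3 };

    static const struct rte_flow_action_rss rss_action_conf = {
            .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
            .level = 0,
            .types = ETH_RSS_IP | ETH_RSS_TCP, /* formerly "rss_hf" */
            .key_len = sizeof(rss_hash_key),
            .key = rss_hash_key,
            .queue_num = 4,
            .queue = rss_dest_queues,
    };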
This patch updates all PMDs and example applications accordingly.
It breaks ABI compatibility for the following public functions:
- rte_flow_copy()
- rte_flow_create()
- rte_flow_query()
- rte_flow_validate()
[1] commit 676b605182 ("doc: announce ethdev API change for RSS
configuration")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch makes the following changes to flow rule actions:
- List order now matters, they are redefined as performed first to last
instead of "all simultaneously".
- Repeated actions are now supported (e.g. specifying QUEUE multiple times
now duplicates traffic among them). Previously only the last action of
any given kind was taken into account.
- No more distinction between terminating/non-terminating/meta actions.
Flow rules themselves are now defined as always terminating unless a
PASSTHRU action is specified.
These changes alter the behavior of flow rules in corner cases in order to
prepare the flow API for actions that modify traffic contents or properties
(e.g. encapsulation, compression) and for which order matters when combined.
Previously one would have had to do so through multiple flow rules by
combining PASSTHRU with priority levels, however this proved overly complex
to implement at the PMD level, hence this simpler approach.
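For illustration, under the new semantics the following action list
duplicates matched traffic to two queues, whereas previously only the last
QUEUE action would have taken effect (a sketch; whether a given PMD accepts
repeated actions still depends on its capabilities):

    #include <rte_flow.h>

    static const struct rte_flow_action_queue q0 = { .index = 0 };
    static const struct rte_flow_action_queue q1 = { .index = 1 };

    static const struct rte_flow_action duplicate_actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q0 },
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q1 },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };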
This breaks ABI compatibility for the following public functions:
- rte_flow_create()
- rte_flow_validate()
PMDs with rte_flow support are modified accordingly:
- bnxt: no change, implementation already forbids multiple actions and does
not support PASSTHRU.
- e1000: no change, same as bnxt.
- enic: modified to forbid redundant actions, no support for default drop.
- failsafe: no change needed.
- i40e: no change, implementation already forbids multiple actions.
- ixgbe: same as i40e.
- mlx4: modified to forbid multiple fate-deciding actions and drop when
unspecified.
- mlx5: same as mlx4, with other redundant actions also forbidden.
- sfc: same as mlx4.
- tap: implementation already complies with the new behavior except for
the default pass-through modified as a default drop.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Fixes: 4ec1fc3ba8 ("net/sfc: add basic stubs for RSS support on driver attach")
Cc: stable@dpdk.org
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>