25396 Commits

Author SHA1 Message Date
Ting Xu
f593944fc9 net/iavf: enable IRQ mapping configuration for large VF
The current IRQ mapping configuration supports a maximum of 16 queues
and 16 MSI-X vectors. Change the queue vector mapping structure to
cover up to 256 queues. A new opcode is used to handle the case of a
large number of queues. To stay within the adminq buffer size limit,
the virtchnl message is sent multiple times if needed.

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2020-11-03 23:24:25 +01:00
Ting Xu
9a7695b227 net/iavf: enable multiple queues configuration for large VF
Since the adminq buffer size is limited to 4K, the current virtchnl
command VIRTCHNL_OP_CONFIG_VSI_QUEUES cannot configure up to 256 queues
with a single message. In this patch, the message is sent multiple
times so that each buffer stays below 4K.
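
A minimal sketch of the chunking idea (illustration only; the buffer
limit macro, header/descriptor sizes and the send hook below are
hypothetical stand-ins, not the driver's actual structures):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define ADMINQ_BUF_MAX  4096u   /* adminq buffer limit */
    #define HDR_SIZE          16u   /* hypothetical message header size */
    #define QPAIR_INFO_SIZE   64u   /* hypothetical per-queue-pair descriptor size */

    typedef int (*send_msg_t)(const void *buf, size_t len);  /* hypothetical send hook */

    /* Configure 'num_qps' queue pairs, splitting the virtchnl message so
     * that every send (header + queue descriptors) stays below 4K.
     */
    static int
    config_queues_chunked(uint16_t num_qps, send_msg_t send)
    {
        const uint16_t max_per_msg = (ADMINQ_BUF_MAX - HDR_SIZE) / QPAIR_INFO_SIZE;
        uint16_t done = 0;

        while (done < num_qps) {
            uint8_t buf[ADMINQ_BUF_MAX];
            uint16_t batch = num_qps - done;
            size_t len;

            if (batch > max_per_msg)
                batch = max_per_msg;
            len = HDR_SIZE + (size_t)batch * QPAIR_INFO_SIZE;
            memset(buf, 0, len);
            /* ...fill header and descriptors for queues [done, done + batch)... */
            if (send(buf, len) != 0)
                return -1;
            done += batch;
        }
        return 0;
    }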

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2020-11-03 23:24:25 +01:00
Ting Xu
e436cd4383 net/iavf: negotiate large VF and request more queues
Negotiate the large VF capability with the PF during VF initialization.
If large VF is supported and more than 16 queues are required, the VF
requests additional queues from the PF. Record that large VF is
supported.

If more than 16 queues are allocated, the maximum RSS queue region can
no longer be 16. Add a function to query the maximum RSS queue region
from the PF and use it in RSS initialization and future filter
configuration.

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2020-11-03 23:24:25 +01:00
Ting Xu
ef807926e1 net/iavf: support requesting additional queues from PF
Add a new virtchnl function to request additional queues from the PF.
The current default number of queue pairs when creating a VF is 16. In
order to support up to 256 queue pairs per VF, enable this
request-queues function.

When the queue request succeeds, the PF returns an event message. If
that message is consumed by the interrupt handler first, the
request-queues command never receives the PF response and waits until
timeout. Therefore, disable the interrupt before requesting queues so
that the event message can be handled asynchronously.

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2020-11-03 23:24:25 +01:00
Ting Xu
5e03e316c7 net/iavf: handle virtchnl event message without interrupt
Currently, the VF can only handle virtchnl event messages through the
interrupt handler. This does not work in two cases:
1. If an event message arrives during VF initialization, before the
   interrupt is enabled, the message is not handled correctly.
2. Some virtchnl commands need to receive and handle the event message
   with the interrupt disabled.

To solve this, handle virtchnl event messages while reading virtchnl
messages from the PF through the adminq.
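
A rough sketch of the approach (the opcode values, message layout and
callback types are invented for illustration; this is not the iavf
code):

    /* Hypothetical opcode values, for illustration only. */
    enum vc_opcode { VC_OP_EVENT = 17, VC_OP_REPLY = 3 };

    struct vc_msg {
        enum vc_opcode opcode;
        /* payload omitted */
    };

    typedef int (*aq_read_t)(struct vc_msg *msg);           /* 0 = got a message */
    typedef void (*event_handler_t)(const struct vc_msg *msg);

    /* Poll the adminq for a reply with opcode 'expected'.  Event messages
     * received in the meantime are dispatched inline instead of being
     * left to the (possibly disabled) interrupt handler.
     */
    static int
    poll_reply(enum vc_opcode expected, int max_tries,
               aq_read_t aq_read, event_handler_t on_event)
    {
        struct vc_msg msg;
        int i;

        for (i = 0; i < max_tries; i++) {
            if (aq_read(&msg) != 0)
                continue;               /* nothing pending yet */
            if (msg.opcode == VC_OP_EVENT) {
                on_event(&msg);         /* handle without the interrupt */
                continue;               /* still waiting for the reply */
            }
            if (msg.opcode == expected)
                return 0;
        }
        return -1;                      /* timed out */
    }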

Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2020-11-03 23:24:25 +01:00
Alexander Kozyrev
0f20acbf5e net/mlx5: implement vectorized MPRQ burst
MPRQ (Multi-Packet Rx Queue) processes one packet at a time using
simple scalar instructions. MPRQ works by posting a single large buffer
(consisting of multiple fixed-size strides) in order to receive
multiple packets at once into this buffer. An Rx packet is then copied
to a user-provided mbuf, or the PMD attaches the Rx packet to the mbuf
via a pointer to an external buffer.

There is an opportunity to speed up packet reception by processing
4 packets simultaneously using SIMD (single instruction, multiple data)
extensions. Allocate mbufs in batches for every MPRQ buffer and process
the packets in groups of 4 until all the strides are exhausted. Then
switch to another MPRQ buffer and repeat the process over again.

The vectorized MPRQ burst routine is engaged automatically when the
mprq_en=1 devarg is specified and vectorization is not explicitly
disabled with the rx_vec_en=0 devarg. One limitation applies: LRO is
not supported, and scalar MPRQ is selected when LRO is enabled.
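
A scalar outline of the batching idea only (the real routine uses SIMD
intrinsics over mlx5 CQEs and strides; the callbacks below are generic
stand-ins for the vector and scalar per-packet paths):

    /* Process 'n_strides' strides: full groups of 4 go through the
     * vector path, leftovers fall back to the scalar path.  Returns the
     * number of strides handled.
     */
    static unsigned int
    process_strides_by_four(unsigned int n_strides,
                            void (*process4)(unsigned int first_stride),
                            void (*process1)(unsigned int stride))
    {
        unsigned int i = 0;

        while (i + 4 <= n_strides) {
            process4(i);        /* 4 packets handled in one SIMD pass */
            i += 4;
        }
        while (i < n_strides) {
            process1(i);        /* scalar tail */
            i++;
        }
        return i;
    }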

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 23:24:25 +01:00
Alexander Kozyrev
1ded26239a net/mlx5: refactor vectorized Rx
Move the main processing cycle into a separate function:
rxq_cq_process_v. Put the regular rxq_burst_v function into a
non-arch-specific file. Having all SIMD instructions in a single
reusable block is a first preparatory step to implement a vectorized
Rx burst for the MPRQ feature.

Pass a pointer to the mbuf storage directly to rxq_copy_mbuf_v instead
of calculating the pointer inside this function. This is needed for
the future vectorized Rx routine, which is going to pass a different
pointer here.

Calculate the number of packets to replenish inside
mlx5_rx_replenish_bulk_mbuf. Keeping this logic in one place allows
doing the same for the MPRQ case.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 23:24:25 +01:00
Xueming Li
16dbba257c net/mlx5: fix port shared data reference count
When probing a representor, the tag cache hash table and the
modification cache hash table were allocated for each port, overwriting
the existing caches in the shared context data.

This patch moves the reference check of the shared data before the hash
table allocation to avoid this issue.
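
An outline of the pattern under an assumed, simplified shared-context
layout (not the actual mlx5 structures; calloc() stands in for the
hash-list constructors):

    #include <stdlib.h>

    struct shared_ctx {
        unsigned int refcnt;
        void *tag_table;        /* tag cache hash table */
        void *modify_table;     /* modification cache hash table */
    };

    /* Only the first port referencing the shared context allocates the
     * hash tables; subsequent ports take a reference and reuse them.
     */
    static int
    shared_ctx_attach(struct shared_ctx *sh)
    {
        if (sh->refcnt++ > 0)
            return 0;                      /* already initialized: reuse */
        sh->tag_table = calloc(1, 64);     /* placeholder for hash list create */
        sh->modify_table = calloc(1, 64);
        if (sh->tag_table == NULL || sh->modify_table == NULL)
            return -1;
        return 0;
    }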

Fixes: 6801116688fe ("net/mlx5: fix multiple flow table hash list")
Fixes: 1ef4cdef2682 ("net/mlx5: fix flow tag hash list conversion")
Cc: stable@dpdk.org

Acked-by: Matan Azrad <matan@nvidia.com>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
2020-11-03 23:24:25 +01:00
Shiri Kuzin
42dcd453d9 net/mlx5: fix xstats reset reinitialization
The mlx5_xstats_reset callback clears the device extended statistics.
In this function the driver may reinitialize the structures that are
used to read device counters.

In case of reinitialization, the number of counters may change, which
would not be taken into account by the reset callback and can cause a
segmentation fault.

This issue is fixed by sizing the counters allocation after the
reinitialization.

Fixes: a4193ae3bc4f ("net/mlx5: support extended statistics")
Cc: stable@dpdk.org

Reported-by: Ralf Hoffmann <ralf.hoffmann@allegro-packets.com>
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
2b5b1aeb39 net/mlx5: optimize counter extend memory
Counter extend memory was allocated for non-batch counters to hold the
extra DevX object. Currently, for a non-batch counter that does not
support aging, the entry field in the generic counter struct is used
only while the counter sits in the free list, and the bytes field is
used only while the counter is allocated and in use.

Therefore, the DevX object can be stored in the generic counter struct,
in a union with the entry memory while the counter is allocated and in
a union with bytes while the counter is free. The pool type field is
also not needed: since non-fallback mode only has generic and aging
counters, a single bit indicating whether the pool is aged is enough.

This eliminates the counter extend info struct and saves memory.
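
An illustrative layout of the overlap idea (field names and types are
made up and do not match the real mlx5 counter structures):

    #include <stdint.h>

    struct counter {
        /* The free-list linkage is unused while the counter is allocated,
         * and the byte count is unused while the counter is free, so the
         * per-counter DevX object can share storage with them.
         */
        union {
            struct counter *free_next;  /* valid while the counter is free */
            void *devx_obj;             /* valid while the counter is allocated */
        };
        union {
            uint64_t bytes;             /* valid while the counter is allocated */
            void *devx_obj_free;        /* valid while the counter is free */
        };
        uint64_t hits;
        uint8_t is_aged;                /* replaces the per-pool type field */
    };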

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
cfbdc3f938 net/mlx5: rename flow counter macro
Add the MLX5_ prefix to the defined counter macro names.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
e7138997e0 net/mlx5: make shared counters thread safe
The shared counters save the counter index in a three-level table. As
the three-level table now supports multi-threaded operations, the
shared counters can take advantage of it to become thread safe.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
0796c7b1de net/mlx5: make three level table thread safe
This commit adds thread safety to the three-level table using a
spinlock and a reference counter for each table entry.

A new mlx5_l3t_prepare_entry() function is added in order to support
multi-threaded operation.
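
A generic sketch of the lookup-or-create pattern with a spinlock and a
per-entry reference count (the table layout and helper declarations are
hypothetical; this is not the actual mlx5_l3t_prepare_entry()
implementation):

    #include <stdint.h>
    #include <rte_spinlock.h>

    struct entry {
        uint32_t refcnt;
        /* entry payload ... */
    };

    struct table {
        rte_spinlock_t lock;
        /* three-level index ... */
    };

    /* Hypothetical helpers: look up an entry, or create and register one. */
    struct entry *table_lookup(struct table *tbl, uint32_t idx);
    struct entry *table_create(struct table *tbl, uint32_t idx);

    static struct entry *
    table_prepare_entry(struct table *tbl, uint32_t idx)
    {
        struct entry *e;

        rte_spinlock_lock(&tbl->lock);
        e = table_lookup(tbl, idx);
        if (e == NULL)
            e = table_create(tbl, idx);     /* starts with refcnt == 0 */
        if (e != NULL)
            e->refcnt++;                    /* caller now holds a reference */
        rte_spinlock_unlock(&tbl->lock);
        return e;
    }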

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
3aa279157f net/mlx5: synchronize flow counter pool creation
Currently, counter operations are not thread safe because the counter
pools' array resize is not protected.

This commit protects the container pools' array resize with a spinlock.
The counter pool statistics memory allocation is moved to the host
thread in order to minimize the critical section, since that memory is
required only at query time. The pools' array must be resized by the
user threads: a new pool may be used by other rte_flow APIs before the
host thread finishes the resize, and if the pool were not yet saved to
the counter management pools' array, the specified counter memory would
not be found. The pool raw statistics memory is still filled in the
host thread.

The shared counters will be protected in another commit.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
994829e695 net/mlx5: remove single counter container
A flow counter allocated by the batch API could not be assigned to a
flow in the root table (group 0) with old rdma-core versions. Hence, a
root table flow counter required a PMD mechanism to manage counters
that were allocated singly.

Batch counters are now supported in the root table when a new rdma-core
version provides the MLX5_FLOW_ACTION_COUNTER_OFFSET enum and the
kernel driver provides the
MLX5_IB_ATTR_CREATE_FLOW_ARR_COUNTERS_DEVX_OFFSET enum.

When the PMD uses the rdma-core API to assign a batch counter with an
invalid counter offset to a root table flow, it gets an error only if
batch counter assignment for the root table is supported. Performing
this trial at initialization time helps to detect the support.

Based on this trial, if the support is present, remove the management
of the single counter container from the fast counter mechanism.
Otherwise, switch the counter mechanism to fallback mode.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
df051a3e77 net/mlx5: optimize shared counter memory
Instead of using dedicated memory to indicate a shared counter, this
patch uses the reserved memory in the counter handle to indicate it.
A counter index carrying MLX5_CNT_SHARED_OFFSET denotes a shared
counter.

This patch is also a preparation for a coming change that uses batch
counters as shared counters.
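
A small sketch of the encoding idea (the flag bit value below is
arbitrary and does not match the real MLX5_CNT_SHARED_OFFSET):

    #include <stdint.h>
    #include <stdbool.h>

    #define CNT_SHARED_BIT  (1u << 29)      /* hypothetical flag bit */

    /* Encode the "shared" property in the counter index itself. */
    static inline uint32_t
    cnt_index_make(uint32_t base_idx, bool shared)
    {
        return shared ? (base_idx | CNT_SHARED_BIT) : base_idx;
    }

    static inline bool
    cnt_index_is_shared(uint32_t idx)
    {
        return (idx & CNT_SHARED_BIT) != 0;
    }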

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Suanming Mou
6b7c717ed1 net/mlx5: locate aging pools in the general container
Commit [1] introduced a separate container for the aging counter
pools. In order to save container memory, the aging counter pools can
be located in the general pool container.

This patch places the aging counter pools in the general pool container
and removes the aging container management.

[1] commit fd143711a6ea ("net/mlx5: separate aging counter pool range")

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 23:24:25 +01:00
Ferruh Yigit
0b42b92ae4 net/bnxt: fix xstats by id
The xstats-by-id device operation seems wrong: it fills the 'xstats'
struct via the 'bnxt_dev_xstats_get_op()' call, but the retrieved
values are not copied to the user-provided 'values' array.

The ethdev layer 'rte_eth_xstats_get_by_id()' &
'rte_eth_xstats_get_names_by_id()' already provide "by id" support when
the device operations are missing.
It is good for a PMD to provide these device operations if it has a
more performant way to get values by id, but the current PMD
implementation does the same thing as the ethdev APIs, so removing them
provides the same functionality.
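
For reference, a minimal caller-side example of the ethdev by-id API
that backs this path (ids 0 and 3 are arbitrary; real code should first
map names to ids with rte_eth_xstats_get_names()):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    /* Fetch two specific xstats by id from 'port_id'.  The ethdev layer
     * falls back to the full-get path when the PMD does not implement
     * the by-id callbacks.
     */
    static int
    print_two_xstats(uint16_t port_id)
    {
        const uint64_t ids[] = { 0, 3 };
        uint64_t values[2];
        int ret;

        ret = rte_eth_xstats_get_by_id(port_id, ids, values, 2);
        if (ret < 0)
            return ret;
        printf("xstat[0]=%" PRIu64 " xstat[3]=%" PRIu64 "\n",
               values[0], values[1]);
        return 0;
    }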

Fixes: 88920136688c ("net/bnxt: support xstats get by id")
Cc: stable@dpdk.org

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-11-03 23:24:25 +01:00
Somnath Kotur
97c3271781 net/bnxt: fix queue release
Some of the ring-related memory was not being freed in both release
ops. Fix this by freeing it now.
Add more NULL pointer checks in the corresponding
queue_release_mbufs() and queue_release_op() respectively.
Also call queue_release_op() in the error path of the corresponding
queue_setup_op().
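
A generic outline of the intended pattern (simplified types, not the
bnxt code): the release helper tolerates NULL and frees the ring
memory, and the setup path calls it on error so nothing leaks:

    #include <stdlib.h>
    #include <errno.h>

    struct queue {
        void *ring_mem;
        void *mbuf_ring;
    };

    static void
    queue_release(struct queue *q)
    {
        if (q == NULL)
            return;
        free(q->mbuf_ring);     /* free() is NULL-safe */
        free(q->ring_mem);
        free(q);
    }

    static int
    queue_setup(struct queue **out)
    {
        struct queue *q = calloc(1, sizeof(*q));

        if (q == NULL)
            return -ENOMEM;
        q->ring_mem = malloc(4096);
        q->mbuf_ring = malloc(1024);
        if (q->ring_mem == NULL || q->mbuf_ring == NULL) {
            queue_release(q);   /* error path frees everything */
            return -ENOMEM;
        }
        *out = q;
        return 0;
    }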

Fixes: 6133f207970c ("net/bnxt: add Rx queue create/destroy")
Fixes: 51c87ebafc7d ("net/bnxt: add Tx queue create/destroy")
Cc: stable@dpdk.org

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-11-03 23:24:25 +01:00
Andrew Rybchenko
bfea115b9f doc: advertise flow API transfer rules support in sfc
Transfer rules support matching on various inner and outer packet
headers and on traffic source items such as PORT_ID, PHY_PORT, PF and
VF, as well as actions to route traffic to a destination (PORT_ID,
PHY_PORT, PF, VF or DROP), MARK, FLAG, and VLAN push/pop
transformations.
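
For illustration, a transfer rule expressed through the generic
rte_flow API (whether a given PMD accepts this exact pattern depends on
its reported capabilities): match traffic entering from one DPDK port
and redirect it to another.

    #include <rte_flow.h>

    static struct rte_flow *
    create_port_redirect(uint16_t pmd_port, uint32_t src_port,
                         uint32_t dst_port, struct rte_flow_error *err)
    {
        /* transfer = 1 selects the e-switch (transfer) rule domain */
        const struct rte_flow_attr attr = { .transfer = 1 };
        const struct rte_flow_item_port_id src = { .id = src_port };
        const struct rte_flow_action_port_id dst = { .id = dst_port };
        const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_PORT_ID, .spec = &src },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &dst },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(pmd_port, &attr, pattern, actions, err);
    }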

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
dadff13793 net/sfc: support encap flow items in transfer rules
Add support for flow items VXLAN, Geneve and NVGRE to
MAE-specific RTE flow implementation.

Having support for these items implies the ability to insert
so-called outer MAE rules and refer to them in MAE action rules.
The patch takes care of all necessary facilities to do that.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
7a673e1a4a common/sfc_efx/base: support outer rule provisioning
Let the client insert / remove outer rules.
Let the client refer to an inserted outer rule in a match
specification of type ACTION.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
ed15d7f8e0 common/sfc_efx/base: validate and compare outer match specs
Let the client validate an outer match specification.
Let the client compare the classes of two outer match specifications.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
36d124c37d common/sfc_efx/base: add API to compare match specs
Match specification format and its size are not exposed to clients.
Provide an API to compare two match specifications.

A client would typically use this API to compare a match specification
of an outer rule being validated with match specifications of already
active outer rules (to make sure that rule class is supported).

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
0ec6faf41d common/sfc_efx/base: add MAE match field VNET ID for tunnels
Add MCDI-compatible enumeration for this field and
provide necessary mappings for it to be inserted
directly into mask-value pairs buffer.

VNET_ID can be used to serve the following match fields:
rte_flow_item_vxlan.vni, rte_flow_item_geneve.vni,
rte_flow_item_nvgre.tni

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
1efc26e1e0 common/sfc_efx/base: add MAE encap match fields
Add MCDI-compatible enumeration for these fields and
provide necessary mappings for them to be inserted
directly into mask-value pairs buffer.

These fields are meant to comprise a so-called outer
match specification; provide necessary definitions.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
891408c45a common/sfc_efx/base: indicate MAE support for encapsulation
MAE provides support for encapsulation. One needs to insert
a so-called outer rule, which can match outer packet fields,
to require that matching packets be parsed as tunnel frames
of a given type (VXLAN, Geneve, NVGRE). Then it is possible
to chain this rule with an action rule in order to match on
inner fields and carry out some actions on matching packets.

Report to clients what encapsulation types are supported by
MAE. Indicate the number of priority levels for outer rules.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
c50442b0a7 net/sfc: support flow item UDP in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
e0eb90cbf8 net/sfc: support flow item TCP in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
080845d3a8 common/sfc_efx/base: add MAE match fields for TCP and UDP
Add MCDI-compatible enumeration for these fields and
provide necessary mappings for them to be inserted
directly into mask-value pairs buffer.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
e191e0d51c net/sfc: support flow item IPv6 in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
d6192a7cca common/sfc_efx/base: add MAE match fields for IPv6
Add MCDI-compatible enumeration for these fields and
provide necessary mappings for them to be inserted
directly into mask-value pairs buffer.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
bcf8dda8ca net/sfc: support flow item IPv4 in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
afb4125433 common/sfc_efx/base: add MAE match fields for IPv4
Add MCDI-compatible enumeration for these fields and
provide necessary mappings for them to be inserted
directly into mask-value pairs buffer.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
900d0c1b71 net/sfc: support flow item VLAN in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

In a pattern, an L2 item preceding a VLAN item must have the correct
"type" ("inner_type") set depending on the total number of VLAN tags
(double-tagging is supported):

"pattern eth type is X / vlan / end",
X = 0x8100, or 0x88a8, or 0x9100, or 0x9200, or 0x9300

"pattern eth type is X / vlan inner_type is 0x8100 / vlan / end"
X = 0x88a8, or 0x9100, or 0x9200, or 0x9300

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
7a376918c2 common/sfc_efx/base: add MAE match fields for VLAN
Add MCDI-compatible enumeration for these fields and
provide necessary mappings for them to be inserted
directly into mask-value pairs buffer.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
33d96ca564 net/sfc: support flow item port ID in transfer rules
Add support for this flow item to MAE-specific RTE flow implementation.

The DPDK port must not relate to a different physical device.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
1fb65e4dae net/sfc: support flow action port ID in transfer rules
The action handler will use MAE action DELIVER with MPORT
of the PCIe function associated with a given DPDK port ID.
The DPDK port must not relate to a different physical device.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
1e7fbdf0ba net/sfc: support concept of switch domains/ports
A later patch will add support for RTE flow action PORT_ID in the MAE
backend. The driver has to ensure that such actions refer to RTE
ethdev instances deployed on top of the same physical device. Also,
the driver needs a means to find sibling RTE ethdev instances when
parsing such actions.

In order to solve these problems, add a switch infrastructure which
allocates switch domains based on the persistence of the device serial
number string across the switch ports included in a domain. Explain
the mapping between RTE switch port IDs and MAE endpoints.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
e86b48aa46 net/sfc: add HW switch ID helpers
The driver will need a means to figure out the relationship between
RTE ethdev instances and the underlying HW switch entities. For now,
use the board serial number string as a unique HW switch identifier.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
833cfcd590 common/sfc_efx/base: add API for querying board info
Riverhead boards can provide extended version information.
Implement facilities necessary to obtain it.
Add an API for querying board information.

A client driver may use this to discover which of its instances
relate to which physical boards, based on board serial number
persistence for a given physical board.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
312191e86e common/sfc_efx/base: refactor version / boot info get helper
Refactor MCDI helper for version information and boot status
retrieval; it should comprise two dedicated helper functions.

A later patch will extend and reuse version retrieval helper.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
0839236d03 net/sfc: support flow action drop in transfer rules
Effectively, the resulting action will be of type DELIVER, and
destination MPORT will be a properly constructed NULL value.
This will achieve the requested behaviour (no delivery).

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
bb024542ff common/sfc_efx/base: add API for adding action drop
Client drivers may need to request that matching traffic be dropped.
Add a dedicated API to support this. The API relies on action
DELIVER with properly constructed NULL MPORT argument.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
70f3cadce6 net/sfc: support flow actions PF and VF in transfer rules
The action handler will use MAE action DELIVER with MPORT of the PF/VF.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
2ada1448d8 net/sfc: support flow items PF and VF in transfer rules
Add support for these flow items to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
097058033f common/sfc_efx/base: add API to get mport of PF/VF
PCIe functions have static MPORTs which can be utilised by
MAE rules as delivery destinations for matching traffic.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
4bee5ad547 common/sfc_efx/base: add named constant for invalid VF
This makes existing code clearer. Also, it will be used by a later patch.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
e7de2bcbe8 net/sfc: support flow action mark in MAE backend
The action handler will use MAE action MARK.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
83352289e1 common/sfc_efx/base: support adding mark action to set
This action can be added at any point before DELIVER.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00