Commit Graph

11021 Commits

Author SHA1 Message Date
Jiaqi Min
29f19cac62 net/i40e/base: update X722/X710 FW API version to 1.10
Update the X722/X710 FW API version to 1.10.

Signed-off-by: Piotr Azarewicz <piotr.azarewicz@intel.com>
Signed-off-by: Jiaqi Min <jiaqix.min@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2020-04-21 13:57:07 +02:00
Xiao Zhang
5908712aa5 net/iavf: support more flow patterns
Add pattern support for AH/ESP/L2TPV3OIP/PFCP.
The added patterns are as follows:
/* GTPU */
    eth/ipv4/udp/gtpu
    eth/ipv4/udp/gtpu/gtp_psc
/* ESP */
    eth/ipv4/esp  eth/ipv4/udp/esp
    eth/ipv6/esp  eth/ipv6/udp/esp
/* AH */
    eth/ipv4/ah  eth/ipv6/ah
/* L2TPV3 */
    eth/ipv4/l2tpv3oip  eth/ipv6/l2tpv3oip
/* PFCP */
    eth/ipv4/udp/pfcp  eth/ipv6/udp/pfcp
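
As a hedged illustration (not taken from the patch itself), one of these
patterns, eth/ipv4/esp, maps to the following rte_flow item array; NULL
spec/mask entries match any ESP-over-IPv4 packet:

/* Pattern items for eth/ipv4/esp, terminated by the END item. */
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_ESP },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};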

Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:07 +02:00
Qiming Yang
ff2d0c345c net/iavf: support generic flow API
This patch adds iavf_flow_create, iavf_flow_destroy,
iavf_flow_flush and iavf_flow_validate support;
these are used to handle all the generic filters.

This patch supports basic L2, L3, L4 and GTPU patterns.
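
As a hedged usage sketch (not part of the patch), an application reaches
these ops through the generic rte_flow calls; attr, pattern and actions
are assumed to be prepared elsewhere:

/* Validate first, then create; destroy releases the filter again. */
struct rte_flow_error error;
struct rte_flow *flow = NULL;

if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0)
	flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
if (flow != NULL)
	rte_flow_destroy(port_id, flow, &error);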

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:07 +02:00
Yunjian Wang
497eb88c14 net/pfe: fix double free of MAC address
The freeing of 'mac_addrs' has been moved to rte_eth_dev_release_port(),
so freeing 'mac_addrs' in pfe_eth_exit() is unnecessary and causes a
double free.

Fixes: 67fc3ff97c ("net/pfe: introduce basic functions")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-04-21 13:57:07 +02:00
Suanming Mou
261bb99a21 net/mlx5: reorganize fallback counter management
Currently, the fallback counter is also allocated from the pool, so the
dedicated fallback function code largely duplicates the normal path.

Reorganize the fallback counter code so that it reuses the normal
counter code.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-04-21 13:57:07 +02:00
Suanming Mou
826b8a8732 net/mlx5: split flow counter struct
Currently, the counter struct holds the members used by both batch and
non-batch counters. The members used only by non-batch counters cost
16 bytes of extra memory per batch counter. As there are normally few
non-batch counters, mixing the non-batch and batch counter members
becomes quite expensive for batch counters: if 1 million batch counters
are created, 16 MB of memory that the batch counters never use is
allocated.

Splitting the mlx5_flow_counter struct between batch and non-batch
counters saves that memory.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-04-21 13:57:07 +02:00
Suanming Mou
956d5c74d7 net/mlx5: optimize flow counter handle type
Both DV and verbs counters are now indexed, which means that when
creating a flow with a counter, the flow can save the index value to
address the counter.

Saving the 4-byte index in rte_flow instead of an 8-byte pointer saves
memory when there are millions of flows.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-04-21 13:57:07 +02:00
Suanming Mou
4001d7ad26 net/mlx5: change Direct Verbs counter to indexed
This part of the counter optimization changes the DV counter to be
indexed, as has already been done for verbs. With this, every mlx5 flow
counter can be addressed by index.

The counter index is composed of the pool index and the counter offset
in the pool counter array. The 0x800000 dcs ID offset is used to keep
batch and non-batch indexes apart: since batch counter dcs IDs start
from 0x800000 and non-batch counter dcs IDs start from 0, the 0x800000
offset is added to the batch counter index to mark it as a batch
counter index.

The counter field in the rte_flow struct becomes an index instead of a
pointer. This saves 4 bytes of memory for every rte_flow; with millions
of rte_flow entries, it saves megabytes of memory.
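
A hedged sketch of this index composition (the macro names and the
per-pool counter count are assumptions, not the exact mlx5 definitions):

#include <stdint.h>

#define CNT_BATCH_OFFSET 0x800000 /* assumed name for the dcs ID offset */
#define CNT_PER_POOL     512      /* assumed number of counters per pool */

/* Compose a counter index from the pool index and the in-pool offset,
 * adding the offset for counters from batch (dcs ID >= 0x800000) pools. */
static inline uint32_t
cnt_make_index(uint32_t pool_idx, uint32_t offset, int batch)
{
	uint32_t idx = pool_idx * CNT_PER_POOL + offset;

	return batch ? idx + CNT_BATCH_OFFSET : idx;
}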

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-04-21 13:57:07 +02:00
Suanming Mou
1d0ab7e795 common/mlx5: add batch counter id offset
This commit is part of the DV counter optimization.

Batch counter dcs IDs start from 0x800000 and non-batch counter dcs IDs
start from 0. The counter is now indexed by the pool index and the
offset of the counter in the pool counters_raw array, so the raw index
can be the same for batch and non-batch counters. Adding the 0x800000
batch counter offset to the batch counter index indicates whether the
index comes from the batch or the non-batch container pool.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-04-21 13:57:07 +02:00
Suanming Mou
c3d3b14099 net/mlx5: change verbs counter allocator to indexed
This is part of the counter optimization which saves the counter index
instead of the counter pointer in rte_flow.

Placing the verbs counter into the container pool lets the counter be
indexed correctly, independently of the raw counter.

The counter pointer in rte_flow will be changed to an index value once
the DV counter is also changed to be indexed.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-04-21 13:57:07 +02:00
Suanming Mou
c989f49a38 net/mlx5: optimize counter release query generation
Query generation was introduced to avoid a counter being reallocated
before its statistics are fully updated: counters released between the
query trigger and the query handler may miss the packets that arrived
in that gap period. In this case, a counter can only be reallocated
once the pool query_gen is greater than the counter query_gen + 1,
which indicates that a new round of queries has finished and the
statistics are fully updated.

Splitting the pool query_gen into start_query_gen and end_query_gen
makes it easier to identify counters released in the gap period, and
lets counters released before the query trigger or after the query
handler be reallocated more efficiently.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-04-21 13:57:07 +02:00
Suanming Mou
92a0a3a138 net/mlx5: fix counter container usage
As the non-batch counter pool allocates only one counter at a time,
after the newly allocated counter is popped out the pool becomes empty
and is moved to the end of the pool list in the container.

Currently, a new non-batch counter allocation may come with a newly
allocated counter pool, meaning the new counter comes from a new pool.
While the new pool is allocated, the container resize and switch
happen. In this case, once the pool becomes empty, it should be added
to the pool list of the new container it belongs to.

Update the container pointer along with the pool allocation to avoid
adding the pool to the incorrect container.

Fixes: 5382d28c21 ("net/mlx5: accelerate DV flow counter transactions")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
4ffe83d51e net/ena: update driver version to v2.1.0
Version 2.1.0 refactors the Tx and Rx paths, includes a few bug fixes
and adds new features which are going to be available with the newest
hardware.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
83fd97b206 net/ena: reuse zero length Rx descriptor
Some ENA devices can pass a descriptor with length 0 to the driver. To
avoid an extra allocation, the descriptor can be reused by simply
putting it back to the device.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
36278b8227 net/ena: refactor Tx
The original Tx function was very long and contained both the cleanup
and the sending sections. Because of that it had a lot of local
variables, deep indentation and was hard to read.

This function was split into 2 sections:
  * Sending - which is responsible for preparing the mbuf, mapping it
    to the device descriptors and, finally, sending the packet to the HW
  * Cleanup - which releases packets sent by the HW. The loop which
    releases packets was reworked a bit to make the intention more
    visible and aligned with other parts of the driver.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
c0006061fb net/ena: use macros for ring index operations
To improve code readability, an abstraction was added for operating on
IO ring indexes.

The driver was defining a local variable for the ring mask in each
function that needed to operate on ring indexes. The mask is now stored
in the ring, as this value won't change unless the ring size changes,
and macros for advancing indexes using the mask have been added.
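
A hedged illustration of what such mask-based macros typically look like
(the names are assumptions and they rely on the ring size being a power
of two):

/* mask == ring size - 1, so the AND wraps the index around the ring. */
#define ENA_IDX_NEXT_MASKED(idx, mask)   (((idx) + 1) & (mask))
#define ENA_IDX_ADD_MASKED(idx, n, mask) (((idx) + (n)) & (mask))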

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
7755060798 net/ena: limit refill threshold by fixed value
The divider used for both the Tx and Rx cleanup/refill thresholds can
cause too big a delay in case of really big rings - for example, with
an 8k Rx ring, the refill won't trigger until the threshold of 1024 is
reached, and the driver will then try to allocate that many descriptors
at once.

Limiting the threshold by a fixed value - 256 in this case - bounds the
maximum time spent in the repopulate function.
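
A hedged sketch of the capping logic (the divider of 8 is inferred from
the 8k/1024 example above; the macro names are assumptions):

#define REFILL_THRESH_DIVIDER 8   /* assumed divider */
#define REFILL_THRESH_PACKET  256 /* fixed cap from this patch */

/* Cap the divider-based threshold so huge rings don't delay refills:
 * 8192 / 8 = 1024 would otherwise be used, now capped at 256. */
refill_threshold = RTE_MIN(ring_size / REFILL_THRESH_DIVIDER,
			   (unsigned int)REFILL_THRESH_PACKET);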

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
74456796e0 net/ena: rework getting number of available descriptors
The ena_com API should be preferred for getting the number of
used/available descriptors unless an extra calculation needs to be
performed.

Some helper variables were added for storing values that are reused
later. Moreover, for limiting the number of sent/received packets to
the number of available descriptors, RTE_MIN is used instead of an if
statement, which did a similar thing but was less descriptive.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
1be097dc96 net/ena: refactor Rx
* Split the main Rx function into multiple ones - the body of the main
  function was very big and contained 2 nested loops, which made the
  code hard to read.
* Rework how the Rx mbuf chains are created - instead of a while loop
  with a conditional check for the first segment, handle the first
  segment outside the loop and, if more fragments exist, process them
  inside it.
* Initialize the Rx mbuf using a simple function - it is common to the
  1st and the following segments.
* Create a structure for the Rx buffer to align it with the Tx path and
  other ENA drivers, and to make the variable name more descriptive -
  in DPDK, the Rx buffer must hold only an mbuf, so initially an array
  of mbufs was used as the buffers. However, this was misleading, as it
  was named "rx_buffer_info". To make it clearer, a structure holding
  the mbuf pointer was added, and it can now be expanded in the future
  without reworking the driver.
* Remove redundant variables and conditional checks.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
33dde075fc net/ena: disable meta caching
In LLQ (low-latency queue) mode, the device can indicate that meta
data descriptor caching is disabled. In that case, the driver must send
a valid meta descriptor with every Tx packet.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
e1e73e3229 net/ena: add Tx drops statistic
The ENA device can report, in the AENQ handler, the number of Tx
packets that were dropped and not sent.

This statistic is a global value for the device and, because
rte_eth_stats lacks a field that could hold it (it isn't a Tx error),
it is presented as an extended statistic.

As the current design of extended statistics prevents tx_drops from
being an atomic variable, and both tx_drops and rx_drops are only
updated from the AENQ handler, both were made non-atomic for
consistency.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
38faa87eb8 net/ena: remove memory barriers before doorbells
The doorbell code already issues the doorbell using rte_write, which
provides the required write ordering. Because of that, there is no need
to issue an explicit memory barrier before calling the function.
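
A hedged illustration of the change (the field names are assumptions);
rte_write32() orders prior stores before the doorbell write, so the
separate barrier is redundant:

rte_wmb();                                     /* removed by this patch */
rte_write32(ring->next_to_use, ring->db_addr); /* already ordered */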

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
8a7a73f26c net/ena: support large LLQ headers
The default LLQ (low-latency queue) maximum header size is 96 bytes,
which can be too small for some types of packets - like IPv6 packets
with multiple extension headers. This can be fixed by using large LLQ
headers.

If the device supports larger LLQ headers, the user can activate them
by passing the device argument 'large_llq_hdr' with value '1'.

If the device doesn't support this feature, the default value (96B) is
used.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
5920d93083 net/ena: refactor getting IO queues capabilities
The values read from the device describe its maximum capabilities.
Because of that, the names of the fields storing those values, the
functions and the temporary variables should be more descriptive in
order to make the code self-documenting.

In connection with this, the way of getting the maximum queue size
could be simplified - no hardcoded values are needed, as the device
sends its capabilities anyway.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
badc3a6aa1 net/ena: set IO ring size to valid value
IO rings were configured with the maximum allowed Tx/Rx ring size.
However, the application could decide to create smaller rings.

This patch uses the value stored in the ring instead of the value from
the adapter, which indicates the maximum allowed size.

Fixes: df238f84c0 ("net/ena: recreate HW IO rings on start and stop")
Cc: stable@dpdk.org

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
bde3b46f79 net/ena/base: update generation date and commit
The current ena_com version was generated on 25.09.2019.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
5dcdfbfaaf net/ena/base: fix indentation of multiple defines
As the alignment of the defines wasn't valid, it was removed entirely:
instead of multiple spaces or tabs, a single space after the define
name is now used.

Fixes: 99ecfbf845 ("ena: import communication layer")

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
422397c56c net/ena/base: fix types for printing timestamps
Because ena_com is used by multiple platforms with different C
versions, PRIu64 cannot be used directly and must be defined in the
platform file.
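
A hedged sketch of such a platform-level definition (the macro name is
an assumption):

#include <inttypes.h>

/* The DPDK platform header maps the generic printing macro to the
 * standard PRIu64; other platforms may define it differently. */
#ifndef ENA_PRIU64
#define ENA_PRIU64 PRIu64
#endif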

Fixes: b2b02edeb0 ("net/ena/base: upgrade HAL for new HW features")

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
a366fe4164 net/ena/base: use 48-bit memory addresses
The ENA device uses 48-bit memory addresses for IO. Because of that,
the upper limit had to be updated.

From the driver's perspective, it's just a cosmetic change making the
definition of the 'ena_common_mem_addr' structure more descriptive; the
address value was already verified for the valid range in the function
'ena_com_mem_addr_set()'.
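
A hedged sketch of the more descriptive layout (the field names are
assumptions; 32 low bits plus 16 high bits give the 48-bit address):

#include <stdint.h>

struct ena_common_mem_addr {
	uint32_t mem_addr_low;  /* address bits [31:0] */
	uint16_t mem_addr_high; /* address bits [47:32] */
	uint16_t reserved16;
};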

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
14f3f532a1 net/ena/base: add error logs when preparing Tx
To make debugging easier, error logs were added in the Tx path.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
6a5283bbad net/ena/base: fix indentation in CQ polling
Spaces instead of tabs were used for the indentation.

Fixes: 3adcba9a89 ("net/ena: update HAL to the newer version")

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:07 +02:00
Michal Krawczyk
b118993abd net/ena/base: fix documentation of functions
The documentation format was aligned and a few typos were fixed.

Fixes: 99ecfbf845 ("ena: import communication layer")

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:06 +02:00
Michal Krawczyk
f145360455 net/ena/base: add accelerated LLQ mode
In order to use the accelerated LLQ (low-latency queue) mode, the
driver must limit the Tx burst and be aware that the device has meta
caching disabled. In that situation, the meta descriptor must be valid
on each Tx packet.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:06 +02:00
Michal Krawczyk
c06c51d16c net/ena/base: remove extra properties strings
This buffer was never used by the ENA PMD. It could be used for
debugging, but its presence is redundant now.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:06 +02:00
Michal Krawczyk
d2138b2302 net/ena/base: rework interrupt moderation
This feature allows for adaptive interrupt moderation. It is not used
by the DPDK PMD, but is part of the newest HAL version.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:06 +02:00
Michal Krawczyk
585cacc67f net/ena/base: remove conversion of indirection table
After the indirection table is saved in the device, there is no need
to convert it back, as it is already stored in the host_rss_ind_tbl
array.

As a result, the call to ena_com_ind_tbl_convert_from_device() is not
needed.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
2020-04-21 13:57:06 +02:00
Michal Krawczyk
6e585db689 net/ena/base: fix testing for supported hash function
There was a bug in ena_com_fill_hash_function() which caused the bit
to be shifted left one position too far.

To fix that, the ENA_FFS macro is used (returning the position of the
first set bit), the hash_function value is decremented by 1 if any hash
function is supported by the device, and the BIT macro is used for
shifting, for better readability.
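
A hedged sketch of the described logic (ENA_FFS and BIT are assumed to
behave like the standard ffs()/bit helpers; the surrounding context is
simplified):

#define BIT(x)     (1U << (x))
#define ENA_FFS(x) __builtin_ffs(x)

/* Test support with BIT() - no extra shift - and derive the previous
 * function from the first supported bit (ffs() is 1-based). */
if (!(supported_funcs & BIT(func)))
	return -1;
prev_func = ENA_FFS(supported_funcs);
if (prev_func)
	prev_func--;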

Fixes: 99ecfbf845 ("ena: import communication layer")
Cc: stable@dpdk.org

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
2020-04-21 13:57:06 +02:00
Michal Krawczyk
086c6b66e8 net/ena/base: generate default random RSS hash key
Although the RSS key still cannot be set, it is now generated every
time the driver is initialized.

Multiple devices can still have the same key if they're used by the same
driver.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:06 +02:00
Igor Chauskin
29dc10d942 net/ena/base: prevent allocation of zero sized memory
rte_memzone_reserve() reserves the biggest contiguous memzone
available if it receives 0 as the size parameter.
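
A hedged sketch of the guard this implies (the names are generic, not
the exact ENA allocation helper):

/* Refuse zero-sized requests instead of letting rte_memzone_reserve()
 * grab the largest available memzone. */
if (size == 0)
	return NULL;
mz = rte_memzone_reserve(name, size, socket_id, RTE_MEMZONE_IOVA_CONTIG);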

Fixes: 9ba7981ec9 ("ena: add communication layer for DPDK")
Cc: stable@dpdk.org

Signed-off-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:06 +02:00
Igor Chauskin
b14fcac035 net/ena/base: make allocation macros thread-safe
The memory allocation region ID could be non-unique due to a
non-atomic increment, causing allocation failures.
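
A hedged sketch of an atomic suffix counter for the memzone name (the
variable and buffer names are assumptions):

static rte_atomic64_t ena_alloc_cnt;   /* assumed counter name */
char z_name[RTE_MEMZONE_NAMESIZE];

/* The atomic increment guarantees a unique name even when several
 * threads allocate memory concurrently. */
snprintf(z_name, sizeof(z_name), "ena_alloc_%" PRIi64,
	 rte_atomic64_add_return(&ena_alloc_cnt, 1));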

Fixes: 9ba7981ec9 ("ena: add communication layer for DPDK")
Cc: stable@dpdk.org

Signed-off-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:06 +02:00
Michal Krawczyk
38364c2687 net/ena: ensure Rx buffer size is at least 1400B
Some ENA devices can't handle buffers smaller than 1400B. Because of
this limitation, the buffer size is checked and enforced during Rx
queue setup.

If it is below the allowed value, the PMD won't finish its
configuration successfully.
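
A hedged sketch of the setup-time check (the constant name is an
assumption; the 1400B value comes from the message above):

#define ENA_RX_BUF_MIN_SIZE 1400u   /* assumed name */

/* Reject Rx queues whose mbuf data room is below the device minimum. */
buffer_size = rte_pktmbuf_data_room_size(mb_pool) - RTE_PKTMBUF_HEADROOM;
if (buffer_size < ENA_RX_BUF_MIN_SIZE)
	return -EINVAL;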

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
2020-04-21 13:57:06 +02:00
Qi Zhang
3e1374f201 net/ice/base: remove unused code in switch rule
Updating a switch rule's action from "to VSI" to "to VSI list" should
only happen when the same rule has been programmed with a different
forwarding destination. This is already handled by the code block
below:

m_entry = ice_find_adv_rule_entry(...)
if (m_entry) {
	...
	ice_adv_add_update_vsi_list(...)
}

The following ice_update_pkt_fwd_rule call is unnecessary and should
be removed because:
1) If a switch rule's action is still "to VSI", it is being issued for
   the first time, so there is no need to update it to "to VSI list".
2) The implementation does not actually match the comment: it still
   updates the rule with the "to VSI" action.

Fixes: fed0c5ca5f ("net/ice/base: support programming a new switch recipe")
Cc: stable@dpdk.org

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
2020-04-21 13:57:06 +02:00
Lunyuan Cui
e74e1bb628 net/iavf: enable port reset
This patch adds the iavf_dev_reset op, enabling iavf to support
"port reset all".

Signed-off-by: Lunyuan Cui <lunyuanx.cui@intel.com>
Tested-by: Zhaoyan Chen <zhaoyan.chen@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
2020-04-21 13:57:06 +02:00
Lunyuan Cui
ea0c22fd82 net/i40e: enable MAC address as flow director input set
Enable the source MAC address and destination MAC address as FDIR
input set for ipv4-other, ipv4-udp and ipv4-tcp. When OVS-DPDK works as
a pure L2 switch, enabling the MAC address as FDIR input set with the
Mark+RSS action helps speed up performance. FVL FDIR supports changing
the input set to include the MAC address.

Signed-off-by: Lunyuan Cui <lunyuanx.cui@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2020-04-21 13:57:06 +02:00
Junyu Jiang
2aa55b4135 net/ice: fix RSS advanced rule
This patch moves the RSS initialization from dev start to dev
configure to fix an issue where advanced RSS rules became invalid after
running port stop and port start.

Fixes: 5ad3db8d4b ("net/ice: enable advanced RSS")
Cc: stable@dpdk.org

Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Tested-by: Zhiwei He <zhiwei.he@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
2020-04-21 13:57:06 +02:00
Yunjian Wang
9d5996c01d net/nfp: fix dangling pointer on probe failure
When nfp_pf_create_dev() cleans up, it does not set the dev_private
variable to NULL, which leads to a double free.
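
A hedged sketch of the cleanup-path fix (generic ethdev layout, not the
exact nfp code):

/* Clear the reference after freeing so that the later ethdev release
 * path does not free the same memory again. */
rte_free(eth_dev->data->dev_private);
eth_dev->data->dev_private = NULL;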

Fixes: ef28aa96e5 ("net/nfp: support multiprocess")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Heinrich Kuhn <heinrich.kuhn@netronome.com>
2020-04-21 13:57:06 +02:00
Ferruh Yigit
735d826e6c net/nfp: fix log format specifiers
Change the format specifier for 'size_t' to '%z' and for 'off_t' to
'%jd'.

This fix also enables compiling the PMD for 32-bit architectures.

Fixes: 29a62d1476 ("net/nfp: add CPP bridge as service")
Cc: stable@dpdk.org

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Heinrich Kuhn <heinrich.kuhn@netronome.com>
2020-04-21 13:57:06 +02:00
Vadim Podovinnikov
321908b6e3 net/memif: fix resource leak
Fixes: c41a04958b ("net/memif: support multi-process")
Cc: stable@dpdk.org

Signed-off-by: Vadim Podovinnikov <podovinnikov@protei.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-04-21 13:57:06 +02:00
Stephen Hemminger
36274f2871 net/netvsc: avoid possible live lock
Since the ring buffer shared with the host carries both transmit
completions and receive packets, the transmitter could get starved if
the receive ring gets full.

It is better to process all outstanding events, which frees up transmit
buffer slots, even if it means dropping some packets.

Fixes: 7e6c824307 ("net/netvsc: avoid over filling Rx descriptor ring")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-04-21 13:57:06 +02:00
Stephen Hemminger
99c67a0ae7 bus/vmbus: simplify arguments to need signal function
The transmit need-signal function can avoid an unnecessary
dereference by being passed the right pointer. This also makes the code
better match the FreeBSD driver.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-04-21 13:57:06 +02:00
Stephen Hemminger
56edef9906 net/netvsc: handle Tx completions based on burst size
If tx_free_thresh is quite low, it is possible that we need to
clean up based on the burst size.

Fixes: fc30efe3a2 ("net/netvsc: change Rx descriptor setup and sizing")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-04-21 13:57:06 +02:00
Stephen Hemminger
dfd3f0fce8 net/netvsc: remove process event optimization
Remove the unlocked check for data in the receive ring.
This check is not safe because of missing barriers, etc.

Fixes: 4e9c73e96e ("net/netvsc: add Hyper-V network device")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-04-21 13:57:06 +02:00
Stephen Hemminger
30408aab2d net/netvsc: fix memory free on device close
The netvsc PMD was putting the MAC address in private data, but the
core rte_ethdev doesn't allow that. It has to be in rte_malloc'd
memory, or a message will be printed on shutdown/close:
 EAL: Invalid memory

Fixes: f8279f47dd ("net/netvsc: fix crash in secondary process")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-04-21 13:57:06 +02:00
Stephen Hemminger
cc02518132 net/netvsc: split send buffers from Tx descriptors
The VMBus has a reserved transmit area (per device) and transmit
descriptors (per queue). The previous code always had a 1:1 mapping
between send buffers and descriptors.
This can lead to one queue starving another and also to buffer bloat.

Change to work more like FreeBSD, where there is a pool of transmit
descriptors per queue. If a send buffer is not available, no
aggregation happens but the queue can still drain.

Fixes: 4e9c73e96e ("net/netvsc: add Hyper-V network device")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-04-21 13:57:06 +02:00
Stephen Hemminger
107f3cf310 net/netvsc: handle Rx packets during multi-channel setup
It is possible for a packet to arrive during the configuration
process when setting up multi-queue mode. This would cause
configure to fail; fix it by simply ignoring received packets while
waiting for control commands.

Use the receive ring lock to avoid possible races between
oddly behaved applications doing rx_burst and control operations
concurrently.

Fixes: 4e9c73e96e ("net/netvsc: add Hyper-V network device")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-04-21 13:57:06 +02:00
Stephen Hemminger
4bc7dc1110 net/netvsc: propagate descriptor limits from VF
If the application cares about descriptor limits, the netvsc device
values should reflect those of the VF as well.
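
A hedged sketch of the propagation (the fields follow the standard
rte_eth_dev_info layout; the merge policy shown is an assumption):

/* Report the tighter of the synthetic device's and the VF's limits. */
dev_info->rx_desc_lim.nb_max = RTE_MIN(dev_info->rx_desc_lim.nb_max,
				       vf_info.rx_desc_lim.nb_max);
dev_info->rx_desc_lim.nb_min = RTE_MAX(dev_info->rx_desc_lim.nb_min,
				       vf_info.rx_desc_lim.nb_min);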

Fixes: dc7680e859 ("net/netvsc: support integrated VF")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-04-21 13:57:06 +02:00
Beilei Xing
c8183dd8e0 net/ice: redirect switch rule to new VSI
After a VF reset, the VF's VSI number may change; the switch rule
which forwards packets to the old VSI number should be redirected to
the new VSI number.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Beilei Xing
397b4b3c50 net/ice: enable flow redirect on switch
Enable flow redirect on the switch; currently only
VSI redirect is supported.

Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Wei Zhao
5d83492603 net/ice: fix input set of VLAN item
The input set for the inner type of the VLAN item should
be ICE_INSET_ETHERTYPE, not ICE_INSET_VLAN_OUTER.
This MAC VLAN filter is also part of the DCF switch filter.

Fixes: 47d460d632 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Wei Zhao
5f164249f9 net/ice: add more flow support for permission stage
This patch adds switch filter permission stage support
for more flow patterns in PF-only pipeline mode.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Wei Zhao
45b53ed370 net/ice: support IPv6 NAT-T
This patch adds switch filter support for IPv6 NAT-T packets;
it enables the switch filter to direct IPv6 packets with a
NAT-T payload to a specific action.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Wei Zhao
277f125f00 net/ice: support PFCP
This patch adds switch filter support for PFCP packets;
it enables the switch filter to direct IPv4/IPv6 packets with a
PFCP session or node payload to a specific action.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Wei Zhao
66ff885179 net/ice: support ESP/AH/L2TP
This patch adds support for ESP/AH/L2TP packets;
it enables the switch filter to direct IPv6 packets with an
ESP/AH/L2TP payload to a specific action.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Wei Zhao
3428c6b6ec net/ice: add action number check for switch
Only one action is allowed for a DCF or PF
switch filter; multiple actions are not supported.

Fixes: 47d460d632 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Wei Zhao
6bc7628c5e net/ice: change default tunnel type
The default tunnel type for the switch filter is changed to the new
ICE_SW_TUN_AND_NON_TUN definition so that the rule
applies to more packet types.

Fixes: 47d460d632 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Wei Zhao
187e7244ad net/ice: support MAC VLAN rule
This patch adds support for MAC VLAN rules;
it enables the switch filter to direct packets based on
MAC address and VLAN ID.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Wei Zhao
edbd1e921f net/ice: change switch parser to support flexible mask
DCF needs to support flexible mask configuration, that is to say,
some input set masks may not be the all-ones 0xFFFF type. In order
to direct L2/IP multicast packets, the mask for the source IP may be
0xF0000000; this patch enables the switch filter parser to handle it.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Wei Zhao
e6c0f28f3f net/ice: support more PPPoE input set
This patch adds more support for PPPoE packets;
it enables the switch filter to direct PPPoE packets based on
session ID and PPP protocol type.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Wei Zhao
829c310681 net/ice: enable switch flow on DCF
DCF on CVL is a control plane VF which takes responsibility for
configuring all the PF/global resources; this patch adds support for
DCF to program forwarding rules that direct packets to VFs.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-04-21 13:57:06 +02:00
Satheesh Paul
0342232aa4 net/octeontx2: support custom L2 header
This patch adds SDP packet parsing support with a custom L2 header,
and adds support for including a field from the custom header in flow
tag generation.

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-04-21 13:57:06 +02:00
Pavan Nikhilesh
b372fff7d4 net/octeontx2: fix device configuration sequence
When an application invokes rte_eth_dev_configure consecutively
without setting up Rx/Tx queues, it incorrectly returns an error while
trying to restore the Rx/Tx queue configuration.

Fix the configuration sequence by checking whether any Rx/Tx queues
were previously configured before trying to restore them.
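
A hedged illustration of the application-side sequence that previously
failed (port and queue counts are arbitrary):

/* A second configure call before any queue setup used to fail while
 * restoring non-existent queue configuration. */
rte_eth_dev_configure(port_id, 4, 4, &dev_conf);
rte_eth_dev_configure(port_id, 2, 2, &dev_conf);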

Fixes: 548b5839a3 ("net/octeontx2: add device configure operation")
Cc: stable@dpdk.org

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-04-21 13:57:06 +02:00
Krzysztof Kanas
3912fbde15 net/octeontx2: add TM capability
Add Traffic Management capability callbacks to provide
global, level and node capabilities. This patch also
adds documentation on Traffic Management support.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-04-21 13:57:06 +02:00
Krzysztof Kanas
a3147ae9af net/octeontx2: add Tx queue rate limit
Add Tx queue rate limiting support. This support is mutually
exclusive with TM support, i.e. when TM is configured, the Tx queue
rate limiting config is no longer valid.

Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-04-21 13:57:06 +02:00
Nithin Dabilpuram
c3f733efd4 net/octeontx2: support TM debug
Add debug support to TM to dump the configured topology
and registers. Also enable a debug dump when SQ flush fails.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
2020-04-21 13:57:06 +02:00
Nithin Dabilpuram
89d08a1ff8 net/octeontx2: add TM dynamic topology update
Add dynamic parent and shaper update callbacks that
can be used to change the RR quantum or PIR/CIR rate dynamically
after hierarchy commit. The dynamic parent update callback only
supports updating the RR quantum of a given child with respect to
its parent. There is no support yet for changing the priority or the
parent itself.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
2020-04-21 13:57:06 +02:00
Nithin Dabilpuram
1e25d57fae net/octeontx2: add TM stats and shaper profile
Add TM support for stats read and private shaper
profile addition or deletion.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
2020-04-21 13:57:06 +02:00
Nithin Dabilpuram
6ea54725f7 net/octeontx2: add TM hierarchy commit
Add TM hierarchy commit callback to support enabling
newly created topology.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
2020-04-21 13:57:06 +02:00
Krzysztof Kanas
9e17ffb84b net/octeontx2: add TM node suspend/resume
Add TM support to suspend and resume nodes post hierarchy
commit.

Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-04-21 13:57:06 +02:00
Nithin Dabilpuram
2746e76b2a net/octeontx2: support TM node add/delete
Add support for the Traffic Management callbacks "node_add"
and "node_delete". These callbacks don't support
dynamic node addition or deletion after hierarchy commit.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
2020-04-21 13:57:06 +02:00
Nithin Dabilpuram
43f3f05fb6 net/octeontx2: support dynamic topology update
Modify the resource allocation and freeing logic to support
dynamic topology commit while traffic is flowing.
This patch also modifies SQ flush to time out based on the minimum
configured shaper rate. SQ flush is further split into pre/post
functions to adhere to the HW spec of 96XX C0.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
2020-04-21 13:57:06 +02:00
Nithin Dabilpuram
ec8ddd4fb1 net/octeontx2: restructure TM helper functions
Restructure the traffic manager helper functions by splitting them
into multiple sets of register configurations, such as shaping,
scheduling and topology config.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>
2020-04-21 13:57:06 +02:00
Nithin Dabilpuram
1e8d75d805 net/octeontx2: setup link config based on BP level
Configure NIX_AF_TL3_TL2X_LINKX_CFG using schq at
level based on NIX_AF_PSE_CHANNEL_LEVEL[BP_LEVEL].

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
2020-04-21 13:57:06 +02:00
Harman Kalra
c07fbbace8 net/octeontx2: support configuring link attributes
Add support for configuring link attributes like speed,
duplex and negotiation.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-04-21 13:57:06 +02:00
Vamsi Attunuru
fdbdf2721c net/octeontx2: enable error and RAS interrupt in configure
This patch adds routines to set/clear the NIX LF error and RAS
interrupt enable registers. These NIX LF error interrupts are triggered
if there are any failures during NIX LF configuration. The interrupts
are enabled before any hardware configuration is initiated on the
allocated NIX LF.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Andrzej Ostruszka <aostruszka@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-04-21 13:57:06 +02:00
Harman Kalra
100f699242 net/octeontx: support Rx/Tx checksum offload
This patch implements Rx/Tx checksum offload. In case a wrong
checksum is received (inner/outer L3/L4), it reports the
corresponding layer which has the bad checksum; checksums are also
inserted on the Tx side when HW checksum offload is enabled.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
2020-04-21 13:57:06 +02:00
Vamsi Attunuru
241a650061 net/octeontx: support flow control
This patch adds the ethdev flow control set/get callback ops; the PMD
enables modifying flow control attributes like rx_pause, tx_pause and
the high & low water marks.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Harman Kalra <hkalra@marvell.com>
2020-04-21 13:57:06 +02:00
Harman Kalra
8b42b07eef net/octeontx: support set link up/down
Add support for the set link up/down eth operations,
used to enable/disable the LMAC. Also implement a
poll function for getting the link status at regular
intervals.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
2020-04-21 13:57:06 +02:00
Vamsi Attunuru
56139e85ab net/octeontx: support VLAN filter offload
This patch adds support for VLAN filter offload.
MBOX messages for VLAN filter on/off and VLAN filter
entry add/remove are added to configure PCAM entries to
filter VLAN traffic on a given port.

The patch also defines an rx_offload_flag for VLAN filtering.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Harman Kalra <hkalra@marvell.com>
2020-04-21 13:57:06 +02:00
Harman Kalra
3151e6a687 net/octeontx: support MTU
Add support for the MTU eth operation, which configures the MTU based
on the max packet length.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
2020-04-21 13:57:06 +02:00
Harman Kalra
5cbe184802 net/octeontx: support fast mbuf free
This patch adds the capability to fast-release mbufs
following successful transmission.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
2020-04-21 13:57:06 +02:00
Harman Kalra
7f4116bdbb net/octeontx: add framework for Rx/Tx offloads
Add a macro-based framework to hook the Rx/Tx burst function
pointers to the appropriate function based on the Rx/Tx offloads.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
2020-04-21 13:57:06 +02:00
Harman Kalra
85221a0c7c net/octeontx: support multi segment
Add multi-segment support to the octeontx PMD. Also
add the logic to share Rx/Tx offloads with the eventdev
code.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
2020-04-21 13:57:06 +02:00
Kiran Kumar K
41fe7a3a11 net/octeontx2: offload bad L2/L3/L4 UDP lengths detection
The octeontx2 HW supports detecting bad L2/L3/L4 UDP lengths.
Since DPDK does not have a specific error flag for this, it is exposed
as a bad checksum failure in mbuf->ol_flags to leverage this feature.

These errors will be propagated to the ol_flags as follows.

L2 length error ==> (PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD).
Both Outer and Inner L3 length error ==> PKT_RX_IP_CKSUM_BAD.
Outer L4 UDP length/port error ==> PKT_RX_OUTER_L4_CKSUM_BAD.
Inner L4 UDP length/port error ==> PKT_RX_L4_CKSUM_BAD.
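
A hedged sketch of how an application sees these errors through the
standard mbuf flags (the drop policy is just an example):

/* Length errors surface as the regular checksum-bad flags, so existing
 * checksum handling also catches them. */
if (mbuf->ol_flags & (PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
		      PKT_RX_OUTER_L4_CKSUM_BAD))
	rte_pktmbuf_free(mbuf);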

Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-04-21 13:57:06 +02:00
Amit Gupta
be284df082 net/octeontx: fix meson build for disabled drivers
Add a condition to check whether the octeontx drivers are disabled.
The octeontx drivers are built only if the dependent drivers, i.e.
ethdev, mempool and common/octeontx, are enabled.

Bugzilla ID: 387
Fixes: 7f615033d6 ("drivers/net: build Cavium NIC PMDs with meson")
Cc: stable@dpdk.org

Signed-off-by: Amit Gupta <agupta3@marvell.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Harman Kalra <hkalra@marvell.com>
2020-04-21 13:57:06 +02:00
Nithin Dabilpuram
5094a436c6 common/octeontx2: upgrade mbox definition to version 5
Sync the mailbox data structures to version 0x0005,
matching the kernel AF driver.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2020-04-21 13:57:06 +02:00
Yunjian Wang
252566dab5 net/tap: remove unused assert
The assert check is not necessary; gso_ctx is always non-NULL.

Fixes: 050316a883 ("net/tap: support TSO (TCP Segment Offload)")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-04-21 13:57:06 +02:00
Andrew Rybchenko
a0147be547 net/sfc: add Xilinx copyright
Xilinx acquired Solarflare in 2019.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: James Fox <jamesfox@xilinx.com>
2020-04-21 13:57:06 +02:00
Igor Romanov
98608e1824 net/sfc: check actual all multicast unknown unicast filters
Check that the unknown unicast and unknown multicast filters are
applied and return an error if they are not. The error
is used in the promiscuous and all-multicast mode enable and disable
callbacks.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-04-21 13:57:06 +02:00
Igor Romanov
db26ec5f4b net/sfc/base: add API to get currently operating filters
Unknown unicast filter creation may fail because of insufficient
permissions on a VF. This failure is handled internally in libefx MAC
reconfiguration without any way for a user to know it happened.
Making the MAC reconfiguration forward the error code of the filter
reconfiguration would be too destructive to the existing code
that may rely on the function never returning that error.

Add an API for getting the status of the current unknown unicast and
all-multicast filters, since the user must know whether the requested
filters are actually applied.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-04-21 13:57:06 +02:00
Igor Romanov
172751baaa net/sfc/base: refactor multicast filters reconfiguration
Refactor the multicast filter reconfiguration stage of the reconfigure
function to make it clearer and allow for more convenient further
changes.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-04-21 13:57:06 +02:00