Commit Graph

13310 Commits

Author SHA1 Message Date
Ivan Malov
616b03e09e common/sfc_efx/base: support adding VLAN pop action to set
MAE supports stripping two tags, so this action can
be requested once or twice.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
041f22ec34 net/sfc: implement flow insert/remove in MAE backend
Exercise action set allocation / release and action rule
insertion / removal so that flows created via the flow API
actually become functional.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
b4fac34715 common/sfc_efx/base: add MAE action rule provisioning APIs
Add APIs for action rule insert / remove operations.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
e61baa82e6 common/sfc_efx/base: add MAE action set provisioning APIs
The patch adds APIs for action set allocation / release.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
d487651090 net/sfc: support flow action PHY port in MAE backend
The action handler will use MAE action DELIVER with
MPORT of a given physical port.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
80019097c9 common/sfc_efx/base: support adding deliver action to set
Introduce a mechanism for adding actions to an action set and
add support for DELIVER action.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
0b90a1377d net/sfc: support flow item eth in MAE backend
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:25 +01:00
Ivan Malov
7055abad56 common/sfc_efx/base: add MAE match fields for Ethernet
Add MCDI-compatible enumeration for these fields and
provide necessary mappings for them to be inserted
directly into mask-value pairs buffer.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
9ca45fd145 net/sfc: support flow item PHY PORT in MAE backend
Add support for this flow item to MAE-specific RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
370ed675a9 common/sfc_efx/base: support setting PPORT in match spec
Add an API for setting mask-value pairs in a match specification
structure and add support for MAE field INGRESS_PORT of type PPORT.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
662286ae61 net/sfc: add actions parsing stub to MAE backend
If parsing a flow results in an action set specification
identical to an already existing one, duplication is avoided
by reusing the existing list entry. An attach helper and a
reference counter serve this purpose.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
799889bada common/sfc_efx/base: add action set spec init/fini APIs
The engine is only able to carry out chosen actions on matching packets in
a strict order. No MCDI exists to identify supported actions and the order.
Still, the definition of the latter is available from the FW documentation.

The general idea is to define an action specification structure and supply
a client driver with APIs for adding actions individually, order-dependent.
A client driver is supposed to invoke an API on every action passed by the
application, and if an out-of-order action follows, the API will reject it.

Add an action set specification stub and supply initialise / finalise APIs.
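
A rough usage sketch of the order-dependent contract described above. The type and function names below are illustrative approximations of the libefx MAE interface, not verified declarations; treat them as pseudocode.

    /* Illustrative prototypes only -- approximations, not the real
     * libefx declarations. */
    typedef struct efx_nic_s efx_nic_t;
    typedef struct efx_mae_actions_s efx_mae_actions_t;

    int efx_mae_action_set_spec_init(efx_nic_t *enp, efx_mae_actions_t **specp);
    void efx_mae_action_set_spec_fini(efx_nic_t *enp, efx_mae_actions_t *spec);
    int efx_mae_action_set_populate_vlan_pop(efx_mae_actions_t *spec);
    int efx_mae_action_set_populate_deliver(efx_mae_actions_t *spec,
                                            const void *mport);

    static int
    build_action_set(efx_nic_t *enp, const void *mport)
    {
            efx_mae_actions_t *spec;
            int rc;

            rc = efx_mae_action_set_spec_init(enp, &spec);
            if (rc != 0)
                    return rc;

            /* Actions must be added in the FW-mandated order; an
             * out-of-order call is expected to be rejected. */
            rc = efx_mae_action_set_populate_vlan_pop(spec);
            if (rc == 0)
                    rc = efx_mae_action_set_populate_deliver(spec, mport);

            if (rc != 0)
                    efx_mae_action_set_spec_fini(enp, spec);
            return rc;
    }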

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
8b61dd99a1 net/sfc: add verify method to flow validate path
The new method is needed to make sure that a flow being
validated will have a chance to be accepted by the FW.
The MAE-specific implementation of the method should
compare the class of the rule being validated with
the corresponding classes of active rules and, if
no match is found, make a request to the FW.
Support for the latter will be added in a future patch.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
bb71f7e0a3 common/sfc_efx/base: add match specs class comparison API
From MAE standpoint, a flow rule belongs to some class. Field capabilities
advertised by the FW provide a hint on whether changing a particular match
field value or its mask will affect the class of the rule. A client driver
can make use of the concept of a class by comparing a rule being validated
with already inserted ones so that if an existing rule with the same class
is encountered, it will become possible to skip making an explicit request
to the FW, because the class of an already inserted rule is known to be valid.

Implement an API for client drivers to carry out the said class comparison.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
05e8b05d08 net/sfc: validate match spec in MAE backend
Validate the match specification resulting from pattern
parsing within MAE backend in RTE flow implementation.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
34285fd089 common/sfc_efx/base: add match spec validate API
MAE has restrictions on what kind of mask a particular field can have in
a match specification. Add an API for client drivers to check
specifications.

The patch defines a field description list, whilst the list itself is
left empty. This is to provide a general idea of how field properties
will be used to validate a match specification. Particular fields
will be added to the list by follow-up patches.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
7a25bc9871 net/sfc: add pattern parsing stub to MAE backend
Add pattern parsing stub, define and implement flow cleanup method.
The latter is needed to free any dynamic structures allocated
during flow parsing.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
b75eb50d04 common/sfc_efx/base: add match spec init/fini APIs
An MAE rule is a function of match criteria and a priority. The match
criteria have to be provided using the "mask-value pairs" packing format,
which on its own should not be exposed to client drivers; the latter have
to use a functional interface to generate a match specification.

Define an EFX match specification and implement initialise / finalise APIs.
The "mask-value pairs" buffer itself is not used in this particular patch,
so the corresponding struct member will be added in the follow-up patch.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
01628fc5f6 net/sfc: add concept of MAE (transfer) rules
Define the corresponding specification structure and
make the code identify MAE rules by testing transfer
attribute presence. Also, add a priority level check.
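
For illustration, this is how a transfer rule looks through the generic rte_flow API (a sketch of driver-agnostic application code, not the driver internals):

    #include <rte_flow.h>

    /* A rule carrying the transfer attribute is the kind of rule the MAE
     * backend above recognises; the priority level is checked as well. */
    static int
    validate_transfer_rule(uint16_t port_id,
                           const struct rte_flow_item pattern[],
                           const struct rte_flow_action actions[],
                           struct rte_flow_error *error)
    {
            const struct rte_flow_attr attr = {
                    .priority = 0,  /* MAE accepts a limited priority range */
                    .transfer = 1,  /* selects the MAE (transfer) rule path */
            };

            return rte_flow_validate(port_id, &attr, pattern, actions, error);
    }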

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
d761ec9f33 common/sfc_efx/base: add MAE limit query API
Add an API for client drivers to query the engine limits.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
3af6451292 drivers: init/fini MAE on attach/detach
These actions affect MAE supplementary resources which are
libefx-internal.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
6f956d5cb1 common/sfc_efx/base: add MAE init/fini APIs
The patch adds APIs for client drivers to initialise / finalise
MAE-specific context in NIC control structure. The context
itself will be used by the follow-up patches to store
supportive data for library-internal consumers.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
02b234adde net/sfc: add stub for attaching to MAE
Add a stub for MAE attach / detach path and introduce MAE-specific
context.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Ivan Malov
eb4e80085f common/sfc_efx/base: indicate support for MAE
Riverhead boards support MAE, a low-level Match-Action Engine.
The feature is documented in SF-122526-TC.

The new field will help client drivers test whether a NIC
supports MAE.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-11-03 23:24:24 +01:00
Andrew Rybchenko
84d3fb7d7e common/sfc_efx/base: add MAE definitions to MCDI
MAE stands for Match-Action-Engine and will be used to
support rte_flow API transfer rules.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-11-03 23:24:24 +01:00
Savinay Dharmappa
45758e4218 net/softnic: fix out-of-bound access
This patch fixes an out-of-bounds access of an array.

Coverity issue: 363466, 363459
Fixes: b5dfa6703d ("net/softnic: update subport rate dynamically")
Cc: stable@dpdk.org

Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
2020-11-03 23:24:24 +01:00
Simei Su
40d466fa9f net/ice: support ACL filter in DCF
Add ice_acl_create_filter to create a rule and ice_acl_destroy_filter
to destroy a rule. If a flow is matched by the ACL filter, the filter
rule is programmed to HW. Currently the IPV4/IPV4_UDP/IPV4_TCP/IPV4_SCTP
patterns and the drop action are supported.
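
For illustration, a generic rte_flow sketch of such a rule (driver-agnostic API calls; match specs and masks omitted for brevity):

    #include <rte_flow.h>

    /* IPv4/UDP pattern with a drop action, as listed above. */
    static struct rte_flow *
    create_ipv4_udp_drop(uint16_t port_id, struct rte_flow_error *err)
    {
            const struct rte_flow_attr attr = { .ingress = 1 };
            const struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH },
                    { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                    { .type = RTE_FLOW_ITEM_TYPE_UDP },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            const struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_DROP },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            return rte_flow_create(port_id, &attr, pattern, actions, err);
    }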

Signed-off-by: Simei Su <simei.su@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-11-03 23:24:24 +01:00
Simei Su
bd83b5dc68 net/ice: get PF VSI map
This patch gets the PF VSI number when issuing an ACL rule in DCF.

Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-11-03 23:24:24 +01:00
Leyi Rong
fd0dd9b3cf net/iavf: fix unchecked Tx cleanup error
Coverity complains about the unchecked return value of
iavf_xmit_cleanup, but this cleanup is opportunistic and causes no
problems if it fails. So instead of checking the return value of
iavf_xmit_cleanup and returning on cleanup failure, cast the call
to void to silence Coverity.

Coverity issue: 363045
Fixes: 02d212ca31 ("net/iavf: rename remaining avf strings")
Cc: stable@dpdk.org

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-11-03 23:24:24 +01:00
Junyu Jiang
164c2001cc net/ice: fix SCTP RSS configuration
This patch configures RSS for SCTP with the IP addresses
and ports as the input set.

Fixes: 4717a12cfa ("net/ice: initialize and update RSS based on user config")
Cc: stable@dpdk.org

Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-11-03 23:24:24 +01:00
Simei Su
d647871c50 net/ice/base: fix bitmap set function
This patch corrects the upper limit value in a for loop.

Fixes: dd4a3cef55 ("net/ice/base: introduce and use bitmap set API")
Cc: stable@dpdk.org

Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-11-03 23:24:24 +01:00
Guinan Sun
b0cd767a75 net/iavf: fix adding multicast MAC address
When the multicast address list is updated, the previous
addresses are flushed first and then the new ones are added.
If the number of multicast addresses in the list exceeds
the upper limit, the operation fails and the previous
addresses have to be rolled back. This patch fixes the issue.

Fixes: 05e4c3aff3 ("net/iavf: support multicast configuration")
Cc: stable@dpdk.org

Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Tested-by: Yuan Peng <yuan.peng@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2020-11-03 23:24:13 +01:00
Dekel Peled
d5a7d04c79 net/mlx5: support query of age action
A recent patch [1] added to ethdev the API for querying the age action.
This patch implements the query of the age action in the mlx5 PMD
using this API.

[1] https://mails.dpdk.org/archives/dev/2020-October/184864.html

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 22:29:25 +01:00
Dekel Peled
613d64e412 net/mlx5: log LRO minimal size
Add debug printout showing HCA capability lro_min_mss_size - the
minimal size of TCP segment required for coalescing.
MLX5 PMD documentation is updated to note this condition.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 22:29:25 +01:00
Dekel Peled
90e30c7488 net/mlx5: fix use of atomic cmpset for age state
According to the documentation [1], the return value of
rte_atomic16_cmpset() is non-zero on success and 0 on failure.
In the existing code this function is called and the return value
is compared to AGE_CANDIDATE, which is defined as 1.
Such a comparison is incorrect and can lead to unwanted behavior.

This patch updates the calls to rte_atomic16_cmpset() to check
only whether the return value is zero or non-zero.

[1] https://doc.dpdk.org/api/rte__atomic_8h.html
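
A minimal sketch of the corrected usage, treating the result strictly as a boolean:

    #include <rte_atomic.h>

    /* rte_atomic16_cmpset() returns non-zero on success and 0 on failure,
     * so the result must not be compared against a state constant. */
    static inline int
    try_transition(volatile uint16_t *state, uint16_t from, uint16_t to)
    {
            if (rte_atomic16_cmpset(state, from, to))
                    return 0;   /* transition succeeded */
            return -1;          /* another thread changed the state first */
    }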

Fixes: fa2d01c87d ("net/mlx5: support flow aging")
Cc: stable@dpdk.org

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2020-11-03 22:29:25 +01:00
Dekel Peled
491757372f net/mlx5: enforce limitation on IPv6 next protocol
Due to a PRM requirement, the IPv6 header item 'proto' field, indicating
the next-header protocol, should not be set to an extension header.
This patch adds the relevant validation and documents the limitation.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-11-03 22:29:25 +01:00
Dekel Peled
0e5a0d8f75 net/mlx5: support match on IPv6 fragment extension
rte_flow update, following RFC [1], added to ethdev the rte_flow item
ipv6_frag_ext.
This patch adds to MLX5 PMD the option to match on this item type.

[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-11-03 22:29:25 +01:00
Dekel Peled
ad3d227ead net/mlx5: support match on IPv6 fragment packets
This patch adds to MLX5 PMD the support of matching on IPv6
fragmented and non-fragmented packets, using the new field
has_frag_ext, added to rte_flow following RFC [1].

[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html
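
A small illustrative snippet (generic rte_flow item definition; has_frag_ext is the bit referred to above):

    #include <rte_flow.h>

    /* Match only fragmented IPv6 packets: spec and mask both set the
     * has_frag_ext bit. Clearing the spec bit while keeping the mask
     * matches non-fragmented packets instead. */
    static const struct rte_flow_item_ipv6 ipv6_frag_spec = { .has_frag_ext = 1 };
    static const struct rte_flow_item_ipv6 ipv6_frag_mask = { .has_frag_ext = 1 };

    static const struct rte_flow_item ipv6_frag_item = {
            .type = RTE_FLOW_ITEM_TYPE_IPV6,
            .spec = &ipv6_frag_spec,
            .mask = &ipv6_frag_mask,
    };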

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-11-03 22:29:25 +01:00
Dekel Peled
6859e67ef6 net/mlx5: support match on IPv4 fragment packets
This patch adds to MLX5 PMD the support of matching on IPv4
fragmented and non-fragmented packets, using the IPv4 header
fragment_offset field.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-11-03 22:29:25 +01:00
Dekel Peled
d7e2ea627d net/mlx5: remove handling of ICMP fragmented packets
Commit [1] forced setting of match on 'frag' bit mask 1 and value 0.
Previous patch in this series added support of match on fragmented and
non-fragmented packets on L3 items, so this setting is now redundant.

This patch removes the changes done in [1].

[1] commit 85407db9f60d ("net/mlx5: fix matching for ICMP fragments")

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-11-03 22:29:24 +01:00
Matan Azrad
3ec73abeed net/mlx5/linux: fix Tx queue operations decision
One of the conditions for creating the Tx queue object by DevX is
being sure that the DPDK mlx5 driver is not going to be the E-Switch
manager of the device. The issue is with the default FDB flows managed
by the kernel driver, which are not created by the kernel when the
Tx queues are created by DevX.

The intended decision is to create the Tx queues by Verbs when E-Switch
is enabled, while the current behavior uses the opposite condition and
creates them by DevX.

Create the Tx queues by Verbs when E-Switch is enabled.

Fixes: 86d259cec8 ("net/mlx5: separate Tx queue object creations")

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 22:29:24 +01:00
Matan Azrad
8dc775d8b1 net/mlx5: fix event queue number query
When an Rx/Tx queue is created by DevX, its CQ configuration should
include the EQ number for its interrupts.
The EQ is managed by the kernel, and there is a glue API to query
the EQ number from the kernel.
The EQ query API takes a vector number that specifies the kernel
vector used for interrupt handling.

The vector number was wrongly derived from the configured CPU instead
of using the device attributes of the supported vectors.
The CPU was wrongly detected by the rte_lcore_to_cpu_id API without any
check, and in a non-EAL thread context the value was 0xFFFFFFFF, which
caused a failure in the EQ number query API.

Use vector 0, which must be supported by the kernel, for each EQ
number query.

Fixes: 08d1838f64 ("net/mlx5: implement CQ for Rx using DevX API")
Fixes: d133f4cdb7 ("net/mlx5: create clock queue for packet pacing")
Cc: stable@dpdk.org

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 22:29:24 +01:00
Matan Azrad
9ab9d46ab9 net/mlx5: fix Tx queue release
The HW objects of the Tx queue are created/destroyed in the device
start/stop stage, while the ethdev configuration for the Tx queue
starts from the tx_queue_setup stage.
The PMD should save all the last configurations it got from the ethdev
and apply them to the device in the dev_start operation.

Wrongly, code recently added to mitigate the reference counters did not
take the above rule into account and combined the configurations and HW
objects to be created/destroyed together.

This causes a memory leak and other memory issues.

Make sure the HW object is released in the stop operation when there is
no reference to it, while the configurations stay saved.

Fixes: 17a57183c0 ("net/mlx5: mitigate Tx queue reference counters")

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 22:29:24 +01:00
Matan Azrad
015d2cb628 net/mlx5: fix Rx queue release
The HW objects of the Rx queue are created/destroyed in the device
start/stop stage, while the ethdev configuration for the Rx queue
starts from the rx_queue_setup stage.
The PMD should save all the last configurations it got from the ethdev
and apply them to the device in the dev_start operation.

Wrongly, code recently added to mitigate the reference counters did not
take the above rule into account and combined the configurations and HW
objects to be created/destroyed together.

This causes a memory leak and other memory issues.

Make sure the HW object is released in the stop operation when there is
no reference to it, while the configurations stay saved.

Fixes: 24e4b650ba ("net/mlx5: mitigate Rx queue reference counters")

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-11-03 22:29:24 +01:00
Yi Yang
c0d002aed9 gso: fix mbuf freeing responsibility
rte_gso_segment decreased the refcnt of pkt by one, but this is wrong
if pkt is an external mbuf: pkt will not be freed because of the
incorrect refcnt, and the result is that the application cannot
allocate mbufs from the mempool because the mbufs in the mempool are
exhausted.

The correct approach is for the application to call rte_pktmbuf_free
after calling rte_gso_segment to free pkt explicitly. rte_gso_segment
must not handle it; this is the responsibility of the application.

This commit changes the functional behavior and return value of
rte_gso_segment, so the application must take appropriate action
according to the return value: "ret < 0" means it should free and
drop 'pkt'; "ret == 0" means 'pkt' is not GSOed but can be transmitted
as a normal packet; "ret > 0" means 'pkt' has been GSOed into two or
more segments and "pkts_out" should be used to transmit them.
The application must free 'pkt' after calling rte_gso_segment whenever
the return value is not 0.
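
A minimal sketch of the new calling convention (assuming gso_ctx is already initialised; handling of mbufs not accepted by tx_burst is omitted):

    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_gso.h>
    #include <rte_mbuf.h>

    static uint16_t
    gso_and_send(uint16_t port, uint16_t queue, struct rte_mbuf *pkt,
                 struct rte_gso_ctx *gso_ctx)
    {
            struct rte_mbuf *segs[64];
            int ret;

            ret = rte_gso_segment(pkt, gso_ctx, segs, RTE_DIM(segs));
            if (ret < 0) {          /* error: free and drop the packet */
                    rte_pktmbuf_free(pkt);
                    return 0;
            }
            if (ret == 0)           /* not GSOed: send the packet as-is */
                    return rte_eth_tx_burst(port, queue, &pkt, 1);

            /* GSOed into 'ret' segments: send them, free the original */
            rte_pktmbuf_free(pkt);
            return rte_eth_tx_burst(port, queue, segs, (uint16_t)ret);
    }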

Fixes: 119583797b ("gso: support TCP/IPv4 GSO")
Cc: stable@dpdk.org

Signed-off-by: Yi Yang <yangyi01@inspur.com>
Acked-by: Jiayu Hu <jiayu.hu@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-11-03 22:45:02 +01:00
Pablo de Lara
3bb803b81a crypto/aesni_mb: support Chacha-Poly in synchronous mode
Add support for Chacha20-Poly1305 in the CPU crypto synchronous API.

Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
2020-11-02 09:24:41 +01:00
Didier Pallard
8bd1040a70 crypto/octeontx2: fix out-of-place support
Out-of-place with linear buffers is supported by octeontx2
but was not advertised.

Fixes: 6aa9ceaddf ("crypto/octeontx2: add symmetric capabilities")
Cc: stable@dpdk.org

Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Ankur Dwivedi <adwivedi@marvell.com>
2020-11-02 09:24:41 +01:00
Didier Pallard
16c011472d crypto/octeontx: fix out-of-place support
Out-of-place with linear buffers is supported by octeontx
but was not advertised.

Fixes: 0dc1cffa4d ("crypto/octeontx: add hardware init routine")
Cc: stable@dpdk.org

Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Acked-by: Ankur Dwivedi <adwivedi@marvell.com>
2020-11-02 09:24:41 +01:00
Didier Pallard
a0fe0cc531 common/qat: add missing kmod dependency info
The kmod dependency needed to manage crypto devices is missing
in the QAT crypto PMD.

Fixes: 0880c40113 ("drivers: advertise kmod dependencies in pmdinfo")
Cc: stable@dpdk.org

Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
2020-11-02 09:24:41 +01:00
David Marchand
0ea0bbfebc crypto/dpaa2_sec: remove dead code
RTE_LIBRTE_SECURITY_TEST never existed, and the variable under this
check is never used.

Fixes: e117c18a1d ("crypto/dpaa2_sec: restructure session management")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-11-02 09:24:41 +01:00
Ankur Dwivedi
9fd11c1583 crypto/octeontx2: fix multi-process
During crypto device probe, a few functions should be called only
for the primary process. This patch fixes that.

Fixes: 818d138bcc ("crypto/octeontx2: add init sequence in probe")
Cc: stable@dpdk.org

Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Reviewed-by: Anoob Joseph <anoobj@marvell.com>
2020-11-02 09:24:40 +01:00
Nicolas Chautru
95ddd8ffc4 baseband/acc100: remove useless checks
Coverity reported dead code for a few error
checks which are indeed not reachable.

Coverity issue: 363451, 363454, 363455
Fixes: 5ad5060f8f ("baseband/acc100: add LDPC processing functions")
Fixes: f404dfe35c ("baseband/acc100: support 4G processing")

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
2020-11-02 09:24:40 +01:00
Yunjian Wang
240fb56cdb baseband/turbo_sw: fix memory leak in error path
q_setup() allocates memory for the queue data; we should free it
when an error happens, otherwise it leads to a memory leak.

Fixes: b8cfe2c9ae ("bb/turbo_sw: add software turbo driver")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Nicolas Chautru <nicolas.chautru@intel.com>
2020-11-02 09:24:40 +01:00
Thomas Monjalon
af270529ad ethdev: include mbuf registration in Tx timestamp API
Previously, the Tx timestamp field and flag were registered in testpmd,
as described in the mlx5 guide.
For consistency between Rx and Tx timestamps,
it is simpler to manage the mbuf registrations inside the driver,
as properly documented.

The only driver to support this feature (mlx5) is updated
as well as the testpmd application.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-11-03 16:21:15 +01:00
Thomas Monjalon
d23d73d088 net/pcap: switch Rx timestamp to dynamic mbuf field
The mbuf timestamp is moved to a dynamic field
in order to allow removal of the deprecated static field.
The related mbuf flag is also replaced.
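
The same conversion pattern repeats in the commits below; a minimal sketch of it, assuming the field is registered under the standard timestamp dynfield name with a 64-bit payload:

    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    static int timestamp_dynfield_offset = -1;

    /* Register the timestamp dynamic field once and keep the offset. */
    static int
    register_rx_timestamp_field(void)
    {
            static const struct rte_mbuf_dynfield desc = {
                    .name = RTE_MBUF_DYNFIELD_TIMESTAMP_NAME,
                    .size = sizeof(uint64_t),
                    .align = __alignof__(uint64_t),
            };

            timestamp_dynfield_offset = rte_mbuf_dynfield_register(&desc);
            return (timestamp_dynfield_offset < 0) ? -1 : 0;
    }

    /* Read the timestamp back through the registered offset. */
    static inline uint64_t
    read_rx_timestamp(const struct rte_mbuf *m)
    {
            return *RTE_MBUF_DYNFIELD(m, timestamp_dynfield_offset, uint64_t *);
    }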

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-11-03 16:21:15 +01:00
Thomas Monjalon
af50731085 net/octeontx2: switch Rx timestamp to dynamic mbuf field
The mbuf timestamp is moved to a dynamic field
in order to allow removal of the deprecated static field.
The related mbuf flag is also replaced.

The registration of field and flag is done in both
otx2_nix_dev_start() and otx2_nix_timesync_enable().

The dynamic offset and flag are stored in struct otx2_timesync_info
to favor cache locality.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-11-03 16:21:15 +01:00
Thomas Monjalon
f6800feb3e net/nfb: switch Rx timestamp to dynamic mbuf field
The mbuf timestamp is moved to a dynamic field
in order to allow removal of the deprecated static field.
The related mbuf flag is also replaced.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-11-03 16:21:15 +01:00
Thomas Monjalon
04840ecbcf net/mlx5: switch Rx timestamp to dynamic mbuf field
The mbuf timestamp is moved to a dynamic field
in order to allow removal of the deprecated static field.
The related mbuf flag is also replaced.

The dynamic offset and flag are stored in struct mlx5_rxq_data
to favor cache locality.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-11-03 16:21:15 +01:00
Thomas Monjalon
042540e4ef net/mlx5: fix dynamic mbuf offset lookup check
The functions rte_mbuf_dynfield_lookup() and rte_mbuf_dynflag_lookup()
can return an offset starting with 0 or a negative error code.

In reality the first offsets are probably reserved forever,
but for the sake of strict API compliance,
the checks which considered 0 as an error are fixed.
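
A minimal sketch of the corrected check (the lookup may legitimately return offset 0; only negative values are errors):

    #include <rte_mbuf_dyn.h>

    static int
    lookup_timestamp_offset(void)
    {
            int off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME,
                                               NULL);

            if (off < 0)    /* not registered (or error) */
                    return -1;
            return off;     /* 0 is a valid offset, not a failure */
    }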

Fixes: efa79e68c8 ("net/mlx5: support fine grain dynamic flag")
Fixes: 3172c471b8 ("net/mlx5: prepare Tx queue structures to support timestamp")
Fixes: 0febfcce36 ("net/mlx5: prepare Tx to support scheduling")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-11-03 16:21:15 +01:00
Thomas Monjalon
61c41e2e39 net/dpaa2: switch Rx timestamp to dynamic mbuf field
The mbuf timestamp is moved to a dynamic field
in order to allow removal of the deprecated static field.
The related mbuf flag is also replaced.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-11-03 16:21:15 +01:00
Thomas Monjalon
a926951a60 net/ark: switch Rx timestamp to dynamic mbuf field
The mbuf timestamp is moved to a dynamic field
in order to allow removal of the deprecated static field.
The related dynamic mbuf flag is set, although it was missing previously.

The timestamp is set if configured for at least one device.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
2020-11-03 16:21:15 +01:00
Thomas Monjalon
00acd96292 regex/octeontx2: fix driver name
Following the recent alignment of all driver names,
this new driver got unaligned:
	librte_regex_octeontx2_regex.so

The 'fmt_name' must be "octeontx2_regex", and if not provided,
it is taken from the 'name' variable.
But the variable 'name' should not be overwritten,
so as to keep the automatic value derived from the directory name.

The library name will be composed of the class directory
and the driver directory name:
	librte_regex_octeontx2.so

Reported-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-11-03 11:47:44 +01:00
Timothy McDaniel
41dff30cc3 event/dlb: add timeout ticks entry point
Adds the timeout ticks conversion function.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:02 +01:00
Timothy McDaniel
11a34b5ac8 event/dlb: add queue and port release
These entry points are NO-OPS. DLB does not support
reconfiguring individual queues or ports. The entire device
must be reconfigured.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:02 +01:00
Timothy McDaniel
d1112958f4 event/dlb: add self-tests
Add a variety of self-tests for both ldb and directed
ports/queues, as well as configure, start, stop, link, etc...

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
b287267d62 event/dlb: add token pop API
The PMD uses a public interface to allow applications to
control the token pop mode. Supported token pop modes are
as follows, and they impact core scheduling affinity for
ldb ports.

AUTO_POP: Pop the CQ tokens immediately after dequeueing.
DELAYED_POP: Pop CQ tokens after (dequeue_depth - 1) events
	     are released. Supported on load-balanced ports
	     only.
DEFERRED_POP: Pop the CQ tokens during next dequeue operation.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
f007362194 event/dlb: add eventdev stop and close
Add support for eventdev stop and close entry points.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
26aeabe079 event/dlb: add dequeue and its burst variants
Add support for dequeue, dequeue_burst, ...

DLB does not currently support interrupts, but instead uses
umonitor/umwait if supported by the processor. This allows
the software to monitor and wait on writes to a cache-line.

DLB supports normal and sparse cq mode. In normal mode the
hardware will pack 4 QEs into each cache line. In sparse cq
mode, the hardware will only populate one QE per cache line.
Software must be aware of the cq mode, and take the appropriate
actions, based on the mode.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
4784f1eaa3 event/dlb: add enqueue and its burst variants
Add support for enqueue and its variants.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
eb14a3421a event/dlb: add eventdev start
Add support for the eventdev start entry point.
DLB delays setting up single link resources until
eventdev start, because it is only then that it can
ascertain which ports have just one linked queue.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
8bb077f44e event/dlb: add port unlink and unlinks in progress
Add support for the port unlink(s) eventdev entry points.
The unlink operation is an asynchronous operation executed by
a control thread, and the unlinks-in-progress function reads
a counter shared with the control thread. Port QE and memzone
memory is freed here.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
6a89b28f81 event/dlb: add port link
Add port link entry point. Directed queues are identified and created
at this stage. Their setup was deferred until link time, at which
point we know the directed port ID. Directed queue setup
will only fail if this queue is already setup or there are
no directed queues left to configure.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
ee57517013 event/dlb: add port setup
Configure the load balanced (ldb) or directed (dir) port.
The consumer queue (CQ) and producer port (PP) are also
set up here.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
f7a9172f36 event/dlb: add queue setup
Load balanced (ldb) queues are set up here.
Directed queues are not set up until link time, at which
point we know the directed port ID. Directed queue setup
will only fail if this queue is already setup or there are
no directed queues left to configure.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
e27d16ea4b event/dlb: add queue and port default conf
Add support for getting the queue and port default configuration.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
b94c709dec event/dlb: add infos get and configure
Add support for configuring the DLB hardware.
In particular, this patch configures the DLB
hardware's scheduling domain, such that it is provisioned with
the requested number of ports and queues, provided sufficient
resources are available. Individual queues and ports are
configured later in port setup and eventdev start.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
5993e5eb7d event/dlb: add xstats
Add support for DLB xstats. Perform initialization and add
standard xstats entry points.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
813146cae3 event/dlb: add probe-time hardware init
This commit adds probe-time low level hardware
initialization.  It also adds probe-time init for both
primary and secondary DPDK processes.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
02ce8e8837 event/dlb: add flexible interface
This commit introduces the flexible interface. This
interface allows the core code to operate in PF mode (direct
hardware access) or bifurcated mode (hardware configured via
kernel driver). This driver currently only supports PF mode,
but bifurcated mode will be added in a future patch-set.
Note that the flexible interface is not used for data path
operations, and thus there are no performance concerns
related to the use of function pointers.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
19980083fd event/dlb: add eventdev probe
Add the eventdev portion of probe, and parse command line
options, but do not initialize hardware.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
ba927e5dc1 event/dlb: add inline functions
Add miscellaneous inline functions that may be called
from multiple files.  These functions include inline
assembly of new x86 instructions, such as movdir64b,
since they are not available as builtin functions in
the minimum supported GCC version.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
ed20fd5a86 event/dlb: add definitions shared with LKM or shared code
Add headers containing structs and constants shared between
the PMD and the shared code.  The term shared code refers to
the code that implements the hardware interface. The shared code
is introduced in the probe patch, and then is extended as
additional eventdev PMD entry points are added to the patchset.
In the case of the bifurcated PMD (to be introduced in the
future), the shared code is contained in the Linux kernel
module itself.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
8da624a9e4 event/dlb: add private data structures and constants
Add headers used internally by the PMD.  They include constants,
macros for device resources, structure definitions for hardware interfaces
and software state, and various forward-declarations.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
3789483934 event/dlb: add dynamic logging
This commit adds base support for dynamic logging.
The default log level is NOTICE. Dynamic logging
is used exclusively throughout this patchset.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
218be03459 event/dlb: add documentation and build infrastructure
Note that config/rte_config.h contains several configuration
switches, providing for fine control of the PMD's
runtime behaviour.

The meson infrastructure is expanded as additional files are
added to this patchset.

Adds announcement of availability of the new driver
for Intel Dynamic Load Balancer 1.0 hardware.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 14:46:01 +01:00
Timothy McDaniel
c105e9b3ac event/dlb2: add timeout ticks entry point
Adds the timeout ticks conversion function.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
27328fedb0 event/dlb2: add queue and port release
DLB does not support reconfiguring individual queues
or ports on the fly. The entire device must be reconfigured.
Previously allocated port QE and memzone memory
is freed in this patch.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
6f1b82886e event/dlb2: add self-tests
Add a variety of self-tests for both ldb and directed
ports/queues, as well as configure, start, stop, link, etc...

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
c667583d82 event/dlb2: add token pop API
The PMD uses a public interface to allow applications to
control the token pop mode. Supported token pop modes are
as follows, and they impact core scheduling affinity for
ldb ports.

AUTO_POP: Pop the CQ tokens immediately after dequeueing.
DELAYED_POP: Pop CQ tokens after (dequeue_depth - 1) events
             are released. Supported on load-balanced ports
             only.
DEFERRED_POP: Pop the CQ tokens during next dequeue operation.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
18991548e9 event/dlb2: add eventdev stop and close
Add support for eventdev stop and close entry points.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
a2e4f1f5e7 event/dlb2: add dequeue and its burst variants
Add support for dequeue, dequeue_burst, ...

DLB2 does not currently support interrupts, but instead uses
umonitor/umwait if supported by the processor. This allows
the software to monitor and wait on writes to a cache-line.

DLB2 supports normal and sparse cq mode. In normal mode the
hardware will pack 4 QEs into each cache line. In sparse cq
mode, the hardware will only populate one QE per cache line.
Software must be aware of the cq mode, and take the appropriate
actions, based on the mode.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
f7cc194b0f event/dlb2: add enqueue and its burst variants
Add support for enqueue and its variants.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
59e1a966ea event/dlb2: add eventdev start
Add support for the eventdev start entry point.
We delay initializing some resources until
eventdev start, since the number of linked queues can be
used to determine whether we are dealing with an ldb or dir resource.
If this is a device restart, then the previous configuration
will be reapplied.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
a29248b57b event/dlb2: add port unlink and unlinks in progress
Add support for the port unlink(s) eventdev entry points.
The unlink operation is an asynchronous operation executed by
a control thread, and the unlinks-in-progress function reads
a counter shared with the control thread.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
1acd82c0a4 event/dlb2: add port link
Add port link entry point. Directed queues are identified and created
at this stage. Their setup was deferred until link time, at which
point we know the directed port ID. Directed queue setup
will only fail if this queue is already setup or there are
no directed queues left to configure.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
3a6d0c04e7 event/dlb2: add port setup
Configure the load balanced (ldb) or directed (dir) port.
The consumer queue (CQ) and producer port (PP) are also
set up here.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
7e668e575b event/dlb2: add queue setup
Load balanced (ldb) queues are set up here.
Directed queues are not set up until link time, at which
point we know the directed port ID. Directed queue setup
will only fail if this queue is already setup or there are
no directed queues left to configure.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
99f66f33c1 event/dlb2: add queue and port default conf
Add support for getting the queue and port default configuration.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
f3cad285bb event/dlb2: add infos get and configure
Add support for configuring the DLB2 hardware.
In particular, this patch configures the DLB2
hardware's scheduling domain, such that it is provisioned with
the requested number of ports and queues, provided sufficient
resources are available. Individual queues and ports are
configured later in port setup and eventdev start.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 09:40:22 +01:00
Timothy McDaniel
e88753dcc1 event/dlb2: add xstats
Add support for DLB2 xstats.  Perform initialization and add
standard xstats entry points.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Mike Ximing Chen <mike.ximing.chen@intel.com>
2020-11-02 09:40:12 +01:00
Timothy McDaniel
e7c9971a85 event/dlb2: add probe-time hardware init
This commit adds probe-time low level hardware
initialization.  It also adds probe-time init for both
primary and secondary DPDK processes.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 07:32:20 +01:00
Timothy McDaniel
17f56f6d56 event/dlb2: add flexible interface
This commit introduces the flexible interface. This
interface allows the core code to operate in PF mode (direct
hardware access) or bifurcated mode (hardware configured via
kernel driver). This driver currently only supports PF mode
but bifurcated mode will be added in a future DPDK patch-set.
Note that the flexible interface is not used for data path
operations, and thus there are no performance concerns
related to the use of function pointers.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 07:25:49 +01:00
Timothy McDaniel
5433956d51 event/dlb2: add eventdev probe
Add the eventdev portion of probe, and parse command line
options, but do not initialize hardware.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 07:19:15 +01:00
Timothy McDaniel
7161ea01b1 event/dlb2: add inline functions
Add miscellaneous inline functions that may be called
from multiple files.  These functions include inline
assembly of new x86 instructions, such as movdir64b,
since they are not available as builtin functions in
the minimum supported GCC version.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 07:12:40 +01:00
Timothy McDaniel
355e5ab072 event/dlb2: add definitions shared with LKM or shared code
Add headers containing structs and constants shared between
the PMD and the shared code.  The term shared code refers to
the code that implements the hardware interface. The shared code
is introduced in the probe patch, and then is extended as
additional eventdev PMD entry points are added to the patchset.
In the case of the bifurcated PMD (to be introduced in the
future), the shared code is contained in the Linux kernel
module itself.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 07:06:08 +01:00
Timothy McDaniel
bc62748bd7 event/dlb2: add private data structures and constants
The header file dlb2_priv.h is used internally by the PMD.
It includes constants, macros for device resources,
structure definitions for hardware interfaces and
software state, and various forward-declarations.
The header file rte_pmd_dlb2.h will be exported in a
subsequent patch, but is included here due to a data
structure dependency.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 06:59:37 +01:00
Timothy McDaniel
ef3da1e767 event/dlb2: add dynamic logging
This commit adds base support for dynamic logging.
The default log level is NOTICE. Dynamic logging
is used exclusively throughout this patchset.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 06:53:04 +01:00
Timothy McDaniel
166378a794 event/dlb2: add documentation and build infrastructure
Adds the meson build infrastructure, which includes
compile-time constants in rte_config.h. DLB2 is
only supported on Linux 64 bit X86 platforms at this time.

Adds announcement of availability for the new driver
for Intel Dynamic Load Balancer 2.0 hardware.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
2020-11-02 06:46:12 +01:00
Ophir Munk
c84520053a regex/mlx5: add out-of-order scan capability
Add the missing out-of-order scan capability
RTE_REGEXDEV_CAPA_QUEUE_PAIR_OOS_F to the mlx5 regex PMD.

Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
2020-11-03 02:03:30 +01:00
David Marchand
ca4355e4c7 eventdev: switch sequence number to dynamic mbuf field
The eventdev drivers have been hacking the deprecated field seqn for
internal test usage.
It is moved to a dynamic mbuf field in order to allow removal of seqn.

Signed-off-by: David Marchand <david.marchand@redhat.com>
2020-10-31 22:14:42 +01:00
David Marchand
ea2780632f bus/fslmc: switch sequence number to dynamic mbuf field
The dpaa2 drivers have been hacking the deprecated field seqn for
internal features.
It is moved to a dynamic mbuf field in order to allow removal of seqn.

Signed-off-by: David Marchand <david.marchand@redhat.com>
2020-10-31 22:14:37 +01:00
David Marchand
c9a1c2e588 bus/dpaa: switch sequence number to dynamic mbuf field
The dpaa drivers have been hacking the deprecated field seqn for
internal features.
It is moved to a dynamic mbuf field in order to allow removal of seqn.

Signed-off-by: David Marchand <david.marchand@redhat.com>
2020-10-31 22:14:31 +01:00
David Marchand
fea8ea1883 net/ark: remove use of seqn for debug
The seqn mbuf field is deprecated.
It is currently hacked for debug purposes; it could be changed to a
dynamic field, but I see little value in the debug info it offers.

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ed Czeck <ed.czeck@atomicrules.com>
2020-10-31 22:14:29 +01:00
David Marchand
2601dcffd5 crypto/scheduler: remove unused internal seqn
This field has been left behind after dropping its use.

Fixes: 8a48e03943 ("crypto/scheduler: optimize crypto op ordering")

Signed-off-by: David Marchand <david.marchand@redhat.com>
2020-10-31 22:14:28 +01:00
David Marchand
b5b2d4a51a event/dpaa2: remove dead code from self test
This code has never been used since introduction.

Fixes: 653242c337 ("event/dpaa2: add self test")

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
2020-10-31 22:14:26 +01:00
Thomas Monjalon
4c27fe28ff net/vmxnet3: switch MSS hint to dynamic mbuf field
The segment count, used for MSS guess,
was stored in the deprecated mbuf field udata64.
It is moved to a dynamic field in order to allow removal of udata64.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-31 16:13:11 +01:00
Thomas Monjalon
d743042821 net/bnxt: switch CFA code to dynamic mbuf field
The CFA code from mark was stored in the deprecated mbuf field udata64.
It is moved to a dynamic field in order to allow removal of udata64.

Note: the new field has 32 bits, smaller than udata64.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-31 16:13:11 +01:00
Ed Czeck
1abc7209bb net/ark: switch user data to dynamic mbuf fields
The second field of metadata is reserved for user data
which was using a deprecated mbuf field.
It is moved to dynamic fields in order to allow removal of udata64.

The use of meta data must be enabled with a compile-time flag
RTE_PMD_ARK_{TX,RX}_USERDATA_ENABLE.
User data on Tx and Rx paths can be defined and used separately.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
2020-10-31 16:13:11 +01:00
Thomas Monjalon
70418e322b event/sw: switch test counter to dynamic mbuf field
The test worker_loopback used the deprecated mbuf field udata64.
It is moved to a dynamic field in order to allow removal of udata64.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
2020-10-31 16:13:11 +01:00
Thomas Monjalon
614af75489 security: switch metadata to dynamic mbuf field
The device-specific metadata was stored in the deprecated field udata64.
It is moved to a dynamic mbuf field in order to allow removal of udata64.

The name rte_security_dynfield is not very descriptive
but it should be replaced later by separate fields for each type of data
that drivers pass to the upper layer.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2020-10-31 16:13:11 +01:00
Bruce Richardson
a339694621 raw/ioat: fix work-queue config size
According to the latest DSA spec [1], the work-queue config register size
should be based on a value read from the WQ capabilities register.
Update the driver to read this value and base the start of each WQ config
on it.

[1] https://software.intel.com/content/www/us/en/develop/download/intel-data-streaming-accelerator-preliminary-architecture-specification.html

Fixes: ff06fa2cf3 ("raw/ioat: probe idxd PCI")

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-30 16:01:45 +01:00
Thomas Monjalon
19653eed40 remove config prefix used with make
The config options CONFIG_RTE_* are simple RTE_* defines with meson.
Now that make support is dropped, update the names in logs and comments.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-10-23 19:25:21 +02:00
Ibtisam Tariq
4a91344b5e net/softnic: use POSIX network address conversion
inet_pton4 and inet_pton6 were reimplemented locally. Replace the
implementations of inet_pton4 and inet_pton6 with the libc inet_pton function.
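
For reference, a minimal example of the libc call now used (standard POSIX API):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>

    /* inet_pton() covers both address families that the removed local
     * parsers handled. */
    static int
    parse_ip(const char *text, int family /* AF_INET or AF_INET6 */)
    {
            unsigned char buf[sizeof(struct in6_addr)];

            if (inet_pton(family, text, buf) != 1) {
                    fprintf(stderr, "invalid address: %s\n", text);
                    return -1;
            }
            return 0;
    }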

Bugzilla ID: 365
Fixes: 31ce8d8886 ("net/softnic: add command interface")

Reported-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Ibtisam Tariq <ibtisam.tariq@emumba.com>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-10-23 11:01:25 +02:00
David Marchand
30105f664f drivers: add headers install helper
A lot of drivers export headers, reproduce the same facility than for
libraries.

Note: this change fixes an issue with the crypto scheduler headers which
were not installed properly. A separate backport will be sent to stable
branches.

Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2020-10-22 14:16:22 +02:00
Thomas Monjalon
59440fbabf bus/pci: remove unused scan by address
The function pci_update_device was used to scan a device
for probing by PCI address.
This private function (and its implementations) is unused
since such probing was removed.

Fixes: f3bac43b60 ("bus/pci: remove unused function to probe by address")
Cc: stable@dpdk.org

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: David Marchand <david.marchand@redhat.com>
2020-10-20 13:57:57 +02:00
Stephen Hemminger
39fab5a0b9 net/failsafe: replace references to slave devices
The term slave is only used in some comments and can be
replaced with sub devices, as done elsewhere.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-10-20 13:17:08 +02:00
Stephen Hemminger
d250589d57 net/memif: replace master/slave arguments
Replace master/slave terms in this driver.

The memory interface driver uses a client/server architecture,
so change the variable names and device arguments accordingly.

The previous devargs are maintained for compatibility, but their
use causes a notice in the log.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
2020-10-20 13:17:08 +02:00
Stephen Hemminger
20b6fd653f drivers/bus: reword slave process as secondary
Correct wording is "secondary process".

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
2020-10-20 13:17:08 +02:00
Stephen Hemminger
cb056611a8 eal: rename lcore master and slave
Replace master lcore with main lcore and
replace slave lcore with worker lcore.

Keep the old functions and macros but mark them as deprecated
for this release.

The "--master-lcore" command line option is also deprecated
and any usage will print a warning and use "--main-lcore"
as replacement.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
2020-10-20 13:17:08 +02:00
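
A minimal sketch using the renamed main/worker API introduced by this
change:

    #include <stdio.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>

    static int
    worker_func(void *arg)
    {
            (void)arg;
            printf("hello from worker lcore %u\n", rte_lcore_id());
            return 0;
    }

    static void
    launch_on_workers(void)
    {
            unsigned int lcore_id;

            /* Iterate worker lcores (formerly "slave" lcores). */
            RTE_LCORE_FOREACH_WORKER(lcore_id)
                    rte_eal_remote_launch(worker_func, NULL, lcore_id);

            rte_eal_mp_wait_lcore();
    }
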
Bruce Richardson
a8d0d473a0 build: replace use of old build macros
Use the newer macros defined by meson in all DPDK source code, to ensure
there are no errors when the old non-standard macros are removed.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-19 22:15:44 +02:00
Bruce Richardson
a20b2c01a7 build: standardize component names and defines
As discussed on the dpdk-dev mailing list[1], we can make some easy
improvements in standardizing the naming of the various components in DPDK,
and their associated feature-enabled macros.

Following this patch, each library will have the name in format,
'librte_<name>.so', and the macro indicating that library is enabled in the
build will have the form 'RTE_LIB_<NAME>'.

Similarly, for drivers, the equivalent name formats and macros are:
'librte_<class>_<name>.so' and 'RTE_<CLASS>_<NAME>', where class is the
device type taken from the relevant driver subdirectory name, i.e. 'net',
'crypto' etc.

To avoid too many changes at once for end applications, the old macro names
will still be provided in the build in this release, but will be removed
subsequently.

[1] http://inbox.dpdk.org/dev/ef7c1a87-79ab-e405-4202-39b7ad6b0c71@solarflare.com/t/#u

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Rosen Xu <rosen.xu@intel.com>
2020-10-19 22:15:34 +02:00
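
A small illustration of the new macro scheme; the particular component
names below are only examples:

    /* Library enabled: RTE_LIB_<NAME> */
    #ifdef RTE_LIB_METRICS
    #include <rte_metrics.h>
    #endif

    /* Driver enabled: RTE_<CLASS>_<NAME>, class from the driver subdirectory */
    #ifdef RTE_NET_IXGBE
    #include <rte_pmd_ixgbe.h>
    #endif
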
Bruce Richardson
63b3907833 build: remove library name from version map file name
Since each version map file is contained in the subdirectory of the library
it refers to, there is no need to include the library name in the filename.
This makes things simpler in case of library renaming.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Rosen Xu <rosen.xu@intel.com>
2020-10-19 22:13:59 +02:00
Bruce Richardson
2ca75c65af common/qat: build drivers from common folder
Since the drivers in the common directory can be processed out of order, in
this case following the "bus" directory, we can somewhat simplify the build
of the QAT driver to be done entirely from the "common/qat" folder rather
than having its build distributed across 3 folders.

This also opens up the possibility of building the QAT driver with crypto
only and the compression part disabled. It further allows more sensible
naming of the resulting shared library in case of standardizing library
names based on device class; i.e. common_qat is more descriptive for a
combined crypto/compression driver than either of the other two prefixes
individually.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
2020-10-19 22:12:48 +02:00
Bruce Richardson
b0b672aead build: add defines for compatibility with make build
The defines used to indicate what crypto, compression and eventdev drivers
were being built were different to those used in the make build, with meson
defining them with "_PMD" at the end, while make defined them with "_PMD"
in the middle and the specific driver name at the end. This might cause
compatibility issues for applications which used the older defines when
switching to build against new DPDK releases.

As well as changing the default to match that of make, meson also
special-cases the crypto/compression/event drivers to have both defines
provided. This ensures compatibility for these macros with both meson and
make from older versions.

For a selection of other libraries and drivers, there were other
incompatibilities between the meson and make-defined macros which were not
previously highlighted in a deprecation notice, so we add per-macro
compatibility defines for these to ease the transition from make to meson.

Fixes: 5b9656b157 ("lib: build with meson")
Fixes: 9314afb68a ("drivers: add infrastructure for meson build")
Fixes: dcadbbde8e ("crypto/null: build with meson")
Fixes: 3c32e89f68 ("compress/isal: add skeleton ISA-L compression PMD")
Fixes: eca504f318 ("drivers/event: build skeleton and SW drivers with meson")

Cc: stable@dpdk.org

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Luca Boccassi <bluca@debian.org>
2020-10-19 22:12:28 +02:00
Ciara Power
7566f28ab0 net/virtio: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-10-19 16:45:02 +02:00
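
A sketch of the kind of runtime check added in these drivers, assuming the
EAL helper rte_vect_get_max_simd_bitwidth() introduced in this release:

    #include <stdbool.h>
    #include <rte_vect.h>

    static bool
    can_use_256bit_path(void)
    {
            /* Take the vector path only if the configured SIMD limit allows it. */
            return rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256;
    }
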
Ciara Power
2c5e0dd21f net/mlx5: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-19 16:45:02 +02:00
Ciara Power
a3fd191a41 net/ixgbe: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Haiyue Wang <haiyue.wang@intel.com>
2020-10-19 16:45:02 +02:00
Ciara Power
c3d9ed69ff net/ice: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-10-19 16:45:02 +02:00
Ciara Power
f28fbd1e6b net/iavf: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
2020-10-19 16:45:02 +02:00
Ciara Power
a99d518520 net/fm10k: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-10-19 16:45:02 +02:00
Ciara Power
ac61aa6463 net/enic: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
2020-10-19 16:45:02 +02:00
Ciara Power
086f926647 net/bnxt: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2020-10-19 16:45:02 +02:00
Ciara Power
2b11056d1e net/axgbe: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Amaranath Somalapuram <asomalap@amd.com>
2020-10-19 16:45:02 +02:00
Ciara Power
6e3343f435 net/i40e: check max SIMD bitwidth
When choosing a vector path to take, an extra condition must be
satisfied to ensure the max SIMD bitwidth allows for the CPU enabled
path.

Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
2020-10-19 16:45:02 +02:00
Gagandeep Singh
c6887eca58 crypto/caam_jr: fix device tree parsing for SEC_ERA
Previously, SEC_ERA was hardcoded, and that hardcoded value was removed in [1].
Now that the hardcoded value is gone, SEC_ERA is supposed to be
read from the device tree, but this was not done correctly.
This patch calls the necessary API of_init() before using any
of_* APIs to retrieve information from the device tree, and
converts any integer value read to CPU endianness before using it.

[1] eef9e0412a ("drivers/crypto: fix build with -fno-common")

Fixes: 1d678de329 ("crypto/caam_jr: add basic job ring routines")
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
2020-10-19 16:00:31 +02:00
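
A sketch of the endianness handling described above; read_prop is a
hypothetical stand-in for the value returned by the of_get_property()
lookup:

    #include <stdint.h>
    #include <string.h>
    #include <rte_byteorder.h>

    static uint32_t
    dt_prop_to_u32(const void *read_prop)
    {
            uint32_t be_val;

            /* Device-tree integers are big-endian; convert before use. */
            memcpy(&be_val, read_prop, sizeof(be_val));
            return rte_be_to_cpu_32(be_val);
    }
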
Jun Yang
dc7a7b5cb8 raw/dpaa2_qdma: support enqueue without response wait
In this mode, the user needs to check whether the DMA transfer is completed
by its own logic.

The qDMA FLE pool is not used in this mode since there is no chance to put
an FLE back into the pool without a dequeue response.

The user application is responsible for passing FLE memory to the qDMA driver
via the qdma job descriptor and for maintaining that memory.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
2020-10-19 14:05:56 +02:00
Jun Yang
88c9fed247 raw/dpaa2_qdma: support FLE pool per queue
Don't mix SG and non-SG jobs in the same FLE pool format,
otherwise it impacts non-SG performance.

In order to support SG queues and non-SG queues
with different FLE pool element formats, associate
the FLE pool with the queue instead of the device.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
2020-10-19 14:05:52 +02:00
Jun Yang
83a4b2d7fb raw/dpaa2_qdma: support scatter gather in enqueue
This patch adds scatter-gather (SG) support for different jobs
on qdma queues.
It also supports gathering multiple enqueue jobs into SG enqueue job(s).

Signed-off-by: Jun Yang <jun.yang@nxp.com>
2020-10-19 14:05:49 +02:00
Jun Yang
63f696e4d4 raw/dpaa2_qdma: optimize IOVA conversion
rte_mempool_virt2iova is now used for converting with IOVA off.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
2020-10-19 14:05:44 +02:00
Jun Yang
4f166de658 raw/dpaa2_qdma: refactor the code
This patch moves qdma queue specific configurations from driver
global configuration to per-queue setup. This is required
as each queue can be configured differently.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
2020-10-19 13:57:41 +02:00
Gagandeep Singh
56b284a0e2 raw/dpaa2_qdma: reduce memset in enqueue multi
Performance improvement: do the memset only
for the required memory.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
2020-10-19 13:53:32 +02:00
Gagandeep Singh
1c540f29de raw/dpaa2_qdma: change PMD API to generic rawdev
dpaa2_qdma was partially using direct PMD APIs.
This patch changes that and adapts the driver to use
more of the rawdev APIs.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
2020-10-19 13:50:04 +02:00
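
A sketch of the generic rawdev enqueue call now used instead of
driver-private APIs; the context argument carries driver-specific job
metadata (here an opaque pointer):

    #include <rte_rawdev.h>

    static int
    enqueue_jobs(uint16_t dev_id, struct rte_rawdev_buf **bufs,
                 unsigned int count, void *drv_ctx)
    {
            return rte_rawdev_enqueue_buffers(dev_id, bufs, count, drv_ctx);
    }
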
Kevin Laatz
894080cf8a raw/ioat: fix kvlist free
There is a null pointer check in 'idxd_vdev_parse_params()' which is
causing a coverity issue. This check is redundant as the same check is
being done in 'rte_kvargs_free()', so it is simply removed in this patch.

In addition, kvlist was only being freed on one path in this function.
This is fixed by always freeing kvlist before returning.

Coverity issue: 363049
Fixes: 777edf43ae ("raw/ioat: introduce vdev probe for DSA/idxd device")

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
2020-10-19 10:29:37 +02:00
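
A sketch of the resulting pattern: rte_kvargs_free() already handles NULL,
and kvlist is freed on every return path:

    #include <errno.h>
    #include <rte_kvargs.h>

    static int
    parse_params(const char *args, const char *const valid_keys[])
    {
            struct rte_kvargs *kvlist;
            int ret = 0;

            kvlist = rte_kvargs_parse(args, valid_keys);
            if (kvlist == NULL)
                    return -EINVAL;

            /* ... process the arguments, setting ret on failure ... */

            rte_kvargs_free(kvlist);
            return ret;
    }
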
Kevin Laatz
b6ab5bbd73 raw/ioat: fix dereference before null check
The 'idxd' pointer in 'idxd_rawdev_destroy()' is being dereferenced before
it is checked. To fix this, the null pointer check was moved to occur
earlier in the code.

Coverity issue: 363040
Fixes: ff06fa2cf3 ("raw/ioat: probe idxd PCI")

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
2020-10-19 09:57:16 +02:00
Ferruh Yigit
f30e69b41f ethdev: add device flag to bypass auto-filled queue xstats
Queue stats are stored in 'struct rte_eth_stats' as an array whose size
is defined by the 'RTE_ETHDEV_QUEUE_STAT_CNTRS' compile-time flag.

As a result of a technical board discussion, it was decided to remove the
queue statistics from 'struct rte_eth_stats' in the long term.

Instead PMDs should represent the queue statistics via xstats, this
gives more flexibility on the number of the queues supported.

Currently queue stats in the xstats are filled by ethdev layer, using
some basic stats, when queue stats removed from basic stats the
responsibility to fill the relevant xstats will be pushed to the PMDs.

During the switch period, temporary 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS'
device flag is created. Initially all PMDs using xstats set this flag.
The PMDs that implement queue stats in the xstats should clear the flag.

When all PMDs switch to the xstats for the queue stats, queue stats
related fields from 'struct rte_eth_stats' will be removed, as well as
'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' flag.
Later 'RTE_ETHDEV_QUEUE_STAT_CNTRS' compile time flag also can be
removed.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
2020-10-16 23:27:15 +02:00
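
A sketch of how a PMD handles the transition flag in its init path; the
helper name is illustrative, not from a specific driver:

    #include <stdbool.h>
    #include <rte_ethdev.h>

    static void
    set_queue_xstats_mode(struct rte_eth_dev *eth_dev, bool pmd_fills_queue_xstats)
    {
            if (pmd_fills_queue_xstats)
                    /* The PMD provides per-queue xstats itself. */
                    eth_dev->data->dev_flags &= ~RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
            else
                    /* Keep the ethdev-layer auto-filled queue xstats for now. */
                    eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
    }
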
Ivan Ilchenko
62024eb827 ethdev: change stop operation callback to return int
Change eth_dev_stop_t return value from void to int.
Make eth_dev_stop_t implementations across all drivers return
negative errno values in case of error conditions.

Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-16 22:26:41 +02:00
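
On the application side the stop result can now be checked; a minimal
sketch:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static int
    stop_port(uint16_t port_id)
    {
            int ret = rte_eth_dev_stop(port_id);

            if (ret != 0)
                    printf("failed to stop port %u: %d\n", port_id, ret);
            return ret;
    }
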
Ivan Ilchenko
97742c7b0a net/failsafe: check stop call status
rte_eth_dev_stop() return value was changed from void to int,
so this patch modifies the usage of this function across net/failsafe
according to the new return type.

Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 22:26:41 +02:00
Ivan Ilchenko
fb0379bc5d net/bonding: check stop call status
rte_eth_dev_stop() return value was changed from void to int,
so this patch modifies the usage of this function across net/bonding
according to the new return type.

Signed-off-by: Ivan Ilchenko <ivan.ilchenko@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 22:26:41 +02:00
Thomas Monjalon
8a5a0aad5d ethdev: allow close function to return an error
The API function rte_eth_dev_close() was returning void.
The return type is changed to int for notifying of errors.

If an error happens during a close operation,
the status of the port is undefined,
with as many resources as possible having been freed.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Liron Himi <lironh@marvell.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-16 22:26:41 +02:00
Thomas Monjalon
0607dadf98 ethdev: reset all when releasing a port
The function rte_eth_dev_release_port() is partially resetting
the struct rte_eth_dev. The drivers were completing this reset
with more pointers set to NULL in the close or remove operations.

More pointers are reset at ethdev level,
and some redundant assignments are removed from PMDs.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2020-10-16 22:26:41 +02:00
Thomas Monjalon
b8f5d2ae75 ethdev: remove forcing stopped state upon close
When closing a port, it is supposed to be already stopped,
and marked as such with "dev_started" state zeroed by the stop API.

Resetting "dev_started" before calling the driver close operation
was hiding the case of a not properly stopped port being closed.
The flag "dev_started" is not changed anymore in "rte_eth_dev_close()".

In case the "dev_stop" function is called from "dev_close",
bypassing "rte_eth_dev_stop()" API,
the "dev_started" state must be explicitly reset in the PMD
in order to keep the same behaviour.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 22:26:41 +02:00
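
A sketch of the resulting PMD pattern; the helper names are illustrative,
not taken from a specific driver:

    #include <rte_ethdev.h>

    static int
    example_dev_stop(struct rte_eth_dev *dev)
    {
            /* Keep the behaviour of rte_eth_dev_stop(): mark the port stopped. */
            dev->data->dev_started = 0;
            /* ... stop queues, disable interrupts ... */
            return 0;
    }

    static int
    example_dev_close(struct rte_eth_dev *dev)
    {
            /* dev_close calls dev_stop directly, bypassing rte_eth_dev_stop(),
             * so dev_started must be reset by the PMD itself. */
            int ret = example_dev_stop(dev);

            /* ... release the remaining resources ... */
            return ret;
    }
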
Qi Zhang
691ad36245 common/iavf: support VF with more than 16 queues
Currently there are limitations in the virtchnl.h interface that only
allow a maximum of 16 queues to be used by a VF driver. Add support in
virtchnl.h to allow a VF driver to request >16 queues. Also, the RSS
qregion size is currently assumed to be the max number of queues a VF
can request and/or is given on initialization. With larger VFs this
assumption can no longer be made, so add a new op to support querying
the max RSS qregion size.

In order to request more queues than the initially given queues the VF
driver needs to use the VIRTCHNL_OP_REQUEST_QUEUES opcode.

If the VF is given more than 16 queues, it should use the new
VIRTCHNL_OP_GET_MAX_RSS_QREGION opcode to determine its max qregion size. This
is needed to correctly configure the RSS LUT and/or configure filters
based on queue base/offset and queue region size.

If the VF is configuring more than 16 queues, it should use the following
opcodes to configure the queues and interrupts after successfully requesting
them.

VIRTCHNL_OP_MAP_QUEUE_VECTOR
VIRTCHNL_OP_ENABLE_QUEUES_V2
VIRTCHNL_OP_DISABLE_QUEUES_V2

Also, add support in virtchnl_vc_validate_vf_msg() to validate the above
messages. As a part of this, move the virtchnl_vector_limits enumeration
directly above the function where it is used.

The patch also updates the base code release version in the readme.

Signed-off-by: Ting Xu <ting.xu@intel.com>
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2020-10-16 19:48:19 +02:00
Qi Zhang
9c43dd22c4 common/iavf: replace macro for MAC address length
Replace ETH_ALEN with VIRTCHNL_ETH_LENGTH_OF_ADDRESS to keep consistent.

Signed-off-by: Maciej Rabeda <maciej.rabeda@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2020-10-16 19:48:19 +02:00
Marvin Liu
8410c369b4 net/virtio: fix indirect desc length
When transmitting indirect descriptors, the first desc will store the net_hdr
and the following descs will be mapped to mbuf segments. The total desc number
will be seg_num plus one. The variable 'needed' means the number of
used descs in the packed ring; this value will always be two for an indirect
desc. Now use the mbuf segment number to calculate the correct desc length.

Fixes: b473061b0e ("net/virtio: fix indirect descriptors in packed datapaths")
Cc: stable@dpdk.org

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2020-10-16 19:48:19 +02:00
Kalesh AP
c4ae7adafa net/bnxt: fix UDP tunnel port removal
The HWRM supports only one global destination port for a tunnel type.

When the port is stopped, the driver deletes the UDP tunnel port configured
in the HW, but it does not update the counter, which causes the
tunnel port addition to fail after the port is started again.

Fixed to update the counter when tunnel port is deleted.

Fixes: 10d074b202 ("net/bnxt: support tunneling")
Cc: stable@dpdk.org

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
2020-10-16 19:48:19 +02:00
Chengwen Feng
f0c243a6cb net/hns3: support SVE Tx
This patch adds SVE vector instructions to optimize Tx burst process.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
2020-10-16 19:48:19 +02:00
Wei Hu (Xavier)
952ebacce4 net/hns3: support SVE Rx
This patch adds SVE vector instructions to optimize Rx burst process.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
2020-10-16 19:48:19 +02:00
John Daley
b51a6d07fa net/enic: check in error path
Coverity issue: 363046
Fixes: bb66d562ae ("net/enic: share flow actions with same signature")

Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2020-10-16 19:48:18 +02:00
Jiawei Wang
00c10c2211 net/mlx5: update translate function for mirroring
Translate the attributes of the sample action, which include the sample ratio
and the sub-actions list.
The PMD checks the number of destination actions in the current flow;
if multiple destination actions are found, it creates a new destination
array rdma action that groups the actions for each destination.
Currently only port or queue is supported as a destination action, and only
an encap action can be attached to a port destination.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Jiawei Wang
50390aab11 net/mlx5: update flow mirroring validation
A mirroring flow uses the sample action with a ratio of 1, and it doesn't
support a jump action in the same flow.

For mirroring, the sample action must have destination actions such as port
or queue, and it doesn't need the split function used for sampling flows.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Jiawei Wang
4d23dd35f2 common/mlx5: add glue function for mirroring
The new DR destination array action is supported since
rdma-core version v32.

The destination array action is used to group DR actions into a single action,
and it can be used to mirror a packet and forward it to every
destination (port or queue) in the array.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Jiawei Wang
0756228b27 net/mlx5: update translate function for sample action
Translate the attributes of the sample action, which include the sample ratio
and the sub-actions list, then create the sample DR action.

The metadata register value will be lost in the default path after the
sampler in FDB due to a CX5 HW limitation.

Since the source vport is also shared with metadata register c0, the MLX5
PMD sets the source vport in rdma-core, and rdma-core will
restore the regc0 value after the sampler.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Jiawei Wang
b4c0ddbfcc net/mlx5: split sample flow into two sub-flows
The flow with sample action will be split into two sub flows:
the prefix sub flow with the all actions preceding the sample
action and sample action itself, and the suffix sub flow with
the actions following the sample action.

The original items remain in the prefix sub flow, add the
implicit tag action with unique id to set in metadata register,
and suffix sub flow uses the tag item to match with that unique id.

The flow split as below:

Original flow: items / actions pre / sample / actions sfx ->
    prefix sub flow -
            items / actions pre / set_tag action / sample
    suffix sub flow -
            tag_item / actions sfx

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Jiawei Wang
96b1f0273c net/mlx5: validate sample action
Add sample action validate function.

Sample Flow is supported in NIC-RX and FDB domains. For the NIC-RX
the Sample Flow action list must include the destination queue action.

Only NIC-RX domain supports the optional actions list. FDB doesn't
support any optional actions; the sampled packets are always forwarded
to the E-Switch manager port.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Jiawei Wang
8cc34c0801 common/mlx5: query sampler object capability via DevX
Update function mlx5_devx_cmd_query_hca_attr() to add the NIC Flow
Table attributes query, then get the log_max_flow_sampler_num from
flow table properties.

Add the related structs definition in mlx5_prm.h.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Jiawei Wang
a3def85479 common/mlx5: add glue for sample action
The new DR sample action is supported since OFED version
5.1.2 or rdma-core version v32.

MLX5 PMD adds the rdma-core command in glue to create this action.

Sample action is used for creating the sample object to implement
the sampling/mirroring function.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2020-10-16 19:48:18 +02:00
Rasesh Mody
9bb607af36 net/bnx2x: add QLogic vendor id for BCM57840
Add QLogic vendor id support for BCM57840 device ids.

Fixes: 9fb557035d ("bnx2x: enable PMD build")
Cc: stable@dpdk.org

Reported-by: Souvik Dey <sodey@rbbn.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
2020-10-16 19:48:18 +02:00
Simei Su
fbd8947cdf net/ice: support drop action for DCF switch
This patch adds drop action in DCF switch filter.

Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
333fd5d42b net/sfc: support Rx interrupts for EF100
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Igor Romanov
e285f30d98 net/sfc: forward function control window offset to datapath
Store function control window offset to correctly set the offset
of prime EvQ in EF100.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
1aacc3d388 net/sfc: support user mark and flag Rx for EF100
Flow rules may be used to mark packets. Support delivery of mark/flag
values to the user in mbuf fields.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
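
On the application side, the mark/flag values arrive in standard mbuf
fields; a minimal sketch of reading them:

    #include <stdio.h>
    #include <rte_mbuf.h>

    static void
    handle_mark(const struct rte_mbuf *m)
    {
            if (m->ol_flags & PKT_RX_FDIR_ID)
                    /* MARK action: the 32-bit mark value. */
                    printf("mark: %u\n", m->hash.fdir.hi);
            else if (m->ol_flags & PKT_RX_FDIR)
                    /* FLAG action: only the flag is set. */
                    printf("flagged by a flow rule\n");
    }
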
Andrew Rybchenko
6ce88e5043 net/sfc: support per-queue Rx RSS hash offload for EF100
Riverhead allows choosing the Rx prefix (which contains the RSS hash value
and a valid flag) per queue.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
c6845644cc net/sfc: support per-queue Rx prefix for EF100
Riverhead FW supports choosing the Rx prefix based on the fields required
in the Rx prefix. The feature is generalized in libefx to provide the Rx
prefix layout for other NICs and firmware variants as well. Now the driver
can get the prefix layout after Rx queue start and use the layout details
to check its expectations or simply use them at run time.

Rx prefix choice and query interface is defined in SF-119689-TC
EF100 host interface.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
eb043628b1 net/sfc: map Rx offload RSS hash to corresponding RxQ flag
If RSS hash offload is requested, Rx queue should be configured
to request RSS hash information delivery.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
840478c2f8 common/sfc_efx/base: provide helper to check Rx prefix
A new function allows checking whether the Rx prefix layout in use matches
the available Rx prefix layout. The length check is out of scope of the
function; the caller should ensure that the length is either checked or
that a different length, with everything required in place, is handled
properly.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
f784cdc5cb common/sfc_efx/base: provide control to deliver RSS hash
When an Rx queue is created, allow specifying whether the driver would like
to get the RSS hash value calculated by the hardware.

Use the flag to choose Rx prefix on Riverhead.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
f21c6e7f81 common/sfc_efx/base: simplify requesting Rx prefix fields
Introduce an extra variable with required Rx prefix fields mask
to make it easier to request more fields.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
9e6e7f479a net/sfc: support Rx checksum offload for EF100
Also support Rx packet type offload.

Checksumming is actually always enabled. Report it as a per-queue offload
to give applications maximum flexibility.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
942a636499 net/sfc: support Tx VLAN insertion offload for EF100
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Ivan Malov
77cb007124 net/sfc: support tunnel TSO for EF100 native Tx
Handle VXLAN and Geneve TSO on EF100 native Tx datapath.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Ivan Malov
4f936666d7 net/sfc: support TSO for EF100 native datapath
Riverhead boards support TSO version 3.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
f71965f9df net/sfc: support tunnels for EF100 native Tx
Add support for outer IPv4/UDP and inner IPv4/UDP/TCP checksum offloads.
Use partial checksum offload for inner TCP/UDP offload.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Ivan Malov
38109b5b08 net/sfc: add header segments check for EF100 Tx
The EF100 native Tx datapath demands that the packet header be contiguous
when partial checksum offloads are used, since a helper function is
used to calculate the pseudo-header checksum (and the function requires
a contiguous header).

Add an explicit check for this assumption and restructure the code
to avoid the TSO header linearisation check, since TSO header
linearisation is not done on the EF100 native Tx datapath.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
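
The check boils down to the headers fitting in the first segment; a small
sketch under that assumption:

    #include <stdbool.h>
    #include <stddef.h>
    #include <rte_mbuf.h>

    static bool
    header_is_contiguous(const struct rte_mbuf *m, size_t hdr_len)
    {
            /* Partial checksum offload requires the headers in the first segment
             * so the pseudo-header checksum helper can read them directly. */
            return rte_pktmbuf_data_len(m) >= hdr_len;
    }
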
Andrew Rybchenko
e30f1081c2 net/sfc: support IPv4 header checksum offload for EF100 Tx
Use outer layer 3 full checksum offload which does not require any
assistance from driver.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
a8e0c002f2 net/sfc: support TCP and UDP checksum offloads for EF100
Use outer layer 4 full checksum offload which does not require any
assistance from driver.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
94d31cd115 net/sfc: support multi-segment Tx for EF100
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
0cb551b690 net/sfc: implement EF100 native Tx
No offload support yet, including multi-segment (Tx gather).

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
554644e364 net/sfc: implement EF100 native Rx
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
Andrew Rybchenko
24715bb93a net/sfc: support datapath logs which may be compiled out
Add a datapath log level which limits the logs included in the build, since
on the datapath it is too expensive to dive into the rte_log() function
even if it does nothing.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00
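
A sketch of a compile-time filtered datapath log macro; the macro and level
names are illustrative, not the driver's actual ones:

    #include <rte_log.h>

    /* Messages less severe than this threshold are removed by the compiler. */
    #define EXAMPLE_DP_LOG_LEVEL RTE_LOG_NOTICE

    #define EXAMPLE_DP_LOG(level, ...)                                    \
            do {                                                          \
                    if (RTE_LOG_ ## level <= EXAMPLE_DP_LOG_LEVEL)        \
                            rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD,   \
                                    __VA_ARGS__);                         \
            } while (0)
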
Andrew Rybchenko
2d98a5a6c2 net/sfc: log DMA allocations addresses
The information about DMA allocations is very useful for debugging.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
2020-10-16 19:48:18 +02:00