Commit Graph

27365 Commits

Author SHA1 Message Date
Thomas Monjalon
7638726df6 bus/pci: rename probe/remove operation types
The names of the prototypes pci_probe_t and pci_remove_t
are missing the rte_ prefix.
These function types are simply renamed.

No compatibility break is expected for applications,
because these are considered internal names of the driver interface.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
2021-04-06 14:52:55 +02:00
Thomas Monjalon
4d509afa7b pci: rename catch-all ID
The name of the constant PCI_ANY_ID was missing the RTE_ prefix.
It is renamed, and the old name becomes a deprecated alias.

While renaming, the duplicate definitions in rte_bus_pci.h
are removed to keep only those in rte_pci.h.
Note: rte_pci.h is included in rte_bus_pci.h

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
2021-04-06 14:52:49 +02:00
Anatoly Burakov
190f38773a power: do not skip saving original P-state governor
Currently, when we set the pstate governor to "performance", we check
whether it is already set to this value, and if it is, we skip setting it.

However, we never save the original value anywhere, so when we later
request the governor to be restored to its original value, the saved
value is empty.

Fix it by saving the original pstate governor first. While we're at it,
replace `strlcpy` with `rte_strscpy`.

Fixes: e6c6dc0f96 ("power: add p-state driver compatibility")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
2021-04-06 10:36:49 +02:00
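A minimal sketch of the save-then-set flow described in the commit above; the
sysfs path and helper name are assumptions, not the power library's code:

#include <stdio.h>
#include <string.h>

/* Illustrative only: save the current governor before forcing "performance". */
static int
save_and_set_governor(unsigned int lcore, char *saved, size_t len)
{
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path),
                "/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor", lcore);

        f = fopen(path, "r+");
        if (f == NULL)
                return -1;

        /* Save the original governor first, even if it is already "performance". */
        if (fgets(saved, (int)len, f) == NULL) {
                fclose(f);
                return -1;
        }
        saved[strcspn(saved, "\n")] = '\0';

        if (strcmp(saved, "performance") != 0) {
                rewind(f);
                fputs("performance", f);
        }
        fclose(f);
        return 0;
}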
Anatoly Burakov
8a5febaac4 power: fix P-state base frequency handling
The previous fix for base frequency handling in pstate mode introduced a
couple of issues:

- When the base_frequency file does not exist, the code simply bails out
  because of what appears to be an accidental addition of
  FOPEN_OR_ERR_RET. This is incorrect, as the absence of this file is not
  fatal and is in fact expected on kernel versions earlier than 5.3
- When the base_frequency file does exist, it gets opened but never gets
  closed, resulting in a resource leak

Both issues also manifest as Coverity defects (dead code and a resource
leak), so this fix addresses both.

Coverity issue: 369693, 369694
Bugzilla ID: 668
Fixes: 4db9587bbf ("power: check sysfs base frequency")

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
2021-04-06 10:36:42 +02:00
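A minimal sketch of the corrected handling described above; the sysfs path and
helper name are assumptions, not the library's actual code:

#include <stdio.h>
#include <stdint.h>

/* Illustrative only: absence of base_frequency is not fatal (the file only
 * exists on kernel >= 5.3); when it does exist, it must be closed after use. */
static int
read_base_frequency(unsigned int lcore, uint32_t *base_freq)
{
        char path[256];
        FILE *f;
        unsigned int freq = 0;

        snprintf(path, sizeof(path),
                "/sys/devices/system/cpu/cpu%u/cpufreq/base_frequency", lcore);

        f = fopen(path, "r");
        if (f == NULL) {
                /* Not an error: older kernels do not expose this file. */
                *base_freq = 0;
                return 0;
        }
        if (fscanf(f, "%u", &freq) != 1)
                freq = 0;
        fclose(f);      /* avoid the resource leak described above */
        *base_freq = freq;
        return 0;
}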
David Marchand
aa9cb78f66 doc: fix sphinx rtd theme import in GHA
If the rtd theme is available, passing it by name is enough to select
it. Sphinx itself recognises the "sphinx_rtd_theme" name as a special
case and tries to find its path automatically.

On the other hand, passing a html_theme_path makes sphinx parse all
themes available in this path, which in some environments (like GHA) is
/usr/share, and makes sphinx error on the first zipfile it finds (in
GHA, some Azure CLI thingy) that has no sphinx theme in it.

Fixes: 46562be650 ("doc: import sphinx rtd theme when available")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Aaron Conole <aconole@redhat.com>
2021-04-02 01:39:34 +02:00
Matan Azrad
846ec2ea75 vdpa/mlx5: fix virtq cleaning
The HW virtq object can be destroyed either when the device is closed or
when the state of the virtq becomes disabled.

Some parameters of the virtq should continue to be managed when the
virtq state is changed, but all of them must be initialized when the
device is closed.

The enable parameter wrongly stayed set when the device was closed,
which might cause creation of an invalid virtq the next time a device
is assigned to the driver.

Clean all the virtqs' memory when the device is closed.

Fixes: c47d6e8333 ("vdpa/mlx5: support queue update")
Cc: stable@dpdk.org

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-03-31 10:37:10 +02:00
David Marchand
6564ddcd0c net/virtio: remove duplicated port ID from virtio-user
The private virtio_user_dev structure embeds a virtio_hw which itself
contains the ethdev port_id.
Make use of it and remove the duplicate port_id field.

Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-03-31 10:30:17 +02:00
Ibtisam Tariq
dd0946f975 examples/vhost_crypto: remove unused short option
The short option "s" was passed to the getopt_long function, while
there was no case handling this option.

Fixes: f5188211c7 ("examples/vhost_crypto: add sample application")
Cc: stable@dpdk.org

Signed-off-by: Ibtisam Tariq <ibtisam.tariq@emumba.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-03-31 10:02:18 +02:00
Marvin Liu
af584d21bf vhost: fix batch dequeue potential buffer overflow
Similar to the single dequeue path, multiple accesses of the descriptor
length lead to a potential risk. A one-time access of the descriptor
length eliminates this risk.

Fixes: 75ed516978 ("vhost: add packed ring batch dequeue")
Cc: stable@dpdk.org

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-03-31 09:34:17 +02:00
Marvin Liu
93ed2f49de vhost: fix packed ring potential buffer overflow
Similar to the split ring, multiple accesses of the descriptor length
lead to a potential risk. A one-time access of the descriptor length
eliminates this risk.

Fixes: 2f3225a7d6 ("vhost: add vector filling support for packed ring")
Cc: stable@dpdk.org

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-03-31 09:34:17 +02:00
Marvin Liu
134228ca39 vhost: fix split ring potential buffer overflow
In the vhost datapath, a descriptor's length is mostly used in two
coherent operations: the first step is address translation, the second
is the memory transaction from guest to host. But the interval between
the two steps gives a malicious guest a window in which it can change
the descriptor length after vhost has calculated the buffer size, which
may lead to a buffer overflow on the vhost side. This potential risk can
be eliminated by accessing the descriptor length only once.

Fixes: 1be4ebb1c4 ("vhost: support indirect descriptor in mergeable Rx")
Cc: stable@dpdk.org

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-03-31 09:34:17 +02:00
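A minimal sketch of the one-time length access described in the three commits
above; the struct and helper names are illustrative, not the vhost library's
code:

#include <stdint.h>
#include <string.h>

/* Illustrative shared descriptor type. */
struct demo_desc { uint64_t addr; uint32_t len; };

/* Read desc->len exactly once and use the local copy for both the size
 * check and the copy, so a guest changing the shared descriptor in between
 * cannot cause a host-side buffer overflow. */
static int
copy_desc_once(void *dst, size_t dst_size,
               const volatile struct demo_desc *desc, const void *guest_va)
{
        uint32_t len = desc->len;       /* single read of the shared field */

        if (len > dst_size)
                return -1;              /* validated against the same value we copy */

        memcpy(dst, guest_va, len);
        return 0;
}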
Chenbo Xia
1739f81425 examples/vhost: check memory table query
This patch fixes an unchecked return value for rte_vhost_get_mem_table(),
which was reported by Coverity.

Coverity issue: 364233
Fixes: ca059fa5e2 ("examples/vhost: demonstrate the new generic APIs")
Cc: stable@dpdk.org

Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-03-31 08:46:32 +02:00
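A small usage sketch of the check described above; the wrapper name and error
handling are illustrative:

#include <stdio.h>
#include <rte_vhost.h>

/* Illustrative only: check the return value instead of ignoring it. */
static int
get_mem_table_checked(int vid, struct rte_vhost_memory **mem)
{
        if (rte_vhost_get_mem_table(vid, mem) != 0) {
                fprintf(stderr, "failed to get mem table for vid %d\n", vid);
                return -1;
        }
        return 0;
}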
Xiao Wang
629d75653b vdpa/ifc: check PCI config read
The return value of rte_pci_read_config should be checked.

Coverity issue: 302860
Fixes: a3f8150eac ("net/ifcvf: add ifcvf vDPA driver")
Cc: stable@dpdk.org

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
2021-03-31 08:39:14 +02:00
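A sketch of the check described above, assuming rte_pci_read_config() returns
the number of bytes read on success and a negative value on error; the helper
name is illustrative:

#include <stdint.h>
#include <sys/types.h>
#include <rte_bus_pci.h>

/* Illustrative only: treat a short or negative read as an error. */
static int
read_pci_word_checked(struct rte_pci_device *dev, off_t offset, uint16_t *val)
{
        int ret;

        ret = rte_pci_read_config(dev, val, sizeof(*val), offset);
        if (ret != (int)sizeof(*val))
                return -1;      /* negative value or short read */
        return 0;
}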
Keiichi Watanabe
510f43fc5e examples/vhost_blk: check features before inflight API
Avoid calling rte_vhost_get_vhost_ring_inflight() and
rte_vhost_get_vring_base_from_inflight() when
VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD is not set.

Signed-off-by: Keiichi Watanabe <keiichiw@chromium.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-03-31 08:20:59 +02:00
Keiichi Watanabe
790b1c3171 vhost: get negotiated protocol features
Add rte_vhost_get_negotiated_protocol_features, which returns a set of
enabled protocol features.

Signed-off-by: Keiichi Watanabe <keiichiw@chromium.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
2021-03-31 08:15:14 +02:00
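A usage sketch combining the new API above with the protocol-feature check
from the previous commit; the wrapper name is illustrative and error handling
is minimal:

#include <stdint.h>
#include <rte_vhost.h>

/* Illustrative only: call the inflight variant only when the
 * INFLIGHT_SHMFD protocol feature has been negotiated. */
static int
get_vring_base_safe(int vid, uint16_t queue_id, uint16_t *last_avail,
                    uint16_t *last_used)
{
        uint64_t protocol_features = 0;

        if (rte_vhost_get_negotiated_protocol_features(vid, &protocol_features) != 0)
                return -1;

        if (protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD))
                return rte_vhost_get_vring_base_from_inflight(vid, queue_id,
                                last_avail, last_used);

        return rte_vhost_get_vring_base(vid, queue_id, last_avail, last_used);
}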
Maxime Coquelin
af4844503e vhost: optimize virtqueue structure
This patch moves vhost_virtqueue struct fields in order
to both optimize packing and move hot fields onto the first
cachelines.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Tested-by: Balazs Nemeth <bnemeth@redhat.com>
2021-03-31 07:48:32 +02:00
Maxime Coquelin
1818a63147 vhost: move dirty logging cache out of virtqueue
This patch moves the per-virtqueue dirty logging cache
out of the virtqueue struct, by allocating it dynamically
only when live migration is enabled.

It saves 8 cachelines in vhost_virtqueue struct.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Tested-by: Balazs Nemeth <bnemeth@redhat.com>
2021-03-31 07:48:32 +02:00
Maxime Coquelin
2453bbf7e1 vhost: remove unused virtqueue field
This patch removes the "backend" field of the
vhost_virtqueue struct, which is not used by the
library.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Tested-by: Balazs Nemeth <bnemeth@redhat.com>
2021-03-31 07:48:32 +02:00
Maxime Coquelin
97bd53721b net/virtio: pack virtqueue structure
This patch optimizes packing of the virtqueue
struct by moving fields around to fill holes.

The offset field is not used and so can be removed.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Tested-by: Balazs Nemeth <bnemeth@redhat.com>
2021-03-31 07:31:50 +02:00
Maxime Coquelin
b59d4d5502 net/virtio: allocate fake mbuf in Rx queue
While it is worth clarifying whether the fake mbuf
in the virtnet_rx struct is really necessary, it is certain
that it heavily impacts cache usage by being part of
the struct. Indeed, it uses two cachelines, and
requires alignment on a cacheline.

Before this series, it took 120 bytes in the
virtnet_rx struct:

struct virtnet_rx {
 struct virtqueue *vq; /*0 8*/

 /* XXX 56 bytes hole, try to pack */

 /* --- cacheline 1 boundary (64 bytes) --- */
 struct rte_mbuf fake_mbuf __attribute__((__aligned__(64))); /*64 128*/
 /* --- cacheline 3 boundary (192 bytes) --- */

This patch allocates it using malloc in order to optimize
virtnet_rx cache usage and so virtqueue cache usage.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Tested-by: Balazs Nemeth <bnemeth@redhat.com>
2021-03-31 07:31:41 +02:00
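A minimal sketch of the separate allocation described above, assuming
rte_zmalloc_socket(); the function and field names are illustrative, not the
driver's code:

#include <rte_malloc.h>
#include <rte_mbuf.h>

/* Illustrative only: allocate the fake mbuf separately instead of embedding
 * it (and its cacheline alignment requirement) in the hot virtnet_rx struct. */
static struct rte_mbuf *
alloc_fake_mbuf(int socket_id)
{
        return rte_zmalloc_socket("fake_mbuf", sizeof(struct rte_mbuf),
                                  RTE_CACHE_LINE_SIZE, socket_id);
}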
Maxime Coquelin
76fd789cc7 net/virtio: improve queue init error path
This patch improves the error path of virtio_init_queue()
by cleaning up, in reverse order, all resources that have
been allocated.

Suggested-by: Chenbo Xia <chenbo.xia@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Tested-by: Balazs Nemeth <bnemeth@redhat.com>
2021-03-31 07:31:30 +02:00
Maxime Coquelin
3169550f03 net/virtio: remove reference to virtqueue in vrings
Vrings are part of the virtqueue, so we don't need
to have a pointer to it in the vring descriptions.

Instead, let's just subtract the vring's offset from its
address to calculate the virtqueue address.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Tested-by: Balazs Nemeth <bnemeth@redhat.com>
2021-03-31 07:31:14 +02:00
Balazs Nemeth
a781540b00 net/qede: remove unnecessary field in Rx entry and simplify
The page_offset member is always zero. Having it in qede_rx_entry
makes the struct larger than it needs to be, which has cache performance
implications, so remove that field. In addition, since qede_rx_entry then
only holds an rte_mbuf pointer, remove the qede_rx_entry definition
altogether.

Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
2021-03-27 15:00:57 +01:00
Balazs Nemeth
5984000d1b net/qede: prefetch next packet to free
While handling the current mbuf, pull the next mbuf into the cache. Note
that the last four mbufs pulled into the cache are not handled, but that
doesn't matter.

Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
2021-03-27 15:00:48 +01:00
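A sketch of the general loop shape described above, prefetching one entry
ahead; names and prefetch distance are assumptions, not the driver's code:

#include <rte_mbuf.h>
#include <rte_prefetch.h>

/* Illustrative only: pull the next mbuf into the cache while the current
 * one is being handled. */
static void
free_ring_with_prefetch(struct rte_mbuf **ring, uint16_t nb)
{
        uint16_t i;

        for (i = 0; i < nb; i++) {
                if (i + 1 < nb)
                        rte_prefetch0(ring[i + 1]);
                rte_pktmbuf_free(ring[i]);
        }
}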
Balazs Nemeth
3f1f0bad15 net/qede: prefetch hardware consumer
Ensure that, while ecore_chain_get_cons_idx is running, txq->hw_cons_ptr
is prefetched. This shows a slight performance improvement.

Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
2021-03-27 15:00:45 +01:00
Balazs Nemeth
4996b959cd net/qede: free packets in bulk
rte_pktmbuf_free_bulk calls rte_mempool_put_bulk with the number of
pending packets to return to the mempool. In contrast, rte_pktmbuf_free
calls rte_mempool_put, which calls rte_mempool_put_bulk with one object.
An important performance-related downside of adding one packet at a time
to the mempool is that on each call the per-core cache pointer needs to
be read from TLS, while a single rte_mempool_put_bulk reads from TLS
only once.

Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
2021-03-27 15:00:34 +01:00
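A small sketch contrasting the two approaches described above; the helper
names are illustrative:

#include <rte_mbuf.h>

/* Illustrative only: the per-packet loop touches the per-core mempool cache
 * (via TLS) on every iteration, the bulk call only once. */
static void
free_completed_slow(struct rte_mbuf **mbufs, unsigned int count)
{
        unsigned int i;

        for (i = 0; i < count; i++)
                rte_pktmbuf_free(mbufs[i]);     /* one mempool put per packet */
}

static void
free_completed_fast(struct rte_mbuf **mbufs, unsigned int count)
{
        rte_pktmbuf_free_bulk(mbufs, count);    /* single bulk put */
}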
Balazs Nemeth
303e78f2bc net/qede: assume mbuf to free is never null
The ring txq->sw_tx_ring is managed with txq->sw_tx_cons. As long as
txq->sw_tx_cons is correct, there is no need to check if
txq->sw_tx_ring[idx] is null explicitly.

Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
2021-03-27 15:00:30 +01:00
Balazs Nemeth
2c41740bf1 net/qede: get consumer index once
Calling ecore_chain_get_cons_idx repeatedly is slower than calling it
once and using the result for the remainder of qede_process_tx_compl.

Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
2021-03-27 15:00:27 +01:00
Balazs Nemeth
6a11a1eac0 net/qede: remove flags from Tx entry
Each sw_tx_ring entry was of type struct qede_tx_entry:

struct qede_tx_entry {
       struct rte_mbuf *mbuf;
       uint8_t flags;
};

Leaving the unused flags member here has a few performance implications.
First, each qede_tx_entry takes up more memory, which has caching
implications as fewer entries fit in a cache line while multiple entries
are frequently handled in batches. Second, an array of qede_tx_entry
entries is incompatible with existing APIs that expect an array of
rte_mbuf pointers. Consequently, an extra array needs to be allocated
before calling such APIs and each entry needs to be copied over.

This patch omits the flags field and replaces each qede_tx_entry
with a plain rte_mbuf pointer.

Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
2021-03-27 15:00:11 +01:00
Alexander Kozyrev
5cc6764267 net/mlx5: reject tunnel ID modification
Modification of the 802.1Q Tag Identifier, VXLAN Network
Identifier or GENEVE Network Identifier is not supported.
Reject attempts to modify these fields via the MODIFY_FIELD
action and document this mlx5 driver limitation.

Fixes: 641dbe4fb0 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:21:30 +02:00
Alexander Kozyrev
5e26a99695 net/mlx5: allow modify field action on group 0
There is a limitation on copying one header field to another for
flow group 0: such a copy action is not allowed there. But setting
a header field to an immediate value is perfectly fine.
Allow the MODIFY_FIELD action on group 0 in case the source field
is an immediate value or a pointer to one.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:21:27 +02:00
Alexander Kozyrev
0588d64ffd net/mlx5: check extended metadata for mark modification
The MODIFY_FIELD action requires extended metadata support
in order to manipulate the MARK register. Check whether it is
supported and reject the MODIFY_FIELD action if it is not.

Fixes: 641dbe4fb0 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:21:25 +02:00
Alexander Kozyrev
144127ba56 net/mlx5: adjust modify field action endianness
Masks that are used to modify a packet field must be in big-endian
format. Convert them to BE to ensure proper modification.

Fixes: 641dbe4fb0 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:21:23 +02:00
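A one-line sketch of the conversion described above, assuming a 32-bit mask;
the helper name is illustrative:

#include <stdint.h>
#include <rte_byteorder.h>

/* Illustrative only: a field mask built in host byte order must be converted
 * to big-endian before being programmed into the modify-field action. */
static rte_be32_t
mask_to_be(uint32_t host_order_mask)
{
        return rte_cpu_to_be_32(host_order_mask);
}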
Alexander Kozyrev
8660e202b3 net/mlx5: check field size in modify field action
Add a validation check to make sure that the specified width
for the MODIFY_FIELD RTE action is not bigger than the field size.

Fixes: 641dbe4fb0 ("net/mlx5: support modify field flow action")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:21:21 +02:00
Xueming Li
91766fae2b net/mlx5: probe host PF representor with sub-function
To simplify probing of the BlueField HPF representor (vf[-1]), this
patch allows probing it with the "sf" syntax: "sf[-1]".

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:39 +02:00
Xueming Li
7ed15acdcd net/mlx5: improve xstats of bonding port
In the case of a kernel bonding device, the counters were read from the
first bonding PF member only.

This patch reads all member PFs and sums the values to get the bond
xstats.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:37 +02:00
Xueming Li
1ce7d26d1a net/mlx5: fix setting VF default MAC through representor
With kernel bonding, there was an error when setting the VF MAC address
through the representor. The Netlink API requires the ifindex of the
owner PF, not the bonding device ifindex.

Use the owner PF ifindex to modify the VF default MAC in the case of a
bonding device.

Fixes: c21e5facf7 ("net/mlx5: use bond index for netdev operations")
Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:35 +02:00
Xueming Li
f5f4c48237 net/mlx5: save bonding member ports information
Since the kernel bonding netdev doesn't provide statistics counters that
reflect all member ports, the PMD has to manually sum the counters from
each member port.

As a preparation, this patch collects bonding member port information
and saves it to the shared context data.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:33 +02:00
Xueming Li
08c2772fc7 net/mlx5: support list of representor PF
To probe representors from different kernel bonding PFs, one had to
specify 2 separate devargs like this:
    -a 03:00.0,representor=pf0vf[0-3] -a 03:00.0,representor=pf1vf[0-3]

This patch supports a range or list in the PF section of devargs, so the
shorter equivalent of the above is:
    -a 03:00.0,representor=pf[0-1]vf[0-3]

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:30 +02:00
Xueming Li
f926cce3fa net/mlx5: refactor bonding representor probing
To probe a representor on the 2nd PF of a kernel bonding device, one had
to specify the PF1 BDF in the devarg:
  <PF1_BDF>,representor=0
When closing the bonding device, all representors had to be closed
together, which implies that all representors have to use the primary PF
of the bonding device. So after probing a representor port on the 2nd PF,
when locating the newly probed device using the device argument, the
filter used the 2nd PF as the PCI address and failed to locate the new
device.

The conflict comes from how the current representor devargs are used:
 - PCI BDF is used to specify the representor owner PF.
 - PCI BDF is used to locate the probed representor device.
 - The PMD uses the primary PCI BDF as the PCI device.

To resolve these conflicts, a new representor syntax is introduced here:
  <primary BDF>,representor=pfXvfY
All representors must use the primary PF as the owner PCI device; the PMD
internally locates the owner PCI address by checking the "pfX" part of the
representor. To EAL, all representors are registered to the primary PCI
device and the 2nd PF is hidden, thus all lookups remain consistent.

Same as for VF representors, the HPF (host PF on BlueField) uses the same
syntax to probe, for example: representor=pf1vf[0-3,-1]

This patch also adds the PF index into the kernel bonding representor
port name:
	<BDF>_<ib_name>_representor_pf<X>vf<Y>

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:28 +02:00
Xueming Li
9b03958aeb net/mlx5: revert setting bonding representor to first PF
With kernel bonding, representors on the second PF are now probed with
the devargs:
	<primary_bdf>,representor=pf1vf<N>
There is no need to save the primary PF port ID and look it up when
probing sibling ports, so revert patch [1]

[1]:
commit e6818853c0 ("net/mlx5: set representor to first PF in bonding mode")

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:27 +02:00
Xueming Li
cb95feefdd net/mlx5: support sub-function representor
This patch adds support for the SF representor. Similar to the VF
representor, the switch port name of an SF representor in the
phys_port_name sysfs key is "pf<x>sf<y>".

The device representor argument is "representors=sf[list]"; a list
member can be either an instance or a range. Example:
  representors=sf[0,2,4,8-12,-1]

To probe both VF representors and SF representors, they need to be
separated into 2 devices:
  -a <BDF>,representor=vf[list] -a <BDF>,representor=sf[list]

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:25 +02:00
Xueming Li
59df97f1a8 common/mlx5: support sub-function representor parsing
This patch supports representor name parsing for SF.
In sysfs, the representor name is stored under the "phys_port_name" key;
similar to the VF representor, the switch port name of an SF representor
is "pf<x>sf<y>".

For netlink messages, the net SF type is supported.

Examples:

pf0sf1
pf0sf[0-3]

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:23 +02:00
Yunjian Wang
ed9726ce83 net/mlx5: fix using flow tunnel before null check
Coverity flags that the 'ctx->tunnel' variable is used before
it is checked for NULL. This patch fixes the issue.

Coverity issue: 366201
Fixes: 868d2e342c ("net/mlx5: fix tunnel offload hub multi-thread protection")
Cc: stable@dpdk.org

Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-03-31 09:16:17 +02:00
Qi Zhang
b83d270dff net/ice: refine RSS configure
ICE_RSS_ANY_HEADERS tries to enable outer RSS for the
non-tunnel case and inner RSS for the tunnel case. This confuses
users.

As we already have ICE_RSS_INNER_HEADER for the tunnel case,
replace ICE_RSS_ANY_HEADERS with ICE_RSS_OUTER_HEADERS
for all existing flows which only specify the outer pattern.

To enable inner RSS for any tunnel case, a separate rule
should be created.

The patch also removes some unnecessary condition checks for GTPU
in the base code, as outer RSS for GTPU is already supported.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Xuan Ding <xuan.ding@intel.com>
2021-03-30 07:15:01 +02:00
Murphy Yang
8b628c22b3 net/ixgbe: fix RSS RETA being reset after port start
If one calls 'rte_eth_dev_rss_reta_update' with ixgbe before starting
the device (but after setting everything else), then the RSS RETA
configuration will be zero after starting the device.

This patch gives a notification if the port is not started.

Bugzilla ID: 664
Fixes: 249358424e ("ixgbe: RSS RETA configuration")
Cc: stable@dpdk.org

Signed-off-by: Murphy Yang <murphyx.yang@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2021-03-30 01:21:17 +02:00
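A usage sketch of the ordering implied by the commit above, assuming
reta_conf and reta_size are prepared by the caller: with ixgbe the RETA
update sticks when performed after the port has been started.

#include <rte_ethdev.h>

/* Illustrative only: start the port first, then program the RETA. */
static int
start_then_update_reta(uint16_t port_id,
                       struct rte_eth_rss_reta_entry64 *reta_conf,
                       uint16_t reta_size)
{
        int ret;

        ret = rte_eth_dev_start(port_id);
        if (ret != 0)
                return ret;

        return rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);
}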
Haiyue Wang
8052638442 net/ice: remove redundant function
The function 'ice_is_profile_rule' is defined as 'ice_is_prof_rule' in
the base code, which has exactly the same function body.

So remove 'ice_is_profile_rule' and use 'ice_is_prof_rule' instead.

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-03-30 01:21:17 +02:00
Qi Zhang
6d05fc905d net/iavf: fix TSO max segment size
According to the Intel® AVF spec
(https://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ethernet-adaptive-virtual-function-hardware-spec.pdf)
section 2.2.2.3:
The max segment size (MSS) of TSO should not be set lower than 88.

Fixes: a2b29a7733 ("net/avf: enable basic Rx Tx")
Cc: stable@dpdk.org

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
2021-03-30 01:21:17 +02:00
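A minimal sketch of the check implied by the commit above; the constant and
helper names are hypothetical, only the 88-byte floor comes from the quoted
spec section:

#include <stdint.h>
#include <rte_mbuf.h>

#define DEMO_IAVF_TSO_MIN_MSS 88        /* hypothetical name for the spec floor */

/* Illustrative only: reject TSO requests whose MSS is below the floor. */
static int
check_tso_mss(const struct rte_mbuf *m)
{
        if ((m->ol_flags & PKT_TX_TCP_SEG) &&
            m->tso_segsz < DEMO_IAVF_TSO_MIN_MSS)
                return -1;
        return 0;
}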
Junfeng Guo
df4a6d9aa7 net/iavf: support GTPU inner IPv6 for flow director
Support GTPU_(EH)_IPV6 inner L3 and L4 fields matching for AVF FDIR.

+------------------------------+---------------------------------+
|           Pattern            |            Input Set            |
+------------------------------+---------------------------------+
| eth/ipv4/gtpu/ipv6           | inner: src/dst ip               |
| eth/ipv4/gtpu/ipv6/udp       | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/ipv6/tcp       | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh/ipv6        | inner: src/dst ip               |
| eth/ipv4/gtpu/eh/ipv6/udp    | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh/ipv6/tcp    | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh(0)/ipv6     | inner: src/dst ip               |
| eth/ipv4/gtpu/eh(0)/ipv6/udp | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh(0)/ipv6/tcp | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh(1)/ipv6     | inner: src/dst ip               |
| eth/ipv4/gtpu/eh(1)/ipv6/udp | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh(1)/ipv6/tcp | inner: src/dst ip, src/dst port |
+------------------------------+---------------------------------+

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-03-30 01:21:17 +02:00
Junfeng Guo
f91c7b2da1 net/iavf: support GTPU inner IPv4 for flow director
Support GTPU_(EH)_IPV4 inner L3 and L4 fields matching for AVF FDIR.

+------------------------------+---------------------------------+
|           Pattern            |            Input Set            |
+------------------------------+---------------------------------+
| eth/ipv4/gtpu/ipv4           | inner: src/dst ip               |
| eth/ipv4/gtpu/ipv4/udp       | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/ipv4/tcp       | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh/ipv4        | inner: src/dst ip               |
| eth/ipv4/gtpu/eh/ipv4/udp    | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh/ipv4/tcp    | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh(0)/ipv4     | inner: src/dst ip               |
| eth/ipv4/gtpu/eh(0)/ipv4/udp | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh(0)/ipv4/tcp | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh(1)/ipv4     | inner: src/dst ip               |
| eth/ipv4/gtpu/eh(1)/ipv4/udp | inner: src/dst ip, src/dst port |
| eth/ipv4/gtpu/eh(1)/ipv4/tcp | inner: src/dst ip, src/dst port |
+------------------------------+---------------------------------+

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-03-30 01:21:17 +02:00