1589 Commits

John Daley
db79f2d5c9 net/enic: support GTP header flow matching
The GTP, GTP-U and GTP-C header fields can be matched; however, the NIC
does not support GTP tunneling, so no items after the GTP header can be
specified. If a GTP-U or GTP-C item is specified without a preceding UDP
item, the UDP destination port is implicitly matched. For GTP, the
destination UDP port must be specified, but its value is not enforced.
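
A minimal sketch of the corresponding rte_flow usage (generic API only,
not enic-specific code; the helper name, TEID value and queue index are
made up for illustration):

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Sketch: match GTP-U traffic by TEID. The preceding UDP item is
 * omitted here, so the UDP destination port is matched implicitly. */
static struct rte_flow *
match_gtpu_teid(uint16_t port_id, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_gtp gtp_spec = { .teid = rte_cpu_to_be_32(1234) };
    struct rte_flow_item_gtp gtp_mask = { .teid = RTE_BE32(0xffffffff) };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_GTPU,
          .spec = &gtp_spec, .mask = &gtp_mask },
        /* No items after GTP: the NIC does not support GTP tunneling. */
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, err);
}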

Signed-off-by: John Daley <johndale@cisco.com>
Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com>
2021-11-04 12:34:46 +01:00
Junfeng Guo
25be39cc17 net/ice: enable protocol agnostic flow offloading in FDIR
This patch enables protocol-agnostic flow offloading in Flow Director,
based on the Parser Library and using the existing rte_flow raw API.

Note that the raw flow requires:
1. a byte string of the raw target packet bits.
2. a byte string of the mask of the target packet.

Here is an example:
FDIR matching IPv4 destination address 1.2.3.4 and redirecting to queue 3:

flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end

Note that the mask of some key bits (e.g., 0x0800 to indicate the IPv4
protocol) is optional in our cases. To avoid redundancy, we simply omit
the mask of 0x0800 (with 0xFFFF) in the mask byte string example. The
'0x' prefix for the spec and mask byte (hex) strings is also omitted here.

Also update the ice feature list with rte_flow item raw.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-11-03 13:01:31 +01:00
Ivan Malov
46c6714ffd net/sfc: support port representor related flow actions
Add support for actions PORT_REPRESENTOR and REPRESENTED_PORT.

The former should be used instead of the ambiguous PORT_ID action.

The latter sends traffic to the entity represented by
the given ethdev (network port or VF).
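
As a rough sketch of how an application might request the new actions
(generic rte_flow usage, not sfc-specific; the ethdev port ID 1 is
arbitrary):

#include <rte_flow.h>

/* Sketch: deliver matching traffic to the application via ethdev port 1
 * (PORT_REPRESENTOR), instead of using the ambiguous PORT_ID action.
 * REPRESENTED_PORT takes the same struct rte_flow_action_ethdev and
 * instead sends traffic to the entity represented by that ethdev. */
static const struct rte_flow_action_ethdev to_ethdev = { .port_id = 1 };
static const struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR, .conf = &to_ethdev },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};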

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-11-02 19:26:13 +01:00
Ivan Malov
0fb3e8a910 net/sfc: support represented port flow item
Add support for item REPRESENTED_PORT to match on traffic entering
the embedded switch from the entity represented by the given
ethdev (network port or VF).

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-11-02 19:26:13 +01:00
Li Zhang
160f0d11bb doc: add metering limitation in mlx5 guide
A meter policy with an RSS/queue action is not supported
when dv_xmeta_en is enabled.

When dv_xmeta_en is enabled, legacy flow creation splits
a flow into two flows
(one set_tag-with-jump flow and one RSS/queue action flow).
Since a meter policy acts as a termination table,
the flow cannot be split, so this combination
cannot be supported when dv_xmeta_en is enabled.

Fixes: 51ec04dc7bcf ("net/mlx5: connect meter policy to created flows")
Cc: stable@dpdk.org

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-31 14:27:24 +01:00
Jiawen Wu
d0759b5098 net/ngbe: support Tx done cleanup
Add support for API rte_eth_tx_done_cleanup().
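
A minimal usage sketch of the generic API (the helper, port/queue IDs
and free count are arbitrary):

#include <rte_ethdev.h>

/* Sketch: free up to 64 already-transmitted mbufs on port 0, Tx queue 0.
 * Returns the number of freed mbufs or a negative errno. */
static inline int
reclaim_tx_mbufs(void)
{
    return rte_eth_tx_done_cleanup(0 /* port */, 0 /* queue */, 64);
}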

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
b7aad633b3 net/ngbe: support Rx and Tx descriptor status
Support getting the number of used Rx descriptors,
and checking the status of Rx and Tx descriptors.
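
These map to the generic ethdev calls; a minimal sketch (the helper and
the port/queue/offset values are arbitrary):

#include <rte_ethdev.h>

/* Sketch: query ring usage and per-descriptor status on port 0, queue 0. */
static void
inspect_rings(void)
{
    int used = rte_eth_rx_queue_count(0, 0); /* number of used Rx descriptors */
    int rx_st = rte_eth_rx_descriptor_status(0, 0, 16); /* AVAIL/DONE/UNAVAIL */
    int tx_st = rte_eth_tx_descriptor_status(0, 0, 16); /* FULL/DONE/UNAVAIL */

    (void)used; (void)rx_st; (void)tx_st;
}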

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
24cd85f7e5 net/ngbe: support timesync
Add support for IEEE 1588/802.1AS timestamping, and IEEE 1588 timestamp
offload on Tx.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
71aec12796 net/ngbe: support register dump
Support dumping device registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
9459ea29d1 net/ngbe: support EEPROM dump
Support getting and setting device EEPROM data.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
4db3db296a net/ngbe: support device LED on/off
Support turning the device LED on and off.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
f40e9f0e22 net/ngbe: support flow control
Support getting and setting flow control.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
60229dcfc4 net/ngbe: support SR-IOV
Initialize and configure the PF module to support SR-IOV.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
0779d7f619 net/ngbe: support RSS hash
Support RSS hashing on Rx, and configuration of RSS hash computation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
dee93977a6 net/ngbe: support MAC filters
Add MAC addresses to filter incoming packets, support setting multicast
addresses for filtering, and support setting the unicast table array.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
506abd4a8b net/ngbe: support FW version query
Add firmware version get operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
b83372a030 net/ngbe: support device promiscuous and allmulticast mode
Support enabling/disabling promiscuous and allmulticast modes for a port.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
07baabb6a5 net/ngbe: support MTU set
Support updating port MTU.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
8b433d04ad net/ngbe: support device xstats
Add device extended stats, retrieved by reading hardware registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
fdb1e85197 net/ngbe: support basic statistics
Support reading and clearing basic statistics.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
59b46438fd net/ngbe: support VLAN offload and VLAN filter
Support setting VLAN and QinQ offloads, and filtering on a VLAN tag
identifier.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
586e602837 net/ngbe: support jumbo frame
Add support for Rx jumbo frames.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
64b36e4af1 net/ngbe: support CRC offload
Support stripping or keeping the CRC in the Rx path.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
d148a87e69 net/ngbe: support Rx/Tx burst mode info
Support getting Rx/Tx burst mode info.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
9f32061402 net/ngbe: support TSO
Add transmit datapath with offloads, and support TCP segmentation
offload.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
ffc959f5b3 net/ngbe: support Rx checksum offload
Support IP/L4 checksum offload on Rx, and convert the result to mbuf flags.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
79f3128d4d net/ngbe: support scattered Rx
Add scattered Rx function to support receiving segmented mbufs.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Jiawen Wu
f6aef1dacf net/ngbe: support packet type query
Add packet type macro definitions and convert ptype to ptid.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
2021-10-30 00:53:19 +02:00
Maxime Coquelin
0c9d662070 net/virtio: support RSS
Provide the capability to update the hash key, hash types
and RETA table on the fly (without needing to stop/start
the device). However, the key length and the number of RETA
entries are fixed to 40B and 128 entries respectively. This
is done in order to simplify the design, but may be
revisited later as the Virtio spec provides this
flexibility.

Note that only VIRTIO_NET_F_RSS support is implemented;
VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
packet RSS hash calculated by the device into mbuf.rss, is
not yet supported.

Regarding the default RSS configuration, the default Intel
ixgbe key is used as the default key, and the default RETA
is a simple modulo between the hash and the number of Rx
queues.
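
For reference, a sketch of driving this through the generic ethdev API
(the helper, hash types and queue spreading are placeholders chosen for
illustration; nb_rxq must be non-zero):

#include <stdint.h>
#include <rte_ethdev.h>

/* Sketch: update the 40-byte RSS key and a 128-entry RETA on a port. */
static int
update_rss(uint16_t port_id, uint8_t key[40], uint16_t nb_rxq)
{
    struct rte_eth_rss_conf rss_conf = {
        .rss_key = key,
        .rss_key_len = 40,
        .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP,
    };
    struct rte_eth_rss_reta_entry64 reta[128 / RTE_ETH_RETA_GROUP_SIZE];
    unsigned int i;
    int ret;

    ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);
    if (ret != 0)
        return ret;
    for (i = 0; i < 128; i++) {
        reta[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
        reta[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
            i % nb_rxq; /* simple modulo spreading, as in the default RETA */
    }
    return rte_eth_dev_rss_reta_update(port_id, reta, 128);
}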

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-29 11:23:10 +02:00
Ajit Khaparde
9446d7fcd9 doc: remove obsolete option from bnxt guide
The host-based-truflow devarg is no longer used to enable the host-based
flow table management functionality (TruFlow). Instead, this feature is
now driven by a capability indicated by the firmware.

TruFlow is no longer in tech preview. Update the doc accordingly.

Fixes: da3731e2ea00 ("net/bnxt: check FW capability to support TRUFLOW")

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-10-28 19:58:54 +02:00
Ajit Khaparde
0e7bdac71d doc: update NIC feature matrix for bnxt
Support for runtime Rx/Tx queue setup and inner RSS was not reflected in
the feature matrix. Update the feature matrix for the bnxt PMD.

Fixes: 7ed45b1a7c0f ("net/bnxt: support RSS hash selection")
Fixes: 0105ea1296c9 ("net/bnxt: support runtime queue setup")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-10-28 19:58:54 +02:00
Radu Nicolau
6bc987ecb8 net/iavf: support IPsec inline crypto
Add support for inline crypto for IPsec, for ESP transport and
tunnel over IPv4 and IPv6, as well as the offload of ESP over UDP,
and in conjunction with TSO for UDP and TCP flows.
Implement support for rte_security packet metadata.

Add definitions for IPsec descriptors, and extend offload support
in the data and context descriptors accordingly.

Add support to the virtual channel mailbox for IPsec crypto request
operations. IPsec crypto requests receive an initial acknowledgment
from the physical function driver confirming receipt of the request,
and then an asynchronous response with the success/failure of the
request, including any response data.

Add enhanced descriptor debugging.

Refactor the scalar Tx burst function to support integration of the
offload.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
2021-10-29 04:22:04 +02:00
Rongwei Liu
7299ab6822 net/mlx5: support socket direct mode bonding
In socket direct mode, it is possible to bind any two (maybe four
in the future) PCIe devices with IDs like xxxx:xx:xx.x and
yyyy:yy:yy.y. Bonding member interfaces no longer need to have
the same PCIe domain/bus/device ID.

The kernel driver uses "system_image_guid" to identify whether
devices can be bound together or not. Sysfs "phys_switch_id" is used
to get the "system_image_guid" of each network interface.

OFED 5.4+ is required to support "phys_switch_id".

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
2021-10-26 13:24:20 +02:00
Olivier Matz
daa02b5cdd mbuf: add namespace to offload flags
Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
name. The old flags remain usable, but a deprecation warning is issued
at compilation.
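
For illustration, a minimal sketch using one of the new names (the
helper is hypothetical; RTE_MBUF_F_RX_RSS_HASH replaces the old
PKT_RX_RSS_HASH, which still compiles with a deprecation warning):

#include <rte_mbuf.h>

/* Sketch: test an Rx offload flag using the RTE_MBUF_F_ prefix. */
static inline int
has_rss_hash(const struct rte_mbuf *m)
{
    return (m->ol_flags & RTE_MBUF_F_RX_RSS_HASH) != 0;
}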

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-24 13:37:43 +02:00
Ferruh Yigit
295968d174 ethdev: add namespace
Add the 'RTE_ETH' namespace to all enums & macros in a backward-compatible
way. The macros for backward compatibility can be removed in the next LTS.
Also update some struct names to have the 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.
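
A small illustration of the renaming (the particular macros below are
just examples; the old spellings remain available as compatibility
macros):

#include <rte_ethdev.h>

/* Sketch: port configuration written with the new RTE_ETH_ names. */
static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = RTE_ETH_MQ_RX_RSS,               /* was ETH_MQ_RX_RSS */
        .offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM,  /* was DEV_RX_OFFLOAD_IPV4_CKSUM */
    },
    .rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP,  /* was ETH_RSS_IP */
};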

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
2021-10-22 18:15:38 +02:00
Ferruh Yigit
1aca4fdb00 doc: remove jumbo offload feature
Jumbo offload is no longer announced as a capability, and the
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag is removed.

This patch also removes the 'Jumbo frame' feature from the documentation.

Fixes: b563c1421282 ("ethdev: remove jumbo offload flag")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-22 17:26:07 +02:00
Chengchang Tang
2fc3e696a7 net/hns3: add runtime config for mailbox limit time
Currently, the maximum waiting time for an MBX response is 500 ms, but
in some scenarios it is not enough, since it depends on the response
of the kernel-mode driver, whose response time is related to the
scheduling of the system. In this special scenario, most of the
cores are isolated, and only a few cores are used for system
scheduling. When a large number of services are started, system
scheduling becomes very busy, the reply to the mbx message times out,
and PMD initialization fails.

This patch adds a runtime config to set the maximum wait time. For the
above scenario, users can adjust the waiting time to a suitable value
themselves.

Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
2021-10-22 04:11:43 +02:00
Xueming Li
dd22740cc2 ethdev: introduce shared Rx queue
In the current DPDK framework, each Rx queue is pre-loaded with mbufs
to store incoming packets. For some PMDs, when the number of
representors scales out in a switch domain, the memory consumption
becomes significant. Polling all ports also leads to high cache miss
rates, high latency and low throughput.

This patch introduces the shared Rx queue. Ports in the same Rx domain
and switch domain can share an Rx queue set by specifying a non-zero
share group in the Rx queue configuration.

A shared Rx queue is identified by the share_rxq field of the Rx queue
configuration. Port A RxQ X can share an RxQ with port B RxQ Y by using
the same shared Rx queue ID.

No special API is defined to receive packets from a shared Rx queue.
Polling any member port of a shared Rx queue receives packets of that
queue for all member ports; the port_id is identified by mbuf->port.
The PMD is responsible for resolving the shared Rx queue from device
and queue data.

A shared Rx queue must be polled in the same thread or core; polling
the queue ID of any member port is essentially the same.

Multiple share groups are supported. A PMD should support mixed
configuration by allowing multiple share groups and non-shared Rx
queues on one port.

Example grouping and polling model to reflect service priority:
 Group1, 2 shared Rx queues per port: PF, rep0, rep1
 Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127
 Core0: poll PF queue0
 Core1: poll PF queue1
 Core2: poll rep2 queue0

PMDs advertise the shared Rx queue capability via RTE_ETH_DEV_CAPA_RXQ_SHARE.

The PMD is responsible for shared Rx queue consistency checks to avoid
member ports' configurations contradicting each other.
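
A minimal capability-check sketch (generic ethdev usage; the helper is
hypothetical, and setting the actual share group in the Rx queue
configuration is omitted here):

#include <rte_ethdev.h>

/* Sketch: check whether a port advertises shared Rx queue support
 * before requesting a share group in its Rx queue configuration. */
static int
rxq_share_supported(uint16_t port_id)
{
    struct rte_eth_dev_info info;
    int ret = rte_eth_dev_info_get(port_id, &info);

    if (ret != 0)
        return ret;
    return (info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) != 0;
}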

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
2021-10-22 00:08:50 +02:00
Satheesh Paul
00ea15e7a3 net/cnxk: support port ID flow action
This patch adds support for the rte_flow action type port_id to
enable directing packets from an input port PF to an output
port which is a VF of the input port PF.
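
A generic rte_flow sketch of such an action (the destination ethdev
port ID 2 is arbitrary):

#include <rte_flow.h>

/* Sketch: redirect matched packets to the ethdev with port ID 2
 * (a VF representor of the input PF in this use case). */
static const struct rte_flow_action_port_id dst_port = { .id = 2 };
static const struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &dst_port },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};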

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-21 18:59:40 +02:00
William Tu
d1c7029a52 net/e1000: build on Windows
This patch enables building the e1000 driver for Windows.
I tested using two Windows VMs on top of VMware Fusion,
creating two e1000 devices with device ID 0x10D3 (82574L),
and verifying that Rx/Tx works correctly using dpdk-testpmd.exe
in rxonly and txonly modes.

Signed-off-by: William Tu <u9012063@gmail.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Pallavi Kadam <pallavi.kadam@intel.com>
Tested-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Tested-by: Pallavi Kadam <pallavi.kadam@intel.com>
2021-10-21 04:58:40 +02:00
Rongwei Liu
a89f6433aa net/mlx5: set Tx queue affinity in round-robin
Previously, we set txq affinity to 0 and let the firmware
perform round-robin when bonding. The firmware uses a
global counter to assign txq affinity to different
physical ports according to the remainder after division.

There are three disadvantages:
1. The global counter is shared between the kernel and DPDK.
2. After restarting the PMD or a port, the previous counter value
is reused, so the new affinity is unpredictable.
3. There is no way to get what affinity is set by the firmware.

In this update, we will create several TISs up to the
number of bonding ports and bind each TIS to one PF port.

Each port starts picking up a TIS using its port index. An
upper-layer application can quickly calculate each txq's
affinity without querying.

At the DPDK layer, when creating txqs with 2 bonding ports, the
affinity is set like:
port 0: 1-->2-->1-->2
port 1: 2-->1-->2-->1
port 2: 1-->2-->1-->2

Note: only applicable to the DevX API.
This affinity is subject to the HW hash.
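
A worked restatement of the pattern listed above (arithmetic only, not
driver code; 1-based affinity values):

/* Sketch: Tx queue 'txq' of DPDK port 'port' picks affinity/TIS
 * ((port + txq) % nb_bond_ports) + 1, which reproduces the
 * 1-->2-->1-->2 / 2-->1-->2-->1 pattern above for 2 bonding ports. */
static inline unsigned int
txq_affinity(unsigned int port, unsigned int txq, unsigned int nb_bond_ports)
{
    return (port + txq) % nb_bond_ports + 1;
}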

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
2021-10-21 12:37:00 +02:00
Jie Wang
f30157d988 net/iavf: support PPPoL2TPv2oUDP RSS Hash
Add support for RSS hash for the PPP over L2TPv2 over UDP protocol,
based on the inner IP src/dst address and TCP/UDP src/dst port.

Patterns are listed below:
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/udp
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/tcp

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-21 14:15:59 +02:00
Jie Wang
3a929df1f2 ethdev: support L2TPv2 and PPP protocol
Added flow pattern items and header formats of L2TPv2 and PPP.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
2021-10-21 14:15:59 +02:00
Sunil Kumar Kori
58397fedc6 net/cnxk: support meter action to flow create
Meters are configured per flow using the rte_flow_create API.
Implement support for the meter action applied to a flow.
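
A generic rte_flow sketch of attaching a meter to a flow (the meter ID 1
is arbitrary and assumed to have been created beforehand through the
rte_mtr API):

#include <rte_flow.h>

/* Sketch: apply previously created meter object 1 to the matched flow. */
static const struct rte_flow_action_meter mtr = { .mtr_id = 1 };
static const struct rte_flow_action actions[] = {
    { .type = RTE_FLOW_ACTION_TYPE_METER, .conf = &mtr },
    { .type = RTE_FLOW_ACTION_TYPE_END },
};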

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-19 16:25:31 +02:00
Sunil Kumar Kori
26b034f78c net/cnxk: support to validate meter policy
Implement API to validate meter policy for CNXK platform.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
2021-10-19 16:24:59 +02:00
Alvin Zhang
1506c90029 net/i40e: fix IPv6 fragment RSS offload type in flow
To keep the flow format uniform with ice, this patch adds support for
this RSS rule:
    flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end \
    actions rss types ipv6-frag end queues end / end

Fixes: ef4c16fd9148 ("net/i40e: refactor RSS flow")
Cc: stable@dpdk.org

Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
2021-10-19 13:06:42 +02:00
Ferruh Yigit
b563c14212 ethdev: remove jumbo offload flag
Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.

Instead of drivers announcing this capability, applications can deduce
the capability by checking the reported 'dev_info.max_mtu' or
'dev_info.max_rx_pktlen'.

And instead of the application setting this flag explicitly to enable
jumbo frames, the driver can deduce it by comparing the requested 'mtu'
to 'RTE_ETHER_MTU'.

Remove this additional configuration for simplification.
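
In application terms, a sketch of the replacement logic (the helper is
hypothetical; the capability is deduced from dev_info and an MTU request
replaces the offload flag):

#include <errno.h>
#include <rte_ethdev.h>

/* Sketch: instead of setting DEV_RX_OFFLOAD_JUMBO_FRAME, check what the
 * device supports and simply request a large MTU. */
static int
enable_jumbo(uint16_t port_id, uint16_t mtu)
{
    struct rte_eth_dev_info info;
    int ret = rte_eth_dev_info_get(port_id, &info);

    if (ret != 0)
        return ret;
    if (mtu > info.max_mtu) /* capability deduced from max_mtu */
        return -EINVAL;
    return rte_eth_dev_set_mtu(port_id, mtu);
}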

Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
2021-10-18 19:20:21 +02:00
Ferruh Yigit
1bb4a528c4 ethdev: fix max Rx packet length
There is confusion about setting the max Rx packet length; this patch
aims to clarify it.

The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.

Also, the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the
result is stored in '(struct rte_eth_dev)->data->mtu'.

These two APIs are related but they work in a disconnected way; they
store the set values in different variables, which makes it hard to
figure out which one to use, and having two different methods for a
related functionality is confusing for users.

Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet
  frame, while 'max_rx_pkt_len' is the size of the Ethernet frame. The
  difference is the Ethernet frame overhead, and this overhead may differ
  from device to device based on what the device supports, like VLAN and
  QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
  frames, which adds additional confusion, and some APIs and PMDs already
  disregard this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
  field, which adds configuration complexity for the application.

As a solution, both APIs take the MTU as a parameter, and both save the
result in the same variable '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is updated to 'mtu', and it is always valid independent
of jumbo frames.

For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
user request and should be used only within the configure function; the
result should be stored in '(struct rte_eth_dev)->data->mtu'. After that
point, both the application and the PMD use the MTU from this variable.

When the application doesn't provide an MTU during
'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.

Additional clarification is done on the scattered Rx configuration, in
relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where Rx packets are stored, and many PMDs
use the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to
enable scattered Rx. If scattered Rx is not supported by the device, an
MTU bigger than the Rx buffer size should fail.
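
A configuration sketch reflecting the new model (the helper and the
queue counts are placeholders):

#include <rte_ethdev.h>

/* Sketch: request the MTU at configure time via rxmode.mtu; afterwards
 * both the application and the PMD read back the same value with
 * rte_eth_dev_get_mtu(). */
static int
configure_with_mtu(uint16_t port_id, uint16_t mtu)
{
    struct rte_eth_conf conf = {
        .rxmode = { .mtu = mtu }, /* replaces max_rx_pkt_len */
    };
    uint16_t cur_mtu;
    int ret;

    ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
    if (ret != 0)
        return ret;
    return rte_eth_dev_get_mtu(port_id, &cur_mtu);
}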

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
2021-10-18 19:20:20 +02:00
William Tu
d2e5ab2b42 doc: fix emulated device names in e1000 guide
The device name should be 82574L Gigabit Ethernet Controller.
The patch also removes a redundant "*".

Fixes: fc1f2750a3ec ("doc: programmers guide")
Cc: stable@dpdk.org

Signed-off-by: William Tu <u9012063@gmail.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
2021-10-15 15:50:50 +02:00
Andrew Rybchenko
f55b61cec9 net/sfc: support port representor flow item
Add support for item PORT_REPRESENTOR, which should
be used instead of the ambiguous item PORT_ID.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
2021-10-13 22:59:26 +02:00