IP reassembly is a costly operation if it is done in software.
The operation becomes even costlier if the IP fragments are encrypted.
However, if it is offloaded to HW, it can save application cycles
considerably.
Hence, a new offload feature is exposed in eth_dev ops for devices which
can attempt IP reassembly of packets in hardware.
- rte_eth_ip_reassembly_capability_get() - to get the maximum values
of reassembly configuration which can be set.
- rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
and to enable the feature in the PMD (to be called before
rte_eth_dev_start()).
- rte_eth_ip_reassembly_conf_get() - to get the current configuration
set in PMD.
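A minimal usage sketch of the calls above (illustrative; the parameter
struct name and its timeout_ms/max_frags fields are assumptions here,
not taken from this patch):

    struct rte_eth_ip_reassembly_params capa, conf;

    /* Query the maximum reassembly configuration the device supports. */
    if (rte_eth_ip_reassembly_capability_get(port_id, &capa) < 0)
        return; /* offload not supported on this port */

    /* Request values within the reported capability. */
    conf = capa;
    conf.timeout_ms = RTE_MIN(capa.timeout_ms, 500); /* hypothetical value */

    /* Enable the offload; must be called before rte_eth_dev_start(). */
    if (rte_eth_ip_reassembly_conf_set(port_id, &conf) < 0)
        return;

    /* Read back the configuration actually applied by the PMD. */
    rte_eth_ip_reassembly_conf_get(port_id, &conf);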
Now when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
the resulting reassembled IP packet will be a typical segmented mbuf in
case of success.
And if reassembly of the IP fragments fails or is incomplete (e.g. if
fragments do not arrive before the reass_timeout, overlap, etc.), the
mbuf dynamic flags can be updated by the PMD. This is updated in a
subsequent patch.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Based on device support and use-case need, there are two different ways
to enable PFC. The first case is the port-level PFC configuration; in
this case, the rte_eth_dev_priority_flow_ctrl_set() API shall be used to
configure the PFC, and PFC frames will be generated based on the VLAN
TC value.
The second case is the queue-level PFC configuration; in this case, any
packet field content can be used to steer the packet to a specific
queue using rte_flow or RSS, and then
rte_eth_dev_priority_flow_ctrl_queue_configure() is used to configure
the TC mapping on each queue.
Based on the congestion detected on the specific queue, the configured
TC shall be used to generate PFC frames.
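A hedged sketch of the queue-level case (the struct name and field
layout below are assumptions for illustration, not confirmed by this
patch):

    struct rte_eth_pfc_queue_conf pfc_conf = {
        .mode = RTE_ETH_FC_FULL,
        .rx_pause = { .tx_qid = 0, .tc = 3 }, /* pause Tx queue 0 on TC 3 */
        .tx_pause = { .rx_qid = 0, .tc = 3 }, /* congestion on Rx queue 0
                                               * generates TC 3 PFC frames */
    };

    rte_eth_dev_priority_flow_ctrl_queue_configure(port_id, &pfc_conf);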
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
name. The old flags remain usable, but a deprecation warning is issued
at compilation.
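For instance, the rename pattern applied by this series looks like:

    PKT_RX_VLAN      ->  RTE_MBUF_F_RX_VLAN
    PKT_TX_IP_CKSUM  ->  RTE_MBUF_F_TX_IP_CKSUM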
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Add the 'RTE_ETH' namespace to all enums & macros in a backward-compatible
way. The macros for backward compatibility can be removed in the next LTS.
Also updated some struct names to have the 'rte_eth' prefix.
All internal components switched to using the new names.
Syntax fixed on lines that this patch touches.
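Typical renames from this series, for example:

    DEV_RX_OFFLOAD_CHECKSUM  ->  RTE_ETH_RX_OFFLOAD_CHECKSUM
    ETH_LINK_SPEED_10G       ->  RTE_ETH_LINK_SPEED_10G
    ETH_RSS_IP               ->  RTE_ETH_RSS_IP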
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Jumbo offload is no longer announced as a capability, and the
'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag is removed.
This patch also removes the 'Jumbo frame' feature from the documentation.
Fixes: b563c1421282 ("ethdev: remove jumbo offload flag")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
In the current DPDK framework, each Rx queue is pre-loaded with mbufs to
save incoming packets. For some PMDs, when the number of representors
scales out in a switch domain, the memory consumption becomes
significant. Polling all ports also leads to high cache miss rates, high
latency and low throughput.
This patch introduces the shared Rx queue. Ports in the same Rx domain
and switch domain can share an Rx queue set by specifying a non-zero
sharing group in the Rx queue configuration.
A shared Rx queue is identified by the share_rxq field of the Rx queue
configuration. Port A RxQ X can share an RxQ with Port B RxQ Y by using
the same shared Rx queue ID.
No special API is defined to receive packets from a shared Rx queue.
Polling any member port of a shared Rx queue receives packets of that
queue for all member ports; the port_id is identified by mbuf->port. The
PMD is responsible for resolving the shared Rx queue from the device and
queue data.
A shared Rx queue must be polled in the same thread or core; polling a
queue ID of any member port is essentially the same.
Multiple share groups are supported. A PMD should support mixed
configuration by allowing multiple share groups and non-shared Rx queues
on one port.
Example grouping and polling model to reflect service priority:
Group1, 2 shared Rx queues per port: PF, rep0, rep1
Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127
Core0: poll PF queue0
Core1: poll PF queue1
Core2: poll rep2 queue0
PMDs advertise the shared Rx queue capability via RTE_ETH_DEV_CAPA_RXQ_SHARE.
The PMD is responsible for shared Rx queue consistency checks to avoid
member ports' configurations contradicting each other.
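A hedged configuration sketch (the rte_eth_rxconf field names below,
share_group and share_qid, are assumptions for illustration; the text
above refers to the identifying field as share_rxq):

    struct rte_eth_dev_info dev_info;
    struct rte_eth_rxconf rxq_conf = {0};

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) {
        rxq_conf.share_group = 1; /* non-zero group enables sharing */
        rxq_conf.share_qid = 0;   /* shared Rx queue ID within the group */
    }
    rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id, &rxq_conf, mp);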
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Removing the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
Instead of drivers announcing this capability, the application can
deduce it by checking the reported 'dev_info.max_mtu' or
'dev_info.max_rx_pktlen'.
And instead of the application setting this flag explicitly to enable
jumbo frames, this can be deduced by the driver by comparing the
requested 'mtu' to 'RTE_ETHER_MTU'.
Removing this additional configuration for simplification.
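A minimal sketch of the deduction described above (illustrative;
requested_mtu is a hypothetical application variable):

    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    /* Application side: jumbo frames are possible when the device
     * reports an MTU limit above the standard Ethernet MTU. */
    bool jumbo_capable = dev_info.max_mtu > RTE_ETHER_MTU;

    /* Driver side: jumbo mode is implied by the requested MTU. */
    bool jumbo_requested = requested_mtu > RTE_ETHER_MTU;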
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
There is confusion about setting the max Rx packet length; this patch
aims to clarify it.
The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
Also, the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, with
the result stored in '(struct rte_eth_dev)->data->mtu'.
These two APIs are related, but they work in a disconnected way: they
store the set values in different variables, which makes it hard to
figure out which one to use; also, having two different methods for a
related functionality is confusing for users.
Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload size of the
  Ethernet frame, while 'max_rx_pkt_len' is the size of the whole
  Ethernet frame. The difference is the Ethernet frame overhead, and
  this overhead may differ from device to device based on what the
  device supports, like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
  frames, which adds additional confusion, and some APIs and PMDs
  already disregard this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
  field, which adds configuration complexity for the application.
As a solution, both APIs get the MTU as a parameter, and both save the
result in the same variable, '(struct rte_eth_dev)->data->mtu'. For
this, 'max_rx_pkt_len' is replaced by 'mtu', and it is always valid,
independent of jumbo frames.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
user request; it should be used only within the configure function, and
the result should be stored in '(struct rte_eth_dev)->data->mtu'. After
that point, both the application and the PMD use the MTU from this
variable.
When the application doesn't provide an MTU during
'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.
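A minimal sketch of the unified configuration after this change
(illustrative values):

    struct rte_eth_conf port_conf = {0};

    port_conf.rxmode.mtu = 9000; /* hypothetical jumbo MTU request */
    rte_eth_dev_configure(port_id, 1, 1, &port_conf);

    /* Or, equivalently, adjust the MTU later; both paths end up in
     * (struct rte_eth_dev)->data->mtu. */
    rte_eth_dev_set_mtu(port_id, 9000);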
Additional clarification is done on the scattered Rx configuration, in
relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where Rx packets are stored, and many PMDs
use the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to
enable scattered Rx. If scattered Rx is not supported by the device, an
MTU bigger than the Rx buffer size should fail.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
rte_eth_rx_descriptor_status() should be used as a replacement.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
A more fine-grained flow API action, RTE_FLOW_ACTION_TYPE_SAMPLE, should
be used instead.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Not all net PMDs/HW can parse a packet and identify the L2 header and
L3 header locations on Tx. This is in line with other Tx offload
requirements such as L3 checksum, L4 checksum offload, etc.,
where mbuf.l2_len, mbuf.l3_len, etc. need to be set for HW to be
able to generate the checksum. Since inline IPsec is also such a Tx
offload, some PMDs at least need mbuf.l2_len to be valid to
find the L3 header and perform outbound IPsec processing.
Hence, this patch updates the documentation to enforce setting
mbuf.l2_len while setting PKT_TX_SEC_OFFLOAD in mbuf.ol_flags
for inline IPsec crypto/protocol offload processing to
work on Tx.
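A minimal Tx-side sketch of this requirement (illustrative; assumes a
plain Ethernet L2 header):

    m->l2_len = sizeof(struct rte_ether_hdr); /* let HW locate the L3 header */
    m->ol_flags |= PKT_TX_SEC_OFFLOAD;        /* request inline IPsec on Tx */
    /* The security session must also be attached to the mbuf as required
     * by the rte_security API (not shown here). */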
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
At this point, multiple different Ethernet drivers from multiple vendors
will support the PMD power management scheme. It would be useful to add
it to the NIC feature table to indicate support for it.
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
The NICs overview table lists all supported features per driver.
There was a single row for "Flow API",
although rte_flow is composed of many items and actions.
The row "Flow API" is replaced with two new tables for items and actions.
Also, since rte_flow is not implemented in all drivers,
it would be ugly to add empty sections in some files.
That's why the error message for missing INI section is removed.
The lists are sorted alphabetically.
The extra files for some VF and vectorized data paths are not filled.
Signed-off-by: Asaf Penso <asafp@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Kiran Kumar K <kirankumark@marvell.com>
Since rte_flow is the only API for filtering operations,
the legacy driver interface filter_ctrl was too complicated
for the simple task of getting the struct rte_flow_ops.
The filter type RTE_ETH_FILTER_GENERIC and
the filter operation RTE_ETH_FILTER_GET are removed.
The new driver callback flow_ops_get replaces filter_ctrl.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Rename PKT_RX_EIP_CKSUM_BAD to PKT_RX_OUTER_IP_CKSUM_BAD and
deprecate the original name. The new name is better aligned
with existing PKT_RX_OUTER_* flags, which should help reduce
confusion about its use.
Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Document FEC in NIC features, add information about FEC, and add
implementation-related support.
Fixes: b7ccfb09da95 ("ethdev: introduce FEC API")
Fixes: 9bf2ea8dbc65 ("net/hns3: support FEC")
Fixes: 62aafe035896 ("net/cxgbe: support configuring link FEC")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The networking drivers features matrix had rows to show
OS and kernel modules support:
- BSD nic_uio
- Linux UIO
- Linux VFIO
- Other kdrv
- Windows
The kernel modules details are removed to keep only OS support:
- FreeBSD
- Linux
- Windows
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The words blacklist and whitelist are avoided in text
about MAC filtering or kernel module.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Make is no longer supported for compiling DPDK, references are now
removed in the documentation.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Igor Russkikh <irusskikh@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
The DPDK datapath in the transmit direction is very flexible.
An application can build a multi-segment packet and manage
almost all data aspects: the memory pools segments are allocated
from, the segment lengths, the memory attributes like external
buffers registered for DMA, etc.
In the receive direction, the datapath is much less flexible;
an application can only specify the memory pool to configure the
receive queue and nothing more. In order to extend the receive
datapath capabilities, it is proposed to add a way to provide
extended information on how to split the packets being received.
The new offload flag RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT in device
capabilities is introduced to present the way for a PMD to report to the
application support for splitting Rx packets into configurable
segments. Prior to invoking the rte_eth_rx_queue_setup() routine, the
application should check the RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT flag.
The following structure is introduced to specify the Rx packet
segment for the RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT offload:
struct rte_eth_rxseg_split {
    struct rte_mempool *mp; /* memory pool to allocate segment from */
    uint16_t length;   /* segment maximal data length,
                          configures "split point" */
    uint16_t offset;   /* data offset from beginning
                          of mbuf data buffer */
    uint32_t reserved; /* reserved field */
};
The segment descriptions are added to the rte_eth_rxconf structure:
rx_seg - pointer to the array of segment descriptions; each element
         describes the memory pool, maximal data length, and initial
         data offset from the beginning of the data buffer in the mbuf.
         This array allows specifying different settings for each
         segment individually.
rx_nseg - number of elements in the array
If the extended segment descriptions are provided with these new
fields, the mp parameter of rte_eth_rx_queue_setup must be
specified as NULL to avoid ambiguity.
There are two options to specify the Rx buffer configuration:
- mp is not NULL, rx_conf.rx_nseg is zero: the compatible
  configuration, which follows the existing implementation and
  provides a single pool with no description of segment sizes
  and offsets.
- mp is NULL, rx_conf.rx_seg is not NULL, rx_conf.rx_nseg is not
  zero: the extended configuration, provided individually for
  each segment.
If the Rx queue is configured with the new settings, the packets being
received will be split into multiple segments pushed to mbufs
with the specified attributes. The PMD will split the received packets
into multiple segments according to the specification in the
description array.
For example, let's suppose we configured the Rx queue with the
following segments:
seg0 - pool0, len0=14B, off0=2
seg1 - pool1, len1=20B, off1=128B
seg2 - pool2, len2=20B, off2=0B
seg3 - pool3, len3=512B, off3=0B
A packet 46 bytes long will look like the following:
seg0 - 14B long @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
seg1 - 20B long @ 128 in mbuf from pool1
seg2 - 12B long @ 0 in mbuf from pool2
A packet 1500 bytes long will look like the following:
seg0 - 14B @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
seg1 - 20B @ 128 in mbuf from pool1
seg2 - 20B @ 0 in mbuf from pool2
seg3 - 512B @ 0 in mbuf from pool3
seg4 - 512B @ 0 in mbuf from pool3
seg5 - 422B @ 0 in mbuf from pool3
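A hedged configuration sketch for the example above (the union
rte_eth_rxseg type and the designated-initializer layout are shown for
illustration):

    union rte_eth_rxseg rx_segs[4];
    struct rte_eth_rxconf rx_conf = {0};

    rx_segs[0].split = (struct rte_eth_rxseg_split)
        { .mp = pool0, .length = 14, .offset = 2 };
    rx_segs[1].split = (struct rte_eth_rxseg_split)
        { .mp = pool1, .length = 20, .offset = 128 };
    rx_segs[2].split = (struct rte_eth_rxseg_split)
        { .mp = pool2, .length = 20, .offset = 0 };
    rx_segs[3].split = (struct rte_eth_rxseg_split)
        { .mp = pool3, .length = 512, .offset = 0 };

    rx_conf.rx_seg = rx_segs;
    rx_conf.rx_nseg = 4;
    rx_conf.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT |
                       RTE_ETH_RX_OFFLOAD_SCATTER;
    /* mp must be NULL when the segment descriptions are provided. */
    rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id, &rx_conf, NULL);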
The offload RTE_ETH_RX_OFFLOAD_SCATTER must be present and
configured to support the new buffer split feature (if rx_nseg
is greater than one).
The split limitations imposed by the underlying PMD are reported
in the newly introduced rte_eth_dev_info->rx_seg_capa field.
The new approach allows splitting the ingress packets into
multiple parts pushed to memory with different attributes.
For example, the packet headers can be pushed to the embedded
data buffers within mbufs and the application data into
external buffers attached to mbufs allocated from different
memory pools. The memory attributes of the split parts may also
differ: for example, the application data may be pushed into
external memory located on a dedicated physical device, say a
GPU or NVMe. This improves the flexibility of the DPDK receive
datapath while preserving compatibility with the existing API.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
This patch is a preparation to hide 'struct eth_dev_ops' from
applications by moving some device operations from 'struct eth_dev_ops'
to 'struct rte_eth_dev'.
The mentioned ethdev APIs are in the data path and implemented as inline
for performance reasons.
Exposing 'struct eth_dev_ops' to applications is bad because it is a
contract between ethdev and PMDs that does not really need to be known
by applications; also, changes in the struct cause ABI breakages, which
they shouldn't.
To be able to both keep the APIs inline and hide 'struct eth_dev_ops',
the device operations used in the ethdev inline APIs are moved to
'struct rte_eth_dev', at the same level as the Rx/Tx burst functions.
The list of dev_ops moved:
eth_rx_queue_count_t rx_queue_count;
eth_rx_descriptor_done_t rx_descriptor_done;
eth_rx_descriptor_status_t rx_descriptor_status;
eth_tx_descriptor_status_t tx_descriptor_status;
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Sachin Saxena <sachin.saxena@nxp.com>
This patch implements an API for the configuration and
validation of the max size of an LRO aggregated packet.
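A minimal sketch (the max_lro_pkt_size field name in struct
rte_eth_rxmode is an assumption for illustration):

    struct rte_eth_conf port_conf = {0};

    port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
    port_conf.rxmode.max_lro_pkt_size = 16384; /* hypothetical limit */
    rte_eth_dev_configure(port_id, 1, 1, &port_conf);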
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add new Rx offload flag `DEV_RX_OFFLOAD_RSS_HASH` which can be used to
enable/disable PMDs writing to `rte_mbuf::hash::rss`.
PMDs notify the validity of `rte_mbuf::hash::rss` to the application
by enabling the `PKT_RX_RSS_HASH` flag in `rte_mbuf::ol_flags`.
Also update the testpmd rx_offload command to include RSS_HASH.
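A minimal Rx-side sketch of the resulting contract (illustrative):

    /* Only read the RSS hash when the PMD marked it as valid. */
    if (m->ol_flags & PKT_RX_RSS_HASH)
        queue_index = m->hash.rss % nb_queues; /* hypothetical use */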
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add the `rte_eth_dev_set_ptypes` function that allows the application
to inform the PMD about a reduced range of packet types to handle.
Based on the ptypes set, PMDs can optimize their Rx path.
- If the application doesn't want any ptype information, it can call
  `rte_eth_dev_set_ptypes(ethdev_id, RTE_PTYPE_UNKNOWN, NULL, 0)`,
  and the PMD may skip packet type processing and set
  rte_mbuf::packet_type to RTE_PTYPE_UNKNOWN.
- If the application doesn't call `rte_eth_dev_set_ptypes`, the PMD can
  return `rte_mbuf::packet_type` with `rte_eth_dev_get_supported_ptypes`.
- If the application is interested only in the L2/L3 layers, it can
  inform the PMD to update `rte_mbuf::packet_type` with L2/L3 ptypes by
  calling `rte_eth_dev_set_ptypes(ethdev_id,
  RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK, NULL, 0)`.
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Change the type of the burst mode information from a bit field to free
string data, so that each PMD can describe the Rx/Tx burst functions
flexibly.
Fixes: eb5902504a13 ("ethdev: add API for getting burst mode information")
Fixes: 6b6609f68ccd ("net/i40e: support Rx/Tx burst mode info")
Fixes: e9a10e6c2102 ("net/ice: support Rx/Tx burst mode info")
Fixes: 7fe108edcf53 ("app/testpmd: show Rx/Tx burst mode description")
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Ray Kinsella <ray.kinsella@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Some PMDs have more than one Rx/Tx burst path. Add an ethdev API
that allows an application to retrieve mode information about the
Rx/Tx packet burst, such as Scalar or Vector, and the Vector technology
used, like AVX2.
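A hedged usage sketch, using the free-string form of the API introduced
by the fix above (struct and function names follow the upstream API):

    struct rte_eth_burst_mode mode;

    if (rte_eth_rx_burst_mode_get(port_id, queue_id, &mode) == 0)
        printf("Rx burst mode: %s\n", mode.info);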
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
As the legacy filter API "filter_ctrl" has been superseded since 2017
by the rte_flow API, and got the deprecated attribute in DPDK 19.05,
it is time to remove the associated features from the matrix.
Not documenting deprecated features as supported will avoid confusion.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Add rte_eth_read_clock to read the raw clock of a device.
The main use is to get the device clock conversion coefficients, to be
able to translate the raw clock of the timestamp field of the packet
mbuf to a locally synced time value.
This function was missing to allow users to convert the Rx timestamp
field to real time without the complexity of the rte_timesync* facility.
One can derive the clock frequency by calling read_clock twice, and
then keep a common time base.
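A minimal sketch of deriving the device clock frequency (illustrative):

    uint64_t c0, c1;

    rte_eth_read_clock(port_id, &c0);
    rte_delay_ms(100); /* hypothetical sampling window */
    rte_eth_read_clock(port_id, &c1);

    /* Device clock ticks per second, derived from the two samples. */
    double hz = (double)(c1 - c0) * 10.0;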
Signed-off-by: Tom Barbette <barbette@kth.se>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
This patch fixes a wrong tag in guides/nics/features.rst.
The feature tags should be, according to the
"Features Overview" section in this doc, one of the following:
"uses", "implements", "provides", or "related".
Hence, in the Inner RSS section, it should be "uses"
instead of "users".
Fixes: d0a87d9aa8de ("doc: update mlx5 guide on tunnel offloading")
Cc: stable@dpdk.org
Signed-off-by: Rami Rosen <ramirose@gmail.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Based on the PKT_TX_TCP_SEG definition,
the application needs to set PKT_TX_IPV4 or PKT_TX_IPV6,
depending on whether the packet is IPv4 or IPv6, and the
PKT_TX_IP_CKSUM ol_flag, to enable Tx TSO offload.
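A minimal IPv4 TSO preparation sketch (illustrative; assumes the current
rte_-prefixed header struct names):

    m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
    m->l2_len = sizeof(struct rte_ether_hdr);
    m->l3_len = sizeof(struct rte_ipv4_hdr);
    m->l4_len = sizeof(struct rte_tcp_hdr);
    m->tso_segsz = 1448; /* hypothetical MSS */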
Fixes: dad1ec72a377 ("doc: document NIC features")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Introduce the DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag and the
PKT_TX_OUTER_UDP_CKSUM mbuf ol_flag to enable Tx outer UDP
checksum offload.
To use hardware Tx outer UDP checksum offload, the user needs to:
- enable the following in the mbuf:
  a) fill outer_l2_len and outer_l3_len
  b) set the PKT_TX_OUTER_UDP_CKSUM flag
  c) set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag
- configure the DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag in the slow path
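A minimal per-packet sketch of the Tx-side setup (illustrative; assumes
an IPv4 outer header and current rte_-prefixed struct names):

    m->outer_l2_len = sizeof(struct rte_ether_hdr);
    m->outer_l3_len = sizeof(struct rte_ipv4_hdr);
    m->ol_flags |= PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_UDP_CKSUM;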
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Introduce the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM Rx offload flag and the
PKT_RX_OUTER_L4_CKSUM_* mbuf ol_flags to report the outer UDP checksum
status.
- To use hardware Rx outer UDP checksum offload, the user needs to
  configure the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload flag in the slow path.
- The driver updates the checksum status in the mbuf ol_flags as
  PKT_RX_OUTER_L4_CKSUM_* flags.
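A minimal Rx-side check (illustrative; drop_packet is a hypothetical
handler):

    uint64_t st = m->ol_flags & PKT_RX_OUTER_L4_CKSUM_MASK;

    if (st == PKT_RX_OUTER_L4_CKSUM_BAD)
        drop_packet(m);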
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Based on the PKT_TX_[TCP|UDP|SCTP]_CKSUM definitions, the user needs
to fill the l2_len and l3_len mbuf fields before issuing a HW Tx
checksum request.
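A minimal UDP example of this requirement (illustrative; assumes current
rte_-prefixed struct names, with the pseudo-header checksum pre-filled
as the mbuf API documents):

    m->l2_len = sizeof(struct rte_ether_hdr);
    m->l3_len = sizeof(struct rte_ipv4_hdr);
    m->ol_flags |= PKT_TX_IPV4 | PKT_TX_UDP_CKSUM;
    /* HW checksum offload expects the pseudo-header checksum here. */
    udp_hdr->dgram_cksum = rte_ipv4_phdr_cksum(ip_hdr, m->ol_flags);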
Fixes: dad1ec72a377 ("doc: document NIC features")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Based on the PKT_TX_IP_CKSUM definition, the user needs
to fill the l2_len and l3_len mbuf fields before issuing
a HW Tx checksum request.
Fixes: dad1ec72a377 ("doc: document NIC features")
Cc: stable@dpdk.org
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Update the implementation so that when the PKT_RX_QINQ_STRIPPED mbuf
ol_flag is set by the PMD, the PKT_RX_QINQ, PKT_RX_VLAN_STRIPPED and
PKT_RX_VLAN flags are also set.
Clarify the mbuf documentation that when PKT_RX_QINQ is set, PKT_RX_VLAN
should also be set,
so that the application can rely on the PKT_RX_QINQ flag to access both
mbuf.vlan_tci & mbuf.vlan_tci_outer.
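A minimal sketch of the guarantee from the application side
(illustrative):

    if (m->ol_flags & PKT_RX_QINQ) {
        /* Both TCI fields are guaranteed to be valid here. */
        uint16_t inner_tci = m->vlan_tci;
        uint16_t outer_tci = m->vlan_tci_outer;
    }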
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Remove the DEV_RX_OFFLOAD_CRC_STRIP offload flag.
Without any specific Rx offload flag, the default behavior of PMDs is to
strip the CRC.
PMDs that support keeping the CRC should advertise the
DEV_RX_OFFLOAD_KEEP_CRC Rx offload capability.
Applications that require keeping the CRC should check the PMD
capability first and, if it is supported, enable the feature by setting
DEV_RX_OFFLOAD_KEEP_CRC in the Rx offload flags in rte_eth_dev_configure().
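A minimal capability-check sketch (illustrative):

    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf = {0};

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_KEEP_CRC)
        port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
    rte_eth_dev_configure(port_id, 1, 1, &port_conf);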
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tomasz Duszynski <tdu@semihalf.com>
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Jan Remes <remes@netcope.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Fixes: 3d12dceed2df ("ethdev: add new offload flag to keep CRC")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
In DPDK 17.11, the ethdev offloads API has changed:
commit cba7f53b717d ("ethdev: introduce Tx queue offloads API")
commit ce17eddefc20 ("ethdev: introduce Rx queue offloads API")
The new API is documented in the programmer's guide:
http://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html#hardware-offload
As a reminder, the main concepts in the new API were:
- All offloads are disabled by default
- Distinction between per-port and per-queue offloads.
The transition bits are now removed:
- Translation of the old API in ethdev
- rte_eth_conf.rxmode.ignore_offload_bitfield
- ETH_TXQ_FLAGS_IGNORE
The old API bits are now removed:
- Rx per-port rte_eth_conf.rxmode.[bit-fields]
- Tx per-queue rte_eth_txconf.txq_flags
- ETH_TXQ_FLAGS_NO*
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Shahaf Shuler <shahafs@mellanox.com>
It's not possible to set up a queue when the port is started,
because of a check in the ethdev layer. New capability flags are
added in order to relax this check for devices which support
queue setup at runtime. The functions rte_eth_[rx|tx]_queue_setup
will raise an error only if the port is started and runtime queue
setup is not supported.
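A minimal sketch of the resulting application-side check (the capability
flag name follows the upstream RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP
convention):

    struct rte_eth_dev_info dev_info;

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP)
        /* Safe to call even after rte_eth_dev_start(). */
        rte_eth_rx_queue_setup(port_id, new_qid, nb_rxd,
                               socket_id, NULL, mp);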
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>