The flow items are not yet cleaned up in DPDK 22.11 as desired.
The plan is updated for this release with more details,
and a new plan must be specified during the next release.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
The legacy flow actions, which are being replaced with the more generic
modify action, are not completely removed yet in DPDK 22.11.
The plan needs to be updated for this release,
and a new plan must be specified during the next release.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Updated the AESNI MB, AESNI GCM, KASUMI, ZUC, SNOW3G
and CHACHA20_POLY1305 PMD documentation guides
with information about the latest supported Intel IPsec Multi-buffer
library version.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Acked-by: Brian Dooley <brian.dooley@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Add to the release notes:
the flags field in the pre-configuration structure and the strict-queue
flag.
Fixes: dcc9a80c20 ("ethdev: add strict queue to pre-configuration flow hints")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Thomas Monjalon <thomas@monjalon.net>
Add to 22.11 release notes the NVIDIA mlx5 HWS aging support.
Fixes: 04a4de756e ("net/mlx5: support flow age action with HWS")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Thomas Monjalon <thomas@monjalon.net>
The NVIDIA mlx5 driver section in the 22.11 release notes lists all
features supported for queue-based async HW steering.
A blank line is missing before the list, causing it to render as a
regular text line. This patch adds the blank line.
Fixes: 0f4aa72b99 ("net/mlx5: support flow modify field with HWS")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Thomas Monjalon <thomas@monjalon.net>
Updated release notes about the SNOW-3G and ZUC support
on the Arm platform using the ipsec_mb PMD. The support was added in
the patch below.
Fixes: 0899a87ce7 ("crypto/ipsec_mb: enable IPsec on Arm platform")
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Add a performance application to test the security session create and
destroy rates supported by security-enabled cryptodev PMDs. The
application creates the specified number of sessions and captures the
time taken to do so before proceeding to destroy them. When
operating on multiple cores, the sessions are evenly
distributed across all cores.
The application tests all combinations of cipher and auth
algorithms supported by the PMD.
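A minimal sketch of the measurement approach (the create_session() and
destroy_session() helpers below are hypothetical stand-ins for the
PMD-specific security session calls; only the timing structure is
shown):

    #include <stdio.h>
    #include <rte_cycles.h>

    /* Hypothetical stand-ins for the security session create/destroy calls. */
    static void create_session(unsigned int idx) { (void)idx; }
    static void destroy_session(unsigned int idx) { (void)idx; }

    /* Time the creation of nb_sessions sessions and report the rate. */
    static void
    measure_session_rate(unsigned int nb_sessions)
    {
        uint64_t start, cycles;
        unsigned int i;

        start = rte_rdtsc_precise();
        for (i = 0; i < nb_sessions; i++)
            create_session(i);
        cycles = rte_rdtsc_precise() - start;

        printf("create rate: %.0f sessions/s\n",
               nb_sessions / ((double)cycles / rte_get_tsc_hz()));

        for (i = 0; i < nb_sessions; i++)
            destroy_session(i);
    }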
Signed-off-by: Aakash Sasidharan <asasidharan@marvell.com>
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
This patch corrects the product name for idpf PMD.
Fixes: 549343c25d ("net/idpf: support device initialization")
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Support device init and add the following dev ops:
- dev_configure
- dev_close
- dev_infos_get
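As a rough illustration of how such dev ops are wired up (not the
actual idpf code; the handler names and bodies below are placeholders):

    #include <ethdev_driver.h>

    /* Placeholder handlers; the real idpf implementations differ. */
    static int
    example_dev_configure(struct rte_eth_dev *dev) { (void)dev; return 0; }
    static int
    example_dev_close(struct rte_eth_dev *dev) { (void)dev; return 0; }
    static int
    example_dev_info_get(struct rte_eth_dev *dev,
                         struct rte_eth_dev_info *info)
    { (void)dev; (void)info; return 0; }

    static const struct eth_dev_ops example_eth_dev_ops = {
        .dev_configure = example_dev_configure,
        .dev_close     = example_dev_close,
        .dev_infos_get = example_dev_info_get,
    };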
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
This commit adds the ECDH key exchange algorithm to the Intel
QuickAssist Technology driver.
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Introduced stubs for the device driver for the ACC200
integrated VRAN accelerator on SPR-EEC.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Add support for asymmetric crypto validation, starting with RSA.
The OpenSSL library is used only for generating the crypto values,
which require multiprecision math.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Acked-by: Brian Dooley <brian.dooley@intel.com>
The intel-ipsec-mb library supports SGL for GCM and ChaChaPoly
algorithms using the JOB API.
This support was added to the AESNI_MB PMD previously, but the SGL
feature flags could not be added because other algorithms had no SGL
support.
This patch adds a workaround SGL approach for other algorithms
using the JOB API. The segmented input buffers are copied into a
linear buffer, which is passed as a single job to intel-ipsec-mb.
The job is processed, and on return, the linear buffer is split into the
original destination segments.
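A simplified sketch of the gather step (the helper name is illustrative
and the destination buffer is assumed to be large enough; the actual
PMD-internal code differs):

    #include <rte_mbuf.h>

    /* Copy a segmented mbuf chain into a linear buffer so it can be
     * passed to intel-ipsec-mb as a single job; returns 0 on success. */
    static int
    linearize_mbuf(const struct rte_mbuf *m, uint8_t *buf, size_t buf_len)
    {
        uint32_t pkt_len = rte_pktmbuf_pkt_len(m);

        if (pkt_len > buf_len)
            return -1;
        /* rte_pktmbuf_read() walks the segments and gathers the bytes. */
        if (rte_pktmbuf_read(m, 0, pkt_len, buf) == NULL)
            return -1;
        return 0;
    }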
Existing AESNI_MB testcases are passing with these feature flags added.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Added base support for lookaside event mode.
Events coming from ethdev will be enqueued
to the event crypto adapter, processed, and
enqueued back to ethdev for transmission.
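A rough sketch of the resulting application-side worker loop (an
assumption about the usage flow, not the PMD code; event/queue setup
and crypto op preparation are omitted):

    #include <rte_eventdev.h>
    #include <rte_event_crypto_adapter.h>

    static void
    lookaside_worker(uint8_t evdev_id, uint8_t ev_port)
    {
        struct rte_event ev;

        for (;;) {
            if (rte_event_dequeue_burst(evdev_id, ev_port, &ev, 1, 0) == 0)
                continue;

            if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
                /* Packet from ethdev: build a crypto op for it (omitted),
                 * point ev.event_ptr at the op and hand the event to the
                 * event crypto adapter for lookaside processing. */
                rte_event_crypto_adapter_enqueue(evdev_id, ev_port, &ev, 1);
            } else if (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV) {
                /* Processed crypto op came back: forward the packet
                 * towards ethdev for transmission (e.g. via Tx adapter). */
            }
        }
    }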
Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This library has no maintainer and, for now, nobody expressed interest
in taking over.
Mark this experimental library as deprecated and announce plan for
removal in v23.11.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
This patch adds the creation of control flow rules required to receive
default traffic (based on port configuration) with HWS.
Control flow rules are created on port start and destroyed on port stop.
Handling of destroying these rules was already implemented before that
patch.
Control flow rules are created if and only if flow isolation mode is
disabled and the creation process goes as follows:
- Port configuration is collected into a set of flags. Each flag
  corresponds to a certain Ethernet pattern type, defined by the
  mlx5_flow_ctrl_rx_eth_pattern_type enumeration. There is a separate
  flag for VLAN filtering.
- For each possible Ethernet pattern type:
  - For each possible RSS action configuration:
    - If the configuration flags do not match this combination, it is
      omitted.
    - A template table is created using this combination of pattern
      and actions template (templates are fetched from the hw_ctrl_rx
      struct stored in the port's private data).
    - Flow rules are created in this table.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This commit adds connection tracking support to HW steering, as
SW steering provided before.
The difference from the SW steering implementation is that it takes
advantage of the HW steering bulk action allocation support; in HW
steering, only one single CT pool is needed.
An indexed pool is introduced to record the actions allocated from the
bulk, the CT action state, etc. Once a CT action is allocated from the
bulk, an indexed object is also allocated from the indexed pool, and
similarly on deallocation. This allows mlx5_aso_ct_action to be managed
by that indexed pool, with no need to reserve it from mlx5_aso_ct_pool.
The single CT pool is also saved to the mlx5_aso_ct_action struct
directly.
The ASO operation functions are shared with the SW steering
implementation.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This commit adds HW steering counter action support.
The pool mechanism is the basic data structure for the HW steering
counter.
The HW steering counter pool is based on the zero-copy variant of
rte_ring.
There are two global rte_rings:
1. free_list:
   Stores the counter indexes which are ready for use.
2. wait_reset_list:
   Stores the counter indexes which were just freed by the user and
   need a hardware counter query to get the reset value before the
   counter can be reused.
The counter pool also supports a cache per HW steering queue, which is
also based on the zero-copy variant of rte_ring.
The cache size, preload, threshold and fetch size are all configurable
and exposed via device args.
The main operations of the counter pool are as follows:
- Get one counter from the pool:
  1. The user calls the _get_* API.
  2. If the cache is enabled, dequeue one counter index from the local
     cache:
     A: If the dequeued one from the local cache is still in reset
        status (the counter's query_gen_when_free is equal to the
        pool's query gen):
        I.   Flush all counters in the local cache back to the global
             wait_reset_list.
        II.  Fetch _fetch_sz_ counters into the cache from the global
             free list.
        III. Fetch one counter from the cache.
  3. If the cache is empty, fetch _fetch_sz_ counters from the global
     free list into the cache and fetch one counter from the cache.
- Free one counter into the pool:
  1. The user calls the _put_* API.
  2. Put the counter into the local cache.
  3. If the local cache is full:
     A: Write back all counters above _threshold_ into the global
        wait_reset_list.
     B: Also write back this counter into the global wait_reset_list.
When the local cache is disabled, _get_/_put_ go directly from/into the
global lists.
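A simplified sketch of the cache-refill path of the get operation
(names and layout are illustrative, not the actual mlx5 code; the
reset-status check and the zero-copy ring calls are omitted, and
fetch_sz is assumed to be at most the cache capacity):

    #include <stdint.h>
    #include <rte_ring.h>

    struct cnt_cache {
        uint32_t idx[256];      /* cached counter indexes */
        unsigned int len;
        unsigned int fetch_sz;  /* from the fetch-size devarg */
    };

    struct cnt_pool {
        struct rte_ring *free_list;       /* indexes ready for use */
        struct rte_ring *wait_reset_list; /* freed, waiting for HW query */
    };

    /* Get one counter index from the pool; returns 0 on success. */
    static int
    cnt_get(struct cnt_pool *pool, struct cnt_cache *cache, uint32_t *out)
    {
        if (cache->len == 0) {
            /* Cache empty: refill it from the global free list. */
            void *objs[256];
            unsigned int i, n;

            n = rte_ring_dequeue_burst(pool->free_list, objs,
                                       cache->fetch_sz, NULL);
            if (n == 0)
                return -1;
            for (i = 0; i < n; i++)
                cache->idx[i] = (uint32_t)(uintptr_t)objs[i];
            cache->len = n;
        }
        *out = cache->idx[--cache->len];
        return 0;
    }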
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This commit adds the meter action for HW steering.
The HW steering meter is based on ASO. The number of meters that will
be used by flows should be specified in advance in the flow
configure API.
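For example, reserving meters when configuring the flow engine could
look like this (the counts and queue sizing below are illustrative):

    #include <rte_flow.h>

    static int
    configure_flow_engine(uint16_t port_id)
    {
        struct rte_flow_port_attr port_attr = {
            .nb_meters = 1024,  /* meters that flows may use later */
        };
        struct rte_flow_queue_attr queue_attr = { .size = 64 };
        const struct rte_flow_queue_attr *qattr[] = { &queue_attr };
        struct rte_flow_error error;

        return rte_flow_configure(port_id, &port_attr, 1, qattr, &error);
    }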
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The new mode 4 of the devarg "dv_xmeta_en" is added for HWS only. In
this mode, copying the 32-bit wide Rx/Tx metadata between FDB and NIC
is supported.
The mark is only supported in NIC and no copy is supported.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch introduces support for modify_field rte_flow actions in HWS
mode that includes:
- Ingress and egress domains,
- SET and ADD operations,
- usage of arbitrary bit offsets and widths for packet and metadata
fields.
This is implemented in two phases:
1. On flow table creation the hardware commands are generated, based
on rte_flow action templates, and stored alongside action template.
2. On flow rule creation/queueing the hardware commands are updated with
values provided by the user. Any masks over immediate values, provided
in action templates, are applied to these values before enqueueing rules
for creation.
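As an illustration of the action format (the field choices are
arbitrary, not taken from this patch), a SET operation copying the
32-bit IPv4 source address into the packet metadata could be expressed
as:

    #include <rte_flow.h>

    static const struct rte_flow_action_modify_field copy_src_to_meta = {
        .operation = RTE_FLOW_MODIFY_SET,
        .dst = { .field = RTE_FLOW_FIELD_META },
        .src = { .field = RTE_FLOW_FIELD_IPV4_SRC },
        .width = 32,
    };

    static const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
          .conf = &copy_src_to_meta },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };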
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch promotes the per-queue stats API to stable.
The API has been used by the Vhost PMD since v22.07, and
David Marchand posted a patch to make use of it in next
OVS release[0].
[0]: http://patchwork.ozlabs.org/project/openvswitch/patch/20221007111613.1695524-4-david.marchand@redhat.com/
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
The vhost-user library takes all VQs' access locks when processing
vring-based messages, such as SET_VRING_KICK and SET_VRING_CALL,
and the data processing thread may already be started; e.g. SPDK
vhost-blk and vhost-scsi start the data processing thread
when one vring is ready, so a deadlock may happen when SPDK is
posting interrupts to the VM. Here, we add a new API which allows the
caller to try again later in this case.
Bugzilla ID: 1015
Fixes: c573699830 ("vhost: fix missing virtqueue lock protection")
Cc: stable@dpdk.org
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Add a new API, rte_vhost_async_dma_unconfigure(), to unconfigure DMA
vChannels in the vhost async data path. Lock protection is also added
to protect DMA vChannel configuration and unconfiguration
from concurrent calls.
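A minimal usage sketch, assuming the new call mirrors the
(dma_id, vchan_id) parameters of rte_vhost_async_dma_configure():

    #include <stdio.h>
    #include <rte_vhost_async.h>

    static void
    teardown_async_dma(int16_t dma_id, uint16_t vchan_id)
    {
        /* Release the DMA vChannel once no virtqueue is using it. */
        if (rte_vhost_async_dma_unconfigure(dma_id, vchan_id) != 0)
            printf("failed to unconfigure DMA vchannel\n");
    }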
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
The driver is disabled by default because its dependencies are not
upstreamed yet; the code is available for development and investigation.
When all dependencies are upstreamed, the driver can be enabled again.
Fixes: 517ed6e2d5 ("net/mana: add basic driver with build environment")
Signed-off-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Long Li <longli@microsoft.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Support device init and add the following dev ops skeleton:
- dev_configure
- dev_start
- dev_stop
- dev_close
Note that build system (including doc) is also added in this patch.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Add offload support for the very basic items: Ethernet and
port ID.
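A minimal rte_flow pattern using just these items might look as
follows (the port ID value is a placeholder):

    #include <rte_flow.h>

    static const struct rte_flow_item_port_id port_spec = { .id = 0 };

    static const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_PORT_ID, .spec = &port_spec },
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };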
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
The code is very similar, but the simple case can skip a few branches
in the hot path. This improves PPS when 10KB mbufs are used.
S/G is enabled on the Rx side by offload DEV_RX_OFFLOAD_SCATTER.
S/G is enabled on the Tx side by offload DEV_TX_OFFLOAD_MULTI_SEGS.
S/G is automatically enabled on the Rx side if the provided mbufs are
too small to hold the maximum possible frame.
To enable S/G in testpmd, add these args:
--rx-offloads=0x2000 --tx-offloads=0x8000
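In application code, the same offloads correspond to the current
RTE_ETH_RX_OFFLOAD_SCATTER / RTE_ETH_TX_OFFLOAD_MULTI_SEGS flags; a
minimal configuration sketch (error handling omitted):

    #include <rte_ethdev.h>

    static int
    enable_sg(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_conf conf = {0};

        conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
        conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }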
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Signed-off-by: R Mohamed Shah <mohamedshah.r@amd.com>
When 'ionic_cmb' is set to '1', queue memory will be allocated from
the device's onboard memory (Controller Memory Buffer). In some
configurations, this will dramatically reduce packet latency and
increase PPS.
Add the WC_ACTIVATE flag to the PCI driver flags.
Write combining must be enabled to achieve the maximum PPS.
When the queue is in the CMB, descriptors cannot be prefetched.
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Signed-off-by: Neel Patel <neel.patel@amd.com>
Linearize RX mbuf chains in the expanded info array.
Clean one and fill one per CQE (completions are not coalesced).
Touch the mbufs as little as possible in the fill stage.
When touching the mbuf in the clean stage, use the rearm_data unions.
Ring the doorbell once at the end of the bulk clean/fill.
Signed-off-by: Neel Patel <neel.patel@amd.com>
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Some PMDs (e.g. hns3) can detect hardware or firmware errors. One
error recovery mode is to report the RTE_ETH_EVENT_INTR_RESET event and
wait for the application to invoke rte_eth_dev_reset() to recover the
port; however, this mode has the following weaknesses:
1) Due to different hardware and software designs, some NIC port
recovery processes require multiple handshakes with the firmware and
the PF (when the port is a VF). It takes a long time to complete the
entire operation for one port, and if multiple ports (for example,
multiple VFs of a PF) are reset at the same time, other VFs may fail
to be reset (because the reset processing is serial, the previous VFs
must be processed before the subsequent VFs).
2) The impact on the application layer is great: it should stop the
working queues, stop calling the Rx and Tx functions, then call
rte_eth_dev_reset(), and set everything up again.
This patch introduces a proactive error handling mode in which the PMD
tries to recover from the errors itself. In this process, the PMD sets
the data path pointers to dummy functions (which prevents crashes)
and also makes sure the control path operations fail with return code
-EBUSY.
Because the PMD recovers automatically, the application can only sense
that the data flow is disconnected for a while and that the control API
returns an error in this period.
In order to sense the error happening/recovering, three events are
introduced:
1) RTE_ETH_EVENT_ERR_RECOVERING: notifies the application that the PMD
detected an error and the recovery is being started. Upon receiving the
event, the application should not invoke any control path APIs until
receiving the RTE_ETH_EVENT_RECOVERY_SUCCESS or
RTE_ETH_EVENT_RECOVERY_FAILED event.
2) RTE_ETH_EVENT_RECOVERY_SUCCESS: notifies the application that the
PMD recovered successfully from the error; the PMD has already
re-configured the port, and the effect is the same as a restart
operation.
3) RTE_ETH_EVENT_RECOVERY_FAILED: notifies the application that the PMD
failed to recover from the error and the port is no longer usable. The
application should close the port.
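A sketch of how an application might react to these events through the
existing rte_eth_dev_callback_register() mechanism (the handler body is
illustrative):

    #include <rte_ethdev.h>

    static int
    recovery_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                      void *cb_arg, void *ret_param)
    {
        (void)cb_arg;
        (void)ret_param;

        switch (event) {
        case RTE_ETH_EVENT_ERR_RECOVERING:
            /* Stop issuing control path calls for this port. */
            break;
        case RTE_ETH_EVENT_RECOVERY_SUCCESS:
            /* Port is usable again with its previous configuration. */
            break;
        case RTE_ETH_EVENT_RECOVERY_FAILED:
            /* Recovery failed: the port should be closed. */
            rte_eth_dev_close(port_id);
            break;
        default:
            break;
        }
        return 0;
    }

    /* Registered per event at init time, e.g.:
     * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_ERR_RECOVERING,
     *                               recovery_event_cb, NULL);
     */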
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Increase xstats ID width from 32 to 64 bits. This also
fixes the xstats ID datatype discrepancy between reset and the
rest of the xstats family.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Reviewed-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Fixed release notes for changes made in the eventdev library.
Also updated the eventdev guide, which had the type of the
rte_event_vector struct's u64s union field wrong.
Fixes: 5fa63911e4 ("eventdev: replace padding type in event vector")
Fixes: 0fbb55efa5 ("eventdev: add element offset to event vector")
Fixes: d986276f9b ("eventdev: add prefix to public symbol")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
dpdk-pmdinfo.py does not produce any parseable output. The -r/--raw flag
merely prints multiple independent JSON lines which cannot be fed
directly to any JSON parser. Moreover, the script complexity is rather
high for such a simple task: extracting PMD_INFO_STRING from .rodata ELF
sections. Rewrite it so that it can produce valid JSON.
Remove the PCI database parsing for PCI-ID to Vendor-Device names
conversion. This should be done by external scripts (if really needed).
The script passes flake8, black, isort and pylint checks.
I have tested this with a matrix of python/pyelftools versions:
                               pyelftools
            0.22  0.23  0.24  0.25  0.26  0.27  0.28  0.29
       3.6   ok    ok    ok    ok    ok    ok    ok    ok
       3.7   ok    ok    ok    ok    ok    ok    ok    ok
Python 3.8   ok    ok    ok    ok    ok    ok    ok    ok
       3.9   ok    ok    ok    ok    ok   *ok    ok    ok
       3.10 fail  fail  fail  fail   ok    ok    ok    ok

* Also tested on FreeBSD
All failures with python 3.10 are related to the same issue:
File "elftools/construct/lib/container.py", line 5, in <module>
from collections import MutableMapping
ImportError: cannot import name 'MutableMapping' from 'collections'
Python 3.10 support is only available since pyelftools 0.26. The script
will only work with Python 3.6 and later.
Update the minimal system requirements, docs and release notes.
Signed-off-by: Robin Jarry <rjarry@redhat.com>
Tested-by: Ferruh Yigit <ferruh.yigit@amd.com>
Tested-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
When KNI is being used at runtime, output a warning message about its
deprecated status. This is part of the deprecation process for KNI
agreed by the DPDK technical board.[1]
[1] https://mails.dpdk.org/archives/dev/2022-June/243596.html
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Add an API to allow uncore frequency adjustment.
Uncore is a term used by Intel to describe functions
of a microprocessor that are closely connected
to the core and necessary to achieve high performance.
This is done by manipulating the related uncore frequency control
sysfs entries to adjust the minimum and maximum uncore frequency
values, and it works on Linux for Intel hardware.
Signed-off-by: Tadhg Kearney <tadhg.kearney@intel.com>
Reviewed-by: David Hunt <david.hunt@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
Sketching algorithms provide high-fidelity approximate measurements and
appear as a promising alternative to traditional approaches such as
packet sampling.
NitroSketch [1] is a software sketching framework that optimizes
performance, provides accuracy guarantees, and supports a variety of
sketches.
This commit adds a new data structure called sketch to the
membership library. This new data structure is an efficient
way to profile the traffic for heavy hitters. A min-heap
structure is also used to maintain the top-k flow keys.
[1] Zaoxing Liu, Ran Ben-Basat, Gil Einziger, Yaron Kassner, Vladimir
Braverman, Roy Friedman, Vyas Sekar, "NitroSketch: Robust and General
Sketch-based Monitoring in Software Switches", in ACM SIGCOMM 2019.
https://dl.acm.org/doi/pdf/10.1145/3341302.3342076
Signed-off-by: Alan Liu <zaoxingliu@gmail.com>
Signed-off-by: Yipeng Wang <yipeng1.wang@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Tested-by: Yu Jiang <yux.jiang@intel.com>
Add support for protocol-based buffer split in the normal Rx
data paths. When the Rx queue is configured with a specific protocol
type, received packets will be directly split into protocol header and
payload parts, and the two parts will be put into different mempools.
Currently, protocol-based buffer split is not supported in the
vectorized paths.
A new API, ice_buffer_split_supported_hdr_ptypes_get(), has been
introduced; it returns the header protocols supported by the ice PMD
to the app for splitting.
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Currently, Rx buffer split supports length based split. With Rx queue
offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and Rx packet segment
configured, PMD will be able to split the received packets into
multiple segments.
However, length-based buffer split is not suitable for NICs that split
based on protocol headers. Given an arbitrarily variable length in an
Rx packet segment, it is almost impossible to pass a fixed protocol
header to the driver. Besides, tunneling makes the composition of a
packet variable, which makes the situation even worse.
This patch extends current buffer split to support protocol header based
buffer split. A new proto_hdr field is introduced in the reserved field
of rte_eth_rxseg_split structure to specify protocol header. The proto_hdr
field defines the split position of the packet; splitting will always happen
after the protocol header defined in the Rx packet segment. When Rx queue
offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is enabled and corresponding
protocol header is configured, driver will split the ingress packets into
multiple segments.
Examples of proto_hdr field definitions:
To split after ETH-IPV4-UDP, it should be defined as
  proto_hdr = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
              RTE_PTYPE_L4_UDP
For inner ETH-IPV4-UDP, it should be defined as
  proto_hdr = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
              RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP
If a protocol header repeats one defined in a previous segment,
the repeated part should be omitted. For example, to split after ETH,
ETH-IPV4 and ETH-IPV4-UDP, it should be defined as
  proto_hdr0 = RTE_PTYPE_L2_ETHER
  proto_hdr1 = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN
  proto_hdr2 = RTE_PTYPE_L4_UDP
If protocol header split can be supported by a PMD, the
rte_eth_buffer_split_get_supported_hdr_ptypes function can
be used to obtain a list of these protocol headers.
For example, let's suppose we configured the Rx queue with the
following segments:
  seg0 - pool0, proto_hdr0=RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
         off0=2B
  seg1 - pool1, proto_hdr1=RTE_PTYPE_L4_UDP, off1=128B
  seg2 - pool2, proto_hdr2=0, off2=0B
A packet consisting of ETH_IPV4_UDP_PAYLOAD will be split as
follows:
  seg0 - ipv4 header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
  seg1 - udp header @ 128 in mbuf from pool1
  seg2 - payload @ 0 in mbuf from pool2
Now buffer split can be configured in two modes. The user can choose
length-based or protocol-header-based buffer split according to the
NIC's capability. For length-based buffer split, the mp, length and
offset fields in the Rx packet segment should be configured, while the
proto_hdr field must be 0. For protocol-header-based buffer split, the
mp, offset and proto_hdr fields in the Rx packet segment should be
configured, while the length field must be 0.
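A configuration sketch for the protocol-header mode (mempool creation
and error handling are omitted; the descriptor count and split point
are illustrative):

    #include <rte_ethdev.h>
    #include <rte_mbuf_ptype.h>

    static int
    setup_proto_split_queue(uint16_t port_id, uint16_t queue_id,
                            struct rte_mempool *hdr_pool,
                            struct rte_mempool *pay_pool)
    {
        union rte_eth_rxseg segs[2] = {
            { .split = {
                .mp = hdr_pool,
                .proto_hdr = RTE_PTYPE_L2_ETHER |
                             RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
                             RTE_PTYPE_L4_UDP,
            } },
            /* Everything after the UDP header lands in the payload pool. */
            { .split = { .mp = pay_pool } },
        };
        struct rte_eth_rxconf rxconf = {
            .offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT,
            .rx_seg = segs,
            .rx_nseg = 2,
        };

        return rte_eth_rx_queue_setup(port_id, queue_id, 512,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, NULL);
    }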
Note: when protocol header split is enabled, the NIC may receive
packets which do not match all the protocol headers within the Rx
segments. At this point, the NIC will have one of two possible split
behaviors according to the matching results: exact match or longest
match. The split result of the NIC must belong to one of them.
Exact match means the NIC only splits when the packets exactly match
all the protocol headers in the segments; otherwise, the whole packet
is put into the last valid mempool. Longest match means the NIC splits
until the packets mismatch a protocol header in the segments; the rest
is put into the last valid pool.
Pseudo-code for exact match:
  FOR each seg in segs except last one
    IF proto_hdr is not matched THEN
      BREAK
    END IF
  END FOR
  IF loop was broken THEN
    put whole pkt in last seg
  ELSE
    put protocol header in each seg
    put everything else in last seg
  END IF
Pseudo-code for longest match:
  FOR each seg in segs except last one
    IF proto_hdr is matched THEN
      put protocol header in seg
    ELSE
      BREAK
    END IF
  END FOR
  put everything else in last seg
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>