Add new Rx offload flag `DEV_RX_OFFLOAD_RSS_HASH` which can be used to
enable/disable PMD writes to `rte_mbuf::hash::rss`.
PMDs notify the validity of `rte_mbuf::hash::rss` to the application
by setting the `PKT_RX_RSS_HASH` flag in `rte_mbuf::ol_flags`.
Also update the testpmd rx_offload command to include RSS_HASH.
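A minimal sketch of how an application might request the offload and consume
the hash (port setup and error handling trimmed):

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* At configuration time: request per-packet RSS hash delivery only if
     * the PMD advertises the new offload. */
    static void
    request_rss_hash(uint16_t port_id, struct rte_eth_conf *port_conf)
    {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) == 0 &&
            (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH))
            port_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
    }

    /* On the Rx path: the hash is valid only when PKT_RX_RSS_HASH is set. */
    static void
    consume_rss_hash(const struct rte_mbuf *m)
    {
        if (m->ol_flags & PKT_RX_RSS_HASH)
            printf("rss hash: 0x%08x\n", m->hash.rss);
    }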
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add `rte_eth_dev_set_ptypes` function that allows the application
to inform the PMD about a reduced range of packet types to handle.
Based on the requested ptypes, PMDs can optimize their Rx path.
- If the application does not want any ptype information, it can call
`rte_eth_dev_set_ptypes(ethdev_id, RTE_PTYPE_UNKNOWN, NULL, 0)`
and the PMD may skip packet type processing and set rte_mbuf::packet_type to
RTE_PTYPE_UNKNOWN.
- If the application does not call `rte_eth_dev_set_ptypes`, the PMD can set
`rte_mbuf::packet_type` to any of the ptypes it advertises via
`rte_eth_dev_get_supported_ptypes`.
- If the application is interested only in the L2/L3 layers, it can inform
the PMD to update `rte_mbuf::packet_type` with L2/L3 ptypes by calling
`rte_eth_dev_set_ptypes(ethdev_id,
RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK, NULL, 0)`.
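A minimal sketch of the L2/L3-only case (port_id is a placeholder; as
understood here, the PMD fills set_ptypes with the ptypes it will actually
report, terminated by RTE_PTYPE_UNKNOWN):

    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf_ptype.h>

    static int
    restrict_ptypes_to_l2_l3(uint16_t port_id)
    {
        uint32_t set_ptypes[16];

        /* Ask the PMD to classify only the L2/L3 layers. */
        return rte_eth_dev_set_ptypes(port_id,
                RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK,
                set_ptypes, RTE_DIM(set_ptypes));
    }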
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
This adds new device IDs to the list of Mellanox devices
that run the mlx5 PMD:
- ConnectX-6DX device ID
- ConnectX-6DX SRIOV device ID
Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Add device IDs for E810_XXV.
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
This patch removes the deprecated Flow Director feature from hinic.ini,
hinic.rst and release_19_11.rst, because the feature was removed
from the feature list in
commit 030febb6642c ("doc: remove deprecated ethdev features"),
and adds the Flow API feature, used for generic filtering, to the doc files.
Signed-off-by: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
The dynamic mbuf fields were introduced by [1]. The egress metadata is a
good candidate to be moved from the statically allocated tx_metadata field
to a dynamic one. Because mbufs are used in a half-duplex fashion only, it
is safe to share this dynamic field with ingress metadata.
The shared dynamic field contains either egress (if the application is going
to transmit the mbuf with tx_burst) or ingress (if the mbuf is received with
rx_burst) metadata and can be accessed via the RTE_FLOW_DYNF_METADATA() macro
or with the rte_flow_dynf_metadata_set() and rte_flow_dynf_metadata_get()
helper routines. The PKT_TX_DYNF_METADATA/PKT_RX_DYNF_METADATA flag is set
along with the data.
The mbuf dynamic field must be registered by calling
rte_flow_dynf_metadata_register() prior to accessing the data.
The availability of dynamic mbuf metadata field can be checked with
rte_flow_dynf_metadata_avail() routine.
DEV_TX_OFFLOAD_MATCH_METADATA offload and configuration flag is removed.
The metadata support in PMDs is engaged on dynamic field registration.
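A minimal sketch of registering the field and attaching egress metadata to an
mbuf before tx_burst (flow rules and port setup omitted):

    #include <rte_mbuf.h>
    #include <rte_flow.h>

    static int
    send_with_metadata(struct rte_mbuf *m, uint32_t meta)
    {
        /* Register the shared metadata dynamic field once before use. */
        if (!rte_flow_dynf_metadata_avail() &&
            rte_flow_dynf_metadata_register() < 0)
            return -1;

        /* Store egress metadata and mark it valid for the Tx path. */
        rte_flow_dynf_metadata_set(m, meta);
        m->ol_flags |= PKT_TX_DYNF_METADATA;
        return 0;
    }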
The metadata feature is getting complex. There may be a set of actions
and items supported by PMDs in multiple combinations; the supported values
and masks are subject to query by performing trials
(with rte_flow_validate).
[1] http://patches.dpdk.org/patch/62040/
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
Currently, metadata can be set on egress path via mbuf tx_metadata field
with PKT_TX_METADATA flag and RTE_FLOW_ITEM_TYPE_META matches metadata.
This patch extends the metadata feature usability.
1) RTE_FLOW_ACTION_TYPE_SET_META
When supporting multiple tables, Tx metadata can also be set by a rule and
matched by another rule. This new action allows metadata to be set as a
result of flow match.
2) Metadata on ingress
There is also a need to support metadata on ingress. Metadata can be set by
the SET_META action and matched by the META item, like on Tx. The final value
set by the action is delivered to the application via the metadata dynamic
field of the mbuf, which can be accessed by the RTE_FLOW_DYNF_METADATA() macro
or with the rte_flow_dynf_metadata_set() and rte_flow_dynf_metadata_get()
helper routines. The PKT_RX_DYNF_METADATA flag is set along with the data.
The mbuf dynamic field must be registered by calling
rte_flow_dynf_metadata_register() prior to using the SET_META action.
The availability of the dynamic mbuf metadata field can be checked
with the rte_flow_dynf_metadata_avail() routine.
If the application is going to engage the metadata feature, it registers
the metadata dynamic fields; the PMD then checks the metadata field
availability and handles the appropriate fields in the datapath.
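A minimal sketch of the Rx side, assuming the field was registered as
described above:

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_flow.h>

    static void
    handle_rx_metadata(struct rte_mbuf *m)
    {
        /* Ingress metadata set by a SET_META action is only valid when
         * the PMD also sets PKT_RX_DYNF_METADATA. */
        if (m->ol_flags & PKT_RX_DYNF_METADATA)
            printf("meta: 0x%08x\n", rte_flow_dynf_metadata_get(m));
    }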
For loopback/hairpin packets, metadata set on Rx/Tx may or may not be
propagated to the other path depending on hardware capability.
MARK and METADATA look similar and might operate in a similar way,
but they do not interact.
Initially, two metadata-related actions were proposed:
- RTE_FLOW_ACTION_TYPE_FLAG
- RTE_FLOW_ACTION_TYPE_MARK
The FLAG action sets a special flag in the packet metadata, the MARK action
stores a specified value in the metadata storage, and, on packet reception,
the PMD puts the flag and value into the mbuf so that applications can see
the packet was treated inside the flow engine according to the appropriate
rte_flow rule(s). MARK and FLAG act as a kind of gateway to transfer
per-packet information from the flow engine to the application via the
receiving datapath. Also, the item of type RTE_FLOW_ITEM_TYPE_MARK is
provided. It allows us to extend the flow match pattern with the capability
to match the metadata values set by MARK/FLAG actions in other flows.
From the datapath point of view, MARK and FLAG are related to the
receiving side only. It would be useful to have the same gateway on the
transmitting side, so the item of type RTE_FLOW_ITEM_TYPE_META
was proposed. The application can fill the field in the mbuf and this value
is transferred to some field in the packet metadata inside the flow
engine. It did not matter whether these metadata fields were shared, because
the MARK and META items belonged to different domains (receiving and
transmitting) and could be vendor-specific.
So far, so good: DPDK proposes some entities to control metadata inside
the flow engine and gateways to exchange these values on a per-packet basis
via the datapaths.
As we can see, the MARK and META mechanisms are not symmetric: there is no
action which would allow us to set the META value on the transmitting path.
So, the action of type RTE_FLOW_ACTION_TYPE_SET_META was proposed.
Next, applications raised new requirements for packet metadata.
Flow engines are getting more complex, internal switches are introduced, and
multiple ports might be supported within the same flow engine namespace.
From the DPDK point of view, it means that packets might be sent on one
eth_dev port and received on another one, while the packet path inside
the flow engine entirely belongs to the same hardware device. The simplest
example is SR-IOV with a PF, VFs and the representors. This is a
brilliant opportunity to provide an out-of-band channel to transfer
some extra data from one port to another, besides the packet data
itself, and applications would like to use this opportunity.
The application is supposed to use trials (with rte_flow_validate)
to detect which metadata features (FLAG, MARK, META) are actually supported
by the PMD and underlying hardware. This might depend on PMD configuration,
system software, hardware settings, etc., and should be detected
at run time.
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Ori Kam <orika@mellanox.com>
Change the type of burst mode information from bit field to free string
data, so that each PMD can describe the Rx/Tx burst functions flexibly.
Fixes: eb5902504a13 ("ethdev: add API for getting burst mode information")
Fixes: 6b6609f68ccd ("net/i40e: support Rx/Tx burst mode info")
Fixes: e9a10e6c2102 ("net/ice: support Rx/Tx burst mode info")
Fixes: 7fe108edcf53 ("app/testpmd: show Rx/Tx burst mode description")
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Ray Kinsella <ray.kinsella@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
A tag is transient data which can be used during flow match. It can be
used to store a match result from a previous table so that the same pattern
need not be matched again in the next table. Even if the outer header is
decapsulated in the previous match, the match result can be kept.
Some devices expose internal registers of their flow processing pipeline and
those registers are quite useful for stateful connection tracking as they
keep the status of flow matching. Multiple tags are supported by specifying
an index.
Example testpmd commands are:
flow create 0 ingress pattern ... / end
actions set_tag index 2 value 0xaa00bb mask 0xffff00ff /
set_tag index 3 value 0x123456 mask 0xffffff /
vxlan_decap / jump group 1 / end
flow create 0 ingress pattern ... / end
actions set_tag index 2 value 0xcc00 mask 0xff00 /
set_tag index 3 value 0x123456 mask 0xffffff /
vxlan_decap / jump group 1 / end
flow create 0 ingress group 1
pattern tag index is 2 value spec 0xaa00bb value mask 0xffff00ff /
eth ... / end
actions ... jump group 2 / end
flow create 0 ingress group 1
pattern tag index is 2 value spec 0xcc00 value mask 0xff00 /
tag index is 3 value spec 0x123456 value mask 0xffffff /
eth ... / end
actions ... / end
flow create 0 ingress group 2
pattern tag index is 3 value spec 0x123456 value mask 0xffffff /
eth ... / end
actions ... / end
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
The function rte_eth_dev_count() was marked as deprecated in DPDK 18.05
in commit d9a42a69febf ("ethdev: deprecate port count function").
It was planned to be removed after the 19.11 LTS release,
but given we must not break ABI between 19.11 and 20.11,
it is removed now.
Note the ABI version is not bumped in this commit
because other changes already did so.
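A minimal note for applications still using the removed call; the 1:1
replacement introduced alongside the deprecation can be used instead:

    #include <rte_ethdev.h>

    static uint16_t
    count_ports(void)
    {
        /* rte_eth_dev_count() counted available ports; its 1:1
         * replacement is rte_eth_dev_count_avail(). */
        return rte_eth_dev_count_avail();
    }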
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
This commit adds the hairpin capabilities get function.
Signed-off-by: Ori Kam <orika@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
This commit introduces the hairpin queue type.
A hairpin queue is built from an Rx queue bound to a Tx queue.
It is used to offload traffic coming from the wire and redirect it back
to the wire.
There are 3 new functions:
- rte_eth_dev_hairpin_capability_get
- rte_eth_rx_hairpin_queue_setup
- rte_eth_tx_hairpin_queue_setup
In order to use the queue, there is a need to create an rte_flow rule
with a queue/RSS action that targets one or more of the Rx queues.
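A minimal sketch of binding one Rx hairpin queue to one Tx hairpin queue on
the same port (field names as understood from the new API; regular queue
setup and device start are omitted):

    #include <rte_ethdev.h>

    static int
    setup_hairpin_pair(uint16_t port_id, uint16_t rxq, uint16_t txq,
                       uint16_t nb_desc)
    {
        struct rte_eth_hairpin_cap cap;
        struct rte_eth_hairpin_conf conf = { .peer_count = 1 };
        int ret;

        ret = rte_eth_dev_hairpin_capability_get(port_id, &cap);
        if (ret != 0)
            return ret; /* hairpin not supported by this PMD */

        conf.peers[0].port = port_id;
        conf.peers[0].queue = txq;
        ret = rte_eth_rx_hairpin_queue_setup(port_id, rxq, nb_desc, &conf);
        if (ret != 0)
            return ret;

        conf.peers[0].queue = rxq;
        return rte_eth_tx_hairpin_queue_setup(port_id, txq, nb_desc, &conf);
    }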
Signed-off-by: Ori Kam <orika@mellanox.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
The rte_security lib has introduced replay_win_sz,
so it can be removed from the rte_ipsec lib.
The relevant tests and apps are also updated to reflect
the usage.
Note that the esn and anti-replay fields were earlier used
only for the ipsec library, and they enabled libipsec
by default. With this change, setting esn and anti-replay
will not automatically enable libipsec.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
At present the ipsec xform is missing the important step
of configuring the anti-replay window size.
The newly added field will also help to enable or disable
anti-replay checking, if available in the offload, by means
of a non-zero or zero value.
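A minimal sketch of requesting a 128-entry anti-replay window in the security
IPsec xform (other mandatory fields such as spi, mode and crypto parameters
are omitted):

    #include <rte_security.h>

    /* 0 disables anti-replay checking; a non-zero value enables it with
     * the given window size (if supported by the offload). */
    static const struct rte_security_ipsec_xform ipsec_xform = {
        .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
        .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
        .replay_win_sz = 128,
    };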
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Inline processing is limited to a specified subset of traffic. It is
often unable to handle more complicated situations, such as fragmented
traffic. When using inline processing, such traffic is dropped.
Introduce a fallback session for inline crypto processing, allowing
handling of packets that would normally be dropped. A fallback session is
configured by adding the 'fallback' keyword with the 'lookaside-none'
parameter to an SA configuration. Only the combination of
'inline-crypto-offload' as a primary session and 'lookaside-none' as a
fallback session is supported by this patch.
The fallback session feature is not available in the legacy mode.
Signed-off-by: Marcin Smoczynski <marcinx.smoczynski@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Tested-by: Bernard Iremonger <bernard.iremonger@intel.com>
This patch adds an option to enable link time optimization. In addition
to the LTO option itself (-flto), fat-lto-objects are used. This is
because during the build pmdinfogen scans the generated ELF objects to
find the this_pmd_name* symbol in the symbol table. Without fat-lto-objects,
gcc produces ELF objects with only extra symbols for internal use during
linking.
Signed-off-by: Andrzej Ostruszka <aostruszka@marvell.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The rte_vfio_dma_map/unmap APIs have been marked as deprecated in
release 19.05. Remove them.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The dpaa and dpaa2 configs have evolved to be the same. The same binary
can now work across the platforms, so there is no need to maintain
two different build configs.
The dpaa config works for both generations of dpaa platforms.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
When populating a mempool, ensure that objects are not located across
several pages, except if the user did not request IOVA-contiguous objects.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Add FIB (Forwarding Information Base) library. This library
implements dataplane structures and algorithms designed for
fast longest prefix match.
Internally it consists of two parts: RIB (control plane ops) and
an implementation of the dataplane tasks.
The initial version provides two implementations for both IPv4 and IPv6:
dummy (uses the RIB as a dataplane) and DIR24_8 (same as the current LPM).
The proposed design allows extending the FIB with new algorithms
in the future (for example DXR, poptrie, etc).
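A minimal sketch of IPv4 usage with the DIR24_8 dataplane (configuration
field names as understood from the new API; error handling trimmed):

    #include <rte_fib.h>

    static int
    fib_demo(void)
    {
        struct rte_fib_conf conf = {
            .type = RTE_FIB_DIR24_8,
            .default_nh = 0,
            .max_routes = 1 << 16,
            .dir24_8 = {
                .nh_sz = RTE_FIB_DIR24_8_4B,
                .num_tbl8 = 1 << 10,
            },
        };
        struct rte_fib *fib;
        uint32_t ip = 0xc0a80100; /* 192.168.1.0 */
        uint64_t nh = 0;

        fib = rte_fib_create("demo_fib", 0 /* socket */, &conf);
        if (fib == NULL)
            return -1;

        rte_fib_add(fib, ip, 24, 7 /* next-hop id */); /* install a /24 route */
        rte_fib_lookup_bulk(fib, &ip, &nh, 1);         /* nh should now be 7 */
        rte_fib_free(fib);
        return 0;
    }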
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Add RIB (Routing Information Base) library. This library
implements an IPv4 routing table optimized for control plane
operations. It implements a control plane struct containing routes
in a tree and provides fast add/del operations for routes.
It also allows performing fast subtree traversals
(i.e. retrieving existing subroutes for a given prefix).
This structure will be used as a control plane helper structure
for the FIB implementation. It might also be used standalone in other
places, such as bitmaps for example.
The internal implementation is a level-compressed binary trie.
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
The ether header does not need to be packed since that makes no sense for
structures with only bytes in them, but it should be aligned to a two-byte
boundary to simplify access to it from code. Other packed structures that
use this also need to be updated to take account of the change, either by
removing packing - where it is clearly unneeded - or by explicitly giving
those structures 2-byte alignment also.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
A new accessor has been introduced to provide the hidden information.
This symbol can now be kept internal.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
Now that all elements of the rte_config structure have (deinlined)
accessors, we can hide it.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Those functions have been deprecated since 17.11 and have 1:1
replacements.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Let's avoid exporting structures without an identified use case.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Remove rte_malloc_virt2phy as announced previously.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Remove rte_cpu_check_supported as announced previously.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
The internal structure of lcore_config does not need to be part of
visible API/ABI. Make it private to EAL.
Rearrange the structure so it takes less memory (and cache footprint).
Since we change the ABI, bump the library version.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
This example can be removed because DPDK now has a range
of libraries for load balancing, especially rte_eventdev, that did not
exist previously, making this example less relevant.
Also, modern NICs have a greater ability to do load balancing,
e.g. using RSS over a wider range of fields than earlier cards did.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Rather than providing a shim layer on top of netmap,
we should instead encourage users to create apps using
the DPDK APIs directly.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The original DPDK rings code had explicit support for a
single watermark per ring, but more recent releases of
DPDK have a more general mechanism where each enqueue
or dequeue call can return the remaining elements/free slots
in the ring.
Therefore, this example is not as relevant as before and can be removed.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The main l3fwd app should work with both PF and VF devices, so remove the
VF-only l3fwd example.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The example app shows the use of TUN/TAP with DPDK, but DPDK has a built-in
TAP PMD, so this example is obsolete and so can be removed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
This example is too old and SPDK will not maintain it
anymore. Also, SPDK has submitted a new vhost example, vhost-blk.
We will keep maintaining vhost-blk, which shows the packed
ring and live recovery support.
Signed-off-by: Jin Yu <jin.yu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Those two defines have been missed.
Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Many features require storing data inside the mbuf. As the room in the mbuf
structure is limited, it is not possible to have a field for each
feature. Also, changing fields in the mbuf structure can break the API
or ABI.
This commit addresses these issues, by enabling the dynamic registration
of fields or flags:
- a dynamic field is a named area in the rte_mbuf structure, with a
given size (>= 1 byte) and alignment constraint.
- a dynamic flag is a named bit in the rte_mbuf structure.
The typical use case is a PMD that registers space for an offload
feature, when the application requests to enable this feature. As
the space in mbuf is limited, the space should only be reserved if it
is going to be used (i.e. when the application explicitly asks for it).
The registration can be done at any moment, but it is not possible
to unregister fields or flags.
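A minimal sketch of registering and using a dynamic field (the field name and
payload are illustrative only):

    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    /* Illustrative field: a 64-bit per-packet value owned by a feature. */
    static const struct rte_mbuf_dynfield my_dynfield_desc = {
        .name = "example_dynfield_u64",
        .size = sizeof(uint64_t),
        .align = __alignof__(uint64_t),
    };
    static int my_dynfield_offset = -1;

    static int
    my_feature_enable(void)
    {
        /* Reserve mbuf room only when the feature is actually enabled. */
        my_dynfield_offset = rte_mbuf_dynfield_register(&my_dynfield_desc);
        return my_dynfield_offset < 0 ? -1 : 0;
    }

    static void
    my_feature_set(struct rte_mbuf *m, uint64_t value)
    {
        *RTE_MBUF_DYNFIELD(m, my_dynfield_offset, uint64_t *) = value;
    }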
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Remove redundant data structure fields from port level data
structures and update the release notes.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Signed-off-by: Lukasz Krakowiak <lukaszx.krakowiak@intel.com>
This patch enables the unaligned chunks feature for AF_XDP which allows
chunks to be placed at arbitrary places in the umem, as opposed to them
being required to be aligned to 2k. This allows for DPDK application
mempools to be mapped directly into the umem, which in turn enables zero-copy
transfer between the umem and the PMD.
This patch replaces the zero copy via external mbuf mechanism introduced
in commit e9ff8bb71943 ("net/af_xdp: enable zero copy by external mbuf").
The pmd_zero_copy vdev argument is also removed, as the PMD now
auto-detects the presence of the unaligned chunks feature, enabling it
if present and otherwise falling back to copy mode.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
This commit adds support for matching flows on Geneve headers.
Signed-off-by: Moti Haimovsky <motih@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Add ice_create_fdir_filter to create a rule. If a flow is matched by
the flow director filter, the filter rule will be programmed to HW. For now,
common patterns and queue/passthru/drop/mark actions are supported.
Signed-off-by: Yahui Cao <yahui.cao@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
The patch reworks the packet processing engine's binary classifier
(switch) for the new framework. It also adds support for new
packet types like PPPoE to the switch filter.
Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Added a devarg to control the mode in the generic flow API.
Non-pipeline mode is used by default.
Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
Some PMDs have more than one Rx/Tx burst path. Add ethdev APIs
that allow an application to retrieve the mode information about
the Rx/Tx packet burst, such as Scalar or Vector, and the Vector
technology used, e.g. AVX2.
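A minimal sketch of querying the Rx burst mode for queue 0, assuming the
free-string form of struct rte_eth_burst_mode described above:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    show_rx_burst_mode(uint16_t port_id)
    {
        struct rte_eth_burst_mode mode;

        /* On success, mode.info holds the PMD's textual description. */
        if (rte_eth_rx_burst_mode_get(port_id, 0, &mode) == 0)
            printf("port %u Rx burst mode: %s\n",
                   (unsigned int)port_id, mode.info);
    }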
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>