The generic modify field flow action introduced in [1] has
some issues related to the immediate source operand:
- the immediate source can be presented either as an unsigned
64-bit integer or as a pointer to a data pattern in memory.
There was no explicit pointer field defined in the union.
- the byte ordering for the 64-bit integer was not specified.
Many fields have shorter lengths and byte ordering
is crucial.
- how the bit offset is applied to the immediate source
field was neither defined nor documented.
- a 64-bit integer is not large enough to hold IPv6
addresses.
To address these issues and remove any ambiguity,
the following is done:
- introduce an explicit pointer field
in the rte_flow_action_modify_data structure
- replace the 64-bit unsigned integer with a 16-byte array
- update the modify field flow action documentation
The corresponding deprecation notice has been removed.
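A minimal sketch of how the reworked immediate source might look after
this change (names and layout follow the description above and are not
a verbatim copy of the header):

    struct rte_flow_action_modify_data {
        enum rte_flow_field_id field;  /* packet field, VALUE or POINTER */
        union {
            struct {
                uint32_t level;        /* encapsulation level */
                uint32_t offset;       /* bit offset within the field */
            };
            uint8_t value[16];         /* immediate value, wide enough for IPv6 */
            void *pvalue;              /* explicit pointer to a data pattern */
        };
    };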
[1] commit 73b68f4c54 ("ethdev: introduce generic modify flow action")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Not all DPDK ports in a given switching domain may have the
privilege to manage "transfer" flows. Add an API that lets any
port in the domain find a port with sufficient privileges.
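A hypothetical usage sketch, assuming the new helper is named
rte_flow_pick_transfer_proxy() and fills in the privileged ("proxy")
port to be used for managing "transfer" flows:

    uint16_t proxy_port;
    struct rte_flow_error error;

    /* Ask which port in the switch domain may manage transfer flows. */
    if (rte_flow_pick_transfer_proxy(port_id, &proxy_port, &error) == 0) {
        /* Create "transfer" flow rules through proxy_port. */
    }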
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Add support for item PORT_REPRESENTOR, which should
be used instead of the ambiguous item PORT_ID.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The action PORT_ID implementation assumes ingress only. Its semantics
suggest that support for the equivalent action PORT_REPRESENTOR be added.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
The semantics of the existing support for action PORT_ID suggest
that support for the equivalent action REPRESENTED_PORT be implemented.
Helper functions keep the port_id suffix since action
MLX5_FLOW_ACTION_PORT_ID is still used internally.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Add support for actions PORT_REPRESENTOR and REPRESENTED_PORT
based on the existing support for action PORT_ID.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Add support for actions PORT_REPRESENTOR and REPRESENTED_PORT
based on the existing support for action PORT_ID.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Add support for items PORT_REPRESENTOR and REPRESENTED_PORT
based on the existing support for item PORT_ID.
The use of item PORT_ID depends on the specified direction attribute.
Items PORT_REPRESENTOR and REPRESENTED_PORT, in turn, define traffic
direction themselves. The former matches traffic from the driver's
vNIC. The latter matches packets from either a v-port (network) or
a VF's vNIC (if the driver's port is a VF representor).
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Attributes "ingress" and "egress" can only apply unambiguosly
to non-"transfer" flows. In "transfer" flows, the standpoint
is effectively shifted to the embedded switch. There can be
many different endpoints connected to the switch, so the
use of "ingress" / "egress" does not shed light on which
endpoints precisely can be considered as traffic sources.
Add relevant deprecation notices and suggest the use of precise
traffic source items (PORT_REPRESENTOR and REPRESENTED_PORT).
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
PF, VF and PHY_PORT require that applications have extra
knowledge of the underlying NIC and thus are hard to use.
Also, the corresponding items depend on the direction
attribute (ingress / egress), which complicates their
use in applications and interpretation in PMDs.
The concept of PORT_ID is ambiguous as it doesn't say whether
the port in question is an ethdev or the represented entity.
Items and actions PORT_REPRESENTOR, REPRESENTED_PORT
should be used instead.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to send matching traffic to the
entity represented by the given ethdev, at embedded switch level.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.
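A usage sketch, assuming this is the action REPRESENTED_PORT mentioned
elsewhere in these notes and that it is configured with an ethdev port
ID via struct rte_flow_action_ethdev:

    /* Deliver matching packets to the entity represented by ethdev 1. */
    struct rte_flow_action_ethdev conf = { .port_id = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &conf },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };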
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to send matching traffic to
the given ethdev (to the application), at embedded switch level.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to match traffic entering the
embedded switch from the entity represented by the given ethdev.
Such an entity can be a network (via a network port), a guest
machine (via a VF) or another ethdev in the same application.
Must not be combined with direction attributes.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
For use in "transfer" flows. Supposed to match traffic
entering the embedded switch from the given ethdev.
Must not be combined with direction attributes.
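A matching sketch, assuming this is the item PORT_REPRESENTOR mentioned
elsewhere in these notes and that it carries the ethdev port ID via
struct rte_flow_item_ethdev:

    /* Match traffic entering the embedded switch from ethdev 0 itself. */
    struct rte_flow_item_ethdev spec = { .port_id = 0 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR, .spec = &spec },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };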
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into private header (ethdev_driver.h).
A few minor changes are made to keep DPDK building after that.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
Introduce rte_eth_macaddrs_get() to allow the user to retrieve all
Ethernet addresses assigned to a given port.
Change testpmd to use this new function and avoid referencing
rte_eth_devices[] directly.
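A usage sketch (the exact signature is an assumption based on the
description above):

    struct rte_ether_addr addrs[64];
    int n;

    /* Retrieve the MAC addresses assigned to the port without touching
     * rte_eth_devices[] directly. */
    n = rte_eth_macaddrs_get(port_id, addrs, RTE_DIM(addrs));
    if (n > 0)
        printf("port %u has %d MAC address(es)\n", port_id, n);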
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
Currently, the majority of fast-path ethdev ops take pointers to
internal queue data structures as input, while eth_rx_queue_count()
takes a pointer to rte_eth_dev and a queue index.
For future work to hide rte_eth_devices[] and friends, it is desirable
to unify the parameter lists of all fast-path ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI
change, as eth_rx_queue_count_t is used by the public ethdev inline
function rte_eth_rx_queue_count().
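A sketch of the prototype change (the exact types are assumptions based
on the description above):

    /*
     * Previously: uint32_t (*)(struct rte_eth_dev *dev, uint16_t rx_queue_id).
     * Now the callback receives the internal Rx queue pointer directly,
     * like the other fast-path ethdev ops.
     */
    typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);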
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
GROUP is an in-house term for so-called "tunnel_match" flows.
On parsing, they are detected by virtue of PMD-internal item
MARK. It associates a given flow with its tunnel context.
Such a flow is represented by a MAE action rule which is
chained with the corresponding JUMP rule's outer rule
by virtue of matching on its recirculation ID.
GROUP flows match more narrowly than JUMP flows do and
decapsulate matching packets (full offload).
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
JUMP is an in-house term for so-called "tunnel_set" flows. On parsing,
they are identified by virtue of actions MARK (PMD-internal) and JUMP.
The action MARK associates a given flow with its tunnel context.
Such a flow is represented by a MAE outer rule (OR) which has its
recirculation ID set. This ID is also associated with the tunnel
context. The OR is supposed to set this ID in 8 high bits of
Rx mark in matching packets. It also counts the packets.
Packets that hit the OR but miss in the action rule (AR) table
should go to the MAE admin PF (that is, to DPDK) by default.
Support for the use of action COUNT in JUMP
flows will be introduced by later patches.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Provide an API to let the application control the NIC's ability
to deliver specific kinds of per-packet metadata to the PMD.
Checks for the NIC's ability to set these kinds of metadata
in the first place (support for the flow actions) belong in
flow API responsibility domain (flow validate mechanism).
This topic is out of scope of the new API in question.
The PMD's ability to deliver received metadata to the user
by virtue of mbuf fields should be covered by mbuf library.
It is also out of scope of the new API in question.
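A usage sketch, assuming the new API is rte_eth_rx_metadata_negotiate()
with flag bits such as RTE_ETH_RX_METADATA_USER_FLAG and
RTE_ETH_RX_METADATA_USER_MARK:

    uint64_t features = RTE_ETH_RX_METADATA_USER_FLAG |
                        RTE_ETH_RX_METADATA_USER_MARK;

    /* Negotiate at init time; the PMD clears the bits it cannot deliver. */
    if (rte_eth_rx_metadata_negotiate(port_id, &features) == 0 &&
        (features & RTE_ETH_RX_METADATA_USER_MARK) != 0)
        printf("MARK delivery enabled on port %u\n", port_id);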
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Support the keep-CRC offload by checking
the relevant FW capability (scatter_fcs) for NIC support.
Supported offload:
DEV_RX_OFFLOAD_KEEP_CRC
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
Support the VLAN stripping offload by checking
the relevant FW capability (vlan_cap) for NIC support.
Supported offload:
DEV_RX_OFFLOAD_VLAN_STRIP
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
Support the TSO offload by checking
the relevant FW capability for NIC support.
Supported offloads:
DEV_TX_OFFLOAD_TCP_TSO
DEV_TX_OFFLOAD_VXLAN_TNL_TSO
DEV_TX_OFFLOAD_GRE_TNL_TSO
DEV_TX_OFFLOAD_GENEVE_TNL_TSO
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Tested-by: Idan Hackmon <idanhac@nvidia.com>
Indirect actions should be used to implement shared counters.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Let the driver provide the user with information about available
representors by implementing the representor_info_get operation.
Due to the lack of any structure in representor IDs, every ID range
describes exactly one representor.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Provide a minimal implementation for port representors that can only be
configured and provide device information.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Add an argument that allows the user to choose either switchdev or legacy
mode. Legacy mode enables switching by using Ethernet virtual bridging
(EVB) API. In switchdev mode, VF traffic goes via port representor
(if any) on PF, and software virtual switch (for example, Open vSwitch)
steers the traffic.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
rte_eth_rx_descriptor_status() should be used as a replacement.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Add a new command line to help diagnose bonding mode 4 in testpmd.
Show the LACP information about the bonded device and its slaves:
show bonding lacp info <bonded device port_id>
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Since the AF_XDP PMD does not work for secondary processes as reported
in Bugzilla 805, check for the process type at the beginning of probe
and return ENOTSUP if the process type is secondary.
It is planned that secondary processes will be fully supported by the
PMD in a future release by using rte_mp_msg to pass the state that the
secondary process requires in order to work.
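A minimal sketch of the probe-time guard described above:

    /* AF_XDP currently works only in the primary process (Bugzilla 805). */
    if (rte_eal_process_type() != RTE_PROC_PRIMARY)
        return -ENOTSUP;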
Bugzilla ID: 805
Fixes: f1debd77ef ("net/af_xdp: introduce AF_XDP PMD")
Cc: stable@dpdk.org
Reported-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
With data input, output and HARQ also supported in big-endian format,
this patch updates the testbbdev application to handle the endianness
conversion as directed by the driver being used.
The test vectors assume the data is in little-endian order, so if the
driver supports big-endian data processing, conversion from little
endian to big endian is handled by the testbbdev application.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
Add support for enqueuing and dequeuing LDPC encode/decode operations
to/from the modem device.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
This patch adds support for multiple modems by assigning
a modem ID as a devarg during vdev creation.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
This patch adds a devarg to take the maximum number of queues as input.
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Nicolas Chautru <nicolas.chautru@intel.com>
Added device information to capture explicitly the assumed byte
endianness of the input/output data being processed.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch adds the raw vector API framework for the dpaa_sec driver.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch adds a framework for raw API support.
The initial patch only tests the cipher-only part.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The structure rte_crypto_sym_vec is updated to
add dest_sgl to support out of place processing.
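A sketch of how the extended vector might be filled (field placement is
an assumption based on this description; the variables are placeholders):

    struct rte_crypto_sym_vec vec = {
        .num = nb_elts,
        .src_sgl = src_sgls,    /* input buffers */
        .dest_sgl = dst_sgls,   /* NULL for in-place, set for out-of-place */
        .iv = iv_ptrs,
        .digest = digest_ptrs,
        .status = status_array,
    };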
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The current crypto raw data vectors are extended to support
rte_security use cases, where the total data length is needed to know
how much additional memory space is available in the buffer beyond the
data length, so that the driver/HW can write expanded-size data after
encryption.
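A sketch of how the extra room might be conveyed per data vector,
assuming the new field is tot_len next to the existing len:

    struct rte_crypto_vec v;

    v.base = rte_pktmbuf_mtod(m, void *);
    v.iova = rte_pktmbuf_iova(m);
    v.len = rte_pktmbuf_data_len(m);          /* actual data */
    v.tot_len = rte_pktmbuf_data_len(m) +
                rte_pktmbuf_tailroom(m);      /* room HW may expand into */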
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
The new fields regarding TSO support were not implemented; following
feedback, it was decided to implement TSO support by using existing
mbuf fields.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The deprecation notice regarding extending rte_ipsec_sa_prm with a
new field hdr_l3_len is no longer applicable.
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add telemetry support for ipsec SAs.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add support for the IPsec NAT-Traversal use case for Tunnel mode
packets.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add support for specifying UDP port parameters for the UDP encapsulation
option. RFC 3948 section 2.1 does not enforce the use of specific UDP
ports for UDP-encapsulated ESP headers.
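A configuration sketch (struct and field names are assumptions based on
the description above):

    struct rte_security_ipsec_xform ipsec = {
        /* ... SPI, mode, direction ... */
        .options.udp_encap = 1,
        /* Non-default ports are allowed since RFC 3948 does not mandate 4500. */
        .udp.sport = 4500,
        .udp.dport = 4501,
    };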
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Added support for AES_CCM, CHACHA20_POLY1305 and AES_GMAC.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Update ipsec_xform definition to include ESN field.
This allows the application to control the ESN starting value.
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
As described in [1] and as announced in [2], the field ``dataunit_len``
of ``struct rte_crypto_cipher_xform`` is moved to the end of the
structure and extended to ``uint32_t``.
In this way, data-unit lengths bigger than 64K bytes can be supported.
[1] commit d014dddb2d ("cryptodev: support multiple cipher
data-units")
[2] commit 9a5c09211b ("doc: announce extension of crypto data-unit
length")
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Adds a max queue pairs limit devarg for the crypto cnxk driver. It can
be used to limit the maximum number of queue pairs supported by the
device. The default value is 63.
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Reviewed-by: Anoob Joseph <anoobj@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
MEMPOOL_PG_NUM_DEFAULT and MEMPOOL_PG_SHIFT_MAX are not used.
Fixes: fd943c764a ("mempool: deprecate xmem functions")
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Add RTE_ prefix to macro used to register mempool driver.
The old one is still available but deprecated.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Add RTE_ prefix to helper macro to calculate mempool header size and
make it internal. Old macro is still available, but deprecated.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
Fix the mempool flags namespace by adding an RTE_ prefix to the name.
The old flags remain usable, to be deprecated in the future.
Flag MEMPOOL_F_NON_IO added in the release is just renamed to have RTE_
prefix.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
This patch adds the option --list (-l) to dpdk-telemetry.py which will
print all of the available file-prefixes for DPDK processes that have
telemetry enabled.
The prefixes will also be printed if the user passes an incorrect prefix
in the --file-prefix (-f) option.
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
When the first port in a given protection domain (PD) starts,
install a mempool event callback for this PD and register all existing
memory regions (MR) for it. When the last port in a PD closes,
remove the callback and unregister all mempools for this PD.
This behavior can be switched off with a new devarg: mr_mempool_reg_en.
On TX slow path, i.e. when an MR key for the address of the buffer
to send is not in the local cache, first try to retrieve it from
the database of registered mempools. Direct and indirect mbufs are
supported, as well as externally attached ones from the MLX5 MPRQ
feature.
Lookup in the database of non-mempool memory is used as the last resort.
RX mempools are registered regardless of the devarg value.
On the RX data path only the local cache and the mempool database are used.
If implicit mempool registration is disabled, these mempools
are unregistered at port stop, releasing the MRs.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Mempool is a generic allocator that is not necessarily used
for device IO operations, nor is its memory necessarily used for DMA.
Add MEMPOOL_F_NON_IO flag to mark such mempools automatically
a) if their objects are not contiguous;
b) if IOVA is not available for any object.
Other components can inspect this flag
in order to optimize their memory management.
Discussion: https://mails.dpdk.org/archives/dev/2021-August/216654.html
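A sketch of how another component might consult the flag (using the
RTE_-prefixed name from the mempool flags rename in this release):

    /* Skip DMA-related setup for mempools never used for device IO. */
    if (mp->flags & RTE_MEMPOOL_F_NON_IO)
        return 0;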
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Fix spelling error which is causing reports of other patches failing.
Fixes: 69daa9e502 ("net/cnxk: support inline security setup for cn10k")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Removing the rawdev based octeontx2-ep driver as the dependent
common/octeontx2 will soon be going away. Moreover this driver is no
longer required as the net/octeontx_ep driver is sufficient.
Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
Removing the rawdev based octeontx2-dma driver as the dependent
common/octeontx2 will soon be going away. Also, a new DMA driver will
be coming in its place once the rte_dmadev library is in.
Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
This patch adds the data plane API for dmadev.
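A usage sketch of the data-plane flow, assuming the rte_dma_copy(),
rte_dma_submit() and rte_dma_completed() names from the dmadev library:

    uint16_t last_idx;
    bool error = false;

    /* Enqueue a copy on virtual channel 0 of device dev_id... */
    if (rte_dma_copy(dev_id, 0, src_iova, dst_iova, length, 0) < 0)
        return -1;
    /* ...kick the hardware... */
    rte_dma_submit(dev_id, 0);
    /* ...and later poll for up to one completion. */
    uint16_t done = rte_dma_completed(dev_id, 0, 1, &last_idx, &error);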
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
This patch adds the control plane API for dmadev.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
The 'dmadev' is a generic type of DMA device.
This patch introduces the 'dmadev' device allocation functions.
The infrastructure is prepared to welcome drivers in drivers/dma/
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
For processes run using "in-memory" mode sharing the same runtime dir,
we add support for connecting to the separate instance sockets created
using ":1", ":2" etc. via new "-i" or "--instance" argument. Add details
on connecting to separate instances to the telemetry howto document.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Tested-by: Conor Walsh <conor.walsh@intel.com>
Removed offload flag PKT_RX_EIP_CKSUM_BAD. PKT_RX_OUTER_IP_CKSUM_BAD
should be used as a replacement.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
The guides should be referenced locally with RST syntax
:doc: (beginning of page) or :ref: (specific chapter).
The links to doc.dpdk.org/guides/ are removed.
The links to the doc.dpdk.org/api/ are acceptable,
but should not point to a specific version, so one is fixed.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: David Marchand <david.marchand@redhat.com>
get_hugepage_dir() was implemented in such a way that a --huge-dir
option had to exactly match the mountpoint, but there's no reason for
this restriction: DPDK might not be the only user of hugepages, and
shouldn't assume it owns an entire mountpoint. For example, if I have
/dev/hugepages/myapp, and /dev/hugepages/dpdk, I should be able to
specify:
--huge-dir=/dev/hugepages/dpdk/
and have DPDK only use that sub-directory.
Fix the implementation to allow a sub-directory within a suitable
hugetlbfs mountpoint to be specified, preferring the closest match.
Signed-off-by: John Levon <john.levon@nutanix.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
This patch adds tests for inner IP and inner L4 checksum
verification in IPsec mode.
Signed-off-by: Archana Muniganti <marchana@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add inner packet IPv4 header and L4 checksum enable options
in the conf. These will be used in case of protocol offload.
Per SA, the application can specify whether the checksum
(compute/verify) should be offloaded to the security device.
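A configuration sketch (the option bit names are assumptions based on
the description above):

    struct rte_security_ipsec_xform ipsec_xform = {
        /* ... */
        .options.ip_csum_enable = 1,  /* offload inner IPv4 header checksum */
        .options.l4_csum_enable = 1,  /* offload inner L4 checksum */
    };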
Signed-off-by: Archana Muniganti <marchana@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The supported device list for test-crypto-perf app is
updated with following missing PMDs and sorted alphabetically.
- crypto_cn9k
- crypto_cn10k
- crypto_octeontx
- crypto_octeontx2
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The change to event crypto metadata proposed in the deprecation note is
not done. Instead, comments are updated in the spec to improve readability.
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
The CN98xx SoC comes with two CPT blocks, compared to
CN96xx and CN93xx, to achieve higher performance.
Add support to allocate all LFs of a VF with an even BDF from CPT0
and all LFs of a VF with an odd BDF from CPT1.
If LFs are not available in one block, they will be allocated
from the alternate block.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
A new paragraph is added detailing the typical vRAN use case
and its mapping to bbdev API usage.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
This implements in PMD the option to drop the CB CRC
after 4G decoding to help transport block concatenation.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
This is to support operation
where CRC16 is to be appended or checked.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
Adding a missing operation when CRC16
is being used for TB CRC check.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Tom Rix <trix@redhat.com>
Added support for 256 bit key length for ZUC in
crypto_cn10k PMD.
Added support for digest length of 8 and 16 bytes
for ZUC with 256 bit key length.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add an option to indicate whether UDP encapsulation ports
verification needs to be done as part of inbound
IPsec processing.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Windows GSG included a section only on virt2phys driver installation,
but not on NetUIO. The content of the section duplicated documentation
in dpdk-kmods, but contained no links to it, only a reference.
Add subsections for virt2phys and NetUIO, explaining their roles.
Refer to the documentation in dpdk-kmods as an authoritative source,
but leave specific diagnostic and usage hints in the GSG.
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
A more fine-grained flow API action, RTE_FLOW_ACTION_TYPE_SAMPLE,
should be used instead.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
This patch adds support to configure the channel mask which will
be used by rte_flow when adding flow rules with the inline IPsec
action.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for inline inbound and outbound IPSec for SA create,
destroy and other NIX / CPT LF configurations.
This patch also changes dpdk-devbind.py to list new inline
device as misc device.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The af_packet pmd driver binds to a raw socket and allows sending and
receiving of packets through the kernel.
Since commit [1], the kernel strips the vlan tags early in
__netif_receive_skb_core(), so we receive untagged packets while running
with the af_packet pmd.
Luckily for us, the skb vlan-related fields are still populated from the
stripped vlan tags, so we end up having all the information that we need
in the mbuf.
Having the pmd driver support DEV_RX_OFFLOAD_VLAN_STRIP allows the
application to control the desired vlan stripping behavior, until we
have a way to describe offloads that can't be disabled by pmd drivers.
This patch will cause a change in the default way that the af_packet pmd
treats received vlan-tagged frames. While previously, the application
was required to check the PKT_RX_VLAN_STRIPPED flag, after this patch,
the pmd will re-insert the vlan tag transparently to the user, unless
the DEV_RX_OFFLOAD_VLAN_STRIP is enabled in rxmode.offloads.
I've attempted a preliminary benchmark to understand if the change could
cause a sizable performance hit.
Setup:
Two virtual machines running on top of an ESXi hypervisor
Tx: DPDK app (running on top of vmxnet3 PMD)
Rx: af_packet (running on top of a kernel vmxnet3 interface)
Packet size: 68 (packet contains a vlan tag)
Rates:
Tx - 1.419 Mpps
Rx (without vlan insertion) - 1227636 pps
Rx (with vlan insertion) - 1220081 pps
At a first glance, we don't seem to have a large degradation in terms of
packet rate.
[1]
https://github.com/torvalds/linux/commit/bcc6d47903612c3861201cc3a866fb60
Signed-off-by: Tudor Cornea <tudor.cornea@gmail.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add support to fetch port and queue stats via xstats API. Also remove
queue stats from basic stats because they're now available via xstats
API for the VF.
Signed-off-by: Nikhil Vasoya <nikhil.vasoya@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
It was announced that `min` and `max` fields and variables in public
headers will be renamed to avoid clash with `min` and `max` macros
defined in Windows headers. However, it is unnecessary, because these
are function-like macros, which are not expanded unless the next token
is an opening parenthesis. It cannot happen for integer fields and variables.
Remove the deprecation notices.
Fixes: c4379ee599 ("doc: announce API changes for Windows compatibility")
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Definition of the `rte_ether_hdr` structure used a workaround allowing DPDK
and Windows SDK headers to be used in the same file, because Windows SDK
defines `s_addr` as a macro. Rename `s_addr` to `src_addr` and `d_addr`
to `dst_addr` to avoid the conflict and remove the workaround.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2021-July/215270.html
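The rename in practice (a before/after sketch; m is an existing mbuf):

    struct rte_ether_hdr *eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);

    /* Before: eth->s_addr / eth->d_addr (s_addr clashes with a Windows SDK macro). */
    rte_ether_addr_copy(&eth->src_addr, &eth->dst_addr);  /* after the rename */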
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
This patch enables building the ixgbe driver for Windows.
It also enables its dependencies on security and cryptodev.
I tested on AWS using an ixgbe VF device with dpdk-testpmd.
Signed-off-by: William Tu <u9012063@gmail.com>
Acked-by: Pallavi Kadam <pallavi.kadam@intel.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Add TX redirection support by flow actions RTE_FLOW_ACTION_TYPE_PHY_PORT
and RTE_FLOW_ACTION_TYPE_PORT_ID.
These actions are executed by HW to forward packets between ports.
If the ingress packets match the rule, the packets are switched
without software involvement, and performance is improved as well.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Add ice support for new ethdev APIs to enable/disable and read/write/adjust
IEEE1588 PTP timestamps. Currently, only the scalar path supports
1588 PTP; the vector path doesn't.
The example command for running ptpclient is as below:
./build/examples/dpdk-ptpclient -c 1 -n 3 -- -T 0 -p 0x1
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Use dynamic mbuf to register the timestamp field and flag.
The ice hardware can dump the Rx timestamp value into the dynamic
mbuf field via the flex descriptor. This feature is turned on by the
dev config "enable-rx-timestamp". Currently, it's only supported
on the scalar path.
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The default VF driver for the Intel 700 Series Ethernet Controller
already switched to iavf in DPDK 21.05, and i40evf no longer needs to
be maintained, so remove the i40evf-related code.
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Since i40evf will be removed, there is no need to keep the devargs
option "driver=i40evf" in iavf.
Signed-off-by: Robin Zhang <robinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The telemetry APIs have been present and unchanged for >1 year now,
so remove experimental tag from them.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
- Enable IAVF PMD build on Windows
- Replace x86intrin.h with rte_vect.h to avoid __m_prefetchw conflicting
types
- Fix for pointer and integer sign warnings using Clang compiler on
Windows
- Add extra cflags '-fno-asynchronous-unwind-tables'
to avoid MinGW build error:
Error: invalid register for .seh_savexmm
Signed-off-by: Pallavi Kadam <pallavi.kadam@intel.com>
Reviewed-by: Ranjit Menon <ranjit.menon@intel.com>
Acked-by: Shivanshu Shukla <shivanshu.shukla@intel.com>
A quite common scenario with kvargs is to look up a <key>=<value> in
a kvlist. For instance, check if name=foo is present in
name=toto,name=foo,name=bar. This is currently done in drivers/bus with
rte_kvargs_process() + the rte_kvargs_strcmp() handler.
This approach is not straightforward, and can be replaced by this new
function.
rte_kvargs_strcmp() is then removed.
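A usage sketch, assuming the new helper is rte_kvargs_get_with_value()
and returns NULL when no matching pair exists:

    struct rte_kvargs *kvlist = rte_kvargs_parse("name=toto,name=foo,name=bar", NULL);

    /* Check whether the pair name=foo is present in the list. */
    if (kvlist != NULL &&
        rte_kvargs_get_with_value(kvlist, "name", "foo") != NULL)
        printf("name=foo found\n");
    rte_kvargs_free(kvlist);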
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
This updates the gtp_psc flow item to use the net header
definition of gtp_psc, based on 3GPP TS 38.415 (version g30).
Signed-off-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
1. Added support to specify l4 port masks in the template. Also enabled
source mac in the wild card key for ingress flows.
2. Added support to enable offload for ipv6 traffic within the vxlan
tunnel connection.
3. The number of flow counters is reduced from 7168 to 6912 for Whitney.
The stats operation is updated to reflect counts for packets
at egress from CFA instead of ingress to CFA.
4. The miss path for the l2 context table is updated with correct
parif and default action handler to handle the miss path for
egress flows.
5. This support enables allocation of encapsulation, modification and
action records dynamically based on a given flow actions.
6. Reduce the l2context resource requests during open_session. Move the
SMAC from the L2Context to the EM/WM
7. Remap the parif in the bd action in order to eliminate incorrect
replication of broadcast packets. The layer 4 source port mask
was incorrectly updated in the outer layer 4 source port mask
instead of inner layer 4. Add the l3 proto to egress rules, switch
to using computed fields for l4 ports, add internal smac to f1/f2
flows, add l3 proto to ingress ipv6 flows
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The E810 supports four single-ended GPIO signals (SDP[20:23]). The 1PPS
signal outputs via SDP[20:23], which is measured by an oscilloscope.
This feature can be turned on by a devarg which can select the GPIO pin
index flexibly. Pin index 0 means SDP20, pin index 1 means SDP21 and so on.
The example for test command is as below:
./build/app/dpdk-testpmd -a af:00.0,pps_out='[pin:2]' -c f -n 4 -- -i
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch defines new RSS offload types for IPv4 and
L4(TCP/UDP/SCTP) checksum, which are required when users want
to distribute packets based on the IPv4 or L4 checksum field.
For example "flow create 0 ingress pattern eth / ipv4 / end
actions rss types ipv4-chksum end queues end / end", this flow
causes all matching packets to be distributed to queues on
basis of IPv4 checksum.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Reviewed-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Add an option to indicate whether outer header verification
needs to be done as part of inbound IPsec processing.
With inline IPsec processing, SA lookup would be happening
in the Rx path of rte_ethdev. When rte_flow is configured to
support more than one SA, SPI would be used to lookup SA.
In such cases, additional verification would be required to
ensure duplicate SPIs are not getting processed in the inline path.
For lookaside cases, the same option can be used by application
to offload tunnel verification to the PMD.
These verifications would help in averting possible DoS attacks.
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add SA lifetime configuration to register soft and hard expiry limits.
Expiry can be in units of number of packets or bytes. Crypto op
status is also updated to include a new field, aux_flags, which can be
used to indicate cases such as soft expiry in case of lookaside
protocol operations.
In case of soft expiry, the packets are successfully IPsec processed but
the soft expiry would indicate that SA needs to be reconfigured. For
inline protocol capable ethdev, this would result in an eth event while
for lookaside protocol capable cryptodev, this can be communicated via
`rte_crypto_op.aux_flags` field.
In case of hard expiry, the packets will not be IPsec processed and
will result in an error.
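A configuration sketch (structure and field names are assumptions based
on the description above):

    struct rte_security_ipsec_lifetime life = {
        .packets_soft_limit = 1000000,  /* report soft expiry here */
        .packets_hard_limit = 1100000,  /* stop processing here */
        .bytes_soft_limit = 0,          /* 0 = no byte-based limit */
        .bytes_hard_limit = 0,
    };
    ipsec_xform.life = life;            /* ipsec_xform is an existing SA config */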
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Enabled the user to provide the IV to be used per security operation.
This would be used with lookaside protocol offload for comparing
against known vectors.
By default, the PMD internally generates a random IV.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Added tests to verify UDP encapsulation with IPsec.
The tests generate IPsec packets from plain packets
and verify that the UDP header is added. Subsequently, the
packets are decapsulated and the resultant packet is
verified by comparing it against the original packet.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Added cases to verify the IV generated by the PMD for lookaside IPsec.
The tests compare the IVs generated for a batch of packets and ensure
that the IV is not repeated within the batch.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Add a negative test to validate IPsec inbound processing failure with
ICV corruption. The test first does IPsec encapsulation and corrupts
the ICV of the generated IPsec packet. Then the packet is submitted to
IPsec inbound processing for decapsulation. The test case validates
that the PMD returns an error in such cases.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Replace pending queue with one that allows concurrent single producer and
single consumer. This relaxes the restriction of only allowing a single
lcore to operate on a given queue pair.
Signed-off-by: David George <david.george@sophos.com>
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Added support for asymmetric crypto perf throughput test.
Only modex is supported for now.
One new optype has been added.
--optype modex
./dpdk-test-crypto-perf -c 0x3 -- --devtype crypto_cn9k --optype modex
--ptest throughput
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Currently the rte_security_set_pkt_metadata() and
rte_security_get_userdata() methods, used to set packet metadata on
inline outbound and to get userdata after inline inbound processing,
are always driver-specific callbacks.
For drivers that have little to do in these callbacks beyond updating
metadata in the rte_security dynamic field or reading userdata from it,
having to jump to a PMD-specific callback is a costly per-packet
operation. This patch provides a mechanism to do the same in an inline
function and avoid the function pointer jump if a driver supports it.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Not all net PMDs/HW can parse a packet and identify the L2 and L3
header locations on Tx. This is in line with other Tx offload
requirements such as L3 checksum and L4 checksum offload, where
mbuf.l2_len, mbuf.l3_len, etc. need to be set for HW to be able to
generate the checksum. Since inline IPsec is also such a Tx offload,
some PMDs at least need mbuf.l2_len to be valid to find the L3 header
and perform outbound IPsec processing.
Hence, this patch updates the documentation to enforce setting
mbuf.l2_len while setting PKT_TX_SEC_OFFLOAD in mbuf.ol_flags
for inline IPsec crypto / protocol offload processing to
work on Tx.
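A minimal Tx-side sketch of what the documentation now requires
(sec_ctx and sec_session are assumed to be already created):

    /* For inline IPsec on Tx, tell the PMD where L3 starts. */
    m->l2_len = RTE_ETHER_HDR_LEN;        /* plus VLAN tag size if present */
    m->ol_flags |= PKT_TX_SEC_OFFLOAD;
    rte_security_set_pkt_metadata(sec_ctx, sec_session, m, NULL);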
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add a note for the preference of using "()" rather than "\" for line
continuations in Meson.
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Some docs and comments in Meson files are still mentioning
the old build system based on "make", removed in 20.11.
After one year, such references are better to be removed.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
The release notes comments mention how to generate the documentation
with the old & removed build system.
Rather than fixing these comments, all old release notes comments
are removed, because they are useful only for the current release.
Few extra blank lines are removed for consistency.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Add ROC API to configure dual VLAN tag addition and removal.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Copy threshold has been introduced in async vhost data
path to select the appropriate copy engine to do copies
for higher efficiency.
However, it may cause packet ordering issues and also
introduces performance unpredictability.
Therefore, this patch removes copy threshold support in
async vhost data path.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
New APIs for these were added in 20.11 and the old ones were retained
but marked deprecated. Since 21.11 is the next LTS, it is time
to remove the deprecated ones.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Adding more information about the release milestones.
This includes the scope of change, expectations, etc.
Signed-off-by: Asaf Penso <asafp@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The documentation for the bond driver lists the name as `net/bond`
however the driver should be `net/bonding`.
Fixes: 89c67ae2cb ("doc: remove references to make from prog guide")
Cc: stable@dpdk.org
Signed-off-by: Ben Magistro <koncept1@gmail.com>
Acked-by: Min Hu (Connor) <humin29@huawei.com>
This patch adds multi-process support for testpmd.
For example the following commands run two testpmd
processes:
* the primary process:
./dpdk-testpmd --proc-type=auto -l 0-1 -- -i \
--rxq=4 --txq=4 --num-procs=2 --proc-id=0
* the secondary process:
./dpdk-testpmd --proc-type=auto -l 2-3 -- -i \
--rxq=4 --txq=4 --num-procs=2 --proc-id=1
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
Make the number of flows in flowgen configurable by setting the
parameter --flowgen-flows=N.
Signed-off-by: Zhihong Wang <wangzhihong.wzh@bytedance.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
This patch adds PDCP security short MAC-I support to the
dpaa_sec driver.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch adds support to handle the PDCP short MAC-I domain
along with the standard control and data domains, as it has to
be treated as a special case with PDCP protocol offload support.
Short MAC-I is the 16 least significant bits of the calculated MAC-I.
Usually, when an RRC message is exchanged between UE and eNodeB, it is
integrity and cipher protected.
MAC-I = f(key, varShortMAC-I, count, bearer, direction).
Here varShortMAC-I is prepared by using the current cellId, the PCI of
the source cell and the C-RNTI of the old cell. Other parameters like
count, bearer and direction are set to all 1s.
The crypto-perf app is updated to take short MAC as an input mode.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
This patch adds support for AES_CMAC integrity
in non-security mode.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add DES-CBC support and enable the available cipher-only
test cases.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The rte_cryptodev_pmd.* files are for drivers only and should be
private to DPDK, and not installed for app use.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The API rte_cryptodev_pmd_is_valid_dev can be used
by the application as well as the PMD to check whether
the device is valid or not. Hence, _pmd is removed
from the API name.
The applications and drivers which use this API are
also updated.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
When parsing devargs, try to parse using the global device syntax
first. Fall back to the legacy syntax on error.
Example of new global device syntax:
-a bus=pci,addr=82:00.0/class=eth/driver=mlx5,dv_flow_en=1
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Gaetan Rivet <grive@u256.net>
Start a new release cycle with empty release notes.
The ABI version becomes 22.0.
The map files are updated to the new ABI major number (22).
The ABI exceptions are dropped and CI ABI checks are disabled because
compatibility is not preserved.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>
The structures rte_cryptodev_sym_session and
rte_cryptodev_asym_session are not used by the
application directly. The application just needs
an opaque pointer which it can attach to the rte_crypto_op
at enqueue time.
Hence, these structures can be made internal to the library,
hidden from the user.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Transfer flow rules may be applied to traffic entering the switch from
many sources. There are flow API pattern items which allow specifying
ingress port match criteria explicitly, but it is not documented
whether the ethdev port used to create a flow rule adds any implicit
match criteria and how it coexists with explicit ones.
These aspects should be documented, and drivers and applications
which use them differently must be fixed.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
By its very name, action PORT_ID means that packets hit an ethdev with the
given DPDK port ID. At least the current comments don't state the opposite.
However, some drivers implement it in a different way and direct traffic
to the opposite end of the "wire" plugged into the given ethdev. For
example, in the case of a VF representor, traffic is redirected to the
corresponding VF itself rather than to the representor ethdev, and OvS
uses the PORT_ID action this way.
The documentation must be clarified and, likely, the
rte_flow_action_port_id structure should be extended to support both
meanings.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Currently there is a dedicated modify action for each
packet field that the application wants to change.
For example:
RTE_FLOW_ACTION_TYPE_SET_IPV4_DST modifies the IPv4 destination address.
The new action RTE_FLOW_ACTION_TYPE_MODIFY_FIELD adds the ability
to use a single action to modify any field, and also to take the new
value from another packet field rather than only from an immediate value.
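For illustration, a sketch of a field-to-field copy with the generic
action (the struct layout shown is an assumption based on the
description):

    /* Copy the source MAC address into the destination MAC address. */
    struct rte_flow_action_modify_field conf = {
        .operation = RTE_FLOW_MODIFY_SET,
        .dst = { .field = RTE_FLOW_FIELD_MAC_DST },
        .src = { .field = RTE_FLOW_FIELD_MAC_SRC },
        .width = 48,
    };
    struct rte_flow_action action = {
        .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
        .conf = &conf,
    };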
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
In the current implementation,
the action rte_flow_action_modify_field is not well defined
for fields larger than 64 bits (for example, the IPv6 source address).
In addition, the byte order is also not well defined.
Both of those issues should be fixed.
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Jerin Jacob <jerinj@marvell.com>
To support shared Rx queues, this patch announces a new offload flag
RTE_ETH_RX_OFFLOAD_SHARED_RXQ and a new shared_group field in struct
rte_eth_rxconf for DPDK v21.11.
[1] mail list discussion:
https://mails.dpdk.org/archives/dev/2021-July/215575.html
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Jerin Jacob <jerinj@marvell.com>
This patch announces the renaming of struct
vhost_device_ops to rte_vhost_device_ops in DPDK v21.11.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Adrian Moreno <amorenoz@redhat.com>
Acked-by: Marvin Liu <yong.liu@intel.com>
This patch announces the marking of all the vDPA driver APIs
as internal.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Marvin Liu <yong.liu@intel.com>
This patch announces the experimental tag removal of 10 vhost APIs,
which have been experimental for more than 2 years.
All APIs could be made stable in DPDK 21.11.
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Marvin Liu <yong.liu@intel.com>
The IPsec xform struct would be updated to include IPsec SA lifetime
configuration. The existing member 'esn_soft_limit' only tracks
ESN, and as sequence number control is being introduced,
'esn_soft_limit' may not indicate the number of packets processed.
Replace it with a new structure covering all lifetime cases, with
support for specifying both soft and hard lifetimes.
ESN control introduced by https://patches.dpdk.org/patch/95808/
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The structure rte_security_session is not directly used
by the application. The application just needs an opaque
pointer to attach to the mbuf or rte_crypto_op at enqueue
time. Hence, it can be hidden inside the library, which
also prevents unnecessary indirection to the private
session data in the fast path.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The current crypto raw data vectors need to be extended to support
out-of-place processing. It is proposed to add an additional dest_sgl
to provide details of the destination SGL.
The same is also extended to support rte_security use cases, where
the total data length is needed to know how much additional memory
space is available in the buffer beyond the data length, so that the
driver/HW can write the expanded data after encryption.
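A sketch of the idea, with hypothetical type and field names, showing
a separate destination SGL next to the source one:
/* Illustrative only: a separate destination SGL enables out-of-place
 * processing; tot_len leaves room for expanded output. */
#include <stdint.h>

struct raw_sgl_seg {
        void *base;
        uint32_t len;     /* valid data length in this segment */
        uint32_t tot_len; /* total usable space, >= len */
};

struct raw_data_vec {
        struct raw_sgl_seg *src_sgl;  /* input data */
        struct raw_sgl_seg *dest_sgl; /* NULL means in-place processing */
        uint16_t num_src;
        uint16_t num_dst;
};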
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
In the crypto adapter metadata, the first 8 bytes of the request info
are a placeholder for the response info. For better clarity, this
reserved field should be removed from the request info. New space for
the response info can be made by changing the type of the event crypto
metadata from a union to a structure.
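In outline, the change amounts to the following; the type and field
names here are illustrative only:
/* Illustrative only: moving from a union (request overlays response,
 * so the request needs a reserved placeholder) to a struct (both parts
 * coexist and no reserved field is needed). */
#include <stdint.h>

struct request_info { uint64_t driver_meta; /* ... */ };
struct response_info { uint64_t event_meta; /* ... */ };

/* Before: union, request_info keeps its first 8 bytes reserved. */
union event_crypto_metadata_old {
        struct response_info response;
        struct request_info request;
};

/* After: struct, each part has its own storage. */
struct event_crypto_metadata_new {
        struct response_info response;
        struct request_info request;
};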
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Lcore state FINISHED is used by the worker thread to indicate that
it has completed the assigned task. The state is changed to
WAIT by another thread after it observes the updated state. This
additional step is redundant. After this deprecation, the worker
thread will update the state to WAIT.
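A simplified sketch of the intended worker-side transition; the state
names follow the EAL, but the loop itself is illustrative:
/* Illustrative only: the worker returns the lcore state directly to
 * WAIT instead of leaving it in FINISHED for another thread to reset. */
enum lcore_state_sketch { WAIT, RUNNING };

static void
worker_loop_sketch(volatile enum lcore_state_sketch *state,
                   int (*task)(void *), void *arg)
{
        for (;;) {
                /* ... wait until the main lcore hands over a task ... */
                *state = RUNNING;
                (void)task(arg);
                *state = WAIT; /* previously FINISHED, reset elsewhere */
        }
}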
Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Feifei Wang <feifei.wang2@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
A bug with segmented packets has been discovered, but the agreement
to apply the fix was not concluded at the time of the DPDK 21.08 release.
This bug seems to have been in DPDK for many years and should be fixed
in 21.11.
Suggested-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Make the driver layer internal and remove the unnecessary rte_ prefix
from structures and functions that are not part of the public API.
Promote the experimental trace and vector APIs to stable.
Add a reserved field to the `rte_event_timer` structure.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
One reserved byte in the rte_crypto_op struct would be used to indicate
warnings and other information from the crypto/security operation. This
field will be used to communicate events such as soft expiry with IPsec
in lookaside mode.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The APIs which are internal to the PMDs and the cryptodev library
can be marked as internal so that ABI checking does not complain
about changes in interfaces which are internal to DPDK.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Announce changes to add two unions.
The first union will provide both integral and bit-field access to the
version and IHL fields.
The second union will provide both integral and bit-field access to the
fragment flags and offset.
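A sketch of the first union; the exact field names and the handling of
bit-field order across endiannesses are illustrative only:
/* Illustrative only: one union provides both whole-byte and bit-field
 * access to version/IHL. Bit-field layout shown for little-endian. */
#include <stdint.h>

union ipv4_version_ihl {
        uint8_t version_ihl; /* integral access */
        struct {
                uint8_t ihl:4;     /* header length in 32-bit words */
                uint8_t version:4; /* always 4 for IPv4 */
        } bits;
};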
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Move struct rte_intr_handle to an internal structure to
avoid any ABI breakage in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
For example:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of a maximum of 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows a maximum of 2048 MSI-X interrupts.
If some PCI device requires more than 512 vectors, either the
RTE_MAX_RXTX_INTR_VEC_ID limit must be changed or the arrays must be
allocated dynamically based on the PCI device MSI-X size at probe time.
Either way, it is an ABI breakage.
Discussion thread:
https://mails.dpdk.org/archives/dev/2021-March/202959.html
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
The mbuf offload flags do not match the DPDK namespace (they are
not prefixed by RTE_). Announce their rename in 21.11, and the
removal of the old names in 22.11.
A draft coccinelle script is provided to anticipate what the
renaming will be.
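For instance, a check on mbuf->ol_flags would change roughly as below;
the new name assumes the announced RTE_ prefixing, and the old name
stays available as an alias until 22.11:
/* Sketch of the rename's effect on application code. */
#include <rte_mbuf.h>

static inline int
mbuf_has_vlan(const struct rte_mbuf *m)
{
        /* before the rename: return (m->ol_flags & PKT_RX_VLAN) != 0; */
        return (m->ol_flags & RTE_MBUF_F_RX_VLAN) != 0;
}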
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Clarifying the ABI policy on the promotion of experimental APIs to stable.
We have a fair number of APIs that have been experimental for more than
2 years. This policy amendment indicates that these APIs should be
promoted or removed, or should at least form a conversation between the
maintainer and original contributor.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Update the Minimal SW and HW version offload support
information for ASO metering and metering hierarchy.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Asaf Penso <asafp@nvidia.com>
PCI, vmbus, and auxiliary drivers printed a warning
when the NUMA node had been reported as (-1) or not reported by the OS:
EAL: Invalid NUMA socket, default to 0
This message and its level might confuse users because the configuration
is valid and nothing happens that requires attention or intervention.
It was also printed without the device identification and with an indent
(PCI only), which is confusing unless DEBUG logging is on to print
the header message with the device name.
Reduce level to INFO, reword the message, and suppress it when there is
only one NUMA node because NUMA awareness does not matter in this case.
Also, remove the indent for PCI.
Fixes: f0e0e86aa3 ("pci: move NUMA node check from scan to probe")
Fixes: 831dba47bd ("bus/vmbus: add Hyper-V virtual bus support")
Fixes: 1afce3086c ("bus/auxiliary: introduce auxiliary bus")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
In the recent update, the misc5 matcher was introduced to
match VXLAN header extra fields. However, ConnectX-5
doesn't support misc5 for UDP ports different from
VXLAN's standard one (4789).
Fall back to the previous approach and use the legacy
misc matcher if a non-standard UDP port is recognized
in the VXLAN flow.
Fixes: 630a587bfb ("net/mlx5: support matching on VXLAN reserved field")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Windows headers define `s_addr`, `min`, and `max` as macros.
If DPDK headers are included after Windows ones, DPDK structure
definitions containing fields with these names get broken (example 1),
as well as any usage of such fields (example 2). If DPDK headers
undefined these macros, it could break consumer code (example 3).
It is proposed to rename structure fields in DPDK, because Win32 headers
are used more widely than DPDK, as a general-purpose platform compared
to domain-specific kit, and are harder to fix because of that.
Exact new names are left for further discussion.
Example 1:
/* in DPDK public header included after windows.h */
struct rte_type {
        int min; /* ERROR: `min` is a macro */
};
Example 2:
#include <rte_ether.h>
#include <winsock2.h>
struct rte_ether_hdr eh;
eh.s_addr.addr_bytes[0] = 0; /* ERROR: `s_addr` is a macro */
Example 3:
#include <winsock2.h>
#include <rte_ether.h>
struct in_addr addr;
addr.s_addr = 0; /* ERROR: there is no `s_addr` field,
and `s_addr` macro is undefined by DPDK. */
Commit 6c068dbd9f ("net: work around s_addr macro on Windows")
modified definition of `struct rte_ether_hdr` to avoid the issue.
However, the workaround assumes `#define s_addr S_addr.S_un`
in Windows headers, which is not a part of official API.
It also complicates the definition of `struct rte_ether_hdr`.
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Khoa To <khot@microsoft.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The struct member dataunit_len was introduced in DPDK 21.05.
It is limited to 16 bits to fit a padding hole in the 32-bit build,
which means the maximum data-unit length is 64 KB.
Some use cases may benefit from a bigger size, such as the proposed
32 bits.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Announce adding 'RTE_ETH_' prefix to all public ethdev macros/enums on
v21.11.
Backward compatibility macros will be added on v21.11 and they will be
removed on v22.11.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
All ABIs in the PCI bus driver, which are defined in rte_bus_pci.h,
will be removed and the header will be made internal.
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Update the incorrect description about atomic operations
with provided wrappers in deprecation doc[1].
[1]https://mails.dpdk.org/archives/dev/2021-July/213333.html
Fixes: 7518c5c4ae ("doc: announce adoption of C11 atomic operations semantics")
Cc: stable@dpdk.org
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
APIs and data structures have been modified as per the deprecation
note, so remove the deprecation notice from the notes.
Fixes: 85f52aa422 ("sched: add pipe config params to subport struct")
Cc: stable@dpdk.org
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Documented the role of RTE_ARM_EAL_RDTSC_USE_PMU to enable
PMU based rte_rdtsc().
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Spell checked and corrected documentation.
If there are any errors, or I have changed something that wasn't an
error, please reach out to me so I can update the dictionary.
Cc: stable@dpdk.org
Signed-off-by: Henry Nadeau <hnadeau@iol.unh.edu>
Currently the sample app user guides use hard-coded code snippets;
this patch changes these to use literalinclude, which will dynamically
update the snippets as changes are made to the code.
This was introduced in commit 413c75c33c ("doc: show how to include
code in guides"). Comments within the sample apps were updated to
accommodate this as part of this patch. This will help to ensure that
the code within the sample app user guides is up to date and not out
of sync with the actual code.
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
Signed-off-by: Conor Walsh <conor.walsh@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Announce changes to make rte_security_set_pkt_metadata() and
rte_security_get_userdata() inline functions instead of regular C
functions, and also the addition of another field in struct
rte_security_ctx for holding flags.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The support for multiple data-units includes the following:
- Add a new command-line argument to provide the data-unit length.
- Set the length in the cipher xform.
- Validate device capabilities for this feature.
- Pad the AES-XTS operation length to be aligned to the defined data-unit.
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Running with stdout suppressed or redirected for further processing
is very confusing in the case of errors. Fix it by logging errors and
warnings to stderr.
Since lines with log messages are touched anyway, concatenate split
format strings to make them easier to search for with grep.
Fix the indentation of format string arguments.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
The first argument to rte_bsf32_safe was incorrectly declared as
a 64-bit value. The code only works on 32-bit values, and the underlying
function rte_bsf32 only accepts 32-bit values. This was a mistake
introduced when the safe version was added, probably caused
by copy/paste from the 64-bit version.
The bug passed silently under the radar until some other code was
built with -Wall and -Wextra in C++, and C++ complained about the
missing cast.
Yes, this is an API signature change, but the original code was wrong.
It is an inline function, so it is not an ABI change.
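A minimal sketch of the corrected usage, assuming the usual return
semantics of the safe variant (non-zero on success, zero when the
input is zero):
/* The value argument is now 32-bit, matching the underlying
 * rte_bsf32(), so there is no implicit 64-to-32-bit narrowing. */
#include <stdint.h>
#include <rte_common.h>

static int
first_set_bit(uint32_t v, uint32_t *pos)
{
        return rte_bsf32_safe(v, pos);
}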
Fixes: 4e261f5519 ("eal: add 64-bit bsf and 32-bit safe bsf functions")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Applications need to stop DMA transfers and finish all the in-flight
packets in the VM memory hot-plug case when async vhost is used. This
patch provides an unsafe API to clear the in-flight packets which
were submitted to the DMA engine in the vhost async data path. Update
the programmer's guide and release notes for the virtqueue in-flight
packet clear API in the vhost lib.
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
The prerequisite info is already present in the platform guide.
No need to repeat it in individual dev guides.
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Allow the user to specify their own hash key and hash ctrl if the
device supports that. The HW interprets the key in reverse byte order,
so the PMD reorders the key before passing it to the ena_com layer.
The default key is set in a random manner each time the device is
initialized.
Moreover, make minor adjustments to the RETA size setting in terms
of returned error values.
The RSS code was moved to the ena_rss.c file to improve readability.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
In order to support asynchronous Rx in the applications, the driver has
to configure the event file descriptors and configure the HW.
This patch configures appropriate data structures for the rte_ethdev
layer, adds .rx_queue_intr_enable and .rx_queue_intr_disable API
handlers, and configures IO queues to work in the interrupt mode, if it
was requested by the application.
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Artur Rojek <ar@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Shay Agroskin <shayagr@amazon.com>
Support for RFC 2698 and RFC 4115 is added in the mlx5 PMD. Only
ASO metering supports these two profiles.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
In the previous implementation, the policy for the yellow color was not
supported and the action validation for yellow was skipped.
Since the yellow color policy needs to be supported, validation
should also be done for the yellow color. Meanwhile, because the
color policies of one meter are used for the same flow(s), the
domains supported by both colors should be the same. If both colors
have RSS as the termination action, all RSS parameters except the
queues should be the same.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
RoCE disabling requirement is based on PCI address.
In order to support Sub-Function, a conversion is needed
in the case of an auxiliary device.
SF device can be probed with such devargs string:
auxiliary:mlx5_core.sf.<id>,class=vdpa
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Introduce SF support.
Similar to VF, SF on auxiliary bus is a portion of hardware PF,
no representor or bonding parameters for SF.
Devargs to support SF:
-a auxiliary:mlx5_core.sf.8,dv_flow_en=1
New global syntax to support SF:
-a bus=auxiliary,name=mlx5_core.sf.8/class=eth/driver=mlx5,dv_flow_en=1
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
This patch adds thread-unsafe versions of the async register and
unregister functions.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch reworks the async configuration structure to improve code
readability. In addition, reserved padding fields are added to the
structure for future use.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch allows checking the number of in-flight packets
for a vhost queue using async acceleration.
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch provides support for IPsec protocol
offload to the hardware.
The following security operations are added:
- session_create
- session_destroy
- capabilities_get
Signed-off-by: Michael Shamis <michaelsh@marvell.com>
Reviewed-by: Liron Himi <lironh@marvell.com>
Tested-by: Liron Himi <lironh@marvell.com>
In order to test the new mlx5 crypto PMD, the driver is added to the
crypto test application.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The crypto operations are done with the WQE set which contains
one UMR WQE and one RDMA write WQE. Most segments of the WQE
set are initialized properly during queue setup; only a limited
number of segments are initialized according to the crypto details
in the datapath process.
This commit adds the enqueue and dequeue operations and updates
the WQE set segments accordingly.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The mlx5 HW crypto operations are done by attaching crypto property
to a memory region. Once done, every access to the memory via the
crypto-enabled memory region will result in in-line encryption or
decryption of the data.
As a result, the design choice is to provide two types of WQEs. One
is the UMR WQE, which sets the crypto property, and the other is the
RDMA write WQE, which sends a DMA command to copy data from the local
MR to the remote MR.
The size of the WQEs will be defined by a new devarg called
max_segs_num.
This devarg also defines the maximum number of segments in an mbuf
chain that will be supported for crypto operations.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
A keytag is a piece of data encrypted together with a DEK.
When a DEK is referenced by an MKEY.bsf through its index, the keytag is
also supplied in the BSF as plaintext. The HW will decrypt the DEK (and
the attached keytag) and will fail the operation if the keytags don't
match.
This commit adds the configuration of the keytag with devargs.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
To work with crypto engines that are marked with wrapped_import_method,
a login session is required.
A crypto login object needs to be created using DevX.
The crypto login object contains:
- The credential pointer.
- The import_KEK pointer to be used for all secured information
communicated in crypto commands (key fields), including the
provided credential in this command.
- The credential secret, wrapped by the import_KEK indicated in
this command. The size includes an 8-byte IV for wrapping.
Added devargs for the required login values:
- wcs_file - path to the file containing the credential.
- import_kek_id - the import KEK pointer.
- credential_id - the credential pointer.
Create the login DevX object in pci_probe function and destroy it in
pci_remove.
Destroying the crypto login object means logout.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Mellanox user space drivers don't deal with physical addresses as part
of the memory protection mechanism.
The device translates the given virtual address to a physical address
using the given memory key as an address space identifier.
That's why any mbuf virtual address is moved directly to the HW
descriptor (WQE).
The mapping between the virtual address and the physical address is
saved in an MR configured by the kernel to the HW.
Each MR has a key that should also be moved to the WQE by the SW.
When the SW sees an unmapped address, it extends the address range and
creates an MR using a system call.
Add memory region cache management:
- a 2-level cache per queue pair, with no locks;
- 1 shared cache between all the queues, protected by a lock.
This way, the MR key search per data-path address is optimized.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Sessions are used in symmetric transformations in order to prepare
objects and data for the packet processing stage.
An mlx5 session includes the iv_offset, a pointer to the
mlx5_crypto_dek struct, the bsf_size, the bsf_p_type, the block size
index, the encryption_order and the encryption standard.
Implement the following session operations:
mlx5_crypto_sym_session_get_size - returns the size of the mlx5
session struct.
mlx5_crypto_sym_session_configure - prepares the DEK hash-list
and saves all the session data.
mlx5_crypto_sym_session_clear - destroys the DEK hash-list.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Add a new PMD for Mellanox devices: the crypto PMD.
The crypto PMD will be supported starting from NVIDIA ConnectX-6 and
BlueField-2.
The crypto PMD adds support for encryption and decryption using
the AES-XTS symmetric algorithm.
The crypto PMD requires rdma-core and uses mlx5 DevX.
This patch adds the PCI probing, basic functions, build files and
log utility.
Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
The isal compress PMD has build failures on the Arm platform.
As the dependent library ISA-L is supported on the Arm platform,
support for the PMD is expanded to the Arm architecture.
Fixed the build failure caused by architecture-specific code,
and made the PMD multi-architecture compatible.
Bugzilla ID: 755
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
For now, a rule may have only one dedicated counter; shared counters
are not supported.
HW delivers (or "streams") counter readings using special packets.
The driver creates a dedicated Rx queue to receive such packets
and requests that HW start "streaming" the readings to it.
The counter queue is polled periodically, and the first available
service core is used for that. Hence, the user has to specify at least
one service core for counters to work. Such a core is shared by all
MAE-capable devices managed by sfc driver.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Add event vector support for cnxk event Rx adapter, add control path
APIs to get vector limits and ability to configure event vectorization
on a given Rx queue.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Add support for event eth Rx adapter.
Resize cn10k workslot fastpath structure to fit in 64B cacheline size.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Support AVF RSS for the innermost header of GTPoGRE packets. It supports
RSS based on the innermost IP src + dst address and TCP/UDP src + dst
port.
Signed-off-by: Lingyu Liu <lingyu.liu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Add support for rte_flow_item_raw to parse custom L2 and L3
protocols.
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Query the MLX5 port hardware to determine whether it is capable of
offloading the IPv4 IHL field.
Provide flow rules with the capability to match on the IPv4 IHL field.
The minimal HCA firmware version required to offload IPv4 IHL is
xx_30_2000.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add a new testpmd pattern field 'last_rsvd' that supports matching
the last 8 bits of the VXLAN header.
The examples for the "last_rsvd" pattern field are as below:
1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
This flow will exactly match the last 8-bits to be 0x80.
2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...
This flow will only match the MSB of the last 8-bits to be 1.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
This adds matching on the reserved field of the VXLAN
header (the last 8 bits). The capability from rdma-core
is detected by creating a dummy matcher using misc5
when the device is probed.
For non-zero groups and the FDB domain, the capability is
detected from rdma-core, while for NIC domain group
zero it relies on the HCA_CAP from FW.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
The new flow item allows the PMD to offload the IPv4 IHL field for
matching, if the hardware supports that operation.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Add the bare minimum PMD library and doc build infrastructure,
and claim maintainership of the ngbe PMD.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Use the new multi-monitor intrinsic to allow monitoring multiple ethdev
Rx queues while entering the energy efficient power state. The multi
version will be used unconditionally if supported, and the UMWAIT one
will only be used when multi-monitor is not supported by the hardware.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Currently, there is a hard limitation on the PMD power management
support that only allows it to support a single queue per lcore. This is
not ideal as most DPDK use cases will poll multiple queues per core.
The PMD power management mechanism relies on ethdev Rx callbacks, so it
is very difficult to implement such support because callbacks are
effectively stateless and have no visibility into what the other ethdev
devices are doing. This places limitations on what we can do within the
framework of Rx callbacks, but the basics of this implementation are as
follows:
- Replace per-queue structures with per-lcore ones, so that any device
polled from the same lcore can share data
- Any queue that is going to be polled from a specific lcore has to be
added to the list of queues to poll, so that the callback is aware of
other queues being polled by the same lcore
- Both the empty poll counter and the actual power saving mechanism are
shared between all queues polled on a particular lcore, and are only
activated when all queues in the list have been polled and determined
to have no traffic.
- The limitation on UMWAIT-based polling is not removed because UMWAIT
is incapable of monitoring more than one address.
Also, while we're at it, update and improve the docs.
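With these semantics, every queue polled by an lcore is simply
registered one by one; a sketch, assuming the existing
rte_power_ethdev_pmgmt_queue_enable() keeps its signature:
/* Sketch: enable PMD power management for several Rx queues polled by
 * one lcore, so the per-lcore callback knows about all of them. */
#include <rte_power_pmd_mgmt.h>

static int
enable_pmgmt_for_lcore(unsigned int lcore_id, const uint16_t *ports,
                       const uint16_t *queues, unsigned int nb)
{
        unsigned int i;

        for (i = 0; i < nb; i++) {
                int ret = rte_power_ethdev_pmgmt_queue_enable(lcore_id,
                                ports[i], queues[i],
                                RTE_POWER_MGMT_TYPE_MONITOR);
                if (ret != 0)
                        return ret;
        }
        return 0;
}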
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Currently, we expect that only one callback can be active at any given
moment, for a particular queue configuration, which is relatively easy
to implement in a thread-safe way. However, we're about to add support
for multiple queues per lcore, which will greatly increase the
possibility of various race conditions.
We could have used something like RCU for this use case, but absent
a pressing need for thread safety we'll go the easy way and just
mandate that the APIs are to be called when all affected ports are
stopped, and document this limitation. This greatly simplifies the
`rte_power_monitor`-related code.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Previously, the semantics of power monitor were such that we were
checking current value against the expected value, and if they matched,
then the sleep was aborted. This is somewhat inflexible, because it only
allowed us to check for a specific value in a specific way.
This commit replaces the comparison with a user callback mechanism, so
that any PMD (or other code) using `rte_power_monitor()` can define
their own comparison semantics and decision making on how to detect the
need to abort the entering of power optimized state.
Existing implementations are adjusted to follow the new semantics.
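The shape of the change, in a deliberately simplified sketch with
hypothetical names (the real rte_power_monitor() structures differ in
detail):
/* Illustrative only: the monitor condition carries a user callback
 * that decides whether the woken-up value means traffic has arrived
 * and the sleep must be aborted. */
#include <stdint.h>

typedef int (*monitor_check_t)(uint64_t cur_value,
                               const uint64_t opaque[4]);

struct monitor_cond_sketch {
        volatile void *addr; /* address to monitor */
        uint64_t opaque[4];  /* driver-private data for the callback */
        monitor_check_t fn;  /* returns non-zero to abort sleeping */
        uint8_t size;        /* size of the data at addr, in bytes */
};

static int
descriptor_done(uint64_t cur_value, const uint64_t opaque[4])
{
        /* e.g. test a descriptor status word against a DD bit mask */
        return (cur_value & opaque[0]) != 0;
}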
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Tested-by: David Hunt <david.hunt@intel.com>
Acked-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
At this point, multiple different Ethernet drivers from multiple vendors
will support the PMD power management scheme. It would be useful to add
it to the NIC feature table to indicate support for it.
Suggested-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Add cross-compiling guidance for 32-bit aarch32 DPDK on aarch64 host.
Signed-off-by: Phil Yang <phil.yang@arm.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Aaron Conole <aconole@redhat.com>
Currently in DPDK only acpi_cpufreq and pstate_cpufreq drivers are
supported, which are both not available on arm64 platforms. Add
support for cppc_cpufreq driver which works on most arm64 platforms.
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Acked-by: David Hunt <david.hunt@intel.com>
The `doc` target used `echo` as its command.
On Windows, `echo` is always a shell built-in, there is no binary.
Starting from meson 0.58, `run_target()` always searches for command
executable and no longer accepts `echo` as such on Windows.
Replace plain `echo` with a Python one-liner.
Fixes: d02a2dab2d ("doc: support building HTML guides with meson")
Cc: stable@dpdk.org
Reported-by: Rob Scheepens <rob.scheepens@nutanix.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
The current meson option 'machine' should only specify the ISA, which is
not sufficient for Arm, where setting ISA implies other settings as well
(and is used in Arm configuration as such).
Use the existing 'platform' meson option to differentiate the type of
the build (native/generic) and set ISA accordingly, unless the user
chooses to override it with a new option, 'cpu_instruction_set'.
The 'machine' option used to set the ISA in x86 builds and the
native/default 'build type' in aarch64 builds. The two new variables,
'platform' and 'cpu_instruction_set', now properly set both the ISA and
the build type for all architectures in a uniform manner.
The 'machine' option also doesn't describe very well what it sets. The
new option, 'cpu_instruction_set', is much more descriptive. Keep
'machine' for backwards compatibility.
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
In order to allow/disallow configuring rules with identical
patterns, the new device argument 'allow_duplicate_pattern'
is introduced.
If allowed, such rules can be inserted successfully, and only the
first rule takes effect.
If disallowed, the first rule will be inserted and other rules will
be rejected.
The default is to allow.
Set it to 0 to disallow, for example:
-a <PCI_BDF>,allow_duplicate_pattern=0
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
This adds validation when creating a policy with a meter action.
Currently, the meter action is only allowed for the green color in a
policy, and at most 8 meters are supported in one meter hierarchy.
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, when creating a meter policy, a source port_id match item
is always added in the switch domain. So if one meter is used by
another port, it will not work correctly.
This issue is solved as follows:
1. If the policy fate action is port_id, add the source port_id match
item, and the meter cannot be shared by another port.
2. If the policy fate action isn't port_id, don't add the source
port_id match, and the meter can be shared by another port.
This fix enables one meter to be shared by different ports. The user
can create a meter flow using a port_id match item to make this meter
shared by another port.
Fixes: afb4aa4f12 ("net/mlx5: support meter policy operations")
Cc: stable@dpdk.org
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
ConnectX-4 and ConnectX-4 Lx NICs require all L2 headers of transmitted
packets to be inlined. By default only the first 18 bytes are inlined,
which is insufficient if additional encapsulation is used, like Q-in-Q.
Thus, the default settings caused such traffic to be dropped on Tx.
Document a recommendation to increase the inlined data size in such
cases.
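For example, with Q-in-Q the L2 headers take 22 bytes, so the inline
threshold could be raised through the existing txq_inline_min devarg;
the exact value shown here is an assumption and depends on the
encapsulation actually used:
-a <PCI_BDF>,txq_inline_min=22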
Fixes: 505f1fe426 ("net/mlx5: add Tx devargs")
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
The WINOF2 2.70 Windows kernel driver allows DevX rule creation
for the TCP and IPv6 item types.
Added these types to the supported items in mlx5_flow_os_item_supported
to allow them to be created in the PMD.
Added a description of the new rule support in the WINOF2 2.70 Windows
kernel driver to the mlx5 driver guide.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>