Change the default flow rule to use 8-byte encap.
The VFR conduit uses VLAN encap to send packets, so the encap record
is changed from 16B to 8B. That frees up 8B per encap record.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Verify that the resource manager has been allocated before using it,
to avoid potential segmentation faults.
Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
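For illustration, a minimal sketch of the intended guard; the structure
and function names are hypothetical, not the actual bnxt ULP code:

    #include <errno.h>
    #include <stddef.h>

    struct example_rm;                       /* hypothetical resource manager */
    struct example_ulp_ctx {
        struct example_rm *resource_mgr;     /* may be NULL if never allocated */
    };

    /* Sketch: verify the resource manager exists before dereferencing it. */
    static int
    example_rm_op(struct example_ulp_ctx *ctx)
    {
        if (ctx == NULL || ctx->resource_mgr == NULL)
            return -EINVAL; /* avoid a NULL dereference / segfault */
        /* ... safe to use ctx->resource_mgr here ... */
        return 0;
    }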
When an Exact Match entry fails insertion, the allocated index needs
to be pushed back onto the allocation stack. This patch takes care of
that.
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
The ingress rule that matched on both IPv4 and IPv6 is now split into
two rules so that both can coexist at the same time. The count action
is added only for ingress flows.
Fixes: fe82f3e027 ("net/bnxt: support exact match templates")
Cc: stable@dpdk.org
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Removed the mark ID log message since it is in the data path.
Also optimized the link status debug message.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Changed the action template to support the count action in addition
to flows that use the drop action.
Fixes: fe82f3e027 ("net/bnxt: support exact match templates")
Cc: stable@dpdk.org
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Port deinitialization now cleans up all the resources properly. If
all the ports are stopped, then the ULP context is freed.
Also fixed the ULP context to hold the correct tfp pointer after the
changes that added support for multiple control channels.
Fixes: 70e64b27af ("net/bnxt: support ULP session manager cleanup")
Cc: stable@dpdk.org
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
The fd may be a negative value when it is passed as an argument to
the close() function. Fix the check on the fd.
Fixes: b9c9416790 ("bus/dpaa: decouple FQ portal alloc and init")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
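The fix amounts to guarding the close() call; a minimal illustration
of the pattern (not the exact dpaa bus code):

    #include <unistd.h>

    /* Only close a valid descriptor; fd may be negative if the earlier
     * open/allocation failed.
     */
    if (fd >= 0) {
        close(fd);
        fd = -1;
    }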
The ptype mask for the flexible descriptor in the Rx function
ice_recv_pkts_vec has a reversed order, which leads to an incorrect
final ptype value. This patch fixes the mask so that the correct ptype
of Rx packets is parsed.
Fixes: c68a52b8b3 ("net/ice: support vector SSE in Rx")
Cc: stable@dpdk.org
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
This patch fixes the failure to recreate a flexible FDIR rule.
The root cause is that flex_mask_flag is not reset during flow destroy
and flow flush.
Fixes: 6ced3dd72f ("net/i40e: support flexible payload parsing for FDIR")
Cc: stable@dpdk.org
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
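A hedged sketch of the idea, with hypothetical structure and field
names rather than the exact i40e internals:

    #include <stdint.h>
    #include <string.h>

    #define EXAMPLE_MAX_PCTYPE 64            /* illustrative bound */

    struct example_fdir_info {
        uint8_t flex_mask_flag[EXAMPLE_MAX_PCTYPE]; /* hypothetical field */
    };

    /* Sketch: clear the flexible-payload mask flags on flow destroy/flush
     * so that the same flexible FDIR rule can be created again later.
     */
    static void
    fdir_reset_flex_mask(struct example_fdir_info *fdir)
    {
        memset(fdir->flex_mask_flag, 0, sizeof(fdir->flex_mask_flag));
    }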
Currently, all data paths already support flow mark, so remove the
"flow-mark-support" devargs. The FDIR matched ID is displayed in
verbose output when packets match the created rule.
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Support flow director mark ID parsing from flexible
Rx descriptor in SSE path.
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Support flow director mark ID parsing from flexible
Rx descriptor in AVX path.
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
The patch adds an fdir_enabled flag to indicate whether to parse the
flow director mark ID from the flexible Rx descriptor.
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
This patch supports RxDID #22 by the following changes:
- add structure and macro definition for RxDID #22.
- support RxDID #22 format in normal path.
- change RSS hash parsing from RxDID #22 in AVX/SSE data path.
Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
The PMD supports hairpin only if DevX is supported and DV flow is
enabled.
When the destination DevX TIR is not supported, the PMD tries to
create the TIR action and fails.
Avoid advertising hairpin support when the destination DevX TIR is not
supported.
Fixes: b6b3bf86bd ("net/mlx5: get hairpin capabilities")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
There are two creators for Rx objects: DevX and Verbs.
Some supported DR versions cannot create a DevX destination TIR flow
action; with these versions the TIR object must be created by Verbs,
which forces all the Rx objects to be created by Verbs.
The selection of the Rx objects creator wrongly did not take
destination TIR action support into account, which caused Rx flow
creation to fail.
Select the Verbs creator when destination TIR action creation is not
supported by the DR version.
Fixes: 6deb19e1b2 ("net/mlx5: separate Rx queue object creations")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Currently, the hns3 PF/VF PMD only enables offload capabilities for
all Rx/Tx queues; an offload capability applied to a single Rx/Tx
queue is not supported.
So this patch moves 'DEV_TX_OFFLOAD_MBUF_FAST_FREE' from
tx_queue_offload_capa to tx_offload_capa.
Fixes: 1f5ca0b460 ("net/hns3: support some device operations")
Fixes: a5475d61fa ("net/hns3: support VF")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
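Conceptually, the change moves the flag between the two capability
fields reported by dev_infos_get; a hedged sketch (the values shown are
illustrative, not the full hns3 capability set):

    #include <rte_ethdev.h>

    /* Sketch: report MBUF_FAST_FREE as a port-level Tx offload only. */
    static void
    example_dev_infos_get(struct rte_eth_dev_info *info)
    {
        info->tx_queue_offload_capa = 0;
        info->tx_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
    }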
The bitrate library in DPDK is actually in a "bitratestats" directory,
so that is used by meson for the macro and library name.
Therefore, references to RTE_LIBRTE_BITRATE in testpmd need to be
updated to RTE_LIBRTE_BITRATESTATS so the library is found. Rather
than supporting both defines, since make is being removed, we can just
replace all instances of the former define with the latter.
To ensure testpmd links correctly when this is done, we also need to
add bitratestats to the list of library dependencies.
Fixes: 5b9656b157 ("lib: build with meson")
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Wei Ling <weix.ling@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Report Rx and Tx descriptor related limitations in the nfp dev_info_get
callback function. This commit also adds NFP_ALIGN_RING_DESC to replace
a static integer value used during Rx/Tx queue setup to validate
descriptor alignment.
Cc: stable@dpdk.org
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
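For reference, reporting descriptor limits in dev_info_get typically
looks like the sketch below; the max/min/alignment constants here are
placeholders, not the actual nfp values:

    #include <rte_ethdev.h>

    #define NFP_ALIGN_RING_DESC 128   /* placeholder value for the sketch */

    /* Sketch: advertise Rx/Tx descriptor ring limits and alignment. */
    static void
    example_report_desc_lim(struct rte_eth_dev_info *dev_info)
    {
        dev_info->rx_desc_lim.nb_max = 32768;
        dev_info->rx_desc_lim.nb_min = 256;
        dev_info->rx_desc_lim.nb_align = NFP_ALIGN_RING_DESC;
        dev_info->tx_desc_lim = dev_info->rx_desc_lim;
    }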
Before this commit, memalign was used for aligned allocations;
however, memalign is deprecated.
Based on (1) - POSIX requires that memory from aligned allocations can
be freed using free(). Some systems provide no way to reclaim memory
allocated with memalign (because free() may only be passed a pointer
obtained from malloc, while memalign would call malloc and then align
the obtained value).
Another issue is that 64/32-bit architectures have a minimal alignment
size, so any requested alignment below the system minimum can be
satisfied by simply calling malloc.
The glibc implementation allows memory obtained from posix_memalign to
be reclaimed with free(). This commit replaces memalign with
posix_memalign, and also calls malloc when the requested alignment is
below the system minimum.
(1) https://linux.die.net/man/3/memalign
Fixes: d38e3d5266 ("common/mlx5: add memory management functions")
Cc: stable@dpdk.org
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@mellanox.com>
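A hedged sketch of the replacement logic, simplified from the actual
mlx5 helper (the minimum-alignment threshold below is an assumption
based on typical malloc guarantees):

    #include <stdlib.h>

    /* Sketch: aligned allocation that can always be released with free(). */
    static void *
    example_malloc_aligned(size_t align, size_t size)
    {
        void *ptr = NULL;

        /* Alignments at or below the system minimum are satisfied by malloc. */
        if (align <= sizeof(void *) * 2)
            return malloc(size);

        /* posix_memalign needs a power-of-two multiple of sizeof(void *);
         * memory it returns may be reclaimed with free().
         */
        if (posix_memalign(&ptr, align, size) != 0)
            return NULL;
        return ptr;
    }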
The following sequence was working fine on mlx5:
rte_eth_dev_configure(portid, ...);
for (queueid = 0; queueid < nb_txq; queueid++)
rte_eth_tx_queue_setup(portid, queueid, ...);
for (queueid = 0; queueid < nb_rxq; queueid++)
rte_eth_rx_queue_setup(portid, queueid, ...);
// use a custom reta configuration
rte_eth_dev_rss_reta_update(portid, reta_conf, reta_size);
rte_eth_dev_start(portid);
We were able to configure a custom RETA before starting the port.
The commit "net/mlx5: support RSS on hairpin" breaks this logic by
moving the code initializing the RSS RETA from rte_eth_dev_configure
into rte_eth_dev_start.
To fix the issue, skip_default_rss_reta is always set to 1 in the RSS
RETA update operation, to avoid reconfiguring the RSS RETA when the
device is started.
Fixes: 63bd16292c ("net/mlx5: support RSS on hairpin")
Cc: stable@dpdk.org
Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
Acked-by: Ori Kam <orika@nvidia.com>
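A hedged sketch of the fix in the PMD's RETA update callback; the
function and private-structure names approximate, but are not, the
actual mlx5 code:

    #include <rte_ethdev.h>

    struct example_priv {
        unsigned int skip_default_rss_reta:1; /* flag named in the fix */
    };

    /* Sketch: once the application updates the RETA itself, remember it
     * so the default RETA is not re-applied on device start.
     */
    static int
    example_rss_reta_update(struct rte_eth_dev *dev,
                            struct rte_eth_rss_reta_entry64 *reta_conf,
                            uint16_t reta_size)
    {
        struct example_priv *priv = dev->data->dev_private;

        (void)reta_conf;
        (void)reta_size;
        /* ... apply the application-provided RETA here ... */
        priv->skip_default_rss_reta = 1;
        return 0;
    }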
RSS on 64-bit IPv6 prefix fields is supported in this patch, so that
a prefix can be used instead of the full IPv6 address for RSS. The
prefix here includes only the first 64 bits of both the SRC and DST
IPv6 addresses.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Replace some function names with a macro to shorten the code:
VIRTCHNL_DEL_PROTO_HDR_FIELD, VIRTCHNL_ADD_PROTO_HDR_FIELD
--> REFINE_PROTO_FLD.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Tunnel and non-tunnel packets can share the same seg_tun info:
seg_tun[1] can be used for the inner fields of a tunnel flow rule or
for non-tunnel packets, while seg_tun[0] is used only for the tunnel
outer part.
Add outer_input_set to distinguish the inner and outer input sets, so
that different fields in the outer or inner part can be identified.
Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
The original set conf function in FDIR was very long. Refactor it to
improve readability and to allow for more convenient further changes.
No functional change here.
Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Outer IP hash can be configured as an input set for GTP-U packets
with no inner layer.
Signed-off-by: Alvin Zhang <alvinx.zhang@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Sometimes the user wants to have fixed encap/decap or header modify
values for all flows. This introduces the ability to choose between
fixed and dynamic values by setting a flag in config.h.
To use a different value for each flow:
config.h: #define FIXED_VALUES 0
To use a single value for all flows:
config.h: #define FIXED_VALUES 1
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
Start supporting matching on ICMPv4 and ICMPv6.
Usage:
--icmpv4: add an ICMP item to match on.
--icmpv6: add an ICMPv6 item to match on.
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
Sometimes you need to check flow performance for certain ports rather
than all ports, so a portmask option is needed.
Usage:
--portmask=N
where N represents the hexadecimal bitmask of the ports used.
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
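Consuming the mask inside the application follows the usual DPDK
pattern; a small sketch (option parsing omitted, function name is
illustrative):

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Sketch: visit only the ports enabled in the hexadecimal port mask. */
    static void
    for_each_selected_port(uint64_t portmask)
    {
        uint16_t port_id;

        RTE_ETH_FOREACH_DEV(port_id) {
            if ((portmask & (UINT64_C(1) << port_id)) == 0)
                continue; /* port not selected by --portmask */
            /* insert flows on this port ... */
        }
    }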
Instead of having a single ID value, use up to 256 values, so that
not all flows use the same mark action.
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
All values must be converted to the intended endianness.
Fixes: bf3688f1e8 ("app/flow-perf: add insertion rate calculation")
Cc: stable@dpdk.org
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
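For instance, rte_flow spec fields that are big-endian on the wire must
be filled with converted values; a minimal sketch (addresses chosen
arbitrarily):

    #include <rte_byteorder.h>
    #include <rte_flow.h>
    #include <rte_ip.h>

    /* Sketch: IPv4 addresses in the rte_flow item spec are big-endian,
     * so host-order constants must be converted before use.
     */
    static void
    fill_ipv4_spec(struct rte_flow_item_ipv4 *spec)
    {
        spec->hdr.src_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 0, 1));
        spec->hdr.dst_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 0, 2));
    }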
Introduce raw-encap and raw-decap actions.
The two actions are added as command line options, and the data to
encap or decap is parsed by the application from the command line.
All raw-encap data values are set to fixed values.
Usage example:
--raw-encap=ether,ipv4,udp,vxlan
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
Currently, each call to add_rss_action allocates extra memory for
rss_data, which worsens memory consumption for all flows and leads to
a memory leak.
This fix checks whether rss_data is already allocated before
reallocating it.
Fixes: bf3688f1e8 ("app/flow-perf: add insertion rate calculation")
Cc: stable@dpdk.org
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
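The fix boils down to allocating once and reusing; a hedged sketch
where the variable and helper names only approximate the flow-perf
code:

    #include <rte_flow.h>
    #include <rte_malloc.h>

    static struct rte_flow_action_rss *rss_data;

    /* Sketch: allocate rss_data only on the first call, then reuse it
     * for every flow instead of leaking a fresh allocation each time.
     */
    static struct rte_flow_action_rss *
    get_rss_data(void)
    {
        if (rss_data == NULL)
            rss_data = rte_malloc("rss_data", sizeof(*rss_data), 0);
        return rss_data;
    }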
Introduce flag action support to flow perf
application.
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
Introduce header modify actions in the app.
All header modify actions add a different value for each flow, to make
sure each flow creates and uses its own actions.
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
The old design used bit masks to identify items, actions and
attributes, so the order of the actions, items and attributes inside
the flows was fixed by the order of the code itself. Such a design
leads to many failures when one PMD supports an order different from
another PMD, and the rules then fail to be created. Also, sometimes
the user needs one action before another and vice versa, so a new
design with arrays that take the user's order into consideration makes
more sense.
After this patch, we start supporting inner items and more than one
instance of the same action.
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
Currently all the sections are treated as main titles under
DPDK Tools User Guides.
This fix collects all flow-perf sections under one title,
Flow Performance Tool.
Fixes: 3344cf2e30 ("app/flow-perf: add flow performance skeleton")
Cc: stable@dpdk.org
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
Actions have their own macro, which is FLOW_ACTION_MASK.
Fixes: bf3688f1e8 ("app/flow-perf: add insertion rate calculation")
Cc: stable@dpdk.org
Signed-off-by: Wisam Jaddo <wisamm@mellanox.com>
Acked-by: Alexander Kozyrev <akozyrev@nvidia.com>
Add ethdev APIs to support PTP timestamping
Signed-off-by: Selwin Sebastian <selwin.sebastian@amd.com>
Acked-by: Amaranath Somalapuram <asomalap@amd.com>
During MAC address insertion into the MPS TCAM, add a default mask
when the mask is not explicitly specified. Otherwise, the driver skips
the mask comparison and ends up inserting duplicate entries in the
MPS TCAM.
Fixes: 6fda3f0ddd ("net/cxgbe: add API to program hardware MPS table")
Cc: stable@dpdk.org
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Updating the type for 'port' variable from 'uint8_t' to 'uint16_t'.
Fixes: 8c3495f5d2 ("net/dpaa: support loopback API")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
The pfe PMD has no need to be bound to a host interface, which is
what the if_index field is required for.
Set it to 0 as unused.
Fixes: fe38ad9ba7 ("net/pfe: add device start/stop")
Cc: stable@dpdk.org
Reported-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
The dpaa2 PMD has no need to be bound to a host interface, which is
what the if_index field is required for.
Set it to 0 as unused.
Fixes: 3e5a335d3f ("net/dpaa2: add basic operations")
Cc: stable@dpdk.org
Reported-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Using the rte_flow RSS action types field may result in an undefined
outcome: for example, selecting both UDP and TCP, or selecting the TCP
RSS type while the pattern targets UDP traffic; another possibility is
that the PMD does not support all requested types.
Until now, it wasn't clear what would happen in such cases.
This commit clarifies the issue by stating that the PMD will work in
best-effort mode, and will fail if a requested type is not supported.
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
When setting up Tx queues, a mempool is created for the 'gso_ctx'.
The mempool is not freed when closing the tap device. If the tap
device is freed and created again with a different name, a new mempool
is created, which may cause an OOM.
Also, the snprintf return value is not checked, so the mempool name
may be truncated. This patch fixes that as well.
Fixes: 050316a883 ("net/tap: support TSO (TCP Segment Offload)")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
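A hedged sketch of both fixes; the helper names, the name format and
the mempool pointer are placeholders, not the actual tap PMD code:

    #include <errno.h>
    #include <stdio.h>
    #include <rte_mempool.h>

    /* Sketch 1: detect truncation when building the mempool name. */
    static int
    build_gso_pool_name(char *name, size_t len, const char *dev_name)
    {
        int ret = snprintf(name, len, "gso_ctx_pool_%s", dev_name);

        if (ret < 0 || (size_t)ret >= len)
            return -ENAMETOOLONG; /* the name would be truncated */
        return 0;
    }

    /* Sketch 2: release the gso_ctx mempool when the device is closed. */
    static void
    free_gso_pool(struct rte_mempool **mp)
    {
        rte_mempool_free(*mp); /* no-op if *mp is NULL */
        *mp = NULL;
    }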
Working together with Jeff, set me as the new maintainer for igb, igc
and ixgbe.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Wei Zhao <wei.zhao1@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
SW VLAN insertion relies on the Ethernet addresses being located in
contiguous memory (not split across mbuf segments). There are no
formal requirements on data location and mbuf structure that guarantee
this. So, check it explicitly to avoid corrupted packets if the
condition is violated. Typically, software VLAN insertion is done at
the Tx prepare stage, and the application gets an indication that the
packet is invalid and cannot be transmitted.
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
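A hedged sketch of the kind of check meant here, as it could be done in
a Tx prepare path (illustrative only, not the actual driver code):

    #include <errno.h>
    #include <rte_ether.h>
    #include <rte_mbuf.h>

    /* Sketch: SW VLAN insertion needs both MAC addresses (12 bytes) to be
     * contiguous in the first segment; otherwise reject the packet so
     * that Tx prepare reports it as invalid.
     */
    static int
    check_sw_vlan_insert(const struct rte_mbuf *m)
    {
        if (rte_pktmbuf_data_len(m) < 2 * RTE_ETHER_ADDR_LEN)
            return -ENOTSUP; /* Ethernet addresses split across segments */
        return 0;
    }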